Published on March 14, 2026
How to Query ChatGPT, Perplexity & Claude with a Single API Call
TL;DR: The Sellm async analysis API lets you send one prompt to ChatGPT, Perplexity, Gemini, Grok, Copilot, and Google AIO simultaneously, then poll for structured results including brand mentions, sentiment, position, and competitive share of voice.
The Problem: Six APIs, Six Integrations
If you want to know how AI engines talk about your brand, your product category, or your competitors, you need data from multiple providers. ChatGPT gives different answers than Perplexity. Gemini surfaces different brands than Grok. Google AIO pulls from its own index. Copilot has its own biases.
Building and maintaining separate integrations for each provider means dealing with different authentication schemes, response formats, rate limits, error handling patterns, and output structures. For a single prompt, you might write 200+ lines of provider-specific code before you even get to the analysis part.
The Sellm API eliminates that complexity. One POST request, one response format, structured analysis across every provider.
The Solution: One API, Six Providers
The /v1/async-analysis endpoint accepts a prompt, a list of providers, and a list of country geos. Sellm handles the fan-out: it queries each provider in parallel, stores the raw responses, runs structured analysis (brand extraction, sentiment scoring, position tracking), and returns a unified result.
The flow works in two steps:
- Submit - `POST /v1/async-analysis` returns immediately with a `202 Accepted` and an analysis ID.
- Poll - `GET /v1/async-analysis/{analysisId}` returns `running` until results are ready, then returns the full structured payload.
You can also configure a webhook to get notified when results are ready instead of polling.
Quick Start: curl
Submit a prompt to all six providers, targeting the US market:
curl -X POST https://api.sellm.io/v1/async-analysis \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "best crm for european saas teams",
    "replicates": 3,
    "providers": ["chatgpt", "perplexity", "gemini", "grok", "copilot", "google_aio"],
    "locations": ["US"]
  }'
Response (202 Accepted):
{
  "data": {
    "id": "aa_01abc",
    "projectId": "proj_123",
    "status": "running",
    "creditsReserved": 18,
    "webhook": { "configured": false, "status": null },
    "createdAt": "2026-03-14T10:00:00Z"
  }
}
The creditsReserved value is replicates × providers × locations (3 × 6 × 1 = 18). Each credit costs less than 1 cent.
Poll for results:
curl https://api.sellm.io/v1/async-analysis/aa_01abc \
-H "Authorization: Bearer YOUR_API_KEY"
Python Example
A complete script that submits an analysis, polls until completion, and prints the results:
import requests
import time

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.sellm.io/v1"
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# Step 1: Submit the analysis
resp = requests.post(f"{BASE_URL}/async-analysis", headers=HEADERS, json={
    "prompt": "best crm for european saas teams",
    "replicates": 3,
    "providers": ["chatgpt", "perplexity", "gemini", "grok", "copilot", "google_aio"],
    "locations": ["US", "DE"],
})
resp.raise_for_status()
analysis_id = resp.json()["data"]["id"]
print(f"Submitted: {analysis_id}")

# Step 2: Poll until finished
while True:
    result = requests.get(
        f"{BASE_URL}/async-analysis/{analysis_id}",
        headers=HEADERS,
    ).json()
    status = result["data"]["status"]
    if status in ("completed", "failed"):
        break
    print(f"Status: {status}, waiting...")
    time.sleep(10)

# Step 3: Parse results
data = result["data"]
summary = data.get("summary")
if summary:
    print(f"Share of Voice: {summary['sovPct']}%")
    print(f"Average Position: {summary['avgPosition']}")
    print(f"Average Sentiment: {summary['avgSentiment']}")

# Provider breakdown
for provider_id, metrics in data.get("providerBreakdown", {}).items():
    print(f"\n{provider_id}:")
    print(f"  SOV: {metrics['sovPct']}%")
    print(f"  Position: {metrics['avgPosition']}")
    print(f"  Sentiment: {metrics['avgSentiment']}")

# Individual results
for r in data.get("results", []):
    print(f"\n[{r['provider']}] Brands mentioned: {', '.join(r['brandsMentioned'])}")
    if r.get("position") is not None:
        print(f"  Your position: {r['position']}")
JavaScript / Node.js Example
The same flow using fetch:
const API_KEY = "YOUR_API_KEY";
const BASE_URL = "https://api.sellm.io/v1";

async function analyzePrompt(prompt, providers, locations) {
  // Submit
  const submitResp = await fetch(`${BASE_URL}/async-analysis`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      prompt,
      replicates: 3,
      providers,
      locations,
    }),
  });
  const { data: submitted } = await submitResp.json();
  console.log("Submitted:", submitted.id);

  // Poll
  let result;
  while (true) {
    const pollResp = await fetch(
      `${BASE_URL}/async-analysis/${submitted.id}`,
      { headers: { Authorization: `Bearer ${API_KEY}` } }
    );
    result = await pollResp.json();
    if (["completed", "failed"].includes(result.data.status)) break;
    console.log("Waiting...");
    await new Promise((r) => setTimeout(r, 10_000));
  }
  return result.data;
}

// Usage
const data = await analyzePrompt(
  "best crm for european saas teams",
  ["chatgpt", "perplexity", "gemini", "grok", "copilot", "google_aio"],
  ["US", "DE"]
);
console.log("SOV:", data.summary?.sovPct + "%");
console.log("Providers:", Object.keys(data.providerBreakdown || {}));
Understanding the Response
When the analysis completes, the response contains three layers of data:
Summary
Top-level KPIs aggregated across all providers and replicates:
- sovPct - Share of voice: how often your brand is mentioned vs. all brands (0-100)
- avgPosition - Average position of your brand in AI responses (lower is better)
- avgSentiment - Average sentiment score toward your brand (0-1 scale)
- coveragePct - Percentage of responses that mention your brand at all
- topCompetitors - Most frequently mentioned competitor brands
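Put together, a completed analysis carries a summary object shaped like this (the values here are illustrative, not real measurements):

```json
{
  "summary": {
    "sovPct": 32,
    "avgPosition": 2.4,
    "avgSentiment": 0.81,
    "coveragePct": 78,
    "topCompetitors": ["CompetitorA", "CompetitorB"]
  }
}
```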
Provider Breakdown
The same KPIs split by provider. This is where cross-provider comparison becomes powerful. A typical finding looks like:
{
  "providerBreakdown": {
    "chatgpt": { "sovPct": 35, "avgPosition": 2.1, "avgSentiment": 0.82 },
    "perplexity": { "sovPct": 28, "avgPosition": 3.0, "avgSentiment": 0.75 },
    "gemini": { "sovPct": 40, "avgPosition": 1.8, "avgSentiment": 0.88 },
    "grok": { "sovPct": 22, "avgPosition": 4.2, "avgSentiment": 0.70 },
    "copilot": { "sovPct": 30, "avgPosition": 2.5, "avgSentiment": 0.79 },
    "google_aio": { "sovPct": 38, "avgPosition": 2.0, "avgSentiment": 0.85 }
  }
}
This tells you immediately: your brand performs strongest on Gemini and weakest on Grok for this query. You can use this data to prioritize your optimization efforts.
Results Array
Individual analysis results for each provider-location-replicate combination. Each result contains:
- provider - Which AI engine produced this result
- country - ISO country code for the geo context
- replicateIndex - Which replicate (0-based), useful for measuring response consistency
- brandsMentioned - All brands detected in the response
- position - Your brand's position in the response (null if not mentioned)
- sentiment - Sentiment score toward your brand (0-1 scale)
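A single entry in the results array therefore looks roughly like this (illustrative values):

```json
{
  "provider": "perplexity",
  "country": "DE",
  "replicateIndex": 1,
  "brandsMentioned": ["Acme", "Rival", "Other"],
  "position": 2,
  "sentiment": 0.78
}
```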
Provider-Specific Analysis: Comparing Results Across Engines
The real value of querying multiple providers simultaneously is comparative analysis. Each AI engine has different training data, different retrieval mechanisms, and different biases. Here is what to look for:
- Consistency vs. divergence: If your brand ranks #1 on ChatGPT but doesn't appear on Perplexity, your content strategy may be optimized for one training set but not another.
- Sentiment gaps: High visibility with low sentiment on a specific provider suggests that the provider's training data includes negative coverage of your brand.
- Geographic variation: Running the same prompt with ["US", "DE", "FR"] reveals how AI engines treat your brand in different markets.
- Replicate stability: Multiple replicates show how consistent each provider is. High variance means the provider's response is unpredictable for that query.
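The replicate-stability check above can be sketched in a few lines of Python. The function below groups the results array by provider and measures how consistently a given brand is mentioned across replicates; the field names (provider, brandsMentioned) follow the results array described earlier, and the sample data is invented for illustration:

```python
from collections import defaultdict
from statistics import mean, pstdev


def provider_consistency(results, brand):
    """Group results by provider and measure how consistently
    the brand appears across that provider's replicates."""
    by_provider = defaultdict(list)
    for r in results:
        by_provider[r["provider"]].append(brand in r["brandsMentioned"])
    return {
        provider: {
            "coveragePct": 100 * mean(mentions),
            # Population std dev of the 0/1 mention flags:
            # 0.0 means perfectly consistent for this prompt.
            "stability": pstdev(mentions),
        }
        for provider, mentions in by_provider.items()
    }


# Invented sample data in the shape of the API's results array
results = [
    {"provider": "chatgpt", "brandsMentioned": ["Acme", "Rival"]},
    {"provider": "chatgpt", "brandsMentioned": ["Acme"]},
    {"provider": "chatgpt", "brandsMentioned": ["Rival"]},
    {"provider": "grok", "brandsMentioned": ["Rival"]},
    {"provider": "grok", "brandsMentioned": ["Rival"]},
]
print(provider_consistency(results, "Acme"))
```

Here ChatGPT mentions "Acme" in 2 of 3 replicates (partial coverage, nonzero variance), while Grok never does: a zero stability score with zero coverage, which is consistent but consistently absent.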
Use Cases
Brand Monitoring
Run your core brand queries weekly across all providers. Track share of voice trends over time. Alert when a competitor overtakes your position on any provider.
Competitive Intelligence
Submit competitor-focused prompts ("best alternatives to [competitor]") and analyze which brands AI engines recommend. Understand where you stand in the competitive landscape across all major AI platforms.
Content Optimization
Identify queries where your brand has low coverage or poor sentiment. Use the provider breakdown to understand which engines need attention, then optimize your content accordingly.
Market Research
Query category-level prompts ("best tools for X") across multiple geos to understand how AI engines perceive your market. Discover which brands dominate AI recommendations in your space.
Pricing
Async analysis is credit-based. Each credit covers one provider-location-replicate combination. The cost is less than 1 cent per credit.
For example, analyzing a prompt across 6 providers, 2 locations, with 3 replicates uses 36 credits (6 × 2 × 3). That's roughly $0.30 for a comprehensive cross-provider analysis of a single prompt across two markets.
Credits are reserved upfront when you submit the request and are consumed as results complete.
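The reservation arithmetic is easy to sanity-check before submitting a request (a minimal sketch using the six provider IDs from this post):

```python
# One credit per provider-location-replicate combination.
providers = ["chatgpt", "perplexity", "gemini", "grok", "copilot", "google_aio"]
locations = ["US", "DE"]
replicates = 3

credits_reserved = len(providers) * len(locations) * replicates
print(credits_reserved)  # 36
```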
Supported Providers
| Provider ID | Engine | Type | Key Strength |
|---|---|---|---|
| chatgpt | ChatGPT (OpenAI) | Conversational AI | Largest user base, general-purpose recommendations |
| perplexity | Perplexity | AI search engine | Source-cited answers with real-time web access |
| gemini | Gemini (Google) | Conversational AI | Google ecosystem integration, deep web knowledge |
| grok | Grok (xAI) | Conversational AI | Real-time X/Twitter data integration |
| copilot | Copilot (Microsoft) | AI assistant | Bing-powered results, enterprise adoption |
| google_aio | Google AI Overviews | AI-enhanced search | Integrated into Google Search results |
Webhooks
Instead of polling, you can provide a webhook URL when submitting the analysis. Sellm will send a signed POST request (HMAC-SHA256) to your endpoint when the analysis completes or fails. Webhook delivery is retried automatically for up to 24 hours.
curl -X POST https://api.sellm.io/v1/async-analysis \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "best crm for european saas teams",
    "replicates": 3,
    "providers": ["chatgpt", "perplexity", "gemini"],
    "locations": ["US"],
    "webhook": {
      "url": "https://yourapp.com/sellm-webhook"
    }
  }'
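On the receiving side, verifying an HMAC-SHA256 signature takes a few lines of Python. Note that the signature's transport (header name, encoding, signing secret) is an assumption here for illustration — check the Sellm API docs for the exact scheme:

```python
import hashlib
import hmac


def verify_webhook(raw_body: bytes, signature: str, secret: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw request body and compare
    it to the received signature in constant time.
    Assumes a hex-encoded signature; confirm against the API docs."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)


# Example round trip with an invented payload and secret
body = b'{"data": {"id": "aa_01abc", "status": "completed"}}'
secret = "whsec_example"
sig = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
print(verify_webhook(body, sig, secret))  # True
```

Always verify against the raw request bytes, not a re-serialized JSON object — re-serialization can reorder keys or change whitespace and break the signature.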
Frequently Asked Questions
How long does an analysis take?
Most analyses complete within 2-5 minutes depending on the number of providers and replicates. Some providers are faster than others. The polling endpoint returns a running status until all tasks finish.
Can I query just one provider?
Yes. The providers array can contain a single provider ID. For example, ["chatgpt"] will only query ChatGPT. You can mix and match any combination of the six supported providers.
What are replicates for?
AI responses are non-deterministic. Running multiple replicates (e.g., 3 or 5) for the same prompt lets you measure how consistently a provider mentions your brand. The summary metrics are averaged across replicates.
How do I get an API key?
Create a project in the Sellm dashboard, then go to project settings to generate an API key. The API is available on all plans.
What does "share of voice" mean?
Share of voice measures how often your brand is mentioned relative to all brands in AI responses. If your brand appears in 3 out of 10 total brand mentions, your SOV is 30%. It's the primary metric for understanding your AI search visibility.
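That arithmetic, applied to the results array (a minimal sketch; the brandsMentioned field follows the results array documented above, and the sample data is invented):

```python
def share_of_voice(results, brand):
    """SOV = your brand's mentions / all brand mentions, as a 0-100 percentage."""
    total_mentions = sum(len(r["brandsMentioned"]) for r in results)
    brand_mentions = sum(r["brandsMentioned"].count(brand) for r in results)
    return 100 * brand_mentions / total_mentions if total_mentions else 0.0


# 3 "Acme" mentions out of 10 total brand mentions -> 30% SOV
results = [
    {"brandsMentioned": ["Acme", "Rival", "Other"]},
    {"brandsMentioned": ["Acme", "Rival"]},
    {"brandsMentioned": ["Acme", "Other", "Third"]},
    {"brandsMentioned": ["Rival", "Other"]},
]
print(share_of_voice(results, "Acme"))  # 30.0
```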
Is there a rate limit?
Yes. The async analysis endpoint is rate limited to prevent abuse. Rate limit headers (X-RateLimit-Limit, X-RateLimit-Remaining) are included in every response. Contact support if you need higher limits.
Can I track Claude (Anthropic) as a provider?
Claude tracking is available through Sellm's scheduled analysis runs within the dashboard and the standard analysis API endpoints. The async analysis endpoint currently supports ChatGPT, Perplexity, Gemini, Grok, Copilot, and Google AI Overviews. Check the API docs for the latest provider availability.