The Search API for LLMs

Query 6 AI engines with one API call. Get structured brand visibility data back. Less than $0.01 per prompt.

Brand mentions, position rankings, 4D sentiment, share of voice, and cited sources from ChatGPT, Claude, Perplexity, Gemini, Grok, and Copilot.

Structured data from every AI engine

Submit a prompt and get back JSON with brand mentions, position rankings, 4D sentiment scores, share of voice, and cited sources. No scraping. No browser automation. Just a REST call and structured results.
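The field names above (brandsMentioned, position, brandSentiment, shareOfVoice, citedSources) come straight from the API's result object; the exact envelope around them is an assumption. A minimal sketch of pulling those fields out of a per-provider result:

```python
import json

# Hypothetical example of the per-provider result shape described above.
# The field names are from the API's documented result object; the values
# and the surrounding envelope are illustrative assumptions.
sample = json.loads("""
{
  "provider": "chatgpt",
  "brandsMentioned": ["Acme", "Globex"],
  "position": 1,
  "brandSentiment": {"overall": 0.72},
  "shareOfVoice": 0.41,
  "citedSources": ["https://example.com/review"]
}
""")

# Everything is plain JSON, so extracting metrics is ordinary dict access.
print(sample["position"], sample["shareOfVoice"])
```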

How it works

  1. Submit your prompt - POST to /v1/async-analysis with your prompt, target providers, and country. The API queues the analysis across all selected AI engines.
  2. Poll for results - Check the status endpoint or register a webhook. Each AI engine responds at its own pace -- you get notified when results land.
  3. Get structured data - Receive JSON with brandsMentioned, position, brandSentiment, shareOfVoice, citedSources, and competitorMentions for each provider.
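The three steps above can be sketched in Python with only the standard library. The `/v1/async-analysis` path is from the docs; the base URL, auth header, request field names, and response `id` field are assumptions for illustration:

```python
import json
import urllib.request

API_BASE = "https://api.sellm.example"  # placeholder base URL (assumption)
API_KEY = "YOUR_API_KEY"                # bearer auth is an assumption

def build_request(prompt: str, providers: list[str], country: str) -> dict:
    """Build the POST body for /v1/async-analysis (field names assumed)."""
    return {
        "prompt": prompt,
        "providers": providers,  # omit to query all 6 engines (assumption)
        "country": country,
    }

def submit(payload: dict) -> str:
    """Step 1: queue the analysis; returns the job id from the response."""
    req = urllib.request.Request(
        f"{API_BASE}/v1/async-analysis",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["id"]  # "id" field name is an assumption

# Steps 2-3 are either polling a status endpoint with this id or
# registering a webhook and waiting for the POST notification.
payload = build_request(
    "best project management tools",
    ["chatgpt", "claude", "perplexity"],
    "US",
)
print(payload["country"])
```

The `submit` helper is defined but not invoked here, since it needs real credentials; the payload construction runs as-is.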

Less than 1 cent per prompt

Each API call queries up to 6 AI engines, extracts structured brand data, and returns it as JSON. All for less than $0.01 per prompt. Scale from a handful of queries to thousands without breaking your budget.
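Treating the advertised "less than $0.01 per prompt" as an upper bound, the budget math is simple enough to sanity-check:

```python
# Back-of-the-envelope cost ceiling, using the page's "less than
# $0.01 per prompt" figure as an upper bound (not an exact price).
PRICE_CEILING = 0.01  # USD per prompt, upper bound

def max_weekly_cost(prompts_per_week: int) -> float:
    """Worst-case weekly spend for a given prompt volume."""
    return prompts_per_week * PRICE_CEILING

# 1,000 prompts a week costs at most $10/week at the ceiling.
print(max_weekly_cost(1000))
```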

Supported AI platforms

The Sellm AI Search API supports ChatGPT, Claude, Perplexity, Gemini, Grok, and Microsoft Copilot. Query all 6 in a single request or filter to specific providers.

Frequently Asked Questions

What is the Sellm AI Search API?
The Sellm AI Search API lets you programmatically query 6 major AI engines -- ChatGPT, Claude, Perplexity, Gemini, Grok, and Copilot -- and get back structured brand visibility data. Submit a prompt, and receive brand mentions, position rankings, sentiment scores, and cited sources as JSON.
How much does it cost per prompt?
Less than 1 cent per prompt. Pricing scales with your plan: Essential covers 200 prompts weekly, Pro covers 1,000 prompts weekly.
Which AI providers are supported?
The API supports ChatGPT, Claude, Perplexity, Gemini, Grok, and Microsoft Copilot. You can query all 6 in a single request or filter to specific providers.
Can I configure how many replicates are run per prompt?
Yes. You can set the number of replicates per prompt and provider to get statistically robust results. More replicates give you higher confidence in the data at the cost of additional API calls.
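A sketch of why replicates matter, assuming a `replicates` request field (the field name is hypothetical; the source only says the count is configurable per prompt and provider):

```python
from statistics import mean, stdev

# Hypothetical request body; the "replicates" field name is an assumption.
payload = {
    "prompt": "best CRM for startups",
    "providers": ["chatgpt", "gemini"],
    "replicates": 5,
}

# AI answers vary run to run, so aggregating a metric such as
# share of voice across replicates gives a more stable estimate.
# Illustrative (made-up) shareOfVoice values from 5 replicates:
runs = [0.38, 0.44, 0.41, 0.39, 0.43]
print(round(mean(runs), 3), round(stdev(runs), 3))
```

More replicates shrink the spread around the mean, which is the "higher confidence at the cost of additional API calls" trade-off described above.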
Do you support webhooks for async results?
Yes. You can register a webhook URL and receive a POST notification when your analysis results are ready, instead of polling the status endpoint.
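A minimal handler for that webhook POST might look like the sketch below. The notification payload shape (`id`, `status` fields) is an assumption; the source only says you receive a POST when results are ready:

```python
import json

def handle_webhook(body: bytes) -> dict:
    """Process a result-ready notification from the webhook POST.

    The payload fields ("id", "status") are hypothetical; adapt them
    to whatever the actual notification body contains.
    """
    event = json.loads(body)
    if event.get("status") != "completed":  # assumed status value
        return {"ack": False}
    # At this point you would fetch or record the finished analysis
    # by its id instead of polling the status endpoint.
    return {"ack": True, "analysisId": event.get("id")}

# Simulated delivery of a notification body:
print(handle_webhook(b'{"id": "abc123", "status": "completed"}'))
```

Wire `handle_webhook` into whatever HTTP framework serves your webhook URL; the parsing logic itself needs no server to test.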