Published on March 14, 2026
The Developer's Guide to Generative Engine Optimization (GEO) via API
TL;DR: GEO is how you get your brand recommended by ChatGPT, Claude, and Perplexity. The sellm API gives you the data to measure and improve your AI search visibility programmatically -- for less than 1 cent per prompt.
Generative Engine Optimization (GEO) is the new SEO. But unlike traditional SEO, you can't just open Google Search Console and check your rankings. AI search engines don't publish ranking lists. They generate answers -- and your brand is either mentioned in those answers or it isn't. Here's how to build a data-driven GEO strategy using an API.
What Is GEO and Why It Matters
GEO -- Generative Engine Optimization -- is the practice of optimizing your brand's visibility in AI-generated search results. When someone asks ChatGPT "what's the best project management tool for remote teams" or Perplexity "top CRM platforms for startups," the AI doesn't return a list of ten blue links. It writes an answer, and it either mentions your brand or it doesn't.
The numbers tell the story. Gartner predicted that traditional search volume would drop 25% by 2026 as users shift to AI-powered alternatives. Meanwhile, AI-referred sessions have grown 527% year-over-year according to Similarweb data. Buyers are already asking AI assistants for product recommendations, and this behavior is accelerating.
The problem is measurement. Traditional SEO has mature tooling -- Google Search Console, Ahrefs, SEMrush. GEO has almost nothing. You can't query ChatGPT's API and ask "where do I rank?" There is no equivalent of a SERP position. The only way to measure your AI search visibility is to systematically query AI providers with the prompts your customers use, then analyze what comes back.
That's exactly what the sellm API does.
The GEO Metrics That Matter
Effective GEO requires tracking five core metrics. Each captures a different dimension of your AI search presence:
1. Share of Voice (SOV)
Share of Voice measures how often your brand is mentioned across all prompts relative to competitors. If you track 50 prompts and your brand appears in 30 of them while your top competitor appears in 40, your SOV is 60% against their 80% -- and you know exactly where the gap is. SOV is the single most important GEO metric because it directly maps to discovery probability.
2. Position
Position tracks where your brand appears in the AI response. Being mentioned first ("For startups, Acme CRM stands out because...") is fundamentally different from being mentioned fifth in a list. The sellm API assigns positions based on the order brands appear in each response. Lower position numbers are better, just like traditional search rankings.
3. Coverage
Coverage is the percentage of your tracked prompts where your brand gets mentioned at all. A brand with 100% coverage is mentioned in every AI response you're tracking. A brand with 20% coverage has significant blind spots. Coverage tells you where you're invisible.
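The arithmetic behind these first three metrics is simple enough to sketch. A minimal example in Python, assuming per-prompt results shaped like the article's brandsMentioned field (the sample data and brand names here are invented, not the exact API schema):

```python
# Each result lists the brands an AI response mentioned, in order of
# appearance -- position is derived from that order (1 = mentioned first).
results = [
    {"brandsMentioned": ["AcmeCRM", "RivalCRM"]},
    {"brandsMentioned": ["RivalCRM", "AcmeCRM", "OtherCRM"]},
    {"brandsMentioned": ["RivalCRM"]},
]

def brand_metrics(brand, results):
    """Return (coverage %, average position) for one brand."""
    positions = [r["brandsMentioned"].index(brand) + 1
                 for r in results if brand in r["brandsMentioned"]]
    coverage = 100 * len(positions) / len(results)
    avg_pos = sum(positions) / len(positions) if positions else None
    return coverage, avg_pos

# AcmeCRM: mentioned in 2 of 3 prompts, at positions 1 and 2.
print(brand_metrics("AcmeCRM", results))   # coverage ~66.7%, avg position 1.5
print(brand_metrics("RivalCRM", results))  # coverage 100%, stronger positions
```

Comparing these per-brand appearance rates against your competitors' is exactly the SOV gap the article describes.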
4. Sentiment (4 Dimensions)
Unlike traditional search where you either rank or you don't, AI search adds a qualitative layer. The sellm API scores sentiment across four dimensions, each on a 0-10 scale:
- Trustworthiness -- How reliable and credible does the AI make your brand appear?
- Authority -- Is your brand positioned as a leader or a follower in its category?
- Recommendation Strength -- How enthusiastically does the AI recommend your brand?
- Fit for Query Intent -- How well does the AI match your brand to what the user is asking for?
These four dimensions give you actionable guidance on why an AI platform positions your brand the way it does, not just where.
5. Cited Sources
Some AI providers -- particularly Perplexity -- cite their sources. The sellm API extracts cited URLs from responses so you can see exactly which pages AI engines are pulling information from. If a competitor's blog post is consistently cited, that tells you what content to create or improve.
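To find which pages to create or improve, you can tally citations across many responses. A short sketch -- the citedSources field name follows the article, but the response shape and URLs here are invented:

```python
from collections import Counter

# Cited URLs extracted from several AI responses (illustrative data).
responses = [
    {"citedSources": ["https://rival.example/blog/best-crm",
                      "https://reviews.example/crm"]},
    {"citedSources": ["https://rival.example/blog/best-crm"]},
]

# Count how often each URL is cited; the most-cited pages are the ones
# AI engines treat as authoritative for your tracked prompts.
citation_counts = Counter(url for r in responses for url in r["citedSources"])
for url, n in citation_counts.most_common(3):
    print(n, url)
```

A competitor URL at the top of this list is a direct content brief: that is the page to outdo.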
How to Build a GEO Workflow with the Sellm API
Here's a step-by-step workflow for implementing GEO tracking and optimization programmatically.
Step 1: Define Your Prompt Universe
Your prompt universe is the set of queries your potential customers ask AI assistants. These fall into three categories:
- Buying-intent prompts: "What's the best [product category] for [use case]?" -- These directly influence purchase decisions.
- Comparison prompts: "[Brand A] vs [Brand B] for [use case]" -- These capture users who already have a shortlist.
- Category prompts: "Top [product category] tools in 2026" -- These shape early-stage awareness.
Start with 20-50 prompts that cover your core buying scenarios. You can manage these through the API's prompt management endpoints.
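The three categories above can be generated from templates rather than written by hand. A sketch -- the category, use cases, and competitor names are placeholders for your own:

```python
from itertools import product

category = "CRM platform"
use_cases = ["startups", "remote teams", "enterprise sales"]
competitors = ["RivalCRM", "OtherCRM"]

prompts = []
# Buying-intent prompts: one per use case.
prompts += [f"What's the best {category} for {uc}?" for uc in use_cases]
# Comparison prompts: every competitor x use-case pair.
prompts += [f"AcmeCRM vs {c} for {uc}"
            for c, uc in product(competitors, use_cases)]
# Category prompts: early-stage awareness.
prompts += [f"Top {category} tools in 2026"]

print(len(prompts))  # 3 + 6 + 1 = 10 prompts
```

Even a small template set like this expands quickly, so it is easy to reach the suggested 20-50 prompts with one or two more use cases.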
Step 2: Submit Prompts Across Multiple Providers
Once your prompts are configured, trigger an analysis run. The sellm API sends your prompt to your selected providers -- ChatGPT, Claude, Perplexity, Gemini, Grok, Copilot -- and collects the responses. Here's what triggering a run looks like:
curl -X POST https://sellm.io/v1/async-analysis \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"prompt": "best endpoint security platform for enterprise",
"providers": ["chatgpt", "claude", "perplexity"],
"country": "US",
"replicates": 5
}'
Each prompt is sent to each provider independently. The API handles rate limiting, retries, and response collection. You can submit multiple prompts by making separate API calls for each one.
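If you are scripting this rather than using curl, the same request can be built with only the Python standard library. The endpoint and fields mirror the curl example; YOUR_API_KEY is a placeholder:

```python
import json
import urllib.request

def build_request(prompt, providers, api_key, country="US", replicates=5):
    """Build the POST /v1/async-analysis request (does not send it)."""
    payload = {"prompt": prompt, "providers": providers,
               "country": country, "replicates": replicates}
    return urllib.request.Request(
        "https://sellm.io/v1/async-analysis",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_request("best endpoint security platform for enterprise",
                    ["chatgpt", "claude", "perplexity"], "YOUR_API_KEY")
# resp = urllib.request.urlopen(req)  # uncomment to actually submit
```

Submitting multiple prompts is then a loop over build_request and urlopen, one call per prompt, as the article describes.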
Step 3: Analyze the Response
Once a run completes, pull the results. The API returns structured data for each prompt/provider combination:
curl https://sellm.io/v1/async-analysis/ANALYSIS_ID \
-H "Authorization: Bearer YOUR_API_KEY"
The response includes:
- brandsMentioned -- Every brand detected in the AI response, in order of appearance
- position -- Your brand's position in the response (1 = first mentioned)
- brandSentiment -- The 4-dimension sentiment breakdown for your brand
- citedSources -- URLs cited by the AI provider (when available)
- sovPct, coverage, avgPos -- Aggregate KPIs across all prompts
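A quick sketch of pulling the key fields out of one result. The payload shape below is inferred from the field names the article lists, not an exact schema, and the numbers are invented:

```python
# One analysis result for a single prompt/provider pair (illustrative).
analysis = {
    "brandsMentioned": ["RivalCRM", "AcmeCRM", "OtherCRM"],
    "position": 2,
    "brandSentiment": {"trustworthiness": 6, "authority": 8,
                       "recommendationStrength": 5, "fitForQueryIntent": 7},
    "citedSources": ["https://reviews.example/crm"],
}

# Find the lowest-scoring sentiment dimension -- that is where to focus.
weakest = min(analysis["brandSentiment"], key=analysis["brandSentiment"].get)
print(f"Position {analysis['position']}; weakest dimension: {weakest}")
```

The weakest dimension feeds directly into the content-strategy table in the next step.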
Step 4: Use Sentiment Dimensions to Guide Content Strategy
The four sentiment dimensions directly map to content optimization strategies:
| Low Dimension | What It Means | Action |
|---|---|---|
| Trustworthiness | AI doesn't find enough evidence to trust your brand | Add citations, case studies, third-party reviews, and data-backed claims to your content |
| Authority | Your brand isn't seen as a category leader | Get featured in industry publications, earn expert quotes, publish original research |
| Recommendation Strength | AI mentions you but doesn't strongly endorse | Address objections in your content, add comparison pages, highlight differentiators |
| Fit for Query Intent | Your content doesn't align with what users are asking | Create content that directly answers the query format, match the language your audience uses |
For example, if your trustworthiness score is 4/10 but authority is 8/10, you're seen as a leader but not backed by enough evidence. The fix isn't more PR -- it's adding hard data, customer quotes, and third-party validation to your content.
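The table above is mechanical enough to turn into a triage helper. A sketch -- the dimension names and 0-10 scale follow the article, while the threshold is an arbitrary choice:

```python
# Map each low-scoring sentiment dimension to the action from the table.
ACTIONS = {
    "trustworthiness": "Add citations, case studies, and third-party reviews",
    "authority": "Earn industry coverage; publish original research",
    "recommendationStrength": "Address objections; add comparison pages",
    "fitForQueryIntent": "Create content that directly answers the query format",
}

def triage(sentiment, threshold=5):
    """Return the actions for every dimension scoring below the threshold."""
    return [ACTIONS[d] for d, score in sentiment.items() if score < threshold]

# Trustworthiness 4/10 with authority 8/10: the fix is evidence, not PR.
print(triage({"trustworthiness": 4, "authority": 8,
              "recommendationStrength": 6, "fitForQueryIntent": 7}))
```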
Step 5: Track Trends Over Time
GEO is not a one-time exercise. Run the same prompts on a regular cadence and compare results across analyses to monitor how your metrics change week over week. Submit the same prompt via POST /v1/async-analysis, then retrieve each analysis with GET /v1/async-analysis/{analysisId} to build your own trend data.
Compare sovPct, avgPos, and sentiment scores across analyses over time to correlate content changes with visibility improvements.
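Week-over-week comparison is just a diff between two snapshots. A sketch using the aggregate field names from the article (the numbers are invented):

```python
# Two weekly KPI snapshots pulled from successive analyses.
last_week = {"sovPct": 15.0, "avgPos": 4.0, "coverage": 40.0}
this_week = {"sovPct": 22.0, "avgPos": 3.2, "coverage": 52.0}

# Positive sovPct/coverage deltas and a negative avgPos delta are improvements.
deltas = {k: round(this_week[k] - last_week[k], 2) for k in last_week}
print(deltas)
```

Logging these deltas alongside your content-change dates is what lets you correlate specific edits with visibility movement.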
GEO Optimization Strategies Backed by Data
Based on patterns we've observed across thousands of analysis runs, these are the strategies that consistently improve AI search visibility:
Citation Optimization
AI providers -- especially Perplexity -- cite specific URLs when generating answers. To get cited, your content needs to be the most authoritative, well-structured source on a topic. This means:
- Create comprehensive, well-sourced pages that directly answer common queries
- Use clear headings and structured data (FAQ schema, HowTo schema) so AI crawlers can extract information
- Keep content updated -- AI models favor recent, accurate information
- Include original data, benchmarks, or research that can't be found elsewhere
Content Structure
AI engines parse content differently than traditional search crawlers. The content that performs best in GEO follows these patterns:
- Answer-style content: Structure pages to directly answer questions. "What is X?" followed by a clear definition performs better than burying the answer in paragraph five.
- FAQ sections: Explicit question-and-answer formats make it easy for AI models to extract and cite your information.
- Comparison tables: When AI is asked to compare options, it pulls from pages that already have structured comparisons.
Entity Signals
AI models build internal representations of brands as entities. Strengthen your entity signals by:
- Consistent brand naming: Use your exact brand name consistently across all channels. Variations confuse entity recognition.
- Schema markup: Implement Organization, Product, and SoftwareApplication schema to give AI crawlers structured data about your brand.
- Cross-platform presence: Be mentioned consistently on authoritative third-party sites -- review platforms, industry publications, and knowledge bases.
Measuring GEO ROI
To quantify the impact of your GEO efforts, run a before/after analysis:
- Baseline: Run your first analysis across all prompts and providers. Record your SOV, average position, coverage, and sentiment scores.
- Optimize: Implement the content changes suggested by your sentiment analysis -- add citations, restructure content, strengthen entity signals.
- Measure: Run the analysis again after 2-4 weeks (AI models update their training data and retrieval indexes regularly). Compare the new scores to your baseline.
A typical GEO improvement cycle looks like this: a brand starts with 15% SOV and position #4, implements citation and content structure improvements, and within 4-6 weeks sees SOV increase to 30%+ with position improving to #2. The specific numbers vary by category competitiveness, but the pattern is consistent -- structured, authoritative, answer-oriented content outperforms generic marketing copy in AI search.
Pricing: Less Than 1 Cent Per Prompt
The sellm API is designed to make GEO tracking affordable enough to run continuously. Each prompt analyzed on a single provider costs less than 1 cent. A weekly monitoring setup with 50 prompts across 5 providers runs about $10/month -- a fraction of what traditional SEO tools charge.
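The $10/month figure works out directly. A quick check, taking the article's under-1-cent ceiling at face value (weeks-per-month is an approximation):

```python
prompts, providers = 50, 5
cost_per_run = 0.01                          # <1 cent per prompt per provider
weekly = prompts * providers * cost_per_run  # $2.50 per weekly run
monthly = weekly * 52 / 12                   # roughly $10.8 per month
print(round(monthly, 2))
```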
Paid plans offer weekly prompt quotas with automated scheduling. Full pricing details are available on the pricing page.
Frequently Asked Questions
What is the difference between GEO and traditional SEO?
SEO optimizes for ranking in a list of links on Google. GEO optimizes for being mentioned in AI-generated answers. The key differences: there are no fixed "positions" in AI search (brands are woven into prose), multiple AI providers matter (not just Google), and sentiment/recommendation quality matters as much as visibility. GEO requires different measurement tools because there's no equivalent of Google Search Console for ChatGPT or Claude.
Which AI search providers does the API track?
The sellm API tracks ChatGPT (OpenAI), Claude (Anthropic), Perplexity, Gemini (Google), Grok (xAI), and Copilot (Microsoft). Each provider is queried independently, and you can filter results by provider to understand how each platform treats your brand differently.
How often should I run GEO analysis?
Weekly runs are the most common cadence. AI models update their knowledge and retrieval indexes regularly, so weekly monitoring catches changes quickly. If you're actively optimizing content, you might run analysis twice a week to measure impact faster. The API supports automated scheduled runs so you don't have to trigger them manually.
Can I use the API without the dashboard?
You need the dashboard to create your initial project and generate an API key. After that, everything -- prompt management, run triggering, results retrieval, trend analysis -- can be done entirely through the API. Many teams use the API for automated pipelines and only open the dashboard for ad-hoc exploration.