Published on March 15, 2026
The API-First Approach to AI Search Optimization
Key takeaway: Manual AI search monitoring breaks down at scale. An API-first approach to GEO lets you automate tracking, A/B test content changes, and build feedback loops that continuously improve your AI visibility across ChatGPT, Claude, Perplexity, Gemini, Grok, and Copilot.
Traditional SEO has well-established workflows: crawl your site, check rankings, update content, measure results. But when it comes to AI search, those workflows fall apart. There is no "rank #1" in ChatGPT. There is no crawl report for Perplexity. And manually asking AI assistants about your brand every week does not scale.
Generative Engine Optimization (GEO) requires a fundamentally different approach, one built on programmatic access, automated measurement, and continuous iteration. In this article, we will walk through why an API-first approach to GEO is the only way to build a scalable AI visibility strategy, and how to implement it using the Sellm API.
Why Manual AI Search Monitoring Fails at Scale
Consider what it takes to manually monitor your brand's AI visibility:
- Open ChatGPT, Claude, Perplexity, Gemini, Grok, and Copilot
- Type each of your target prompts into each platform
- Read through every response and note where (and if) your brand appears
- Record the position, sentiment, and competitors mentioned
- Repeat this weekly to track trends
If you are tracking 30 prompts across 6 AI providers, that is 180 individual queries per week. Each response is multiple paragraphs long. Recording the results in a spreadsheet takes hours. And the data is subjective because two people will score "sentiment" differently.
This approach has three fatal problems:
- It does not scale. Every new prompt or provider multiplies the work. Agencies managing 10 brands need to do this 1,800 times per week.
- It is inconsistent. AI responses vary by session, location, and time of day. A single manual check captures one snapshot that may not be representative.
- It is not actionable. Without historical data in a structured format, you cannot measure the impact of content changes or identify trends.
The API-First GEO Workflow
An API-first approach flips the model. Instead of manually querying AI platforms, you define your monitoring parameters once and let automation handle everything:
Define Prompts --> Schedule Runs --> Collect Results --> Analyze Trends
      |                 |                  |                  |
      v                 v                  v                  v
  API call          Automated         Structured         Dashboards
  (one-time)        (weekly)          JSON data          & alerts
Here is what the workflow looks like with the Sellm API:
1. Submit an analysis
Each API call sends a single prompt to the providers and countries you choose. You specify replicates to control how many independent responses you get per provider-country pair:
import requests
import time
API_KEY = "sellm_your_api_key"
BASE = "https://sellm.io/api/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}
# Submit an async analysis for a developer tools prompt
resp = requests.post(
f"{BASE}/async-analysis",
headers=HEADERS,
json={
"prompt": "best API documentation platform",
"providers": ["chatgpt", "claude", "perplexity", "gemini", "grok", "copilot"],
"country": "US",
"replicates": 3,
},
)
analysis_id = resp.json()["data"]["id"]
print(f"Submitted: {analysis_id} (status {resp.status_code})")
2. Poll for results
The analysis runs asynchronously. Poll the GET endpoint until the status changes from running to succeeded (or failed):
# Poll until done
while True:
result = requests.get(f"{BASE}/async-analysis/{analysis_id}", headers=HEADERS).json()["data"]
if result["status"] != "running":
break
time.sleep(10)
print(f"Status: {result['status']}")
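The simple loop above polls forever at a fixed interval. For production jobs you may prefer a bounded poll with exponential backoff. Here is a minimal sketch: the `status` values are the ones used in this article, and `fetch_status` stands in for the GET call above.

```python
import time

def poll_until_done(fetch_status, timeout_s=600, initial_delay=5, max_delay=60):
    """Poll fetch_status() until the analysis reaches a terminal status,
    doubling the delay between polls up to max_delay.

    fetch_status: zero-argument callable wrapping
    GET /v1/async-analysis/{analysisId}, returning the "data" dict.
    """
    delay = initial_delay
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = fetch_status()
        if result["status"] in ("succeeded", "failed"):
            return result
        time.sleep(delay)
        delay = min(delay * 2, max_delay)  # back off: 5s, 10s, 20s, ...
    raise TimeoutError(f"Analysis still running after {timeout_s}s")
```

With the earlier setup this would be called as `poll_until_done(lambda: requests.get(f"{BASE}/async-analysis/{analysis_id}", headers=HEADERS).json()["data"])`.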
Sellm handles querying every AI provider, extracting structured data from responses, and computing metrics like Share of Voice, Coverage, Average Position, and Sentiment. You get clean JSON data instead of raw paragraphs.
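Sellm computes these metrics server-side, but it helps to have an intuition for what they mean. The toy function below shows one plausible way such metrics could be derived from raw per-response brand lists; it is illustrative only, not Sellm's actual methodology.

```python
def summarize_mentions(responses, brand):
    """Toy metric computation over responses like [{"brands": ["A", "B"]}, ...],
    where each "brands" list is ordered by position of mention."""
    mentioned = [r for r in responses if brand in r["brands"]]
    coverage = 100 * len(mentioned) / len(responses)  # % of responses mentioning the brand
    positions = [r["brands"].index(brand) + 1 for r in mentioned]
    avg_pos = sum(positions) / len(positions) if positions else None
    total = sum(len(r["brands"]) for r in responses)  # all brand mentions, all responses
    sov = 100 * len(mentioned) / total if total else 0  # brand's share of all mentions
    return {"coveragePct": round(coverage, 1), "avgPos": avg_pos, "sovPct": round(sov, 1)}
```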
3. Read structured results
Everything comes back in one response — summary KPIs, per-provider breakdowns, per-prompt breakdowns, and detailed per-result data:
# All data is in the response from GET /v1/async-analysis/{analysisId}
summary = result["summary"]
print(f"Share of Voice: {summary['sovPct']}%")
print(f"Coverage: {summary['coveragePct']}%")
print(f"Avg Position: {summary['avgPos']}")
print(f"Sentiment: {summary['sentiment']}")
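The same response also carries per-provider data. Assuming the `providerBreakdown.sovByProvider` shape used in the comparison code later in this article (`{"provider": ..., "sov": ...}`), you can flag providers where visibility lags:

```python
def weak_providers(result, threshold=10.0):
    """Return providers whose Share of Voice is below threshold, worst first."""
    rows = result.get("providerBreakdown", {}).get("sovByProvider", [])
    weak = [r for r in rows if r.get("sov", 0) < threshold]
    return sorted(weak, key=lambda r: r.get("sov", 0))
```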
A/B Testing Content Changes Against AI Visibility
This is where the API-first approach becomes truly powerful. Traditional SEO A/B testing takes weeks because Google needs to recrawl and reindex your pages. AI search engines respond to content changes much faster, and with programmatic access you can measure the impact precisely.
The before-and-after workflow
- Trigger a baseline run before making any content changes
- Update your website content (add citations, restructure for AI readability, improve E-E-A-T signals)
- Wait for AI models to reflect changes (typically 1-2 weeks for most providers)
- Trigger a follow-up run with the same prompts
- Compare the two runs to measure impact
Here is the code to do it:
def submit_and_wait(headers, base_url, prompt):
"""Submit an async analysis and wait for completion."""
resp = requests.post(
f"{base_url}/async-analysis",
headers=headers,
json={
"prompt": prompt,
"providers": ["chatgpt", "claude", "perplexity", "gemini", "grok", "copilot"],
"country": "US",
"replicates": 3,
},
)
analysis_id = resp.json()["data"]["id"]
while True:
result = requests.get(
f"{base_url}/async-analysis/{analysis_id}", headers=headers
).json()["data"]
if result["status"] == "succeeded":
return result
if result["status"] == "failed":
raise RuntimeError("Analysis failed")
time.sleep(10)
def compare_analyses(before, after):
"""Compare two analysis results and report changes."""
b_summary = before["summary"]
a_summary = after["summary"]
metrics = [("sovPct", "SOV %"), ("coveragePct", "Coverage %"), ("avgPos", "Avg Position"), ("sentiment", "Sentiment")]
    print(f"\n{'Metric':20s} {'Before':>8s} {'After':>8s} {'Delta':>8s}")
print("-" * 50)
for key, label in metrics:
b_val = b_summary.get(key) or 0
a_val = a_summary.get(key) or 0
delta = a_val - b_val
direction = "+" if delta > 0 else ""
print(f"{label:20s} {b_val:8.1f} {a_val:8.1f} {direction}{delta:.1f}")
# Per-provider SOV comparison
before_sov = {p["provider"]: p["sov"] for p in before.get("providerBreakdown", {}).get("sovByProvider", [])}
after_sov = {p["provider"]: p["sov"] for p in after.get("providerBreakdown", {}).get("sovByProvider", [])}
print("\nPer-provider SOV changes:")
for provider in after_sov:
b_sov = before_sov.get(provider, 0)
a_sov = after_sov[provider]
delta = a_sov - b_sov
if abs(delta) > 0.5:
print(f" {provider}: {b_sov:.1f}% -> {a_sov:.1f}% ({'+' if delta > 0 else ''}{delta:.1f}pp)")
# Usage:
prompt = "best API documentation platform"
# 1. Run baseline
baseline = submit_and_wait(HEADERS, BASE, prompt)
print(f"Baseline analysis: {baseline['id']}")
# 2. Make your content changes, then wait 1-2 weeks
# 3. Run follow-up
followup = submit_and_wait(HEADERS, BASE, prompt)
print(f"Follow-up analysis: {followup['id']}")
# 4. Compare
compare_analyses(baseline, followup)
What to test
Content changes that tend to improve AI visibility include:
- Adding structured data and citations to key pages
- Creating comparison pages that directly address "[Your Brand] vs [Competitor]" queries
- Improving FAQ sections with natural-language Q&A that AI models can extract
- Building topical authority with in-depth content clusters around your core topics
- Earning mentions on authoritative sites that AI models use as training data or retrieval sources
The API-first approach lets you measure which of these changes actually moves the needle, rather than guessing.
Building a GEO Feedback Loop
The most effective GEO strategies are not one-time projects. They are continuous feedback loops:
Monitor --> Analyze --> Optimize --> Measure --> (repeat)
   |           |            |           |
   v           v            v           v
 Weekly     Identify      Update     Compare
 API runs   weak spots    content    before/after
Automating the loop
Here is a script that runs weekly, identifies your weakest prompts, and generates an optimization report:
def generate_optimization_report(headers, base_url, prompt_list):
"""Run analyses for multiple prompts and identify where visibility is lowest."""
results = []
for prompt_text in prompt_list:
result = submit_and_wait(headers, base_url, prompt_text)
results.append(result)
# Collect prompt breakdowns from each analysis
all_prompts = []
for result in results:
for p in result.get("promptBreakdown", []):
all_prompts.append(p)
# Sort by share of voice (ascending = worst first)
all_prompts.sort(key=lambda p: p.get("sovPct", 0))
print("=== GEO Optimization Report ===")
print(f"Analyses: {len(results)}")
print()
print("Prompts needing attention (lowest SOV):")
for p in all_prompts[:10]:
sov = p.get("sovPct", 0)
pos = p.get("avgPos", "N/A")
competitors = p.get("topCompetitors", [])
print(f"\n Prompt: \"{p['prompt'][:80]}...\"")
print(f" SOV: {sov}% | Position: {pos}")
if competitors:
print(f" Top competitors: {', '.join(competitors)}")
print(f" Action: Create or improve content targeting this query")
Schedule this to run after every weekly analysis and send the report to your content team. They get a prioritized list of queries to optimize, with data on which competitors are ahead and by how much.
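A small helper can push that report to Slack via an incoming webhook. This is a sketch: the message fields reuse the promptBreakdown keys from the report above, and the SLACK_WEBHOOK_URL environment variable matches the secret name used in the GitHub Actions example later in this article.

```python
import os

import requests

def format_slack_report(weak_prompts, limit=5):
    """Render the weakest prompts (promptBreakdown rows with prompt,
    sovPct, avgPos) as a Slack-friendly message."""
    lines = ["*GEO weekly report: prompts needing attention*"]
    for p in weak_prompts[:limit]:
        lines.append(f'- "{p["prompt"]}" -- SOV {p.get("sovPct", 0)}%, position {p.get("avgPos", "N/A")}')
    return "\n".join(lines)

def post_report(weak_prompts):
    webhook = os.environ["SLACK_WEBHOOK_URL"]  # Slack incoming webhook URL
    requests.post(webhook, json={"text": format_slack_report(weak_prompts)}, timeout=10)
```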
Integrating AI Search Data into Existing SEO Dashboards
Most teams already have SEO dashboards in tools like Looker Studio, Grafana, or Retool. The API-first approach makes it straightforward to add AI search data alongside your existing metrics.
Custom dashboard with the async analysis API
Run analyses for your key prompts and store the results to build trend charts over time:
const API_KEY = process.env.SELLM_API_KEY;
const BASE = "https://sellm.io/api/v1";
const headers = { Authorization: `Bearer ${API_KEY}`, "Content-Type": "application/json" };
async function runAnalysis(prompt: string) {
// Submit async analysis
const submitRes = await fetch(`${BASE}/async-analysis`, {
method: "POST",
headers,
body: JSON.stringify({
prompt,
providers: ["chatgpt", "claude", "perplexity", "gemini", "grok", "copilot"],
country: "US",
replicates: 3,
}),
});
const { data: { id } } = await submitRes.json();
// Poll until done
while (true) {
const res = await fetch(`${BASE}/async-analysis/${id}`, { headers });
const { data } = await res.json();
    if (data.status === "failed") throw new Error("Analysis failed");
    if (data.status === "succeeded") {
return {
date: new Date(data.finishedAt).toLocaleDateString(),
sov: data.summary.sovPct,
coverage: data.summary.coveragePct,
position: data.summary.avgPos,
sentiment: data.summary.sentiment,
};
}
await new Promise((r) => setTimeout(r, 10_000));
}
}
// Example: render in a React component with Recharts
// const result = await runAnalysis("best API documentation platform");
// Store result in your database, then chart historical data over time.
Combining with traditional SEO data
The most valuable dashboards combine AI search data with traditional metrics:
| Metric Source | What It Shows | Why It Matters |
|---|---|---|
| Google Search Console | Traditional search rankings | Baseline organic visibility |
| Sellm API | AI search visibility | Growing share of discovery traffic |
| Analytics (GA4) | Referral traffic from AI platforms | Actual traffic impact of AI mentions |
| Brand monitoring | Social and press mentions | Upstream signals that influence AI models |
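The joining step for that combination can be sketched in plain Python. The row shapes below are hypothetical stand-ins for a Search Console export and your stored Sellm summaries; adapt the keys to your actual data.

```python
def join_weekly(gsc_rows, geo_rows):
    """Outer-join two weekly series on the "week" key, so weeks present
    in only one source are still kept."""
    by_week = {}
    for r in gsc_rows:
        by_week.setdefault(r["week"], {})["gsc_clicks"] = r["clicks"]
    for r in geo_rows:
        by_week.setdefault(r["week"], {})["sovPct"] = r["sovPct"]
    return [{"week": w, **v} for w, v in sorted(by_week.items())]
```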
Real Workflow Examples
Example 1: Weekly monitoring cron job
A simple GitHub Actions workflow that runs every Monday:
# .github/workflows/geo-monitor.yml
name: Weekly GEO Monitor
on:
schedule:
- cron: "0 9 * * 1" # Every Monday at 9 AM UTC
workflow_dispatch: # Allow manual triggers
jobs:
monitor:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Run monitoring script
env:
SELLM_API_KEY: ${{ secrets.SELLM_API_KEY }}
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
run: python3 scripts/geo_monitor.py
- name: Upload results
uses: actions/upload-artifact@v4
with:
name: geo-results-${{ github.run_id }}
path: results/*.json
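The workflow invokes scripts/geo_monitor.py, which is not shown above. A minimal version might look like the following sketch; the prompt list and output layout are assumptions, and the request shape is the one used in the earlier examples.

```python
# scripts/geo_monitor.py (sketch)
import json
import os
import pathlib
import time

import requests

BASE = "https://sellm.io/api/v1"
PROMPTS = ["best API documentation platform"]  # your tracked prompts

def build_payload(prompt, country="US", replicates=3):
    """Request body for POST /v1/async-analysis, matching the earlier examples."""
    return {
        "prompt": prompt,
        "providers": ["chatgpt", "claude", "perplexity", "gemini", "grok", "copilot"],
        "country": country,
        "replicates": replicates,
    }

def main():
    headers = {
        "Authorization": f"Bearer {os.environ['SELLM_API_KEY']}",
        "Content-Type": "application/json",
    }
    outdir = pathlib.Path("results")
    outdir.mkdir(exist_ok=True)
    for prompt in PROMPTS:
        resp = requests.post(f"{BASE}/async-analysis", headers=headers,
                             json=build_payload(prompt))
        analysis_id = resp.json()["data"]["id"]
        while True:  # poll until a terminal status, as in the earlier examples
            result = requests.get(f"{BASE}/async-analysis/{analysis_id}",
                                  headers=headers).json()["data"]
            if result["status"] != "running":
                break
            time.sleep(10)
        (outdir / f"{analysis_id}.json").write_text(json.dumps(result))

# Run only when the API key is configured (the workflow sets it from secrets)
if __name__ == "__main__" and os.environ.get("SELLM_API_KEY"):
    main()
```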
Example 2: CI/CD integration for content changes
Trigger a baseline run before deploying content changes, then compare after:
# In your deployment pipeline (reuses HEADERS, BASE, submit_and_wait,
# and compare_analyses from the earlier examples)
import json
PROMPT = "best API documentation platform"
def save_baseline():
"""Run before content deployment. Saves result for later comparison."""
result = submit_and_wait(HEADERS, BASE, PROMPT)
with open(".geo-baseline.json", "w") as f:
json.dump(result, f)
print(f"Baseline saved: {result['id']}")
def measure_impact():
"""Run 1-2 weeks after content deployment."""
with open(".geo-baseline.json") as f:
baseline = json.load(f)
followup = submit_and_wait(HEADERS, BASE, PROMPT)
compare_analyses(baseline, followup)
# Before deploy: python -c "from geo_test import save_baseline; save_baseline()"
# After 2 weeks: python -c "from geo_test import measure_impact; measure_impact()"
Example 3: Multi-brand monitoring for agencies
import os
import time

import requests

# Each brand has its own API key and target prompt; BASE and the request
# shape are the same as in the earlier examples
BRANDS = {
"Acme Corp": {"key": os.environ["SELLM_KEY_ACME"], "prompt": "best API documentation platform"},
"Widget Inc": {"key": os.environ["SELLM_KEY_WIDGET"], "prompt": "best widget management software"},
"FooBar SaaS": {"key": os.environ["SELLM_KEY_FOOBAR"], "prompt": "best SaaS analytics tool"},
}
def agency_report():
"""Generate a cross-brand visibility report."""
report = []
for brand_name, config in BRANDS.items():
headers = {"Authorization": f"Bearer {config['key']}", "Content-Type": "application/json"}
# Submit analysis for each brand
resp = requests.post(
f"{BASE}/async-analysis",
headers=headers,
json={
"prompt": config["prompt"],
"providers": ["chatgpt", "claude", "perplexity", "gemini", "grok", "copilot"],
"country": "US",
"replicates": 3,
},
)
analysis_id = resp.json()["data"]["id"]
# Poll until done
while True:
result = requests.get(f"{BASE}/async-analysis/{analysis_id}", headers=headers).json()["data"]
if result["status"] != "running":
break
time.sleep(10)
if result["status"] != "succeeded":
continue
summary = result["summary"]
report.append({
"brand": brand_name,
"sov": summary["sovPct"],
"coverage": summary["coveragePct"],
"position": summary["avgPos"] or 0,
"sentiment": summary["sentiment"] or 0,
})
# Sort by SOV descending
report.sort(key=lambda r: r["sov"], reverse=True)
print("=== Agency GEO Report ===")
print(f"{'Brand':20s} {'SOV':>8s} {'Coverage':>10s} {'Position':>10s} {'Sentiment':>10s}")
print("-" * 60)
for r in report:
print(f"{r['brand']:20s} {r['sov']:7.1f}% {r['coverage']:9.1f}% {r['position']:10.1f} {r['sentiment']:9.1f}")
Why Programmatic Access Matters for Agencies
Agencies face a unique scaling challenge. Managing AI visibility for a single brand is feasible manually. Managing it for 10, 20, or 50 brands is not.
An API-first approach solves the core problems agencies encounter:
- Unified reporting: Pull data from every client project into a single dashboard. No more switching between accounts or copying data into spreadsheets.
- Consistent methodology: Every client gets the same analysis pipeline. Results are comparable across brands, industries, and time periods.
- Automated client reports: Generate weekly or monthly reports programmatically. Include trend charts, competitor analysis, and recommended actions without manual effort.
- Scalable onboarding: Set up a new client in minutes via API calls: create the project, add prompts, configure the schedule. No manual dashboard clicks.
- White-label integration: Embed AI search data into your existing agency dashboard or client portal. The API returns clean JSON that fits any front-end.
Each Sellm project has its own API key, so agencies can manage all clients from a single codebase while keeping data isolated.
Pricing
The Sellm API is included in every plan at no additional cost, with full programmatic access to all endpoints. Each prompt analysis costs less than 1 cent.
For agencies, each client project is a separate subscription, making it straightforward to scale AI visibility monitoring across all clients with fully automated data collection and reporting.
Getting Started
Building an API-first GEO workflow takes less than an hour:
- Create a Sellm account and set up your first project
- Generate an API key in Project Settings > API Keys
- Add your target prompts via the API or dashboard
- Trigger your first run and verify the results
- Set up a weekly cron job or GitHub Action to automate monitoring
- Build your dashboard or integrate with existing tools
The complete API reference is available at sellm.io/docs/api. For a hands-on tutorial with complete code, see our guide on building an AI search monitoring dashboard in 30 minutes.
Start Building Your GEO Workflow
Define your prompts, trigger your first run, and start measuring your AI visibility across ChatGPT, Claude, Perplexity, Gemini, Grok, and Copilot.
Frequently Asked Questions
How is GEO different from traditional SEO?
Traditional SEO optimizes for search engine result pages (SERPs) where your website appears as a link. GEO optimizes for AI-generated answers where your brand is mentioned (or not) in the response text. The ranking factors, measurement methods, and optimization tactics are fundamentally different.
Can I use the API to monitor competitors?
Yes. Every analysis run extracts all brands mentioned in AI responses, not just yours. The prompt breakdown shows which competitors appear, their position, and their share of voice. You can build competitive dashboards directly from the API data.
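For a quick cross-prompt view of who those competitors are, you can tally the topCompetitors field used in the optimization report earlier in this article:

```python
from collections import Counter

def competitor_frequency(result, n=5):
    """Count how often each competitor appears across the prompt breakdown."""
    counts = Counter()
    for p in result.get("promptBreakdown", []):
        counts.update(p.get("topCompetitors", []))
    return counts.most_common(n)
```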
How quickly do AI models reflect content changes?
It varies by provider. Perplexity and Copilot use real-time web search, so changes can appear within days. ChatGPT and Claude rely on training data and periodic updates, so changes may take weeks or months. This is why monitoring all providers matters.
Is the API suitable for agency use with multiple clients?
Yes. Each client project has its own API key and isolated data. Agencies can manage all clients from a single codebase, generate cross-brand reports, and automate client onboarding via the API. See the multi-brand monitoring example above.
What are the API rate limits?
All authenticated endpoints are rate-limited to 60 requests per minute per API key. Rate limit headers (X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset) are included in every response. For most monitoring workflows, this limit is more than sufficient.
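In code, honoring those headers might look like the following sketch. It assumes X-RateLimit-Reset is a Unix timestamp; check the API reference for the exact format.

```python
import time

import requests

def backoff_seconds(reset_header, attempt, now):
    """Seconds to wait after a 429: honor X-RateLimit-Reset when present
    (assumed to be a Unix timestamp), else fall back to exponential backoff."""
    if reset_header:
        return max(float(reset_header) - now, 1.0)
    return float(2 ** attempt)

def get_with_retries(url, headers, max_retries=3):
    """GET that retries on HTTP 429 using the rate-limit headers."""
    for attempt in range(max_retries):
        resp = requests.get(url, headers=headers)
        if resp.status_code != 429:
            return resp
        wait = backoff_seconds(resp.headers.get("X-RateLimit-Reset"), attempt, time.time())
        time.sleep(wait)
    raise RuntimeError(f"Still rate-limited after {max_retries} attempts")
```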