Published on March 14, 2026
Build an AI Search Monitoring Dashboard in 30 Minutes
What you'll build: A monitoring dashboard that tracks your brand's visibility across ChatGPT, Claude, Perplexity, Gemini, Grok, and Copilot with automated weekly runs, trend charts, and Slack alerts.
Discovery is shifting. When a potential customer asks ChatGPT "best CRM for startups" or Claude "top project management tools," the brands that appear in those answers capture trust and traffic without a single click on a Google result. But most companies have no idea whether AI assistants are recommending them or their competitors.
In this tutorial, you'll build a working AI search monitoring dashboard using the Sellm API. By the end, you'll have automated weekly monitoring across six AI providers, a database of historical results, and Slack alerts when your visibility changes.
What We'll Build
The finished dashboard tracks five key metrics across every major AI search platform:
- Share of Voice (SOV%) - How often your brand is mentioned relative to competitors
- Coverage% - What percentage of prompts mention your brand at all
- Average Position - Where your brand appears in the response (1 = mentioned first)
- Sentiment - How positively AI platforms describe your brand (0-1 scale)
- Competitor Movement - When a competitor overtakes you on any prompt
The architecture is straightforward:
```
Sellm API --> Weekly Cron Script --> SQLite/Supabase --> Dashboard
                      |
                      +--> Slack Webhook (alerts on changes)
```
Step 1: Set Up the Sellm API
First, create a project and generate an API key. If you already have a Sellm account, go to Project Settings > API Keys.
Verify your API key
```shell
curl -s https://sellm.io/api/v1/project \
  -H "Authorization: Bearer sellm_your_api_key" | python3 -m json.tool
```
You should see your project details:
```json
{
  "data": {
    "id": "proj_abc123",
    "name": "My Brand",
    "brand": "Acme Corp",
    "createdAt": "2026-01-15T10:00:00Z"
  }
}
```
Define your prompts
Prompts are the queries sent to AI providers. With the async analysis API, you submit one prompt per request along with the providers and locations to analyze. For a CRM brand dashboard, you might monitor prompts like these:
- Brand queries: "What is [Brand]?" / "Tell me about [Brand]"
- Category queries: "Best CRM for small business" / "Top CRM software 2026"
- Comparison queries: "[Brand] vs [Competitor]" / "Compare [Brand] and [Competitor]"
- Problem queries: "How to manage customer relationships for a startup"
- Recommendation queries: "Which CRM do you recommend for a small team?"
For meaningful coverage, we recommend 20-50 prompts across these categories. You can manage your prompt list in the Sellm dashboard or store them in your monitoring script.
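To keep a 20-50 prompt list manageable, you can generate it from the category templates above rather than maintaining it by hand. A minimal sketch; the brand and competitor names here are placeholders to swap for your own:

```python
# Generate a prompt list from the category templates above.
# BRAND and COMPETITORS are placeholders -- substitute your own.
BRAND = "Acme CRM"
COMPETITORS = ["Salesforce", "HubSpot", "Pipedrive"]

TEMPLATES = {
    "brand": ["What is {brand}?", "Tell me about {brand}"],
    "category": ["Best CRM for small business", "Top CRM software 2026"],
    "comparison": ["{brand} vs {competitor}", "Compare {brand} and {competitor}"],
    "problem": ["How to manage customer relationships for a startup"],
    "recommendation": ["Which CRM do you recommend for a small team?"],
}


def build_prompts(brand, competitors):
    prompts = []
    for templates in TEMPLATES.values():
        for t in templates:
            if "{competitor}" in t:
                # Comparison templates expand once per competitor
                prompts.extend(t.format(brand=brand, competitor=c) for c in competitors)
            else:
                prompts.append(t.format(brand=brand))
    return prompts


prompts = build_prompts(BRAND, COMPETITORS)
print(len(prompts))  # -> 12 (2 brand + 2 category + 2x3 comparison + 1 problem + 1 recommendation)
```

Feed the resulting list into the `PROMPTS` constant of the monitoring script in Step 2, or paste it into the Sellm dashboard.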
Step 2: Create a Weekly Monitoring Script
This Python script submits an async analysis for each prompt, waits for completion, and collects the results. The Sellm async analysis API uses a simple two-step flow: POST a prompt to start the analysis, then poll the GET endpoint until it finishes. The completed response contains everything - summary KPIs, provider breakdown, prompt breakdown, and individual results.
Save this as monitor.py:
```python
import requests
import time
import json
import os

API_KEY = os.environ["SELLM_API_KEY"]
BASE_URL = "https://sellm.io/api/v1"
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

PROMPTS = [
    "best CRM for small business",
    "top CRM software for startups",
    "which CRM should I use for a small team",
    "HubSpot vs Salesforce vs Pipedrive for small business",
    "how to manage customer relationships for a startup",
]
PROVIDERS = ["chatgpt", "claude", "perplexity", "gemini", "grok", "copilot"]
LOCATIONS = ["US"]
REPLICATES = 3


def submit_analysis(prompt):
    """Submit a single prompt for async analysis."""
    resp = requests.post(
        f"{BASE_URL}/async-analysis",
        headers=HEADERS,
        json={
            "prompt": prompt,
            "providers": PROVIDERS,
            "locations": LOCATIONS,
            "replicates": REPLICATES,
        },
    )
    resp.raise_for_status()
    data = resp.json()["data"]
    print(f"Submitted: {data['id']} (status: {data['status']}, credits: {data['creditsReserved']})")
    return data["id"]


def wait_for_completion(analysis_id, timeout=600, interval=15):
    """Poll until the analysis finishes or times out."""
    elapsed = 0
    while elapsed < timeout:
        resp = requests.get(
            f"{BASE_URL}/async-analysis/{analysis_id}", headers=HEADERS
        )
        resp.raise_for_status()
        data = resp.json()["data"]
        status = data["status"]
        print(f"  Status: {status}")
        if status == "succeeded":
            return data
        if status == "failed":
            if data.get("hasPartialResults"):
                print(f"  Analysis {analysis_id} failed with partial results")
                return data
            raise RuntimeError(f"Analysis {analysis_id} failed")
        time.sleep(interval)
        elapsed += interval
    raise TimeoutError(f"Analysis {analysis_id} did not complete within {timeout}s")


if __name__ == "__main__":
    all_results = []
    for prompt in PROMPTS:
        print(f"\nAnalyzing: {prompt}")
        analysis_id = submit_analysis(prompt)
        result = wait_for_completion(analysis_id)
        all_results.append(result)

        # Print summary for this prompt
        summary = result.get("summary")
        if summary:
            print(f"  SOV: {summary['sovPct']}%  Coverage: {summary['coveragePct']}%  "
                  f"Avg Position: {summary['avgPos']}  Sentiment: {summary['sentiment']}")

        # Print provider breakdown
        pb = result.get("providerBreakdown", {})
        for entry in pb.get("sovByProvider", []):
            print(f"  {entry['provider']}: SOV={entry['sov']}%")

    # Save all results for the dashboard
    with open("analysis_results.json", "w") as f:
        json.dump(all_results, f, indent=2)
    print("\nAll results saved to analysis_results.json")
```
Run it with:
```shell
export SELLM_API_KEY="sellm_your_api_key"
python3 monitor.py
```
Step 3: Store Results in a Database
For trend tracking, store each analysis result in a database. Here's a minimal SQLite version:
```python
import sqlite3
import json


def init_db(db_path="dashboard.db"):
    conn = sqlite3.connect(db_path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS analyses (
            analysis_id TEXT PRIMARY KEY,
            prompt TEXT,
            finished_at TEXT,
            sov_pct REAL,
            coverage_pct REAL,
            avg_pos REAL,
            sentiment REAL
        )
    """)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS provider_metrics (
            analysis_id TEXT,
            provider TEXT,
            sov REAL,
            coverage REAL,
            sentiment REAL,
            PRIMARY KEY (analysis_id, provider),
            FOREIGN KEY (analysis_id) REFERENCES analyses(analysis_id)
        )
    """)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS prompt_metrics (
            analysis_id TEXT,
            prompt TEXT,
            sov_pct REAL,
            coverage REAL,
            avg_pos REAL,
            sentiment REAL,
            top_competitors TEXT,
            PRIMARY KEY (analysis_id, prompt),
            FOREIGN KEY (analysis_id) REFERENCES analyses(analysis_id)
        )
    """)
    conn.commit()
    return conn


def store_analysis(conn, result):
    s = result.get("summary", {})
    conn.execute(
        "INSERT OR REPLACE INTO analyses VALUES (?, ?, ?, ?, ?, ?, ?)",
        (
            result["id"],
            result["prompt"],
            result.get("finishedAt"),
            s.get("sovPct"),
            s.get("coveragePct"),
            s.get("avgPos"),
            s.get("sentiment"),
        ),
    )

    # Store provider breakdown from sovByProvider, coverageByProvider, sentimentByProvider
    pb = result.get("providerBreakdown", {})
    sov_map = {e["provider"]: e["sov"] for e in pb.get("sovByProvider", [])}
    cov_map = {e["provider"]: e["coverage"] for e in pb.get("coverageByProvider", [])}
    sent_map = {e["provider"]: e["sentiment"] for e in pb.get("sentimentByProvider", [])}
    for provider in sov_map:
        conn.execute(
            "INSERT OR REPLACE INTO provider_metrics VALUES (?, ?, ?, ?, ?)",
            (result["id"], provider, sov_map.get(provider), cov_map.get(provider), sent_map.get(provider)),
        )

    # Store prompt breakdown
    for p in result.get("promptBreakdown", []):
        conn.execute(
            "INSERT OR REPLACE INTO prompt_metrics VALUES (?, ?, ?, ?, ?, ?, ?)",
            (
                result["id"],
                p["prompt"],
                p.get("sovPct"),
                p.get("coverage"),
                p.get("avgPos"),
                p.get("sentiment"),
                json.dumps(p.get("topCompetitors", [])),
            ),
        )
    conn.commit()
```
If you prefer a hosted database, the same schema works with Supabase PostgreSQL - just swap sqlite3 for supabase-py.
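A quick illustration of why the schema uses INSERT OR REPLACE: re-running the weekly script for an analysis id that is already stored updates the row rather than duplicating it. A self-contained in-memory example (the id and values are made up):

```python
# Demonstrate INSERT OR REPLACE semantics on a primary-keyed table:
# writing the same analysis_id twice keeps one row with the latest value.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE analyses (analysis_id TEXT PRIMARY KEY, sov_pct REAL)")
conn.execute("INSERT OR REPLACE INTO analyses VALUES ('an_1', 21.0)")
conn.execute("INSERT OR REPLACE INTO analyses VALUES ('an_1', 25.0)")  # same id, new value
rows = conn.execute("SELECT * FROM analyses").fetchall()
print(rows)  # -> [('an_1', 25.0)] -- one row, latest value wins
```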
Step 4: Build the Dashboard
With data in your database, you can build a dashboard with any framework. Here's what to display:
Key Metrics Cards
Show the latest values with week-over-week deltas:
```python
# Query the latest two analyses for the delta calculation
cursor = conn.execute(
    "SELECT sov_pct, coverage_pct, avg_pos, sentiment FROM analyses ORDER BY finished_at DESC LIMIT 2"
)
rows = cursor.fetchall()
current = rows[0]
previous = rows[1] if len(rows) > 1 else current

metrics = {
    "Share of Voice": {"value": f"{current[0]}%", "delta": round(current[0] - previous[0], 1)},
    "Coverage": {"value": f"{current[1]}%", "delta": round(current[1] - previous[1], 1)},
    # Lower position is better, so this delta is flipped: positive = improvement
    "Avg Position": {"value": f"{current[2]:.1f}", "delta": round(previous[2] - current[2], 1)},
    "Sentiment": {"value": f"{current[3]:.2f}", "delta": round(current[3] - previous[3], 2)},
}
```
SOV Trend Chart
Plot share of voice over time to spot upward or downward trends:
```python
import matplotlib.pyplot as plt  # or plotly, or any charting library

cursor = conn.execute(
    "SELECT finished_at, sov_pct FROM analyses ORDER BY finished_at ASC"
)
dates = []
sov_values = []
for row in cursor:
    dates.append(row[0][:10])  # YYYY-MM-DD
    sov_values.append(row[1])

plt.plot(dates, sov_values, marker="o")
plt.title("Share of Voice Trend")
plt.ylabel("SOV %")
plt.xticks(rotation=45)
plt.tight_layout()
plt.savefig("sov_trend.png")
```
Provider Comparison Table
Compare your visibility across AI platforms - you might rank well in ChatGPT but poorly in Claude:
```python
cursor = conn.execute("""
    SELECT provider, sov, coverage, sentiment
    FROM provider_metrics
    WHERE analysis_id = (SELECT analysis_id FROM analyses ORDER BY finished_at DESC LIMIT 1)
    ORDER BY sov DESC
""")
for row in cursor:
    print(f"{row[0]:12s}  SOV: {row[1]:5.1f}%  Coverage: {row[2]:5.1f}%  Sentiment: {row[3]:.2f}")
```
Example output:
```
ChatGPT       SOV:  34.0%  Coverage:  80.0%  Sentiment: 0.78
Perplexity    SOV:  28.0%  Coverage:  75.0%  Sentiment: 0.81
Claude        SOV:  22.0%  Coverage:  65.0%  Sentiment: 0.75
Gemini        SOV:  18.0%  Coverage:  60.0%  Sentiment: 0.72
Grok          SOV:  15.0%  Coverage:  55.0%  Sentiment: 0.69
Copilot       SOV:  12.0%  Coverage:  50.0%  Sentiment: 0.70
```
Competitor Comparison
The prompt breakdown shows which competitors are mentioned alongside your brand and where they rank:
```python
import json

cursor = conn.execute("""
    SELECT prompt, sov_pct, avg_pos, top_competitors
    FROM prompt_metrics
    WHERE analysis_id = (SELECT analysis_id FROM analyses ORDER BY finished_at DESC LIMIT 1)
    ORDER BY sov_pct ASC
""")
print("Prompts where you have the lowest visibility:")
for row in cursor.fetchall()[:5]:
    competitors = json.loads(row[3])
    print(f"  '{row[0][:60]}...'")
    print(f"    SOV: {row[1]}%  Position: {row[2]}  Top competitors: {', '.join(competitors)}")
```
Step 5: Add Slack Alerts
Get notified when your visibility changes significantly. Add this to your monitoring script:
```python
import os
import requests as req

SLACK_WEBHOOK = os.environ.get("SLACK_WEBHOOK_URL")


def send_alert(message, details=""):
    if not SLACK_WEBHOOK:
        return
    payload = {
        "blocks": [
            {"type": "header", "text": {"type": "plain_text", "text": message}},
            {"type": "section", "text": {"type": "mrkdwn", "text": details}},
        ]
    }
    req.post(SLACK_WEBHOOK, json=payload)


def check_alerts(conn, current_summary):
    """Compare the current analysis to the previous one and alert on significant changes."""
    cursor = conn.execute(
        "SELECT sov_pct, coverage_pct, avg_pos FROM analyses ORDER BY finished_at DESC LIMIT 1 OFFSET 1"
    )
    prev = cursor.fetchone()
    if not prev:
        return  # First analysis, nothing to compare
    prev_sov, prev_cov, prev_pos = prev
    cur_sov = current_summary["sovPct"]
    cur_cov = current_summary["coveragePct"]
    cur_pos = current_summary["avgPos"]

    alerts = []
    # SOV dropped more than 5 percentage points
    sov_delta = cur_sov - prev_sov
    if sov_delta < -5:
        alerts.append(f"Share of Voice dropped {abs(sov_delta):.1f}pp ({prev_sov}% -> {cur_sov}%)")
    # Coverage dropped more than 10 percentage points
    cov_delta = cur_cov - prev_cov
    if cov_delta < -10:
        alerts.append(f"Coverage dropped {abs(cov_delta):.1f}pp ({prev_cov}% -> {cur_cov}%)")
    # Position worsened (grew) by more than 1 spot
    if cur_pos is not None and prev_pos is not None:
        pos_delta = cur_pos - prev_pos
        if pos_delta > 1:
            alerts.append(f"Avg position worsened by {pos_delta:.1f} ({prev_pos:.1f} -> {cur_pos:.1f})")

    if alerts:
        send_alert(
            "AI Visibility Alert",
            "\n".join([f"- {a}" for a in alerts]),
        )
```
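The thresholds above (5pp SOV, 10pp coverage, 1 position) are reasonable starting points, but you may want to tune them. Here is the same comparison logic factored into a pure function you can exercise with hypothetical numbers before wiring it to Slack; the sample values below are made up:

```python
# Alert-threshold logic, isolated so it can be tuned and tested without
# a database or webhook. prev/cur are (sov_pct, coverage_pct, avg_pos).
def visibility_alerts(prev, cur):
    prev_sov, prev_cov, prev_pos = prev
    cur_sov, cur_cov, cur_pos = cur
    alerts = []
    if cur_sov - prev_sov < -5:          # SOV fell by more than 5pp
        alerts.append("sov_drop")
    if cur_cov - prev_cov < -10:         # coverage fell by more than 10pp
        alerts.append("coverage_drop")
    if prev_pos is not None and cur_pos is not None and cur_pos - prev_pos > 1:
        alerts.append("position_worse")  # rank number grew by more than 1
    return alerts


print(visibility_alerts((30.0, 70.0, 2.5), (22.0, 55.0, 4.0)))
# -> ['sov_drop', 'coverage_drop', 'position_worse']
print(visibility_alerts((30.0, 70.0, 2.5), (29.0, 68.0, 2.8)))
# -> [] (small fluctuations stay below the thresholds)
```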
Key Metrics Explained
Understanding what the Sellm API returns helps you build a more useful dashboard. When an async analysis completes, the GET response includes a summary, providerBreakdown, promptBreakdown, and detailed results array - all in a single response.
| Metric | API Field | What It Means |
|---|---|---|
| Share of Voice | `sovPct` | Percentage of brand mentions that are yours vs. all brands mentioned (0-100). Higher is better. |
| Coverage | `coveragePct` (summary) / `coverage` (promptBreakdown) | Percentage of results where your brand appears in the AI response at all (0-100). |
| Average Position | `avgPos` | Mean rank when your brand is mentioned. 1.0 = always first. Lower is better. Null if not mentioned. |
| Sentiment | `sentiment` | How positively the AI describes your brand, on a 0-1 scale. Null if not mentioned. |
| Sentiment Dimensions | `sentimentDimensions` | Broken down into trustworthiness, authority, recommendation strength, and fit for query intent (each 0-1). |
| Top Competitors | `topCompetitors` | Top 2 competitor brands by mention count for each prompt (string array). |
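To make these definitions concrete, here is an illustrative recomputation of coverage, SOV, and average position from a few made-up results entries. This mirrors the table's definitions; Sellm's actual aggregation may differ in weighting and rounding:

```python
# Recompute the three core metrics from raw results, per the definitions
# above. The results data is fabricated for illustration.
results = [
    {"brandsMentioned": ["Salesforce", "Acme CRM", "HubSpot"], "position": 2},
    {"brandsMentioned": ["Salesforce", "HubSpot"], "position": None},  # brand absent
    {"brandsMentioned": ["Acme CRM"], "position": 1},
]
BRAND = "Acme CRM"

brand_hits = sum(1 for r in results if BRAND in r["brandsMentioned"])
all_mentions = sum(len(r["brandsMentioned"]) for r in results)
positions = [r["position"] for r in results if r["position"] is not None]

coverage_pct = round(100 * brand_hits / len(results))   # responses mentioning you
sov_pct = round(100 * brand_hits / all_mentions)        # your share of all mentions
avg_pos = sum(positions) / len(positions) if positions else None

print(coverage_pct, sov_pct, avg_pos)  # -> 67 33 1.5
```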
Understanding the Full Response
Each completed async analysis returns all the data you need in one GET request. Here is an example of the response structure you will work with:
```
# The GET /v1/async-analysis/{analysisId} response contains everything:

# data.summary - aggregate KPIs
{
  "sovPct": 25,
  "coveragePct": 67,
  "avgPos": 3.2,
  "sentiment": 0.74
}

# data.providerBreakdown - KPIs split by AI provider
{
  "sovByProvider": [{"provider": "ChatGPT", "sov": 33}, {"provider": "Claude", "sov": 20}],
  "coverageByProvider": [{"provider": "ChatGPT", "coverage": 80}, {"provider": "Claude", "coverage": 60}],
  "sentimentByProvider": [{"provider": "ChatGPT", "sentiment": 0.78}, {"provider": "Claude", "sentiment": 0.71}]
}

# data.promptBreakdown[] - per-prompt detail
{
  "prompt": "best CRM for small business",
  "sovPct": 25,
  "coverage": 67,
  "avgPos": 3.2,
  "sentiment": 0.74,
  "volume": 12,
  "opportunities": 18,
  "topCompetitors": ["Salesforce", "HubSpot"],
  "sentimentDimensions": {
    "trustworthiness": 0.8,
    "authority": 0.7,
    "recommendation_strength": 0.75,
    "fit_for_query_intent": 0.72
  },
  "details": {
    "brandHits": 4,
    "competitorHits": {"salesforce": 6, "hubspot": 5},
    "positions": [2, 3, 4],
    "sentiments": [0.8, 0.7, 0.72]
  }
}

# data.results[] - individual result per provider/replicate
{
  "prompt": "best CRM for small business",
  "provider": "chatgpt",
  "country": "US",
  "replicateIndex": 0,
  "position": 3,
  "brandsMentioned": ["Salesforce", "HubSpot", "Acme CRM", "Pipedrive"],
  "brandSentiment": {
    "trustworthiness": 0.8,
    "authority": 0.7,
    "recommendation_strength": 0.75,
    "fit_for_query_intent": 0.72
  },
  "citedUrls": ["https://example.com/crm-guide"],
  "citedDomains": ["example.com"],
  "responseText": "Here are the best CRMs for small business...",
  "enrichment": null
}
```
Node.js Alternative
Prefer JavaScript? Here's the same monitoring logic in Node.js:
```javascript
const API_KEY = process.env.SELLM_API_KEY;
const BASE = "https://sellm.io/api/v1";
const headers = {
  Authorization: `Bearer ${API_KEY}`,
  "Content-Type": "application/json",
};

const PROMPTS = [
  "best CRM for small business",
  "top CRM software for startups",
  "which CRM should I use for a small team",
];

async function submitAnalysis(prompt) {
  const res = await fetch(`${BASE}/async-analysis`, {
    method: "POST",
    headers,
    body: JSON.stringify({
      prompt,
      providers: ["chatgpt", "claude", "perplexity", "gemini", "grok", "copilot"],
      locations: ["US"],
      replicates: 3,
    }),
  });
  const { data } = await res.json();
  console.log(`Submitted: ${data.id} (credits: ${data.creditsReserved})`);
  return data.id;
}

async function waitForCompletion(analysisId, timeout = 600000) {
  const start = Date.now();
  while (Date.now() - start < timeout) {
    const res = await fetch(`${BASE}/async-analysis/${analysisId}`, { headers });
    const { data } = await res.json();
    if (data.status === "succeeded") return data;
    if (data.status === "failed") throw new Error("Analysis failed");
    console.log(`  Status: ${data.status}`);
    await new Promise((r) => setTimeout(r, 15000));
  }
  throw new Error("Timeout");
}

async function main() {
  for (const prompt of PROMPTS) {
    console.log(`\nAnalyzing: ${prompt}`);
    const analysisId = await submitAnalysis(prompt);
    const result = await waitForCompletion(analysisId);
    const { summary, providerBreakdown, promptBreakdown } = result;
    console.log(`  SOV: ${summary.sovPct}%`);
    console.log(`  Coverage: ${summary.coveragePct}%`);
    console.log(`  Avg Position: ${summary.avgPos}`);
    console.log(`  Sentiment: ${summary.sentiment}`);
    for (const entry of providerBreakdown.sovByProvider) {
      console.log(`  ${entry.provider}: SOV=${entry.sov}%`);
    }
    for (const pb of promptBreakdown) {
      console.log(`  Top competitors: ${pb.topCompetitors.join(", ")}`);
    }
  }
}

main().catch(console.error);
```
Scaling Up
Once the basic dashboard is working, here are ways to get more value:
- Multiple prompt sets: Group prompts by funnel stage (awareness, consideration, decision) and filter results by category in your dashboard
- Location-based tracking: Monitor how visibility differs across regions by passing different `locations` arrays (e.g., `["US", "GB", "DE"]`) in your async analysis requests
- A/B testing content changes: Submit the same prompts before and after updating your website content to measure the impact on AI visibility
- Competitive dashboards: Use `promptBreakdown` and `details.competitorHits` to build competitor-specific views showing where each rival outranks you
- Executive reports: Schedule a weekly email with the top-line metrics and any alerts that fired
Pricing
Each prompt analysis costs less than 1 cent. Monitoring 50 prompts weekly across 3 providers with 3 replicates = 450 analyses/week, costing under $4.50/week.
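The arithmetic, using the figures above and an upper bound of 1 cent per analysis:

```python
# Back-of-the-envelope weekly cost for the setup described above,
# assuming a flat upper bound of 1 cent per analysis.
prompts, providers, replicates = 50, 3, 3
analyses_per_week = prompts * providers * replicates
cost_cents = analyses_per_week * 1  # 1 cent each, as an upper bound
print(analyses_per_week, f"${cost_cents / 100:.2f}")  # -> 450 $4.50
```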
Next Steps
- Create a Sellm account and add your first prompts
- Generate an API key in Project Settings
- Run the monitoring script above to verify everything works
- Set up a cron job or GitHub Action to run it weekly
- Connect to your preferred dashboard tool (Grafana, Retool, or a simple HTML page)
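For the cron-job step, a crontab entry might look like the following; the install path, log file, and API key are placeholders to adjust for your environment:

```shell
# Run monitor.py every Monday at 09:00. Paths and the key are placeholders.
# m h dom mon dow  command
0 9 * * 1  cd /opt/ai-dashboard && SELLM_API_KEY="sellm_your_api_key" python3 monitor.py >> monitor.log 2>&1
```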
The full API reference is available at sellm.io/docs/api. If you run into questions, reach out to our team at sellm.io/contact.
Start Monitoring Your AI Visibility
Set up your first dashboard in 30 minutes and see where your brand stands across ChatGPT, Claude, Perplexity, Gemini, Grok, and Copilot.
Frequently Asked Questions
Do I need separate API calls for summary, provider breakdown, and results?
No. The GET /v1/async-analysis/{analysisId} endpoint returns everything in a single response once the analysis completes: summary, providerBreakdown, promptBreakdown, and the full results array. Just poll until status is "succeeded".
How often can I trigger manual runs?
Manual runs are limited to 1 per 7-day window per project. Scheduled weekly runs happen automatically and don't count toward this limit. For most monitoring use cases, the weekly schedule is sufficient.
Which AI providers does the API track?
Sellm tracks ChatGPT (OpenAI), Claude (Anthropic), Perplexity, Gemini (Google), Grok (xAI), and Microsoft Copilot. You can assign specific providers to each analysis request or track all of them.
What are the API rate limits?
All authenticated endpoints are rate-limited to 60 requests per minute per API key. Rate limit headers (X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset) are included in every response.
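If many prompts poll in parallel, the 60 req/min limit can produce HTTP 429 responses. A small helper for deciding how long to back off, assuming `X-RateLimit-Reset` carries a Unix timestamp (verify the exact format against the API docs); the fallback and minimum waits are illustrative choices, not part of the API:

```python
def backoff_seconds(reset_header, now, fallback=15.0, minimum=1.0):
    """How long to sleep after an HTTP 429 before retrying.

    reset_header: value of X-RateLimit-Reset (assumed Unix timestamp),
    or None when the header is absent. The policy here is a sketch,
    not Sellm's documented behavior.
    """
    if reset_header is None:
        return fallback
    return max(float(reset_header) - now, minimum)


print(backoff_seconds("1000", now=990.0))  # -> 10.0 (reset is 10 seconds away)
print(backoff_seconds(None, now=990.0))    # -> 15.0 (header missing, fixed wait)
print(backoff_seconds("980", now=990.0))   # -> 1.0  (window already reset)
```

In `wait_for_completion`, you could check for `resp.status_code == 429` before `raise_for_status()` and sleep for `backoff_seconds(resp.headers.get("X-RateLimit-Reset"), time.time())` instead of the fixed interval.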