How to Track Brand Mentions in ChatGPT with API (Step-by-Step Guide)

Updated: March 14, 2026 · Complete tutorial with Python and JavaScript code examples

AI search is quietly replacing Google for millions of buying decisions. When someone asks ChatGPT "best CRM for startups" or Claude "top project management tools," the brands that get mentioned win real customers — often without a single click to a website. If you are not tracking whether AI models recommend your brand, you are flying blind in the fastest-growing search channel.

This guide walks you through using the Sellm API to programmatically monitor your brand mentions across ChatGPT, Claude, Perplexity, Gemini, and Grok. By the end, you will have a working script that submits prompts, polls for results, and extracts the exact metrics that matter: position, share of voice, sentiment, and coverage.

Why Tracking AI Brand Mentions Matters

Traditional SEO tools track Google rankings. But an increasing share of your potential customers never reach Google at all. They ask an AI assistant directly, get a recommendation, and act on it.

If ChatGPT recommends your competitor instead of you for your core keywords, you are losing deals you never even knew existed. The Sellm API lets you detect this programmatically and track changes over time.

Prerequisites

Before you start, you will need:

- A sellm.io account with a configured project (target brand and competitors)
- A Sellm API key (created in Step 1 below)
- Python 3 with the requests library, or Node.js 18+, for the scripting steps
- curl for the quick command-line examples

Step 1: Get Your API Key

Sign in to sellm.io and navigate to your project. Open Settings → API Keys and click Create API Key. Copy the key immediately — it is only shown once.

Your API key is scoped to a single project and carries the same permissions as the dashboard. Store it securely as an environment variable:

export SELLM_API_KEY="sk_live_your_api_key_here"

Step 2: Submit Your First Analysis

The POST /v1/async-analysis endpoint accepts a prompt and queues it for analysis across the AI providers and geographies you specify. Here is a minimal curl example:

curl -X POST https://api.sellm.io/v1/async-analysis \
  -H "Authorization: Bearer $SELLM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "best crm for european saas teams",
    "replicates": 3,
    "providers": ["chatgpt", "claude", "perplexity"],
    "locations": ["US", "DE"]
  }'

The response confirms the analysis has been accepted and returns an ID for polling:

{
  "data": {
    "id": "aa_01abc",
    "projectId": "proj_123",
    "status": "running",
    "creditsReserved": 18,
    "webhook": {
      "configured": false,
      "status": null
    },
    "createdAt": "2026-03-14T10:00:00.000Z"
  }
}

Credits reserved equals replicates × providers × locations. In this example: 3 replicates × 3 providers × 2 locations = 18 credits.
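You can sanity-check the credit cost before submitting. A small helper (names here are illustrative, not part of the API) mirrors the formula above:

```python
def credits_reserved(replicates: int, providers: list[str], locations: list[str]) -> int:
    """Credits reserved = replicates x providers x locations."""
    return replicates * len(providers) * len(locations)

# The example request above: 3 replicates, 3 providers, 2 locations
print(credits_reserved(3, ["chatgpt", "claude", "perplexity"], ["US", "DE"]))  # 18
```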

Step 3: Poll for Results

Use GET /v1/async-analysis/{analysisId} to check status. The endpoint returns "status": "running" until the analysis finishes, then returns the full result payload.

curl https://api.sellm.io/v1/async-analysis/aa_01abc \
  -H "Authorization: Bearer $SELLM_API_KEY"

While running, you get a minimal response:

{
  "data": {
    "id": "aa_01abc",
    "status": "running",
    "brandName": "YourBrand",
    "providers": ["chatgpt", "claude", "perplexity"],
    "locations": ["US", "DE"],
    "replicates": 3,
    "creditsReserved": 18,
    "createdAt": "2026-03-14T10:00:00.000Z",
    "startedAt": "2026-03-14T10:00:01.000Z",
    "finishedAt": null,
    "webhook": { "configured": false, "status": null }
  }
}

Typical analysis takes 30–90 seconds. Poll every 5–10 seconds until status changes to "completed" or "failed".

Step 4: Extract Brand Mentions from the Response

Once completed, the response includes a summary, providerBreakdown, and promptBreakdown with the metrics you need:

{
  "data": {
    "id": "aa_01abc",
    "status": "completed",
    "brandName": "YourBrand",
    "providers": ["chatgpt", "claude", "perplexity"],
    "locations": ["US", "DE"],
    "replicates": 3,
    "creditsReserved": 18,
    "createdAt": "2026-03-14T10:00:00.000Z",
    "startedAt": "2026-03-14T10:00:01.000Z",
    "finishedAt": "2026-03-14T10:01:12.000Z",
    "webhook": { "configured": false, "status": null },
    "summary": {
      "sovPct": 15,
      "coveragePct": 50,
      "avgPos": 3.5,
      "sentiment": 0.72
    },
    "providerBreakdown": {
      "sovByProvider": [
        { "provider": "ChatGPT", "sov": 20 },
        { "provider": "Claude", "sov": 10 },
        { "provider": "Perplexity", "sov": 15 }
      ],
      "coverageByProvider": [
        { "provider": "ChatGPT", "coverage": 66 },
        { "provider": "Claude", "coverage": 33 },
        { "provider": "Perplexity", "coverage": 50 }
      ],
      "sentimentByProvider": [
        { "provider": "ChatGPT", "sentiment": 0.8 },
        { "provider": "Claude", "sentiment": 0.65 },
        { "provider": "Perplexity", "sentiment": 0.71 }
      ]
    },
    "promptBreakdown": [
      {
        "prompt": "best crm for european saas teams",
        "sovPct": 15,
        "coverage": 50,
        "avgPos": 3.5,
        "sentiment": 0.72,
        "volume": 12,
        "opportunities": 18,
        "topCompetitors": ["HubSpot", "Salesforce"],
        "sentimentDimensions": {
          "trustworthiness": 0.75,
          "authority": 0.68,
          "recommendation_strength": 0.80,
          "fit_for_query_intent": 0.65
        },
        "details": {
          "brandHits": 9,
          "competitorHits": {
            "hubspot": 14,
            "salesforce": 11
          },
          "positions": [2, 4, 5, 3, 4, 2, 3, 5, 4],
          "sentiments": [0.8, 0.7, 0.65, 0.75, 0.7, 0.8, 0.72, 0.68, 0.7]
        }
      }
    ]
  }
}

Key fields to focus on:

- summary.sovPct — your overall share of voice across all runs
- summary.coveragePct — the percentage of responses that mention your brand at all
- summary.avgPos — your average position in the recommendation list (1 = first mentioned)
- summary.sentiment — how positively the AI describes your brand (0–1 scale)
- promptBreakdown[].topCompetitors — the brands mentioned most often alongside or instead of yours

Step 5: Build a Python Script for Weekly Monitoring

Here is a complete Python script that submits an analysis, polls for results, and prints a summary report. You can schedule this with cron or any task scheduler for weekly monitoring.

#!/usr/bin/env python3
"""Weekly AI brand mention tracker using the Sellm API."""

import os
import time
import json
import requests

SELLM_API_KEY = os.environ["SELLM_API_KEY"]
BASE_URL = "https://api.sellm.io/v1"
HEADERS = {
    "Authorization": f"Bearer {SELLM_API_KEY}",
    "Content-Type": "application/json",
}

PROMPTS = [
    "best crm for startups",
    "top project management tools for remote teams",
    "best email marketing platform for ecommerce",
]

PROVIDERS = ["chatgpt", "claude", "perplexity", "gemini", "grok"]
LOCATIONS = ["US"]
REPLICATES = 3
POLL_INTERVAL = 10  # seconds
MAX_WAIT = 300  # 5 minutes


def submit_analysis(prompt: str) -> str:
    """Submit a prompt for async analysis. Returns the analysis ID."""
    resp = requests.post(
        f"{BASE_URL}/async-analysis",
        headers=HEADERS,
        json={
            "prompt": prompt,
            "replicates": REPLICATES,
            "providers": PROVIDERS,
            "locations": LOCATIONS,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["id"]


def poll_result(analysis_id: str) -> dict:
    """Poll until the analysis completes or fails."""
    elapsed = 0
    while elapsed < MAX_WAIT:
        resp = requests.get(
            f"{BASE_URL}/async-analysis/{analysis_id}",
            headers=HEADERS,
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()["data"]
        if data.get("finishedAt") is not None:
            return data
        time.sleep(POLL_INTERVAL)
        elapsed += POLL_INTERVAL
    raise TimeoutError(f"Analysis {analysis_id} did not finish within {MAX_WAIT}s")


def print_report(result: dict) -> None:
    """Print a human-readable summary of the analysis."""
    summary = result.get("summary")
    if not summary:
        print(f"  No summary available (status: {result.get('status')})")
        return

    print(f"  Share of Voice: {summary['sovPct']}%")
    print(f"  Coverage:       {summary['coveragePct']}%")
    print(f"  Avg Position:   {summary['avgPos']}")
    print(f"  Sentiment:      {summary['sentiment']}")

    breakdown = result.get("providerBreakdown", {})
    if breakdown.get("sovByProvider"):
        print("  --- By Provider ---")
        for item in breakdown["sovByProvider"]:
            print(f"    {item['provider']}: SOV {item['sov']}%")

    prompts = result.get("promptBreakdown", [])
    for p in prompts:
        competitors = ", ".join(p.get("topCompetitors", []))
        print(f"  Top competitors: {competitors}")


def main():
    print("=== Weekly AI Brand Mention Report ===\n")
    for prompt in PROMPTS:
        print(f'Prompt: "{prompt}"')
        try:
            analysis_id = submit_analysis(prompt)
            print(f"  Submitted (ID: {analysis_id}). Polling...")
            result = poll_result(analysis_id)
            print_report(result)
        except Exception as e:
            print(f"  Error: {e}")
        print()


if __name__ == "__main__":
    main()

Usage: Set your API key and run the script:

export SELLM_API_KEY="sk_live_your_key"
python3 track_mentions.py

Step 6: Build a Node.js Version

Here is the equivalent script in Node.js using the built-in fetch API (Node.js 18+):

#!/usr/bin/env node
/**
 * Weekly AI brand mention tracker using the Sellm API.
 * Requires Node.js 18+ for native fetch.
 */

const SELLM_API_KEY = process.env.SELLM_API_KEY;
if (!SELLM_API_KEY) {
  console.error("Set SELLM_API_KEY environment variable");
  process.exit(1);
}

const BASE_URL = "https://api.sellm.io/v1";
const HEADERS = {
  Authorization: `Bearer ${SELLM_API_KEY}`,
  "Content-Type": "application/json",
};

const PROMPTS = [
  "best crm for startups",
  "top project management tools for remote teams",
  "best email marketing platform for ecommerce",
];

const PROVIDERS = ["chatgpt", "claude", "perplexity", "gemini", "grok"];
const LOCATIONS = ["US"];
const REPLICATES = 3;
const POLL_INTERVAL_MS = 10_000;
const MAX_WAIT_MS = 300_000;

async function submitAnalysis(prompt) {
  const resp = await fetch(`${BASE_URL}/async-analysis`, {
    method: "POST",
    headers: HEADERS,
    body: JSON.stringify({
      prompt,
      replicates: REPLICATES,
      providers: PROVIDERS,
      locations: LOCATIONS,
    }),
  });
  if (!resp.ok) throw new Error(`Submit failed: ${resp.status} ${await resp.text()}`);
  const json = await resp.json();
  return json.data.id;
}

async function pollResult(analysisId) {
  let elapsed = 0;
  while (elapsed < MAX_WAIT_MS) {
    const resp = await fetch(`${BASE_URL}/async-analysis/${analysisId}`, {
      headers: HEADERS,
    });
    if (!resp.ok) throw new Error(`Poll failed: ${resp.status}`);
    const { data } = await resp.json();
    if (data.finishedAt !== null) return data;
    await new Promise((r) => setTimeout(r, POLL_INTERVAL_MS));
    elapsed += POLL_INTERVAL_MS;
  }
  throw new Error(`Analysis ${analysisId} timed out after ${MAX_WAIT_MS / 1000}s`);
}

function printReport(result) {
  const summary = result.summary;
  if (!summary) {
    console.log(`  No summary (status: ${result.status})`);
    return;
  }
  console.log(`  Share of Voice: ${summary.sovPct}%`);
  console.log(`  Coverage:       ${summary.coveragePct}%`);
  console.log(`  Avg Position:   ${summary.avgPos}`);
  console.log(`  Sentiment:      ${summary.sentiment}`);

  const breakdown = result.providerBreakdown || {};
  if (breakdown.sovByProvider) {
    console.log("  --- By Provider ---");
    for (const item of breakdown.sovByProvider) {
      console.log(`    ${item.provider}: SOV ${item.sov}%`);
    }
  }

  for (const p of result.promptBreakdown || []) {
    const competitors = (p.topCompetitors || []).join(", ");
    console.log(`  Top competitors: ${competitors}`);
  }
}

async function main() {
  console.log("=== Weekly AI Brand Mention Report ===\n");
  for (const prompt of PROMPTS) {
    console.log(`Prompt: "${prompt}"`);
    try {
      const analysisId = await submitAnalysis(prompt);
      console.log(`  Submitted (ID: ${analysisId}). Polling...`);
      const result = await pollResult(analysisId);
      printReport(result);
    } catch (err) {
      console.log(`  Error: ${err.message}`);
    }
    console.log();
  }
}

main();

Usage:

export SELLM_API_KEY="sk_live_your_key"
node track_mentions.mjs

Step 7: Set Up Webhooks for Real-Time Notifications

Instead of polling, you can receive results as soon as they are ready by providing a webhook in your submission request:

curl -X POST https://api.sellm.io/v1/async-analysis \
  -H "Authorization: Bearer $SELLM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "best crm for startups",
    "replicates": 3,
    "providers": ["chatgpt", "claude", "perplexity"],
    "locations": ["US"],
    "webhook": {
      "url": "https://your-server.com/sellm-webhook"
    }
  }'

When the analysis finishes (success or failure), Sellm sends a signed HTTP POST to your webhook URL. The payload is signed with HMAC-SHA256, and delivery is retried for up to 24 hours if your server returns a non-2xx response.
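Before trusting a webhook payload, verify the signature. The snippet below is a sketch: it assumes a hex-encoded HMAC-SHA256 of the raw request body delivered in a header — the actual header name and encoding are not specified here, so check the Sellm webhook documentation before relying on this.

```python
import hashlib
import hmac

def verify_signature(secret: str, raw_body: bytes, signature_hex: str) -> bool:
    """Check a hex-encoded HMAC-SHA256 signature over the raw request body.

    Assumes a hex digest of the exact bytes received; adjust if the API
    uses base64 encoding or a timestamped signing scheme.
    """
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing attacks
    return hmac.compare_digest(expected, signature_hex)
```

In a Flask handler you would call this with request.get_data() (the raw bytes, not the parsed JSON) before processing the payload.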

Here is a minimal webhook handler in Python (Flask):

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/sellm-webhook", methods=["POST"])
def handle_webhook():
    payload = request.get_json()
    data = payload["data"]

    if data.get("summary"):
        summary = data["summary"]
        print(f"Analysis {data['id']} completed:")
        print(f"  SOV: {summary['sovPct']}%")
        print(f"  Coverage: {summary['coveragePct']}%")
        print(f"  Sentiment: {summary['sentiment']}")

        # Send to Slack, email, or your data warehouse
        # slack_notify(summary)
        # store_in_database(data)

    return jsonify({"received": True}), 200

And in Node.js (Express):

import express from "express";

const app = express();
app.use(express.json());

app.post("/sellm-webhook", (req, res) => {
  const { data } = req.body;

  if (data.summary) {
    console.log(`Analysis ${data.id} completed:`);
    console.log(`  SOV: ${data.summary.sovPct}%`);
    console.log(`  Coverage: ${data.summary.coveragePct}%`);
    console.log(`  Sentiment: ${data.summary.sentiment}`);

    // Send to Slack, email, or your data warehouse
  }

  res.json({ received: true });
});

app.listen(3000, () => console.log("Webhook server on :3000"));

Key Metrics to Track

The Sellm API returns four core metrics that together give you a complete picture of your AI search visibility:

| Metric | Field | What It Tells You | Target |
| --- | --- | --- | --- |
| Position | avgPos | Where your brand appears in the list of recommendations; 1 = first mentioned. | Below 3.0 |
| Share of Voice | sovPct | Your brand's share of total mentions vs. competitors; measures competitive standing. | Above 20% |
| Sentiment | sentiment | How positively the AI describes your brand (0–1 scale). | Above 0.7 |
| Coverage | coveragePct | Percentage of responses that mention your brand at all; measures reach. | Above 50% |

Track these weekly across providers. If your ChatGPT coverage is 80% but Claude coverage is 20%, you know exactly where to focus your content optimization efforts.
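To surface that kind of gap automatically, you can pick the weakest provider out of coverageByProvider in a completed result (the data here is taken from the example response above):

```python
def weakest_provider(coverage_by_provider: list[dict]) -> dict:
    """Return the provider entry with the lowest coverage percentage."""
    return min(coverage_by_provider, key=lambda item: item["coverage"])

breakdown = [
    {"provider": "ChatGPT", "coverage": 66},
    {"provider": "Claude", "coverage": 33},
    {"provider": "Perplexity", "coverage": 50},
]
print(weakest_provider(breakdown))  # {'provider': 'Claude', 'coverage': 33}
```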

Pricing

Sellm API pricing is based on credits. Each replicate-provider-location combination costs 1 credit. A typical analysis with 3 replicates across 5 providers and 1 location uses 15 credits — less than 1 cent per prompt on most plans.

Paid plans include enough credits for comprehensive weekly monitoring across hundreds of prompts.

Frequently Asked Questions

How long does an async analysis take?

Most analyses complete in 30–90 seconds, depending on the number of providers and replicates. Analyses with more providers or higher replicate counts take longer because each provider query runs sequentially with rate-limit-aware scheduling. Use webhooks instead of polling if you want results the moment they are ready.

Can I track different brands in the same project?

Each project is configured with a single target brand. The API analyzes mentions relative to that brand and its configured competitors. To track multiple brands, create separate projects — each with its own API key.

What providers are supported?

The API currently supports chatgpt, claude, perplexity, gemini, and grok. You can select any subset of providers per analysis request. Not all provider-country combinations are available — the API returns a 400 error for unsupported pairs.

Is there a rate limit on the API?

Read endpoints (GET) have generous rate limits suitable for dashboard polling. Async analysis submissions are limited by your plan's credit balance. Rate limit headers (X-RateLimit-Remaining, X-RateLimit-Reset) are included in every response so you can adapt your polling frequency.
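A simple way to respect those headers is to slow down when the remaining budget runs low. The helper below is a sketch: it assumes X-RateLimit-Reset is an epoch timestamp in seconds, which you should verify against the API documentation.

```python
def backoff_seconds(headers: dict, min_remaining: int = 5, now: float = 0.0) -> float:
    """How long to sleep before the next request, given rate-limit headers.

    Assumes X-RateLimit-Reset holds an epoch timestamp in seconds (verify
    against the API docs). Returns 0 while plenty of budget remains.
    """
    remaining = int(headers.get("X-RateLimit-Remaining", min_remaining + 1))
    if remaining > min_remaining:
        return 0.0
    reset_at = float(headers.get("X-RateLimit-Reset", now))
    return max(0.0, reset_at - now)
```

After each poll, call this with resp.headers and time.time(), then time.sleep() the returned value before the next request.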

Start Tracking Your AI Visibility

Sign up, generate an API key, and run your first analysis in under 5 minutes.

Get Started →