How to Check If ChatGPT Recommends Your Product (API Method)

Every day, millions of people ask ChatGPT, Claude, and Perplexity questions like "what's the best project management tool?" or "which CRM should I use?" If your product isn't in those answers, you're losing customers to competitors you can't even see. Here's how to check — programmatically — whether AI engines recommend your product.

The Problem: You Don't Know What AI Says About You

Unlike Google, where you can check Search Console for impressions and clicks, AI search engines give you zero visibility into what they tell users. You can manually type a prompt into ChatGPT and check, but that doesn't scale. The response changes based on timing, location, and even the model version. You need an automated, repeatable way to monitor this.

Manual Checking vs API Monitoring

The manual approach — typing prompts into ChatGPT one by one — has serious limitations:

  1. It doesn't scale beyond a handful of prompts
  2. Responses vary by timing, location, and model version, so one-off checks aren't repeatable
  3. You get unstructured prose instead of comparable data
  4. There's no way to track trends over time or compare providers side by side

The API approach solves all of these: submit prompts programmatically, get structured results, track trends, compare providers.

Step 1: Define Your Buying-Intent Prompts

Start with the questions your potential customers actually ask AI engines. These fall into categories:

  1. Best-of lists: "best project management tools for startups"
  2. Direct recommendations: "which project management software should I use"
  3. Comparisons: "compare monday.com vs asana vs clickup"
  4. Use-case specific: "project management tool for remote teams"

Aim for 20–50 prompts that cover the buying journey. More prompts = more statistical confidence.
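One way to keep such a prompt set organized is to generate it from templates per category. A minimal sketch — the category names, templates, and audience list here are illustrative for this guide, not fields in any API:

```python
# Illustrative prompt categories; the names and templates below are
# examples for this guide, not part of any API schema.
PRODUCT_CATEGORY = "project management tool"
AUDIENCES = ["startups", "remote teams", "agencies"]

prompt_groups = {
    "best-of": [f"best {PRODUCT_CATEGORY}s for {a}" for a in AUDIENCES],
    "direct recommendation": [f"which {PRODUCT_CATEGORY} should I use"],
    "comparison": ["compare monday.com vs asana vs clickup"],
    "alternatives": ["alternatives to asana"],
}

# Flatten into the list you will submit to the API
prompts = [p for group in prompt_groups.values() for p in group]
print(len(prompts))  # 6 prompts across 4 categories
```

Templating makes it easy to regenerate the set when you add a new audience or competitor without editing prompts one by one.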

Step 2: Submit Your Prompts via the Sellm API

Use the async analysis endpoint to query multiple AI engines at once:

import requests
import time

API_KEY = "your_sellm_api_key"
BASE_URL = "https://sellm.io/api/v1"

prompts = [
    "best project management tools for startups",
    "which project management software should I use",
    "top project management apps 2026",
    "compare monday.com vs asana vs clickup",
    "project management tool for remote teams",
]

results = []

for prompt in prompts:
    # Submit analysis across all providers
    resp = requests.post(
        f"{BASE_URL}/async-analysis",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "prompt": prompt,
            "providers": ["chatgpt", "claude", "perplexity", "gemini", "grok"],
            "country": "US",
            "replicates": 3  # Run 3 times for statistical confidence
        }
    )
    resp.raise_for_status()  # surface auth or quota errors immediately
    analysis_id = resp.json()["data"]["analysisId"]

    # Poll until complete, giving up after ~4 minutes
    for _ in range(30):
        status_resp = requests.get(
            f"{BASE_URL}/async-analysis/{analysis_id}",
            headers={"Authorization": f"Bearer {API_KEY}"}
        )
        data = status_resp.json()["data"]
        if data["status"] in ("succeeded", "failed"):
            results.append(data)
            break
        time.sleep(8)
    else:
        print(f"✗ timed out: {prompt}")
        continue

    print(f"✓ {prompt}")

Step 3: Check If Your Brand Is Mentioned

Each result contains a brandsMentioned array and a position field. If your brand appears, position tells you where (1 = first mentioned, 2 = second, etc.).

YOUR_BRAND = "YourProduct"

print(f"Visibility report for {YOUR_BRAND}:")
for result in results:
    summary = result["summary"]
    print(f"Prompt: {result['prompt']}")
    print(f"  Mentioned: {'Yes' if summary['coveragePct'] > 0 else 'No'}")
    print(f"  Position: {summary['avgPos'] or 'Not mentioned'}")
    print(f"  Share of Voice: {summary['sovPct']}%")
    print(f"  Sentiment: {summary['sentiment'] or 'N/A'}")
    print()

Step 4: Interpret the Results

The key metrics to focus on:

Metric      | What It Means                          | Good Target
coveragePct | % of replicates where you appeared     | >50%
avgPos      | Average position when mentioned        | 1–3
sovPct      | Your mentions vs total brand mentions  | >10%
sentiment   | Overall sentiment score (0–1)          | >0.7
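These targets can be checked programmatically. A small sketch, assuming the same summary field names used in Step 3 (avgPos and sentiment may be null when the brand was never mentioned):

```python
def meets_targets(summary):
    """Check a result's summary dict against the targets in the table above.

    Returns a dict of metric -> bool. avgPos and sentiment may be None
    when the brand was never mentioned, which counts as a miss.
    """
    return {
        "coverage": summary["coveragePct"] > 50,
        "position": summary["avgPos"] is not None and summary["avgPos"] <= 3,
        "sov": summary["sovPct"] > 10,
        "sentiment": summary["sentiment"] is not None and summary["sentiment"] > 0.7,
    }

# Example: a result that clears every target
checks = meets_targets({"coveragePct": 67, "avgPos": 2.1, "sovPct": 14, "sentiment": 0.8})
print(all(checks.values()))  # True
```

Running this over all results gives you a quick pass/fail scoreboard per prompt.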

Step 5: Analyze by Provider

Different AI engines may have very different opinions about your product. Use providerBreakdown to see which engines recommend you and which don't:

for result in results:
    breakdown = result.get("providerBreakdown", {})
    for entry in breakdown.get("coverageByProvider", []):
        provider = entry["provider"]
        coverage = entry["coverage"]
        print(f"  {provider}: {'Recommends you' if coverage > 0 else 'Does NOT recommend you'}")

What to Do If You're NOT Recommended

If AI engines don't mention your product, here's your action plan:

  1. Check what they recommend instead — look at brandsMentioned and topCompetitors to see who's winning
  2. Analyze their cited sources — check citedUrls to understand what content AI engines trust
  3. Improve your online authority — get mentioned on review sites, comparison articles, and industry publications
  4. Create answer-style content — publish pages that directly answer the prompts with structured, factual content
  5. Build entity signals — consistent brand naming, schema markup, Wikipedia presence
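Steps 1 and 2 of this plan can be automated. A sketch that tallies competitor names and cited URLs across your results, assuming brandsMentioned and citedUrls live at the top level of each result as described above (the exact response nesting may differ):

```python
from collections import Counter

def gather_intel(results, your_brand):
    """Tally which brands AI engines mention instead of yours,
    and which URLs they cite, across all results."""
    competitors = Counter()
    sources = Counter()
    for result in results:
        for brand in result.get("brandsMentioned", []):
            if brand.lower() != your_brand.lower():
                competitors[brand] += 1
        for url in result.get("citedUrls", []):
            sources[url] += 1
    return competitors.most_common(10), sources.most_common(10)

# Example with mocked results:
mock = [
    {"brandsMentioned": ["Asana", "ClickUp"], "citedUrls": ["https://example.com/review"]},
    {"brandsMentioned": ["Asana", "YourProduct"], "citedUrls": []},
]
top_comp, top_src = gather_intel(mock, "YourProduct")
print(top_comp[0])  # ('Asana', 2)
```

The cited-URL tally is the more actionable list: those are the pages AI engines already trust, and the ones worth getting your product mentioned on.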

What to Do If a Competitor IS Recommended

Use the promptBreakdown data to understand exactly how competitors rank:

for result in results:
    for pb in result.get("promptBreakdown", []):
        competitors = pb.get("topCompetitors", [])
        competitor_hits = pb.get("details", {}).get("competitorHits", {})
        print(f"Prompt: {pb['prompt']}")
        print(f"  Top competitors: {competitors}")
        print(f"  Competitor hits: {competitor_hits}")

Study what makes them visible: their content, citations, reviews, and authority signals. Then build a strategy to match or exceed those signals.

Set Up Ongoing Monitoring

A single check is a snapshot. To track progress, run these prompts weekly:

# Save results to track trends
import json
import os
from datetime import datetime

mentioned = [r for r in results if r["summary"]["coveragePct"] > 0]

weekly_report = {
    "date": datetime.now().isoformat(),
    "prompts_checked": len(results),
    "mentioned_in": len(mentioned),
    # Average position only over prompts where the brand appeared;
    # counting misses as position 0 would wrongly flatter the average
    "avg_position": (sum(r["summary"]["avgPos"] for r in mentioned) / len(mentioned)
                     if mentioned else None),
    "avg_sov": sum(r["summary"]["sovPct"] for r in results) / len(results),
}

os.makedirs("reports", exist_ok=True)
with open(f"reports/ai-visibility-{datetime.now().strftime('%Y-%m-%d')}.json", "w") as f:
    json.dump(weekly_report, f, indent=2)
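Once a few weekly files accumulate, trend analysis is a matter of loading and diffing them. A minimal sketch, assuming the report schema written above:

```python
import glob
import json

def load_trend(report_dir="reports"):
    """Load all weekly report files, oldest first, as (date, mentioned_in, avg_sov)."""
    reports = []
    for path in sorted(glob.glob(f"{report_dir}/ai-visibility-*.json")):
        with open(path) as f:
            reports.append(json.load(f))
    return [(r["date"][:10], r["mentioned_in"], r["avg_sov"]) for r in reports]

# Print week-over-week change in share of voice
trend = load_trend()
for (d1, _, sov1), (d2, _, sov2) in zip(trend, trend[1:]):
    print(f"{d1} -> {d2}: share of voice {sov2 - sov1:+.1f} pp")
```

Sorting by filename works here because the date-stamped names sort chronologically.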

Pricing

Checking 50 prompts across 5 providers with 3 replicates uses 750 credits (one per prompt-provider-replicate combination), which works out to less than $0.01 per prompt analysis.

Frequently Asked Questions

How accurate is this compared to manually checking ChatGPT?

The API queries the actual AI engines in real time, so you get the same responses a real user would. Running multiple replicates gives you statistical confidence that manual spot-checks can't provide.
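To get a rough sense of how much confidence replicates buy you, treat each replicate as an independent yes/no trial and compute a normal-approximation interval for the true mention rate. This is plain statistics, not an API feature (for very small sample sizes a Wilson interval would be more accurate):

```python
import math

def coverage_interval(mentions, replicates, z=1.96):
    """95% normal-approximation confidence interval for the true mention rate,
    treating each replicate as an independent Bernoulli trial."""
    p = mentions / replicates
    se = math.sqrt(p * (1 - p) / replicates)
    return max(0.0, p - z * se), min(1.0, p + z * se)

# Mentioned in 9 of 15 total runs (e.g. 5 prompts x 3 replicates):
lo, hi = coverage_interval(9, 15)
print(f"{lo:.2f}-{hi:.2f}")  # 0.35-0.85
```

The interval narrows as total runs grow, which is why more prompts and replicates give you a firmer read than a single manual spot-check.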

Does the response change over time?

Yes. AI engines update their models and search indexes regularly. That's why ongoing monitoring matters — a brand that's recommended today might not be tomorrow, and vice versa.

Can I check multiple products or brands?

Yes. Create separate projects in Sellm for each brand, each with its own API key. You can monitor as many brands as your plan supports.

What if my brand is mentioned but with negative sentiment?

The API provides 4-dimensional sentiment: trustworthiness, authority, recommendation_strength, and fit_for_query_intent. Low scores in specific dimensions tell you exactly what to improve — for example, low trustworthiness might mean you need more third-party reviews and citations.