GUIDE

Published on March 15, 2026

How to Monitor Google AI Overviews & AI Mode Programmatically via API

What you'll learn: How to track your brand's visibility in Google AI Overviews and AI Mode using the Sellm API's google_aio provider. Compare AI Overview citations against ChatGPT, Perplexity, and Claude results, detect when competitors appear in AI Overviews, and set up automated monitoring.

Google AI Overviews and AI Mode are fundamentally changing how users interact with search results. Instead of scanning ten blue links, users increasingly see AI-generated summaries at the top of the page that synthesize information from multiple sources. If your brand is cited in these overviews, you gain visibility before any organic result. If you're not, you're invisible in the most prominent part of the search results page.

This guide shows you how to programmatically monitor your brand's presence in Google AI Overviews using the Sellm API, compare those results with other AI search platforms, and build automated alerts for changes.

What Are Google AI Overviews and AI Mode?

Google AI Overviews (formerly Search Generative Experience or SGE) are AI-generated answer panels that appear at the top of Google search results for many queries. When a user searches for something like "best project management software for remote teams," Google may display a multi-paragraph AI-generated summary that cites specific brands, products, and sources before showing any traditional organic results.

Google AI Mode is the fully conversational AI search experience within Google Search. Users can ask follow-up questions, get detailed comparisons, and have a back-and-forth dialogue with Google's AI. It behaves more like ChatGPT or Perplexity but is integrated directly into Google Search.

How the Sellm API Tracks AI Overviews

The Sellm API includes a dedicated google_aio provider that monitors Google AI Overviews and AI Mode responses. It works the same way as the ChatGPT, Perplexity, or Claude providers: you send a prompt, the API queries Google's AI features, and returns structured data about which brands were mentioned, in what order, and with what sentiment.

Here's what makes the google_aio provider different from traditional SERP tracking:

Feature                        Traditional SERP Tracker     Sellm google_aio
----------------------------------------------------------------------------
Tracks organic rankings        Yes                          No (AI-generated content only)
Tracks AI Overview citations   No                           Yes
Brand mention detection        No                           Yes, with position and sentiment
Competitor analysis            Keyword-level only           Brand-level across AI responses
Cross-platform comparison      Google only                  Google AI + ChatGPT + Perplexity + Claude + Gemini + Grok + Copilot
Sentiment analysis             No                           Yes (0-10 scale with dimensions)

Getting Started: Your First AI Overview Query

1. Get your API key

If you don't have a Sellm account yet, sign up. Then go to Project Settings > API Keys and generate a key.

2. Query a single prompt with google_aio

The simplest way to see what Google's AI Overview says about your brand is a single API call:

curl -X POST https://sellm.io/api/v1/async-analysis \
  -H "Authorization: Bearer sellm_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "best accounting software for freelancers",
    "replicates": 3,
    "providers": ["google_aio"],
    "locations": ["US"]
  }'

The replicates parameter tells the API to query Google AI Overviews multiple times for the same prompt. This matters because AI-generated responses are non-deterministic: the same query can produce different citations on different runs. Running 3 replicates smooths out that run-to-run variance and gives you a far more reliable read than a single query.
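Because replicates disagree with each other, it helps to aggregate them before drawing conclusions. The helper below is an illustrative sketch (not part of any Sellm client library) that summarizes replicate entries shaped like the results[] objects shown later in this guide:

```python
def aggregate_replicates(results):
    """Summarize replicate runs of one prompt into stable metrics."""
    hits = [r for r in results if r.get("mentioned")]
    rate = round(len(hits) / len(results), 2) if results else 0.0
    positions = [r["position"] for r in hits if r.get("position") is not None]
    avg_pos = round(sum(positions) / len(positions), 2) if positions else None
    return {"mention_rate": rate, "avg_position": avg_pos}

# Three replicates of the same prompt: cited twice (positions 3 and 2), missed once.
replicates = [
    {"mentioned": True, "position": 3},
    {"mentioned": True, "position": 2},
    {"mentioned": False, "position": None},
]
print(aggregate_replicates(replicates))  # → {'mention_rate': 0.67, 'avg_position': 2.5}
```

A mention rate of 0.67 across replicates is a very different signal from 1.0, even though both would show "mentioned" in a single run.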

3. Check the results

Poll the task status, then fetch results once completed:

curl -s "https://sellm.io/api/v1/async-analysis/{analysisId}" \
  -H "Authorization: Bearer sellm_your_api_key" | python3 -m json.tool

Poll until the status is "succeeded". The response includes everything: summary, providerBreakdown, promptBreakdown, and results[] with structured data about every brand mentioned in the AI Overview:

{
  "data": {
    "status": "succeeded",
    "summary": {
      "sovPct": 19,
      "coveragePct": 67,
      "avgPos": 3.0,
      "sentiment": 0.75
    },
    "results": [
      {
        "provider": "google_aio",
        "prompt": "best accounting software for freelancers",
        "mentioned": true,
        "position": 3,
        "sentiment": 0.75,
        "citedUrls": [...],
        "citedDomains": [...]
      }
    ],
    "promptBreakdown": [
      {
        "prompt": "best accounting software for freelancers",
        "sovPct": 19,
        "avgPos": 3.0,
        "sentiment": 0.75
      }
    ]
  }
}
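Once the status is "succeeded", the structure above is straightforward to work with. The sketch below pulls out the headline metrics and the cited domains for google_aio results; it assumes the exact response shape shown, and the domain values are placeholders since the original response elides them:

```python
def extract_aio_metrics(response):
    """Pull headline metrics and google_aio citations out of a response body."""
    data = response["data"]
    summary = data["summary"]
    aio = [r for r in data.get("results", []) if r.get("provider") == "google_aio"]
    return {
        "sov_pct": summary["sovPct"],
        "coverage_pct": summary["coveragePct"],
        "avg_pos": summary["avgPos"],
        "aio_mentions": sum(1 for r in aio if r.get("mentioned")),
        "cited_domains": sorted({d for r in aio for d in r.get("citedDomains", [])}),
    }

# Trimmed-down version of the response shown above, with placeholder domains.
response = {
    "data": {
        "status": "succeeded",
        "summary": {"sovPct": 19, "coveragePct": 67, "avgPos": 3.0, "sentiment": 0.75},
        "results": [
            {
                "provider": "google_aio",
                "prompt": "best accounting software for freelancers",
                "mentioned": True,
                "position": 3,
                "citedDomains": ["example-reviews.com", "example-blog.com"],
            }
        ],
    }
}
print(extract_aio_metrics(response))
```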

Comparing AI Overview Results vs. Other AI Platforms

One of the most valuable things you can do is compare how Google AI Overviews cite your brand versus how ChatGPT, Perplexity, and Claude recommend it. The same prompt often produces very different brand citations across platforms.

Query all providers at once

curl -X POST https://sellm.io/api/v1/async-analysis \
  -H "Authorization: Bearer sellm_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "best accounting software for freelancers",
    "replicates": 3,
    "providers": ["google_aio", "chatgpt", "perplexity", "claude", "gemini", "grok", "copilot"],
    "locations": ["US"]
  }'

Python script to compare providers

Here's a script that queries a prompt across all providers and compares the results:

import requests
import time
import json
import os

API_KEY = os.environ["SELLM_API_KEY"]
BASE_URL = "https://sellm.io/api/v1"
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

PROVIDERS = ["google_aio", "chatgpt", "perplexity", "claude", "gemini", "grok", "copilot"]


def analyze_prompt(prompt, providers=PROVIDERS, locations=None):
    """Send a prompt to multiple providers and return results."""
    payload = {
        "prompt": prompt,
        "replicates": 3,
        "providers": providers,
        "locations": locations or ["US"],
    }

    resp = requests.post(
        f"{BASE_URL}/async-analysis",
        headers=HEADERS,
        json=payload,
    )
    resp.raise_for_status()
    analysis_id = resp.json()["data"]["analysisId"]
    print(f"Analysis created: {analysis_id}")

    # Poll until succeeded
    for _ in range(40):
        time.sleep(15)
        resp = requests.get(f"{BASE_URL}/async-analysis/{analysis_id}", headers=HEADERS)
        resp.raise_for_status()
        data = resp.json()["data"]
        if data["status"] == "succeeded":
            return data
        if data["status"] == "failed":
            raise RuntimeError(f"Analysis {analysis_id} failed")
        print(f"  Status: {data['status']}")

    raise TimeoutError("Analysis did not complete in time")


def compare_providers(data):
    """Print a comparison table of brand visibility across providers."""
    summary = data["summary"]
    print(f"\nOverall: SOV {summary['sovPct']}%, Avg Position {summary['avgPos']}, Sentiment {summary['sentiment']}")

    breakdown = data.get("providerBreakdown", {})
    print(f"\n{'Provider':<15} {'SOV %':<8}")
    print("-" * 30)
    for entry in breakdown.get("sovByProvider", []):
        print(f"{entry['provider']:<15} {entry['sov']:<8}")


if __name__ == "__main__":
    prompt = "best accounting software for freelancers"
    print(f"Analyzing: '{prompt}'")
    data = analyze_prompt(prompt)
    compare_providers(data)

Example output:

Overall: SOV 19%, Avg Position 3.0, Sentiment 0.75

Provider        SOV %
------------------------------
Google AIO      22
ChatGPT         18
Perplexity      28
Claude          0
Gemini          12
Grok            16
Copilot         20

This comparison reveals critical insights. In the example above, the brand holds a 22% share of voice in Google AI Overviews but is completely absent from Claude's responses. That's a content gap worth investigating: which sources does Claude rely on for your category, and how can you earn a presence in them?
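You can turn this check into an automated gap detector. The snippet below is a hypothetical helper that operates on sovByProvider entries like the ones in the example output above, flagging any provider where your share of voice falls at or below a threshold:

```python
def find_provider_gaps(sov_by_provider, threshold=5):
    """Return providers where SOV is at or below the threshold (in percent)."""
    return [e["provider"] for e in sov_by_provider if e["sov"] <= threshold]

# Values taken from the example output above.
sov_by_provider = [
    {"provider": "Google AIO", "sov": 22},
    {"provider": "ChatGPT", "sov": 18},
    {"provider": "Perplexity", "sov": 28},
    {"provider": "Claude", "sov": 0},
    {"provider": "Gemini", "sov": 12},
]
print(find_provider_gaps(sov_by_provider))  # → ['Claude']
```

Raising the threshold (say, to 15) also surfaces weak-but-present providers worth monitoring.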

Detecting When Your Brand Appears in AI Overviews

Not every Google search query triggers an AI Overview, and your brand won't appear in all of the ones that do. Monitoring which prompts cite your brand (and which don't) helps you prioritize content optimization efforts.
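A quick way to prioritize is to list the prompts where no replicate ever cited your brand. This sketch assumes results[] entries carry the prompt and mentioned fields shown earlier; it groups by prompt so replicates of the same query count as one:

```python
from collections import defaultdict

def prompts_missing_brand(results):
    """Return prompts where no replicate mentioned the brand."""
    seen = defaultdict(list)
    for r in results:
        seen[r["prompt"]].append(bool(r.get("mentioned")))
    return sorted(p for p, flags in seen.items() if not any(flags))

# Two prompts, two replicates each: the brand shows up for one prompt only.
results = [
    {"prompt": "best accounting software for freelancers", "mentioned": True},
    {"prompt": "best accounting software for freelancers", "mentioned": False},
    {"prompt": "free invoicing tools", "mentioned": False},
    {"prompt": "free invoicing tools", "mentioned": False},
]
print(prompts_missing_brand(results))  # → ['free invoicing tools']
```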

Coverage tracking script

import requests
import os

API_KEY = os.environ["SELLM_API_KEY"]
BASE_URL = "https://sellm.io/api/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}


def get_analysis_data(analysis_id):
    """Get a completed async analysis by ID."""
    resp = requests.get(
        f"{BASE_URL}/async-analysis/{analysis_id}",
        headers=HEADERS,
    )
    resp.raise_for_status()
    return resp.json()["data"]


def print_coverage_report(analysis_id):
    """Print a coverage report for AI Overviews from an async analysis."""
    data = get_analysis_data(analysis_id)

    if data["status"] != "succeeded":
        print(f"Analysis status: {data['status']} (not yet succeeded)")
        return

    summary = data["summary"]
    print("=== Google AI Overview Summary ===")
    print(f"  SOV:       {summary['sovPct']}%")
    print(f"  Coverage:  {summary['coveragePct']}%")
    print(f"  Avg Pos:   {summary['avgPos']}")
    print(f"  Sentiment: {summary['sentiment']}")

    # Check provider breakdown for google_aio specifics
    breakdown = data.get("providerBreakdown", {})
    for entry in breakdown.get("sovByProvider", []):
        if entry["provider"] == "Google AIO":
            print(f"\n  Google AIO SOV: {entry['sov']}%")

    # Check prompt-level results
    prompt_breakdown = data.get("promptBreakdown", [])
    for pb in prompt_breakdown:
        print(f"\n  Prompt: {pb['prompt'][:80]}")
        print(f"    SOV: {pb['sovPct']}%, Avg Pos: {pb['avgPos']}, Sentiment: {pb['sentiment']}")


if __name__ == "__main__":
    import sys
    analysis_id = sys.argv[1] if len(sys.argv) > 1 else input("Enter analysis ID: ")
    print_coverage_report(analysis_id)

Monitoring Competitor Citations in AI Overviews

Understanding which competitors Google's AI features cite is just as important as tracking your own brand. The Sellm API returns all brands mentioned in each response, not just yours.

Competitor tracking with Python

import requests
import json
import os
from collections import defaultdict

API_KEY = os.environ["SELLM_API_KEY"]
BASE_URL = "https://sellm.io/api/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

COMPETITORS = ["HubSpot", "Salesforce", "Pipedrive", "Zoho CRM", "Freshsales"]


def get_competitor_report(analysis_id):
    """Analyze competitor presence in AI Overviews."""
    resp = requests.get(
        f"{BASE_URL}/async-analysis/{analysis_id}",
        headers=HEADERS,
    )
    resp.raise_for_status()
    data = resp.json()["data"]
    results = data.get("results", [])

    # Filter to google_aio results only
    aio_results = [r for r in results if r["provider"] == "google_aio"]

    competitor_stats = defaultdict(lambda: {
        "mentions": 0,
        "total_position": 0,
        "prompts": [],
    })

    for result in aio_results:
        brands = result.get("brands", [])
        for brand in brands:
            name = brand["name"]
            if name in COMPETITORS:
                stats = competitor_stats[name]
                stats["mentions"] += 1
                stats["total_position"] += brand.get("position", 0)
                stats["prompts"].append(result["prompt"][:60])

    print("=== Competitor Presence in Google AI Overviews ===\n")
    print(f"{'Competitor':<20} {'Mentions':<10} {'Avg Position':<15} {'Coverage':<10}")
    print("-" * 60)

    total_prompts = len(set(r["prompt"] for r in aio_results))
    for name in COMPETITORS:
        stats = competitor_stats[name]
        mentions = stats["mentions"]
        avg_pos = stats["total_position"] / mentions if mentions > 0 else 0
        # Dedupe replicate runs so coverage counts unique prompts, not raw mentions
        prompts_covered = len(set(stats["prompts"]))
        coverage = (prompts_covered / total_prompts * 100) if total_prompts > 0 else 0
        print(f"{name:<20} {mentions:<10} {avg_pos:<15.1f} {coverage:.1f}%")


if __name__ == "__main__":
    import sys
    analysis_id = sys.argv[1] if len(sys.argv) > 1 else input("Enter analysis ID: ")
    get_competitor_report(analysis_id)

Example output:

=== Competitor Presence in Google AI Overviews ===

Competitor           Mentions   Avg Position    Coverage
------------------------------------------------------------
HubSpot              18         1.8             90.0%
Salesforce           15         2.1             75.0%
Pipedrive            12         3.4             60.0%
Zoho CRM             8          4.2             40.0%
Freshsales           5          5.1             25.0%

This data tells you exactly who you're competing against in AI Overviews. If HubSpot appears in 90% of your target prompts at an average position of 1.8, they're dominating the AI Overview space for your category. Knowing this helps you prioritize which prompts and content to optimize first.

Setting Up Automated Weekly Monitoring

For ongoing tracking, set up your Sellm project with prompts that include google_aio as a provider. The platform's built-in scheduler will automatically run these queries weekly and store the results.

Run analysis via the API

curl -X POST https://sellm.io/api/v1/async-analysis \
  -H "Authorization: Bearer sellm_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "best accounting software for freelancers",
    "providers": ["google_aio", "chatgpt", "perplexity", "claude", "gemini", "grok", "copilot"],
    "locations": ["US"],
    "replicates": 3
  }'

We recommend including google_aio alongside other providers for every prompt. This gives you a complete picture of your brand's visibility across all AI search surfaces, not just Google.
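If you monitor many prompts, the request bodies can be generated programmatically. The helper below is a sketch that mirrors the one-prompt-per-request payload shape used throughout this guide; the second prompt string is illustrative:

```python
ALL_PROVIDERS = ["google_aio", "chatgpt", "perplexity", "claude", "gemini", "grok", "copilot"]

def build_payloads(prompts, providers=ALL_PROVIDERS, locations=("US",), replicates=3):
    """Build one async-analysis request body per prompt."""
    return [
        {
            "prompt": prompt,
            "replicates": replicates,
            "providers": list(providers),
            "locations": list(locations),
        }
        for prompt in prompts
    ]

payloads = build_payloads([
    "best accounting software for freelancers",
    "best invoicing app for small businesses",
])
print(len(payloads))  # → 2
```

Each payload can then be POSTed to /async-analysis exactly as in the curl example above.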

Recommended prompt categories for AI Overview monitoring

Google AI Overviews tend to appear most frequently for informational and commercial-investigation queries ("best X for Y", comparisons, how-tos), so prioritize those prompt types when building your monitoring list.
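As a quick filter when assembling that list, a keyword heuristic can flag prompts with the phrasing most likely to trigger an overview. The keyword list below is an illustrative assumption, not anything Google publishes:

```python
# Rough heuristic hints for informational/commercial-investigation phrasing (assumed, not official).
TRIGGER_HINTS = ("best ", "top ", " vs ", "how to ", "alternatives to ", "compare ")

def likely_triggers_aio(prompt):
    """Guess whether a prompt's phrasing is AI Overview-prone."""
    padded = f" {prompt.lower()} "
    return any(hint in padded for hint in TRIGGER_HINTS)

print(likely_triggers_aio("best CRM for startups"))  # → True
print(likely_triggers_aio("facebook login"))         # → False
```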

Weekly diff alerts

Combine the Sellm trends endpoint with a simple diff to detect changes in your AI Overview visibility:

import requests
import os

API_KEY = os.environ["SELLM_API_KEY"]
BASE_URL = "https://sellm.io/api/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}
SLACK_WEBHOOK = os.environ.get("SLACK_WEBHOOK_URL")


def get_latest_two_runs():
    """Fetch the two most recent completed runs."""
    resp = requests.get(
        f"{BASE_URL}/analysis/trends?limit=2",
        headers=HEADERS,
    )
    resp.raise_for_status()
    return resp.json()["data"]


def check_aio_changes():
    """Compare AI Overview metrics between last two runs."""
    runs = get_latest_two_runs()
    if len(runs) < 2:
        print("Not enough runs to compare.")
        return

    current = runs[0]
    previous = runs[1]

    # Extract google_aio provider metrics
    cur_aio = next(
        (p for p in current.get("providers", []) if p["provider"] == "google_aio"),
        None,
    )
    prev_aio = next(
        (p for p in previous.get("providers", []) if p["provider"] == "google_aio"),
        None,
    )

    if not cur_aio or not prev_aio:
        print("No google_aio data available.")
        return

    alerts = []

    sov_delta = cur_aio["sov"] - prev_aio["sov"]
    if abs(sov_delta) > 5:
        direction = "increased" if sov_delta > 0 else "decreased"
        alerts.append(
            f"AI Overview SOV {direction} by {abs(sov_delta):.1f}pp "
            f"({prev_aio['sov']}% -> {cur_aio['sov']}%)"
        )

    cov_delta = cur_aio["coverage"] - prev_aio["coverage"]
    if abs(cov_delta) > 10:
        direction = "increased" if cov_delta > 0 else "decreased"
        alerts.append(
            f"AI Overview coverage {direction} by {abs(cov_delta):.1f}pp "
            f"({prev_aio['coverage']}% -> {cur_aio['coverage']}%)"
        )

    if alerts:
        message = "Google AI Overview Alert:\n" + "\n".join(f"- {a}" for a in alerts)
        print(message)

        if SLACK_WEBHOOK:
            requests.post(SLACK_WEBHOOK, json={
                "blocks": [
                    {"type": "header", "text": {"type": "plain_text", "text": "Google AI Overview Alert"}},
                    {"type": "section", "text": {"type": "mrkdwn", "text": "\n".join(f"- {a}" for a in alerts)}},
                ]
            })
    else:
        print("No significant changes in AI Overview metrics.")


if __name__ == "__main__":
    check_aio_changes()

Node.js: Full AI Overview Monitoring Script

Here's a complete Node.js implementation for monitoring AI Overviews:

const API_KEY = process.env.SELLM_API_KEY;
const BASE = "https://sellm.io/api/v1";
const headers = { Authorization: `Bearer ${API_KEY}` };

async function analyzeWithAIO(prompt, locations = ["US"]) {
  const res = await fetch(`${BASE}/async-analysis`, {
    method: "POST",
    headers: { ...headers, "Content-Type": "application/json" },
    body: JSON.stringify({
      prompt,
      replicates: 3,
      providers: ["google_aio", "chatgpt", "perplexity"],
      locations,
    }),
  });
  const { data } = await res.json();
  return data.analysisId;
}

async function pollResults(analysisId) {
  for (let i = 0; i < 40; i++) {
    await new Promise((r) => setTimeout(r, 15000));
    const res = await fetch(`${BASE}/async-analysis/${analysisId}`, { headers });
    const { data } = await res.json();
    if (data.status === "succeeded") return data;
    if (data.status === "failed") throw new Error("Analysis failed");
  }
  throw new Error("Timeout waiting for results");
}

function compareAIOvsOthers(data) {
  const summary = data.summary;
  console.log("\n=== AI Overview vs Other Providers ===");
  console.log(`Overall: SOV ${summary.sovPct}%, Avg Position ${summary.avgPos}`);

  const breakdown = data.providerBreakdown || {};
  for (const entry of breakdown.sovByProvider || []) {
    console.log(`  ${entry.provider}: SOV ${entry.sov}%`);
  }
}

async function main() {
  const prompt = "best accounting software for freelancers";
  console.log(`Analyzing: "${prompt}"`);

  const analysisId = await analyzeWithAIO(prompt);
  console.log(`Analysis: ${analysisId}`);

  const data = await pollResults(analysisId);
  compareAIOvsOthers(data);
}

main().catch(console.error);

Key Insights: AI Overviews vs. Traditional AI Search

After monitoring hundreds of brands across both Google AI Overviews and standalone AI search platforms, two patterns stand out: the same prompt often earns very different citations on each platform, and AI Overview citations tend to shift more gradually than those of standalone assistants like ChatGPT or Claude.

Pricing

The google_aio provider is available on all Sellm plans. Each prompt analysis costs less than 1 cent.

Each prompt configured with google_aio counts as one prompt toward your plan limit, the same as any other provider. There's no additional cost for AI Overview tracking.
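As a back-of-envelope check, the sub-1-cent figure above gives an upper bound on monthly spend. The $0.01 ceiling and the 4-weeks-per-month approximation are illustrative assumptions, not official pricing:

```python
def max_monthly_cost_usd(num_prompts, runs_per_week=1, cost_per_analysis_usd=0.01):
    """Rough ceiling on monthly spend: prompts x weekly runs x ~4 weeks x per-analysis cost."""
    return num_prompts * runs_per_week * 4 * cost_per_analysis_usd

# 50 prompts run weekly stays around a couple of dollars a month.
print(max_monthly_cost_usd(50))  # → 2.0
```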

Next Steps

  1. Create a Sellm account and add google_aio to your prompts
  2. Run a comparison across all providers to see where you stand in AI Overviews vs. ChatGPT, Perplexity, and Claude
  3. Identify prompts where your competitors appear in AI Overviews but you don't
  4. Set up weekly monitoring with Slack alerts for changes
  5. Use the data to prioritize content optimization for the queries where AI Overview visibility matters most

The full API reference is at sellm.io/docs/api. For questions about AI Overview monitoring, reach out at sellm.io/contact.

Track Your Brand in Google AI Overviews

See where your brand stands in Google AI Overviews and compare with ChatGPT, Perplexity, Claude, Gemini, Grok, and Copilot.

Get Started

Frequently Asked Questions

What's the difference between google_aio and the gemini provider?

The google_aio provider monitors Google AI Overviews and AI Mode, which are integrated into Google Search results. The gemini provider monitors Google's standalone Gemini chatbot (gemini.google.com). They use different models and produce different results: AI Overviews are heavily influenced by Google's search index, while Gemini behaves more like a general-purpose AI assistant.

Does every Google search trigger an AI Overview?

No. Google selectively shows AI Overviews based on query type, user location, and other factors. Informational and commercial investigation queries ("best X for Y") are most likely to trigger them. Navigational queries ("facebook login") rarely do. The Sellm API queries prompts specifically designed to trigger AI responses.

Can I track AI Overviews in different countries?

Yes. Use the locations parameter when configuring prompts. AI Overview content and citations vary significantly by region. For example, a "best CRM" query in the US may cite different brands than the same query in Germany.
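To quantify that regional variation, you can diff the cited brand sets per location. The helper below is a sketch that assumes you have already collected, per location, the set of brands cited for a prompt; the brand names are illustrative:

```python
def region_exclusive_brands(citations_by_location):
    """For each location, list brands cited there but in no other location."""
    out = {}
    for loc, brands in citations_by_location.items():
        # Union of every other region's citations
        others = set().union(*(b for other, b in citations_by_location.items() if other != loc))
        out[loc] = sorted(set(brands) - others)
    return out

citations = {
    "US": {"QuickBooks", "FreshBooks", "Wave"},
    "DE": {"Lexoffice", "sevDesk", "QuickBooks"},
}
print(region_exclusive_brands(citations))
```

Brands exclusive to one region often point to locale-specific content opportunities.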

How often does Google update AI Overview citations?

AI Overview citations can change at any time as Google updates its models and search index. In practice, changes tend to be more gradual than with ChatGPT or Claude (which shift significantly during model updates). Weekly monitoring is sufficient for most brands; daily monitoring is available on paid plans.