Generative Engine Optimization (GEO) Audit - Deiser Use Case
Today we are going to do a GEO audit of Deiser and identify the gaps in the brand's keyword footprint inside large‑language‑model (LLM) answer sets.
Generative Engine Optimization (GEO) is the practice of optimizing your brand or website for visibility inside answers generated by AI chatbots and large language models (LLMs), such as ChatGPT, rather than in classic web search results.
Deiser is a Madrid‑based Atlassian Platinum Solution Partner and Platinum Marketplace Partner that has specialised in the Atlassian ecosystem since 2007. Beyond consulting and licensing services, Deiser develops marketplace apps such as Projectrak, Budgety, Exporter and Workload that extend Jira for project tracking, financial planning and reporting. The company supports a global customer base, including Spanish IBEX‑35 enterprises, helping them adopt agile practices, optimize ITSM/ESM workflows and accelerate digital‑transformation initiatives.
How GEO Is Different From SEO
Before diving into the audit numbers, let's clarify what makes Generative Engine Optimization (GEO) fundamentally different from classic Search Engine Optimization (SEO).
Traditional SEO is a game of ranking across a page of blue links, usually ten per page. Even if your brand lands at position seven or eight, you can still expect some clicks. Visibility is distributed, and every position counts for something.
GEO, on the other hand, plays by new rules. To show up, your content must first be selected in the LLM's retrieval layer, and then actually cited in the model's final narrative summary. Out of potentially thousands of results, only a handful, sometimes just two or three, make it into that coveted answer set. Everyone else is invisible.
That means the visibility curve in GEO isn't just steep; it is nearly vertical. If your brand drops one or two spots, it can vanish completely from user view. In short, GEO is a true winner-takes-all arena. Brand presence within the LLM's summary is no longer just an advantage; it is now the main battleground for attention.
SEO analysis of Deiser
Organic footprint
Deiser currently ranks for ≈ 3,100 keywords on Google. In the May 2025 snapshot, 296 of those trigger SERP Features (featured snippets, PAAs, knowledge panels), 224 sit in the Top 3 positions, and 331 appear somewhere on page 1. The remaining ~2,200 terms live beyond page one and contribute only single-digit traffic.
About 40% of Deiser's organic traffic comes from just three countries: Spain, Mexico, and the United States.

Country breakdown
Country | Branded Traffic (% of total) | Top Non-Branded Keywords (and share if known)
---|---|---
US | 10% (Deiser) | Differences between CapEx and OpEx (~60%)
Spain | 30% (branded) | What is Agile • What is CapEx • Atlassian
Mexico | 10% (branded) | What is Agile • What are Objective Data • What is CapEx • Atlassian
Keyword exploration roadmap
Based on this initial analysis, our next step is to focus the GEO audit on these three countries and these key keywords. We will examine how Deiser appears for this selection of terms, such as "Projectrak para Jira," "Budgety for Jira," "automatización de flujos de trabajo en Jira" (Jira workflow automation), and "gestión de proyectos en la nube" (cloud project management), across leading large language models like ChatGPT, Claude, and Perplexity. This approach will give us a clear comparison between Deiser's current visibility in Google search and its presence inside AI-generated answers.
GEO audit: Deiser case
Scope and approach
To understand how Deiser appears in AI-generated answers, we ran a three-part GEO audit:
- Brand-Presence Audit: How often is Deiser named directly in answers from major LLMs?
- Cross-LLM Citation Audit: Which URLs are surfaced by each model, and how frequently do they belong to Deiser?
- Geo-Specific Citation Audit (ChatGPT Web): For Spain, Mexico, and the US, how often does ChatGPT cite Deiser when asked our highest-priority questions?
Why two metrics?
Citations are the new equivalent of SEO rankings. Every link in an LLM's answer is a potential entry point to your site, even if the user's question is only loosely related to your business.
Brand mentions show how often the model explicitly names "Deiser" in its response. This is a new kind of KPI, one that's almost invisible in classic SEO, but is rapidly gaining importance as more buyers make decisions inside the chat window.
Brand presence audit - analysis
We used Sellm, a market intelligence tool for LLMs, to run hundreds of prompts across our defined keyword list. Sellm does three key things:
- Generates hundreds of prompts across defined keyword lists.
- Captures the raw answers from each model (GPT-4o, GPT-Search, Claude 3, Perplexity, DeepSeek, etc.).
- Parses for brand mentions and the rank order in which they appear.
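Sellm's internals aren't public, so the snippet below is only a minimal sketch of what that parsing step involves: given the raw answer texts collected for each keyword, it tallies how often each brand appears and the order in which it is first mentioned. The brand list, function names, and demo data are illustrative assumptions, not Sellm's actual API.

```python
# Illustrative sketch of the brand-mention parsing step (not Sellm's actual code).
# Input: raw answer texts already collected per keyword from one or more models.
from collections import defaultdict

BRANDS = ["Atlassian", "Jira", "Confluence", "Deiser", "Appfire"]  # hypothetical brand list

def mention_stats(answers_by_keyword):
    """Per keyword: how often each brand appears (presence %) and its average
    mention rank, i.e. the order in which it first shows up in the answer text."""
    stats = defaultdict(dict)
    for keyword, answers in answers_by_keyword.items():
        presence = defaultdict(int)
        ranks = defaultdict(list)
        for answer in answers:
            text = answer.lower()
            # Brands ordered by where they are first mentioned in this answer.
            found = sorted((text.find(b.lower()), b) for b in BRANDS if b.lower() in text)
            for rank, (_, brand) in enumerate(found, start=1):
                presence[brand] += 1
                ranks[brand].append(rank)
        for brand in presence:
            stats[keyword][brand] = {
                "presence_pct": round(100 * presence[brand] / len(answers), 1),
                "avg_rank": round(sum(ranks[brand]) / len(ranks[brand]), 1),
            }
    return dict(stats)

# Example: two answer variations captured for one prompt.
demo = {
    "atlassian partner solutions in spain": [
        "Atlassian partners in Spain include Deiser, which builds Jira apps.",
        "For Jira and Confluence consulting in Spain, Deiser is a common choice.",
    ]
}
print(mention_stats(demo))
```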
Below is the list of keywords we ran and Deiser's brand positioning for each. The percentages represent how often the brand appears in answers for a given keyword, and the number in brackets is its rank among the brands mentioned.
When we look at the data, it's clear: Deiser only surfaces as a brand in LLM answers when the query is directly about Atlassian or marketplace apps. Finance explainer keywords that drive SEO traffic, like "CapEx vs OpEx", are essentially invisible in LLM results.

Learning 1: Your brand appears only when the question is tightly aligned to your core business offering. Generalist, tangential, or traffic-focused content simply does not surface.
The Two Families of LLMs: Search vs Static
There are two broad families of LLMs, and Deiser's visibility differs sharply between them:
- Search-Capable Models (Examples: GPT-Search, DeepSeek-Search, Perplexity): These models pull fresh results from the live web, so the brands and links they mention can shift from day to day. When the ranking set remains stable, they often recycle similar answer templates, which makes performance somewhat predictable until the next refresh. Deiser performs strongest in this family, appearing in almost every relevant search and usually ranking high among the cited brands.
- Static, Non-Search Models (e.g., GPT-4o, Claude 3, base DeepSeek): These models answer from data frozen at the last training cycle, which may be months old. As a result, visibility is harder and slower to gain, but once a brand is cited, its position tends to stick for a while. Deiser still appears in most of these static models, but not all; for example, the base DeepSeek model currently omits the brand entirely.
One key difference from traditional search is that GEO only considers the first 8 to 10 brands. There is no "second page": if your brand is not cited in the answer set, it effectively disappears.
Learning 2: Ranking in LLMs is a true winner-takes-all scenario. The experience is more volatile, harder to track, and brands outside the top recommendations have no visibility at all (there is no page 2).
Brand presence audit - analysis in depth
Our tool Sellm also helps us break down the response structure for every keyword. For each prompt, we can see which brands are included, their order, and how many answer variations mention them. This gives us a live heat map of the competitive field, allowing us to spot both opportunities and threats in real time.

For example, take the query "Atlassian partner solutions in Spain." In every ChatGPT run, and in nearly every other major LLM, Deiser consistently ranks just behind the core Atlassian brands (Atlassian, Jira, Confluence) and ahead of all regional competitors. Holding that #4 spot across models confirms two things:
- Its authority on Atlassian tooling is clearly recognized by the most influential AI systems.
- The immediate battleground is no longer just about being visible, but about overtaking the official Atlassian products within the top recommendation set.
This kind of insight helps us pinpoint where the Deiser brand already dominates and where there is still room to climb higher in AI-driven search results.
Citation audit - analysis
What about cases where "Deiser" isn't mentioned directly in the answer text? Are those keywords lost causes? Not at all. In GEO, brands can still win visibility through citations, being listed among the sources that LLMs reference.
Cross-LLM audit
Using Sellm's citation heatmap, we inspected every URL surfaced for each test keyword across GPT-Search, Perplexity, Claude, DeepSeek, and ChatGPT. This gives us a line-by-line view of which domains each model trusts, even when the narrative doesn't mention the brand by name.
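Sellm renders this as a heatmap, but the underlying tally is straightforward to reproduce. The sketch below, assuming citation records of the form (model, keyword, cited URL) have already been captured (the record data and function name are illustrative, not Sellm's API), counts how often each domain is cited by each model:

```python
# Minimal sketch of a cross-LLM citation tally (assumed inputs, not Sellm's API).
from collections import Counter
from urllib.parse import urlparse

# Each record: (model, keyword, cited_url) taken from an answer's citation block.
records = [
    ("GPT-Search", "Projectrak para Jira: seguimiento avanzado de proyectos",
     "https://deiser.com/en/projectrak"),                        # illustrative URL
    ("Perplexity", "Projectrak para Jira: seguimiento avanzado de proyectos",
     "https://marketplace.atlassian.com/search?query=projectrak"),
    ("Perplexity", "Projectrak para Jira: seguimiento avanzado de proyectos",
     "https://blog.deiser.com/projectrak-tracking"),              # illustrative URL
]

def citation_matrix(records):
    """Count citations per (model, domain) pair - the raw data behind a citation heatmap."""
    matrix = Counter()
    for model, _keyword, url in records:
        domain = urlparse(url).netloc.removeprefix("www.")
        matrix[(model, domain)] += 1
    return matrix

for (model, domain), count in sorted(citation_matrix(records).items()):
    print(f"{model:12s} {domain:32s} {count}")
```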
Example: "Projectrak para Jira: seguimiento avanzado de proyectos" With the keyword "Projectrak para Jira: seguimiento avanzado de proyectos", Deiser performs exceptionally well in terms of citations, regularly appearing as a referenced source even when not explicitly named in the LLM answer.

Learning 3: Search-enabled LLMs still drive valuable traffic via citations. It's critical to position Deiser's URLs as go-to sources for strategic keywords.
In the next section, we stress-test this idea with a granular, country-level keyword audit. Here's where GEO really separates itself from SEO: Unlike Google, LLMs do not distribute citations evenly across all well-optimized posts. In fact, some articles that rank well in classic SEO may receive zero visibility in LLM answers if they are not tightly connected to your core authority. Instead, LLMs show a strong bias towards content that is directly relevant and authoritative for the user's question. This means that, for many topics, your best SEO performers may not make the cut in GEO at all.
Cross-geographical audit
Because Spain, Mexico, and the United States together supply around 40% of Deiser's organic traffic, and because ChatGPT makes up the bulk of LLM usage, we ran a country-specific citation test on ChatGPT Web.
How we did it:
- We asked ChatGPT to generate the most relevant search questions about Deiser, based on the company's website.
- We combined this list with top-traffic blog articles and filtered it down to 35 queries across five keyword clusters: Atlassian Tools, DevOps & ITSM, Licensing & Planning, Agile & Project Management, and AI & Automation.
- For each keyword, we ran three independent prompts per locale (Spanish for Spain and Mexico, English for the US), then recorded whether a deiser.com or blog.deiser.com URL appeared in the citation block; summing the three countries gives each keyword a score of 0–3 (the Total column below).
Why three prompts?
ChatGPT's retrieval layer shuffles the sources it cites from run to run. By repeating the query three times, we smooth out any one-off volatility and get a more reliable picture of structural patterns.
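To make the scoring concrete, here is a minimal sketch of the aggregation. It assumes the citation URLs for each run have already been collected, and it assumes (as the 0/1 country values and 0–3 totals in the table below suggest) that a country scores 1 for a keyword when at least one of its three runs cites a Deiser domain; the data and helper names are illustrative.

```python
# Minimal sketch of the per-country citation scoring (assumed inputs and scoring rule,
# inferred from the 0/1 country values and the 0-3 totals in the table below).
from urllib.parse import urlparse

DEISER_DOMAINS = {"deiser.com", "blog.deiser.com"}

def cites_deiser(cited_urls):
    """True if any citation URL in a single run belongs to a Deiser domain."""
    domains = {urlparse(u).netloc.removeprefix("www.") for u in cited_urls}
    return bool(domains & DEISER_DOMAINS)

def keyword_score(runs_by_country):
    """Score each country 1 if any of its three runs cites Deiser, then total them (0-3)."""
    per_country = {
        country: int(any(cites_deiser(run) for run in runs))
        for country, runs in runs_by_country.items()
    }
    per_country["Total"] = sum(per_country.values())
    return per_country

# Illustrative data: three ChatGPT Web runs per country for one keyword.
demo = {
    "Spain": [["https://blog.deiser.com/projectrak-tracking"], [], []],
    "Mexico": [[], ["https://deiser.com/en/projectrak"], []],
    "USA": [["https://www.atlassian.com/software/jira"], [], []],
}
print(keyword_score(demo))  # -> {'Spain': 1, 'Mexico': 1, 'USA': 0, 'Total': 2}
```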
Keyword | English translation | Mexico | Spain | USA | Total |
---|---|---|---|---|---|
Atlassian Tools (Jira, Confluence, Forge, etc.) | | | | | 16 |
automatización de flujos de trabajo en Jira | automation of workflows in Jira | 0 | 0 | 0 | 0 |
automatización de tareas en Jira para optimizar el trabajo diario | automating tasks in Jira to optimize daily work | 0 | 0 | 0 | 0 |
cómo mejorar la colaboración en equipos con Confluence | how to improve team collaboration with Confluence | 0 | 0 | 0 | 0 |
desarrollo de aplicaciones en la nube con Atlassian Forge | developing cloud apps with Atlassian Forge | 0 | 0 | 0 | 0 |
Exporter para Jira: exportación eficiente de datos de proyectos | Exporter for Jira: efficient export of project data | 1 | 0 | 0 | 1 |
gestión de proyectos en la nube con Atlassian Cloud | project management in the cloud with Atlassian Cloud | 0 | 0 | 0 | 0 |
gestión eficiente de productos con Jira Product Discovery | efficient product management with Jira Product Discovery | 1 | 0 | 1 | 2 |
informes de proyecto efectivos utilizando herramientas Atlassian | effective project reporting using Atlassian tools | 1 | 1 | 1 | 3 |
integración de aplicaciones en el Marketplace de Atlassian | integration of apps in the Atlassian Marketplace | 0 | 0 | 0 | 0 |
Jira Product Discovery para la gestión de productos | Jira Product Discovery for product management | 1 | 1 | 0 | 2 |
mejores prácticas para gestionar proyectos con Confluence | best practices for managing projects with Confluence | 0 | 0 | 0 | 0 |
optimización de activos de TI con Atlassian Assets | IT asset optimization with Atlassian Assets | 1 | 1 | 0 | 2 |
optimización del uso de Atlassian Assets en 2024 | optimizing Atlassian Assets usage in 2024 | 1 | 1 | 1 | 3 |
Projectrak para Jira: seguimiento avanzado de proyectos | Projectrak for Jira: advanced project tracking | 1 | 1 | 1 | 3 |
uso de Atlassian Forge para el desarrollo en la nube | using Atlassian Forge for cloud development | 0 | 0 | 0 | 0 |
DevOps & ITSM | 6 | ||||
Adopción de prácticas DevOps para mejorar el ciclo de vida del software | adoption of DevOps practices to improve software lifecycle | 0 | 0 | 0 | 0 |
comparativa de herramientas ITSM: Jira Service Management vs. ServiceNow | ITSM tools comparison: Jira Service Management vs ServiceNow | 1 | 1 | 1 | 3 |
comparativa entre Jira Service Management y ServiceNow en 2025 | comparison between Jira Service Management and ServiceNow 2025 | 1 | 1 | 1 | 3 |
diferencias entre CAPEX y OPEX en proyectos de TI | differences between CAPEX and OPEX in IT projects | 0 | 0 | 0 | 0 |
gestión de servicios de TI con Jira Service Management | IT service management with Jira Service Management | 0 | 0 | 0 | 0 |
gestión de servicios empresariales (ESM) y cómo mejorarlos | enterprise service management (ESM) and how to improve them | 0 | 0 | 0 | 0 |
implementación de prácticas DevOps en equipos de desarrollo | implementing DevOps practices in development teams | 0 | 0 | 0 | 0 |
mejora de la gestión de servicios empresariales con ESM | improving enterprise service management with ESM | 0 | 0 | 0 | 0 |
mejores prácticas en la gestión de servicios de TI | best practices in IT service management | 0 | 0 | 0 | 0 |
Licensing & Planning | 5 | ||||
licencias Atlassian: cómo elegir la adecuada para tu empresa | Atlassian licenses: how to choose the right one for your company | 0 | 0 | 0 | 0 |
uso de Budgety para Jira en la planificación financiera de proyectos | using Budgety for Jira in project financial planning | 1 | 1 | 1 | 3 |
ventajas del Data Center de Atlassian para grandes organizaciones | benefits of Atlassian Data Center for large organizations | 1 | 1 | 0 | 2 |
Agile & Project Management | 0 | ||||
beneficios de Agile en la gestión de proyectos | benefits of Agile in project management | 0 | 0 | 0 | 0 |
evolución del rol del gestor de proyectos con herramientas digitales | evolution of the project manager role with digital tools | 0 | 0 | 0 | 0 |
establecimiento de OKRs efectivos para equipos tecnológicos | setting effective OKRs for tech teams | 0 | 0 | 0 | 0 |
funciones y responsabilidades de una PMO en proyectos de TI | roles and responsibilities of a PMO in IT projects | 0 | 0 | 0 | 0 |
guía rápida para líderes tecnológicos sobre OKRs desde cero | quick guide for tech leaders on OKRs from scratch | 0 | 0 | 0 | 0 |
implementación de metodologías ágiles en empresas | implementing agile methodologies in companies | 0 | 0 | 0 | 0 |
implementación de metodologías ágiles en equipos de desarrollo | implementing agile methodologies in development teams | 0 | 0 | 0 | 0 |
qué es una PMO y su importancia en la gestión de proyectos | what is a PMO and its importance in project management | 0 | 0 | 0 | 0 |
transformación del rol del Project Manager con herramientas Atlassian | transformation of the Project Manager role with Atlassian tools | 0 | 0 | 0 | 0 |
AI & Automation | 0 | ||||
cómo implementar herramientas de inteligencia artificial en equipos de TI | how to implement AI tools in IT teams | 0 | 0 | 0 | 0 |
cómo utilizar Atlassian Intelligence en equipos de trabajo | how to use Atlassian Intelligence in work teams | 0 | 0 | 0 | 0 |
integración de inteligencia artificial en procesos de TI | integrating AI in IT processes | 0 | 0 | 0 | 0 |
Conclusions
- Atlassian-focused queries are golden: Whenever a search is directly about Jira, Confluence, or Marketplace apps, ChatGPT almost always cites Deiser in Spain, Mexico, and the US. This keeps the brand highly visible at exactly the point where prospects are ready to make buying decisions.
- Side topics get ignored: Content about CapEx vs OpEx, OKRs, or generic agile coaching, which might still perform well in Google, receives zero citations in LLM answers. ChatGPT and other models simply overlook any content that doesn't fall within Deiser's recognized Atlassian expertise.
Learning 4: LLMs reward content that is clearly and unquestionably authoritative for the user's question. "Traffic-grab" topics, no matter how well they perform in classic SEO, are largely ignored. In GEO, being relevant to your core proposition is far more important than targeting high search volume.
SEO vs GEO: What the Audit Reveals for Deiser
In Google search, Deiser can still attract clicks even from lower-ranking positions like #7 or #8. Visibility is distributed, and even second-tier results capture some audience. In contrast, GEO is all-or-nothing. Large language models only surface a small handful of brands, usually fewer than eight, in their answer sets. If Deiser is included in that short list, the brand is fully visible; if it drops off by just one position, it disappears entirely.
Another clear difference is how the models treat side topics. Posts about CapEx vs OpEx or generic agile advice might still rank in Google and drive organic traffic. However, in GEO, these same topics are left out because they are not seen as directly relevant to Atlassian tooling, which is Deiser's real area of authority. This shift means that only content tightly aligned with the core business and expertise is likely to appear in LLM answers.
There is an upside, however. While some Google keywords are fiercely competitive, they can be less crowded in GEO. As long as a query is unambiguously about Atlassian, Deiser still has the opportunity to break into the LLM answer set, even against larger sites that dominate traditional search. The takeaway is clear: it's time to focus on Atlassian-centered content and stop putting effort into topics that LLMs will never associate with the brand.
Where GEO Is Headed: Future of the industry
Looking ahead, the landscape is set to evolve rapidly. Today, clicks from LLM answers still have value, but the total volume of that traffic is likely to decline over time. As chatbots become better at answering questions directly in the chat window, users need less "extra reading" and are less likely to visit the source site. Citations will continue to matter, but their impact will be measured differently.
At the moment, chat models sometimes "hallucinate" or misremember facts, prompting users to check the original source. This accidental traffic is still beneficial, but as models improve and become more accurate, this effect will fade. Meanwhile, tracking which keywords drive real conversions will get more challenging, since many buyers will make up their minds inside the chat itself, beyond the reach of traditional web analytics. New metrics, such as how often the bot names your brand or displays your link, will become the true scoreboard for digital visibility.
In this new environment, brand presence becomes the primary goal. If the assistant says "Deiser," the brand is part of the conversation, regardless of whether the user clicks through or not. Investing in authoritative, expert content that LLMs trust will deliver more value than chasing raw visitor numbers.
Finally, industry analysts predict that the "answer layer", dominated by ChatGPT and perhaps one or two rivals, could match Google's query volume by 2028. Mastering GEO now is similar to having a head start in SEO twenty years ago: early investments will create advantages that are difficult to overcome later.