That development hasn’t gone unnoticed in the PR world. Many agencies have already begun promoting ‘LLM optimisation’ or ‘GEO’ (Generative Engine Optimization) as the next big thing. And so have we at Progress!
But if we look beyond the hype and examine the available research closely, the story becomes more complex. Our own deep dives, analyses and results in Semrush and Evertune made us wonder. These tools frame LLM visibility as a near-term strategic imperative, and the obvious risk is that many will invest without a clear strategy, measurement or outcome. A recent report by PR Agency One, “Prove Me Wrong: Tracking Brand Visibility Inside Most LLM Chatbots”, offers an interesting counterweight. Its findings show an emerging trend, but also underline how early and unstable this field still is. And they pin down the question we ask ourselves whenever we look at GEO / LLM optimization dashboards: what exactly are we looking at, and how trustworthy is it? Because, to be honest, most of the reporting out there is inaccurate.
At Progress Communications, we recognize the potential. But we also believe it is our responsibility to communicate clearly about what is actually measurable, and what is still speculation.
Premature bold claims
Millions of users now ask LLMs questions that would previously have gone to search engines. Tools like Evertune, which track how often brands are recommended by AI models, are gaining traction, especially in the domains where Progress is active: consumer tech, software and mobility.
These signals are important. They tell us that brand visibility inside LLMs is becoming a new layer of reputation, one that sits somewhere between top-of-funnel search, earned media and product comparison websites. It covers the moments of awareness and interest, which is often precisely where we operate as a PR agency. But there is a crucial nuance: we currently do not fully understand how or why models recommend specific brands.
LLM answers vary with prompt phrasing, your location, the language and even the time of day. The underlying training data is opaque and models update frequently. So far we haven’t seen the golden egg: a predictable, repeatable method to directly improve a brand’s visibility inside AI answers, and certainly not in smaller markets such as the Benelux.
That makes bold claims like ‘we can make you rank higher in ChatGPT’, in our opinion, premature and misleading.
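To illustrate just how shaky a single measurement is, here is a minimal sketch, assuming the OpenAI Python SDK; the model name, the question phrasings and the brand name are placeholders we invented for illustration. It asks the same question in three phrasings and simply records whether the brand is mentioned at all. The results typically differ per phrasing, and often per run.

```python
# Minimal sketch: how much does brand presence vary with prompt phrasing?
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
# in the environment; model, phrasings and brand name are placeholders.
from openai import OpenAI

client = OpenAI()

BRAND = "ExampleBrand"  # hypothetical brand name
PHRASINGS = [
    "Which project management tools would you recommend for a small business?",
    "What is the best project management software for SMEs?",
    "I run a small company. Which project management app should I use?",
]

def brand_mentioned(prompt: str) -> bool:
    """Ask the model once and check whether the brand appears in the answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content or ""
    return BRAND.lower() in answer.lower()

for prompt in PHRASINGS:
    print(f"{brand_mentioned(prompt)!s:>5}  <- {prompt}")
```

Any dashboard figure is an aggregate over choices like these: which phrasings, which model, which language, how many runs. That is exactly why we keep asking what the numbers actually represent.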
Most of the data that LLMs rely on is still overwhelmingly English-language. In practice, this means:
- Dutch and Flemish brand content is less present in training datasets
- Local news outlets, trade publications and niche B2B titles are underrepresented in many AI training datasets
- Flemish/Dutch product reviews and user-generated content (blogs, forums) are sparse, giving models less context for accurate comparisons
- Dutch-language queries often receive more generic or English-biased results
- Smaller, regional brands appear far less frequently in AI answers, regardless of their actual market relevance
For a Benelux agency like Progress Communications, this is an important point. Our clients hire us to increase their visibility in the Dutch and Flemish (and Belgian-French) market. Promising them that we can ‘optimise’ LLM outputs today would not be responsible.
What do we do at Progress with GEO?
Well, there is a lot we can do today, and we actively monitor developments around AI visibility. For example, we can:
- Track how often brands are mentioned inside model answers
- Identify which product specs or proof points repeatedly get omitted by AI, which is a useful insight for future content planning
- Compare visibility across Dutch, Flemish and English queries
- Analyse sentiment and consistency of generated descriptions
- Benchmark changes around major product launches, allowing you to assess how well information is reflected in AI-generated explanations
- Check for inconsistencies over time by running periodic tests, capturing whether a model’s answers drift, improve or degrade
- Identify clear factual gaps, contradictions or misconceptions
- Map which sources LLMs appear to reference most often
This gives us real insight: it shows whether our client is present, absent, misrepresented or undervalued compared to competitors, and it helps guide content decisions and PR planning.
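To make that tracking concrete, here is a minimal sketch of periodic monitoring, under stated assumptions: the `ask_llm` helper is a hypothetical placeholder for whichever model you query (for instance the snippet earlier in this article), and the brand names and queries are invented for illustration. It logs, per language, whether each brand appears in the answer and computes mention rates over time so drift becomes visible.

```python
# Minimal sketch of periodic brand-visibility logging.
# `ask_llm` is a hypothetical placeholder for whichever model/API you track;
# the brand names and queries are invented for illustration.
import csv
from datetime import datetime, timezone
from pathlib import Path

BRANDS = ["ExampleBrand", "CompetitorA", "CompetitorB"]  # hypothetical names
QUERIES = {
    "nl": "Welke projectmanagementtools raad je aan voor een kmo?",
    "en": "Which project management tools would you recommend for an SME?",
}
LOG_FILE = Path("llm_visibility_log.csv")

def ask_llm(prompt: str) -> str:
    """Placeholder: call whichever model or API you are tracking."""
    raise NotImplementedError

def run_snapshot() -> None:
    """Append one row per (language, brand) with a mention flag and timestamp."""
    timestamp = datetime.now(timezone.utc).isoformat()
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["timestamp", "language", "brand", "mentioned"])
        for lang, query in QUERIES.items():
            answer = ask_llm(query).lower()
            for brand in BRANDS:
                writer.writerow([timestamp, lang, brand, int(brand.lower() in answer)])

def mention_rates() -> dict[tuple[str, str], float]:
    """Share of snapshots in which each (language, brand) pair was mentioned."""
    counts: dict[tuple[str, str], list[int]] = {}
    with LOG_FILE.open(newline="") as f:
        for row in csv.DictReader(f):
            key = (row["language"], row["brand"])
            counts.setdefault(key, []).append(int(row["mentioned"]))
    return {key: sum(vals) / len(vals) for key, vals in counts.items()}
```

Run a snapshot on a regular schedule and compare the mention rates between periods: a brand that drops out of the Dutch answers but stays present in the English ones is a signal worth investigating, not a ranking to optimise.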
What we cannot responsibly promise (yet)
- That specific PR content will directly influence LLM outputs
- That certain publishers will ‘train’ an LLM to recommend a brand
- That keyword strategies can improve position inside AI responses
- That LLM visibility can be ‘optimised’ like SEO
Like any emerging technology in our field (and when you work in tech, you have seen a few of them!), LLM visibility deserves attention, and we are working on it at the forefront. But we also think that selling it as a guaranteed service today would be unwise.
Are you curious to know what LLMs currently say about your brand?
At Progress we are happy to help you explore:
- Dutch, Flemish and English visibility
- Category-level comparisons
- Brand descriptions and sentiment
- Early gaps or inconsistencies in AI-generated answers
- And, very importantly: we map which sources LLMs appear to reference most often