GEO
A Marketer’s Guide to SEO for LLMs
Websites aren’t dying. They’re just getting a new class of “users.”
By Bobby Knapp, Principal Solution Architect, and Alex Liss, VP, Data Science & AI.
Image generated using Midjourney.
The minute ChatGPT launched to major acclaim in 2022, marketers had one question: “How do I get it to give the answers I want about my brand?” A natural question, given that for years influencing brand discoverability had been possible through SEO and SEM.
The answer at the time wasn’t easy or delightful to hear: you kinda can’t. That’s because an LLM’s training material was baked into the model as it learned, and that material often came from typical web crawls. So if you had already implemented good web-crawl best practices on your site, the model was likely to pick up the basic signals you wanted to send.
But AI doesn’t stand still. Now ChatGPT, Perplexity, Gemini and other LLM chat platforms can access the internet to look things up, giving marketers another go at influencing the answer. In fact, Gartner predicts that search engine volume will drop 25% by 2026 as users shift to AI chatbots and other virtual agents.
Do you have questions about what this means for marketers? We certainly did. So we sat down with two of our experts on LLMs, data, and systems architecture, Principal Solution Architect Bobby Knapp and VP, Data Science & AI Alex Liss, and they broke it all down for us.
Huge: First off, what should we call this new kind of SEO — the kind where we’re focused on optimizing for LLMs?
Alex: I’ve started using Generative Engine Optimization, or GEO, because it feels like the catchiest. I’ve also seen AIAO (AI Agent Optimization), but that’s just a mouthful and sounds a bit like Old MacDonald’s farm.
Bobby: I’ve seen a lot of names being floated out there including LLMO, SAIO, LEO, AEO and AIO. If my 15 years in technology have taught me anything, it’s that you can never have too many standards for the same thing. So I have proposed “Language Model Framework for Agentic Optimization” (LMFAO). Internal review is still pending, but I feel confident. ;)
Huge: OK so we have a name – Generative Engine Optimization. Why are people turning to generative engines like ChatGPT to search in the first place?
Bobby: Chat experiences built on large language models (LLMs) bring personalization to search but, more importantly, allow people to start their journey wherever they want. We’ve been trained for 30 years (all the way back to AOL!) to use keywords. But imagine if you could start your discovery from any point. Try searching Google with something like “Where should my family go on vacation for $5,000?” and you won’t get far; yet that’s a fairly normal search in a generative engine.
My favorite personal example was when I was trying to pick what airline to gain status with. I entered my location, most frequently traveled destinations, and what type of perks mattered to me. Instead of getting an endless list of travel and points blogs, I got a well-researched comparison with a direct conclusion and recommendation. If you’re the brand manager at Delta or United, you should be prepared for people searching this way, because that’s where GEO comes in.
Alex: Generative interfaces are causing a shift from searching by keyword to having a conversation. Smartphone natives like Gen Alpha are used to personalized AI that functions as an answer engine and refines queries with you at each step. That’s going to become increasingly widespread as platforms like Perplexity build in commerce features. OpenAI’s Deep Research also offers a glimpse of a new paradigm of agentic search. You can give an AI really specific instructions for something complicated, and it will go off and get it done by itself. For now that’s just research, but over time you can expect to see it integrated with commerce and direct-linking functionality.
Alex Liss and Bobby Knapp.
Huge: So LLMs are providing answers and starting to be asked to shop or accomplish more complex tasks on our behalf? How could I, as a marketer, know if my site is LLM-friendly?
Bobby: I think the more important question is “How can I know if my brand is LLM-friendly?” In traditional search, only about 12% of queries end with a user clicking a result outside of the top three. In that world, catering your site to search engines is the only way to break through the noise. GEO shifts the paradigm in two major ways: it selects which sources to use on behalf of the user, and it draws on a much larger number of sources for each query.
Most generative search engines average six to seven sources per search, with each source coming from a different domain. Even if your first-party content appeared in every relevant search, you’d control only about 15% of the narrative. More importantly, some studies have shown a significant bias toward sourcing from content aggregators and user-generated content, with earned media being the single largest contributor.
Succeeding in GEO is about much more than optimizing content on your site. It’s about getting your brand out to as many channels as possible with as much positive attention as possible, particularly in the realms of earned media and UGC.
Alex: A major part of GEO is emphasizing user intent. This means site content will move from being indexed by functional descriptors of what it is to contextual descriptors of what it does. A great example comes from the femcare brand Viv. Last summer, they pivoted their blog content to emphasize action-oriented language like “here is why Viv is your safest tampon choice” for women concerned about the presence of contaminants like lead and arsenic. This positioned Viv to take advantage of a trending topic in generative AI search engines around sustainable tampons. Qualified visitors who came in through these AI searches converted at four times the average rate.
As a general best practice, the open-source coalition Schema.org has a framework for enriching your site tags with LLM-optimized metadata that captures intent through the action-oriented queries typical of LLM search. A richer schema for your site can improve AI readability and capture the nuance of your content, including its different formats (images, videos, audio files), structure, and narrative flow. There are also emerging standards like llms.txt to help LLM search crawlers understand the contents of your site.
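To make that concrete, here is a minimal sketch of Schema.org markup in JSON-LD, using the FAQPage type to encode an action-oriented question. The question and answer text are hypothetical, written to echo the Viv example above; real markup should only state claims your content actually supports.

```html
<!-- A minimal sketch: Schema.org FAQPage markup embedded in a page.
     The question/answer text is hypothetical, for illustration only. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Which tampons are free of contaminants like lead and arsenic?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Tampons made from organic cotton and third-party tested for heavy metals are the safest choice for contaminant-conscious shoppers."
    }
  }]
}
</script>
```

Phrasing the "name" field as the question a user would actually ask, rather than as a label for the page, is what shifts the markup from describing what the content is to describing what it does.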
Huge: Will the shift from SEO to GEO pose any risks?
Alex: There is a definite risk for publishers as more queries are answered directly in the generative search engine, instead of in traditional search, which drives traffic to their sites. Newer generative experiences like Perplexity are experimenting with monetizing sponsored ads at a CPM of $30-$60, which is much higher than traditional channels like display or paid social. Publishers looking to monetize their content through ads could face reduced traffic as well as higher traffic-acquisition costs if they need to run ads on Perplexity to capture the lost eyeballs.
Bobby: The bigger risk is being misunderstood or misrepresented. By now, we’ve all seen inaccurate AI summaries, or ones missing crucial information. My personal favorite was a group text where one friend linked a news article while another was telling us about his new job. I picked up my phone to see a generative summary telling me: “CNN Breaking News: My boss seems pretty awesome.”
Brands need to start testing their content with both people and generative AI. You need to look at how your brand’s tone of voice impacts LLM interpretation and understanding. Your content needs to speak to generative search just as well as it does to customers. You need to make sure that when Perplexity is speaking for you, it delivers the message that you want.
Huge: What are the opportunities that GEO presents?
Bobby: New standards are bound to take shape, and staying ahead of the curve will generate an inordinate amount of momentum in the short term. As Alex mentioned, one of the most obvious advancements is a form of robots.txt for agents. An llms.txt file will instruct agents where to look on your site for the information they want. While this isn’t yet a fully agreed-upon standard, it’s not a bad practice to implement now; a minimal example follows below. It can also serve as an internal tool to guide your own agents in how they digest your content.
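For reference, here is what a minimal llms.txt might look like, following the markdown-style format proposed at llmstxt.org (an H1 site name, a blockquote summary, then sections of annotated links). The brand, URLs, and descriptions are placeholders:

```markdown
# Example Brand

> Direct-to-consumer retailer of organic home goods. The links below
> point agents to canonical facts about our products and policies.

## Products

- [Product catalog](https://example.com/products.md): full list with specs and pricing

## Policies

- [Shipping and returns](https://example.com/policies.md): regions, timelines, and fees

## Optional

- [Press coverage](https://example.com/press.md): earned media mentions and reviews
```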
This reminds me a lot of the era when single-page applications were growing in popularity but web crawlers couldn’t execute JavaScript. As a result, a lot of sites couldn’t be indexed. The solution was to create a static site that mirrored your own, built specifically for web crawling: a completely separate version that no human was ever meant to see. Those that implemented this technique first saw their page rankings skyrocket. We may see a similar situation where agents experience a version of your site that is tailored to them.
Alex: Along with disruption comes opportunity. Companies looking to optimize their site metadata for LLM queries will be able to use their own LLM tools to do rapid testing and iteration. For example, imagine loading your sitemap, HTML tags, and metadata schema into an LLM, then chatting with it: Which audiences and behaviors is my site optimized for? Will the target audience be able to find my content? How can I tailor my metadata to stand out vs. my competitors? Companies can use LLMs to help them adapt and uncover additional opportunities for audience engagement.
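Here is a minimal sketch of that workflow in Python, using the OpenAI SDK as one possible client. The model name, file paths, and questions are placeholders, not a prescribed setup:

```python
# A sketch: feed your sitemap and page metadata to an LLM and
# interrogate it the way a generative search engine might.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

sitemap = Path("sitemap.xml").read_text()             # placeholder path
metadata = Path("schema_markup.jsonld").read_text()   # placeholder path

questions = [
    "Which audiences and behaviors is this site optimized for?",
    "Will my target audience be able to find this content?",
    "How can I tailor the metadata to stand out vs. competitors?",
]

for q in questions:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model works
        messages=[
            {"role": "system",
             "content": ("You are a generative search engine deciding which "
                         "sources to cite. Evaluate the site described below.")},
            {"role": "user",
             "content": f"SITEMAP:\n{sitemap}\n\nMETADATA:\n{metadata}\n\nQUESTION: {q}"},
        ],
    )
    print(f"Q: {q}\nA: {response.choices[0].message.content}\n")
```

Because the loop is cheap to rerun, you can iterate on your metadata, rerun the same questions, and compare answers, which is the rapid testing-and-iteration cycle described above.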
Huge: Will an AI agent see any ads? And if so, could that change its recommendation?
Bobby: An agent may see your ad, but it won’t really care. The ad will either be filtered out or represent a small part of the context that is ignored in favor of the more organized message of the content. Agents also serve to mask the user behind the query, making targeted advertising more difficult. Where ads can be most impactful is in generating online discourse. If your advertising gets people talking about your products on social media, especially discussion-driven platforms like Reddit, then your advertising will have more impact on your GEO results.
Huge: What else is on the horizon that a marketer should be thinking about?
Alex: Taking things even one step further. In the last 6 months, Anthropic’s Claude added a Computer Use feature, while OpenAI launched Operator in January. Both let the intelligent system use a computer (including a browser) the way a human would. In other words, LLMs are able to use the interface itself.
You can ask one of these chat experiences to complete a task (like adding groceries to a cart or shopping for a new air purifier) and it will actually go off and do those things. These features aren’t in mainstream usage yet, but the early versions give us a hint of what’s to come.
Bobby: Right now, the big thing is that these generative searches and agents are trying to operate in a world that wasn’t designed for them. Sites have been optimized endlessly through A/B testing and user research to convert a user visit into a sale. Do we think agents are going to respond in the same way? Does perfecting the size, shape, color and text of the add-to-cart button matter to them?
There will be an evolution from agents parsing content designed for us, to content designed for agents, to ultimately agents interacting with other agents. A recent video demoed a customer’s AI assistant talking to a hotel’s AI assistant, then switching to a machine protocol that sounds like we’ve gone back to 56k modems. It haunts my dreams.
The best way to prepare right now is to have a robust internal AI strategy. If you are building and testing data and context to get the best performance out of your own tools, you’ll also be learning what works best for external agents.
Huge: Thanks, Alex and Bobby.