SEO in the New Era of AI-Based Chatbots (Artificial Intelligence)

Alessio Pomaro
6 min read · Feb 28, 2023

Google and Microsoft, with their recent presentations of Bard and Prometheus, have shown themselves to be the main players on the path toward a new online search experience: a user experience that could turn into a dialogue (in the form of a chat) with a virtual assistant. A system that distills search results into a single answer and allows the search to be refined through follow-up questions.

I would like to immediately clear up the first question this concept probably raises, because it helps in understanding the nature of these systems. Is this something very similar to Google's current featured snippet? No: the featured snippet is extracted verbatim from one of the search results, whereas the chatbot's response is produced by a language model (LM) that distills information from the top search results.

The hood is similar, but the engine is completely different.

The main question, however, concerns the impact that chatbots will have on search, on the SEO world, and on the businesses that are built on the SERP (search engine results page).

What impact could AI Chatbots have on traffic to websites?

Publishers are expressing concern about this way of consulting results, above all because they fear that users will have less and less need to click through to a result to continue reading.

Google did little to reassure them by showing a demo of Bard with no source references or publisher mentions. The Bing integration, by contrast, features these elements prominently. The same direction has been taken by other conversational search engines gaining visibility in this period, such as Perplexity.ai, You.com, and Neeva AI, which actually implemented hybrid solutions (search engine + language model) ahead of Google and Microsoft.

Publishers' concern is déjà vu for anyone who has worked in SEO for a few years. The same worries emerged in the past over the amount of information presented directly in the SERP (SERP features), and peaked with the featured snippet. Once again, the fear is that if publishers don't get traffic and can't monetize their content, they will stop producing it.

Will these new technologies take the discussion to another level?

Google and Microsoft have said that traffic to websites is critical. Satya Nadella, CEO of Microsoft, said that the new interpretation of search only makes sense if it generates traffic for content producers, and that the AI-generated response is just another way of representing the "10 blue links."

Yusuf Mehdi, CMO of Microsoft, stated that this is what the references within Bing's answers are for. Caitlin Roulston, director of communications at Microsoft, however, declined to share click data from the testing period. This might make sense, given that most user tests probably stopped at trying out the chatbot's responses.

Nadella also spoke about SEO, saying there will be more motivation to create authoritative content that earns a place among the results the AI considers when generating its response.

Where there is search… there is SEO!

I believe that the acceleration leading to the public release of ChatGPT has confused us and generated anxiety, making us forget the history of how SEO and SERPs have evolved.

I am convinced that websites will continue to exist, indeed they will improve and SEO will continue to be important. The new search mode will not replace the one we know, but will give us more possibilities.

What sense would it make, for example, to replace a fast and reliable system like the current response to "know simple" intents? None, probably.

Where, instead, would it become really useful? In more complex searches, where a single question asks for multiple pieces of information. In the example below, I asked "What is LCP and how is it optimized?". The first image is Perplexity's answer: as we can see, it addresses both parts of the question, citing its sources.

A search using Perplexity.ai

The following image relates to the new Bing (via Skype). In this case, a follow-up question is also proposed to disambiguate the query. But here too, the answer covers both parts of the question in depth.

A search using The New Bing (via Skype)

The following is Google's answer. In this case, what we get cannot be equally effective, because it would require a result that answers the exact question within a portion of its content: content precisely tailored to such a specific question.

A search using Google

In general, we can say that when searches grow beyond 6–7 words, the results lose effectiveness. This problem does not affect language models because, drawing on the search engine's results, they are far better at composing a relevant and complete answer.

A portion of SERP clicks will be absorbed by new ways of offering direct answers, but how would the balance change if we evaluated search sessions rather than single searches? We should probably measure how many search sessions a website completes, instead of focusing on searches that don't generate clicks. The new search modes, in fact, also introduce new types of searches and new ways to refine them.
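To make the "sessions, not single searches" idea concrete, here is a toy sketch in Python. The session format, the `completes_session` helper, and all the data are invented for illustration; a real analysis would draw on actual analytics or Search Console data.

```python
# A toy session log: each session is a list of (query, clicked_domain) events,
# with None for zero-click searches. All data here is invented.

def completes_session(session, our_domain="example.com"):
    # The site "completes" a session if the user's final click lands on it,
    # regardless of how many zero-click searches came before.
    clicks = [domain for _, domain in session if domain is not None]
    return bool(clicks) and clicks[-1] == our_domain

sessions = [
    [("what is lcp", None), ("how to optimize lcp", "example.com")],
    [("lcp meaning", "other.com")],
    [("core web vitals", None)],  # a zero-click session
]

completed = sum(completes_session(s) for s in sessions)
print(f"{completed}/{len(sessions)} sessions completed")
```

Counted this way, the first session is a success for the site even though its opening search generated no click at all.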

Is it our brand whose content generates the answers in the SERP? Is ours the link that leads the user to close the search session on a web page, and to convert?

Because this should be the goal of the strategy: to be the point of reference for users in our sector, accompanying them through every phase of the search session.

How is this achieved? By studying users, their needs, and the market, and by working so that Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T) are recognized in our projects' content.

For this reason, as I often say…

…as long as search exists, SEO will always play a key role.

Transactional searches

So far I have mainly described informational searches, where, for the simpler ones, the search session could be completed in the SERP. How does the picture change for more commercial queries?

From what we have seen to date, AI-based search assistants do not let you purchase products or services. As a result, traffic from search engines to websites may shift as described above. But conversion rates could rise, because users could complete a large part of the exploration/evaluation loop (in the sense of the Messy Middle model) in the SERP through chat, arriving at the website at a point in the customer journey much closer to conversion.

Thanks to this higher-quality traffic, SEO practitioners could also measure the impact of their actions more accurately and produce better forecasts.

But to achieve all this, let's return to a concept expressed earlier: are our sources generating the answers? Are we the reference for users? These are the questions we will have to think about, and on which we will probably focus our SEO work in the future.

Do questions remain? Certainly!

We are assuming that online search will evolve toward hybrid solutions that combine a search engine (which selects the best results based on relevance, quality, and the authority and reliability of the source) with a language model (which synthesizes that information into answers for the user).
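This hybrid combination can be sketched as a minimal retrieve-then-distill pipeline. Everything below (the `Result` type, the `search` and `distill` functions, the example data) is invented for illustration; in a real system the distill step would prompt a language model rather than merge snippets.

```python
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    snippet: str
    score: float  # relevance/authority score assigned by the engine (toy value)

def search(query: str, index: list[Result], k: int = 3) -> list[Result]:
    # Search-engine step: rank documents and keep the top k.
    # (Stubbed: a real engine scores against the query; here scores are fixed.)
    return sorted(index, key=lambda r: r.score, reverse=True)[:k]

def distill(results: list[Result]) -> str:
    # Language-model step: merge the top results into one cited answer.
    # (Stubbed: a real system would prompt an LM with these snippets.)
    body = " ".join(f"{r.snippet} [{i}]" for i, r in enumerate(results, 1))
    sources = "\n".join(f"[{i}] {r.url}" for i, r in enumerate(results, 1))
    return f"{body}\nSources:\n{sources}"

index = [
    Result("https://example.com/lcp", "LCP measures when the largest element renders.", 0.9),
    Result("https://example.com/optimize", "Optimize LCP by compressing images and preloading key resources.", 0.8),
    Result("https://example.com/unrelated", "An unrelated page.", 0.1),
]

answer = distill(search("What is LCP and how is it optimized?", index, k=2))
print(answer)
```

Note the structural point the article makes: only sources that the retrieval step ranks highly ever reach the distill step, which is why "being among the considered results" becomes the SEO objective.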

To this combination, I would also add another actor: the browser. In the following video, we see a very interesting interaction involving:

  • Chrome + Perplexity.ai extension,
  • Edge + Bing sidebar (New Bing).
Language models and search: Chrome + Perplexity and Edge + Bing tests

I believe this type of solution can create extraordinary experiences, but it is essential that the problem of "hallucinations" be resolved, i.e. the generation of content that is correct in form but based on concepts that deviate from reality.

“Scaling neural network models — making them bigger — has made their faux writing more and more authoritative-sounding, but not more and more truthful.”
- Gary Marcus -

Is it a fixable problem? Yes, but the "time" variable will play a decisive role, because people may lose faith in systems that do not always distill information reliably.


Alessio Pomaro

Head of AI @ Search On Media Group, Lecturer, Speaker, Author