
Google’s Liz Reid Says LLMs Unlock Audio And Video Indexing

  • Multimodal LLMs let Google understand audio and video at a level that wasn't possible before.
  • Reid hinted at a future where Google surfaces sources you already subscribe to.
  • Reid also said LLMs open up cross-language access by translating information from one language into another.

Google's head of Search described how multimodal LLMs help Google understand audio and video, and discussed a direction for subscription-aware search.

In a podcast interview, Google VP of Search Liz Reid described two ways LLMs are changing what Google can index and how it ranks results for individual users.

Reid told the Access Podcast that multimodal AI models now allow Google to understand audio and video content at a deeper level than was previously possible. She also pointed to a future where search results adapt based on a user’s paid subscriptions.

What’s New

Multimodal Understanding Is Expanding What Google Can Index

Reid said the multimodal nature of LLMs has opened up content formats that Google previously struggled to process.

Reid told the hosts:

“The great thing about LLM is they’re multimodal. So we can actually understand audio content and video content actually at a level we couldn’t years ago.”

She went further, describing how Google can now go beyond basic transcription when analyzing video.

“Now you can understand audio much better. Now you can understand video much better. Now you can understand not just the video transcription but like what is the video more about or what’s the style or other things like that.”

Reid connected this to a long-standing gap in how search works for non-English speakers. For users in India who speak Hindi or other languages, the web often lacks the information they need in their language. Previously, translating all web content into every language wasn’t scalable. LLMs changed that.

“Now with an LLM, you can take information in one language, understand it, and then output in another language. Like that opens up information.”

Google has been moving in this direction for some time. In October 2025, Reid told the Wall Street Journal that Google had adjusted ranking to surface more short-form video, forums, and user-generated content.

The comments also add context to Google’s Audio Overviews experiment launched in Search Labs last June, which generates spoken AI summaries of search results.

That wasn’t possible a few years ago. In 2021, Google and KQED tested whether audio content could be made searchable and found that speech-to-text accuracy wasn’t high enough, particularly for proper nouns and regional references. Reid’s comments suggest that the barrier has fallen.

Subscription-Aware Search Could Change How Results Are Personalized

Reid also outlined a direction for personalization that goes beyond Google’s existing Preferred Sources feature.

She told the hosts that Google wants to surface content from outlets a user pays for, rather than paywalled results from sources they can’t access.

“If you love this source and you do have a relationship with it then that content should surface more easily for you on Google.”

Reid gave a practical example. Say 20 interviews on a topic are paywalled but a user subscribes to one outlet. Google should make it easy to find the one they can read.

“We should surface the one that they’re paying for and not the six that they can’t get access to more.”

She suggested the company has “taken small steps so far but want to do more” to strengthen how audiences and trusted sources connect through search. She also mentioned the possibility of micropayments for individual articles, though she acknowledged that model hasn’t taken off historically.

Google expanded Preferred Sources globally for English-language users in December, and announced a feature that highlights links from users’ paid news subscriptions. Google said it would prioritize those links in a dedicated carousel, starting in the Gemini app, with AI Overviews and AI Mode to follow. At the time, Google said users who pick a preferred source click to that site twice as often on average. Reid’s comments suggest the company sees subscription-aware search as a broader evolution of that same direction.

Why This Matters

The multimodal capabilities Reid pointed to expand which content formats get discovered through search. Podcasts, video series, and audio-first content have historically been harder for Google to evaluate beyond metadata and transcripts. Google’s growing ability to assess relevance and depth from audio and video directly changes who can be found through search and how.

For brands and creators investing in non-text formats, Google’s ability to surface that work is catching up to where the audience already is.

The subscription-aware personalization direction matters for any publisher with a paywall or membership model. Search results that adapt to what individual users pay for would tighten the connection between subscriber retention and search visibility. Paywalled content could perform better for the audience that matters most to the publisher, rather than being deprioritized because most users can’t access it.

Looking Ahead

Reid didn’t attach timelines to either development. The multimodal indexing capabilities she talked about appear to be current, while the subscription-aware personalization is a stated direction with some existing features already in place.

Google I/O is scheduled for May 19-20. Reid said on the podcast that the company is “actively building” but that the pace of AI development means some features could come together as late as April and still make it to the stage.


Featured Image: Mawaddah F/Shutterstock

SEJ STAFF Matt G. Southern Senior News Writer at Search Engine Journal