Google’s Gary Illyes and others answered many questions related to AI at Google Search Central Live Tokyo 2023, sharing new insights about Google’s approaches and recommendations on AI generated content.
Japanese search marketing expert Kenichi Suzuki presented at Search Central Live Tokyo 2023 and subsequently published a blog post in Japanese that summarized the top insights from the event.
Some of what was shared is already well known and documented, such as the fact that it doesn’t matter to Google whether content is AI generated or not.
For both AI generated content and translated content, what matters most to Google is content quality.
How Google Treats AI Generated Content
Labeling AI Generated Content
What may be less well known is whether Google internally distinguishes AI generated content from human-written content.
The Googler, presumably Gary Illyes, responded that Google does not label AI generated content.
Should Publishers Label AI Generated Content?
Currently the EU is asking social media companies to voluntarily label AI generated content in order to combat fake news.
And Google currently recommends (but does not require) that publishers label AI generated images using IPTC photo metadata, adding that image AI companies will in the near future begin adding the metadata automatically.
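For context, the IPTC photo metadata standard defines a DigitalSourceType property, and its trainedAlgorithmicMedia value designates media created by generative AI. As an illustrative sketch only (assuming the ExifTool command-line utility is installed, and using a hypothetical filename), the property can be embedded like this:

```shell
# Illustrative example: write the IPTC "trained algorithmic media" marker
# into an image's XMP metadata using ExifTool (assumed to be installed).
exiftool -XMP-iptcExt:DigitalSourceType="http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia" ai-image.jpg

# Read the property back to confirm it was written:
exiftool -XMP-iptcExt:DigitalSourceType ai-image.jpg
```

Image generation tools that add this metadata automatically would write the same property, which is what makes machine-readable labeling possible without publisher effort.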
But what about text content?
Are publishers required to label their text content as AI generated?
Surprisingly, the answer is no, it’s not required.
Kenichi Suzuki wrote that as far as Google was concerned, it’s not required to explicitly label AI content.
The Googler said they’re leaving it up to publishers to make that judgment call as to whether it’s a better user experience or not.
The English translation of what Kenichi wrote in Japanese is:
“From Google’s point of view, it is not necessary to explicitly label AI-generated content as AI-generated content, as we evaluate the nature of the content.
If you judge that it is necessary from the user’s point of view, you can specify it.”
He also wrote that Google cautioned against publishing AI content as-is without having a human editor review it before publishing.
They also recommended taking the same approach with translated content: a human should review it before publishing as well.
Natural Content is Ranked at the Top
One of the most interesting comments by Google was a reminder that their algorithms and signals are trained on human-written content, and because of that they will rank natural content at the top.
The English translation of the original Japanese is:
“ML (machine learning)-based algorithms and signals are learning from content written by humans for humans.
Therefore, they understand natural content and display it at the top.”
How Does Google Handle AI Content and E-E-A-T?
E-E-A-T is an acronym that means Experience, Expertise, Authoritativeness, and Trustworthiness.
It’s something that was first mentioned in Google’s search quality raters guidelines, which recommend that raters look for evidence that the author is writing from a position of experience with the topic.
An artificial intelligence, at this time, cannot claim experience in any topic or of a product.
So it’s seemingly impossible for an AI to meet the quality threshold for certain kinds of content that require Experience.
The Googler responded that they are having internal discussions about it and haven’t yet arrived at a policy.
They said that they will announce a policy once they have settled on it.
Policies on AI are Evolving
We’re living in a moment of transition because of the availability of AI and its lack of trustworthiness.
Mainstream media companies that rushed to test AI generated content have quietly slowed down to reassess.
ChatGPT and similar generative AI like Bard were not expressly trained to create content.
So perhaps it’s not surprising that Google currently recommends that publishers continue to keep their eye on the quality of their content.
Read the original article by Kenichi Suzuki:
Featured image by Shutterstock/takayuki