Google recently published research on two new natural language processing algorithms. One of them claims a new state of the art for understanding how to answer questions. Google has a longstanding goal of transforming search into a computer that can understand questions and answer them like a human, and Google itself says these algorithms are just the beginning.
Just Because Google Publishes Research…
It’s important to note that Google’s boilerplate stance on research papers and patents can be paraphrased thusly: “Just because Google publishes a research paper or patent does not mean that Google is actually using it.”
Understanding Questions by the Answers
The first algorithm is described in a paper titled Learning Semantic Textual Similarity from Conversations. The algorithm learns what questions really mean by studying the responses they receive. Here’s how Google explains it:
“The intuition is that sentences are semantically similar if they have a similar distribution of responses. For example, “How old are you?” and “What is your age?” are both questions about age, which can be answered by similar responses such as “I am 20 years old”. In contrast, while “How are you?” and “How old are you?” contain almost identical words, they have very different meanings and lead to different responses.”
The algorithm is trained on conversations from Reddit and other sources so that it learns what questions mean from how people actually answer them.

Similarities between long questions are comparatively easy to detect; short questions are much harder. The research claims to be able to train a machine to tell apart even short, similar-looking questions.
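The distinction Google draws between word overlap and meaning is easy to demonstrate. The sketch below is our own illustration, not Google’s model: it scores sentence pairs with a naive bag-of-words cosine similarity, and it rates Google’s two age questions as unrelated while rating the lexically similar pair as near-identical. That is exactly backwards from their meaning, which is why training on responses rather than words is needed.

```python
from collections import Counter
from math import sqrt

def cosine_bow(a: str, b: str) -> float:
    """Cosine similarity between simple bag-of-words count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (sqrt(sum(c * c for c in va.values()))
            * sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Word overlap rates these as nearly identical, though they mean different things:
print(cosine_bow("How are you", "How old are you"))       # ≈ 0.87
# ...and rates these as unrelated, though they mean the same thing:
print(cosine_bow("How old are you", "What is your age"))  # 0.0
```

An encoder trained on input-response pairs inverts these scores by embedding sentences according to the answers they attract rather than the words they contain.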
Here is what the researchers concluded:
“In this paper, we present a response prediction model which learns a sentence encoder from conversations. We show that the encoder learned from input-response pairs performs well on sentence level semantic textual similarity. The basic conversation model learned from Reddit conversations is competitive with existing sentence-level encoders on public STS tasks. A multitask model trained on Reddit and SNLI classification achieves the state-of-the-art for sentence encoding based models on the STS Benchmark task.”
How Could This Algorithm Be Used?
Nuance: An earlier research paper, Measuring the Sentence Level Similarity (not published by Google) on a related topic provides a description of how a similar algorithm could be applied in the real world:
“Sentence similarity measures are becoming increasingly more important in text-related research and application areas such as text mining, information extraction, automatic question-answering…”
Google’s research paper is silent as to how this algorithm could be used. However, Google’s AI Blog post about these two algorithms, Advances in Semantic Textual Similarity, noted that they allow Google to build useful text classifiers from as few as 100 labeled examples. That means the models can learn a task from a minimum of data instead of the millions or billions of examples previously required.
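To see why a strong sentence encoder makes tiny training sets viable, here is a minimal sketch of a few-shot classifier. Everything in it is a hypothetical stand-in: the embedding vectors are invented (a real system would obtain them from a pretrained encoder such as the one the paper describes), and nearest-centroid classification is our illustrative choice, not a method Google names.

```python
from math import sqrt

# Hypothetical pretrained sentence embeddings. In practice these vectors
# would come from a pretrained sentence encoder, not be hand-written.
EMB = {
    "great food and service":      [0.9, 0.1],
    "the meal was wonderful":      [0.8, 0.2],
    "terrible, would not return":  [0.1, 0.9],
    "the service was awful":       [0.2, 0.8],
    "absolutely loved this place": [0.85, 0.15],
}

def centroid(vectors):
    """Average a list of equal-length vectors dimension by dimension."""
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Only two labeled examples per class: the point of a strong encoder is
# that this little supervision can be enough.
train = {
    "positive": ["great food and service", "the meal was wonderful"],
    "negative": ["terrible, would not return", "the service was awful"],
}
centroids = {label: centroid([EMB[s] for s in sents])
             for label, sents in train.items()}

def classify(sentence):
    """Assign the label whose centroid is closest in cosine similarity."""
    return max(centroids, key=lambda label: cosine(EMB[sentence], centroids[label]))

print(classify("absolutely loved this place"))  # positive
```

Because the heavy lifting (mapping text to meaning) is done by the pretrained encoder, the classifier itself only has to separate a handful of points, which is why a few labeled examples can suffice.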
Bill Slawski Comments
Nuance: I asked Google patent expert, Bill Slawski of GoFishDigital, about this algorithm and here is what he said:
I wrote about a Continuation patent from Google in the post, Google’s Related Questions Patent or ‘People Also Ask’ Questions. What was interesting about this updated patent was that it introduced the idea of a “question graph” for questions that Google might collect answers to.
Google told us that they would be creating datastores of natural language questions and answers in at least one patent: Natural Language Search Results for Intent Queries. By approaching identification of questions differently (with potentially some redundancy), it adds the ability to create a larger and potentially better question graph.
Why These Algorithms are Important
Google has a stated goal of creating an AI similar to the Star Trek computer, an ideal it has been reaching for since at least 2013, probably earlier. Slate published an article in 2013 subtitled, Google has a single towering obsession: It wants to build the Star Trek computer. Here’s a key quote from that article:
“A few weeks ago, I was chatting with Tamar Yehoshua, director of product management on Google’s search. “Is there a roadmap for how search will look a few years from now?” I asked her. “Our vision is the Star Trek computer,” she shot back with a smile. “You can talk to it—it understands you, and it can have a conversation with you.””
The connection between the Star Trek computer and the Google voice assistant is so close that the Google voice project was initially named after the actress who voiced the Star Trek computer.
In the make-believe world of Star Trek, the actors speak the trigger word, “Computer,” and the computer listens and provides answers. In the real world we speak the trigger words “Ok Google” and Google’s voice assistant answers.
These algorithms can be seen as a contribution to Google’s goal of becoming like the computer in Star Trek. But there are also many other uses for these technologies beyond the question answering capabilities.
How the Star Trek Paradigm Resembles Google Voice Assistant
In the Star Trek world, interfacing with the computer is strictly a matter of speaking a trigger word and then asking a question. A typical scenario plays out like this:
In the Star Trek World:
KIRK: Computer. <– (Trigger Word)
This is Admiral James T. Kirk requesting security access.
Computer. Destruct Sequence One, code one, one-A.
COMPUTER VOICE: Destruct Sequence is activated.
In the Real World:
YOU: Ok Google <– (Trigger Word)
What is a good restaurant to eat in Madrid at an affordable price?
Google Voice Assistant: The best cheap eats in Madrid are…
Is Google Alone in the AI First Paradigm?
No. It’s an open secret among technology companies that there is a race to become AI first. Qi Lu, the COO of Baidu, referenced this race in the statement announcing that he was stepping down:
“‘I’m honored to have participated in Baidu’s transition into an AI-first company,’ Lu said. Microsoft and Google, among others, have also sought to reshape themselves around AI.”
Reshaping search into an assistant paradigm has been a longstanding goal at Google, and it’s not alone. Even Zillow is reported to be transitioning from search to the AI assistant paradigm. A recent article in Mashable quoted Zillow on AI:
“Currently, the site is undergoing what Wacksman describes as “the evolution from a search box to an assistant.” The idea is to transform Zillow from a simple real-estate search engine to a tool that understands you.”
Takeaway: How Site Publishers Fit into the AI Assistant Paradigm
These technologies are reported to be in the beginning stages.
According to Google:
“We believe that what we’re showing here is just the beginning, and that there remain important research problems to be addressed, such as extending the techniques to more languages…”
Google also noted that these technologies fall short of understanding text at the paragraph and document level. Those are the next challenges on the way to an AI assistant.
It’s important to keep up on these developments because at some point there will be more indications of where publishers and merchants fit into this new world. That means paying attention to new developments in Schema.org structured data requirements as outlined in Google’s developer pages.
Read Google’s announcement on their AI Blog, Advances in Semantic Textual Similarity
Read more about voice search here.
Images by Shutterstock, Modified by Author