A new study suggests the performance of virtual assistants, like Google Assistant, Alexa, and Siri, may have plateaued.
Perficient Digital released its annual evaluation of virtual assistants. Perhaps the most interesting finding is that none of the top virtual assistants are improving when it comes to answering questions accurately.
Not only are they not improving, but the accuracy of every virtual assistant dropped slightly compared to last year’s study. This may be an indication that these technologies have reached their limit, says Eric Enge of Perficient Digital:
“Overall, though, progress has stalled to a certain degree. We’re no longer seeing major leaps in progress by any of the players. This may indicate that the types of algorithms currently in use have reached their limits. The next significant leap forward will likely require a new approach.”
Virtual assistants might not be improving, but that doesn’t mean they can’t be useful. Let’s take a look at how they performed against each other.
Comparing Virtual Assistants
Google Assistant (on a smartphone) reigns supreme when it comes to answering the most questions. It also has the highest overall percentage of responses that are fully and correctly answered.
Coming in last with the most incorrect responses is Apple’s Siri. The accuracy of Siri’s answers was found to have dropped by 12% compared to last year.
Microsoft’s Cortana came in first as the virtual assistant that attempted to answer the most questions. However, it also came in last in answering those questions accurately.
Perhaps unsurprisingly, Google Assistant and Google Home placed first when measuring to what degree virtual assistants supported featured snippets. Siri and Alexa do not support featured snippets at all, and Cortana supports them to a small degree.
Although Siri came in last where it really counts, it was tied with Alexa for having the most jokes. Cortana was found to be decidedly unfunny.
For more data on the performance of virtual assistants, see the full study here.