Every year, Loup Ventures analyst Gene Munster runs a proficiency test on the major digital assistants to evaluate the performance and competency of each. Last year, we saw that Siri, while evolving at a snail's pace, was still far behind Google Assistant in these respects; in 2019, the scenario didn't change much.
Munster's test consists of asking each assistant 800 questions: Google Assistant, Siri, and Alexa took part, while Cortana was dropped from the testing due to Microsoft's shift in focus for the platform. Answers are evaluated on two criteria: first, whether the assistant in question understood the command or question, and second, whether it answered the request correctly.
The questions are divided into five categories: locations (for example, “Where is the nearest coffee shop?”), commerce (“Buy me napkins”), navigation (“How do I get to the bus station?”), information (“Who does Bahia play against tonight?”), and commands (“Remind me to call João at 2 pm today”).
In the 2019 tests, Google Assistant held on to the lead, understanding 100% of the questions and answering 92.9% of them correctly (in the previous survey, conducted in July of last year, that rate was 85.5%). Siri, in turn, remained in second place: it understood 99.8% of the questions (a slight improvement over 99% in the previous test) and answered 83.1% correctly, also up from 78.5% in the 2018 survey.
Alexa ranked third in the survey: it understood more questions than Siri (99.9%) but answered only 79.8% of them correctly. Still, the leap made by Amazon's assistant was remarkable: in last year's test, it had answered only 52.4% of the questions correctly.
Breaking the questions down by category, we see that Siri outperformed its two main competitors in commands, correctly handling 93% of requests; on the other hand, Apple's assistant ranked last in the commerce (68%) and information (76%) categories.
Can Apple keep up this upward trajectory? And, more importantly, can it accelerate enough for Siri to catch up with its main competitors? We'll have to wait and see.