
Siri improves in virtual assistant testing, but is still a long way from Google Assistant

That Siri still has serious problems being taken seriously in the world of digital assistants, no one will question. But at least Apple seems to be working on the tool's evolution, if this research by Loup Ventures portrays reality accurately.

For some years now, analysts at the firm have been running a study with the top four digital assistants (Siri, Google Assistant, Alexa and Cortana) which consists of asking 800 questions across 5 categories: location (recommending nearby places such as restaurants and parks); commerce (researching and purchasing items through the assistant); navigation (using the device's navigation system); information (common questions about events and people); and command (setting up actions on the device, such as an alarm or a reminder).

The results are then analyzed under two factors: whether the assistant understood the question and whether it answered it correctly. Comparing the answers obtained in 2017 and 2018, the firm concluded that Siri did indeed evolve considerably in one year: in April 2017 it understood 95% of the questions and answered 66.1% of them correctly; in the most recent study, 99% of the requests were understood and 78.5% of them were answered correctly.
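Loup Ventures has not published its scoring code, but the two-factor tally it describes boils down to two percentages over the 800 questions. Here is a minimal Python sketch, with a hypothetical `Result` record per question (the data structure is an assumption, not the firm's):

```python
from dataclasses import dataclass

@dataclass
class Result:
    category: str     # "location", "commerce", "navigation", "information" or "command"
    understood: bool  # did the assistant parse the query?
    correct: bool     # did it answer correctly?

def score(results: list[Result]) -> tuple[float, float]:
    """Return (understood %, answered-correctly %) over all questions."""
    n = len(results)
    understood = sum(r.understood for r in results) / n * 100
    correct = sum(r.correct for r in results) / n * 100
    return understood, correct

# Illustrative only: Siri's reported 2018 numbers over 800 questions
# (792 understood, of which 628 answered correctly; 8 not understood)
results = (
    [Result("information", True, True)] * 628
    + [Result("information", True, False)] * 164
    + [Result("information", False, False)] * 8
)
print(score(results))  # (99.0, 78.5)
```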

The improved numbers, however, are still no match for Google Assistant, which in the latest test understood 100% of the questions and answered 85.5% of them correctly. Siri, nevertheless, now ranks second in the survey:

Loup Ventures test with digital assistants, 2018

It is worth noting that Cortana and Alexa are at a certain disadvantage in the measurement: while Siri and Google Assistant were tested in their "home" environments (the former on an iPhone, the latter on a Pixel XL), Microsoft's and Amazon's assistants were asked the questions through their iOS apps. One may question whether the results would be the same if the test were run on a Windows Phone or an Amazon Echo, for example.

Dividing the questions into their categories, it is possible to note that Siri surpassed its Mountain View competitor in only one of them, "command", which suggests that Apple's effort to make its assistant more capable within the domains of the iPhone has borne fruit. Apple's assistant, on the other hand, lost second place in the "information" category to Alexa.

Loup Ventures test with digital assistants, 2018

According to the researchers, the tendency is for digital assistants to keep improving rapidly, as companies get better and better at overcoming their main obstacle: correctly interpreting the context of a question. Loup Ventures predicts that two categories that will gain prominence in these tools in the near future are ride-hailing apps, like Uber, and payments.

Of course, we must take into account that these measurements were all made in English; if the test were carried out in our good old Portuguese, we would certainly see very different (and inferior) results. Will it take long for other languages to catch up to English in these assistants?

via Apple World Today