Applications can detect dangerous users and ban them; here's how


Artificial intelligence (AI) is present in applications to improve their performance and make them more secure.

The technology can be applied to identify patterns in user preferences for content suggestions – such as Spotify's music recommendations, for example – or to prevent misunderstandings and even crimes by malicious actors.

According to 99, security and artificial intelligence features helped reduce the volume of serious incidents in the mobility app by 60% in 2019.


Airbnb, a vacation rental app, also has AI initiatives that scan social networks for signs of psychopathy when vetting user registrations.

Check below for details on how artificial intelligence works – especially its machine learning branch – and how platforms like 99 and Airbnb have been using the technology to offer safer services to users.

99 uses artificial intelligence to prevent dangerous behavior by malicious users – Photo: Rodrigo Fernandes / dnetc


What is artificial intelligence (AI)?

Artificial intelligence (AI) has been a well-established area of study since the 1950s.

It involves simulating intellectual interactions, allowing the creation of computer programs that can communicate and respond successfully when interacting with humans – acting, in effect, as a kind of synthetic "brain".

Artificial intelligence is applied mainly through its machine learning subfield, which allows systems to "guess" what the user wants.

This technique is common in applications to deliver a personalized experience to users – Spotify, for example, suggests music through machine learning.

The technology takes the user's listening history into account to make connections and find out which new tracks might appeal to them most.
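The idea behind this kind of recommendation can be sketched in a few lines. This is an illustrative, simplified example – not Spotify's actual algorithm – assuming each track is described by a small feature vector (for instance, energy, tempo, and mood):

```python
# Minimal sketch of content-based recommendation. Feature vectors and
# track names here are invented for illustration.
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(history, catalog, k=2):
    """Rank unheard tracks by similarity to the average of the user's history."""
    taste = [sum(col) / len(history) for col in zip(*history.values())]
    unheard = {t: f for t, f in catalog.items() if t not in history}
    return sorted(unheard, key=lambda t: cosine(catalog[t], taste), reverse=True)[:k]

history = {"track_a": [0.9, 0.8, 0.7], "track_b": [0.8, 0.9, 0.6]}
catalog = {**history, "track_c": [0.85, 0.85, 0.65], "track_d": [0.1, 0.2, 0.1]}
print(recommend(history, catalog))  # tracks closest to the user's taste come first
```

Here the "guess" is simply the unheard track whose features sit closest to the average of what the user already plays.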

How can AI predict suspicious behavior?

The investments made by 99 in artificial intelligence have already produced good effects, according to the company.

According to the company, the technology reduced the number of serious incidents on the platform by 60% over the course of 2019.

The AI algorithms deployed by the company can predict the risk of incidents by analyzing hazards along routes at certain times together with the history of user behavior.

The application also draws on data about risk areas and route sharing, which lets it anticipate the possibility of crimes in certain cases.

For example, a late-night, cash-paid ride requested by a newly created 99 account can be flagged as suspicious by the app.

As a security measure against a possible scam, the platform then requires additional identity validation (such as providing a CPF number or date of birth) or may even block the account automatically.
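The flow described above can be pictured as a simple risk score that accumulates over suspicious signals and triggers escalating responses. The signal names, weights, and thresholds below are invented for illustration – 99's real models are not public:

```python
# Hypothetical rule-based risk scoring for a ride request.
def risk_score(ride):
    score = 0
    if ride["account_age_days"] < 7:
        score += 2            # newly created account
    if ride["payment"] == "cash":
        score += 1            # cash rides are harder to trace
    if ride["hour"] >= 23 or ride["hour"] < 5:
        score += 1            # late-night request
    if ride["pickup_in_risk_area"]:
        score += 2            # known high-risk region
    return score

def decide(ride):
    score = risk_score(ride)
    if score >= 5:
        return "block"                 # automatic block
    if score >= 3:
        return "verify_identity"       # ask for CPF / date of birth
    return "allow"

ride = {"account_age_days": 1, "payment": "cash",
        "hour": 23, "pickup_in_risk_area": False}
print(decide(ride))  # prints "verify_identity"
```

In production such rules would typically be replaced or complemented by a trained model, but the escalation logic – allow, verify, or block – mirrors what the article describes.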

Investment in artificial intelligence has reduced serious incidents on the 99 platform, according to the company – Photo: Disclosure / 99

Another notable case is Airbnb's use of artificial intelligence.

The vacation rental platform is developing algorithms to check whether registered users show signs of psychopathy and could become threats to hosts or guests.

Airbnb's artificial intelligence scans posts on social networks and other content shared online by newly registered accounts.

Signals such as friendships with fake social media accounts, the production of hate messages or pornography, and involvement with drugs feed into a user trust index.

Based on the behavior detected in these publications, the system can profile the user and predict unwanted conduct.

The tool would help prevent malicious people from entering the platform, and avoid misunderstandings and scams.
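A trust index of this kind can be thought of as a weighted sum of detected signals. The signal names and weights below are assumptions made for illustration – Airbnb's actual scoring system is proprietary:

```python
# Hedged sketch of a weighted trust index built from online signals.
WEIGHTS = {
    "fake_friend_accounts": -0.3,
    "hate_speech_posts": -0.5,
    "drug_related_content": -0.4,
    "verified_identity": 0.4,
}

def trust_index(signals):
    """Start from a neutral score of 1.0 and adjust per detected signal count."""
    score = 1.0
    for name, count in signals.items():
        score += WEIGHTS.get(name, 0.0) * count
    return max(0.0, min(1.0, score))   # clamp to the [0, 1] range

signals = {"fake_friend_accounts": 1, "hate_speech_posts": 1}
print(round(trust_index(signals), 2))  # prints 0.2 -- a low score could flag the account
```

An account whose index falls below some threshold would then be sent for manual review or rejected, matching the gatekeeping role the article attributes to the tool.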

The Airbnb platform also uses artificial intelligence to ensure user protection – Photo: Karen Malek / dnetc

Concerns on the platform grew after a mass shooting in California in December 2019.

On that occasion, a party held at an Airbnb rental property in the San Francisco suburbs ended tragically, with injuries and five deaths.

Although the owner of the rented property had not authorized the party, it drew more than 100 guests.

However, Airbnb’s use of AI is not restricted to security measures.

The platform uses the technology to identify similarities among accommodations and destinations and, based on this data, offer suggestions to customers.

This is done through algorithms that analyze how much time the user spends viewing certain listings.

The platform thus identifies user preferences and suggests the options most likely to result in a reservation.
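Using viewing time as an implicit preference signal can be sketched as follows. The listing attributes and times are invented for illustration; real systems weigh many more signals than dwell time alone:

```python
# Illustrative sketch: infer preference weights from time spent per listing.
from collections import defaultdict

def preference_weights(view_log):
    """Aggregate seconds spent per listing attribute, then normalize to weights."""
    totals = defaultdict(float)
    for listing, seconds in view_log:
        for attr in listing["attrs"]:
            totals[attr] += seconds
    grand = sum(totals.values()) or 1.0
    return {attr: t / grand for attr, t in totals.items()}

view_log = [
    ({"attrs": ["beach", "pool"]}, 120.0),   # user lingered on beach houses
    ({"attrs": ["beach"]}, 90.0),
    ({"attrs": ["city", "loft"]}, 10.0),     # barely glanced at city lofts
]
prefs = preference_weights(view_log)
print(max(prefs, key=prefs.get))  # prints "beach" -- the attribute that held attention
```

The resulting weights could then rank future search results, surfacing the kinds of listings the user dwells on longest.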

Despite the advantages, artificial intelligence has also generated controversy.

An increasingly evident risk arising from the use of AI is the reproduction of prejudices.

Many studies in different areas have already shown that such systems can reinforce racial bias, for example.

A study of the COMPAS system, used in the United States to assess the risk of criminal recidivism in Broward County, Florida, found that it disproportionately classified Black defendants as high-risk.

FaceApp, the application that went viral with its aging effect, has also been involved in similar controversies.

The app's "beautification" filter made users' skin lighter.

Google Assistant: four curiosities about the software