ImageNet Roulette is an art project that uses artificial intelligence (AI) to categorize people's faces from photos. The platform became hotly contested last week, however, after labeling photos of people of diverse ethnicities with offensive and racist terms.
Given the criticism, those responsible clarified that the bias in the results, and the discussion around it, is one of the project's goals. "It reveals the deep problems with classifying humans, whether by race, gender, emotion or trait. It is political through and through, and there is no simple way to remove the bias," AI researcher Kate Crawford explained on Twitter.
ImageNet Roulette labeled a photo of Vin Diesel as "skinhead" Photo: Reproduction
ImageNet Roulette is part of the Training Humans exhibition conceived by artist Trevor Paglen, AI researcher Kate Crawford and developer Leif Ryge. The work is on display in Milan, Italy, but can also be freely accessed on the Internet at: imagenet-roulette.paglen
The platform works by combining the WordNet database, a collection of English words and categories, with images gathered from online search engines and artificial intelligence processing techniques. The AI analyzes the content of the image and, based on that analysis, assigns words that describe, or pass some judgment on, the photographed face.
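The labeling step described above can be sketched in a few lines. This is a minimal illustration, not the project's actual code: the label list is a hypothetical subset of WordNet "person" categories, and the scores stand in for the output of an image classifier.

```python
# Hypothetical subset of WordNet-style "person" categories (illustrative only).
WORDNET_PERSON_LABELS = ["athlete", "physician", "pilot", "newsreader"]

def top_label(scores):
    """Return the category with the highest classifier score.

    `scores` is one number per label, as a (hypothetical) image
    classifier might produce after analyzing a face photo.
    """
    best = max(range(len(scores)), key=lambda i: scores[i])
    return WORDNET_PERSON_LABELS[best]

# Example: the second category has the highest score.
print(top_label([0.10, 0.70, 0.05, 0.15]))  # -> "physician"
```

The key point is that the system can only ever answer with a term from its fixed vocabulary, so any offensive category present in that vocabulary can surface as a result.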
The website allows you to upload an image or use a link to a photo available on the Internet Photo: Reproduction / Filipe Garrett
The controversy surrounding the experiment arose because the word database contains common, harmless terms alongside aggressive and even racist ones. As a result, the AI ends up assigning both positive and negative judgments to people in photos. For example, a photo of a man may be tagged with a term like "criminal" just as easily as "athlete" or "physician". The chances of the AI choosing a negative term increase if the person in question is not Caucasian.
Twitter users have published some examples of unpleasant results, which involve biased descriptions of people of different ethnicities. In other cases, the tool labels someone a criminal or applies offensive terms based on skin color or facial features. In response to the racist results, Kate Crawford, an expert in artificial intelligence and one of the project's creators, commented publicly on the criticism via Twitter.
"It reveals the deep problems with classifying humans – be it race, gender, emotions or characteristics. It's politics all the way down, and there's no simple way to 'debias' it."
– September 16, 2019
Crawford explains that the high frequency of aggressive results from ImageNet Roulette points to deep problems in classifying humans, whether by race, gender, emotion or trait. For the researcher, rather than a flaw, the bias of ImageNet Roulette is a starting point for discussing how we classify the people around us: in the end, the tool merely surfaces the terms that humans themselves tend to use when judging and characterizing others.
It is also worth noting the security risk of submitting personal photos to unknown services on the Internet. In addition, uploading an image allows it to be added to the artificial intelligence database used to train the system over time.