
Apple publishes its first research on artificial intelligence

At the beginning of this month, we learned that Apple had begun allowing its researchers to publish their findings on artificial intelligence, something that was previously strictly prohibited.

Taking advantage of this freedom, on December 22 the company published its first paper, titled "Learning from Simulated and Unsupervised Images through Adversarial Training". Basically, it describes a system very similar to the facial/object recognition that already exists in the Photos app, but in a more advanced form.

In this work, we propose simulated + unsupervised learning, whose aim is to improve the realism of synthetic images from a simulator using unlabeled real data. The enhanced realism allows better machine learning models to be trained on large datasets without any data collection or human annotation effort.

Artificial Intelligence Research

In the paper, Apple discusses the positives and negatives of using synthetic images in the facial/object recognition process. It is much easier (and also cheaper) to use images generated by a computer (which are already automatically labeled and identified) than real images, which need to be identified one by one by a human being. However, quality and "realism" are lost when synthetic images are used.

As you can see in the illustration above, synthetic images are very different from real ones. When the program was trained on them, its "judgment" of what a given object was turned out to be either non-existent or erroneous. The research therefore introduces a way to ease the process: the synthetic image is passed through what the authors call a refiner, which transforms it into something much more realistic based on real images, thereby improving the quality of recognition.
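To make the idea of a refiner trained adversarially on unlabeled real images more concrete, here is a minimal sketch in PyTorch. The network sizes, loss weights, and training loop are illustrative assumptions of how such a refiner could be wired up, not Apple's actual architecture or code.

```python
# Hypothetical sketch of adversarial refinement of synthetic images.
# Architectures and hyperparameters below are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Refiner(nn.Module):
    """Maps a synthetic image to a more realistic-looking one of the same size."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 1),  # keep the image shape unchanged
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Tries to tell refined (fake) images apart from real, unlabeled ones."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 1),  # single real/fake logit
        )

    def forward(self, x):
        return self.net(x)

def train_step(refiner, disc, opt_r, opt_d, synthetic, real, reg_weight=0.1):
    """One adversarial step: the refiner tries to make synthetic images look
    real while staying close to the original, so the automatic labels stay valid."""
    bce = nn.BCEWithLogitsLoss()

    # Refiner update: fool the discriminator + stay close to the synthetic input.
    refined = refiner(synthetic)
    adv_loss = bce(disc(refined), torch.ones(refined.size(0), 1))
    reg_loss = F.l1_loss(refined, synthetic)  # keeps refined image near the original
    opt_r.zero_grad()
    (adv_loss + reg_weight * reg_loss).backward()
    opt_r.step()

    # Discriminator update: real images vs. refined (detached) images.
    d_loss = bce(disc(real), torch.ones(real.size(0), 1)) + \
             bce(disc(refined.detach()), torch.zeros(refined.size(0), 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
```

The key point the sketch tries to capture is that the real images never need labels: they only serve to teach the discriminator what "realistic" looks like, while the synthetic images keep their automatically generated annotations.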

The research expands on this subject, presenting each experiment both qualitatively and through a user study. The work only covers still images (eyes, and hands forming words and/or letters from American Sign Language), but the Apple research team explicitly says it hopes to run the same kind of tests with videos.

(via MacRumors)