Apple artificial intelligence researchers will now be able to publish their findings [updated]

Let's take a trip back to October 2015, when Bloomberg published a story on how Apple could be hampering its own progress in artificial intelligence by keeping its work in the field secret. After that piece, a few more "attacks" were made on Apple's work in this area, including harsh comments about Siri from the influential journalist Walt Mossberg.

Back in the present day, this picture at Apple seems to be about to change for the better. Through its newly hired director of artificial intelligence research, Russ Salakhutdinov, Apple announced at the Neural Information Processing Systems (NIPS) conference that it will now allow its researchers to publish their findings in AI.

Apple starts publishing (AI research), according to @rsalakhu at #nips2016

Previously, the Cupertino giant treated all of its discoveries as "intellectual property". Now, in addition to being able to publish their findings, its researchers will also be able to exchange ideas with outside experts. Whatever made the company change its mind (at least with respect to AI), it is great that it is now following the same path as companies like Google and Facebook, which already publish their findings in this and several other areas.

With this initiative to share research in the area, the various hires, and the research and development (R&D) center in Japan that is also reportedly focused on AI, we hope that Apple will soon be able to bring developments and innovations to Siri and to other segments that may use the technology.

(via AppleInsider)

Update · 12/08/2016 at 10:24

Quartz had access to some of the slides from this presentation on artificial intelligence in Barcelona, Spain. Apparently, several topics were addressed, including: health and vital signs, volumetric detection with LiDAR (Light Detection And Ranging), prediction with structured outputs, image processing and colorization, intelligent assistants and language modeling, and activity recognition.

One of the slides showed Apple's image recognition algorithms processing 3,000 images per second (!), twice as fast as Google's. Another showed Apple's ability to build a neural network 4.5x smaller, and also faster, without any loss in accuracy. One process in particular is based on a technique in which one neural network teaches another to make decisions according to the situation; this would be tied to work on photos and audio.
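For readers curious about what "one network teaching another" looks like in practice, here is a minimal sketch of the teacher-student (knowledge distillation) idea, assuming a PyTorch setup; the network sizes, the distillation_loss helper, and the temperature value are illustrative assumptions, not details from Apple's presentation.

```python
# Minimal teacher-student (distillation) sketch; sizes and names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical large "teacher" network and a much smaller "student" network.
teacher = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 10))
student = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 10))

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # The student is trained to match the teacher's softened output distribution.
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    log_probs = F.log_softmax(student_logits / temperature, dim=1)
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2

x = torch.randn(32, 512)          # a batch of example inputs
with torch.no_grad():
    t_logits = teacher(x)         # teacher's predictions, used as soft targets
s_logits = student(x)
loss = distillation_loss(s_logits, t_logits)
loss.backward()                   # gradients update only the student network
```

The payoff of this approach is exactly what the slide hints at: a smaller, faster model that preserves the accuracy of the larger one it learned from.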

In addition to these, other areas were covered in the presentation: deep generative models, model compression, holistic scene understanding, model reliability, deep reinforcement learning, unsupervised learning, transfer learning, reasoning, attention and memory, and efficient training in distributed computing.

Whatever the reason Apple kept this information to itself for so long, now others may also benefit from its discoveries and, in the end, we all win.

(via MacRumors)