The new iPhone SE was officially launched a few days ago and, by now, we already know a multitude of features of Apple's new entry-level smartphone. Still, some details deserve a closer look, such as the camera.
As seen in iFixit's teardown, the new iPhone SE uses a sensor identical to that of the iPhone 8. Even so, the SE's post-processing is better than the 8's thanks to the A13 Bionic chip and new algorithms.
In fact, the new iPhone SE is the first Apple device that can generate a depth effect (in Portrait Mode images) using only software techniques, with the help of machine learning, according to a new analysis published by the developers of Halide, the prominent photography app.
Let's get to the details: if the cameras of the new iPhone SE and the iPhone 8 are the same, why can't the latter create images with depth? According to the developers of Halide, the newly launched device's camera can do more because it uses “monocular depth estimation” — inferring depth from a single 2D image.
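To make the idea concrete, here is a minimal sketch of that data flow in Python. The real iPhone SE runs a trained neural network on the A13 Bionic; the `estimate_depth` function below is a toy stand-in (a simple ground-plane heuristic, an assumption for illustration only) that just shows the shape of the problem: one RGB frame in, one per-pixel depth map out, with no second lens or depth sensor involved.

```python
import numpy as np

def estimate_depth(image: np.ndarray) -> np.ndarray:
    """Toy stand-in for a monocular depth network.

    Apple's pipeline uses a trained neural network; here we fake a
    prediction with a ground-plane heuristic (lower rows = closer)
    purely to illustrate the interface: a single 2D image is the
    only input, and the output is a depth value per pixel.
    """
    h, w, _ = image.shape
    # Depth grows toward the top of the frame (0.0 = near, 1.0 = far).
    rows = np.linspace(1.0, 0.0, h).reshape(h, 1)
    return np.broadcast_to(rows, (h, w)).copy()

# One ordinary 2D frame -- no hardware depth data attached.
frame = np.zeros((4, 6, 3), dtype=np.uint8)
depth = estimate_depth(frame)
print(depth.shape)  # (4, 6): one depth value per pixel
```

The point of the sketch is only the contract: everything Portrait Mode needs downstream is derived from that single image.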
In addition, many will certainly argue that the iPhone XR also has only one camera and can take pictures in Portrait Mode, which would make the new SE nothing avant-garde in that area. However, as highlighted in the developers' analysis, the XR does not generate depth images exclusively in software; it gets help from the camera hardware in the process.
They also found that, unlike other iPhones, the new SE can create a depth map even from a picture of another, already-taken photo — something the Halide app can capture when using the built-in Portrait Mode.
On the other hand, the Portrait Mode of the new iPhone SE is somewhat limited, as it only works with people (just like on the iPhone XR) — a limitation tied to the neural network that powers the feature.
Precisely: when a Portrait Mode image contains no person, the device fails to create a depth map because it cannot accurately recognize the shape of the subject. The following photo was taken using Halide's own Portrait Mode, which expands that option on the SE and XR iPhones.
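Once a depth map exists, the background-blur part of Portrait Mode is conceptually simple: keep pixels the map marks as near, blend in a blurred copy where it marks far. The sketch below assumes a precomputed depth map in `[0, 1]` and uses a crude 3×3 box blur; the real pipeline uses a far more refined matte and bokeh simulation, but the masking idea is the same.

```python
import numpy as np

def portrait_blur(image: np.ndarray, depth: np.ndarray,
                  threshold: float = 0.5) -> np.ndarray:
    """Blur only the pixels a depth map marks as background.

    `depth` holds per-pixel values in [0, 1] (0 = near subject,
    1 = far background). Wherever depth exceeds `threshold`, the
    blurred copy is used; the subject stays sharp.
    """
    img = image.astype(float)
    # Crude 3x3 box blur built from shifted copies (np.roll wraps at
    # the edges, which is acceptable for a sketch).
    blurred = sum(np.roll(np.roll(img, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    mask = (depth > threshold)[..., None]  # background pixels
    return np.where(mask, blurred, img).astype(image.dtype)

# Synthetic scene: dark "subject" on the left, bright "background"
# on the right, with a matching hand-made depth map.
image = np.zeros((8, 8, 3), dtype=np.uint8)
image[:, 4:] = 255
depth = np.zeros((8, 8))
depth[:, 4:] = 1.0
out = portrait_blur(image, depth)
# Subject half is untouched; background edge pixels are softened.
```

Seen this way, the hard part of Portrait Mode is not the blur itself but producing a good depth map — which is exactly what the SE now does with software alone.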
Finally, the most interesting part: as Apple's machine learning improves, Portrait Mode could expand to even more devices — perhaps, who knows, even to the rear camera of iPads.
If you are interested, it is also worth checking out the analysis of the new iPad Pro's LiDAR scanner, likewise done by the developers of Halide.
via The Loop