A recent study by OptoFidelity on touchscreen accuracy proves, once again, that all it takes is the words "Apple" and "failure" in an article's title for it to gain huge traction. In this case, the headline in question was "TPPT tests from OptoFidelity show significant flaws in the accuracy of touches on iPhones".
In short, they tested the touch accuracy of an iPhone 5s, an iPhone 5c, and a Samsung GALAXY S III. The test was performed with robotic (mechanical) fingers, which made hundreds of taps/gestures on the screens. The locations where those taps were registered were then compared to the exact locations where the taps were made. If a tap was registered within 1 mm of the exact spot the mechanical finger touched, it appears on the map as a green dot; otherwise, as a red dot.
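The pass/fail rule described above can be sketched in a few lines. This is just an illustration of the 1 mm criterion; the function name and coordinates are hypothetical and not part of OptoFidelity's actual tooling.

```python
import math

# Hypothetical sketch of the 1 mm pass/fail rule: a tap is "green" if it
# was registered within 1 mm of the actual contact point, "red" otherwise.
# Coordinates are (x, y) pairs in millimeters.
def classify_tap(actual, registered, threshold_mm=1.0):
    dx = registered[0] - actual[0]
    dy = registered[1] - actual[1]
    distance = math.hypot(dx, dy)  # Euclidean distance between the points
    return "green" if distance <= threshold_mm else "red"

print(classify_tap((10.0, 20.0), (10.3, 20.4)))  # 0.5 mm off -> green
print(classify_tap((10.0, 20.0), (11.2, 20.9)))  # 1.5 mm off -> red
```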
You can check out the result below:
Yes, according to them, touch accuracy on the screens of the iPhone 5s and 5c is much worse than on the GALAXY S III, which did very well in the test and lost accuracy only at the edges of the screen. For the curious, the full study can be read in this PDF.
However, some people are questioning the methodology of the study, which in turn casts doubt on the result OptoFidelity obtained. One of them is Nick Arnott, of Neglected Potential.
As he rightly noted, the green area in the iPhone results corresponds exactly to the region that is easily reached by the thumb, regardless of which hand is holding the device (although he used the right hand in his example). Try it yourself: pick up your device and notice how your thumb covers this entire area without stretching, quite unlike trying to reach the upper left/right corner, for example.
This image illustrates exactly the point Arnott is making:
In it, we can see the results of the accuracy test overlaid on the iOS keyboard. In the center, on the T and Y keys, for example, the black circles representing the mechanical finger's taps contain green dots almost perfectly centered inside them, indicating that the iPhone registered those touches very close to where they actually happened.
But move a little to the left or to the right and the inaccuracy begins. Notice that the dots start to shift in a consistent direction. On the letters E and W, for example, the green dots drift toward the left side of the black circle, which marks the exact location of the mechanical finger's tap. In the next area, the letter Q, the iPhone registers the taps even farther to the left of where the real touch took place. The same thing happens on the other side, of course, with the direction of the offsets reversed.
The test's conclusion treats this as a flaw in the accuracy of that part of the touchscreen, but is it a failure or a feature?
For Arnott, and indeed for anyone, there is a pattern to these inaccuracies: the farther you move from the area easily covered by your thumb, the greater the touch offset. The clever trick here is precisely that this "inaccuracy" ends up helping the user, bringing the registered touch closer to the spot the finger actually meant to hit. Notice that the exact part of the finger that contacts the screen changes depending on how far it is stretched. Again, run a test (the most extreme one): try to tap the iPhone's battery icon at the top. You can see that the part of the finger touching the screen is different from the part that touches when you tap the middle of the screen, for example.
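Arnott's hypothesis, that the registered touch is deliberately shifted based on how far it lies from the thumb's comfortable zone, can be sketched as a simple compensation function. Everything here is hypothetical: the coordinates, the `gain` value, and the very shape of the correction; nothing reflects Apple's actual calibration.

```python
# Hypothetical sketch of the compensation Arnott describes: shift each
# raw touch toward the easy-reach thumb area, with a correction that
# grows the farther the touch is from that area. All values are
# illustrative; nothing here reflects Apple's real algorithm.
def compensate(raw, thumb_home=(40.0, 100.0), gain=0.05):
    """raw and thumb_home are (x, y) screen coordinates in mm."""
    dx = raw[0] - thumb_home[0]
    dy = raw[1] - thumb_home[1]
    # The offset is proportional to the distance from the comfortable
    # zone, pulling the registered point back toward where the user aimed.
    return (raw[0] - gain * dx, raw[1] - gain * dy)

near = compensate((42.0, 102.0))  # near the easy zone: tiny correction
far = compensate((5.0, 10.0))     # far corner: much larger correction
```

Under this model, taps inside the comfortable zone are barely adjusted, while taps in a far corner get pulled noticeably back toward it, which matches the drift pattern seen on the keyboard plot.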
Our perception of the screen also varies; after all, we don't look at the device at a right angle (90°) at all times. Summing up Arnott's point: the OptoFidelity test is entirely robotic/automated and doesn't account for the perfectly normal variations in how a human actually uses such a device.
The robots saw the targets of their taps on the screen at a perpendicular angle. They are also touching the screen at a right angle, every time. This is generally not the way people interact with their phones.
What the test *does* show is how accurately touches are registered relative to where people want to touch. Arnott's bet is exactly this: although he has no way to prove it, he believes Apple bases its studies and tests precisely on this kind of behavior, trying to understand where the user wants to touch based on how real people hold an iPhone, something entirely different from what OptoFidelity evaluated.
For the skeptics out there, it's worth remembering a commercial Apple aired around the iPhone 5 launch. Check it out:
If the company went so far as to produce a commercial highlighting the screen size it considers perfect and how, with just one thumb, we can reach practically the entire screen of the device, I'd put my hand in the fire and side with Arnott: Apple takes all of this (real-world use of the product) into account when calibrating touch accuracy on the device's screen.
Finally, since OptoFidelity sells automated testing solutions, it's worth making clear that it has also carried out studies similar to those by Agawi, which we have already covered here on the site. In those, a camera measures an app's response time, that is, the latency between the moment a user (in this case, a mechanical finger) touches the device's screen and the moment the response appears on the display.
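The latency measurement described above boils down to a timestamp difference: the camera footage yields the moment of contact and the moment the display responds. A minimal sketch with hypothetical timestamps:

```python
# Minimal sketch of the latency measurement described above: camera
# footage gives the moment the finger contacts the screen and the moment
# the display visibly responds; the difference is the touch latency.
# The timestamps below are hypothetical, in milliseconds.
def touch_latency_ms(contact_ts, response_ts):
    return response_ts - contact_ts

# e.g., contact seen at t=1000 ms, screen response seen at t=1072 ms
latency = touch_latency_ms(1000.0, 1072.0)  # 72.0 ms
```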
In those tests, the iPhones achieved a great result.