Thoughts on the iPhone X

Yesterday Apple revealed the iPhone X, which had been extremely hyped for years. The highlight feature of the phone in my opinion is clearly its HDR display. Apple is the first mobile vendor to ship end-to-end support for HDR, while Samsung’s Galaxy S8 was the first device to at least feature full hardware support.

I wrote about this at great length over the past couple of months, so for all the details check out these three articles. The first article was mostly wrong about the UI elements, but overall the three were hopefully pretty accurate.

Much to my surprise, though, the X’s display really appears to use a diamond PenTile subpixel layout. Apple’s own marketing states that it “uses subpixel anti-aliasing to tune individual pixels for smooth, distortion-free edges.” That’s effectively confirmation, and exactly the same as what Samsung Electronics has always done with OLED. This means there’s definitively no near-term hope that S-Stripe can be scaled up economically to large phone panel sizes at high pixel densities. Samsung hasn’t been holding back.
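
As background on why subpixel anti-aliasing matters here, the sketch below shows the generic diamond-PenTile subpixel budget. The 2436×1125 figure is the advertised logical resolution; the per-channel split (one green per pixel, red and blue shared between neighbors) is the standard PenTile property, not anything Apple has disclosed about this specific panel.

```python
# Generic diamond-PenTile subpixel arithmetic at the advertised resolution.
pixels = 2436 * 1125

green_subpixels = pixels        # one green subpixel per logical pixel
red_subpixels   = pixels // 2   # red shared between neighboring pixels
blue_subpixels  = pixels // 2   # blue likewise

print(f"green={green_subpixels:,} red={red_subpixels:,} blue={blue_subpixels:,}")
# Rendering has to compensate for the sparser red/blue grids, which is what
# "subpixel anti-aliasing" refers to here.
```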

One thing I will add: while I appreciate Apple’s intent in advertising a 1,000,000:1 contrast ratio rather than an infinite one, it’s also fair to call it infinite for OLED because of the near-perfect blacks. Not shipping ProMotion is probably due to power budgeting, though it could also be a performance constraint.
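
To spell out why both framings are defensible, here is the contrast-ratio arithmetic. The luminance numbers are illustrative placeholders, not measurements of this panel.

```python
# Contrast ratio is just peak white luminance divided by black luminance.
def contrast_ratio(white_nits: float, black_nits: float) -> float:
    return float("inf") if black_nits == 0 else white_nits / black_nits

print(contrast_ratio(625, 0.000625))  # 1,000,000:1 with a tiny assumed black floor
print(contrast_ratio(625, 0.0))       # inf -- an OLED pixel that is fully off
```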

There are pretty much zero surprises on the silicon side overall. I might write more about the A11 another time, but the IPC improvements are relatively modest. I know many people will raise an eyebrow at this, but I would encourage you to completely ignore Geekbench. And I can’t emphasize enough how almost everything written online about Apple’s CPUs is wrong.

If you subtract out the efficiency gains from removing 32-bit support, you’re left with maybe very roughly a 15% improvement in CPU IPC for the big cores, assuming equivalent clocks to the A10. Apple could have pushed performance and efficiency further, if not for 10FF being really bad. The era of the hyper Moore’s Law curve in mobile is officially over, in my opinion, though maybe the A10 already signaled that. It’s all rough sledding from here on out, based on the state of foundry challenges.
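
To make that decomposition explicit, here is a back-of-the-envelope sketch. Every input number is my own illustrative assumption, not a disclosed figure; the point is only how the subtraction works.

```python
# Decompose a single-thread speedup into clock and IPC, then back out a
# guessed uplift attributable to dropping 32-bit support.
total_speedup = 1.25             # assumed overall big-core gain over the A10
clock_ratio = 1.05               # assumed clock increase over the A10
gain_from_dropping_32bit = 0.05  # guessed uplift from going 64-bit-only

ipc_ratio = total_speedup / clock_ratio                  # IPC gain at equal clocks
ipc_ratio_ex_32bit = ipc_ratio / (1 + gain_from_dropping_32bit)

print(f"IPC gain at equal clocks: {ipc_ratio - 1:.0%}")                    # ~19%
print(f"IPC gain excluding 32-bit removal: {ipc_ratio_ex_32bit - 1:.0%}")  # ~13%
```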

The design of the CPU itself is completely unsurprising. Once it was leaked that it was hexacore, it was obvious that the smaller cores would be designed for a higher performance target than last year’s Zephyr cores. The design is fully cache coherent but also clearly not ARM SMP.

Regarding “Apple’s first custom GPU,” well… I’ll put it this way: there are probably even people at Apple who don’t really consider it to be fully custom. Personally, I don’t consider it to be custom based on what I know, but I can’t say more. Sorry to be vague, but it’s complicated.

In terms of performance, Apple only claimed a 30% improvement, which is not massive. 50% power at iso-performance on 10nm is also not necessarily impressive if you think the GPU in the A10 burned too much power in the first place. The reality, again, is that TSMC’s 10FF is really bad, so Apple probably couldn’t achieve more. That Apple went with a tri-core design is interesting. New architectural features are the biggest thing Apple is touting, but I know nothing about graphics and can’t comment on those.
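
For what those two claims actually imply, here is the normalized arithmetic. The inputs are just the claims as stated, with the A10 normalized to 1.0; nothing here is a measurement.

```python
# Normalize the two GPU claims against the A10.
a10_perf, a10_power = 1.0, 1.0

a11_peak_perf = a10_perf * 1.30   # "30% faster" (power at that point unstated)

a11_iso_power = a10_power * 0.50  # "same performance at half the power"
iso_efficiency_gain = (a10_perf / a11_iso_power) / (a10_perf / a10_power)

print(f"Peak performance vs. A10: {a11_peak_perf:.2f}x")
print(f"Perf/W at iso-performance vs. A10: {iso_efficiency_gain:.1f}x")  # 2.0x
```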

Apple didn’t say anything about the other interesting things it did on the silicon side, so I won’t talk about them either.

Setting aside the (no doubt really expensive) IR system, the single most impressive advancement might be the new camera color filter. This is a really big deal, because it’s insanely hard to improve on the Bayer RGBG color filter array. We haven’t seen any vendors attempt to do this in mobile in years, but it was inevitable that some vendor would try again to ship an alternative. Whether Apple’s implementation is RGBW à la Aptina or another subpixel arrangement, I have no idea. I know essentially nothing about image filtering and demosaicing, so there’s nothing more I can say. For the cameras themselves, Apple has finally shipped larger image sensors, though we don’t know the pixel size yet.
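
For readers unfamiliar with the baseline being improved upon, here is a minimal sketch of the classic Bayer mosaic. It is purely illustrative background; the layout of Apple’s new filter isn’t public, and nothing below describes it.

```python
# A minimal sketch of the classic Bayer (RGGB) color filter array.
import numpy as np

def bayer_mask(height: int, width: int) -> np.ndarray:
    """Tile the 2x2 Bayer cell (R G / G B) over a sensor of the given size."""
    cell = np.array([["R", "G"],
                     ["G", "B"]])
    reps = (height // 2 + 1, width // 2 + 1)
    return np.tile(cell, reps)[:height, :width]

print(bayer_mask(4, 6))
# Half the photosites sample green, a quarter red, a quarter blue; a
# demosaicing step then interpolates the two missing channels at every pixel.
```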

The A11’s video encoding performance is also extremely impressive, and it’s fair to say Apple is way ahead of the competition. 4K60 encode simply requires a massive amount of data bandwidth. I’m not sure at this point if deep learning is being applied in this area or not, but Apple is definitely employing its own special techniques to accomplish this.
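
To put a rough number on that, consider just the raw pixel data an encoder has to ingest for 4K60, before any reference-frame reads for motion estimation. The 8-bit 4:2:0 input format is an assumption on my part.

```python
# Raw input traffic for 4K60 encode, assuming 8-bit 4:2:0 (1.5 bytes/pixel).
width, height, fps = 3840, 2160, 60
bytes_per_pixel = 1.5

raw_input_rate = width * height * bytes_per_pixel * fps   # bytes per second
print(f"Raw input alone: {raw_input_rate / 1e9:.2f} GB/s")  # ~0.75 GB/s
```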

Face ID seems to be significantly slower than Touch ID but more secure. I was skeptical about the latter claim, until Apple explained the sensors and the dot projector. That seems like more than enough data, but keep in mind there will be corner-case problems, bugs, and, oh right, sometimes the deep learning classifier will just miss. Reliability and robustness also matter, in other words. And as Apple tried to lightly dismiss, it won’t work for identical twins. Overall, Face ID should be considered a convenience regression relative to Touch ID, a necessary concession for the thin bezels of the display, since Apple failed to get fingerprint recognition to work through the display. It works, but Apple is surely unhappy internally.

I recently almost published an article on things I believed Apple needed to improve about the iPhone. The five areas I was going to highlight were: speaker quality, portrait mode quality, shipping a dedicated DLA, camera sensor pixel size, and switching several imaging algorithms to deep learning implementations. As a pleasant surprise, Apple addressed all five of these areas (though I had strong hunches it would do so).

Waterproofing hurts speaker quality, and the primary speaker in the iPhone 7 regressed in some regards despite the improvements to volume and dynamic range. Portrait mode on the iPhone 7 Plus can honestly produce some pretty poor results. The people who work on these algorithms have PhDs in computational photography, though, so I probably shouldn’t criticize what I don't know. I don’t think the previous implementation used deep learning, but I could be wrong. For portrait mode on the front camera of the iPhone X at least, Apple appears to have switched to a deep learning implementation, if I understood Phil Schiller correctly.

Dedicated deep learning ASICs like Apple’s Neural Engine are clearly the direction the industry is moving, so it’s hardly unique, or honestly that hard, for Apple to do the same. Inference is too important not to specifically accelerate, so this is something Apple and everyone else clearly need. Implementations should be all over the map and vary quite a bit in terms of results. No one knows how these chips should ideally be designed, so everyone will be experimenting for many years. Apple did disclose a tiny amount of detail on the Neural Engine, such as its being a dual-core design, which was nice of them to do.

There’s also a new accelerometer and gyroscope, which is unsurprising if you’re familiar with the sensor requirements of VR platforms such as Oculus’s Gear VR or Google’s Daydream. Previously Apple has preferred to source lower-power sensor implementations, so perhaps these new sensors are more accurate but draw more power. The cameras themselves are now also individually calibrated, which is probably really important.

Additionally, there is now hardware codec support for FLAC, which was to be expected given software codec support in iOS 11. ALAC is basically irrelevant now.

The biggest mystery to me going into yesterday’s keynote was the iPhone X’s wireless charging. Since it was revealed to use completely standard Qi charging, I will hazard a guess as to what this is really all about. It’s no secret that Apple wants to push for a completely wireless future, and the Lightning connector’s days are clearly numbered. To get to that point will require far-field wireless charging using resonant technologies.

My understanding, and what I don’t think people realize, is that a resonant specification depends on an inductive specification. If this is the reasoning, then Apple has to help propagate an industry standard to make inductive charging as ubiquitous as possible around the world. Thus, Apple would be pushing for wireless charging now even if it doesn’t see a ton of value in inductive alone. This is just my theory, so I could certainly be wrong.

As an aside, Schiller had to directly contradict his own past comments about the utility of wireless charging (and NFC), which I think is an excellent example of why you should generally avoid saying negative things about other companies or people.

I might write more about the Apple Watch Series 3 in the future, but I at least previously explained all of the cellular details here.

Technical corrections are always appreciated.