Expectations for WWDC 2017

Apple has some obvious priorities to address this year at its Worldwide Developers Conference (WWDC). Firstly, it needs to significantly redesign iOS for the upcoming OLED iPhone. The OLED transition carries major technical considerations for both hardware and software, extending far past a dark mode for apps (which I would also anticipate). Apple has, of course, already dealt with OLED on the Apple Watch. I don’t think it will try to match iOS with watchOS aesthetically, but perhaps their overall appearances will grow a little closer. And I highly doubt iOS will stop rendering black text on white backgrounds for maximum readability.

iOS’ dominant white and blue are pretty much the worst colors for an OLED user interface, though, so something closer to watchOS’ use of black, grey, and green is needed, at least to some degree. Apple could perhaps even selectively use some red tones. For further context on color and OLED design considerations for energy efficiency, panel lifetime, and text legibility, Brandon Chester and I discussed these topics at length beginning at 55:03 on the first Tech Specs podcast. I will also publish further thoughts on the reasoning behind the UI changes at a later date.
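To make the energy argument concrete, here is a deliberately simplified sketch of why a mostly-white UI is so costly on OLED. It assumes panel power scales roughly linearly with each subpixel’s drive level and that blue is the least efficient subpixel; the per-channel weights are illustrative assumptions, not measured values for any real panel.

```python
# Toy OLED power model: power ~ sum of subpixel drive levels, weighted by
# per-channel efficiency. The weights below are illustrative assumptions
# (blue subpixels are typically the least efficient), not measured data.

def relative_power(pixels, weights=(0.9, 1.0, 1.7)):
    """Estimate relative panel power for an iterable of (r, g, b) pixels,
    each channel in [0.0, 1.0]. A higher weight means a less efficient subpixel."""
    wr, wg, wb = weights
    return sum(wr * r + wg * g + wb * b for r, g, b in pixels)

# 100-pixel patches: a white background vs. a mostly-black, watchOS-style UI.
white_ui = [(1.0, 1.0, 1.0)] * 100
dark_ui = [(0.0, 0.0, 0.0)] * 90 + [(0.0, 1.0, 0.2)] * 10  # 10% green accents

print(relative_power(white_ui))  # every subpixel fully driven
print(relative_power(dark_ui))   # a small fraction of the white UI's power
```

Under this toy model the dark, green-accented patch draws a small fraction of the white patch’s power, which is the intuition behind favoring black, grey, and green on OLED.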

Secondly, Apple needs to deliver on deep learning. Despite what Apple says externally, its executive leadership seems to have been caught off-guard internally by the sudden, massive progress in AI brought about by the deep learning revolution. Backchannel’s exclusive-access piece read less like confidence than marketing desperation, conflating all areas of machine learning into one thing. Ignoring the technical errors in the article, it doesn’t matter that Apple has been using machine learning since the ’80s; all that should matter to anyone is Apple’s current competitiveness in deep learning. In contrast, when Google mentions machine learning, as far as I know it is always referring to deep learning.

Depending on how you prefer to segment technology, many would argue that deep learning is the most important advance in software since the touchscreen smartphone’s user interface, or even the internet. In contrast to almost every other tech-industry buzzword of the month, an “AI-first” strategy is actually credibly meaningful. Last year I argued on Twitter that Apple needed to show a suite of deep learning-powered services, or it was going to look really behind. That is thankfully exactly what it did. At this year’s WWDC, Apple needs to demonstrate a continuing wave of company-wide progress on AI. If you want a sense of how seriously Google has invested in internal deep learning training to remain the market leader, see its own exclusive-access Backchannel piece from a couple of months before Apple’s.

These articles are sometimes published months after the interviews are granted, but one thing in particular stood out to me at the time of the Apple piece. Craig Federighi was quoted as saying, “We don’t have a single centralized organization that’s the Temple of ML in Apple.” Only days before the article was published, it was reported that Apple had acquired Turi, which formed the basis of its new machine learning division, an obvious necessity for internal tooling and research. Perhaps I am reading too much into this, but it might suggest Apple’s deep learning strategy was still in flux at the time of the interview.

It may not seem fair, but this year Apple has to keep proving it can keep up, at least to some extent, with the market leaders. If not, its competitive positioning in AI might come to resemble its mastery of the cloud: perpetually years behind. If it sounds like I am being negative about Apple, believe me, I’m not. The company is ridiculously competent at almost everything, but it shouldn’t be graded on a curve on server infrastructure or AI. And by “AI,” think of deep learning algorithms, not futuristic images of omniscient assistants from science fiction.

Thirdly, Apple needs to ship a ton of iPad-specific software improvements. These are definitely coming, and I suspect Apple will deliver in spades. The Split View multitasking UI is one example everyone agrees must be replaced; an icon grid with greater information density could help. Adding drag-and-drop functionality seems very likely. Using the iPad as a drawing tablet or secondary display for the Mac has been rumored multiple times. And while it would require extensive OS-level engineering to implement securely, multi-user support also seems like a strong possibility.

Brandon mentioned to me last fall that he thought Apple would move to a 10.5” iPad display in 2017 in order to switch to using two regular size classes in portrait mode, which would make sense. I have since seen supply chain rumors that the new iPad will be 10.5”, so that display size with an iPad mini-sized UI seems quite likely.

Unsurprisingly, I am hoping that iOS 11 will also bring a major performance revamp for the platform. Don’t expect any miracles, but it would be nice if there were at least less Gaussian blur. There is ample opportunity for Apple to continue tuning iOS to better fit its CPU and GPU architectures and make the most of their microarchitectural advantages.
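As a rough illustration of why full-screen blur is such a performance drain: a naive 2D Gaussian convolution reads k × k samples per pixel, while exploiting the Gaussian’s separability (one horizontal pass, one vertical pass) cuts that to 2 × k. The resolution and kernel size below are assumptions for illustration; real GPU implementations add further tricks like downsampling and bilinear-tap weighting.

```python
# Texture samples required to blur a full frame, naive vs. separable.
# Resolution and kernel width are illustrative assumptions.

def naive_taps(width, height, kernel):
    """Samples for a direct 2D convolution: k*k reads per pixel."""
    return width * height * kernel * kernel

def separable_taps(width, height, kernel):
    """Samples for two 1D passes (horizontal + vertical): 2*k reads per pixel."""
    return width * height * 2 * kernel

w, h, k = 1334, 750, 21  # iPhone 7-class resolution, 21-tap kernel (assumed)
print(naive_taps(w, h, k))      # ~441 million texture samples per frame
print(separable_taps(w, h, k))  # ~42 million samples, a 10.5x reduction
```

Even the separable version still reads tens of millions of samples per blurred frame, which is why heavy use of blur shows up so readily in UI performance.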

Apple also clearly needs to showcase a significantly improved Siri. (I’ve always been hesitant to describe digital assistants as “AI.”) I’ll leave hardware rumor reporting to the press, but my guesses would be that the A11X iPad Pros, spec-bumped MacBook and MacBook Pros, and the Siri speaker all get announced on Monday. We may or may not see the rumored iOS-wide voice command accessibility this year, but the Siri speaker will probably depend entirely on a more capable Siri. My one prediction is that the Siri speaker has nothing to do with mesh Wi-Fi. I think Apple would prefer to ship something useful like an 802.11ad-capable router.

Siri never really worked for me personally, and never understood my voice at all on the Apple Watch, until iOS 10 and watchOS 3 were released. Siri is much better now, but its word error rate is still higher than Google's. Where Apple is definitely market-leading is API design, which is criminally under-appreciated as a competitive advantage. SiriKit is probably the best overall voice assistant API, but it’s also the most ambitious in terms of flexibility. Continued expansion of the deliberately limited API surface is required.

I’m also hoping to see broader deployment of differential privacy and similar experimental technologies. Apple will still have to pay the efficiency tax of performing deep learning inference on device with non-ideal hardware until it can ship more appropriate silicon. My sentiments on security are the same, given last year’s political battle between Apple and the FBI.
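For readers unfamiliar with the idea, here is a minimal sketch of randomized response, the classic local differential privacy mechanism. Apple’s actual deployment uses considerably more elaborate variants, so treat this purely as an illustration of the core trade: each individual answer is noisy and deniable, yet the aggregate rate can still be estimated.

```python
import random

# Randomized response: with probability 0.5 answer honestly, otherwise
# answer with an independent coin flip. Then E[yes] = 0.5*p + 0.25, where
# p is the true rate, so p can be recovered from the noisy aggregate.

def randomized_response(truth: bool, rng=random) -> bool:
    if rng.random() < 0.5:
        return truth             # answer honestly
    return rng.random() < 0.5    # answer with a random coin flip

def estimate_true_rate(responses):
    """Invert the noise: E[yes] = 0.5*p + 0.25, so p = 2*(mean - 0.25)."""
    mean = sum(responses) / len(responses)
    return 2 * (mean - 0.25)

rng = random.Random(42)
truths = [rng.random() < 0.3 for _ in range(100_000)]  # true rate ~30%
answers = [randomized_response(t, rng) for t in truths]
print(round(estimate_true_rate(answers), 2))  # close to 0.3
```

No individual response reveals whether that user’s true answer was yes, but across 100,000 responses the estimate lands close to the real 30% rate, which is the property that lets Apple collect usage statistics without collecting individual behavior.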

I’m not sure if Apple will ever make its secret iOS VR framework public, but if it does, it will probably wait until at least the iPhone 8 announcement. Quality VR basically requires OLED.

Apple needs to continue making it much, much easier to write functional smartwatch apps with watchOS’ API, while still preserving energy efficiency. tvOS deserves a better and more performant multitasking implementation, like the original one. iOS 11 will probably drop 32-bit support, to significantly reduce memory usage. Apple at some point should ship improvements for family management of content and media, especially within the Photos app.

From a developer’s point of view, I wonder if UITableView might be deprecated. Auto Layout seems to be a performance killer, so some magically more efficient way of arranging layouts would be nice, however difficult to conceive. There is also unending room for improvement in macOS security and the Mac App Store, but one shouldn’t hope for too much.

Finally, this is the last year that I will hold out hope for a swipe keyboard. It would be an enormous improvement for one-handed use and accessibility. Maybe it could even work in a floating window on the iPad? Please, Apple?