ARM's brand refresh

I can't think of any semiconductor companies with well-designed logos or wordmarks, but at least this one is better than the old one?

I will definitely never get used to writing "Arm."

SoC suicide

This isn’t really a fix. Sky-high voltages + thermal stress = it’s dead, Jim.

Worth noting: AnandTech got a lot of grief when it didn’t recommend the Nexus 6P or any other Snapdragon 810 or 808 device.

The HDR iPhone 8

I want to write about what should be the highlight feature of the iPhone 8: its HDR display...

This article is available for subscribers on Patreon.

"The New Firefox and Ridiculous Numbers of Tabs"

I’m going to switch to Firefox for a while to try this out.

Even though Firefox is not my main browser, its “Don't load tabs until selected” option has always been my favorite browser feature. The number of tabs I want to load on first launch is exactly one. In an ideal world, the resource overhead of tabs you’re not currently looking at should be as close to zero as possible.

Why you shouldn’t use someone else’s TV calibration settings

I think you will be thoroughly persuaded by this article.

In short, individual display variance is too significant, and you will probably make things worse. If you want to individually calibrate your TV, don't do it yourself, and certainly not by eye. Have a professional do it.

If you want to learn about TVs and home theater equipment, I recommend following Chris Heinonen and reading his articles. Note that I am only recommending him, specifically.

The iOS 11 redesign

Previously I wrote that iOS should be redesigned for OLED. After seeing iOS 11 debut at WWDC and thinking further, though, I realized Apple might go farther than I originally believed...

This article is available for subscribers on Patreon.

Introducing subscriptions

Today I am launching subscriptions for Tech Specs, at $10 a month.

I need your support in order to make this blog a sustainable effort. I know that $10 is not insignificant, but it's honestly what I think will be necessary to get Tech Specs off the ground. I'm trying to keep my costs as close to zero as humanly possible, and to date I have funded everything out of pocket.

Your support will provide:

  • Access to all Tech Specs articles. At least one in-depth piece per week, on average. While it could end up being more, I would rather overdeliver than overpromise.
  • A small number of free articles for everyone, generally introductory educational pieces or minor news commentary
  • Maybe even better podcast audio quality. It’s theoretically possible…

I will continue to write about hardware, software, and design. Examples of topics I would like to address include: the iPhone 8, Fuchsia, virtual reality, augmented reality, deep learning, self-driving cars, HDR, displays, color, battery life, smartwatches, Bluetooth LE, how Apple’s AirPods work, how Android works, the Apple Watch, Android Wear, watchOS, benchmarks, understanding the supply chain, eSIMs, real technology economics and financial theory from academia, and the future of computing.

Some of these topics I can write about in great detail. Beyond that, I have many ideas for the future of the blog. There are also certain guests I would like to host on the podcast (eventually).

There will be no ads, ever. I believe in the subscription model.

Advertising's enormous advantage is of course the democratization of content — everyone has access. I am highly sympathetic to this benefit, which is why I will continue to make introductory articles available to everyone from time to time. They will always be a very important part of the blog.

But the advertising model on the internet is often detrimental. When reputable technology websites are flooded with highly questionable ads and auto-play videos that greatly inhibit the performance and battery life of readers’ devices, the system is broken. This is not an indictment of journalism, but simply economic reality. And all advertising inherently has consequences for the message of its medium.

For all of these reasons, I prefer the subscription model. It also allows me to avoid the temptation of clickbait headlines. Before publishing a piece I can ask myself whether I even have anything of value to add on a topic. If journalists have already covered it well, then that's great, and I’m happy to share those articles.

I also believe there is a great need for tech coverage that provides at least some of the perspective of the industry itself, and it's worth emphasizing how much the average industry observer does not get to see. Talk to an engineer at any tech company, and it's clear that "how things actually work" is often radically different from how it's portrayed online. There is a tremendous amount of work that goes into creating, testing, and manufacturing tech products, and the vast majority of this work goes completely unappreciated in the public record. What goes into making a product is often just as important as the final result.

I genuinely want to do something different. I’ll do this by covering things that are not normally discussed in the press, or often are not on the internet at all. Sometimes I’ll be able to go into much greater depth on technical subjects. Relatedly, nothing is more important to me than accuracy, and I will always correct any identified mistakes.

Lastly, within the realm of independent content I am indebted to several influences, including Jessica Lessin and The Information, Dan Luu, and Chris Pirillo. My thanks to them for the inspiration.

Thank you all for your consideration and your support. It’s deeply appreciated.

Misconceptions about Android

Android's open source nature makes it vastly easier to learn about than closed source OSes. As such, I want to address several misconceptions about the platform that constantly come up. This article will be updated on an ongoing basis as I think of more topics to include.

GMS

GMS actually stood for Google Mobile Suite, not Google Mobile Services, at least originally. So many people assumed it stood for the latter that even Google seems to use it now.

Perhaps it was a situation like Qualcomm’s Gobi, the cellular firmware API that so many people mistook for a modem brand that Qualcomm eventually gave up and rebranded its modems as Gobi. Or Samsung’s ISOCELL, the deep trench isolation implementation that so many people thought was Samsung’s camera brand that Samsung recently gave up as well and branded its CMOS image sensors as ISOCELL at Mobile World Congress 2017.

Kernel version

I really can’t explain it any better than this thread by Tim Murray.

Force quitting apps

In short: don’t do it. Android manages itself perfectly fine. Unnecessarily closing and re-opening apps causes thrashing, slows down app reloading, and hurts battery life.

If memory serves, as of Android X.X (can someone please remind me?), swiping away an app in the multitasking UI no longer force quits even the app's background services in AOSP. Swiping away apps can still force quit them on specific devices, though, depending on the vendor’s chosen implementation.

If, however, an app is actually stalled or causing real problems in the background, you may have to manually force quit it. But in general, avoid doing so. Be nice to your NAND’s endurance, folks.
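To make the "let the OS manage it" point concrete, the cooperative path is for apps to release memory when Android asks, rather than for users to force quit them. Here is a minimal Kotlin sketch of an app responding to the platform's memory-trim callbacks; the two release helpers are hypothetical stand-ins for whatever caches an app holds:

```kotlin
import android.app.Application
import android.content.ComponentCallbacks2

// Sketch: a well-behaved app frees memory when the system requests it,
// so users never need to force quit it to "free up RAM."
class MyApplication : Application() {
    override fun onTrimMemory(level: Int) {
        super.onTrimMemory(level)
        when (level) {
            // UI is no longer visible: drop view- and bitmap-related caches.
            ComponentCallbacks2.TRIM_MEMORY_UI_HIDDEN -> releaseUiCaches()
            // System is running low: drop everything non-essential.
            ComponentCallbacks2.TRIM_MEMORY_RUNNING_LOW,
            ComponentCallbacks2.TRIM_MEMORY_COMPLETE -> releaseAllCaches()
        }
    }

    private fun releaseUiCaches() { /* hypothetical: clear bitmap/view caches */ }
    private fun releaseAllCaches() { /* hypothetical: clear all in-memory caches */ }
}
```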

Project Treble

Based on recent changes to AOSP, Treble appears to be an attempt at a stable driver API. By painfully rewriting its various HALs to conform to a new standardized hardware IDL, the Android team is speeding up Android updates by allowing silicon and device vendor bringup efforts to be more parallelized, and by making updates a bit more economically viable for the silicon vendors. This does not mean, though, that users no longer have to worry about SoC vendor support getting in the way of updates. It’s a huge deal, but it’s not the same thing as having a stable driver ABI. See: Fuchsia.

Updates

To be elaborated upon…

F2FS

F2FS is a file system developed by Samsung LSI and upstreamed into Linux. It was designed for NAND flash memory and is claimed to be faster than ext4, Android’s default file system. The Android team disputes this based on its own testing, and says it sees no significant performance differences between the two. Regardless, there are several issues with F2FS, and more importantly Google cannot hardware accelerate file-based encryption with it. There isn’t a pressing need to replace ext4, though of course a better, more feature-rich file system could always supersede it.

Security

Malware on Android is often portrayed as an ever-growing, constant crisis. While Android does have tons of major security concerns, the overall issue is still hugely overstated.

Firstly, the term malware can mean absolutely anything. The vast majority of stories about mobile security spread FUD and sensationalism, to the detriment of readers. I won’t pretend to be a security expert, but even imperfect sandboxing probably goes a long way compared to the completely unsandboxed traditional PC application environments. It doesn’t seem clear to me whether Android or macOS is more secure overall, for example. As with many things, it probably depends.

There is however an extreme case: the Chinese market. Because Android is out of Google’s control in China, the OS genuinely is a security nightmare in the country. I remember waiting for a flight at the airport in Beijing and watching with amusement as some seemingly low-threat app started downloading itself onto my phone over the air. All I had done was leave Wi-Fi on; I hadn’t attempted to connect to any access points.

Everyone knows the fundamental issue with Android security: the horrible update problem. If devices consistently received timely updates for multiple years, the perception of Android’s security architecture would be radically different. I would personally attribute that to licensing and Linux's deliberately non-stable driver ABI, but there are a few hundred other opinions out there on the matter. And of course the overall topic of security is much, much more complex than what I am addressing here.

Which leads us to...

Android Things

What is Android Things, really? Why is it a distinct platform from Android, in other words? One very important difference is how Google manages the BSPs (board support packages) and drivers for Android Things. It works with the silicon vendors but provides the BSPs itself. Device vendors cannot modify the behavior of kernel drivers or HALs. Developers that need drivers for peripherals attached to their baseboard can write user space drivers, unsurprisingly called user drivers.

To be elaborated upon...

Furthermore, the intersection of updates and the Internet of Things would seem to be an obvious disaster, so how does Google address the issue? The company is actually releasing monthly BSP security patches through its Developer Console that soon roll out directly to devices, with the caveat that the direct updates will only be for the same platform version of Android the device is on (such as, say, 7.X Nougat).

Project Brillo is not exactly the same thing as Android Things. Brillo was killed; its goals changed, and the initiative morphed into Android Things. The Accessory Development Kit (ADK) was also somewhat of a predecessor to Android Things.

Android Wear

Android Wear employs a different, imperfect solution to the update problem: it's closed source. (Some UI view components, though, were open sourced at I/O this year.)

To be elaborated upon...

Touch latency

To oversimplify: Android touch latency was never good, until Android 7.1 essentially “solved” the problem at last. Additional features were added to the new Hardware Composer 2 (HWC2) HAL in 7.1 that can reduce touch latency by up to 20ms, or 1.2 frames, though not always. (It is not correct to say that touch latency is simply 20ms faster in 7.1.) According to the Android team, this was done by staggering some operations on batched input events rather than doing everything on the VSync frame boundary, in order to reduce the likelihood of triple buffering and the increased latency that it causes.
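For a sense of what "batched input events" means at the app level: the platform coalesces multiple touch samples into a single MotionEvent between frames, and apps read them back through the event's history. A minimal Kotlin sketch, with the rendering helper left as a hypothetical placeholder:

```kotlin
import android.content.Context
import android.view.MotionEvent
import android.view.View

// Sketch: consume both the batched (historical) touch samples and the
// newest sample from each delivered MotionEvent.
class DrawingView(context: Context) : View(context) {
    override fun onTouchEvent(event: MotionEvent): Boolean {
        if (event.actionMasked == MotionEvent.ACTION_MOVE) {
            // Samples coalesced since the last delivered event.
            for (i in 0 until event.historySize) {
                addPoint(event.getHistoricalX(i), event.getHistoricalY(i))
            }
            // The most recent sample.
            addPoint(event.x, event.y)
        }
        return true
    }

    private fun addPoint(x: Float, y: Float) { /* hypothetical rendering hook */ }
}
```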

The improvement in touch latency is extremely noticeable, and it immediately impressed me on the Nexus 6P after installing 7.1. While the HWC2 improvements make a huge difference, silicon and device vendor implementations still matter! A device still needs to have a quality touch controller, touch stack, and associated software. There are also other parameters that can be tuned by vendors, such as move sensitivity, which should vary based on device size.

It’s also important to understand that touch and general input responsiveness is a function of rendering performance. This is why, say, G-SYNC improves input latency when it manages to improve performance, and especially when exceeding 60fps under VSync. Thus on any device, the higher the realized display refresh rate, the lower the input latency. This is how Adaptive-Sync and proprietary variable refresh implementations will soon benefit Android touch latency.

Graphics renderer

For years people have debated the causes of Android’s infamous “lag” problem. The causes of jank on Android have never been as simple as a binary distinction of whether the OS is hardware-accelerated (running graphics operations on the GPU to some extent) or not. I have no idea what the true reasons are, but I do know that Android’s rendering pipeline is extremely complicated and requires graphics expertise to really understand. At the end of the day, most signs point to initial design decisions made in the early days of Android that are not easily undone.

While many have unrealistically hoped for a single “performance boosting thing,” Android performance does constantly improve in each new release. As previously discussed, one of the most exciting developer features introduced in O is an optional new graphics renderer. Ordinarily a new renderer might imply breaking all existing apps. This obviously wouldn’t be acceptable. The new renderer will probably be launched in Android P, so it will be extremely interesting to see what it entails, and what it gains.

Frame pacing

Even though mobile OSes like Android always run at VSync (and there is thus no distracting tearing on such small displays), one big thing I don’t think most people realize is that frame pacing on Android is awful. Commonly referred to as judder, jitter, or micro-stutter (as opposed to stutter or hitching), uneven frame delivery contributes to perceptual jank. (All of these graphical problems — stutter, judder, pauses, etc. — are often collectively referred to as jank.)

I'm not sure if the frame pacing issue is tied to something in the NDK. I can't provide much more insight at the moment, but to give an idea of how bad things can be, see this video. Here’s another good example.
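To make "uneven frame delivery" measurable, here is a small Kotlin sketch using Choreographer: at a steady 60Hz every frame-to-frame delta should sit near 16.7ms, and the oscillating deltas seen on many devices are exactly the judder described above:

```kotlin
import android.util.Log
import android.view.Choreographer

// Sketch: log frame-to-frame deltas; large or oscillating values indicate
// poor frame pacing (judder) rather than outright dropped frames.
class FramePacingMonitor : Choreographer.FrameCallback {
    private var lastFrameTimeNanos = 0L

    fun start() {
        Choreographer.getInstance().postFrameCallback(this)
    }

    override fun doFrame(frameTimeNanos: Long) {
        if (lastFrameTimeNanos != 0L) {
            val deltaMs = (frameTimeNanos - lastFrameTimeNanos) / 1_000_000.0
            Log.d("FramePacing", "frame delta: %.2f ms".format(deltaMs))
        }
        lastFrameTimeNanos = frameTimeNanos
        Choreographer.getInstance().postFrameCallback(this) // keep sampling
    }
}
```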

Variable refresh

One thing to note is that adaptive sync/refresh will benefit Android more than Apple’s ProMotion benefits iOS.

To be elaborated upon…

Color

No vendor should target anything other than sRGB for a device display until Android O ships. In other words, the display’s software calibration must target sRGB to be correct, because that is the only color space that Android currently supports.
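For context on where this is headed, O's color management is an opt-in, per-window affair; everything else continues to be interpreted as sRGB, which is why panels should be calibrated to sRGB in the meantime. A minimal sketch of the opt-in, assuming an API 26 device with a wide-gamut screen:

```kotlin
import android.app.Activity
import android.content.pm.ActivityInfo
import android.os.Build
import android.os.Bundle

// Sketch: on Android O, a window can opt into wide color gamut rendering.
// Windows that don't opt in keep being treated as sRGB.
class WideGamutActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        if (Build.VERSION.SDK_INT >= 26 &&
            resources.configuration.isScreenWideColorGamut) {
            window.colorMode = ActivityInfo.COLOR_MODE_WIDE_COLOR_GAMUT
        }
    }
}
```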

To be elaborated upon…

 

I hope this article at least conveys how almost all engineering decisions involve tradeoffs. There are rarely magic bullets that solve everything.

If anyone spots any errors, corrections are always welcome.

Erica Sadun on iPad multitasking in iOS 11

I found Erica's opinions valuable in terms of thinking through iOS 11's multitasking UI redesign, even though I don't agree with her conclusion (that the new design is worse overall). Her concerns about whether it serves all users are commendable. In general it's critical to consider all points of view on subjective decisions. If you don't think about the other side of an argument, you haven't really thought through a problem.

I think the new multitasking UI is awesome and necessary for implementing Spaces, but Erica correctly identifies many of the downsides to the redesign. All of her points are valid, but it's also worth noting that none of these features are strictly necessary to use an iPad. If users never discover the multitasking UI in the first place, they can continue using the iPad exactly the same as always.

All engineering involves making tradeoffs. The cost of not implementing this more power user-friendly redesign would be the iPad continuing to stagnate. Tablets need to continue evolving to do more than phones, and they've arguably taken far too long to do so. The increase in complexity is a necessary tradeoff in order to make tablets more valuable in their own right. That's not to say there isn't a lot of room for improvement with this new UI, of course.

I personally like that, as a side effect of the new UI, users can no longer easily swipe away apps, a habit that hurts performance and battery life even though users are often actually trying to improve battery life. The previous UI probably would have made it too easy to accidentally swipe away a Space that users had bothered to set up.

iOS 11's new Control Center is also pretty much exactly what I wanted to improve watchOS: an untruncated vertical scrolling list. Having to perform separate swipes to access multitasking and Control Center would be more confusing and time-consuming for users. The unified bottom swipe not only encourages more frequent multitasking, but it also provides a simplification of the previous four-finger gesture shortcut. There are always many aspects to consider regarding accessibility.

Design is hard.

Matthewmatosis on leaks

Warning: there is a fair amount of swearing in the link above.

Matthewmatosis is my favorite YouTuber, as he makes amazing videos commenting on game design. In this video segment, Matthew talks about leaks in the video game industry, but what he says applies equally to the tech or any other industry. His feelings are obviously not profound, but I could never say it any better myself. The developers deserved their moment of joy after three long years of work.

I also saw the leaks of Mario + Rabbids Kingdom Battle before E3, and I wish I never had. The game's unveiling would genuinely have been an awesome surprise otherwise. Thankfully the game at least looks pretty great. And do check out Matthew's videos if you're interested in game design. I can't recommend them enough.

The Tech Specs Podcast — Episode 2

For the second Tech Specs Podcast, I'm joined by Josh Ho, whose writing you may be familiar with from AnandTech's mobile coverage. Topics include: four months of new devices, Apple’s SoCs, benchmarks, technology misconceptions, and explaining things on Twitter.

You can download the Tech Specs Podcast on iTunes and Google Play Music.

What happened with the A10X?

New 10.5” and 12.9” iPad Pros were widely anticipated going into this year’s WWDC. It had been 19 months since the launch of the original 12.9” iPad Pro, and nine months since the release of the A10. Big updates were due for the iPad line.

Apple first went through the feature and display improvements of the new models as expected. (I had a pretty good hunch ProMotion would be announced going back to last year.) Then it highlighted the usual specs. CPU performance has increased 30%. GPU performance has increased 40%. This is in comparison to… the A9X. Huh?

This is not what silicon people were expecting to hear. Apple pushes performance like crazy, and it was supposed to be releasing a 10nm SoC. These numbers are not impressive by Apple’s standards. Comparing to the A9X and even older SoCs is deliberate marketing framing to make the numbers sound more impressive than they are.

Could Apple actually have pushed purely for efficiency this time? Given its history of CPU designs, this seemed unlikely, though still possible. Apple also neglected to mention any efficiency advances, which it would likely tout in that case. Most surprisingly of all, Apple made no indirect mention of 10nm. This was deeply suspicious. Given the deliberate vagueness of the performance figures, however, it was hard to make much of the situation.

First, though, I need to apologize for an incredibly stupid error in my thinking about this SoC. I expected an A11X, as it looked like Apple had been working on a 10nm design*, and Apple essentially always pushes performance. If you’re familiar with mobile silicon, it was logical to expect that it was waiting to launch a new SoC on the bleeding-edge node. And if you don’t think Apple would be that aggressive on timing or performance, the A9X’s release was successfully brought forward half a year, a massive accomplishment.

I knew to expect that the A11 would use a 64-bit-only CPU, to allow for a more area-efficient and performant core design. I didn’t, however, make the rather obvious connection of dropping 32-bit support in iOS 11 to the near-simultaneous release of Apple’s first CPU without 32-bit support. That is to say, any Apple CPU released before iOS 11 would necessarily need to retain 32-bit support, and therefore it would make little sense to release a new microarchitecture before the fall. This is because iOS 10.3 still needed to run on the 32-bit A6 and A6X. That the new iPad Pros were seemingly delayed a couple of months and released just a few months before the A11 is irrelevant, if unfortunate from Apple’s perspective.

Based on what I know about 10nm, though, there seem to be two likely possibilities. The first is that this was Apple’s plan all along. The A10X would deliberately not push the envelope for whatever reasons. There would be no new CPU microarchitecture, aside from maybe some small improvements. The addition of a third CPU cluster and the necessary logic and tuning to make it all work, however, would be far from trivial. And a lot of work would also go into the new GPU, of course.

The second possibility is that the original 10nm A10X design had to be canceled once it was clear that yields were not going to meet Apple’s targets.

There is a further wrinkle to the picture. If the A10X’s cache sizes really have changed significantly, it would perhaps lend credence to one or the other possibility.

Either way, everything I’ve heard about the A10X points to it still being a 16nm TSMC design, so my best guess is that the original design for the A10X was indeed canceled. This would not be terribly unusual in the world of silicon design, but it would be a rare public setback for Apple’s silicon team. If this is really what happened, the iPad Pro’s intended SoC would have effectively been sacrificed to ensure adequate supply of the A11 for the iPhone 8. For obvious reasons, that would be the correct course of action.

The bigger implication is that the signs really don’t bode well for 10nm. Final clockspeeds for Snapdragon 835 and Exynos 8895 were really low compared to theoretical expectations. Yields are terrible, and the node is looking as bad as 20nm did. It could even possibly be worse, if TSMC and Samsung Foundry are struggling with issues similar to those that Intel first grappled with during its 14nm ramp-up. Moore’s Law is slowing down because of fundamental physics, not a lack of engineering willpower.

Further details are required, so please consider none of this confirmed. This is merely my best guess as to what happened. Unfortunately the details of these things often never come to light publicly.

And to be absolutely clear: I am not saying that Apple did anything wrong or is trying to be misleading in any way. It still has the fastest mobile CPU by a country mile. 10nm is just not going well.

Updates

1) To many people's surprise, TechInsights today finally publicly provided a die shot of the A10X. Even more surprisingly, it's fabbed on 10nm after all. TechInsights estimates a die shrink of about 45%. Additionally, it echoes my previous suspicion that the 10FF ramp was delayed a quarter, and thus the new iPad Pros were indeed delayed from the initial plan of a spring unveiling.

How then do you explain the A10X's poor final clocks and lack of per-core performance increase? From what I can tell, TSMC delivered some fairly awful results with this node. 10FF seems to be very leaky and is probably significantly worse than Samsung Foundry's 10LPE. TSMC prioritized the development of its 7nm process, and this is perhaps the consequence: a short-lived, bad 10nm.

Process immaturity aside, whether Apple was originally hoping to design the A10X differently is impossible to guess. I suspect the team working on the A11 is currently killing itself to somehow make 10FF work.

2) Leakage doesn't actually seem to be the problem with 10nm. Transistor performance improvement has stalled. This is not good.

 

* Discovered by Ashraf Eassa of Motley Fool.

4D Toys

I couldn't not share this. Enjoy.

Expectations for WWDC 2017

Apple has some obvious priorities to address this year at its Worldwide Developers Conference (WWDC). Firstly, it needs to significantly redesign iOS for the upcoming OLED iPhone. There are major technical considerations for both hardware and software that Apple has to deal with for its OLED transition. These considerations extend far beyond a dark mode for apps, which I would also anticipate. Apple has notably already dealt with OLED for the Apple Watch. I don’t think it will try to match iOS with watchOS aesthetically, but perhaps their overall appearances will be a little closer. And I highly doubt iOS will change from rendering black text on white backgrounds for maximum readability.

iOS’ dominant white and blue are pretty much the worst colors for an OLED user interface, though, so something more akin to watchOS’ use of black, grey, and green is required (to some degree). Apple could even maybe selectively utilize some red tones. For further context on colors and OLED design considerations for energy efficiency, lifetime, and text legibility, Brandon Chester and I discussed some of these topics at length beginning at 55:03 on the first Tech Specs podcast. I will also publish some further thoughts on the reasoning behind the UI changes at a future time.

Secondly, Apple needs to deliver on deep learning. Despite what Apple says externally, internally executive leadership seems to have been caught off-guard by the sudden, massive progress in AI brought about by the deep learning revolution. Backchannel’s exclusive access piece did not instill confidence so much as convey marketing desperation, by trying to conflate all areas of machine learning into one thing. Ignoring the technical errors in the article, it doesn’t matter that Apple has been using machine learning since the 80s; all that people should care about is its current competitiveness in deep learning. In contrast, when Google mentions machine learning, as far as I know it is always referring to deep learning.

Depending on how you prefer to segment technology, many would argue that deep learning is the most important advance in software since either the touchscreen smartphone's user interface or the internet. In contrast to almost every other technology industry buzzword or phrase of the month, having an “AI-first” strategy is actually credibly meaningful. Last year I argued on Twitter that Apple needed to show a suite of deep learning-powered services, or it was going to look really behind. That is thankfully exactly what it did. At this year’s WWDC, Apple needs to demonstrate a continuing wave of progress company-wide on AI. If you want to get a sense of how seriously Google has invested in internal training on deep learning to remain the market leader, see its alternative exclusive access Backchannel piece from a couple months prior to Apple’s.

These articles are sometimes published months after interviews are granted to journalists, but there was one thing in particular I noted at the time of the Apple piece. Craig Federighi was quoted as saying, “We don’t have a single centralized organization that’s the Temple of ML in Apple.” Not many days before the article was published, it was reported that Apple had acquired Turi, which formed the basis of its new machine learning division, an obvious necessity for internal tooling and research. Perhaps I am reading too much into this, but that might suggest Apple’s deep learning strategy was still in flux at the time of the interview.

It may not seem fair, but this year Apple has to continue to prove it can keep up to some extent with the market leaders. If not, its competitive positioning in AI might come to resemble its mastery of the cloud: perpetually years behind. If it sounds like I am being negative about Apple, believe me I’m not. The company is ridiculously competent at almost everything, but it shouldn’t be graded on a curve on server infrastructure or AI. And by “AI,” think of deep learning algorithms, not futuristic images of omniscient assistants from science fiction.

Thirdly, Apple needs to ship a ton of iPad-specific software improvements. I know these are definitely coming, and I suspect Apple will deliver in spades. The Split View multitasking UI is one example that everyone agrees must be replaced; an icon grid with greater information density could help. Adding drag and drop functionality seems really likely. Using the iPad as a drawing tablet or secondary display for the Mac has been rumored multiple times. And while it would require extensive OS-level engineering to bring about securely, multi-user support also seems like a strong possibility.

Brandon mentioned to me last fall that he thought Apple would transition to a 10.5” iPad display size in 2017 in order to switch to using two regular size classes in portrait mode, which would make sense. I eventually saw supply chain rumors that the new iPad would be 10.5”, so that display size with an iPad mini-sized UI seems pretty likely.

Unsurprisingly, I am hoping that iOS 11 will also provide a major performance revamp for the platform. Don’t expect any miracles, but it would be nice if there were at least less Gaussian blur. There's ample opportunity for Apple to continue to tune how iOS works, to better fit its CPU and GPU architectures and make the most of their microarchitectural advantages.

Apple also clearly needs to showcase a significantly improved Siri. (I’ve always been hesitant to describe digital assistants as “AI.”) I’ll leave hardware rumor reporting to the press, but my guesses would be that the A11X iPad Pros, spec-bumped MacBook and MacBook Pros, and Siri speaker all get announced on Monday. We may or may not see the rumored iOS-wide voice command accessibility this year, but the latter product will probably fully depend on a more capable Siri. My one prediction is that the Siri speaker has nothing to do with mesh Wi-Fi. I think Apple would prefer to ship something useful like an 802.11ad-capable router.

Siri never really worked for me personally, and never understood my voice at all on the Apple Watch, until iOS 10 and watchOS 3 were released. Siri is much better now, but its word error rate is still higher than Google's. Where Apple is definitely market-leading is API design, which is criminally under-appreciated as a competitive advantage. SiriKit is probably the best overall voice assistant API, but it’s also the most ambitious in terms of flexibility. Continued expansion of the deliberately limited API surface is required.

I’m also hoping to see broader deployment of differential privacy and similar experimental technologies. Apple is still going to have to pay the efficiency tax and perform deep learning inference on device with non-ideal hardware, until it can ship more appropriate silicon. My sentiments on security are much the same, given last year’s political battle between Apple and the FBI.

I’m not sure if Apple will ever make its secret iOS VR framework public, but if it does, it will probably wait until at least the iPhone 8 announcement. Quality VR basically requires OLED.

Apple needs to continue to make writing functional smartwatch apps much, much easier with watchOS’s API, while still preserving energy efficiency. tvOS deserves a better and more performant multitasking implementation, like the original one. iOS 11 will probably drop 32-bit support, to significantly reduce memory usage. Apple at some point should ship improvements for family management of content and media, especially within the Photos app.

From a developer point of view, I wonder if UITableView might be deprecated. Auto Layout seems to be a performance killer, so some sort of magically more efficient way of arranging layouts would be nice, however difficult to conceive. There is also unending room for improvement for macOS security and the Mac App Store, but one shouldn’t hope for too much.

Lastly, this is the last year that I will hold out hope for a swipe keyboard. It would be an enormous improvement for one-handed use and accessibility. Maybe it could even work in a floating window on the iPad? Please, Apple?

Adaptive-Sync for mobile devices

One of the most exciting display features in recent years has been variable refresh. While gaming monitors have long offered higher refresh rates, NVIDIA pioneered variable refresh PC monitors with G-SYNC in 2013. By controlling refresh rates from the display’s end of the display chain, G-SYNC significantly improved perceptual smoothness by removing stutter within a certain frame rate range, eliminating tearing entirely, and reducing input lag. Here is a video of the original G-SYNC demo explaining its benefits.

AMD soon followed with its competing FreeSync, and VESA then added a free implementation called Adaptive-Sync to the DisplayPort 1.2a specification. Originally, Adaptive-Sync was actually introduced in 2009 for displays using internal video signaling, in the eDP specification which mobile devices use for their integrated displays.

Following the rollout of panel self-refresh, I have been eagerly anticipating the adoption of Adaptive-Sync in mobile display stacks. There hasn’t been a real reason to include it, however, because everything in mobile OSes targets 60fps.

The reason for this is simple: going past 60Hz increases display energy consumption. Driving higher frame rates also requires greater compute and thus further increased power draw. Additionally, there is an important technological distinction between IGZO and LTPS displays: the former is generally limited to larger mobile panels, while the latter’s greater efficiency has led it to dominate the market for small panels for smartphones and tablets.

I won’t explain display backplanes here, but IGZO transistors do provide certain advantages, such as faster switching speed due to higher subthreshold swing. IGZO displays thus make higher refresh rates more easily achievable than they are for LTPS panels in mobile and laptops. And although a display stack running at higher refresh rates will burn more power, variable refresh can significantly mitigate this increase.

You may have noticed that the iPad Pros and 2016 MacBook Pros are advertised as supporting variable refresh rates. (This will be a feature of the iPhone 8, too.) This is actually a different implementation than that of PCs and desktop monitors. What Apple is instead doing is driving display refresh down to a constant 30fps when the screen is displaying static content. This provides a significant improvement to battery life.

Over the past several years mobile displays have been produced with increasingly higher peak brightness and wider color gamuts. Where do you go from here? Higher refresh rates.

Ordinarily, whether you are targeting 90 or 120Hz refresh, this would require quite an improvement in system-wide performance. Android and iOS both target 60fps, but neither actually manages to run at a consistent 60fps (with smooth frame pacing) pretty much ever, excepting a small number of extremely performant apps.

iOS used to run quite consistently at 60fps across all of its system apps prior to iOS 7, but then pervasive Gaussian blur, increasingly complex UI layouts, and other factors led to a persistent and significant decline in performance. The iPhone 5 running iOS 6 was the last mobile device to truly run at 60fps, in other words.

Thus, without a major focus on improving system performance, don’t hold your breath for iOS to achieve consistent 60Hz frame delivery anytime soon. That would require a serious OS-wide code review effort, and it’s not a given that Apple cares enough.

Even if that were to happen, some things are basically impossible. Assume that Apple wants to target a 120fps device refresh rate, a nice even multiple of 60 and 24fps. There is absolutely no chance that it can magically get every first and third party iOS app to render within the 8.3ms render window (1/120 seconds).
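For reference, the render budget shrinks quickly as the refresh rate climbs; a trivial sketch of the arithmetic:

```kotlin
// Sketch: per-frame render budget at a given refresh rate.
fun frameBudgetMs(refreshHz: Double): Double = 1000.0 / refreshHz

fun main() {
    listOf(60.0, 90.0, 120.0).forEach { hz ->
        println("%.0f Hz -> %.1f ms per frame".format(hz, frameBudgetMs(hz)))
    }
    // 60 Hz -> 16.7 ms, 90 Hz -> 11.1 ms, 120 Hz -> 8.3 ms
}
```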

As you might have guessed, there is another way — Adaptive-Sync.

To support Adaptive-Sync, the display controller (part of the SoC) must implement it, and the display driver IC (DDIC) and the panel itself must be capable of higher refresh rates. I believe now is the time for mobile implementations to finally arrive. A device like a large tablet in particular has the energy capacity to spare, so much so that battery cell volume in the iPad Pros was replaced with larger speaker cavities, partially to save weight.

Note that using Adaptive-Sync to avoid dropping frames is not as good as hitting a target frame rate in the first place. Rendering smoothly at 55fps is still inferior to hitting a steady 90fps delivery target, for example, because you are simply seeing fewer frames. But you do achieve the same benefits as with PC implementations: stutter is eliminated, and input lag is reduced. (There is already no tearing on mobile.)

For Android, I think Google could even in theory eventually disable triple buffering should Adaptive-Sync ever become ubiquitous in mobile. Only a frame of latency would be gained back, but it would be a clean win. There is no reason for this to ever happen, though.

In conclusion, I hope to soon see tablets and other devices that support Adaptive-Sync. It’s a killer feature that would make a huge difference for both input responsiveness and motion image quality.

 

Update

I somehow missed that Qualcomm has its own variable refresh implementation called Q-Sync for Snapdragon 835's display controller. Qualcomm didn't mention it to me at CES, so I have no idea about the details. I will try to find out more, but it sounds like it may be a proprietary implementation since it seemingly requires Q-Sync-specific support from display driver vendors. I'm a bit skeptical about adoption. Thanks to Naseer for the heads-up!

Thoughts on Google I/O 2017

There were an enormous number of announcements at this year’s Google I/O. In particular, it seemed like the largest set of advancements in Android development announced since 2014. Rather than attempt to comprehensively summarize the event, here are some scattered thoughts.

Android

The biggest Android-related news was clearly the surprise adoption of Kotlin. It was mainly a surprise because of the tone the Android team conveyed in response to requests for Kotlin support at last year’s I/O. While many team members used Kotlin, they seemed to suggest that Java was going to remain the platform language for the foreseeable future.

I’m not a programmer so my opinions on languages are invalid, but everything I’ve ever read about Kotlin makes it seem like a pretty good language. I have no idea how well it performs, though. Because Kotlin was designed for seamless Android interoperability from its inception, it can pretty much immediately replace Java for any developer now that it is officially supported in the Android SDK. There will be some gaps in tooling, but it’s as easy a developer transition as they come. The next step will be transitioning APIs to Kotlin. I don’t think many members of the Android team will miss Java. 
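As a small illustration of why the transition is so low-friction, Kotlin calls the existing Java framework APIs directly, while data classes, null-safety, and lambdas remove a lot of boilerplate. A minimal, hypothetical sketch:

```kotlin
import android.app.Activity
import android.os.Bundle
import android.widget.TextView

// Sketch: idiomatic Kotlin calling ordinary Java Android APIs.
data class Device(val name: String, val socName: String?)

class MainActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        val device = Device("Nexus 6P", socName = null)
        val label = TextView(this).apply {
            // Elvis operator instead of an explicit null check.
            text = "${device.name} / ${device.socName ?: "unknown SoC"}"
        }
        setContentView(label)
    }
}
```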

The app lifecycle has always been a nightmare on Android. The new architecture model proposed by the Android team surely has to be an enormous improvement, but it understandably necessitates new classes. The team is notably proposing more of a reactive-style model instead of a model-view-viewmodel (MVVM) paradigm, to quote Yigit Boyar. We’ll see how that turns out.
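As I understand it, the gist of the proposal is that UI state lives in a lifecycle-aware holder that survives configuration changes, and the UI observes it rather than owning it. A minimal sketch using the Architecture Components classes announced at I/O (treat the specifics as illustrative):

```kotlin
import android.arch.lifecycle.LiveData
import android.arch.lifecycle.MutableLiveData
import android.arch.lifecycle.ViewModel

// Sketch: state survives rotation inside the ViewModel; the Activity or
// Fragment observes the LiveData and is unsubscribed automatically when
// its lifecycle ends.
class CounterViewModel : ViewModel() {
    private val count = MutableLiveData<Int>().apply { value = 0 }

    fun counter(): LiveData<Int> = count

    fun increment() {
        count.value = (count.value ?: 0) + 1
    }
}
```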

Android O is a major performance release, probably the most significant one since 5.0 Lollipop. One of the key efforts by the Android team for O was the elimination of binder lock contention. The result is a “massive improvement in jank immediately after boot as services all start up.” Running the O beta on my Nexus 6P, I was amazed how much of an obvious and appreciable difference this makes. For a thorough description of Binder by Dianne Hackborn, see here.

Significant improvements in OS boot time are also claimed, with the Pixel smartphone starting up in half the time on O. App startup times have also been sped up by various optimizations. I suspect all those app racing videos on the internet played a role in spurring this effort, though I would strongly caution you not to assume those videos are really representative benchmarks.

Also of major note is that Android O features an optional new rendering pipeline, which you can enable from developer settings. Nothing has been said about it, but it is based on Skia. I don’t know anything about graphics libraries, but both Android and Fuchsia have used Skia from day one so I have no idea what the new renderer entails. Perhaps it’s a Vulkan rewrite or a more fully hardware-accelerated pipeline, but I’ve found no information yet online. If anyone knows more, please reach out.

Regarding runtime improvements, ART has switched from using a mark-and-sweep algorithm to a concurrent copying garbage collector. Claimed results are less time spent reclaiming and allocating memory and lower overall usage. I know very little about garbage collection, so I wonder what the tradeoffs are. You should watch the ART presentation if you want to learn more about the new collector and the many other improvements. I do know, however, that Josh's desired simple optimization was unsurprisingly not implemented.

You may be surprised to learn that ART has also at long last added support for automatic vectorization. I won’t explain SIMD utilization here, but I may write an entire article about this topic in the future.
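To give a rough idea of what automatic vectorization applies to, simple element-wise loops with no cross-iteration dependencies are the canonical candidates; whether ART vectorizes this exact loop is an assumption on my part:

```kotlin
// Sketch: a straightforward countable loop with independent per-element work,
// the kind of code a compiler can map onto SIMD (e.g. NEON) instructions.
fun scaleAndAdd(a: FloatArray, b: FloatArray, scale: Float): FloatArray {
    require(a.size == b.size)
    val out = FloatArray(a.size)
    for (i in a.indices) {
        out[i] = a[i] * scale + b[i]
    }
    return out
}
```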

One nice addition that will indirectly improve performance and battery life is that the Play Developer Console will now flag apps that perform poorly in regard to wake locks, wakeups, and jank. Google also said that wake locks will be restricted in future releases of the platform, so developers be warned. These restrictions should be deeply appreciated by users.
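On the wake lock point, the developer-side fix is mostly discipline: if a wake lock is needed at all, bound it. A hedged Kotlin sketch of the pattern the new flagging presumably pushes developers toward:

```kotlin
import android.content.Context
import android.os.PowerManager

// Sketch: a partial wake lock acquired with a timeout, so it can never be
// leaked indefinitely even if release() is somehow skipped.
fun doShortBackgroundWork(context: Context) {
    val pm = context.getSystemService(Context.POWER_SERVICE) as PowerManager
    val wakeLock = pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "techspecs:shortWork")
    wakeLock.acquire(30_000L) // released automatically after 30 seconds at most
    try {
        // ... do the actual short-lived work ...
    } finally {
        if (wakeLock.isHeld) wakeLock.release()
    }
}
```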

Because Android releases are not developed in public, we still know extremely little about Project Treble. Aside from some vague high-level comments, the most specific information given was that "device-specific vendor components" have been sandboxed in some manner. Reducing the bring-up costs of Android updates for higher-end vendors like Qualcomm should be a huge help for the overall ecosystem, but I am skeptical that it will have much impact on a company like MediaTek, which has little financial incentive to provide updates in general. Treble also does not change the fact that Linux does not have a stable driver ABI. I should point out that it sounds like the transition was a miserable technical slog for many members of the Android team, so thanks to them for their efforts.

With the announcement of Android Go, I immediately wondered if the platform's memory profile had changed. The last time there was a significant increase in RAM requirements for Android was the 5.0 release. There is no Android Compatibility Definition Document for O yet, so it is unclear if the minimum memory requirements will be changing (Section 7.6.1). Based on the ART session, however, overall memory usage should be lower in O.

Much to my surprise, graphics drivers are now updatable through the Play Store. This is not a small detail, and I suspect it was a benefit of Project Treble. Google is also now offering Android developers the equivalent of Apple’s App Slicing. Within security, tamper-resistant secure elements (à la Apple’s “secure enclave”) are now supported in O.

As anyone could predict, deep learning was a huge focus at I/O. Google’s TensorFlow happens to be the most popular deep learning library. While it has been available on Android since launch (and was later made available on iOS), Apple managed to provide a GPU-accelerated mobile framework before Google, with convolutional neural network kernels available in Metal Performance Shaders on iOS 10. The lighter-weight TensorFlow Lite was thus a big (and much needed) announcement, although developers will also at least be able to leverage vendor-specific acceleration libraries through Qualcomm’s Snapdragon Neural Processing Engine SDK. In the near future, TensorFlow Lite will leverage the new Android Neural Network API to allow developers to accelerate their AI algorithms on GPUs or DSPs.
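For a sense of the developer-facing surface, here is a minimal sketch of running a model with the TensorFlow Lite interpreter; the model file, input format, and ten-class output are hypothetical placeholders:

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.File

// Sketch: on-device inference with TensorFlow Lite's interpreter API.
fun classify(modelFile: File, input: FloatArray): FloatArray {
    val output = Array(1) { FloatArray(10) }    // assumed [1, 10] output tensor
    val interpreter = Interpreter(modelFile)
    try {
        interpreter.run(arrayOf(input), output) // assumed [1, N] input tensor
    } finally {
        interpreter.close()
    }
    return output[0]
}
```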

I won’t beat around the bush — the changes from Android 5.0 through O have made the platform much more similar to iOS overall. I think the Android team has prioritized the right areas of improvement, even if that means shipping fewer obvious consumer-facing features.

Everything else

Johnny Lee confirmed that Google’s VPS (Visual Positioning Service) is marketing's branding of the combination of area learning and point clouds stored in the cloud. Standalone Daydream VR headsets use stereo visual odometry and 3D SLAM to provide positional tracking, with drift correction based on area learning. The combination of hardware and algorithms is marketed as WorldSense, which is of course based on a specialized version of Tango.

Google also showed off some amazing VR technology called Seurat, which renders extremely high-fidelity graphics on mobile in real-time via unknown means. The technology could be anything, but it isn’t magic. For similarly impressive demos, check out OTOY’s “eight-dimensional” holographic light field rendering. (Update: Seurat is indeed supposedly some form of light field rendering. This Disney Research paper was released simultaneously.)

Within deep learning, Google stated that it was working on CTC-based sequence models for natural language processing, with “a whole bunch of new implementations" coming soon.

Lastly, I was wrong about the Flutter sessions. There were numerous memes, but none of the animal variety. I apologize for the error.