Vega

From my perspective, AMD is currently the most fascinating company in tech. Its Zen CPU microarchitecture and Ryzen desktop CPUs met or even exceeded expectations, delivering performance and IPC comparable to Broadwell and giving Intel real competition in the x86 space for the first time in years. I am increasingly convinced by CEO Lisa Su’s efforts to turn the company around from the dire straits it was in before the launch of Zen.

AMD’s new Vega GPU architecture has been especially interesting to follow in recent months. I will caveat this article by saying I’m not very familiar with desktop parts, especially GPUs, so I don’t really know much beyond the basics.

What I don’t think most people know, though, is that GPUs are process-constrained. Vega is fabbed on GlobalFoundries’ 14LPP process, which is licensed from Samsung Foundry. TSMC’s 16FF+ was a little better than 14LPP in terms of power and performance, though it’s at least possible process maturity may have closed some of the gap over time. Quite how so many people expected Vega 10 to outperform NVIDIA’s GP104 GPU escapes me, then, given that the two GPUs are fabbed on very comparable processes. (HBM2 memory should make a difference, though, on paper.) I think many people simply assumed that if Vega came out later than NVIDIA’s Pascal, then it must be better.

If you are not familiar with the current state of the PC GPU market, NVIDIA has had a significant efficiency advantage since the introduction of its Maxwell architecture in 2014. It was later revealed that NVIDIA had adopted a tile-based rasterizer, which played a major though not exclusive role in eking out this efficiency advantage.

Beyond that, it was apparent once AMD announced the TBPs (typical board power ratings) for the first Vega cards that the architecture is fairly terrible on power efficiency. This is not good, because power efficiency is pretty much the most important metric for any IC. To speculate on the reasons behind it at this point would be wild guessing, but it does appear that some things went wrong.

Speaking from experience in the mobile space, I’ve seen vendors that are somewhat uncompetitive on efficiency boost performance to match the competition on benchmarks by operating their silicon at less efficient points on the performance-per-watt curve. That said, Vega being able to match Pascal’s performance was not something to be taken for granted either, and thankfully it does. Vega’s clock speeds are also not a concern.

Software-wise, AMD’s drivers were clearly running very late. Software historically has not been AMD’s strength, though I am optimistic things will be improving from now on. However, one wonders why the drivers and various new features are so delayed.

Everyone knows that Vega was late. While HBM2 yields likely played a role, there’s probably more to it. Someone smart remarked that delaying the products was the right call for AMD, even if from the outside it looked like a blunder.

To me, it looks like AMD probably had enough issues with Vega that it had to rush out a respin. On the one hand, that would clearly not be good. On the other hand, if so, I’m really glad AMD paid to do it and delayed the non-Frontier cards. Respins are really expensive, and in mobile, consumers are often not so lucky to get them; at least, that is the extent of my familiarity with these things. The situation with Vega is not the end of the world, since its performance is still competitive, and Vega will sell out for quite a long time regardless.

For architecture and competitive analysis, I recommend reading AnandTech (and only AnandTech), though of course useful benchmarks are found on many sites. I would also recommend waiting a week or two to see how the AT review gets updated, because it's impossible to actually analyze much of anything in the time before a review embargo lifts.

And as much as this will probably pain gamers to hear, I consider Vega’s performance on deep learning operations to be much more important than its gaming credentials. There is an inordinate amount of money at stake if AMD can manage to move the needle with Radeon Instinct and HIP against NVIDIA’s domination in deep learning.

A2DP and HFP were switched over to BLEA as of iOS 9

A former Apple engineer has shared that Apple switched A2DP and HFP over to its own Bluetooth LE audio standard in iOS 9. This blew my mind.

For background, here are some of the basics. There are two “Bluetooths”: Classic and Low Energy (LE). The former is the streaming standard that everyone knows through wireless headsets and speakers, while the latter is basically what every modern peripheral device or hardware accessory, such as a smartwatch, uses to transmit data.

LE is also called Bluetooth Smart. LE is bursty and lower power (though not necessarily inherently more efficient), and was designed to enable devices running on coin cell batteries. You can do crazy things like stream video over it, though, if you so desire. (Don't do that.)

I’ve been vaguely keeping track of progress on BLE audio for a few years. I knew that the Bluetooth SIG was working on an LE audio standard, but I’m amazed that Apple secretly deployed its own in 2015. But it’s not magic, and is still based on LE. “Configuring the HAs [hearing aids] is performed through LE services & characteristics, but the audio streaming channel is secret sauce.”
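For anyone unfamiliar with what “LE services & characteristics” means in practice: a GATT server exposes services, each containing characteristics that a client can read, write, or subscribe to, and that generic machinery is how device configuration gets done. As a rough illustration only, here is a minimal sketch of a standard GATT characteristic read using Android’s Bluetooth APIs in Kotlin (chosen simply because it’s a public API I can sketch from memory; Apple’s LEA specifics are not public, and the UUIDs below are the standard Device Information service, nothing Apple-specific):

```kotlin
import android.bluetooth.*
import android.content.Context
import java.util.UUID

// Standard Bluetooth SIG UUIDs: Device Information service / Manufacturer Name.
// Nothing here is specific to Apple's LEA; this is plain GATT.
val DEVICE_INFO_SERVICE: UUID = UUID.fromString("0000180a-0000-1000-8000-00805f9b34fb")
val MANUFACTURER_NAME: UUID = UUID.fromString("00002a29-0000-1000-8000-00805f9b34fb")

fun readManufacturerName(context: Context, device: BluetoothDevice) {
    // Requires the usual Bluetooth permissions; omitted for brevity.
    device.connectGatt(context, /* autoConnect = */ false, object : BluetoothGattCallback() {
        override fun onConnectionStateChange(gatt: BluetoothGatt, status: Int, newState: Int) {
            if (newState == BluetoothProfile.STATE_CONNECTED) gatt.discoverServices()
        }

        override fun onServicesDiscovered(gatt: BluetoothGatt, status: Int) {
            // Configuration and metadata live in services; each value is a characteristic.
            val characteristic = gatt.getService(DEVICE_INFO_SERVICE)
                ?.getCharacteristic(MANUFACTURER_NAME) ?: return
            gatt.readCharacteristic(characteristic)
        }

        override fun onCharacteristicRead(
            gatt: BluetoothGatt, characteristic: BluetoothGattCharacteristic, status: Int
        ) {
            println("Manufacturer: ${characteristic.getStringValue(0)}")
            gatt.close()
        }
    })
}
```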

Bluetooth LEA, as Apple calls it, is not used by the AirPods. I’m not sure why, but it may simply be because LEA’s quality is still inferior to Classic audio streaming. Streaming audio is inherently difficult because of LE’s lower duty cycle, which is what makes LE more efficient in general.

Pairing is the same as for the AirPods, using standard LE protocols, though there may be specific codec features that Apple depends on. To emphasize, this is all still built on top of standard Bluetooth. And I believe the SIG is working on a similar pairing UX feature. (Keep in mind that pairing is not required with LE as it is with Classic. Otherwise, say, Bluetooth beacons wouldn’t exist.)

 

Aside:

I frequently see people complaining that “Bluetooth sucks” or “Bluetooth is always supposed to get better next year.” Before the AirPods were announced, for some reason people even wondered whether Apple was going to replace “Bluetooth” for them. The problem is that people are almost always thinking of the wrong Bluetooth.

I won’t fully explain it here, but basically Classic and LE are different radios. To oversimplify: you can think of Bluetooth 4.0 and later as a completely different spec than 3.0 and earlier. For example, Bluetooth 5 has absolutely nothing to do with the Bluetooth that people normally think of (Classic).

 

* Thanks to Brendan Sharks for suggesting a correction to the article title.

ARM's brand refresh

I can't think of any semiconductor companies with well-designed logos or wordmarks, but at least this one is better than the old one?

I will definitely never get used to writing "Arm."

SoC suicide

This isn’t really a fix. Sky-high voltages + thermal stress = it’s dead, Jim.

Worth noting: AnandTech got a lot of grief when it didn’t recommend the Nexus 6P or any other Snapdragon 810 or 808 device.

The HDR iPhone 8

I want to write about what should be the highlight feature of the iPhone 8: its HDR display...

This article is available for subscribers on Patreon.

"The New Firefox and Ridiculous Numbers of Tabs"

I’m going to switch to Firefox for a while to try this out.

Even though Firefox is not my main browser, its “Don't load tabs until selected” option has always been my favorite browser feature. The number of tabs I want to load on first launch is exactly one. In an ideal world, the resource overhead of tabs you’re not currently looking at should be as close to zero as possible.

Why you shouldn’t use someone else’s TV calibration settings

I think you will be thoroughly persuaded by this article.

In short, individual display variance is too significant, and you will probably make things worse. If you want to individually calibrate your TV, don't do it yourself, and certainly not by eye. Have a professional do it.

If you want to learn about TVs and home theater equipment, I recommend following Chris Heinonen and reading his articles. Note that I am only recommending him, specifically.

The iOS 11 redesign

Previously I wrote that iOS should be redesigned for OLED. After seeing iOS 11 debut at WWDC and thinking further, though, I realized Apple might go farther than I originally believed...

This article is available for subscribers on Patreon.

Introducing subscriptions

Today I am launching subscriptions for Tech Specs, at $10 a month.

I need your support in order to make this blog a sustainable effort. I know that $10 is not insignificant, but it's honestly what I think will be necessary to get Tech Specs off the ground. I'm trying to keep my costs as close to zero as humanly possible, and to date have funded everything out of pocket.

Your money will go towards:

  • Access to all Tech Specs articles. At least one in-depth piece per week, on average. While it could end up being more, I would rather overdeliver than overpromise
  • A small number of free articles for everyone, generally introductory educational pieces or minor news commentary
  • Maybe even better podcast audio quality. It’s theoretically possible…

I will continue to write about hardware, software, and design. Examples of topics I would like to address include: the iPhone 8, Fuchsia, virtual reality, augmented reality, deep learning, self-driving cars, HDR, displays, color, battery life, smartwatches, Bluetooth LE, how Apple’s AirPods work, how Android works, the Apple Watch, Android Wear, watchOS, benchmarks, understanding the supply chain, eSIMs, real technology economics and financial theory from academia, and the future of computing.

Some of these topics I can write about in great detail. Beyond that, I have many ideas for the future of the blog. There are also certain guests I would like to host on the podcast (eventually).

There will be no ads, ever. I believe in the subscription model.

Advertising's enormous advantage is of course the democratization of content — everyone has access. I am highly sympathetic to this benefit, which is why I will continue to make introductory articles available to everyone from time to time. They will always be a very important part of the blog.

But the advertising model on the internet is often detrimental. When you have reputable technology websites flooded with highly questionable ads and auto-play videos that greatly inhibit the performance and battery life of readers’ devices, the system is broken. This is not an indictment of journalism, but simply economic reality. And advertising inevitably shapes the message of the medium that carries it.

For all of these reasons, I prefer the subscription model. It also allows me to avoid the temptation of clickbait headlines. Before publishing a piece I can ask myself whether I even have anything of value to add on a topic. If journalists have already covered it well, then that's great, and I’m happy to share those articles.

I also believe there is a great need for tech coverage that provides at least some of the perspective of the industry itself, and it's worth emphasizing how much the average industry observer does not get to see. Talk to an engineer at any tech company, and it's clear that "how things actually work" is often radically different from how it's portrayed online. There is a tremendous amount of work that goes into creating, testing, and manufacturing tech products, and the vast majority of this work goes completely unappreciated in the public record. What goes into making a product is often just as important as the final result.

I genuinely want to do something different. I’ll do this by covering things that are not normally discussed in the press, or often are not on the internet at all. Sometimes I’ll be able to go into much greater depth on technical subjects. Relatedly, nothing is more important to me than accuracy, and I will always correct any identified mistakes.

Lastly, within the realm of independent content I am indebted to several influences, including Jessica Lessin and The Information, Dan Luu, and Chris Pirillo. My thanks to them for the inspiration.

Thank you all for your consideration and your support. It’s deeply appreciated.

Misconceptions about Android

Android's open source nature makes it vastly easier to learn about than closed source OSes. As such, I want to address several misconceptions about the platform that constantly come up. This article will be updated on an ongoing basis as I think of more topics to include.

GMS

GMS actually stood for Google Mobile Suite, not Google Mobile Services, at least originally. So many people assumed it stood for the latter that even Google itself seems to use "Google Mobile Services" now.

Perhaps it was a situation like Qualcomm’s Gobi, the cellular firmware API that so many people mistook for a modem brand that Qualcomm eventually gave up and rebranded its modems as Gobi. Or Samsung’s ISOCELL, the deep trench isolation implementation that so many people thought was Samsung’s camera brand that Samsung likewise gave up and branded its CMOS image sensors as ISOCELL at Mobile World Congress 2017.

Kernel version

I really can’t explain it any better than this thread by Tim Murray.

Force quitting apps

In short: don’t do it. Android manages itself perfectly fine. Unnecessarily closing and re-opening apps causes thrashing, slows down app reloading, and hurts battery life.

If memory serves, as of Android X.X (can someone please remind me?), swiping away an app in the multitasking UI no longer force quits even the app’s background services in AOSP. Depending on the vendor’s chosen implementation, though, swiping away an app can still force quit it on a given device.

If, however, an app is actually stalled or causing real problems in the background, you may have to manually force quit it. But in general, avoid doing so. Be nice to your NAND’s endurance, folks.

Project Treble

Based on recent changes to AOSP, Treble appears to be an attempt at a stable driver API. By painfully rewriting its various HALs to conform to a new standardized hardware IDL, the Android team is speeding up Android updates by allowing silicon and device vendor bringup efforts to be more parallelized, and by making updates a bit more economically viable for the silicon vendors. This does not mean, though, that users no longer have to worry about SoC vendor support getting in the way of updates. It’s a huge deal, but it’s not the same thing as having a stable driver ABI. See: Fuchsia.

Updates

To be elaborated upon…

F2FS

F2FS is a file system developed by Samsung LSI and upstreamed into Linux. It was designed for NAND flash memory and is claimed to be faster than ext4, Android’s default file system. The Android team disputes this based on its own testing, and says it sees no significant performance differences between the two. Regardless, there are several issues with F2FS, and more importantly Google cannot hardware accelerate file-based encryption with it. There isn’t a pressing need to replace ext4, though of course a better, more feature-rich file system could always supersede it.

Security

Malware on Android is often portrayed as an ever-growing, constant crisis. While Android does have tons of major security concerns, the overall issue is still hugely overstated.

Firstly, the term malware can mean absolutely anything. The vast majority of stories about mobile security spread FUD and sensationalism, to the detriment of readers. I won’t pretend to be a security expert, but even imperfect sandboxing probably goes a long way compared to the completely unsandboxed traditional PC application environments. It doesn’t seem clear to me whether Android or macOS is more secure overall, for example. As with many things, it probably depends.

There is however an extreme case: the Chinese market. Because Android is out of Google’s control in China, the OS genuinely is a security nightmare in the country. I remember waiting for a flight at the airport in Beijing and watching with amusement as some seemingly low-threat app started downloading itself onto my phone over the air. All I had done was leave Wi-Fi on; I hadn’t attempted to connect to any access points.

Everyone knows the fundamental issue with Android security: the horrible update problem. If devices consistently received timely updates for multiple years, the perception of Android’s security architecture would be radically different. I would personally attribute that to licensing and Linux's deliberately non-stable driver ABI, but there are a few hundred other opinions out there on the matter. And of course the overall topic of security is much, much more complex than what I am addressing here.

Which leads us to...

Android Things

What is Android Things, really? Why is it a distinct platform from Android, in other words? One very important difference is how Google manages the BSPs (board support packages) and drivers for Android Things. It works with the silicon vendors but provides the BSPs itself. Device vendors cannot modify the behavior of kernel drivers or HALs. Developers who need drivers for peripherals attached to their baseboard can instead write user space drivers, unsurprisingly called user drivers.
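User drivers and regular apps alike reach the hardware through the user space Peripheral I/O API rather than by shipping kernel drivers. As a rough, hedged sketch of what that layer looks like (the pin name "BCM6" is board-specific to the Raspberry Pi 3, and the manager class name shifted between preview releases, so treat this as an approximation rather than copy-paste code):

```kotlin
import com.google.android.things.pio.Gpio
import com.google.android.things.pio.PeripheralManagerService

// Minimal sketch: drive a GPIO pin via the Android Things Peripheral I/O API.
// "BCM6" is a board-specific pin name (Raspberry Pi 3 here); getGpioList()
// reports what the BSP actually exposes on a given board.
fun blinkOnce() {
    val manager = PeripheralManagerService()
    println("Available GPIO: ${manager.gpioList}")

    val led: Gpio = manager.openGpio("BCM6")
    try {
        led.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW)
        led.value = true          // drive the pin high
        Thread.sleep(500)
        led.value = false
    } finally {
        led.close()               // always release the peripheral
    }
}
```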

To be elaborated upon...

Furthermore, the intersection of updates and the Internet of Things would seem to be an obvious disaster, so how does Google address the issue? The company is actually releasing monthly BSP security patches through its Developer Console, which will soon roll out directly to devices, with the caveat that these direct updates will only be for the same platform version of Android the device is already on (such as, say, 7.X Nougat).

Project Brillo is not exactly the same thing as Android Things. Brillo was killed, and the initiative’s goals changed as it morphed into Android Things. The Accessory Development Kit (ADK) was also somewhat of a predecessor to Android Things.

Android Wear

Android Wear employs a different, imperfect solution to the update problem: it's closed source. (Some UI view components, though, were open sourced at I/O this year.)

To be elaborated upon...

Touch latency

To oversimplify: Android touch latency was never good, until Android 7.1 at last essentially “solved” the problem. Additional features were added to the new Hardware Composer 2 (HWC2) HAL in 7.1 which can reduce touch latency by up to 20ms, or 1.2 frames, but not always. (It is not correct to say that touch latency is simply 20ms faster in 7.1.) According to the Android team, this was done by staggering some operations on batched input events rather than doing everything on the VSync frame boundary, in order to reduce the likelihood of triple buffering and the added latency it causes.
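For context on what "batched input events" means at the app level: touch samples that arrive within a single frame are coalesced into one MotionEvent, with the intermediate samples exposed as history. A small sketch using standard SDK calls (my own illustration; nothing here is specific to 7.1 or HWC2):

```kotlin
import android.view.MotionEvent
import android.view.View

// Batched input: samples that arrived since the last delivered event are
// attached to the current MotionEvent as "historical" samples.
val touchListener = View.OnTouchListener { _, event ->
    if (event.actionMasked == MotionEvent.ACTION_MOVE) {
        for (i in 0 until event.historySize) {
            // Intermediate samples coalesced into this frame's event.
            val t = event.getHistoricalEventTime(i)
            val x = event.getHistoricalX(i)
            val y = event.getHistoricalY(i)
            println("batched sample @$t: ($x, $y)")
        }
        // The newest sample in the batch.
        println("latest sample @${event.eventTime}: (${event.x}, ${event.y})")
    }
    false // don't consume the event; let normal handling proceed
}
```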

The improvement in touch latency is extremely noticeable, and it immediately impressed me on the Nexus 6P after installing 7.1. While the HWC2 improvements make a huge difference, silicon and device vendor implementations still matter! A device still needs to have a quality touch controller, touch stack, and associated software. There are also other parameters that can be tuned by vendors, such as move sensitivity, which should vary based on device size.

It’s also important to understand that touch and general input responsiveness is a function of rendering performance. This is why, say, G-SYNC improves input latency when it manages to improve performance, and especially when exceeding 60fps under VSync. Thus on any device, the higher the realized display refresh rate, the lower the input latency. This is how Adaptive-Sync and proprietary variable refresh implementations will soon benefit Android touch latency.

Graphics renderer

For years people have debated the causes of Android’s infamous “lag” problem. The causes of jank on Android have never been as simple as a binary distinction of whether the OS is hardware-accelerated (running graphics operations on the GPU to some extent) or not. I have no idea what the true reasons are, but I do know that Android’s rendering pipeline is extremely complicated and requires graphics expertise to really understand. At the end of the day, most signs point to initial design decisions made in the early days of Android that are not easily undone.

While many have unrealistically hoped for a single “performance boosting thing,” Android performance does constantly improve in each new release. As previously discussed, one of the most exciting developer features introduced in O is an optional new graphics renderer. Ordinarily a new renderer might imply breaking all existing apps. This obviously wouldn’t be acceptable. The new renderer will probably be launched in Android P, so it will be extremely interesting to see what it entails, and what it gains.

Frame pacing

Even though mobile OSes like Android always run at VSync (and there is thus no distracting tearing on such small displays), one big thing I don’t think most people realize is that frame pacing on Android is awful. Commonly referred to as judder, jitter, or micro-stutter (as opposed to stutter or hitching), uneven frame delivery contributes to perceptual jank. (All of these graphical problems — stutter, judder, pauses, etc. — are often collectively referred to as jank.)

I'm not sure if the frame pacing issue is tied to something in the NDK. I can't provide much more insight at the moment, but to give an idea of how bad things can be, see this video. Here’s another good example.
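Frame pacing is also something you can observe yourself from app code. Here is a minimal sketch (my own illustration, not an official tool) that uses Choreographer to log frame-to-frame deltas; on a 60Hz panel every delta should be roughly 16.7ms, and deltas that bounce around irregularly are the judder described above:

```kotlin
import android.view.Choreographer

// Log frame-to-frame deltas to observe frame pacing. Ideally every delta is
// ~16.7ms on a 60Hz display; irregular deltas are perceived as judder.
class FramePacingLogger : Choreographer.FrameCallback {
    private var lastFrameTimeNanos = 0L

    override fun doFrame(frameTimeNanos: Long) {
        if (lastFrameTimeNanos != 0L) {
            val deltaMs = (frameTimeNanos - lastFrameTimeNanos) / 1_000_000.0
            println("frame delta: %.2f ms".format(deltaMs))
        }
        lastFrameTimeNanos = frameTimeNanos
        Choreographer.getInstance().postFrameCallback(this) // keep observing
    }

    // Call from the main thread (Choreographer is per-Looper).
    fun start() = Choreographer.getInstance().postFrameCallback(this)
}
```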

Variable refresh

One thing to note is that adaptive sync/refresh will benefit Android more than Apple’s ProMotion benefits iOS.

To be elaborated upon…

Color

No vendor should target anything other than sRGB for a device display until Android O ships. In other words, the display’s software calibration must target sRGB to be correct, because that is the only color space that Android currently supports.
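As a concrete example of why sRGB is the only safe target today: color management only arrives with O, where an app can check for and opt into a wide gamut per window. A hedged sketch using the public APIs added in API level 26 (everything before O implicitly assumes sRGB):

```kotlin
import android.app.Activity
import android.content.pm.ActivityInfo
import android.os.Build
import android.os.Bundle

// Android O (API 26) sketch: opt a window into wide color gamut rendering
// when the device and display support it. On earlier releases there is no
// color management, so content should simply target sRGB.
class WideColorActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O &&
            resources.configuration.isScreenWideColorGamut
        ) {
            window.colorMode = ActivityInfo.COLOR_MODE_WIDE_COLOR_GAMUT
        }
    }
}
```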

To be elaborated upon…

 

I hope this article at least conveys how almost all engineering decisions involve tradeoffs. There are rarely magic bullets that solve everything.

If anyone spots any errors, corrections are always welcome.

Erica Sadun on iPad multitasking in iOS 11

I found Erica's opinions valuable in terms of thinking through iOS 11's multitasking UI redesign, even though I don't agree with her conclusion (that the new design is worse overall). Her concerns about whether it serves all users are commendable. In general it's critical to consider all points of view on subjective decisions. If you don't think about the other side of an argument, you haven't really thought through a problem.

I think the new multitasking UI is awesome and necessary for implementing Spaces, but Erica correctly identifies many of the downsides to the redesign. All of her points are valid, but it's also worth noting that none of these features are strictly necessary to use an iPad. If users never discover the multitasking UI in the first place, they can continue using the iPad exactly the same as always.

All engineering involves making tradeoffs. The cost of not implementing this more power user-friendly redesign would be the iPad continuing to stagnate. Tablets need to continue evolving to do more than phones, and they've arguably taken far too long to do so. The increase in complexity is a necessary tradeoff in order to make tablets more valuable in their own right. That's not to say there isn't still plenty of room for improvement in this new UI, of course.

I personally like that, as a side effect of the new UI, users can no longer easily swipe away apps, a habit that hurts performance and battery life even though people often do it precisely to try to improve battery life. The previous UI probably would have made it too easy to accidentally swipe away a Space that users had bothered to set up.

iOS 11's new Control Center is also pretty much exactly what I have wanted for watchOS: an untruncated, vertically scrolling list. Having to perform separate swipes to access multitasking and Control Center would be more confusing and time-consuming for users. The unified bottom swipe not only encourages more frequent multitasking, but it also simplifies the previous four-finger gesture shortcut. There are always many aspects to consider regarding accessibility.

Design is hard.

Matthewmatosis on leaks

Warning: there is a fair amount of swearing in the link above.

Matthewmatosis is my favorite YouTuber, as he makes amazing videos commenting on game design. In this video segment, Matthew talks about leaks in the video game industry, but what he says applies equally to tech or any other industry. His sentiments are not exactly profound, but I could never have said it better myself. The developers deserved their moment of joy after three long years of work.

I also saw the leaks of Mario + Rabbids Kingdom Battle before E3, and I wish I never had. The game's unveiling would genuinely have been an awesome surprise otherwise. Thankfully the game at least looks pretty great. And do check out Matthew's videos if you're interested in game design. I can't recommend them enough.