LG Display rumored to be investing in mobile OLED production

The Electronic Times recently reported that Google wants to invest ~$880 million in LG Display for future production of OLED displays. If the rumor is true, I suspect the strategic investment would not just be about securing displays for future Pixel devices, but about helping LG Display seriously re-enter the mobile OLED market. The company previously sold flexible OLEDs to LG Electronics for its G Flex and G Flex 2 smartphones in 2013 and 2015, respectively, but I am not aware of any other smartphones ever using LGD OLED.

To date, Samsung Display has been far ahead of everyone else in mobile OLED due to its vastly greater investment in the segment. LG Display could probably make reasonable OLED displays, though, if it had the financial incentive to make major investments in smaller panels. It has already proven its OLED capabilities with its Apple Watch displays, however difficult they were to make, and of course with its leading W-OLED panels for the TV market.

This rumored change in strategy would probably be more about industrial design demands than display quality considerations. Several vendors, including Google, want to be able to compete with Samsung’s and Apple’s (upcoming) bleeding-edge smartphones, which have strongly associated OLED displays with high-end industrial design. While OLED is not at all necessary for creating a design with minimal bezels, some or even most of these vendors likely require OLED because they want to bring curved displays to market, sacrificing some image quality in the process.

To be clear, there are many advantages (and some disadvantages) to working with OLED displays over traditional LCDs from an industrial design point of view, which I won't fully enumerate here. One of the major differences, while it sounds obvious, is that OLEDs do not have LCMs (liquid crystal modules).

No matter what, it's not at all clear that investing in OLED over LCD for the long term would be a smart move, and most display suppliers remain skeptical of the former. OLED is better overall right now, but it has serious downsides in terms of lifetime, costs (due to lower yields), and various quality deficiencies such as severe off-angle color shifting and chromatic aliasing. LCD, meanwhile, constitutes the lion's share of the market. MicroLED won't come to market for years, but it has greater potential than OLED should its production ever become economically feasible.

If vendors bothered to pay for high quality displays, we would see smartphones other than Samsung’s shipping correctly calibrated OLEDs of leading-edge quality. Perhaps a vendor or two other than Apple will one day do that. For now, given Samsung Display’s massive lead, I remain skeptical that anyone can compete with it on quality over the next few years.

"Samsung introduces HDR10+ format to combat Dolby Vision"

Samsung: "Would you like another hole in your head?"
Consumer: "Yes, Samsung. Yes, I would."

We weren't necessarily going to have an HDR format war. But Samsung pushing yet another HDR standard could possibly precipitate one.

The only party that benefits from HDR10+ is Samsung.

Google retires Octane

A couple of weeks ago I caught wind of some web benchmark being killed. The first thing I did was check Octane, but the test harness was still unchanged.

Yesterday Google announced that Octane has been retired. It claims the deprecation is due to Octane being over-optimized against for years, sometimes to the detriment of real world application performance. You can still run the benchmark, but the page notes that it is no longer being maintained.

This looks really bad. Octane was far from the worst benchmark, and it had been optimized against for years by pretty much everyone anyway. It did not suddenly become outdated overnight. (This does not in any way mean that benchmarks are somehow useless or unnecessary.)

If Google is working on a new browser benchmark, great. But it's hard to believe that Google's real reason for killing Octane wasn't that Edge now beats Chrome on it, a fact Microsoft promotes the moment you first launch Edge in the Windows 10 Creators Update.

Why the new iPads are delayed

After Apple's rather surprising admission of fault yesterday about not updating the Mac Pro, I would like to address another area where the company happens to be blame-free. By now I have read every possible conspiracy theory under the sun about why Apple hasn't shipped [insert_your_desired_device_here]. Some of the most recent speculation is that Apple didn't announce new high-end iPads because some upcoming iPad-specific software is not ready yet. This is probably nonsense.

Anyone who follows mobile silicon knows how simple the current situation likely is: the iPads are delayed because Apple can't yet ship the A11X (Fusion) in sufficient volumes at its desired quality metrics (final clocks, etc.). More succinctly, Apple has not yet introduced an A11X iPad because 10nm is a bit of a disaster. And despite what you may read, a 10nm tablet SoC would be an A11X, not an A10X.

Everything I have heard points to both Samsung Foundry and TSMC currently suffering very poor yields, in the realm of 30-40%. 10nm is just a shrink node, but it turns out that shrinking transistors is excruciatingly challenging these days because of pesky physics. And if 10nm ends up being an outright bad node, it wouldn't be the first time; we've seen this leaky-transistor nightmare before.

If you're not familiar, 10nm from Samsung Foundry and TSMC is not at all the same as Intel's 10nm; their 10nm is actually very comparable to Intel's 14nm, with nearly equivalent density. All node names are marketing nonsense these days anyway. The 14/16nm generation was particularly egregious given its reuse of 20nm's BEOL, and TSMC didn't even call its node 14nm simply because "four" sounds like "death" in Mandarin; "16nm" doesn't really exist as a distinct node.

Internal delays are still real delays. With yesterday as an extreme exception, Apple doesn't like to talk about products until just before they're ready to ship. When it does talk in advance, even off the record, things can go wrong, and forward-looking statements can go unfulfilled. By keeping internal delays internal, Apple avoids any negative marketing impact, but far more importantly, it still misses out on the profits it would have earned had it been able to ship on time. The A11X delay hurts its bottom line.

The situation really is probably that simple. It's not that Apple suddenly feels it can and should wait longer between iPad refreshes (19 months now for the 12.9" iPad Pro). And despite Intel's now-constant delays, it is often not actually to blame for Your Theoretical New Mac of Choice not being released. This is a broader topic I may address another time.

On Tizen and buffer overflows

"'It may be the worst code I've ever seen,' he told Motherboard in advance of a talk about his research that he is scheduled to deliver at Kaspersky Lab's Security Analyst Summit on the island of St. Maarten on Monday. 'Everything you can do wrong there, they do it. You can see that nobody with any understanding of security looked at this code or wrote it. It's like taking an undergraduate and letting him program your software.'"

Eh, it's Tizen. I already expected this.

"One example he cites is the use of strcpy() in Tizen. 'Strcpy()' is a function for replicating data in memory. But there's a basic flaw in it whereby it fails to check if there is enough space to write the data, which can create a buffer overrun condition that attackers can exploit. A buffer overrun occurs when the space to which data is being written is too small for the data, causing the data to write to adjacent areas of memory. Neiderman says no programmers use this function today because it's flawed, yet the Samsung coders 'are using it everywhere.'"

...

Sometimes reblogging takes the form of a desperate prayer that people will finally care about how unbelievably bad things are.
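If you've never written C, here is a minimal sketch of why unbounded strcpy() is dangerous and what a bounded copy looks like. The buffer and its size are purely illustrative; this obviously isn't Tizen's code.

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *input = "this attacker-controlled string is far longer than sixteen bytes";
    char buf[16];

    /* Unsafe: strcpy() copies until it hits the source's NUL terminator
       and has no idea how large buf is, so here it would write well past
       the end of buf: a textbook buffer overflow. (Commented out so this
       example doesn't actually invoke undefined behavior.) */
    /* strcpy(buf, input); */

    /* Bounded alternative: snprintf() writes at most sizeof buf bytes,
       including the NUL terminator, and truncates the rest. */
    snprintf(buf, sizeof buf, "%s", input);

    printf("%s\n", buf);
    return 0;
}
```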

ARM's big announcements

While ARM's big.LITTLE has evolved from its initial, simplistic CPU migration and cluster migration iterations to simultaneous multi-processing (global task scheduling) and then to energy-aware scheduling, the strict segmentation between big and LITTLE clusters has remained non-ideal. For example, the efficiency of shared memory access among CPUs and the speed of task migration have been significant downsides. Yesterday ARM announced the future of multi-core CPU design for its IP ecosystem: a series of technologies, collectively branded DynamIQ, with major implications for SoC design.

I believe the reason DynamIQ took so long to announce, and was probably a ton of work to bring about, is that so many interconnected systems had to be redesigned in concert. And it was probably harder still to get them all working together well and efficiently. New interconnect designs and features are necessary to make the newly possible cluster designs work, and there is a new memory subsystem design for which no details have yet been provided. IP blocks such as accelerators can also now be plugged into these fabrics through a new dedicated low-latency port.

One of the biggest takeaways is that it will finally be possible to combine different core designs together within a cluster. If all of the enabling cache design and scheduling are well-implemented, such CPU designs could theoretically realize significant performance and efficiency improvements. Hopefully a DynamIQ system will not be too much harder to implement for silicon vendors, but I wouldn't assume it will be easy. ARM will really have to make it workable with its stock IP at the very least.

It's hard to say much more about DynamIQ, as ARM is still holding back most of the important details, which I would not really be qualified to talk about anyway. There are other announcements, such as new CPU instructions for deep learning, which I personally care less about but which are still very important for many designs, such as IoT systems without a DSP or GPU. Since Cortex-A53's successor is likely coming at some point, depending on ARM's design goals, I wonder if the first DynamIQ systems will be based entirely on that new core.

Fuchsia’s hypervisor

It's hard to imagine launching a new mobile OS without support for existing apps. With regard to Fuchsia, I initially thought the Magenta kernel might be compatible with the Android user space. While the potential implementation details for Android support have not been clear, another possibility has become more apparent. I'm not saying anything specific will or will not happen, so please take the following with a grain of salt.

Before starting this blog, I spotted some initial comments in the Fuchsia source about a hypervisor. Extremely little was mentioned, it seemed almost tangential or experimental, and honestly I thought, "nah, that wouldn't make any sense."

Google has since been developing a hypervisor as part of Magenta, though. (The first commits seemingly landed on the same day as my first blog post.) I have been hesitant to write anything, because you can virtualize anything.

This one difference could imply a ton. If this is what Google is doing, then Fuchsia really is a fully new OS, and I can understand why some people would take offense to the idea of it being a mashup of Android and Chrome OS. Significant amounts of code (Mojo) are derived from Chromium, however, and the ability to run Android apps in VMs could look roughly the same as user space compatibility would to users, at least superficially. Fuchsia will still end up gradually replacing Android and Chrome OS for consumers, while Chrome OS will live on for education.

In this scenario, Google would not be swapping kernels outright, but instead running two different kernels on a Fuchsia device. (I will again stress that the consumer definition of an OS is not just the kernel either.) The kernel underlying both Fuchsia and Android/Linux would be Magenta. Perhaps using a microkernel would make this a far more manageable or performant approach.

It is entirely possible to run virtual machines like containers, and many companies offer such solutions. Given such an implementation, it is possible to virtualize an individual app without providing an entire second desktop environment for users to manage, despite the app being run on a guest VM.

Hypervisors and containers are not new ideas whatsoever. Fuchsia’s hypervisor is a Type-1 (bare metal) design and is currently being built for x86 and MSM8998, both of which offer the hardware-assisted virtualization commonly found in modern CPUs. I have basically zero knowledge about virtualization, so there's not much more I can say beyond that.
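One trivial thing I can show, though: on x86 a program can at least check whether the CPU advertises that hardware support. The sketch below checks for the Intel flavor (VT-x) via CPUID; it is purely illustrative, says nothing about how Magenta's hypervisor actually works, and AMD exposes its equivalent (AMD-V) differently.

```c
#include <stdio.h>
#include <cpuid.h>  /* GCC/Clang helper for the x86 CPUID instruction */

int main(void) {
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1 returns feature flags; ECX bit 5 indicates VMX (Intel
       VT-x), the hardware-assisted virtualization a Type-1 hypervisor uses. */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        printf("CPUID leaf 1 not supported\n");
        return 1;
    }

    printf("VMX (Intel VT-x) supported: %s\n", (ecx & (1u << 5)) ? "yes" : "no");
    return 0;
}
```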

Running Android on top of a hypervisor would make Android more of a legacy environment than a legacy API for Fuchsia, per se. It would also make Google’s statements that Chrome OS is not going away and that it wouldn’t make any sense for Android and Chrome OS to merge strictly true on a technical level. Again, this still means Fuchsia would crucially provide a stable driver ABI that Linux does not offer.

The massive upside to this approach would be that Magenta would be a clean slate free of the constraints of Linux, Unix, and possibly POSIX (to some degree?). I’m not a kernel expert, but I understand why this would be a huge deal resulting in numerous important technical differences vs. a *nix OS. I’m sure the Fuchsia team would stress the advantages of its decisions. Performance, efficiency, and a million other things could potentially be improved over Linux.

As for the downsides, wouldn’t a hypervisor significantly hurt mobile battery life? Performance and other functional tradeoffs also seem inevitable; virtualization is certainly not costless. But if the downsides are moderate, Fuchsia’s native performance is hopefully unaffected, and the need for virtualization is eliminated long term by Fuchsia replacing Android entirely, these seem like potentially reasonable costs. Without benchmarking, though, there is no hard data upon which to base an opinion.

Relatively seamless Android compatibility is a must in my opinion, because I don’t see how Fuchsia could be so much radically better on its own that consumers and developers would pounce on it otherwise. I’m sure there will be all sorts of carrots and sticks to incentivize writing Flutter apps, but it’s hard to picture Android developers re-writing all of their apps anytime soon. This is the same developer base that Google has struggled to even get to adopt something as crucial as the JobScheduler API to not waste users' battery life, which should have been the OS’ responsibility in the first place.

Google will probably first market Flutter heavily at I/O 2017 as a better, reactive, and cross-platform way of making apps, and then at some point add on “and they’ll run on Fuchsia as well!” It’s hard to imagine wild developer and device vendor enthusiasm for such an approach without making it clear that Android will eventually become legacy. This is due to the network effects implicit with technological platforms. (Unfortunately “network effects” is used as a hand-wavey phrase with little substance behind it, and no one in the tech industry seems familiar with the technical academic details from economics, but that is a discussion for another day.)

There are other considerations at play such as the AOSP vendor ecosystem, and I’m sure Google performed all due diligence in assessing its strategic technical options. There’s also still a lot in flux technically. Mojo IPC was changed, for example, and Mojo itself was seemingly absorbed into Magenta — Fuchsia’s API could end up being called anything regardless. Previous tricky questions such as “how will Fuchsia maintain compatibility with libraries like bionic?” would become irrelevant, though, if Android is simply virtualized. One remaining key question is how Fuchsia and Android apps would interact. This challenge seems extremely important and far from trivial.

As with any of my articles, technical corrections are both encouraged and highly appreciated.

Intel to acquire Mobileye for $15 billion

I am very, very against instant takes on these sorts of things in general, but it's hard not to instinctively think this is a mistake. Mobileye's tech is going to be ancient history.

Geneva 2017

The Geneva International Motor Show is going on this week. It’s generally regarded as the greatest auto show in the world, and I would kill to attend it to see the new Italian exotics alone.

One of the biggest headlines this year is the 2017 Porsche GT3. The GT3 has regained a manual option, so all is right again in this world. The GT3 in my opinion sets the standard for all other drivers’ cars, as it vies for Car of the Year awards with alarming consistency.

The above link is a walkthrough of the car by Andreas Preuninger, the head of GT cars at Porsche and one of the world’s leading experts on engineering drivers’ cars. This is unfortunately a rather short interview with Andreas, but here are some gems from the past, describing two phenomenal recent Porsches.

The launch I was most looking forward to, though, was that of Ferrari’s successor to the F12berlinetta, the 812 Superfast. As with the rest of its current line, the 812 was penned in-house by Ferrari, unlike the F12, which was of course co-designed with Pininfarina. The 812’s styling is controversial, though I actually like it in the metal, much to my surprise. I didn’t care for the surface detailing of the F12, so Ferrari has “fixed” its front mid-engined GT in my mind, at least if you’re looking at the rear from above. Aerodynamics is not doing any favors for taillight design these days…

The Nintendo Switch's hardware

Nintendo's newest system launches tomorrow. By now, there is very little to discuss about the Switch's hardware that has not already been covered in detail online, though I would like to highlight the excellent technical overviews of both Digital Foundry and AnandTech. Ryan Smith also figured out immediately that the Switch actively converts DisplayPort to HDMI. (I don't like Nintendo's docking implementation, for what it's worth.)

If you're familiar with mobile silicon, the Switch's hardware is not hard to understand. The key points are that it uses a revised NVIDIA Tegra X1 SoC with a Maxwell GPU, still fabbed on TSMC's 20SoC process. 20nm was unequivocally a bad node: transistor leakage was a massive challenge on the process, which came just before the transition to FinFETs. Essentially this means that the X1 in the Switch is far from competitive in terms of computational efficiency.

Furthermore, in order to fit its power and thermal budgets, Nintendo had to downclock the X1's CPU, GPU, and memory quite considerably to provide reasonable handheld battery life. The resulting performance is not very impressive, to say the least. There wasn't much that Nintendo could do, however, since NVIDIA had nothing newer to sell it that could ship within Nintendo's target deadlines.

Some, namely Apple, were able to wrangle 20nm well enough to take advantage of its benefits, but many silicon vendors stumbled severely with it. To see a game console utilize 20SoC is frankly a bit depressing. A lesser problem is that ARM's Cortex-A57 is not exactly an efficient CPU architecture by 2017 standards. The Maxwell GPU, however, did feature a new, more efficient tiling architecture that was perfectly competitive upon its initial release.

Less well known is that the original X1 shipped in a rather sad state. NVIDIA failed to make a working interconnect and ended up shipping broken silicon. The end result was that the four LITTLE CPUs had to be disabled, so only the four big CPUs were actually active in the original SHIELD TV and Google's Pixel C. This did not stop NVIDIA from advertising eight functional CPU cores, however.

New to the Switch is a revised X1 chipset, for which NVIDIA probably removed the four LITTLE cores entirely, and likely replaced the broken interconnect with something simpler and hopefully fully functional. This would be the minimal level of fixes that I hope Nintendo demanded. Beyond that, the revised X1 in the Switch and the 2017 SHIELD TV likely features fairly minor improvements. It's possible that there are differences between the two new chipset revisions, but there is no public information available either way.

Updates:

1) To be clear, even if NVIDIA is calling both chipsets (2015 and 2017) "X1," if it really has removed the LITTLE cores and replaced the interconnect of the latter, it would actually be a new chipset. It's also worth noting that both SHIELD TVs come with 3 GB of LPDDR4 RAM, while the Switch features 4 GB, an important difference.

2) I was wrong: the A53s are still there, and the logic is still broken. Unbelievable. There's no public evidence of what has changed then, though if I had to guess, there might be some semi-custom tweaks to the GPU and not much else. (Those could be important differences, mind you.) Not making any claims!

Why Fuchsia needs color management

There is no substitute for color management. Unfortunately, Android has a major color management problem.

Ultra HD (UHD) is a generational advance in consumer display standards. UHD marketing generally refers to the combination of high dynamic range (HDR) output, an expanded color space beyond sRGB, and a minimum resolution of 2160p (4K). While I intend to write an introductory HDR article in the future, today I want to focus on the many technologies required in order to correctly render UHD content.

There are only two color gamuts permitted for devices and content to be certified by the UHD Alliance: DCI-P3 and Rec. 2020. I am specifically referring to the gamuts, not the color spaces. In order to support an HDR-capable panel, a display vendor also has to utilize an HDR-capable DDIC (display driver IC). And in software, there will be a separate ICC profile specifically for HDR. (HDR for mobile may also require local tone mapping, but I’m not knowledgeable enough to know the details.)

Android gained "HDR" support in Nougat for Android TV. While strictly speaking this is true, I put HDR in quotes because there is no color management. What Google means is that Nougat supports the HDR10, Dolby Vision, and HLG electro-optical transfer functions (EOTFs), aka non-linear gamma. This almost works.
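For a sense of what an EOTF actually is, here is a sketch of the SMPTE ST 2084 (PQ) decode curve used by HDR10 and Dolby Vision, with the published constants as I understand them; treat it as illustrative rather than a reference implementation.

```c
#include <math.h>
#include <stdio.h>

/* SMPTE ST 2084 (PQ) EOTF: maps a non-linear signal value in [0, 1]
   to absolute luminance in cd/m^2 (nits), up to a 10,000-nit peak. */
static double pq_eotf(double signal) {
    const double m1 = 2610.0 / 16384.0;        /* 0.1593017578125 */
    const double m2 = 2523.0 / 4096.0 * 128.0; /* 78.84375        */
    const double c1 = 3424.0 / 4096.0;         /* 0.8359375       */
    const double c2 = 2413.0 / 4096.0 * 32.0;  /* 18.8515625      */
    const double c3 = 2392.0 / 4096.0 * 32.0;  /* 18.6875         */

    double e = pow(signal, 1.0 / m2);
    return 10000.0 * pow(fmax(e - c1, 0.0) / (c2 - c3 * e), 1.0 / m1);
}

int main(void) {
    /* A 50% code value decodes to roughly 92 nits, nowhere near 50% of
       peak luminance, which is the whole point of non-linear gamma. */
    printf("PQ 0.5 -> %.1f nits\n", pq_eotf(0.5));
    return 0;
}
```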

UHD content currently requires a display that targets the P3 color space. If you watch UHD content on a P3 display from, say, Google Play on Android TV, what actually happens is that desaturated frames are presented to the hardware, because sRGB is the assumed color gamut, and the TV then oversaturates the image back out to the correct P3 colors. The two errors cancel out, in other words. The net result is a correct image, but only on P3 displays. All of the UI controls overlaid on top of the content, however, will be oversaturated.
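As a rough sketch of what a color-managed pipeline has to do per pixel (decode the source transfer function, convert between gamuts in linear light, re-encode), consider the following. The 3x3 matrix is the standard linear sRGB to Display P3 conversion, rounded, and the pixel value is just an example; a real compositor obviously does far more than this.

```c
#include <math.h>
#include <stdio.h>

/* sRGB transfer function: decode to linear light and encode back. */
static double srgb_decode(double c) {
    return (c <= 0.04045) ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
}
static double srgb_encode(double c) {
    return (c <= 0.0031308) ? c * 12.92 : 1.055 * pow(c, 1.0 / 2.4) - 0.055;
}

int main(void) {
    /* Linear-light sRGB -> Display P3 matrix (D65 white point), rounded. */
    const double M[3][3] = {
        {0.8225, 0.1774, 0.0000},
        {0.0332, 0.9669, 0.0000},
        {0.0171, 0.0724, 0.9108},
    };

    /* A fairly saturated sRGB red as an illustrative pixel. */
    const double srgb[3] = {0.90, 0.20, 0.20};

    double lin[3], p3[3];
    for (int i = 0; i < 3; i++)
        lin[i] = srgb_decode(srgb[i]);

    /* Color-managed path: the gamut conversion happens in linear light,
       so the intended sRGB color is reproduced faithfully on a P3 panel. */
    for (int i = 0; i < 3; i++)
        p3[i] = srgb_encode(M[i][0] * lin[0] + M[i][1] * lin[1] + M[i][2] * lin[2]);

    printf("managed for P3:  %.3f %.3f %.3f\n", p3[0], p3[1], p3[2]);

    /* Unmanaged path: the same sRGB numbers are handed to the P3 panel
       unchanged, so its wider primaries render them oversaturated. */
    printf("unmanaged on P3: %.3f %.3f %.3f\n", srgb[0], srgb[1], srgb[2]);
    return 0;
}
```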

While LG and Samsung — if LG even manages to correctly calibrate a panel to a target gamut for once — will claim that they are shipping HDR devices with the G6 and Galaxy S8, and the hardware will be technically capable, Android still has no color management. Want to watch a UHD movie in a separate window while multi-tasking with other apps at the same time? Too bad, all of your color and gamma will be wrong. Color (or HDR) modes are not a real solution, despite what Microsoft will tell you.

Extended sRGB, or e-sRGB, is a linear extension of the sRGB color space. This is an available option for some kinds of applications, such as OpenGL apps. Unfortunately HDR will probably break windowed e-sRGB apps on HDR displays due to the non-linear gamma. There is no substitute for color management, and all apps will require it.

It will be interesting to see whether the LG G6 is calibrated for sRGB or P3, because targeting one inherently compromises the other. Neither strategy will work perfectly regardless, because Android is stuck on sRGB. sRGB remains the de facto standard for most content, and this critically includes web content. Without color management, there is simply no way to simultaneously support multiple color gamuts. This is why Samsung calibrates to sRGB and includes a separate P3 color mode, even though almost no consumers will ever know or bother to change color modes. At best, this is a very annoying necessity.

The LG G6 will also suffer from color banding, due to insufficient color gradation steps: even though its display hardware should be capable of 10-bit output, Snapdragon 821 only has an 8-bit display controller pipeline (256 rather than 1,024 steps per channel). Sadly, the iPhone 8 will probably be the first mobile device that can support UHD end-to-end, despite other hardware having been HDR-capable as far back as 2016’s Note7.

It is not clear to me if adding color management to Android is even technically possible at this point. But for Fuchsia’s new rendering pipeline, there is probably no excuse not to design it with color management in mind from the outset. Mobile UHD devices require it, if they are ever going to actually work. Don’t let consumers down, Google.

 

Updates:

1) I'm extremely happy to say that Romain Guy himself says "there is no reason why color management cannot be added to Android." Awesome! Hopefully these problems will be addressed in Android O then :)

2) Color management is indeed a feature of Android O.

"Self-Driving Cars Have a Bicycle Problem"

"Deep3DBox is among the best, yet it spots only 74 percent of bikes in the benchmarking test. And though it can orient over 88 percent of the cars in the test images, it scores just 59 percent for the bikes."

As will be a long-running theme on this blog, self-driving cars are much, much more difficult to realize than many people in tech think they are.

The Tech Specs Podcast - Episode 1

For the inaugural Tech Specs Podcast, I'm joined by Brandon Chester, former Mobile Editor at AnandTech. Topics include: Google's Fuchsia, the Samsung Galaxy S8, the LG G6, a tiny bit about the iPhone 8, OLED design considerations, CES, and Apple going UHD.

You can download the Tech Specs Podcast on iTunes and Google Play Music.

What Apple needs to do to go UHD

1) License Dolby Vision for HDR. (This is optional, but I imagine Apple will push DV when all is said and done if it cares about quality.)
2) Re-enable HEVC decode in future shipping silicon (likely in an existing SoC).*
3) Support HDCP 2.2 output.
4) Launch UHD iTunes. Apple would certainly have been working on encoding a UHD library and providing the necessary infrastructure around that for a long time.

*It's unclear whether Apple has gotten any further with its licensing negotiations around the HEVC Advance issue. Perhaps there is finally some licensing progress? Otherwise, AVC it is. I'm not sure, but I don't believe the UHD Alliance mandates HEVC encoding. AVC encoding would certainly be quite burdensome on bandwidth, though, which may be part of why Apple has waited.

I will amend this post if I find any further information.