Why the new iPads are delayed

After Apple's rather surprising admission of fault yesterday about not updating the Mac Pro, I would like to address another area where the company happens to be blame-free. By now I have read every possible conspiracy theory under the sun about why Apple hasn't shipped [insert_your_desired_device_here]. Some of the most recent speculation is that Apple didn't announce new high-end iPads because some upcoming iPad-specific software is not ready yet. This is probably nonsense.

Anyone who follows mobile silicon knows how simple the current situation likely is: the iPads are delayed because Apple can't yet ship the A11X (Fusion) in sufficient volumes at its desired quality metrics (final clock speeds, and so on). More succinctly, Apple has not yet introduced an A11X iPad because 10nm is a bit of a disaster. And despite what you may read, a 10nm tablet SoC would be an A11X, not an A10X.

Everything I have heard points to both Samsung Foundry and TSMC currently suffering very poor yields, in the realm of 30-40%. 10nm is just a shrink node, but it turns out that shrinking transistors is excruciatingly challenging these days because of pesky physics. And if 10nm ends up being an outright bad node, it wouldn't be the first time; we've seen this leaky-transistor nightmare before.
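
To put rough numbers on what yields in that range would imply, here is a minimal sketch using the classic Poisson yield model. The die size is purely my own guess for a large mobile SoC, and the model ignores systematic and parametric loss, so treat the output as an order-of-magnitude illustration only.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical die size: ~120 mm^2, a guess in the rough ballpark of a
     * big tablet-class mobile SoC. */
    const double die_area_cm2 = 1.20;
    const double yields[] = {0.30, 0.40};

    /* Simple Poisson yield model: Y = exp(-A * D0), so D0 = -ln(Y) / A.
     * Real fabs use more sophisticated models and also suffer systematic
     * loss; this is only an order-of-magnitude illustration. */
    for (int i = 0; i < 2; i++) {
        double d0 = -log(yields[i]) / die_area_cm2;
        printf("yield %2.0f%% -> ~%.2f defects/cm^2\n",
               yields[i] * 100.0, d0);
    }
    return 0;
}
```

Defect densities on the order of one per square centimeter are what you would expect very early in a ramp, not from a node that is supposed to be carrying flagship products in volume.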

If you're not familiar, 10nm from Samsung Foundry and TSMC is not at all the same as Intel's 10nm. Their 10nm is actually very comparable to Intel's 14nm, with nearly equivalent density. All node names are marketing nonsense these days anyway. The 14nm generation was particularly egregious given its reuse of 20nm's BEOL, and TSMC didn't even call its version 14nm, simply because "four" sounds like "death" in Mandarin; "16nm" doesn't really exist as a distinct node.

Internal delays are still real delays. With yesterday as an extreme exception, Apple doesn't like to talk about products until just before they're ready to ship. When it does talk in advance, even off the record, things can go wrong, and forward-looking statements can go unfulfilled. Apple doesn't suffer a negative marketing impact by keeping internal delays internal, but, much more importantly, it also doesn't realize the greater profits it would have earned had it been able to ship on time. The A11X delay hurts its bottom line.

The situation really is probably that simple. It's not that Apple suddenly feels like it can and should wait longer between iPad refreshes (19 months now for the 12.9" iPad Pro). And despite Intel's newly constant delays, it is often not actually to blame for Your Theoretical New Mac of Choice not being released. This is a broader topic I may address another time.

On Tizen and buffer overflows

"'It may be the worst code I've ever seen,' he told Motherboard in advance of a talk about his research that he is scheduled to deliver at Kaspersky Lab's Security Analyst Summit on the island of St. Maarten on Monday. 'Everything you can do wrong there, they do it. You can see that nobody with any understanding of security looked at this code or wrote it. It's like taking an undergraduate and letting him program your software.'"

Eh, it's Tizen. I already expected this.

"One example he cites is the use of strcpy() in Tizen. 'Strcpy()' is a function for replicating data in memory. But there's a basic flaw in it whereby it fails to check if there is enough space to write the data, which can create a buffer overrun condition that attackers can exploit. A buffer overrun occurs when the space to which data is being written is too small for the data, causing the data to write to adjacent areas of memory. Neiderman says no programmers use this function today because it's flawed, yet the Samsung coders 'are using it everywhere.'"

...

Sometimes reblogging takes the form of a desperate prayer that people will finally care about how unbelievably bad things are.
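
For anyone unfamiliar with why strcpy() gets singled out, here is a minimal sketch of the failure mode and of the bounded alternative; the buffer sizes are arbitrary and chosen only to make the overflow obvious.

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    char dst[8];  /* room for 7 characters plus the terminating '\0' */
    const char *input = "this string is far longer than eight bytes";

    /* Unsafe: strcpy() keeps copying until it finds a '\0' in the source,
     * with no idea how large dst is. With this input it would write 43
     * bytes into an 8-byte buffer, corrupting whatever sits next to it on
     * the stack (the classic exploitable buffer overflow). */
    /* strcpy(dst, input); */

    /* Bounded: snprintf() (or strlcpy()/strcpy_s() where available) never
     * writes more than sizeof(dst) bytes and always null-terminates. */
    snprintf(dst, sizeof(dst), "%s", input);
    printf("%s\n", dst);  /* prints the truncated "this st" */
    return 0;
}
```

Bounded alternatives have existed for decades (snprintf() is in C99, strlcpy() comes from the BSDs, strcpy_s() is in C11's Annex K), which is what makes pervasive strcpy() use in a shipping 2017 codebase so damning.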

ARM's big announcements

While ARM's big.LITTLE has evolved from its initial, simplistic CPU-migration and cluster-migration iterations to heterogeneous multi-processing (global task scheduling) and then to energy-aware scheduling, the strict segmentation between big and LITTLE clusters has remained non-ideal. For example, the efficiency of shared memory access among CPUs and the speed of task migration have been significant downsides. Yesterday ARM announced the future of multi-core CPU design for its IP ecosystem: a series of technologies, collectively branded DynamIQ, with major implications for SoC design.

I believe DynamIQ took so long to announce, and was probably a ton of work to bring about, because so many interconnected systems had to be redesigned in concert. It was probably harder still to get them all working together well and efficiently. New interconnect designs and features are necessary to make the newly possible cluster designs work, and there is a new memory subsystem design for which no details have yet been provided. IP blocks such as accelerators can also now be plugged into these fabrics through a new dedicated low-latency port.

One of the biggest takeaways is that it will finally be possible to combine different core designs within a single cluster. If all of the enabling cache design and scheduling are well implemented, such CPU designs could theoretically realize significant performance and efficiency improvements. Hopefully a DynamIQ system will not be too much harder for silicon vendors to implement, but I wouldn't assume it will be easy. ARM will really have to make it workable with its stock IP, at the very least.

It's hard to say much more about DynamIQ, as ARM is still holding back most of the important details, which I would not really be qualified to talk about anyway. There are other announcements, such as new CPU instructions for deep learning, which I personally care less about but which are still very important for many designs, such as IoT systems without a DSP or GPU. Since Cortex-A53's successor is likely coming at some point, depending on ARM's design goals, I wonder if the first DynamIQ systems will be based entirely on that new core.

Fuchsia’s hypervisor

It's hard to imagine launching a new mobile OS without support for existing apps. With regard to Fuchsia, I initially thought the Magenta kernel might be compatible with the Android user space. While the potential implementation details for Android support have not been clear, another possibility has become more apparent. I'm not saying anything specific will or will not happen, so please take the following with a grain of salt.

Before starting this blog, I spotted some initial comments in the Fuchsia source about a hypervisor. There was extremely little mentioned, it seemed almost tangential or experimental, and honestly I thought, "nah, that wouldn't make any sense."

Google has since been developing a hypervisor as part of Magenta, though. (The first commits seemingly landed on the same day as my first blog post.) I have been hesitant to write anything, because you can virtualize anything.

This one difference could imply a ton. If this is what Google is doing, then Fuchsia really is a fully new OS, and I can understand why some people would take offense at the idea of it being a mashup of Android and Chrome OS. Significant amounts of code (Mojo) are derived from Chromium, however, and running Android apps in VMs could look roughly the same to users as user-space compatibility would, at least superficially. Fuchsia will still end up gradually replacing Android and Chrome OS for consumers, while Chrome OS will live on for education.

In this scenario, Google would not be swapping kernels outright, but instead technically would be running two different kernels on a Fuchsia device. (I will again stress that the consumer definition of an OS is not just the kernel either.) The kernel underlying both Fuchsia and Android/Linux would be Magenta. Perhaps using a microkernel would make this a far more manageable or performant approach.

It is entirely possible to run virtual machines like containers, and many companies offer such solutions. Given such an implementation, it is possible to virtualize an individual app without providing an entire second desktop environment for users to manage, despite the app being run on a guest VM.

Hypervisors and containers are not new ideas whatsoever. Fuchsia’s implementation is a Type-1 (bare-metal) hypervisor and is currently being built for x86 and MSM8998, both of which offer the hardware-assisted virtualization commonly featured in modern CPUs. I have basically zero knowledge about virtualization, so there's not much more I can say beyond that.

Running Android on top of a hypervisor would make Android more of a legacy environment than a legacy API for Fuchsia, per se. It would also make Google’s statements that Chrome OS is not going away and that it wouldn’t make any sense for Android and Chrome OS to merge strictly true on a technical level. Again, this still means Fuchsia would crucially provide a stable driver ABI that Linux does not offer.

The massive upside to this approach would be that Magenta would be a clean slate free of the constraints of Linux, Unix, and possibly POSIX (to some degree?). I’m not a kernel expert, but I understand why this would be a huge deal resulting in numerous important technical differences vs. a *nix OS. I’m sure the Fuchsia team would stress the advantages of its decisions. Performance, efficiency, and a million other things could potentially be improved over Linux.

As for the downsides, wouldn’t a hypervisor significantly hurt mobile battery life? Performance and other functional tradeoffs also seem inevitable; virtualization is certainly not costless. But if the downsides are moderate, Fuchsia’s native performance is hopefully unaffected, and the need for virtualization is eliminated long term by Fuchsia replacing Android entirely, these seem like potentially reasonable costs. Without benchmarking, though, there is no hard data upon which to base an opinion.

Relatively seamless Android compatibility is a must in my opinion, because I don’t see how Fuchsia could otherwise be so radically better on its own that consumers and developers would pounce on it. I’m sure there will be all sorts of carrots and sticks to incentivize writing Flutter apps, but it’s hard to picture Android developers rewriting all of their apps anytime soon. This is the same developer base that Google has struggled even to get to adopt something as crucial as the JobScheduler API, so as not to waste users' battery life, which should have been the OS’s responsibility in the first place.

Google will probably first market Flutter heavily at I/O 2017 as a better, reactive, and cross-platform way of making apps, and then at some point add on “and they’ll run on Fuchsia as well!” It’s hard to imagine wild developer and device vendor enthusiasm for such an approach without making it clear that Android will eventually become legacy. This is due to the network effects implicit in technological platforms. (Unfortunately “network effects” is used as a hand-wavy phrase with little substance behind it, and no one in the tech industry seems familiar with the technical academic details from economics, but that is a discussion for another day.)

There are other considerations at play such as the AOSP vendor ecosystem, and I’m sure Google performed all due diligence in assessing its strategic technical options. There’s also still a lot in flux technically. Mojo IPC was changed, for example, and Mojo itself was seemingly absorbed into Magenta — Fuchsia’s API could end up being called anything regardless. Previous tricky questions such as “how will Fuchsia maintain compatibility with libraries like bionic?” would become irrelevant, though, if Android is simply virtualized. One remaining key question is how Fuchsia and Android apps would interact. This challenge seems extremely important and far from trivial.

As with any of my articles, technical corrections are both encouraged and highly appreciated.

Intel to acquire Mobileye for $15 billion

I am very, very against instant takes on these sorts of things in general, but it's hard not to instinctively think this is a mistake. Mobileye's tech is going to be ancient history.

Geneva 2017

The Geneva International Motor Show is going on this week. It’s generally regarded as the greatest auto show in the world, and I would kill to attend it to see the new Italian exotics alone.

One of the biggest headlines this year is the new Porsche 911 GT3. The GT3 has regained a manual option, so all is right again in this world. The GT3, in my opinion, sets the standard for all other drivers’ cars, as it vies for Car of the Year awards with alarming consistency.

The above link is a walkthrough of the car by Andreas Preuninger, the head of GT cars at Porsche and one of the world’s leading experts on engineering drivers’ cars. This is unfortunately a rather short interview with Andreas, but here are some gems from the past, describing two phenomenal recent Porsches.

The launch I was most looking forward to, though, was that of Ferrari’s successor to the F12berlinetta, the 812 Superfast. As with the rest of its current line, the 812 was penned in-house by Ferrari, unlike the F12 which was of course co-designed with Pininfarina. The 812’s styling is controversial, though I actually like it in the metal, much to my surprise. I didn’t care for the surface detailing of the F12, so Ferrari has “fixed” its front mid-engined GT in my mind, at least if you’re looking at the rear from above. Aerodynamics is not doing any favors for taillight design these days…

The Nintendo Switch's hardware

Nintendo's newest system launches tomorrow. By now, there is very little to discuss about the Switch's hardware that has not already been covered in detail online, though I would like to highlight the excellent technical overviews of both Digital Foundry and AnandTech. Ryan Smith also figured out immediately that the Switch actively converts DisplayPort to HDMI. (I don't like Nintendo's docking implementation, for what it's worth.)

If you're familiar with mobile silicon, the Switch's hardware isn't very hard to understand. The key points are that it uses a revised NVIDIA Tegra X1 SoC with a Maxwell GPU, still fabbed on TSMC's 20SoC process. 20nm was unequivocally a bad node: transistor leakage was a massive challenge on the process, which came just before the transition to FinFETs. Essentially, this means that the X1 in the Switch is far from competitive in terms of computational efficiency.

Furthermore, in order to fit its power and thermal budgets, Nintendo had to downclock the X1's CPU, GPU, and memory quite considerably to provide reasonable handheld battery life. Resulting performance is not very impressive to say the least. There wasn't much that Nintendo could do, however, since NVIDIA had nothing newer to sell it that could ship within Nintendo's target deadlines.

Some, namely Apple, were able to wrangle 20nm well enough to take advantage of its benefits, but many silicon vendors stumbled severely with it. To see a game console utilize 20SoC in 2017 is frankly a bit depressing. A lesser problem is that ARM's Cortex-A57 is not exactly an efficient CPU architecture by 2017 standards. The Maxwell GPU, however, did feature a new, more efficient tiling architecture that was perfectly competitive upon its initial release.

Less known to many is that the original X1 shipped in a rather sad state. NVIDIA failed to make a working interconnect with cache coherency, and ended up shipping broken silicon. The end result was that the four LITTLE CPUs had to be disabled, so only the four big CPUs were actually active in the original SHIELD TV and Google's Pixel C. This did not stop NVIDIA from advertising eight functional CPU cores, however.

New to the Switch is a revised X1 chipset, for which NVIDIA probably removed the four LITTLE cores entirely, and likely replaced the broken interconnect with something simpler and hopefully fully functional. This would be the minimal level of fixes that I hope Nintendo demanded. Beyond that, the revised X1 in the Switch and the 2017 SHIELD TV likely features fairly minor improvements. It's possible that there are differences between the two new chipset revisions, but there is no public information available either way.

 

Updates

1) To be clear, even though NVIDIA is calling both chipsets (2015 and 2017) "X1," if it really has removed the LITTLE cores and replaced the interconnect in the latter, it would actually be a new chipset. It's also worth noting that both SHIELD TVs come with 3 GB of LPDDR4 RAM, while the Switch features 4 GB, an important difference.

2) I was wrong: the A53s are still there, and the logic is still broken. Unbelievable. There's no public evidence of what has changed then, though if I had to guess, there might be some semi-custom tweaks to the GPU and not much else. (Those could be important differences, mind you.) Not making any claims!

Why Fuchsia needs color management

There is no substitute for color management. Unfortunately, Android has a major color management problem.

Ultra HD (UHD) is a generational advance in consumer display standards. UHD marketing generally refers to the combination of high dynamic range (HDR) output, an expanded color space beyond sRGB, and a minimum resolution of 2160p (4K). While I intend to write an introductory HDR article in the future, today I want to focus on the many technologies required in order to correctly render UHD content.

There are only two color gamuts permitted for devices and content to be certified by the UHD Alliance: DCI-P3 and Rec. 2020. I am specifically referring to the gamuts, not the color spaces. In order to support an HDR-capable panel, a display vendor also has to utilize an HDR-capable DDIC (display driver IC). And in software, there will be a separate ICC profile specifically for HDR. (HDR for mobile may also require local tone mapping, but I’m not knowledgeable enough to know the details.)

Android gained "HDR" support in Nougat for Android TV. While that is strictly true, I put HDR in quotes because there is no color management. What Google means is that Nougat supports the HDR10, Dolby Vision, and HLG electro-optical transfer functions (EOTFs), aka non-linear gamma. This almost works.
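
To make "non-linear gamma" slightly more concrete, here is a minimal sketch of the HDR10 transfer function, the PQ EOTF defined in SMPTE ST 2084. Unlike a conventional relative gamma curve, it maps a code value to an absolute luminance in nits; the constants below come from the standard.

```c
#include <math.h>
#include <stdio.h>

/* SMPTE ST 2084 (PQ) EOTF: maps a non-linear signal value in [0, 1] to an
 * absolute luminance in cd/m^2 (nits). Constants are from the standard. */
static double pq_eotf(double signal)
{
    const double m1 = 2610.0 / 16384.0;         /* 0.1593017578125 */
    const double m2 = 2523.0 / 4096.0 * 128.0;  /* 78.84375 */
    const double c1 = 3424.0 / 4096.0;          /* 0.8359375 */
    const double c2 = 2413.0 / 4096.0 * 32.0;   /* 18.8515625 */
    const double c3 = 2392.0 / 4096.0 * 32.0;   /* 18.6875 */

    double e = pow(signal, 1.0 / m2);
    double y = pow(fmax(e - c1, 0.0) / (c2 - c3 * e), 1.0 / m1);
    return 10000.0 * y;  /* PQ is defined against a 10,000-nit ceiling */
}

int main(void)
{
    /* Unlike a relative gamma curve, the same code value always means the
     * same absolute brightness, regardless of the display's peak. */
    printf("0.25 -> %7.2f nits\n", pq_eotf(0.25));  /* ~  5 nits */
    printf("0.50 -> %7.2f nits\n", pq_eotf(0.50));  /* ~ 92 nits */
    printf("0.75 -> %7.2f nits\n", pq_eotf(0.75));  /* ~983 nits */
    return 0;
}
```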

UHD content currently requires a display that targets the P3 color space. If you watch UHD content on a P3 display from, say, Google Play on Android TV, what actually happens is that desaturated frames are presented to the hardware (because sRGB is the assumed color gamut), and the TV then oversaturates the image back out to the correct P3 color space. The two errors cancel out, in other words. The net result is a correct image, but only on P3 displays. All of the UI controls overlaid on top of the content, however, will be oversaturated.
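
To illustrate the gamut half of the problem, here is a minimal sketch of the conversion a color-managed compositor would perform on an sRGB-authored color before handing it to a P3 panel. The matrices are rounded from the published sRGB and Display P3 (D65) definitions; I am assuming Display P3, which shares sRGB's transfer curve, rather than cinema DCI-P3.

```c
#include <math.h>
#include <stdio.h>

/* Linear sRGB -> CIE XYZ (D65); rounded from IEC 61966-2-1. */
static const double SRGB_TO_XYZ[3][3] = {
    {0.4124, 0.3576, 0.1805},
    {0.2126, 0.7152, 0.0722},
    {0.0193, 0.1192, 0.9505},
};

/* CIE XYZ -> linear Display P3 (D65); rounded published values. */
static const double XYZ_TO_P3[3][3] = {
    { 2.4935, -0.9314, -0.4027},
    {-0.8295,  1.7627,  0.0236},
    { 0.0358, -0.0762,  0.9569},
};

/* Display P3 shares sRGB's transfer curve, so one pair of functions works
 * for both here. */
static double decode(double v)  /* non-linear -> linear */
{
    return v <= 0.04045 ? v / 12.92 : pow((v + 0.055) / 1.055, 2.4);
}

static double encode(double v)  /* linear -> non-linear */
{
    return v <= 0.0031308 ? v * 12.92 : 1.055 * pow(v, 1.0 / 2.4) - 0.055;
}

static void mul3(const double m[3][3], const double in[3], double out[3])
{
    for (int i = 0; i < 3; i++)
        out[i] = m[i][0] * in[0] + m[i][1] * in[1] + m[i][2] * in[2];
}

int main(void)
{
    /* Pure sRGB red, exactly as an sRGB-authored app would draw it. */
    double srgb[3] = {decode(1.0), decode(0.0), decode(0.0)};
    double xyz[3], p3[3];

    mul3(SRGB_TO_XYZ, srgb, xyz);
    mul3(XYZ_TO_P3, xyz, p3);

    /* A color-managed compositor would hand the P3 panel roughly
     * (234, 51, 35) to reproduce the same red. Without color management,
     * the panel receives (255, 0, 0) and renders its own, far more
     * saturated red primary instead. */
    printf("sRGB (255, 0, 0) on a P3 panel should be (%.0f, %.0f, %.0f)\n",
           encode(p3[0]) * 255.0, encode(p3[1]) * 255.0,
           encode(p3[2]) * 255.0);
    return 0;
}
```

Skip that conversion, as Android does today, and the panel interprets sRGB (255, 0, 0) as its own far more saturated red primary, which is exactly the oversaturation problem described above.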

LG and Samsung will claim that they are shipping HDR devices with the G6 and Galaxy S8 (assuming LG even manages to correctly calibrate a panel to a target gamut for once), and the hardware will be technically capable, but Android still has no color management. Want to watch a UHD movie in a separate window while multi-tasking with other apps at the same time? Too bad; all of your color and gamma will be wrong. Color (or HDR) modes are not a real solution, despite what Microsoft will tell you.

Extended sRGB, or e-sRGB, is a linear extension of the sRGB color space. This is an available option for some kinds of applications, such as OpenGL apps. There is no substitute for color management, and all apps will require it.

It will be interesting to see whether the LG G6 is calibrated for sRGB or P3, because targeting one inherently compromises the other. Neither strategy will work perfectly regardless, because Android is stuck on sRGB. sRGB remains the de facto standard for most content, critically including web content. Without color management, there is simply no way to support multiple color gamuts simultaneously. This is why Samsung calibrates to sRGB and includes a separate P3 color mode, even though almost no consumers will ever know or bother to change color modes. At best, this is a very annoying necessity.

The LG G6 will also suffer from color banding, due to insufficient color gradation steps: even though its display hardware should be capable of 10-bit output (1,024 levels per channel), Snapdragon 821 only has an 8-bit (256-level) display controller pipeline. Sadly, the iPhone 8 will probably be the first mobile device that can support UHD end-to-end, despite other HDR-capable hardware dating back to 2016’s Note7.

It is not clear to me if adding color management to Android is even technically possible at this point. But for Fuchsia’s new rendering pipeline, there is probably no excuse not to design it with color management in mind from the outset. Mobile UHD devices require it, if they are ever going to actually work. Don’t let consumers down, Google.

 

Updates

1) I'm extremely happy to say that Romain Guy himself says "there is no reason why color management cannot be added to Android." Awesome! Hopefully these problems will be addressed in Android O then :)

2) Color management is indeed a feature of Android O.

"Self-Driving Cars Have a Bicycle Problem"

"Deep3DBox is among the best, yet it spots only 74 percent of bikes in the benchmarking test. And though it can orient over 88 percent of the cars in the test images, it scores just 59 percent for the bikes."

As will be a long-running theme on this blog, self-driving cars are much, much more difficult to realize than many people in tech think they are.

The Tech Specs Podcast — Episode 1

For the inaugural Tech Specs Podcast, I'm joined by Brandon Chester, former Mobile Editor at AnandTech. Topics include: Google's Fuchsia, the Samsung Galaxy S8, the LG G6, a tiny bit about the iPhone 8, OLED design considerations, CES, and Apple going UHD.

You can download the Tech Specs Podcast on iTunes and Google Play Music.

What Apple needs to do to go UHD

1) License Dolby Vision for HDR. (This is optional, but I imagine Apple will push DV when all is said and done if it cares about quality.)
2) Re-enable HEVC decode in future shipping silicon (likely in an existing SoC).*
3) Support HDCP 2.2 output.
4) Launch UHD iTunes. Apple would certainly have been working on encoding a UHD library and providing the necessary infrastructure around that for a long time.

*It's unclear if Apple has gotten any further with its licensing negotiations around the HEVC Advance issue. Perhaps there is finally licensing progress? Otherwise, AVC it is. I'm not sure, but I don't believe the UHD Alliance mandates HEVC encoding. AVC encoding would certainly be quite burdensome on bandwidth, and Apple has waited.

I will amend this post if I find any further information.

"Proof"

Since writing my last piece, some Googlers reached out and said my assessment was largely correct. The Andromeda part was clearly wrong, though, so I will try to rectify that.

Based on code like this that was pointed out to me, it looks like Andromeda might be tied to Android’s free-form window mode for laptops/2-in-1s and possibly tablets. Though my initial article incorrectly conflated the two independent efforts, Andromeda and Fuchsia will still eventually combine as a laptop platform. I think the app “chrome” will probably look like Android (because the apps will be Android or eventually Flutter apps), but with floating windows and elements of Chrome OS’s UI, or even an overall UI similar to it. Supported inputs would be mouse or trackpad and keyboard, and possibly touch. Not necessarily wildly different visually from Chrome OS today, in other words, but using the Android API. Andromeda may also explain the “Chrome OS will merge into Android” claim originally made by the Wall Street Journal. The underlying OS will still eventually be Fuchsia, though.

For those wondering about ARC, it was kind of a failed experiment. It sort of worked, but it never really proved to be a viable solution. I really don’t want to get into this any further here. And as stated before, NaCl looks dead.

I think the Android API and runtime will continue to function as before on all Fuchsia devices, except now the underlying OS will be Fuchsia, and the kernel will be Magenta, not Linux. And then there would also be Mojo, Flutter, etc. at least starting on Andromeda devices. It’s hard to imagine pushing both Flutter and the Android API forever, though. Android will likely gradually have to fade away (over many, many years).

Back to the more important point: yes, Fuchsia will be Google’s new OS underpinning all its consumer devices eventually. I think both a “Pixel 3” laptop (or whatever this hypothetical product would be called) and the Pixel 2 smartphone will probably eventually run on top of Fuchsia, but I make zero promises as to anything shipping at any particular point in time, because I have no idea. Still, that is my suspicion based on public commits.

And again, if Google doesn’t break compatibility with the Linux user space, yes, it really can swap out the Android kernel (Linux) for Magenta/Fuchsia, and leave the Android API in its place. Standards like POSIX do exist, after all. Here is some code pointing to exactly that. (Updated thoughts.)

 

And here as well is some “proof” about Fuchsia replacing Linux (in extremely deliberate scare quotes):

Pink + Purple == Fuchsia. “Pink” is a reference to Apple’s Taligent operating system (which ran legacy Mac OS apps on top of a microkernel). “Purple” is Project Purple, the original iPhone project.

 

Sure sounds like Fuchsia will run Android apps on smartphones to me! (Yes, engineers working on ambitious special projects love to make witty references, and I think these Apple-related ones are really classy and ace.) Andromeda also happens to be a type of Fuchsia plant, which may be a coincidence. I remember the Purple reference from last year, but didn’t know what Pink referred to at the time. And I never thought to connect these references to Fuchsia being able to run Android apps on a modern smartphone platform. Major kudos to Zargh for connecting the dots! (Click the links for his explanation of the references in greater detail, but I don't really want to draw even more attention to the engineers for the sake of their privacy.)

And as someone on Hacker News once again commented: “ANDROid + chROME + DArt = ANDROMEDA?” I would doubt the Dart part though.

Lastly, I will again note efforts such as A/B (seamless) updates that Google added in Nougat, which may help out with the Fuchsia transition. Crazier things have happened. Samsung once migrated Galaxy Gear owners from Android to Tizen, for one.

Google’s not-so-secret new OS

To clarify: I unfortunately used Andromeda interchangeably with Fuchsia in this article. They are independent things, but this article is entirely about Fuchsia. Since it is an Android effort, however, Andromeda could eventually be built on top of the same OS (kernel) for phones, laptops, etc.: Fuchsia. I make no claims about release timing or what the final marketing names will be. Please replace Andromeda with Fuchsia in your head while reading this article. For a more recent and much more accurate summary of Fuchsia, please see this article.

 

I decided to dig through open source to examine the state of Google’s upcoming Andromeda OS. For anyone unfamiliar, Andromeda seems to be the replacement for both Android and Chrome OS (cue endless debates over the semantics of that, and what it all entails). Fuchsia is the actual name of the operating system, while Magenta is the name of the kernel, which is specifically a microkernel. Many of the architectural design decisions, unsurprisingly, appear to have been focused on creating a highly scalable platform.

It goes without saying that Google isn’t trying to hide Fuchsia. People have clearly discovered that Google is replacing Android’s Linux kernel. Still, I thought it would be interesting for people to get a better sense of what the OS actually is. This article is only intended to be an overview of the basics, as far as I can comment reasonably competently. (I certainly never took an operating systems class!)

To my naive eyes, rather than saying Chrome OS is being merged into Android, it looks more like Android and Chrome OS are both being merged into Fuchsia. It’s worth noting that these operating systems had previously already begun to merge together to an extent, such as when the Android team worked with the Chrome OS team in order to bring Update Engine to Nougat, which introduced A/B updates to the platform.

Google is unsurprisingly bringing up Andromeda on a number of platforms, including the humble Intel NUC. ARM, x86, and MIPS bring-up is exactly what you would expect for an Android successor, and it also seems clear that this platform will run on Intel laptops. More on this later.

My best guess is that Android as an API and runtime will live on as a legacy environment within Andromeda. That’s not to say that all development of Android would immediately stop, which seems extremely unlikely. But Google can’t push two UI APIs as equal app frameworks over the long term: Mojo is clearly the future.

Ah, but what is Mojo? Well it’s the new API for writing Andromeda apps, and it comes from Chromium. Mojo was originally created to “extract a common platform out of Chrome's renderer and plugin processes that can support multiple types of sandboxed content.” It seems to have enabled Android apps in Chrome OS, and now it will serve an even more extensive role as the developer API for Andromeda. (Sidenote: as far as I can tell, Native Client is well and truly dead. Pour one out for the valiant effort that was NaCl.)

Mojo in Fuchsia features intriguingly extensive language support. C/C++, Dart, Go, Java, Python, and Rust all have bindings to Mojo. I am guessing that C/C++ is for native development, Go is for networking, Java is for Android(?), Python is for scripting, and Rust is for writing portions of the kernel. (Or perhaps Rust's usage is minimal, suggests a commenter on Hacker News.) Mixing and matching languages aside, the main UI API is based on, yes, Dart.

Flutter is an existing Google widget framework for apps written in Dart, and it has been repurposed to become the UI framework for Andromeda. Flutter includes a series of Material Design widgets and was engineered to render apps at up to 120fps.* I imagine Andromeda’s standard UI components will look similar if not identical to those of Android. A physically based renderer, Escher, is apparently being used to render, well, material elements and shadows in a high-quality manner.

I have very strong reservations about Dart as the language of the primary platform API, but it’s best to wait for the fully revealed details before forming an opinion. The reason for Dart is obvious, however: to enable a cross-platform app framework. The pitch will clearly be that developers can write a Flutter app once and have it run on Andromeda, Android, and iOS with minimal extra work, in theory. (Flutter does not target the web.) Even if Andromeda is its full replacement, Android will be a separate developer target for many years given its gigantic installed base.

Andromeda's actual app runtime is called Modular, “a post-API programming model that allows applications to cooperate in a shared context without the need to call each other's APIs directly." To do this it uses Mojo inter-process communication (IPC) messages, which are exchanged via low-level primitives in the form of message pipes (small amounts of data), data pipes (large amounts of data), and shared buffers.
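
I can only illustrate the general shape of the primitive here, not Fuchsia’s actual API. The toy sketch below uses an ordinary POSIX socketpair() purely as a stand-in for a message pipe; none of these calls are Mojo or Magenta interfaces, and the two “modules” are just two halves of a single process.

```c
/* A toy illustration of message-pipe-style IPC. These are ordinary POSIX
 * calls, NOT Mojo or Magenta APIs; the point is only the shape of the
 * primitive: two connected endpoints passing small, discrete messages,
 * with bulk data handled separately (data pipes / shared buffers). */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int ends[2];
    /* Datagram semantics keep message boundaries intact, loosely like a
     * message pipe carrying small structured messages. */
    if (socketpair(AF_UNIX, SOCK_DGRAM, 0, ends) != 0) {
        perror("socketpair");
        return 1;
    }

    /* "Module A" writes a small message into its endpoint... */
    const char request[] = "suggestion: nearby coffee";
    if (write(ends[0], request, sizeof(request)) < 0)
        perror("write");

    /* ...and "Module B" reads it from the other endpoint. In Fuchsia the
     * two ends would live in different processes, and the payload would be
     * a FIDL-described structure rather than a raw string. */
    char buf[64] = {0};
    ssize_t n = read(ends[1], buf, sizeof(buf) - 1);
    if (n > 0)
        printf("module B received: %s\n", buf);

    close(ends[0]);
    close(ends[1]);
    return 0;
}
```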

I'm not technically knowledgeable enough to understand how the various languages interface over these IPC calls, and what exactly that enables. The IDL used is the Fuchsia Interface Description Language (FIDL), “an encoding format used to describe [Mojo] interfaces to be used on top of Magenta message pipes. They are the standard way applications talk to each other in Fuchsia.” Right now at least, only C/C++, Dart, and Go have supported bindings. Dart is thus the main platform language. (My understanding is that Go is not exactly ideal for writing UI apps.)

Why Andromeda?

Observers have naturally wondered for years if Google would ever unify Android and Chrome OS, in line with Sundar's obvious desires. Despite some questionable half-measures along the way, this will now finally take place. Andromeda will immensely help Google bring together and unify a broad array of in-house technologies, beyond just its two consumer OSes.

Andromeda clearly serves Google’s own purposes. The only platforms that the company really supports are the web, Android, and iOS, in that order. I think Windows support is effectively limited to Chrome at this point. Flutter exactly exemplifies this strategy. Google will no longer have to field separate Android and iOS apps and teams, and can now greatly focus its app development efforts. "More wood behind fewer arrows," in other words.

Observations

As you would expect, Google submitted patches for Fuchsia to both LLVM and GCC.

Interestingly, web rendering is currently based on WebKit, but this is simply “meant as a stopgap until Chromium is ported to Fuchsia.”

There is a machine learning/context framework baked into Fuchsia, which seems conceptually similar to the existing Google Awareness API.

Questions

What happens to the Android Open Source Project and its huge ecosystem of partners?

Does this platform only ship on laptops for its initial release?

Will Android Studio be the basis for Andromeda’s IDE? If so, ouch. IDEs written in Java are wildly slow…

How do you replace the enormous contributions of the Linux kernel and Linaro? It seems like Google will preserve compatibility with the Linux user space to make its new OS a tractable effort in the first place, such as by supporting ELF binaries. But what about existing massive projects, like an Energy Aware Scheduler? Or will Google continue to utilize much of its existing Linux code for now?

Progressing the PC

I can think of more than a few (read: one hundred) reasons why you would want to replace Android. I will highlight only one: to completely rewrite the rendering pipeline. The market for Chrome OS, meanwhile, is of course mostly limited to education, not to diminish it. Andromeda, however, will provide a laptop OS with native apps and backwards compatibility with Android. The general UI could very well look much the same visually as Chrome OS does now. I also have to imagine the Android update problem (a symptom of Linux's lack of a stable driver ABI) will at last be solved by Andromeda (minus the carriers), but one can never be too sure.

The promise of a laptop platform that can bring all the advances of mobile, bereft of the vestiges of PC legacy, while also embracing proven input and interface paradigms is extremely appealing. And since Apple has only inched macOS along in recent years despite its decades of cruft and legacy, I welcome Google’s efforts wholeheartedly. Hopefully 2017 will finally be the beginning of the new PC.

 

Update

Since I've gotten a ton of questions about Fuchsia being a mobile platform, here is some clear evidence:

Via the Magenta repository: "Magenta targets modern phones and modern personal computers with fast processors, non-trivial amounts of ram (sic) with arbitrary peripherals doing open ended computation."

And here is Google working on Fuchsia bring-up on Snapdragon 835 (MSM8998), which doesn't really fit the IoT segment, since mobile SoCs call for full virtual memory support and a memory management unit (MMU). If this is for tablets, why not just use Android?

A commenter on Hacker News has also pointed out an Atheros Wi-Fi driver and an Intel GPU driver. There are tons of similar examples in the Fuchsia source.

 

Notes

- I am not a programmer, so if anything stated above is incorrect, please, please send me corrections. I would also greatly appreciate any clarifying comments, which were one of my primary motivations for writing this piece.

- For anyone interested, I intend to write quite often about consumer technology on this blog. Topics will include hardware, software, design, and more. You can follow via RSS or Twitter, and possibly through other platforms soon.

* At least in trivial cases. Realistically, I don’t see the average garbage-collected language running on a virtual machine allowing for a target higher than 60fps. (Yes, it is JIT'ed, and the Dart team is working on ahead-of-time compilation.)