Five problems with Google Android
While I’m certainly unqualified to discuss the usual UR material, we can’t all stay on topic all the time. I thought it would be fun to exercise my actual qualifications for a bit. If you’re uninterested, I apologize. (Next up: actual responses to readers’ comments!)
Android reminds me a lot of the first cellphone OS I ever worked on, way back in the Mobile Paleolithic—1997. “Liberty” was written in C++, not Java, and it used its own kernel rather than Linux. But these are details. The basic idea of an object-oriented application framework is more or less the same. I suspect Android is also not unlike the Danger Research (Sidekick) OS, as some of the same people are involved.
In other words, Android is a conservative design. It does nothing to disabuse anyone of the general view held by most programmers today, which is that the era of interesting software is over. Done, finito, stick a fork in it. Certainly this is the safe position. And when you have a trillion-peso market cap, why not play it safe? I suspect that if I worked for Google and you asked me to build a handset OS, I might well come up with something much like Android.
So I can’t really blame Google for the fact that Android strikes me as kind of lame. I blame society. (I always blame society.) And I’m also pleased to note that Android’s Java VM was designed by an old classmate of mine, a good guy who I hope is now very wealthy. (I never knew “Bornstein” was an Icelandic name. Maybe that’s just what you get when you show up at Ellis Island with a handle like “Björnssøn.”)
But that said:
First, Android applications are written in Java.
As Google puts it, “all applications are equal.” However, some applications are more equal than others, because all applications depend on native libraries that are not written in Java, and no application (and no end user) has any way to add a native library.
In the Android design overview, everything in the middle two layers (the application framework, and the libraries and runtime) is closed. For example, you cannot add your own presence manager, your own media types, your own browser, etc. You could probably build some of these things at the user level, but compared to the built-in versions they will suck.
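To make the restriction concrete: in ordinary Java, an application can reach native code through JNI. A minimal sketch of what that looks like, in plain Java with a hypothetical library name (nothing here is an Android API):

    // Ordinary Java: an app pulls in native code via JNI.
    // "spellcheck" is a hypothetical library name.
    public class NativeSpell {
        static {
            // Loads libspellcheck.so from the library path at
            // class-load time; throws UnsatisfiedLinkError if absent.
            System.loadLibrary("spellcheck");
        }

        // Implemented in C, bound to Java through JNI.
        public static native boolean isWord(String word);

        public static void main(String[] args) {
            System.out.println(isWord("gPhone"));
        }
    }

On the platform as described, the static block is exactly where this dies: there is no supported way for an application, or a user, to put its own .so on the device.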
An open computing platform is a platform on which the end user has the same level of control over the system as the manufacturer. Ideally, as in PCs, this includes the power to install a custom OS. (Handset manufacturers could build phones in which the user could reflash the whole Android OS, but they probably won’t—if only for regulatory reasons.)
But there is a substantial difference between a device in which programmers have to use the Android Java framework and one in which it is only the default option. The latter is strictly more powerful. And describing both as “open” is an unnecessary overloading of the term. (Perhaps Google, since it places such a premium on corporate honesty, could call its platform “fairly open” or “pretty open.”)
Now, no one at Google is stupid. They’ve built the thing this way for a reason. They reason that (a) no one but a total major-league geek wants a command-line shell or a C compiler on their cell phone; (b) deploying hardware-independent, portable native programs is extremely difficult; (c) a secure native interface is unheard of; and (d) Android Java satisfies the needs of 99% of application developers.
They’re right about all these things. But they are still wrong.
Cell phone OSes, historically, have sucked. So it’s easy to fall into the pattern of believing that they will always suck, and that if you make them suck 500% less, you have reached nirvana. Anything that goes beyond setting your own ringtone and wallpaper is a real achievement. If you can install applications, joy is yours. If those applications don’t utterly suck, etc.
But this suckage is an artifact of corporate history. And not only is a cellphone a personal computer—it is actually much more personal than a PC. It is a true single-user device. If anything, it should be more customizable than a PC.
Can you take your Android phone, off the shelf, and reprogram it to emulate an iPhone? Let’s say some vendor shipped the iPhone hardware with the Android software. Could you turn your gPhone into an iPhone? If not, why not?
I’m pretty sure the answers are (a) no, and (b) because the Android application framework is different from the iPhone application framework. Perhaps Android is superior to iPhone in all respects. But frankly, I doubt it.
Also, this idea that C is the native language of viruses and worms is an unquestioned assumption that needs to be questioned. Hasn’t anyone heard of FreeBSD’s jail(2)? How hard, exactly, is it to add this functionality to Linux? Isn’t restricting the nefarious activities of machine code the whole point of an OS?
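The managed-code world has confined untrusted code by policy for years. A minimal sketch in Java, using the stock SecurityManager of that era; this is only an analogy for what jail(2) does to native processes, not a native sandbox:

    // Confinement by policy: deny file writes to everything below.
    // SecurityManager restricts bytecode; the point is that an OS
    // could do the same for machine code.
    public class Confine {
        public static void main(String[] args) {
            System.setSecurityManager(new SecurityManager() {
                @Override
                public void checkWrite(String file) {
                    throw new SecurityException("write denied: " + file);
                }
            });
            try {
                new java.io.FileOutputStream("/tmp/evil").close();
            } catch (SecurityException e) {
                System.out.println("confined: " + e.getMessage());
            } catch (java.io.IOException e) {
                System.out.println("I/O error: " + e.getMessage());
            }
        }
    }

If a VM can do this to bytecode, a kernel can do it to processes; jail(2) is the existence proof.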
Second, the installable application is dead. It just hasn’t noticed yet. And neither has Google.
While calling it “open” is going too far, Android Java is certainly a very useful programming environment, in which many fine and useful applications can be written. For example, Android exposes a much richer feature set and application model than the abominable J2ME interface, which has all the disadvantages of a standard without actually being standard. (I do have to salute Google for giving Sun the middle finger. If not the whole fist.)
But Android Java is only one of the two programming environments on the Android platform. The other, of course, is the browser. Basically, on Android, you can code in Java or in JavaScript.
“Web 2.0” has to be the worst programming environment to ever achieve wide popularity. It is incredibly buggy, poorly standardized, slow, and basically broken in every imaginable way. So it is rather difficult to see its very real virtues.
If you were actually designing your entire system from scratch, there is no way you would have one programming environment in the browser and another that depends on some arcane “installation” procedure. If you want to use an application, browse to it. If you want to switch between running applications, that’s why the good Lord gave us tabbed browsing. And so on.
But again, Google does everything for a reason. If this reason is not good, it is generally at least sensible.
“Web 2.0,” which is basically a collection of random unspecified features written by 23-year-old goth acidheads at Netscape in 1995, cannot even begin to solve the kinds of application problems that an Android Java application can solve. And the Web 2.0 platform is mature. You can slap layers on it, but the standard is unfixable and unimprovable.
For a company with the resources of Google, however, this is just small thinking. Suckage is not an obstacle to Google. Suckage is an opportunity. Or at least it should be.
A company with the stature of Google should be thinking hard about how to fix the Web. This involves delivering a new network programming environment (as opposed to a document delivery service hacked to be programmable). There is no shame in competing with a standard. In fact, by writing Android Java, Google is doing exactly that—both for Java and for the Web. Every developer of a mobile service will have to think: do I develop for J2ME, for Android, or for the browser?
But Android is not a better Web 2.0, nor anything like it. Instead it’s a better PalmOS. Yet another standalone OO programming environment. With a networking API. Yawn.
One of the many reasons mobile computing should be new and cool is that, in the past, new generations of software appeared on smaller computers and supplanted their slower-moving ancestors. Minicomputers running Unix replaced batch-processing mainframes. Workstations replaced minis, PCs replaced workstations, etc. Today all servers run OSes developed for workstations and PCs.
If this trend had continued, we would have expected a new generation of cell-phone software to create a new set of standards, which would filter back to PCs. For a variety of stupid reasons, this has not happened. But it doesn’t mean it can’t happen. And if it’s going to happen, you’d think it would be a company like Google that made it happen.
How would you build a single programming environment with the advantages of both Android Java and Web 2.0, and the disadvantages of neither? I’m not exactly sure. It wouldn’t be easy. But then, building Android wasn’t easy, either.
Third, Android’s graphics framework uses pixel coordinates and immediate-mode 2D drawing.
Now this is just a mistake. I was looking at the doc and I could have sworn that someone had mischievously linked me to the Xlib manpages. WTF, guys? Is there a timewarp in Mountain View I don’t know about? I know you’re on the old SGI campus, but really…
The idea that, in 2007, anyone is writing 2D UIs with pixel drawing functions just burns me up. The right way to draw a UI is to construct a vector data structure, à la SVG or whatever, that represents the visual state of the screen in resolution-independent coordinates, and then just render the fscker. No, you don’t have to actually construct an SVG text file. You even have a GL library in there! You can just treat 2D as a special case of 3D! People!
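The arithmetic involved is not exotic. A minimal sketch of the idea in plain Java, with made-up names rather than any real toolkit’s API: the application describes geometry in abstract, density-independent units, and only the final render pass knows about pixels:

    // The app specifies a line in abstract units (say, 1 unit =
    // 1/160 inch); the renderer maps units to pixels per device.
    public class VectorUi {
        static final float[] LINE = { 10f, 10f, 150f, 10f };

        static int toPixels(float units, float pixelsPerUnit) {
            return Math.round(units * pixelsPerUnit);
        }

        public static void main(String[] args) {
            // The same description renders correctly at any density.
            for (float density : new float[] { 1.0f, 1.5f, 2.0f }) {
                System.out.printf("density %.1f: (%d,%d) to (%d,%d)%n",
                    density,
                    toPixels(LINE[0], density), toPixels(LINE[1], density),
                    toPixels(LINE[2], density), toPixels(LINE[3], density));
            }
        }
    }

The application never asks how thick a pixel is; the renderer decides, once, for the whole screen.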
This is not just an esoteric developer issue. It has real usability ramifications.
I don’t know anything about the iPhone’s software stack, but I’m pretty sure it uses Quartz, in which all coordinates are device-independent. But I didn’t even need to know this. The whole UI just screams “vector.” As soon as I saw the demos, my first thought was “now no one will ever write another GUI which uses raster graphics.” Little did I know that down in Mountain View, a crack team of hotshot Googlers was busy recreating the Athena toolkit.
With a lot of work, with good layout and compositing and so forth, it is possible to make a raster UI look pretty good. The Android UIs look pretty good. But they don’t look anywhere near as slick as the iPhone. When you don’t isolate device coordinates completely from the programmer, they leak everywhere. You are constantly deciding whether that line is 1 pixel or 2 pixels thick. And your designers curse you all day long.
You do need a few things to build a pure vector UI: a high-resolution screen, a fast CPU, and ideally some kind of GPU. But, as the iPhone proves, all of these are available in products shipping today. There is simply no excuse for creating a new platform in which applications are not isolated from device-dependent screen coordinates.
Fourth, Android has no (obvious) standards strategy or upgrade path.
What will Android 2.0 look like? How will an Android application recognize which version of Android it’s running on? What happens when Nokia decides to use Android and add a few special classes of its own?
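When a framework never answers these questions, every application ends up answering them itself, by probing at runtime. A minimal sketch in plain Java; the vendor class name is hypothetical:

    // Runtime feature-probing: ask whether a class exists rather
    // than trusting a version string that may lie or be missing.
    public class FeatureProbe {
        static boolean hasClass(String name) {
            try {
                Class.forName(name);
                return true;
            } catch (ClassNotFoundException e) {
                return false;
            }
        }

        public static void main(String[] args) {
            if (hasClass("com.example.vendor.PresenceManager")) {
                System.out.println("vendor extension available");
            } else {
                System.out.println("fall back to the stock framework");
            }
        }
    }

This works, but it is exactly the kind of defensive scar tissue a real standards strategy is supposed to make unnecessary.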
As the history of both Unix and Java shows, standardizing programming frameworks is not an easy task. They tend to drift and fracture, and become very hard to improve or evolve. If the Android people have thought at all about this, I see no evidence of it.
The genius of the Web was that instead of standardizing APIs, it standardized document types. While at a certain point it developed a programming model on top of its document model, it started with a major advantage in simplicity. It is not easy to standardize data, but it is much, much easier than standardizing code. Genuine successes in library standardization are hard to find. As in the case of Java, the practical result tends to just be that one implementation is the standard. And forks are simply lethal.
Fifth, I don’t think the business model works.
Sometimes I get an almost Soviet feel off Google. After all, what was the Soviet Union but a whole country run by a single company? Of course, Google is much better managed than the Soviet Union. But give it a few years.
When you are writing a large piece of software in order to just give it away, it has to be a labor of love. If it’s not a labor of love, the task becomes Brezhnevian. Google will do just fine if everyone in the world accesses their servers via Apple or Microsoft phones. The commercial justification for writing Android strikes me as quite thin.
The quality of the user experience on the iPhone makes a major difference to Apple’s bottom line. The quality of the Android experience has only a slight connection to Google’s. Sure, everyone on the project would like it to succeed. But the same is true of every project, whether you’re at Google or Elektronika.
So, in a certain sense, the people working on Android—who I’m sure are all very smart—are hunting wild boar with a can of spray-paint from the back of a pickup truck.
I know this feeling very well, because I worked at a company that shipped over a billion units of handset software, which we gave away for free—or at least cheap. (Our main revenue stream was on the server side—for a while in the late ’90s, we were getting something like a buck per subscriber per month for a glorified Web proxy.) There is still a pretty good chance that you, dear reader, have my code in your pocket.
And frankly, it is not very good code. And the reason is that we were not getting paid to create the greatest possible experience for our users. So this task did not consume our entire attention. It did not occupy us the way a snake occupies a mongoose.
Android does not strike me as bad. It strikes me as okay. It’s probably at least as good as whatever Nokia and Motorola are working with these days. (People used to call Nokia “the Apple of cell phones.” Ouch. Papa’s got a brand new bag.) But if the goal is excellence, Android has a long way to go.
Does it have any agonizing, irresistible urge to go there? What’s the worst-case scenario for the Android team? No one uses their code, so they have to go and fix Blogger bugs for the next five years, while their options vest?
What’s their best-case scenario? They ship a few hundred million phones, for which they get paid squat. What incentive do they have to make Android 2.0 the greatest thing ever? Suppose Nokia adopts Android and starts bombarding Google HQ with an endless stream of feature and change requests. How responsive will they be? How long will it be before they start telling the Finns to just code it themselves? And if the Nokians do, how likely is it that their patches will wind up back in the main Android codebase?
System software design can do great things for humanity, but it should not be confused with missionary work. The iPhone has that carnivorous killer edge. It really is insanely great. Unless my initial impressions are wrong, Android isn’t.