The only thing from the interview I care about: “We are thinking about how to make Android handle updates better. We see ways we can do this. It’s early days. We’re talking with our partners and working our way through it. We need time to figure out the mechanics, but it’s definitely an area of focus for me and for the team.” We’ve seen empty promises about this before, though.
Google brought this on themselves by making a “better” Android than Android. Now they probably see more of a need for a unified Android.
Who are they fooling? CyanogenMod knows how to do it, so why can’t Google? They “just” have to upgrade their kernel and Google Apps and leave the skinning to OEMs and third parties. Is it really that complicated to upgrade their own files?
Just split the system partition in three: Google, OEM, carrier. Each one updates its software at will and makes sure to keep compatibility. In other words, the carrier should detect the OEM’s version and push a matching version of its software. The OEM should do the same with the kernel.
If Google pushes a new kernel upgrade, it shouldn’t break the OEM’s and carrier’s software as long as they keep their layers up to date. As soon as the OEM and carrier make their updated software available, it should be pushed OTA when the consumer is ready.
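Here is a minimal sketch of that layered-update check, in C for concreteness. Everything in it is hypothetical: the structures, names, and version numbers are made up for illustration, and nothing like this exists in Android today.

/* Hypothetical sketch of the layered-update idea above. Each layer
 * declares the range of versions of the layer below it that it has
 * been validated against; the OTA service withholds an update until
 * that dependency is satisfied. */
#include <stdbool.h>
#include <stdio.h>

struct layer_version {
    int major;
    int minor;
};

struct layer_image {
    const char *name;
    struct layer_version version;
    struct layer_version min_below;  /* oldest supported version of the layer below */
    struct layer_version max_below;  /* newest supported version of the layer below */
};

static bool version_in_range(struct layer_version v,
                             struct layer_version lo,
                             struct layer_version hi)
{
    long x = (long)v.major * 1000 + v.minor;
    return x >= (long)lo.major * 1000 + lo.minor &&
           x <= (long)hi.major * 1000 + hi.minor;
}

/* An OTA update for `upper` is only offered once the layer below it
 * is inside the validated range. */
static bool ota_allowed(const struct layer_image *upper,
                        const struct layer_image *below)
{
    return version_in_range(below->version, upper->min_below, upper->max_below);
}

int main(void)
{
    struct layer_image google  = { "google (kernel)", {4, 2}, {0, 0}, {0, 0} };
    struct layer_image oem     = { "oem skin",        {2, 1}, {4, 0}, {4, 2} };
    struct layer_image carrier = { "carrier bits",    {1, 3}, {2, 0}, {2, 1} };

    printf("OEM update allowed: %s\n", ota_allowed(&oem, &google) ? "yes" : "no");
    printf("Carrier update allowed: %s\n", ota_allowed(&carrier, &oem) ? "yes" : "no");
    return 0;
}

The point is only that each layer advertises what it was validated against, so a kernel push from Google never silently lands under an OEM or carrier layer that hasn’t caught up yet.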
That sounds easy; it SHOULD be easy. Microsoft can upgrade its kernel without breaking OEM and third-party software, so why couldn’t Google?
Kochise
I think they are talking about updating Android, not just the Linux kernel that sits in the background.
The OEMs should just write their custom skins as drop-in launchers that don’t interfere with the parts of Android that Google will update. In any case, many articles have pointed out that most of the delay in the States is due to carrier certification of each update, rather than code modifications and incompatibilities with the new Android update from Google.
Carrier certification of each update? As if Google didn’t know that game? That sounds as silly as a child’s excuse for breaking the pot with his soccer ball…
Yeah, OEMs and carriers should keep their hands off Google’s code and just sit atop the APIs without fragmenting the kernel.
Kochise
I don’t want to belabor the point, but the problem is not with the Linux kernel; it’s with the non-Linux parts of Android: the Dalvik virtual machine, the SurfaceFlinger compositor, and assorted helper applications that sit atop the Linux kernel.
Generally, when you find yourself saying things like “you just have to do some abstract, ill-defined thing” or “it should be easy to do some significant thing for which you have supplied no detail”, there is a very good chance you don’t know much about what you are talking about.
Just saying.
Thanks for your invaluable, insightful comment. Btw, what’s your solution to the problem? Any advice?
Kochise
I’ll enter grumpy mode again, but doesn’t the unstable ABI of the Linux kernel prevent any attempt at providing smooth Android upgrades?
From a practical perspective, non-open drivers made for a certain Linux kernel version are useless for subsequent versions. And those *bleep*ing SoC manufacturers won’t put their drivers in the tree (aka open them up). So, drivers must be rewritten every time a new Android version (with a new Linux kernel version) is released.
Ever wondered why even Google gives up on upgrading their Nexus phones after a few versions? Or why the Nexus S had stability issues when it first got the upgrade to ICS? (Hint: driver issues.)
Unfortunately, Android is doomed in the upgrade department.
No.
In a parallel universe where SoC manufacturers have their drivers in the tree, you are correct. Would be nice if SoC manufacturers opened their drivers, but they don’t want to do it.
If a driver is not in the source tree, it’s probably useless for subsequent versions of the Linux kernel. Isn’t this right?
Then why did unbranded (not carrier-locked) Nexus S devices take months to receive ICS? Shouldn’t ICS have been served a week after it appeared on the Galaxy Nexus? Hint: driver issues.
Not exactly, but you are on the right track. If the driver is in kernel space (in this case as a module), then if the kernel is changed or updated, that particular module binary is mostly useless. You can try forcing it to load, but it will most likely fail or cause stability issues. That doesn’t mean the module won’t compile against a newer kernel if written correctly (usually modules will compile against the same major release), but it does mean you can’t just leave a driver binary in place across an update.

This is where we get into ABI vs API. Linux’s API, while convoluted, is fairly stable. The ABI, on the other hand, might as well not exist, as it is constantly changing with each change in a kernel. If I compile a module for my kernel and you compile one for yours, even if we are on the same kernel version, our modules will not work on one another’s installations. Any slight change in the config has the potential to break a binary module in unforeseen, sometimes seemingly unrelated, ways.
Unfortunately in this situation, ABI is more important than API. If Google pushes an update which changes the kernel (even if it doesn’t update it to a newer version), it will break any OEM driver binaries the device requires.
The ideal solution would be to move as many drivers out of kernel space as possible, but that would be one hell of a refactoring job.
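To make the ABI point concrete, here is a minimal out-of-tree module sketch, a toy hello-world rather than a real driver, assuming a standard kernel build environment. The source compiles against many kernel versions, but the built .ko embeds a vermagic string (kernel version, SMP/preempt options, compiler) and struct layouts fixed at build time, which is exactly why a binary module rarely survives a kernel update.

/* hello.c: minimal out-of-tree kernel module, for illustration only. */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

static int __init hello_init(void)
{
    /* This only ever runs on a kernel whose vermagic and structure
     * layouts match the ones this module was built against. */
    pr_info("hello: loaded against one specific kernel build\n");
    return 0;
}

static void __exit hello_exit(void)
{
    pr_info("hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Illustration of why binary modules are build-specific");

Running modinfo on the resulting hello.ko shows the vermagic it was built with, and insmod on a kernel with a different vermagic is rejected unless you force it, which matches the failure mode described above.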
Sure, but that’s an easy one. To try to get you to buy a new device, of course.
They give up on upgrading their phones after some versions? I could be wrong, but isn’t the Nexus One the only Nexus device not running the latest version of Android? And wasn’t that due to the old hardware (small amount of internal storage, slow GPU)?
Sarah Palin’s head would explode
It’s early days? Android has been out for four and a half years, and slow carrier adoption of new OS updates has been a feature of the Android ecosystem since the beginning. In what sense is this “early days”?
Reading the interview, and noting the pre-announced lack of substantive platform developments at the forthcoming I/O, strengthens my belief that Rubin’s departure was part of a strategic inflection point for Android and Google. The info about how rushed Rubin’s departure was also strikes me as indicative of something coming to a head.
I presume that Page wants to refocus Android on what it can do for Google, just as he is refocusing everything else at the company, and to reposition and redesign Android so that it serves Google’s commercial purposes better, which would be a wise move in my opinion.
It will be tremendously interesting, and of course very significant for the wider tech world, to see what any Android reset amounts to.
Thom, I don’t know how you can not care about this, given that you wrote an article yesterday titled “Samsung is hurting Android”:
One might read this comment as “Google+Samsung could become the new Wintel”. That would be a very worrisome development for their competitors.
Not just bad for their competitors; bad for consumers as well. Having healthy competition in the marketplace is critical.
I personally can’t stand Samsung phones. While the hardware is great and some of the features are nice, the overall software is very buggy. Even leaving the core Android platform aside, so many of the “Samsung value-added features” are buggy: they randomly drain the battery and often seem rushed out just to window-dress a product with bling.
I think it will not benefit consumers, but it will not hurt them either, as long as Google keeps its promise to not lock you into proprietary formats and you can take all your data with you at any time.
That way, if a competitor makes a better product, people can switch immediately, and are not held back by vendor lock-in. Admittedly, migrating paid content from the app store is a problem, but that is the same with all DRM schemes.
Android is open source; Windows isn’t. The x86 ISA is patented, and while the ARM architecture is under a licensing scheme too, it is more openly documented. It’s less of a threat than it was.
Kochise
I find this to be backwards though. x86 is patented yes, but Intel has done an amazing job at providing everyone excellent documentation and standards on everything. When you have an x86 system you know how interrupts will work, how the timer works, how the thing boots, how to enumerate and talk to accessories, how power is managed, and a lot of other things. The documentation is all available, in a form that for example Linux developers can use. Linux runs, on a basic platform level, fine on millions of different PC configurations. New ones are released every day, and the basic functionality just works, because it is all documented and well understood. Just a thing like USB: a huge R&D project by Intel over many years, then they designed chips and software that worked with it, documented how to make devices that interact with it, and released every part of it for anyone to use for free.
With ARM, whose core is just as patented (though easier to license for big players), absolutely none of that holds: every SoC does it differently, and documentation varies from poor (Qualcomm) to a proprietary secret (Samsung). Getting Linux running on a new SoC is a troublesome undertaking that must be performed over and over.
This is not an argument against the sentiment in your post, but Intel gets an awful lot of hate for one thing (the x86 ISA), and I really do think they deserve a lot better.
I never said that Intel doesn’t deserve credit where it is due (R&D, amazing performance, Atom, energy efficiency, etc.), but AMD is no less deserving (first to 1 GHz, the 386 DX at 40 MHz, the Athlon, HyperTransport, a nice multi-core architecture, APUs, etc.).
If Intel had restricted its documentation’s openness, that surely would have been bad, because at the time there were more powerful alternatives (68k, Alpha, SPARC, etc.), and the segmented memory model and the shared FPU/MMX registers needed explaining.
What really saved Intel was backward compatibility and… their FPU, which made possible games like id’s Quake, whereas AMD’s offerings shone in integer performance. The PC architecture was basically doomed for games compared to the Amiga.
But fortunately, MCGA (320×200 in 256 colors) and the Sound Blaster were a “game changer”. If Intel hadn’t opened their specs enough, no one would have been able to overcome so many of the hardware’s flaws (through Mode X, DOS/4GW, XMS, INT calls, …).
Kochise
The Amiga was probably doomed: essentially a closed platform which couldn’t evolve fast enough (versus the PC approach, which would triumph in the end with its economies of scale); most game devs never targeted anything beyond the Amiga 500…