Fuchsia is a new operating system being built more or less from scratch at Google. The news of the development of Fuchsia made a splash on technology news sites in August 2016, although many details about it are still a mystery. It is an open-source project; development and documentation work is still very much ongoing. Despite the open-source nature of the project, its actual purpose has not yet been revealed by Google. From piecing together information from the online documentation and source code, we can surmise that Fuchsia is a complete operating system for PCs, tablets, and high-end phones.
The source to Fuchsia and all of its components is available to download at its source repository. If you enjoy poking around experimental operating systems, exploring the innards of this one will be fun. Fuchsia consists of a kernel plus user-space components on top that provide libraries and utilities. There are a number of subprojects under the Fuchsia umbrella in the source repository, mainly libraries and toolkits to help create applications. Fuchsia is mostly licensed under a 3-clause BSD license, but the kernel is based on another project called LK (Little Kernel) that is MIT-licensed, so the licensing for the kernel is a mix. Third-party software included in Fuchsia is licensed according to its respective open-source license.
Great overview of what Fuchsia is and what it consists of. Google is really experimenting with some different approaches here. Definitely worth a read – before you comment.
At this point, it’s really hard to fathom what Fuchsia’s part is in Google’s strategy, if it has one at all. It’s too big, and involves far too many notable people, to ‘just’ be a research project, but at the same time, they’re literally doing everything from scratch with some radically different ideas here and there, which makes it unlikely that we’re going to see it replace Android or whatever any time soon.
My guess? Google is clearly having issues with Android in that it doesn’t control the whole stack, causing Google to be at the whim of chip makers to maintain support for the Linux kernel, leading to the massive problems with Android updates we all know and hate. Fuchsia seems to be Google’s response to these problems.
I’m not saying Google will replace Android with Fuchsia – I’m saying Fuchsia is the answer to the thought experiment “if we could start over, what would we do differently?”
http://memcpy.io/android-enabling-mainline-graphics.html
This is a joint project between Collabora and Google’s Chrome OS team.
So Android’s custom video driver design could disappear. Maybe Google is finally learning what Microsoft learned in the end: let hardware makers produce unaudited drivers and they will break things.
Windows driver quality only improved when Microsoft started demanding signed drivers with the threat to ban them if the drivers were not up to quality.
You are talking nonsense. Signing with a company certificate has nothing to do with driver quality. Signing is an attestation that the company released this driver. That is all.
Microsoft never threatened to ban any signed driver. BTW, there is no mechanism in Windows to blacklist drivers: if a driver has a valid certificate, it will be loaded. Microsoft doesn’t manage certificates; that is the responsibility of the certificate authority.
> …causing Google to be at the whim of chip makers to maintain support for the Linux kernel…
Can you elaborate on this, exactly who is gonna write the drivers then, Google?
The Linux kernel lacks a stable ABI, forcing chip/device makers to continuously update their drivers. This is a design choice by the Linux Kernel team – and their reasoning is not without merit – but it does mean Google is at the whim of chip/device makers to keep up with the ever-changing Linux kernel.
With their own operating system, Google could do this entirely differently, and be far more clear and consistent in this department than Linux is or ever will be.
This is only partly true.
https://www.kernel.org/doc/html/latest/driver-api/uio-howto.html#wri…
The Linux kernel does in fact have a stable ABI for driver development. There is just one catch: your driver will run in userspace.
In Fuchsia there are no kernel-space drivers, so hardware vendors supporting it will have to live with making userspace drivers.
Samsung provided a classic example of why you don’t want closed-source drivers in kernel space. /dev/mem was removed because it was a huge security risk. Some of Samsung’s code depended on it, so instead of rewriting that code, Samsung added a new device (/dev/something, I can’t remember the name). The new device was a straight-up copy of /dev/mem’s security flaws, and it never had to face third-party auditing.
So the reality is that closed-source drivers need to be moved out of kernel space: either by being open-sourced, so they can be audited and maintained, or by using the stable ABI for userspace drivers, where they can be wrapped by protective frameworks to reduce the possibility of harm.
If those making closed-source drivers used the stable userspace ABI from the Linux kernel, there would be no problem. If those making open-source drivers mainlined them as well, there would also be minimal trouble.
But we need Google to put their foot down: you cannot use particular Android trademarks if your driver support is done wrong and leads to upgrade issues. So closed-source drivers must live in userspace, and open-source kernel-space drivers have to be mainlined.
Fuchsia could be a threat to hardware makers: if they keep doing closed-source drivers in kernel space, Google will remove that possibility completely by changing kernels, never mind the performance hit from being stuck in userspace.
It is entirely possible to ship Android 7 without moving to a newer kernel than the one that shipped when the device had Android 6 on it.
That’s how a lot of manufacturers do it. They keep the stabilized kernel (and thus drivers) and ship the updated userland stack of whatever Android version they want to ship.
The problem is that Samsung, Asus, Huawei, Xiaomi, etc. modify the stock Android userland so heavily, and choose to maintain it out of tree, that merging a new version is absolutely brutal. So delays happen. Plus they have to choose whether they’re going to sell you another phone one year later, or keep updating their older model while making no money from you.
Guess what they’ll choose.
Windows lacks a stable driver ABI; every Windows version breaks lots of drivers.
The most complex ones, graphics drivers, need to adapt to new versions of DirectX and OpenGL, and there are often issues.
Now imagine that the chips are not made by large specialized corporations like AMD or Nvidia, with dedicated software teams refining drivers for many years, but by small companies assembling blocks (“IPs”) from many vendors. And a phone, with radio and GPS, is more complex than most computers.
Switching from Linux to an obscure proprietary OS would only worsen the problem.
Windows has the most stable driver API among all major operating systems in use. Windows is still able to load drivers designed for Windows NT4 and Windows 2000. A stable API has been one of the major selling points of Windows.
API and ABI are not the same.
API is the programming interface; ABI is the binary interface, i.e. binary compatibility.
Have you ever noticed how most drivers come with multiple binaries in different directories, with Windows versions in the names, like amd64/win7, etc.? That’s because many times they’re not ABI compatible.
You continue to spread nonsense. Windows has both API and ABI backward compatibility. Your example is pathetic – you can’t run a 32-bit driver on a 64-bit kernel.
You can still load a driver compiled in 1999 for Windows NT4 on Windows 7 if you disable driver signature enforcement, which has nothing to do with ABI compatibility. The Windows ABI is so backward compatible that I used to have a single 32-bit binary for NT4/2000/XP/Vista/7.
MrMIPT,
Well, sometimes drivers do break for reasons other than driver signing requirements and bitness. I’ve seen a lot of Windows 7 drivers stop working in Windows 8 or 8.1.
For whatever reason, new versions of Windows aren’t always 100% backwards compatible, even accounting for driver signing and bitness. If users are lucky, they won’t notice, because the vendors have already updated their drivers, so it’s not a problem.
Ideally there needs to be a good balance between everyone’s needs.
Due to CONFIG_MODVERSIONS in most Linux kernels, you can at times load modules built for a 2.0.40 kernel (from 2004) in a current-day kernel. Will it load all modules? No. Will your system be stable afterwards? Who knows. It is the same with loading an NT driver on modern Windows: you don’t know if the system will be stable, not all drivers will load, and some will be outright rejected.
Is it possible to intentionally write a driver for the 2.0.40 Linux kernel and newer? The answer is yes. Is the driver horrible? Absolutely.
http://web.yl.is.s.u-tokyo.ac.jp/~tosh/kml/
Yes, you can wrap a Linux usermode driver up into a Linux kernel-mode driver, basically removing the context-switch overhead, and that beast will work from 2.0.40 to current.
The reality here is that there is a stable Linux ABI; it is referred to as the userspace ABI, and it is usable from both kernel space and user space.
There is a catch: you should not mix the kernel ABI with the userspace ABI; that is the path to locking hell.
So if the userspace ABI for writing drivers in Linux sucks, so does Linux’s stable in-kernel ABI for writing drivers, because they are the same thing.
So all the calls for a stable kernel ABI for writing drivers amount to people ignoring that one exists, and people writing drivers not using it. Why? Because the stable ABI for writing drivers is slower than the unstable one.
Sorry, but he’s right and you’re wrong. AAMOF I’ve had to, on many, many occasions at the shop, load seriously old drivers into new versions of Windows, and as long as they are the same bitness and made for the NT architecture? They work just fine.
I’ve loaded Win2K Pro drivers into Windows 7 (a customer had a $4k professional printer that hadn’t been supported since Win2K), and I’ve had to load WinXP drivers into a Win 8.1 system because the capture card he had was from a company that had gone out of business. These are just 2 of the frankly many dozens of examples over the years, and the only issues I’ve had were unpacking the drivers, as some of them used 16-bit installers that haven’t run on modern Windows in ages.
This is one of the big selling points of Windows: you can keep your expensive hardware, and I’ve done it countless times. Linux lacking a stable ABI frankly really hurts it, because if you don’t have a dev dedicated to keeping that old hardware alive? Well, a user here gave just one example of what a lack of an ABI gets you…
http://www.osnews.com/permalink?562009
This is mostly fiction, yet it has been a selling point. A modern version of Windows loading an NT4 driver is totally out of the question.
https://developer.microsoft.com/en-us/windows/hardware/windows-drive…
Look at the development kit. Windows 7 through modern Windows can at times use the same drivers.
XP back to NT 3.5 can at times use the same drivers. If you happen to have Vista, you are kind of in no man’s land.
FUSE was added to Linux in 2005, four years before Windows 7 in 2009. First-generation FUSE drivers can be run on a current-day Linux kernel with no problems.
We are now getting to the point where the stable driver ABI of Linux is more stable than the Windows driver ABI. There are a lot of embedded systems mandating long-term maintenance where all the non-mainline drivers are userspace drivers on their Linux platforms.
With the existence of FUSE and other userspace ABI options for creating drivers, the oldest being FUSE, and with it being more stable than the Windows-provided driver ABI, this makes me wonder whether the Linux kernel developers’ call was right all along.
Basically, if you stop quoting fiction and start looking at Windows driver compatibility and Linux compatibility across versions, a very interesting story comes out.
Including Windows binary drivers doing the same kinds of things Allwinner does to the Linux kernel.
Both stories end up reading very much the same. Windows allowed binary drivers into kernel space; Windows suffered failures on kernel updates from drivers doing things upstream was not expecting; so Microsoft ended up having to demand a signed-driver program.
Linux kernel developers saw the same problem: a binary driver in kernel space equals being unable to update the kernel. It’s not as if Linux developers did not try to implement a shared binary kernel driver interface between Linux, BSD, etc. before throwing in the towel on the idea of kernel-mode binary drivers.
https://www.cs.hs-rm.de/~kaiser/Slides/gutheil-weisser.pdf
A lot of different groups are using Linux usermode drivers these days because the ABI is stable. They are finding only about a 5 percent hit from being in userspace compared to the driver being in kernel space. This sounds bad until you realise it is still, on average, higher performance than Windows can provide; maintaining all those unchangeable in-kernel structures comes at quite a performance cost to Windows.
So a stable ABI for Linux driver development exists and is in use. The question now is how to get the closed-source driver makers off their kernel-space addiction and onto the userspace ABI, and to get the bad open-source driver makers out of the habit of just dumping code online and never mainlining anything.
As AMD found when attempting to get its DC functionality into the Linux kernel, a huge number of faults get caught in the peer-review process.
One thing to do is make it very clear that, as end users, we don’t want binary drivers in Linux kernel space, and to have Google and vendors demand the same thing. There is not enough performance difference between a Linux kernel-mode driver and a Linux usermode driver to justify the security cost and the restrictions on kernel updating that a binary driver brings.
My point of view is that the Linux kernel has delivered a stable driver ABI. It is now all about getting the closed-source driver makers to use it. Use will also drive feature improvements.
Hi,
I agree – we should abandon Linux and use a micro-kernel.
– Brendan
I am not exactly saying that. There are some classes of driver that will never suit a microkernel, because they need direct CPU control features. That is somewhere between 1 and 10% of all drivers.
My opinion is that the Linux kernel could be slimmed down. A pure microkernel does take performance hits with particular classes of drivers, which is why Microsoft went hybrid in the first place. We need a saner hybrid of microkernel and monolithic design: everything in kernel space needs to be under one party’s control, a party who can see all the source code, so they can actually understand whether a change in kernel space will have horrible effects.
You need microkernel interfaces to the userspace drivers, and a monolithic design in kernel space, so the system hits the sweet spot of maximum possible speed with the highest possible security.
Choosing pure microkernel or pure monolithic means choosing a design with weaknesses the other one does not have. The hybrid that is the NT kernel design shows how to get the worst of everything, and is why starting processes is so slow on Windows.
By the way, the performance overhead of usermode drivers under Windows averages 15% compared to running inside Windows kernel space. That is 3 times worse than a usermode driver under Linux. There is something very important for performance about keeping kernel space monolithic instead of having kernel space follow microkernel ideas.
So a microkernel userspace with a monolithic core is the ideal. The Linux kernel could over time be converted to that ideal; getting there is just a matter of accepting what history has shown us.
Why is a stable kernel driver ABI fiction? It’s quite simple: when you are running in kernel space, you can directly access all of kernel-space memory. Hello, problem: if my driver presumes kernel memory looks like A, as it did in the first version, but it now looks like B, and my driver modifies it, get ready for a kernel crash. No matter how much you document how drivers should use the ABI, you will always have drivers in kernel space, due to optimisations and the like, presuming the wrong things and altering places you were not expecting. The mechanics of the hardware do not allow you to provide stable binary drivers in kernel space, end of story.
With the microkernel idea of drivers in userspace, the memory they access is controlled; it can even be a fiction maintained by the kernel. What the userspace driver sees looks like A, but a wrapper in the kernel maps that onto the B the kernel is now using. The userspace driver cannot be optimised to access anything that was not intended to be exposed. So yes, you can write a userspace driver ABI that in fact works, and that even supports multiple versions of the userspace driver ABI being used side by side.
So it is about time people stop asking for the impossible and we focus on what is possible.
oiaohm,
Except that nobody has suggested doing anything “impossible”.
I know linux evangelists feel the need to dispel every criticism of linux, but still some criticism is warranted. Can you admit that the truth is somewhere in between and that there are both pros and cons to the ABI choices linux makes? Linux’s non-stable ABI is simply not the best choice for all users & developers, some of us really would benefit from having stable ABIs.
This is playing the user card. It still does not provide any technical method for how it can be done. If there was no need, there would not be UIO, which can pass a full PCI card’s interfaces to user space so the driver can live in userspace.
The Linux kernel provides a stable userspace ABI while having an unstable kernel-space ABI. Calling the kernel-space ABI unstable is simply being truthful: no matter what you do to try to make the kernel-space ABI stable while it is binary code, you will never succeed.
There is one thing in Linux kernel space that is ABI stable.
http://blog.memsql.com/bpf-linux-performance/
Yes: the “Berkeley Packet Filter” (BPF), where the kernel itself compiles the bytecode by having a JIT in the kernel.
So the only examples where a kernel-space ABI has worked and stayed stable are those where there is a JIT and the drivers are built from bytecode to native code by the kernel. You find such examples in many OSes that are not Linux.
But you are not asking for a JIT to write kernel-space drivers in, accepting that you will suffer a loading overhead. Stable/long-term Linux kernel versions do provide a stable API for building from source.
So there are really only two valid choices: 1) build your driver from source using the stable API; 2) somehow make a JIT or AOT engine for drivers and get it accepted into the Linux kernel.
Do note that the JIT/AOT solution does not mandate a stable kernel-space ABI. The JIT/AOT engine only needs to know its own ABI and how that maps to the kernel-space ABI. This does not let the code run off interacting under the assumption that something is A when, after a change, it is now B, because the JIT/AOT build step provides the wrapper that prevents failure.
Making a JIT/AOT solution would be a lot of work. It is simpler to tell people to mainline the source code or make userspace drivers. Really, userspace drivers need to be pushed to their limit before the JIT/AOT path is followed.
oiaohm,
This view stems from pure ideology and not at all from feasibility. Now, I don’t really care that you have a strong ideology, I’m absolutely fine with that, but it is not OK to let your ideology blind you to the feasibility of other approaches; that’s willful ignorance. There’s no need for “magic”. Your continuing fallacious assertions about the impossibility of a stable ABI really make me think there is something you fundamentally don’t understand about the subject matter. Stable ABIs can in fact work and do in fact work. Is it too optimistic on my part to ask you to learn more about this, so that next time this topic comes up we can have a more interesting discussion rather than just denying things that are true?
History tells us something important, over and over again: developers will use whatever they can access, whether it is defined as stable or not. So if you write a so-called stable kernel ABI and people develop drivers against it, they will find and use every bit you have not defined.
This makes the belief that you can write a stable kernel ABI and let developers create drivers against it while it remains stable a fiction. It’s nothing more than a fairy tale that does not match the real world.
A stable userspace driver ABI is one where you can control what the developers have access to. The only operating systems with a kernel driver ABI that is in fact near stable are ones where that ABI also works in userspace. So while a person is developing and debugging the driver, they are running it in userspace under restriction, and they cannot access undefined struct A that an update changes to B, with the driver causing a kernel panic or the like.
If you are after stability, you are after a usermode ABI for writing drivers that can possibly be run in kernel mode for higher performance. The right solution exists in the current-day Linux kernel; it just needs to be developed out more.
So instead of wasting time asking for a stable kernel ABI, the effort should be put into the Linux usermode ABI, to make it more useful and better performing for driver development.
I am totally sick of people asking for a kernel-mode ABI who have not spent the time, as I have, to understand why it is a really bad idea, and who have not looked at the examples that did manage to get stable code out of driver makers in kernel mode, only to find it was because the drivers also ran in usermode. So you really don’t want a kernel-mode ABI. You want a stable generic driver ABI where the driver works in both kernel and usermode for third-party developers: usermode to make sure they are not doing anything horrible.
Once you start looking at the solutions that work, you will normally stop asking for a stable kernel-mode ABI, particularly on something like Linux, where the usermode ABI is already usable in kernel mode.
I have not said a stable ABI is impossible. I have said a stable kernel-space ABI is impossible. This is the reality once you allow for what driver developers will do.
So there are ways to make a stable ABI for drivers that is usable in kernel space, but it is not an ABI of kernel space; it is an ABI for user space that is usable in kernel space.
Microsoft’s driver ABI is well documented, yet it has stability issues. Look closer and you see the problem: you cannot choose to run the driver in userspace, where access is restricted, to find out whether the driver is doing things that are not forwards compatible. This is a repeating story, operating system after operating system, among those that have attempted a stable kernel-space ABI for drivers. You have to allow for human error, and at some point change will have to happen.
When something happens enough times, you have to accept it like a law in science: we may not know why, but that is what is going to be the result every single time you attempt it.
We talked about this last time… Of course things will change over time, but there do not have to be breaking changes at arbitrary kernel updates. Planning ahead goes a long way in software engineering toward resolving exactly these kinds of issues. Heck, that’s another legitimate criticism of the Linux project – the lack of planning by project leadership – but I’m sure hell has to freeze over before you actually admit it.
Somehow you’ve created a bubble where stable *kernel* ABIs are impossible, god knows why you’ve done this, to anyone outside the bubble it really is a silly assertion because they do exist as a matter of fact in the real world. It’s such an esoteric thing to deny, but I have learned this much: it’s no use correcting people in their own bubbles since they literally reject and deny the truth in order to defend their own idioms. Maybe I should stop trying to fight the bubbles and instead find a way to take advantage of them like others have done!
You should not guess my answers. The lack of documentation in the Linux kernel is a clear sign of a lack of planning. The absence of documentation on how things should be done in kernel space increases the odds of something using a structure in unexpected ways.
It is funny that everyone talking about wanting a stable kernel ABI has not started with the most important thing: documentation. You need to document what you have in order to work out what you should keep.
Asking for a stable kernel ABI without that reflects the very lack of planning and research that Linux already suffers from and does not need more of.
http://lkml.iu.edu/hypermail/linux/kernel/1604.0/00998.html
The reality is that some fool posts this to the kernel lists from time to time as well, asking for a stable kernel ABI.
https://www.kernel.org/doc/html/latest/process/stable-api-nonsense.h…
So far nothing said here alters any of the reasons, documented in the Linux kernel, against attempting a stable kernel-space ABI. Yes, that documentation is from 2004.
The reality here is that people post asking for it without understanding why not, and without realising there is not a single example where the idea of a stable kernel ABI has worked across multiple kernel revisions. Yet you can find many cases of usermode drivers that have done just that.
For all that Linux is poorly documented, people are still commenting without reading even the fragments of documentation that do exist.
So the answer on a kernel-mode driver ABI for Linux is no, never happening. No one has ever put up an example the Linux developers can look at and say: hey, this form of kernel-mode driver ABI in fact works.
As you said, the Linux kernel lacks planning and project leadership. People asking for a kernel-mode driver ABI are putting up no leadership or functional prototype to back their point of view. And when I say functional prototype, it cannot be Windows, because it does not take long to find examples of where Windows’ so-called stable driver ABI is not stable.
I have told you the solutions that will work and that are able to be merged into the Linux kernel.
Alfman, if what you call my bubble is false, put up the working examples. Don’t be surprised if I rip them to bits; I have watched enough people make the same mistake on the kernel mailing list. If it appears on the Linux kernel mailing list as a discredited example, please don’t bother with it either.
oiaohm,
Wow, you are alleging two ABI breakages in over 15 years… Linux users should be so lucky! Seriously though, I’m not even calling for that length of stability; that’s too far. A one- or maybe two-year stability period would be a huge improvement over the chaotic process in place with Linux today. I wouldn’t really care that you choose to deny it, except that Linux is worse off for all the people who’d benefit if it had better kernel ABI stability, and that’s unfortunate.
Hopefully as Linux keeps growing, leaders will realize that ABI churn is counterproductive to the community. Then again, it fits hand in hand with the ideology that non-mainline code should be purposefully difficult to maintain. Ugh.
This is you going after the wrong problem. A stable ABI means either locking down the compiler or restricting drivers to the userspace ABI.
In the userspace ABI there is protective wrapping, so you pay a price for it. The fact that a userspace ABI driver is slightly slower than a kernel driver using unprotected interfaces means hardware makers refuse to make them, because it would make their products benchmark worse.
See “Stable Kernel Source Interfaces” in stable-api-nonsense.html: most of the API breaks in LTS kernel versions of Linux are security issues being patched.
Again, how do you deal with this? Again: use the userspace interfaces.
The next problem: to have a somewhat stable API you need good documentation, so people know how they should be using the API and don’t use sections that are internal-only and subject to change. There are plenty of examples where the Linux kernel’s lack of documentation has caused failure. Allwinner is about the best-known case, doing some insanely creative workarounds because they could not locate the kernel feature they should have been using, then running into nightmares when locking across the kernel changed.
How many people do you think have been paid to work on the Linux kernel documentation as a whole in the entire life of the project? The answer: one, and only for the last 5 months.
Part of the reason Windows drivers are so good is how much documentation Microsoft provides to driver developers.
Alfman, the reality is that a stable kernel driver ABI is fiction; it does not happen in the real world.
A well-documented driver API, though, goes a long way toward reducing the number of failures.
The bigger fact most people miss about why Microsoft drivers are so good on average is documentation. Some of the drivers that people think work across multiple versions of Windows are detecting that they are running on different versions of Windows and using different code paths. This is how you get a Windows XP SP2 driver working perfectly on Windows XP with no service pack.
So the complete argument for a stable kernel ABI is based on misinformation; it effectively ends up asking for the impossible, which no one has ever done.
Alfman, your comment about only 2 breakages in 15 years means you don’t have a clue how unstable the Windows driver ABI really is. So you are selling an idea based on fiction. To make progress you have to base it on proper facts, and proper facts include understanding what is possible and what is impossible.
A stable kernel API is far more possible. It is still a hard process when you have hardware vendors saying their drivers must have the highest possible performance, and it is absolutely impossible without decent documentation on how it should work.
You would need a stable kernel API before you could even start negotiations with distributions about compiler unity for a kernel ABI.
Basically, the requests are totally out of order:
1) Kernel documentation
2) A stable or stable-ish API in the kernel that driver developers are willing to use.
Number 2 cannot be done without number 1, or you will not effectively know which APIs are stable.
Both would have to happen before you could even look at an in-kernel ABI for drivers.
Alfman, you want to point at me and say I am making things worse for Linux users. The reality is that all your actions are doing is keeping the debate stuck on a topic that:
1) Cannot go forward, because it requires dependencies that don’t exist, the most important being proper kernel documentation.
2) Does not have, and never has had, a functional example to prove the idea is truly possible.
3) Takes focus off the important areas of Linux kernel documentation that are missing and that are causing a lot of driver failures. The lack of an ABI takes the blame, instead of: hey, how to do this was never documented, so the driver developer did it wrong because they did not know better.
Something to remember: Allwinner’s developers sit behind firewalls, so accessing the Linux kernel mailing lists is insanely hard for them at times. They therefore depend massively on the documentation shipped with the kernel, and its poor state is giving them very big problems.
So the ones who are not helping the Linux world move in the direction it needs to go are those who keep bringing up the dead horse of a kernel ABI. The stable kernel ABI talk is shown impossible by history, and pointless because it is not going to address the problem.
A stable kernel ABI, even if it were possible, would still result in drivers that fail to work without decent documentation on how developers are meant to use it.
oiaohm,
That's utterly false, and I can say that with authority as a former Windows kernel developer myself. You want it to be true, but continually lying to yourself and others is never the path to enlightenment.
Sorry, I can tell you without question that your idea is false. I have been in the maintenance process for over 4,000 computers of mixed breeds.
So I have documents on every one of those driver breakages. What happens is a rose-coloured-glasses effect: you worked on drivers properly, using the documentation properly, so you never touched Windows' unstable API/ABI. Not all developers are that well behaved, and the real-world documentation shows this.
So, Alfman, it is about time you stopped lying to yourself: take a job doing maintenance on a large number of machines of different makes and watch what happens.
The real world does not neatly match your theory.
The reality with most of the webcam drivers that used to work but stop working in Linux when the kernel updates is that, if you look at the driver, it is doing some hack instead of the recommended thing. So when the automatic patch for an API change is applied, it does not apply correctly and breaks the driver. What is causing this problem? Lack of documentation.
The thing to consider here with most of the Linux breakages is why only models X, Y, and Z broke, when there are other drivers that should be using all the same interfaces and remain functioning correctly. A quick look at the patches for the stuff that breaks in Linux leads you to a documentation problem: developers could not know what they should have been doing.
You might be a Windows kernel developer. I have both Windows and Linux maintenance experience, and experience tracking failures back to their causes.
Besides, some of the faults in the real world are just plain stupid. Like a Dell machine reporting a memory-stick slot to the OS while internally missing the pull-up/pull-down resistors on it, so the controller goes haywire when activated. This fault causes most people using those Dell laptops with Windows to turn off USB 3, because the Windows driver developer never considered the possibility that something like that could ever happen.
Stable ABIs are not "my idea" though, not by a long shot; lots of technically proficient people like me want Linux to have more stable ABIs. You don't, good for you, but that doesn't make us wrong either. You're acting a lot like Kellyanne Conway with her alternative facts. The sooner you admit that your view is *just* an opinion like ours rather than a fact, the sooner we can gain one another's mutual respect.
You call yourself technically proficient. In reality you are an impostor who is not proficient at all. You have never built a proper stable ABI for an OS, so you do not know what is required.
https://www.kernel.org/doc/html/latest/process/stable-api-nonsense.h…
This lists all the technical reasons why it cannot be done at the current time, including reasons why it can never be done.
Alfman, have you ever built a stable ABI from scratch, to know what is required? The answer is no.
Have you ever looked into what the Windows driver ABI is really like, and how it manages to get only kind of stable? Again, no.
So you have presumed you have a working example that in fact does not work, and that can never be made to work, in its current form, with an open-source OS where different parties can choose to build with different compilers.
It is about time you learned the true facts of the matter. The idea of a stable ABI in kernel space is fiction/nonsense to anyone who has really done it, and by "really done it" I mean attempted to build one from scratch, or attempted to find one.
You find things like Windows that give a kind of appearance that one might exist. Then you have to take note of the conditions imposed on driver developers. With Windows, you will use the compiler from the WDK. If you want to be signed, Microsoft will inspect your driver, and if they see you using any undocumented ABI/API they will not sign it. Even with all that, they still miss cases.
The only examples you will find of an OS with an open-source core and stable third-party binary drivers written in native CPU instructions are micro-kernels or hybrid kernels; that is, where the drivers sit on the OS's user-space API/ABI.
As for in-kernel ABI examples, the only ones you will find that truly work are things like the Java-based OSes using a JIT in kernel space.
If it looks like Windows, it is in fact broken and unworkable. I asked you, Alfman, for a working example. There are four OSes in existence that have working examples, but they are true hybrids, "true hybrid" meaning the drivers will run in either kernel space or user space. Three of them are in fact open source.
Windows is not a true hybrid, because its drivers cannot be made to run back in user space for diagnostics in all cases.
True stable ABIs can only be done in a limited number of ways.
A stable kernel ABI that is also high performance is truly impossible.
There are such things as mathematically secure operating systems. There are such things as mathematically provably stable ABIs.
Most dynamic library loading in user space uses mathematically provable methods.
Stable ABIs are not your idea, right. But you have never created one, and you do not have a clue how to even go through the process of working out whether one is in fact designed to be stable or not.
Really, I am sick of people who are not technically proficient enough to know what is required to make a stable ABI, and who have not read what Greg Kroah-Hartman wrote back in 2004 explaining the problem in simple terms.
A key bit of the user-space stability magic is hiding in the agreed-upon dynamic loader of libraries; it is also hiding in the syscall setup. You know that a statically built binary under Linux can be slightly faster than a dynamic one. Why? Because the static binary gets to skip all the dynamic-loading overhead.
So the rule is that a stable ABI will cost you performance. There is no way to have a stable ABI without an abstraction layer.
If you do not have the abstraction layer, as Windows does not, you must lock down compilers and everything else to get a somewhat stable ABI that, under close inspection, is not stable at all.
Also, you do need documentation and test cases.
The Linux kernel, even though it loads modules, has internal interfaces that mostly behave as if you had statically linked.
So a multi-kernel driver for Linux will for sure always be slower than a driver targeted at a particular kernel. The abstraction layer already exists in the Linux kernel where it is required: keeping the user-space ABI stable.
Alfman, using Windows as an example of what Linux should do with its ABI straight up proves you are clueless.
oiaohm,
Note that I have not used Windows as an example of what Linux should do. You keep bringing up Windows, and I was responding to your claims. I don't think Windows should really be a deciding factor in what Linux could do, so let's take it out of the equation and stick to the merits of a stable kernel ABI in Linux for its own sake.
Not for nothing, but this discussion would have gone a lot better if you had stated your concerns with having a stable Linux kernel ABI rather than flatly asserting the impossibility of one; that is the untenable position I'd like you to concede. But I realize that's not going to happen, you are clawed in too deep to those earlier assertions, so I don't see any path to recovering this discussion if you aren't going to admit that your opinions aren't the same as facts. I expect you'll agree that these discussions are not at all fun or insightful, so for both our sakes I'll leave it here, but please do keep this in mind for next time.
You continue to show that you have zero experience in kernel development (any: Windows or Linux).
Drivers failed on updates not because the ABI was changed, but because the drivers had bugs that manifested themselves on new kernels. There were two reasons: either a call to KeBugCheck had been added to BSOD the system when the kernel detects inconsistencies, or a change in timing/internal implementation revealed the bug.
Signing was not an attempt to improve driver quality but an attempt to stop kernel malware from spreading. The signing procedure doesn't check the quality of a driver; it just signs the code with a certificate, which is usually not available to malware developers.
Well, I must have missed this. I know that many people confuse things and believe that a driver for WinXP should work on Win10, and when it doesn't work, they draw the conclusion that "Windows has no stable ABI." Well, that is wrong. Windows is compatible within the same version, i.e. XP drivers work on all XP versions, Vista drivers work on all Vista versions, etc. Microsoft never promises that WinXP drivers work on Vista. Sometimes they do, though.
In all the decades I have used Windows, I have never had driver problems within the same version. I don't know what you are referring to, but most probably you have mixed up concepts. Windows has excellent driver compatibility within Windows versions.
However, Linux is a mess. Upgrade the kernel and things break now and then. Have you missed all the forum threads about sound stopping working, or a USB camera stopping working, when people upgrade the Linux kernel? There are loads of them. Just look at any random Linux forum and you will see lots of such threads.
I don't really know which reality you live in, but even the largest vendors on the planet have huge problems with Linux breaking compatibility. The Linux kernel is a moving target, so it is impossible to catch up and get a stable Linux environment. That is why LTS exists: to try to provide a stable Linux. Why? Because Linux is unstable by design.
I don't know which reality you live in, but LTS exists. You should look up why LTS exists and what problem LTS tries to solve. Hint: it is not stability.
That is not true. XP drivers made for SP2 or later work on all subsequent XP versions. These fine bits of difference only people repairing systems become aware of, and this goes for all Windows versions: you get to know when a particular kernel break happened and make sure that all the drivers you are using are newer than it. If you have a laptop from a vendor who went out of business at XP SP1 and you attempt to run XP SP2/SP3, you will have trouble.
Please stop pushing this fiction; it does not match the reality that people working on repairing machines have to deal with.
Name me one pure micro-kernel that can match Linux in performance; name me one pure micro-kernel that is used in any major operating system.
The ideas behind micro-kernels are nice, but their performance has always been an issue. Using userland drivers is essentially what micro-kernels do: the drivers run in a ring outside of the core OS.
The issue here is simple: Google needs to put its foot down and force hardware vendors that want to supply Android hardware to release their drivers as open-source software. They should have at least forced this on Nexus-based devices and should be enforcing it on Pixel-based devices now, but Google are Google and for whatever reason they don't.
We have these same debates all the time, the same conclusions are drawn every time, and the landscape hasn't changed in the last 20-30 years.
I do believe the Linux kernel will eventually get usurped, but by new hardware that shifts design, stuff like the memristor:
http://electronics360.globalspec.com/article/6389/despite-hp-s-dela…
Even then, the kernel can be adapted for such tech, because it is and always has been open source.
I don't really understand what problem Google is trying to solve here, but good luck to 'em.
You have no idea what Windows driver development is like, and I doubt you have any expertise in the Windows kernel.
Windows doesn't have problems with backward compatibility. You can still use an old DDK to build drivers for Windows 7 if you do not need new functionality: all definitions and structures are backward compatible.
The Right Way To Do It is for the chip maintainers to get their code in the mainline kernel. It worked for NICs, it’s working for GPUs, it worked for everything in kernelspace on x86 land except for PowerVR and Broadcom being cock-gargling retards as usual. The lack of a stable kernelspace driver ABI is explicitly designed to force people to put their code in the mainline kernel or put the proprietary stuff in userspace (Nvidia) if they want proper long term support for their hardware, and it works. This is just the embedded hardware industry being a bunch of squirrely retards about software as usual.
Just a minor sidenote, but is “retards” really the best word choice, here? It always sounds at best pretty tacky to use a word that in modern times is a slur against the developmentally disabled. That being said, I thoroughly admire your creativity there. May I suggest the alternative of “cock-gargling cretins”? You can stick far more venom into the sound of “cretin(s)” or “cretinous”, and thanks to the euphemism treadmill it actually has the exact same etymology – a word that originally was a medical diagnosis for abnormally low intelligence, then evolved into an insult. But, unlike the word “retard”, there’s almost certainly very few people still alive who were ever diagnosed a cretin, meaning there’s almost no chance of making people outside the cock-garglers you quite rightly take issue with feel insulted by your comment… fantastic, isn’t it? Whereas using “retard” means you’ll have people like me whining about your word choices. As for myself, I have Asperger’s (I’m sure I must be the only OSNews reader to have it), which despite the high intelligence often associated with the condition apparently makes me come under some UN treaty on “the treatment of retarded persons”, and that’s why the word personally bugs me. Found that lovely bit of trivia out when I was in a “specialist” autistic college where we were generally pretty badly mistreated, with a heavy dollop of patronisation. Apart from your word choice though, I completely agree with your post, closed source drivers have no business being in kernel space!
> Just a minor sidenote, but is “retards” really the best word choice, here? It always sounds at best pretty tacky to use a word that in modern times is a slur against the developmentally disabled.
That was the exact note I was going for, considering they seem to have a bunch of mentally disabled people in charge of their software development process.
The problem is not "controlling the whole stack"; the problem is the usage of proprietary drivers. Google is big enough to tip the scales in the other direction, but unfortunately chooses not to; the ARM ecosystem could be a lot better at this. There's nothing inherent and exclusive in Fuchsia that can solve this, it's mostly a political issue.
Having said that, there is the issue of the lack of stable kernel interfaces in Linux. If those existed, the same driver could be used on newer versions of the kernel. Linux developers don’t want this and I can totally understand why: the code needs to be able to evolve without being artificially stuck in the past. And it’s only an issue with proprietary software, so screw it.
I agree with this assessment of the problem. Google just needs to lay down the law that everything has to go into mainline Linux. Stop the proprietary driver mess.
It is not a lot of work to keep a driver tracking mainline assuming that it has been committed to mainline. That’s the key bit — get it committed to mainline. Based on my work, it is less than an hour a week for a single developer to keep a driver in mainline functioning.
I just gag continuously working on Allwinner code where their engineers did not understand how something worked in the kernel. So instead of asking on LKML they implement some new crazy scheme to work around the issue. If they were required to submit these drivers the maintainers would explain how to use existing solutions in Linux to solve their problems. Instead these crazy solutions get implemented and they become difficult to forward port onto later kernels. The answer here is — submit your code to mainline!
PS – the Allwinner engineers are not crazy. They come up with these crazy solutions because they work in isolation and can't get feedback.
And it can be even easier than that. If I’m not mistaken, there is a rather large group of upstream Linux developers who will write the drivers for you, free of charge, providing you give them the appropriate documentation. They’ll even sign NDAs, if needed. I really don’t see the point of keeping it proprietary.
Yeah, about that... AMD gave them the specs over five years ago; how's that working out for them? Last I heard, every Linux site still tells new users to buy Nvidia, despite Nvidia being so anti-FOSS that Linus actually gave them the finger in the middle of a presentation, so my guess is it didn't work out too well.
I mean, sure, it's a nice thought, and it would probably work for something simple like a USB webcam, but a seriously complex piece of silicon with billions of transistors? Yeah, not so much.
Bollocks. Even AMD and nVidia are contributing to radeon and nouveau respectively, due to the community's momentum.
The future of productivity is seamless access to your data across modalities (desktop, laptop, tablet, phone, etc.). ChromeOS isn't cutting it, because its application infrastructure isn't there when you compare it to what is available on Android. One OS with one set of applications that can be used across those modalities is what is needed, and if Google can transition their current suite of applications to that system, they will be able to meet the vision that Microsoft is so desperately attempting to provide with Continuum.
Today is April 1st (April Fools Day)
Yes but this isn’t the announcement of the project nor the first article on it.
True, but the initial release of code was last August
https://en.wikipedia.org/wiki/Google_Fuchsia
But it still sounds crazy to me
Is this a quote from Microsoft marketing? Because seamless access to data in no way implies the need for the same applications on all "modalities"…
Sorry, I live in the real world where an application is the data.
Fuchsia is Google's long play, along with RISC-V or an ARM licence (or close collaboration, like we see with the OP1).
This. Google doesn’t want to see a replay of the Windows NT scenario, aka the entire company being stuck with Android and its assumptions (which are going to be obsolete in 8 years) forever.
If Microsoft had started re-writing Windows NT around 2003 or so, there would be no font handling done in the kernel today (so we’d have been spared numerous vulnerabilities) and we wouldn’t have Windows hitting the disk so often and using the swap space so heavily, because we don’t live in an era of 32MB main memory anymore (unlike the era Windows NT was originally designed for) and those practices are now a liability instead of a benefit.
Been using Android since 2.x series. I’ve had phones from HP, HTC, Samsung, and now I’m on a Nexus 6P phone and use a Nexus 9 tablet.
I don’t actually download that many apps– I have a core set that I tend to use across my devices.
But Android, especially on the tablet, gets slower, and slower, and slower, and less reliable. My Nexus 9, with Chrome (and no extensions) installed crashes regularly, and is painfully slow to reload.
My previous two tablets went down the same path of gradual entropic failure– I was hoping a “Google” tablet would be different.
The December security update broke Bluetooth synchronization between my car and my phone– And Google won’t talk to me unless I reset to factory settings to prove their update (the only thing I installed on my phone that month) broke my phone. 7.1.2 might fix it– but it’s held up in development hell, and if I download the beta, I have to wipe my phone to reload the stable release.
I haven’t seen an OS that was this unstable since Windows ME.
I know there are tools that will supposedly refresh my ram, and improve performance– but why am I still reliant on third party tools when I’m running Android 7.x?
I’m sorry but the Nexus 9 is not a good example for Android performance on a tablet. The device was rushed out and they had to disable Android’s out of memory management and instead relied on the OOM of the Linux kernel which is pretty aggressive. They also had to fiddle a lot with the Nvidia SoC to get it working somewhat reliably on Android. It was a terrible device. The Pixel C fares a lot better in terms of performance and reliability.
Wow, you Android people are prizes. First you all say that Android isn’t crap and really, you should just buy Google devices (Nexus or Pixel). But now, suddenly, the OP’s device is not a good example even though it is a Nexus? It’s the same old “blame the user” crap I’ve been seeing from the f/oss community for decades. Pathetic.
I wouldn’t put it quite that harshly, but, yeah– the Nexus 9, with Google’s version of Android, is supposed to be “The One True Version” of Android– and if it’s crap, that’s a problem.
Similarly, if the Chrome browser is widely reviled as crap on Android, and it’s supposed to be the flagship browser, that’s a problem.
For the record, it’s not just FUSE (Filesystems in USErspace) and CUSE (Character devices in USErspace) that Linux offers for userspace drivers.
For example, there’s also uinput (an API for user-mode input device drivers) and libusb (which leverages USB’s layered design to allow the device-specific code to be written in userland, relying on only the OHCI/UHCI/EHCI/XHCI transport-layer drivers to be in the kernel).
For example, xboxdrv (the more featureful userland alternative to the Linux kernel’s XBox controller driver) uses uinput, as does the part of g15daemon which remaps the uselessly non-standard device endpoints for the Logitech G15 keyboard’s macro and media keys into something useful.
…and devices like my thermal receipt printer are supported via userland code which sits on top of libusb.
…not to mention userspace drivers being the norm on Linux for printers (CUPS), scanners (SANE), at least one of the IR remotes supported by LIRC, 3DConnexion NDOF pointing devices, front panel LCDs (lcdproc), etc.
Maybe Google got tired of “System D” trying to take too much control …LOL!
OR it could be a UEFI experiment…
“This project contains some experiments in software that runs on UEFI firmware for the purpose of exploring UEFI development and bootloader development.”
–https://github.com/fuchsia-mirror/magenta/blob/master/bootloader/REA…