Jono Bacon has written an interesting article debating the way the kernel currently handles device drivers. Is there a better way to handle devices in Linux?
Dell’s DKMS has the ability to build and add modules outside the kernel tree, but I am against letting people add architecture-specific binary blobs into the kernel. It limits development heavily. Making things easier for binary drivers would just make sure vendors never even think of it as a problem.
Why not compile the kernel modules, then? I mean, most drivers are OSS anyway, so why not download the source and compile? Kernel modules are typically quite small, so there wouldn’t be that much waiting for the user.
The one disadvantage of this approach that I can see is that the distros would have to install GCC and a working toolchain with the base system… still, it could work.
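To give a rough idea of how small these module sources usually are, here is a minimal, hypothetical out-of-tree module (the hellodrv name is made up, and it assumes the headers for the running kernel are installed):

/*
 * Minimal out-of-tree module sketch. With a one-line Makefile containing
 * "obj-m := hellodrv.o", it builds against the running kernel with roughly:
 *   make -C /lib/modules/$(uname -r)/build M=$PWD modules
 */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Trivial example module");

static int __init hellodrv_init(void)
{
        printk(KERN_INFO "hellodrv: loaded\n");
        return 0;   /* 0 = success, module stays loaded */
}

static void __exit hellodrv_exit(void)
{
        printk(KERN_INFO "hellodrv: unloaded\n");
}

module_init(hellodrv_init);
module_exit(hellodrv_exit);

Compiling something this size in the background during device setup would take seconds on most machines.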
If this is such a bothersome issue for some people, then why don’t they create a driver driver? Let me explain. It’s obvious that the core kernel developers are not interested in maintaining a fixed, stable API/ABI between kernel releases. They have other reasons besides making life easier for proprietary binary modules, but that’s not the point here.
Now, such a stable API/ABI would be very useful for FOSS modules as well. The arguments are the same as for having a strong LSB: ease of development and support for FOSS code. It just happens that proprietary vendors will also benefit.
So, between the FOSS and proprietary vendors, why doesn’t someone write a driver API/ABI layer that remains solid while tracking the kernel’s changes? It would present a solid side to module authors and a fluid side that follows the kernel.
I don’t expect the core kernel developers to do such work, but if it’s a *huge* problem, there’s money to be made doing this. For instance, the code could be dual-licensed so that FOSS modules can use it under the GPL while proprietary binary modules must license it. It would also insulate such modules from the core of the kernel.
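As a throwaway illustration of the “solid side / fluid side” idea: something like the (entirely hypothetical) shim below could keep one face stable for module authors while its internals get rewritten, or switched on the kernel version, whenever the kernel proper changes.

/*
 * Hypothetical "driver driver" shim. Module authors call only the stable
 * shim_* functions; the bodies are updated as the kernel's own interfaces
 * change. The shim_* names are invented purely for illustration.
 */
#include <linux/version.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/string.h>

/* Stable side: always present, always behaves the same. */
void *shim_alloc_zeroed(size_t size)
{
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 14)
        /* Fluid side, newer kernels: kzalloc() exists, use it. */
        return kzalloc(size, GFP_KERNEL);
#else
        /* Fluid side, older kernels: do the two-step equivalent. */
        void *p = kmalloc(size, GFP_KERNEL);
        if (p)
                memset(p, 0, size);
        return p;
#endif
}
EXPORT_SYMBOL(shim_alloc_zeroed);

A real layer would have to cover interrupts, DMA, device registration and so on, which is where the hard work (and the performance question raised further down) comes in.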
That’s not a bad idea: having one “driver interface” to the kernel which gets updated with kernel changes, while the driver-facing side of the interface stays the same, so old and new drivers could easily be added or removed.
I think this would be a great idea if it were implemented, unless someone can see a problem I can’t.
Snake
You can’t reliably debug kernels when someone is running untrusted, unknown code in their kernel.
From a support stance, binary only drivers are disasters waiting to happen.
The Linux kernel developers should continue their policy of not catering to proprietary modules. Some people might claim that Linux will never gain marketshare this way, but I completely disagree. Linux is going to gain marketshare no matter what at this point.
I think Linus’ policy of “if you don’t help me, I don’t help you” makes complete sense. Why should we let proprietary companies penetrate our market if they don’t want to give back like good citizens?
We need to stop “needing” the closed-source software, improve Linux as much as we can to get new users, and pretty soon they will need us, and we can get them on our terms.
Contrast this with OSes where device drivers are not part of the kernel, are located in separate directories, and so on.
AmigaOS http://amiga.emugaming.com/devices.html
QNX http://www.phinixi.com/articles/ants_intro.php
BOAR http://www.students.tut.fi/~albert/DIP/DIP.fm.html#HDR30
This non-modular aspect of Linux’s architecture creates a huge hassle not just for companies, but for non-technical users as well as for technical users who don’t want to go through all this just to add a new device.
IMHO a binary-only driver is far better than hardware with no driver at all. Some cards are not a problem; most TV cards use a known chipset. But I wanted to buy a TV+FM card with the bt878 chipset, and at home I found the manufacturer had changed the bt878 to a cx88xx, which does not work correctly under Linux.
When I was writing the article this kind of argument was to be expected.
First of all, ‘binary only’ is not the same concept in all contexts. If FooCorp creates a binary-only driver for their snazzy graphics card, that is one type of binary driver; a free software wireless network card driver distributed as a binary is quite another, because the code is still available. The current way of locking out *all* binary modules (including those built from free software code) is the equivalent of cutting off your nose to spite your face.
I think tanstaafl’s idea of a driver driver is a *great* idea. If this is done in free software it will provide a solution to these problems. I think ntl’s view that Linux is pretty unstoppable is rather idealistic: Linux will never make it on the desktop if people need to compile their own kernels. It is not that we want closed-source software, but users want free software that is convenient, and compiling kernels certainly isn’t.
Jono
“It is apparent that the kernel will never match the kind of ABI stability that is common in commercial operating systems that see kernel updates once every few years”
Linux is a commercial grade kernel used in commercial operating systems.
Additionally, I do not want an outside driver that hasn’t been tested and reviewed by kernel developers’ eyes on my system. The minute you begin adding drivers to Linux willy-nilly, out goes the stability and security of Linux.
If the roadmap for the kernel (don’t we keep hearing about roadmaps a lot recently…) were as well thought out as it is for some other OSes, then the goal of keeping a stable API for more than one release could be achieved.
Let me explain:
For example, if the roadmap indicated that the API would be stable for releases .0, .1, .2, .3, .4, .5 and .6, then any driver built for .0 would still work on .6 without recompilation.
At the .7 release, there is a window where the API can be changed. It would then remain stable (i.e. frozen) until the .14 release, and so on.
I know from bitter experience the problems that changes to device driver kernel interfaces can expose. There was huge gnashing of teeth when the driver API changed in VMS from V4.7 to V5.0. That was a major release, so it could have been expected and was flagged in advance, but it still cost me many hours of kernel ODT to sort out the problems in one particular driver I was responsible for.
So, in conclusion, any way the API can be made more stable, and thus more predictable, would be good in the long run for everyone concerned with the Linux kernel, including the ever-expanding team of driver writers and, more importantly, the users. If they don’t like it, you will hear about it loud and clear.
The current way of locking out *all* binary modules (including those built from free software code) is the equivalent of cutting off your nose to spite your face.
What’s wrong with the kernel source having every single driver that Linux supports in it?
Your article points to a quote from Zeuthen suggesting that the “bar be lowered” for getting a driver into the kernel.
I have another solution–if GPL drivers exist outside the kernel, they could be maintained as patchsets against the existing vanilla kernel. They would need to keep up with the kernel changes just as if they were part of the vanilla kernel.org product. Once utopia is realized, distributions would package the driver (probably mark it as ‘uncertified’ themselves) and when the end user tried using the hardware, he would be notified that the driver is ‘unofficial.’
After a while, the driver could not only be made to fit better in the kernel itself, it could actually get field testing.
I doubt people would really mind using a driver marked unofficial if it means they can use their camera.
The problem of having to keep up with API/ABI changes is still there, but it would be if the drivers were part of the kernel.
The end result is having drivers that work much more seamlessly with the rest of the system, and the task of determining what hardware works in Linux would be much easier.
Wow, I also think that’s a really good idea. Drivers which talk through the driver driver might run slightly slower, since it would be an additional layer of abstraction. I don’t see that as being a big deal, depending on how the driver is written.
And… Binary only == bad
I do agree with you; however, I can see some cases when it might be required. NVIDIA tends to be the popular example. While I’d love to have a GPL NVIDIA driver which does everything NVIDIA’s binary driver does, I can understand NVIDIA’s reasons why they can’t open-source it. Also, my personal experience with their driver has been very positive. I run Debian unstable and have never had a problem related to NVIDIA’s driver (any problems I have had, I’ve been able to reproduce on another machine running the same programs).
-Mike
ntl, you’ve missed some of my points. Let me reiterate.
1.) As the author expressed, supporting binary FOSS modules is useful. It means fewer support headaches for modules not in the kernel, or which want to update on a separate schedule from kernel releases. I’m sure we can find other cases where a FOSS binary module would be easier to deal with than the current methods.
2.) More than likely, making things easier for the FOSS world does entail making them easier for the proprietary world. However, my guess is a “driver driver” would place a wall between the kernel-proper and any binary modules. This would help to isolate problems and possibly prevent bad interactions. I’m not a kernel dev, so please forgive a bit of technical ignorance on this point. I understand and agree with the “kernel taint” policies presently in place, but it doesn’t mean they are the only way.
3.) I doubt there are any core kernel developers willing to do this work. They’ve staked out a position, and they have that right. There’s no way we can force the issue, except to write the code ourselves. As I’ve said, I’m not up for it. However, if we look around at nVidia and other companies (and that webcam guy from two weeks ago), they already have to do a bit of this work themselves. There’s nothing stopping them from developing such a product together. Well, nothing but their own institutional barriers.
4.) If (and I repeat, *if*) this is a really huge problem, where vendors are simply waiting for a solution, then this is a money-making deal. If that market exists, whoever puts forward a reasonable solution (FOSS, dual-licensed to proprietary vendors, is reasonable) could make quite a pretty penny on it.
Actually, I’m very, very surprised there hasn’t been more effort from other companies and vendors to grab nVidia’s code and adapt it. I’m also surprised RedHat or SuSe haven’t tried to do something like this. As I said, if the market exists, this is a money-making opportunity.
I am very interested in this discussion here. But to follow some comments I have to understand what ABI-compatibility is.
Please, can someone explain
a) What is an ABI, and what does ABI compatibility mean?
b) How is ABI compatibility sustained? For example, is a kernel 2.6.8.1 compiled with gcc 3.4.1 ABI-compatible with a kernel 2.6.8.1 compiled with gcc 2.95? Or does this depend on glibc?
Thanks!
Dead on.
With the speed of CPUs and the average amount of RAM in one’s computer ever increasing, why not just compile the modules as “Drivers on Demand”? First grab the source off a secure server, store it in some module source tree, then compile it in the background and, once complete, add the module in. When the kernel is upgraded, a script just needs to recompile all the modules in the source tree.
Make it dead-easy (i.e. windows-dead-easy) to install new drivers.
If you can’t do that, you’ll never get anywhere, because people *demand* ease of use from their PC. Why in the world would they switch to something *harder*?
Rules for OS Creation:
(1) Make it easy to use
(2) Give it good (note, not lots of, but good) apps
(3) Make it pretty
(3a) If in doubt, see rule (1)
I have another solution–if GPL drivers exist outside the kernel, they could be maintained as patchsets against the existing vanilla kernel.
But what about the RedHat and SuSe and Mandrake and Gentoo and Debian and Ark and Arch and Lindows and … and … and ….
IIRC all of those distros patch their kernels. There is no way to guarantee that a kernel module I may write works on all major kernel trees (vanilla, -ac, -aa, -mm, -cv, etc. as well) with just any random patchsets thrown in.
Now, how about each subsequent version of the kernel? I’m sure that in the last year there have been hundreds of kernels just in the list above. Any number of them will not work with some out-of-kernel module. Tracking the changes and breakages may be too difficult for some modules.
At best, it seems most small developers support the vanilla or RedHat kernel and possibly a couple more. The nVidia drivers seem to work well across a large range of distro kernels because of: 1.) money and profit motive and 2.) experience. A small developer may have one or neither of those. Oh, and looking at the current version, the nVidia driver officially supports 136 kernel combinations from three vendors: RedHat, Mandrake, and SuSe.
Now, imagine a simpler way for a vendor or FOSS developer to release code. It means more time writing code, and less time tracking API/ABI/random changes. It isn’t a solution for every driver, as I doubt such interfaces can expose all useful functionality, but it could “lower the bar” for smaller developers. nVidia would probably keep their developments in place because they demand a level of performance that this interface probably will not give (though it may).
matt: It was a joke post. He’s just trolling, so don’t feed it.
Mac OS X: all drivers ship with the OS. They do allow binary-only drivers, but you don’t normally need third-party drivers; everything can be included with the OS.
Binary-only for FOSS only spells trouble. You install a third-party driver; who do you call when it breaks?
In Windows, you install a third-party driver and call the manufacturer, who tells you to download the latest version; it still doesn’t work, but now your system won’t stop rebooting itself.
The only good solution is Apple’s, once again the innovator of the computer world.
The reason OS X can ship with all the needed drivers is that Apple controls the hardware base: they make the computers and they make the OS.
That can’t happen in a world of commodity hardware, where everything and then some can be replaced by something that is supposed to follow the same standards but may have some odd flaw or quirk…
Basically, the Mac’s fabled stability comes from Apple’s hardware control. I don’t see what’s so innovative about that…
The only solution is the release of hardware specs to the community so that the community can write and debug the drivers. Black-box drivers are bad, as the only thing you can do is throw out the offending black box and replace it.
Make it dead-easy (i.e. windows-dead-easy) to install new drivers.
When utopia is finished, the need to manually install drivers will no longer exist.
But what about the RedHat and SuSe and Mandrake and Gentoo and Debian and Ark and Arch and Lindows and … and … and ….
These distributions would be responsible for adding whatever drivers aren’t in the vanilla Linux kernel to their custom kernels. Therefore, the burden of making those drivers interoperate with their kernel falls on them.
As long as Joe Blow’s driver compiles against kernel.org, everything would be cool.
Now, imagine a simpler way for a vendor or FOSS developer to release code. It means more time writing code, and less time tracking API/ABI/random changes. It isn’t a solution for every driver, as I doubt such interfaces can expose all useful functionality, but it could “lower the bar” for smaller developers.
Again, without either of us being a driver maintainer, we don’t know how much of a pain keeping up with the changes is. It could be trivial.
And drivers don’t need to add new functionality once they can do everything the hardware requires.
Just look at Windows, it’s a total disaster. The user buys a graphics card, or a modem, etc., and a CD with the binary driver comes with it! The user clicks a few times and the device is working! What a nightmare!
Or, the user clicks a few times, the system doesn’t like it, and refuses to install the device at all. Happened to me twice in as many months, first with a cheapy USB WLAN dongle, and then with the built-in soundcard on my laptop (well, it works, but when I reboot, there is a 1/4 chance of it not showing up the next time).
But that’s not really the point. Microsoft is in a completely different situation than Linux. Windows is the target market for most binary drivers. Ergo, vendors have to pay attention to getting them working properly. That’s not true with Linux. Your binary driver doesn’t work with a particular chip it’s supposed to? Who cares, it’s a marginal market anyway! Also, Microsoft has the source to most drivers as a result of certifying them for WHQL. Linux won’t have the luxury of something like that. Lastly, Microsoft can afford to have a huge QA staff inserting hacks upon hacks into the Windows code to keep binary drivers working. Linux can’t afford that.
Personally, I think the “compile on demand” idea is wonderful. Basically, it’s what NVIDIA does with their driver, only it’d be automated. For the user, it’d be even easier than Windows, because you wouldn’t have to download a binary driver manually, or insert a CD or any such foolishness.
These distributions would be responsible for adding whatever drivers aren’t in the vanilla Linux kernel to their custom kernels. Therefore, the burden of making those drivers interoperate with their kernel falls on them.
But what if it weren’t a burden? I can’t say what a distro should and should not do in this case, because frankly there’s no current way to decide. As I’ve said, I really think RedHat or SuSe could make a nice profit from such an option.
Again, without either of us being a driver maintainer, we don’t know how much of a pain keeping up with the changes is. It could be trivial.
I agree. Most of the noise we hear on this issue (especially on this site) appears to come from non-developers.
Perhaps Eugenia can dig us up some real kernel driver developers to question? The question can be limited to: would a stable ABI/API be useful for FOSS kernel modules?
Personally, I’m of the group that says user-downloadable drivers with easy installation are an absolute MUST.
There’s no getting around it: as Linux becomes more and more widely adopted, it will support more and more drivers. As the kernel supports more and more drivers, it grows quite rapidly. I don’t know about you, but I certainly don’t want to download 500 megs of source code only to end up needing to compile a fifth of it. That’s just wasteful. Additionally, it becomes more and more difficult to maintain the kernel as more and more drivers are submitted. In other words, SOMETHING needs to be done. The current method just isn’t terribly scalable.
However, being an advocate of open source, I’ll go along with the ‘just say no to binary drivers’ mantra. Keeping a secure, stable and robust kernel is necessary, and closed source binary drivers make this impossible.
Now, I’m not a kernel programmer, and not even much of a programmer myself, but it seems to me that some middle ground might be to move as much device driver code as possible into userspace. I’ve heard this suggestion floated before, and from what I gather there is some performance penalty to pay, but we’re going to end up paying something (compatibility, security, overcomplexity, etc.) with the other available options too.
The bottom line here is that your average Jane/Joe Schmoe should NOT need gcc or the kernel source to use their favorite device. To do otherwise hinders mass Linux adoption through complexity. Granted, I’m generalizing here; perhaps something could be created to do all the compiling behind the scenes, but it’s my personal opinion that compilers and related programs are development tools, and development tools should not be required on John/Jane Schmoe’s desktop. Might just be me though.
This is sick. Hardware companies should support drivers for their hardware; they have all the documentation. Kernel developers should only care about the stability of pure OS services. Drivers should work outside the kernel because it’s more secure. My dream is that drivers only allocate the resources they need (memory space, ports) and work within them, not like in DOS, where you had power over the whole machine. Is it hard to write an API that has only:
– get driver id
– get memory
– get ports
– set interrupts and DMA
– get driver capabilities for sound, graphics, network interfaces, etc. (the range of capabilities could change in every kernel release)
– communication (sending commands, and for graphics cards, access to graphics memory)
and everything works. By the way, in Windows you can also install OSS drivers (like the excellent drivers for the SB Live).
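Just to make the shape of such an API concrete, a sketch of the list above might read something like this (all of these names are invented for illustration; nothing like this exists in the kernel today):

/* Hypothetical minimal driver API, roughly following the list above. */
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>

typedef int (*drv_irq_handler_t)(void *cookie);

struct drv_capabilities {
        unsigned int dev_class;   /* sound, graphics, network interface, ... */
        unsigned int version;     /* the capability set may grow per kernel release */
};

uint32_t drv_get_id(void);                               /* driver id */
void    *drv_get_memory(size_t bytes);                   /* memory space */
int      drv_get_ports(uint16_t base, uint16_t count);   /* I/O ports */
int      drv_set_interrupt(unsigned int irq, drv_irq_handler_t h, void *cookie);
int      drv_set_dma(unsigned int channel);
int      drv_get_capabilities(struct drv_capabilities *out);
ssize_t  drv_communicate(const void *cmd, size_t cmd_len,
                         void *reply, size_t reply_len);  /* commands, graphics memory access, ... */

The hard part, as later comments point out, is everything this leaves out: deferred work, power management, locking, and reporting back to userspace.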
Also, Microsoft has the source to most drivers as a result of certifying them for WHQL. Linux won’t have the luxury of something like that. Lastly, Microsoft can afford to have a huge QA staff inserting hacks upon hacks into the Windows code to keep binary drivers working. Linux can’t afford that.
Question: Does MS charge the vendor for this certification? If so, then why can’t the Foo Linux distributor do the same thing? Only, they certify the driver against a set of solid interfaces that don’t change between kernel releases.
This means Foo has to keep the interaction between their interfaces and the kernel in sync with the movement of kernel development, but that’s their problem. The customers only see a set of solid interfaces and pay the money for certification (Profit for Foo!).
Assuming Foo keeps the interfaces FOSS, then every distro can patch it for their kernels and the drivers work nicely across distros. So, everyone wins, and Foo gets the money and brand recognition. That’s essentially what MS does, but they’re the only distributor (and I assume they charge for certification).
Heck, go one step further: the developers of the Foo interface strike partnerships with the developers and become the official source of their drivers. Please note: these don’t have to be proprietary drivers. This works for all drivers.
Basically, it’s what NVIDIA does with their driver, only it’d be automated.
I agree that nVidia does a good job, but they appear to be a bit unique. nVidia went to great lengths to keep the core of their drivers unified across chipsets and OSes. The Windows drivers are largely the same as the Linux and BSD drivers. Not every vendor has the money or experience to do this.
With Longhorn, Microsoft is returning NT to its microkernel roots. They are introducing a new device driver API (called the “Windows Driver Framework”) that allows user-mode device drivers. I know how Linus feels about microkernels, but research microkernels like L4 have shown that virtual memory tricks can be played to make user-mode drivers as fast as kernel-mode drivers.
Windows Driver Framework:
http://www.eweek.com/article2/0,1759,1643850,00.asp
I think Linus’ stance has been that the same tricks apply to monolithic kernels.
Actually, you can find instances in which a user-space driver vastly outperforms a kernel-space driver. It all depends on what it has to do.
For instance, look up VIA, which was a user-space architecture for using network adaptors. Testing showed that the network cards could get very, very close to their theoretical peak performance using a user-space driver, while the kernel-space driver was much less efficient.
Of course, this meant the user’s program had to manage network traffic, but for high-performance computing clusters, this wasn’t as big a drawback.
I’m looking over at kernelnewbies.org to see if I can add anything else to the discussion.
I found a neat document discussing the Linux Kernel API. This gives a list of interfaces to which a driver has access in the kernel. I’m not sure how recent the document is, but I suspect it covers 2.4, not 2.6. I could be wrong.
Oops, forgot the address!
http://www.kernelnewbies.org/documents/kdoc/kernel-api/linuxkernela…
Interesting debate, though just slightly above my head. But I’m really curious about how Linux driver modules compare with FreeBSD’s. As I understand it, FreeBSD’s kernel has slowly been morphing so that it is more and more a microkernel that uses kernel modules, rather than being monolithic (like Linux and OpenBSD).
So I’m curious: is there any advantage to FreeBSD’s approach? Disadvantages? Please, this is not a troll, and I hope nobody turns this into a BSD vs. Linux debate. I’m curious about the technical issues involved. I’ve read Linus’s book “Just for Fun” and he talks convincingly about how he thinks a monolithic kernel is better, and apparently the OpenBSD folks agree, but FreeBSD is not taking that approach. So, from the point of view of users and developers, is there a “best” approach?
I think what Linus said was that it’s impossible to implement synchronous posix system calls on a microkernel and get decent performance. He has said good things about the qnx microkernel.
I thought all my device drivers were modules, must take another look.
There’s nothing wrong with the way Nvidia makes their drivers available for Linux. It works very well across the board. The reason Linux can’t go the MS route is easy: Linux supports a diverse set of CPU architectures, whereas MS is only targeting one subset of the x86 CPU architecture. It’s a trade-off they accept, being a proprietary OS solution.
Now, with drivers compiled from source, you can get a driver which will work not only on i386 but on i686, K7, P4, K8 and a whole gamut of systems not running x86, I believe. This allows for something OSes like MS Windows can’t dream of, namely optimised CPU support, where you can run on old hardware or the latest as you require, with the best performance for either. This is why OpenGL games with Linux binaries run better on Nvidia card systems under Linux than under Windows, as Linux can be better tuned to one’s hardware setup.
Again, as I understand it.
Just look at Arch makepkg or Gentoo for highly optimised versions of Linux and how they run.
Also, there is Project Utopia, which is aiming to make things easier, but I’m not familiar with the details.
It works very well across the board. The reason Linux can’t go the MS route is easy: Linux supports a diverse set of CPU architectures, whereas MS is only targeting one subset of the x86 CPU architecture.
That’s a very good point! Supporting binary drivers wouldn’t solve this, but then, nVidia really doesn’t support much beyond x86-32 and x86-64. At least, that’s what I understand as the PPC people complain on occasion.
No, no it isn’t that hard. That is exactly what a good OS should do.
Tanenbaum is right. Linux is a dead-end design. Drivers in the kernel image? What a moronic idea.
>“This is why OpenGL games with Linux binaries run better on Nvidia card systems under Linux than under Windows, as Linux can be better tuned to one’s hardware setup.”
I don’t think so. First, the nvidia drivers are built from the same codebase as the Windows drivers and they have the same built-in optimizations (SSE, 3DNow!, etc.). Secondly, games are optimized for most architectures (P4, Athlon) on Windows too; any Windows executable or DLL can contain multiple versions of the code, and it’s just a matter of determining the CPU type and setting a variable or whatever. Also, Microsoft compilers are known to produce faster code than gcc.
That said, I must say that I get about the same fps in Quake 3 on Windows as I do on Linux.
Your post made no sense. You were mocking something you obviously didn’t understand. As for my grammar, it was never my strong suit in high school. On the other hand, I’m an enterprise webapp developer, so I guess the only thing better grammar would have bought me is not being insulted by trolls on tech sites. Now let’s look at your post for a moment.
> Just look at Windows, it’s a total disaster. The user buys a graphics card, or a modem, etc., and a CD with the binary driver comes with it! The user clicks a few times and the device is working! What a nightmare!
Um, no. If you want to avoid crashes, your best bet is to search the web for each piece of hardware you have and download the latest versions, as most driver discs will be very dated. Compare this with it working out of the box, and getting updated on a regular basis.
> But look at the beautiful system in Linux, with only 300 different distributions or so, only 3 or 4 are LSB compliant, device support is SO easy.
LSB has _nothing_ to do with the kernel. Or drivers. The number of distros, again, has nothing to do with either the kernel or drivers.
> With the tremendous documentation available for the Linux Kernel API, with the modularity of the kernel, with the consistent API, what more can a user ask?
A user doesn’t care about the kernel API. I’m a developer, and I don’t care about the kernel API; I care about glibc. If I were the glibc maintainer, then I might care.
> If you want your device supported, download the source code, setup paths, get kernel source, compile, recover from errors, compile again, give up, come back in 3 weeks, try again. Or hey, just wait for kernel 2.XX.XX which has support for your device!
Many open-source drivers come with the kernel. The ones that don’t should have packages made by the various distros. The users who choose to go the compile-from-source route (the experienced ones) will most likely have already compiled their own kernel, and installing modules from source should be a relatively easy task.
Great grammar and nice sarcasm aside, your post made very little sense… except possibly to others who tried Linux, found that they actually had to use their minds, found it too difficult and decided to troll about it. There are problems with the way drivers are being handled now, some of which are being addressed. You, however, mentioned none of them.
hence, the moron comment.
Why don’t we make a userspace driver architecture? A number of them have already been done, but we need to standardize on one and have it included in the kernel. That way userland drivers can link against LGPL libraries, and we solve the problems of changing kernel interfaces and difficult-to-install drivers.
Userland drivers have been shown not to perform much worse than in-kernel drivers. In some cases they perform better. But mostly, the difference is negligible.
People writing third-party drivers should conform to the userland driver spec. Perhaps, with a build flag, that same API could also compile a driver as an in-kernel module, so that a driver author need only write the driver once.
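The build-flag idea might look roughly like the sketch below: the driver is written once against a thin wrapper, and a compile-time switch picks the kernel or userland backend. The UDRV_* and udrv_* names are made up for illustration.

/*
 * Hypothetical single-source driver: one compile-time flag selects whether
 * the udrv_* wrappers map onto kernel services or ordinary libc calls.
 */
#ifdef UDRV_BUILD_KERNEL
#  include <linux/kernel.h>
#  include <linux/slab.h>
#  define udrv_alloc(sz)   kmalloc((sz), GFP_KERNEL)
#  define udrv_free(p)     kfree(p)
#  define udrv_log(msg)    printk(KERN_INFO "udrv: %s\n", (msg))
#else
#  include <stdio.h>
#  include <stdlib.h>
#  define udrv_alloc(sz)   malloc(sz)
#  define udrv_free(p)     free(p)
#  define udrv_log(msg)    printf("udrv: %s\n", (msg))
#endif

/* The driver proper is written once against the udrv_* wrappers. */
static void *rx_buffer;

int mydrv_start(void)
{
        rx_buffer = udrv_alloc(4096);
        if (!rx_buffer)
                return -1;
        udrv_log("started");
        return 0;
}

void mydrv_stop(void)
{
        udrv_free(rx_buffer);
        udrv_log("stopped");
}

Of course the wrapper would also have to cover interrupts, DMA and device access, which is where the real design work lies.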
Well, a spark of interest here! It all comes down to this: hardware vendors will support Linux if the market for their hardware demands Linux support. End of story. If they can’t keep up with development… screw ’em. The hardware probably wasn’t that good in the first place.
Sorry to bust bubbles here but my system does handle OpenGL based games better on Linux than under Windows XP.
Distro Arch Linux i686 based
Kernel 2.6.7
Latest Nvidia Linux drivers
Ti-4400 graphics card
1gig mem
Dual AMD MP2000’s
Hoontech DSP24 C-Port Audio card
Windows XP SP2
same as above
Games being run: UT2004 (and mod Red Orchestra) and America’s Army.
Both of these games run very well and no, I have not compiled my own kernel for my rig, just using the standard i686 packaged kernel that comes with Arch.
Also, it has been noted that the Linux binary being developed for Doom3 was reported to be running surprisingly well. Now, I can only assume this is in regard to how Doom3 runs on other OS platforms given the same hardware. I know I will hold out for the Linux binaries for Doom3 rather than trying Cedega.
Geez, I even had Il2 Forgotten Battles running under Cedega and Linux, and running well, but my joystick was not happening (just check out the Transgaming screenshots under Il2FB, they’re mine).
Quake3, really, doesn’t cut it nowadays as a means for testing and hasn’t for over a year.
Oh, and guess which OS doesn’t get crippled if a game crashes? Not Windows.
I thought every module had to have a hook compiled into the kernel in order for it to be loadable. If that is true, there is no easy way to “auto-compile the module” and plug it in.
Nvidia and ATI tricks seem to work because they do have a “driver’s driver”: the AGPGart.
My memory card reader and Bluetooth adapter (both have FOSS drivers) required a recompile of the kernel, putting it in /boot and lilo’ing the thing into place.
And, you know, like many average Joes, I messed up the settings on the initial 2-3 kernels.
Installing drivers while not breaking the kernel (only a portion of making devices functional) took me 2 days.
I wish there were nameless, free/dynamic hooks possible for new modules, so I could just compile the module and that’s it.
I thought it was a temporary problem, that some squirrel somewhere is right now making sure users don’t need a degree in CS to install a Bluetooth adaptor…
Linus had better keep his “religious views” private, because this view alone broke my hope and respect for this joint. He’s starting to look like Mr. Jobs.
To rephrase Ford, Apple’s motto is “You can extend Apple gear with any gear, bought at any store, as long as it’s made by us, and sold by us.” Think Different, my hiney.
If Linus continues this way, then along with Apple we’ll have another tiny specialty store chain.
Does anybody know what is going on with GNU Hurd? Is it still in the alpha stage?
Re: Hurd
Seems that way. It has never really attracted the number of developers that the Linux kernel has. Debian has a GNU/Hurd version of its distro going, though, from what I understand.
—
re: …
HAL is being worked on under the banner of Project Utopia, and HAL doesn’t help on the driver development side. It just makes things simple for end-user app development, as developers don’t have to know how to access that exact device; instead they just make a generic call to HAL, and HAL has to translate that into a specific call to the driver.
As for modularity, I’m not sure what you’re talking about. With a nicely compiled kernel (like the one that comes with most distros), only the basic stuff like IDE/ATA hard disk controller support and a few other things is in the kernel directly. The rest is in modules that can be loaded or unloaded at runtime; in fact that’s a very nice way to test a buggy driver. If you’re thinking of microkernel-style modularity, then that’s a whole different beast: there is a trade-off there of speed vs. flexibility. The MS NT kernel (from NT on up) is supposed to be a microkernel, but there is still stuff like graphics drivers that hooks directly into the kernel (that’s why a bad nvidia driver can bring your computer to a bluescreen). Why? Speed…
—
It was suggested in an earlier post that one could make a translator layer of sorts that is maintained outside of the kernel proper (it’s not the first time that has happened; just look at supermount). This layer would act as a stable ABI (I believe that’s the term) towards drivers, but would have to be recompiled every time a new kernel was released, as the kernel end of the layer would change. But how is that different from the current nvidia way? They ship a binary blob with a code wrapper that you have to recompile every time a new kernel is put into use. Sometimes that breaks (like when the 4k stacks were put into use), but most of the time it works.
Now, if we could automate a system like this, so that when a module failed to load it would break open a source tar stashed somewhere and do a compile behind the scenes, it would help in the usability department. Again, this has been suggested in the comments here, and one “problem” was pointed out: the fact that you would have to have the kernel headers, gcc and related tools available in a desktop distro. But what’s really the problem with that? A user most likely doesn’t care, as long as the driver works on install, and most power users install that toolkit on every distro install anyway, so to me it would be a win-win.
So, can we automate the install and compile of a kernel module so that it becomes a one-button affair for the user? Maybe adapt the autopackage system into doing this. How simple can we make the kernel config and compile process? Can it be made so that you can leave it to scripts to install a new source patch or module code and either compile it into the kernel proper or as a module? And do we want it?
kernmod-install “filename”, and then the kernel is rebuilt if needed and made the new default kernel on reboot (with the old one backed up and the system rigged so that if the new kernel fails to boot properly the old kernel will be used on reboot). It would be interesting to try at least, just to see if it can be done. No one says it has to be used…
I wonder if this rant breaks the character barrier…
It is rare to read such an interesting article. Thanks!
I’ve seen a lot of chatter in the mailing lists about unloading modules. Basically, it seems that the main kernel developers consider module-unloading to be a huge pain to support and they would like to stop supporting it.
I don’t know what the technical difficulties are, but I think that if this changes, it would be a big mistake. Personally, the ability to unload modules is one of my favorite features of the Linux kernel.
Also, I’ve never agreed with Linus’ reasons for not having a stable ABI. It’s fine if he doesn’t want to be locked in forever but couldn’t they at least stabilize the ABI once every six months?
Uh? UT2004 is faster using Direct3D on Windows, not OpenGL.
I thought every module had to have a hook compiled into the kernel in order for it to be loadable. If that is true, there is no easy way to “auto-compile the module” and plug it in.
That’s just not true.
Nvidia and ATI tricks seem to work because they do have a “driver’s driver”: the AGPGart.
That’s not true either.
I wish there were nameless, free/dynamic hooks possible for new modules, so I could just compile the module and that’s it.
You can. As long as you’re using a vendor-supplied kernel, you should just be able to do “make modules && make modules_install” and just use the new module.
@JCW: Don’t be a twit. Ease of use has no business at the kernel level, to the extent that the user doesn’t interface directly with the kernel. It’s userspace’s job to make driver installation look nice and easy.
Um, I don’t think some people quite understand what the Linux driver situation is. Linux isn’t a microkernel, but it’s a very highly modular kernel. Most significant functionality can be split out into an independent module. Modules are dynamically loadable, and compilable outside of the kernel. Linux isn’t really any different in this regard than *BSD, Solaris, BeOS, or MacOS X.
@runtime: Longhorn will return to its microkernel roots when it gets the fricking GUI out of kernel space.
@mrroman: You forgot a number of things:
– Delayed procedure calls. Most interrupt handlers need a way to invoke some post-processing code outside of the interrupt handler itself. You need an API for this.
– Memory allocation.
– Device detection.
– A whole host of APIs for power management (ACPI, APM, etc).
– A way for the driver to communicate to userspace (something like sysfs), to report stats or take options.
– Locking API
– Access to info about multiple CPUs in SMP machine
Then you’re forgetting lots of the details of the ABI:
– Procedure calling convention
– Stack size assumption
– Locking conventions
It’s really a lot harder to pin down an ABI than you think, and maintaining it in the face of change is very difficult. In 2.4, the API was broken because taskqueues replaced bottom halves for deferred procedure calls. In 2.6, you had changes to support power management, device detection, and driver -> user communication (sysfs). During 2.6, you had changes to the stack size assumptions. In 2.8, as Linux scales to NUMA machines, you’ll probably get changes in locking conventions too.
If you really want to keep a stable ABI, then you either have to say “okay, no power management for you!” or “okay, we’ll keep 4-5 different stable ABIs laying around to support a handful of mostly outdated binary drivers.”
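A binary module can be broken by a change that never touches the source-level API, which is the part people tend to miss. A toy illustration (these structs are made up, not real kernel code):

/*
 * Toy ABI-break example. A driver compiled against version A has the
 * offset of ->send baked into its binary.
 */

/* Kernel version A */
struct net_ops_v1 {
        int (*open)(void *dev);
        int (*send)(void *dev, const void *buf, int len);   /* offset 8 on a 64-bit box */
};

/* Kernel version B: one field inserted. Source using ->send still compiles
 * unchanged (the API is intact), but the compiled offset moves, so an old
 * binary module now calls through the wrong pointer. */
struct net_ops_v2 {
        int (*open)(void *dev);
        int (*set_multicast)(void *dev);                     /* new callback */
        int (*send)(void *dev, const void *buf, int len);    /* offset is now 16 */
};

The same goes for calling conventions, stack size and locking rules: recompiling fixes it, keeping an old binary does not.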
But in theory an ABI could be stable within a stable release series of the kernel, yes? So going from 2.4 to 2.6 may break the ABI, but inside 2.4 the ABI stays stable? The question then becomes what happens when a bugfix ends up breaking the ABI: should it be applied, or held back until the ABI is allowed to change? Personally I’d go for the first, but that may not sit well with people wanting a stable environment.
Isn’t all that stuff (locking, interrupt masking, etc.) what hardware abstraction layers (like those on Windows or OS/2) are supposed to do? Then, if something needs to be broken, it would be the HAL and not the drivers.
BTW, is the HAL that the author of the article mentions the same as the HALs on Windows and OS/2? Wouldn’t that mean rewriting a lot of stuff in the Linux kernel?
@Rayiner:
– Delayed procedure calls. Most interrupt handlers need a way to invoke some post-processing code outside of the interrupt handler itself. You need an API for this. (I mentioned interrupt settings.)
– Memory allocation. (I mentioned allocation of memory: “get memory”.)
– Device detection. (I mentioned the device ID, which would be used for detection; PCI and USB devices are identified by device ID.)
– A way for the driver to communicate to userspace (something like sysfs), to report stats or take options. (I mentioned that too.)
– A whole host of APIs for power management (ACPI, APM, etc.). (I forgot about that.)
– Locking API. (I think that’s a base kernel service.)
– Access to info about multiple CPUs in an SMP machine. (And that.)
I know I was speaking somewhat generally. But if kernel developers didn’t have to care about drivers, they would have more time for improving the ABI and API.
And now for the anti-Windows people: I haven’t seen any blue screens in 3-4 years. You are surely talking about Windows 95-98 or Me, not about XP or even 2000. I know many people using XP and they have never mentioned blue screens or the other Windows myths (about stability), so stop trolling. Windows XP has a kernel that was developed 3 years ago (in truth, the first versions date from ’92). It is so flexible that hardware which was never heard of in 2001 works well with it. Hardware vendors just write drivers, and those drivers must work, because if they didn’t, their hardware wouldn’t sell.
I think Linus doesn’t make the ABI stable because he thinks that Linux is for programmers and admins. Admins buy hardware once every several years, and programmers (especially hobbyists) like to play with kernel compilation and the like.
Linux is chaotic and wild, because no one is able to control millions of lines of code. Maybe it needs a good long-term plan.
Testing DirectX and OpenGL under Windows XP and OpenGL on Linux, I have found better results under Linux. Admittedly, with XFree86 I found the input responsiveness to be slower than under Windows XP, but with the release of Xorg that is no longer an issue, and the general fluidity of a game is better under Linux.
C’est la vie, as they say. Use what you want; I’m just posting my findings using two OSes on the one PC. People on the AA forums have found similar results with AA on Linux compared to Windows. Just my findings, but yes, some aspects of Linux are not as easy as Windows and vice versa (just try hunting down problems associated with the Registry on Windows; shudder; text files are much easier to deal with).
Ok, it’s late, I’ve had a couple of glasses of wine and changed two poopie diapers already. Here goes: Why doesn’t Linux just do it like BeOS did? I really liked driver installation consisting of dropping a file into a folder. That was class.
I think the Syllable folks are doing it a la BeOS, no?
@mrroman: My point was that the set of things that goes into an ABI is a lot bigger than you think it is. Stabilizing the ABI essentially means that you have to be wary of changing any of the things I listed, or else you’d break drivers. So you either break drivers very often, or hold back improvements to retain the ABI.
I think Linus doesn’t make the ABI stable because he thinks that Linux is for programmers and admins.
Not only is that not the case, it’s not even an accurate reading. Read the stuff Linus has said: he does consider the needs of Linux on the desktop. He just doesn’t want to hold the kernel hostage to the needs of binary drivers. Microsoft blames drivers for most Windows crashes. Consider the resources Microsoft has versus the resources the Linux developers have. Would you want them to spend all their time debugging binary drivers, or actually working on improvements?
@John MG: Actually, it’s essentially the same thing in Linux. What do you think the NVIDIA installer does? It drops a binary into /lib/modules/(version)/kernel/drivers/video. That’s all that’s necessary to make the driver available to the kernel. The problem is that drivers need to be recompiled for each kernel version, because it’s too hard to make sure the kernel -> driver ABI doesn’t break.
The problem that people fail to grasp is the number of cheapskate vendors who implement most of the features in the driver and associated software.
Maybe one day we’ll get back to the good old days: RAID PCI cards that are actually pieces of hardware, rather than the current situation of glorified space filler plus an oversized driver and software taking up a boatload of drive space; and, God forbid, possibly even graphics card companies who put all the work on their chipset rather than sucking CPU power to run their oversized, buggy and unreliable drivers.
He has a point. Just think about winmodems for example.
Personally, I really like the way that Mac OS X handles drivers: an object-oriented, hot-pluggable framework that eliminates most duplicated functionality and has both a stable API and ABI, allowing for both open-source generic drivers and closed-source proprietary ones. Far ahead of Linux, Windows, and (my preferred OSes) the BSDs.
He has a point. Just think about winmodems for example.
Believe me, we’d have win-graphics cards and win-whatevers if hardware manufacturers had their way. It is a sad state of affairs when hardware manufacturers prefer to put out cheap crap rather than putting together a quality product, so that the likes of me, who don’t mind spending an extra few dollars for quality, could purchase hardware which is reliable and supported on multiple operating systems.