This month’s column at Linux Magazine provides an overview of some of the updates and new features in 2.6, including filesystem support, threading library changes, and the new kernel-level profiler. This article assumes you already have access to a machine running 2.6.
I feel sorry for all the folks who bought Linux Device Drivers in a vain attempt to keep up with Linux technology (http://www.xml.com/ldd/chapter/book/). By the time everyone (including commercial entities) understands how to properly produce bug-free, stable, fast 2.6 drivers, 2.8 will be the stable Linux version. Everyone’s time will have been effectively wasted, and then the cycle repeats itself.
By the time everyone (including commercial entities) understands how to properly produce bug-free, stable, fast 2.6 drivers, 2.8 will be the stable Linux version. Everyone’s time will have been effectively wasted, and then the cycle repeats itself.
—
If you read kerneltrap.org or lwn.net, then you should understand that the development model has changed and 2.8 isn’t coming anytime soon.
You are just further proving my point. Since 2.6 is now both the developmental and stable branch, API changes will be even more likely. And code will break more often, and will frustrate more IHVs, and lead to fewer drivers available (if you can have fewer, I guess). Why do you think my ATI x800 does not work in Linux yet? There is no motivation to rush out a driver that will be obsolete in months, if not weeks.
BTW: Eugenia, I am not behind an open proxy. I have had to switch to one to post, though.
You are just further proving my point. Since 2.6 is now both the developmental and stable branch, API changes will be even more likely. And code will break more often, and will frustrate more IHVs
—
Since you totally misunderstood the development model, I don’t agree with you. Feel free to argue with better points.
There you go, both developmental and stable. What kind of development branch do you add new features to? Do you need me to reiterate my points one by one for you?
Since 2.6 is now both the developmental and stable branch, API changes will be even more likely. And code will break more often, and will frustrate more IHVs, and lead to fewer drivers available (if you can have fewer, I guess).
You might want to read about how that was arrived at before you post here. By having a stable 2.6 branch and a development 2.6 branch, evolutionary improvements can be fed into 2.6 without fundamentally breaking existing drivers and infrastructure, or requiring re-architecting the kernel. We will not have a situation like we had with 2.2 -> 2.4, or 2.4 -> 2.6, again for quite some time, although userspace applications were totally unaffected in both cases.
Why do you think my ATI x800 does not work in Linux yet?
That has nothing to do with the kernel (everything that is needed in the kernel is there), and shows your total lack of knowledge on the subject. Ask ATI to improve their driver support.
There you go, both developmental and stable. What kind of development branch do you add new features to? Do you need me to reiterate my points one by one for you?
Since whatever points you have made are wrong, there’s nothing to reiterate.
Hello, howdy, thanks for jumping into the conversation. Apparently I do need to review what I said; perhaps I was too subtle.
Why do you think my ATI x800 does not work in Linux yet?
Explanation for the slower folks: the constant API changes have delayed hardware support for Linux. Planned obsolescence by the Linux developers (see the differing interfaces for GPL drivers) reduces the number of drivers available for Linux. I didn’t think I would have to spell out the obvious for such an intelligent crowd as OSNews, but I suppose more and more Slashdotters are migrating over here (for good reason).
You might want to read about how that was arrived at before you post here.
Perhaps you missed the part about the onus of stability being moved down to the distributor. Yes, distributors are now expected to debug and test the kernel themselves to provide a satisfactory distribution. Given the 2.4 situation, where distributors could not even package applications without introducing bugs in their interaction, it seems downright hopeless, and futile, to expect the distributors to now become kernel debuggers as well.
Guess what: Win98 drivers don’t work on WinXP either! Did that delay hardware support for Windows?
So what you are saying is that ATI is not to blame for not putting enough effort into Linux driver development, but the kernel developers are? Don’t you think you would have your ATI drivers if ATI put some serious effort into driver development for Linux?
Dear Sir,
While I fear that your personal attacks do not help your argument, I do feel it is my honorable duty to respond. See, I’m speaking of a general trend, based on my experience. These are my opinions on what is happening in the marketplace today. Your experience with one Nvidia card does not necessarily contradict the trend that fewer vendors are supporting Linux.
BTW: Check out how much fun it was installing those Nvidia drivers on Fedora Core 1: http://linuxsig.org/howto/nvidia-fedora.html
Why do you suppose it wasn’t very simple to do? Why do you suppose it didn’t work out of the box?
ATI’s driver support sucks anyway; I’d suspect that is the real reason your x800 isn’t working, Anonymous (IP: —.velocitus.net). If I were you, I would take note of the number of Nvidia users running Linux (or any other OS that Nvidia provides drivers for) who have no problems with their graphics cards. Or, if you want to use ATI cards, run Windows, where their drivers work.
BTW: Check out how much fun it was installing those Nvidia drivers on Fedora Core 1: http://linuxsig.org/howto/nvidia-fedora.html
Why do you suppose it wasn’t very simple to do? Why do you suppose it didn’t work out of the box?
Ummm, that would be because Red Hat (the packager for Fedora Core) ships a kernel that is far from the “vanilla” kernel found at http://www.kernel.org/. Not really nVidia’s fault or the fault of Linux developers. All that customization comes at a cost, namely easy third-party support. Windows and Mac only ship in a handful of basic kernel configurations. Linux/*BSD are much, much more flexible.
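(A quick way to tell which kind of kernel you’re actually running is the version string; illustrative output below, using a typical Fedora-style release number:)

    $ uname -r
    2.6.5-1.358

The “-1.358” suffix marks a Red Hat build carrying its patch set; a vanilla kernel.org kernel reports just “2.6.5”.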
BTW: Check out how much fun it was installing those Nvidia drivers on Fedora Core 1: http://linuxsig.org/howto/nvidia-fedora.html
Why do you suppose it wasn’t very simple to do? Why do you suppose it didn’t work out of the box?
Because Fedora sucks? I mean, seriously. They are in perpetual beta, and release compatibility-breaking customizations to the kernel (e.g., 4K stacks), long before anybody else does.
The NVIDIA driver itself is quite well-insulated from kernel changes, and updates to the open-source wrapper are available from third parties very quickly after any compatibility-breaking changes (which are actually fairly rare). In any case, there aren’t going to be massive changes in the driver model from 2.6 to 2.8. One of the key improvements in 2.6 was an overhaul of the driver model to better handle things like power management and hot-plugging. It makes sense that there was a large amount of change in the API during this restructuring. Stuff like this can only be minimized, not eliminated; unless, of course, you’d rather the kernel not introduce any new features for fear of breaking compatibility.
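(To make the “well-insulated” point concrete, here is a minimal sketch of how such a glue layer works. This is illustrative only, not NVIDIA’s actual code; blob_attach()/blob_detach() are hypothetical stand-ins for the binary core’s entry points:)

    /* Thin open-source shim around a hypothetical binary driver core.
     * Only this small file has to track kernel API churn; the binary
     * blob behind it stays untouched across kernel releases. */
    #include <linux/version.h>
    #include <linux/module.h>
    #include <linux/init.h>

    extern int blob_attach(void);   /* hypothetical entry into the binary core */
    extern void blob_detach(void);

    static int __init shim_init(void)
    {
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,0)
        /* 2.6-specific setup (e.g. new driver-model registration) goes here. */
    #else
        /* 2.4-era equivalents would go here instead. */
    #endif
        return blob_attach();
    }

    static void __exit shim_exit(void)
    {
        blob_detach();
    }

    module_init(shim_init);
    module_exit(shim_exit);
    MODULE_LICENSE("Proprietary");

When the kernel API changes, only this wrapper needs a patch, which is why third parties can turn fixes around so quickly.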
PS: Oh, and the reason that your x800 doesn’t work in Linux is that ATI’s OpenGL drivers generally suck. They suck on Windows, and it’s unsurprising that they suck on Linux. There is a reason they are completely rewriting them for future releases.
There was indeed a driver API change between 2.4 and 2.6. This was a big change, and the driver API for 2.6 was designed with the future in mind. They don’t intend to change the API for a long time.
During the 2.5.x series, there were indeed frequent changes in said API. During that time, nVidia users basically had no choice but to use third-party patches in order for the driver to work.
Shortly after the 2.6 stable release, nVidia started folding in those patches.
If hardware support for Linux is indeed decreasing, it is because functionality is being removed from hardware and moved into software, for which the OEM has developed drivers only for Windows. The trend started with WinModems, continued into printers, and can now be seen in a large number of products. This has no bearing on the Linux driver model.
If you’re in fact referring to the 2.6 kernel not supporting all the hardware in 2.4, you’re right. The same thing happened during the transition from 2.2 to 2.4. It wasn’t until about 2.4.18 or so that vendors jumped on the 2.4 bandwagon. It’s actually encouraging to see SO MUCH support for the 2.6 kernel so early on. There are vendors writing their own binary-only modules, vendors working on open-source modules, and vendors releasing specs so that open-source developers can produce modules. If anything, there’s INCREASED cooperation with hardware vendors. Of that, there can be no doubt.
Because Fedora sucks? I mean, seriously. They are in perpetual beta, and release compatibility-breaking customizations to the kernel (e.g., 4K stacks), long before anybody else does.
Fedora doesn’t “suck”; it’s just not Debian. Some people like the idea that it’s in “perpetual beta”, as you call it. Personally, I prefer to call it bleeding edge. I love trying out things like GNOME 2.6 and Xorg before everybody else. Yes, they released a kernel that has 4K stacks, but they didn’t do it just to break compatibility, as you made it sound. Additionally, the community support is second to none. It’s got a huge community that I would say is considerably larger than those of other community distros such as Debian and Gentoo.
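(For what it’s worth, 4K stacks is itself just a build option on i386 2.6 trees; a sketch of the relevant .config line, assuming a Fedora-style configuration:)

    # Fedora Core 2 enables this; most other distros at the time did not,
    # which is why some out-of-tree binary modules built expecting 8K stacks broke:
    CONFIG_4KSTACKS=y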
Now personally, I’ve had very little breakage on Fedora Core; there was an issue with ppp not being upgradeable, and a temporary issue with a third-party NTFS driver, but nothing major other than that.
Yeah, I’d never dream of running FC2 on a server, but for my laptop, it’s a champ.
This is kind of old news, but still good information for kernel programmers. After all, kernel development is an active, ongoing project.
2.4 was a cool kernel, but it had some major problems in its design. It is a far better choice to improve upon the kernel and remove those bad design choices than to leave them in for compatibility’s sake. As the kernel continues to stabilize, people will be able to settle in more.
I’m not just referring to major changes between major kernel versions. Binary drivers break all too frequently between minor kernel releases like 2.6.7 and 2.6.8. I wish I had links to support this claim, but anyone else who has tried to support the amorphous blob that is “Linux” will know what I’m talking about.
I’ve tried installing distros based on the latest 2.6 kernel several times now. Each time, I’ve run into stability issues. Last week, I installed the latest Knoppix, with the 2.6.7 kernel, and it was very flaky. I experienced things like unreliable CD-R writing, the search function in Konqueror stopping working, Debian’s apt-get system working at first but then breaking, etc. When I reinstalled Knoppix from the same CD using the 2.4.26 kernel, I had no problems. This has been my experience with other distros as well (Slackware, for example, which I’m using now).
I assume that the problems I’m seeing are due to driver issues. It’s disappointing, but I realize that developers need time to work out these bugs. I’m sure it’s not easy, but I hope they can get it right soon, because I’d really like to take advantage of the new features in the 2.6 kernel.
It’s not just a matter of the bleeding-edge software; it’s a matter of it being just plain unstable. Also, Debian unstable is very up to date. It had KDE 3.3.0 before it was officially released, and the latest GNOME 2.7.92 is available in the experimental repository. Yet it’s been perfectly stable for me. Thus, you cannot say that Fedora’s “bleeding edge” status forgives its rampant instability.
I like the way that VMS (now OpenVMS) does this, and very cleanly. In the I/O call (sys$qio() on VMS) you can pass a routine of yours to be called by VMS when the I/O completes (incl. time-out). In that routine you can process the I/O then and there. Or you can just have it post the completion notification to a local queue that is the sole source of work requests for the main loop, and have the OS wake up the main. The same goes for timer completions. What’s good about it? What’s the difference? No polling. And no threading. Instant response. Only one waiting point.
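(For the curious, the shape of that pattern in C is roughly the following; this is a from-memory sketch, not compile-tested against real VMS headers, so treat the types and argument order as approximate:)

    #include <starlet.h>    /* sys$assign, sys$qio, sys$hiber, sys$wake */
    #include <iodef.h>      /* IO$_READVBLK */
    #include <descrip.h>    /* $DESCRIPTOR */

    static char buf[512];
    static struct { unsigned short status, count; unsigned int info; } iosb;

    /* VMS calls this routine (at AST level) when the I/O completes or times out. */
    static void read_done(void *astprm)
    {
        /* Process the I/O here and now, or post a notification to the
         * main loop's work queue... */
        sys$wake(0, 0);    /* ...then wake the hibernating main. */
    }

    int main(void)
    {
        unsigned short chan;
        $DESCRIPTOR(dev, "SYS$INPUT");

        sys$assign(&dev, &chan, 0, 0);
        /* Queue the read; read_done is the completion (AST) routine. */
        sys$qio(0, chan, IO$_READVBLK, &iosb, read_done, 0,
                buf, sizeof buf, 0, 0, 0, 0);

        sys$hiber();       /* the single waiting point: no polling, no threads */
        return 0;
    }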
It’s not just a matter of the bleeding-edge software; it’s a matter of it being just plain unstable. Also, Debian unstable is very up to date. It had KDE 3.3.0 before it was officially released, and the latest GNOME 2.7.92 is available in the experimental repository. Yet it’s been perfectly stable for me. Thus, you cannot say that Fedora’s “bleeding edge” status forgives its rampant instability.
It’s funny that you say that. I’ve had just the opposite experience… Fedora Core 2 is rock solid in day-to-day operations, but just a simple “apt-get dist-upgrade” has horribly mutilated a Debian unstable setup.
Fedora Core 2 doesn’t suck, and neither does Debian, unstable or otherwise. Use whatever’s good for you. There’s a reason why there are so many Linux distros… one size doesn’t fit all.
Anyway, we’re both wayyy off topic here… I’m going to lie low and see how this version of Mandrake plays out… every version of Mandrake I’ve tried has been either excellent or an extreme disappointment.
“Why do you think my ATI x800 does not work in Linux yet? There is no motivation to rush out a driver that will be obsolete in months, if not weeks.”
Your ATI x800 card doesn’t work because ATI drivers have a tradition of sucking hard, even on Windows. Do yourself a favor and get an Nvidia card. My 6800 Ultra works fine on Linux and Windows with no issues, and all of its features are supported through Nvidia’s kick-arse Linux and Windows drivers.
“I’m not just referring to major changes between major kernel versions. Binary drivers break all too frequently between minor kernel releases like 2.6.7 and 2.6.8.”
That’s a misleading statement. Such a driver would be the exception, not the norm.
I have switched through every kernel version as it came out, from 2.6.0 all the way up to 2.6.8, with only one exception. I have yet to have a driver stop working on me even once, and that includes the ATI drivers for my Radeon 8500.
The author says that “the kernel itself is preemptive: some kernel-space operations can be interrupted to yield to user processes”. Is this a correct statement if preemption is optional? “Processor type and features ---> [ ] Preemptible Kernel”
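(For reference, that menu entry maps to a single build option, so the .config of any 2.6 kernel shows whether preemption was compiled in at all:)

    # "Preemptible Kernel" under "Processor type and features";
    # if the box is left unchecked, this option is absent and
    # kernel-space code runs without preemption:
    CONFIG_PREEMPT=y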
Second, the article states that “This [preemption] is especially relevant for GUI applications, which require maximum responsiveness.”
The benefits of preemption are up for debate:
http://kerneltrap.org/node/view/2702
Your ATI x800 card doesn’t work because ATI drivers have a tradition of sucking hard, even on Windows.
Err… no, they haven’t. ATI cards and drivers are awesome on Windows.
So why is it that ATI has to introduce bug fix after bug fix in its Windows drivers for games? Games that work well with Nvidia cards and drivers, but not with ATI drivers. Ever play City of Heroes with an ATI card? Two words: FLASHING TEXTURES!
Err… no, they haven’t. ATI cards and drivers are awesome on Windows.
I’m afraid they’re not. ATI hardware is undeniably good, but their drivers have always been found wanting in reviews of ATI cards. There are always quirks between versions that require yet another version to fix them: textures that disappear for no reason and other inexplicable problems. Their cards also show up badly in performance terms because of their poor drivers. They just don’t have anything like the unified driver model nVidia has.
Guess what: Win98 drivers don’t work on WinXP either! Did that delay hardware support for Windows?
Actually, some do, like the drivers for my NIC. The Windows 98 SE drivers work great under Windows XP.