“The news of Con Kolivas, a Linux kernel developer, quitting that role, along with an interview in which he explains why, could and should make loud noises around the Free Software community which is often touting GNU/Linux as the best operating system one could use, and not just because of freedom you have with it. In the interview he says certain things which should cause tectonic shifts in the mindset that we have all been having. Why didn’t we realize these things before? As you can see, the article intrigued me quite a bit, and got me thinking about a better way forward for the Free Software OS. I’ll go through some of the basic points that he makes and lay out one possible solution and its implications. However, take this article as just a discussion starter.” My take: I have been advocating splitting the Linux kernel up (desktop, server, embedded) for years now.
MacOSX is an example of it!!!
FreeBSD (kernel) -> Darwin (fork) -> MacOSX (desktop revolution) -> profit! :)
I thought Mac OS X used the Mach microkernel?
Yes and no.
[do correct me if I’m wrong at any point ]
Last time I checked, it was a hybrid.
A bit of history: Mach was essentially a BSD kernel where they started moving more and more stuff into userspace.
What Apple did was take an older Mach release and move the BSD kernel parts that were in userspace back into kernel space.
So you got a Mach kernel and a partial BSD kernel in kernel space.
Yes, definitely a hybrid, but FreeBSD today is a much changed beast from the long-ago BSD fork, as is the OS X kernel. But I get your point.
I am still interested in the scheduler and threading changes mentioned as improvements for OS X… no one says anything about them. But that isn’t for this thread.
The XNU kernel, which is a hybrid of Mach with many monolithic parts bolted on.
GAAA! I knew that. (XNU). Brain lapse.
http://developer.apple.com/DOCUMENTATION/Darwin/Conceptual/KernelPr…
Has IMHO more to do with the stylish userland than the kernel. The OS X scheduler doesn’t always shine, especially when you run a big MySQL database.
See how you argue from a server perspective? It may be slower on databases, but audio doesn’t stutter regularly, latency is excellent, and even under heavy load the system is just responsive, something I still can’t say about Linux, even the newest distros. Also, I find X11 rather unstable and extremely inflexible: connecting your laptop to a projector to do a presentation just works under Mac and Windows. Good luck on Linux.
This is all basic stuff that just has to work if Linux wants to be more than an alternative to Windows for geeks. How can you possibly deny that!
Uh, no. It was more like:
NeXTStep (OS) -> Darwin (renamed NeXTStep) -> Mac OS X (pretty GUI on top) -> profit
You guys dereference a lot of pointers without checking for null!
Dude, in Objective C you don’t have to check for null!
The Mac OS X kernel is worse than Linux. The killer feature in Mac OS X is the USERSPACE.
LOL, you have a sense of humor
MacOS X and the desktop revolution *g*
I’m not trying to restart the flames, but I tried the SD scheduler the other day (as part of 2.6.22-ck1) on my Gentoo box. During compilation of large packages the mouse cursor became jerky and useless; I’ve never experienced that with the stock scheduler and haven’t so far with CFS (which I’m running now). *In my very humble opinion* the SD scheduler needed more work.
I think Con is a great developer with some really cool ideas for the desktop, and I am in no way trying to critique his great work because, after all, if it wasn’t for him making noise with SD we would never have gotten the CFS scheduler (which admittedly doesn’t seem to have made either a positive or negative impact on my desktop usage).
What happened with SD/CFS, and between the core kernel developers and Con, is unfortunate, but right now the last thing the kernel needs is a ‘fork’. That would be a total disaster, and I’m so fed up with having debates about how Linux is sucking on the desktop that I can’t even be bothered to explain why I believe so.
And while we’re on the topic of Linux on the desktop again, let’s not even get started on having a stable internal kernel API. The latest NVidia graphics driver requires a 4 line patch (the kernel interface code is source viewable) to work with 2.6.23-rc1. The patch is on the NV News and gentoo forums. If you believe the hardships of a 4 or 5 line patch warrant locking yourselves into a stable API for 5 years that may be insufficient in 2, then use Windows.
The latest NVidia graphics driver requires a 4 line patch (the kernel interface code is source viewable) to work with 2.6.23-rc1. The patch is on the NV News and gentoo forums.
Thanks dude for sharing!
I really can’t believe you piss and whine and cry about how Linux hasn’t made it on the desktop but expect end users to patch the bloody kernel to use a graphics card. You don’t have to tell us why Linux didn’t cut it on the desktop, you just demonstrated it quite clearly for everybody.
Geek clique. =/
No mainstream Linux distro will be shipping with that kernel unpatched. This is not a problem ordinary users will ever face. That’s why god invented distros.
Oh, yes they will and they have.
Buy new graphics card -> discover Linux support is flaky unless you run the latest nvidia drivers -> oops, the driver doesn’t work with your specific distribution release unless you apply this experimental patch some guy posted on a mailing list (which also happens to break a couple of other things).
No. This patch only applies to this particular version of the Linux kernel, which in this case is the very latest release candidate. You just don’t get -RC kernels on mainstream distros. No doubt, any distro that did choose to run this kernel would apply the patch.
The real problem with getting the latest NVidia drivers running on a stable distro is that some distros package the drivers themselves, in which case you’re probably stuck with a pre-9xxx series driver, which doesn’t support Compiz, which is what everyone wants. If they don’t package the driver themselves, then you have to install it manually, which is fine (kind of) until an automatic update updates the kernel and you find yourself unceremoniously dumped to the command line at boot.
Your post is a prime example of pure FUD.
A. No distribution ships RC kernels. Period.
If you -choose- to use beta software (Fedora rawhide/Debian sid/Slackware current/etc) you should be ready to suffer some breakage.
B. Microsoft didn’t get blamed for not having full hardware support during the beta stages of Vista. The same should apply to Linux.
C. Vista still suffers from problematic hardware support, and yet, everybody is blaming ATI/nVidia and not Microsoft. How come you do not do the same when it comes to Linux?
– Gilboa
… Plus, nothing stops nVidia/ATI/etc from getting their driver included into the main kernel tree.
As long as they keep their drivers closed, you should be pointing your “flaky-hardware-support” fingers somewhere else.
– Gilboa
“I really can’t believe you piss and whine and cry about how Linux hasn’t made it on the desktop but expect end users to patch the bloody kernel to use a graphics card. You don’t have to tell us why Linux didn’t cut it on the desktop, you just demonstrated it quite clearly for everybody.
Geek clique. =/”
Mod him down all you like; he makes a valid point.
Baadger, you say that only a four-line patch is needed to run the latest NVidia driver, and you find that acceptable? The fact is, no recompilation should ever be necessary to install a driver. For example, imagine if every time Microsoft released a security update that affected the Windows kernel, all drivers immediately became obsolete. If that were the case, there would be very few working Windows drivers, and Windows’ much-touted hardware compatibility would fall flat.
In case you’re still not convinced, think about another hypothetical situation. Imagine a new hardware manufacturer creating a new graphics card, and wanting to provide drivers for all major operating systems out of the box. For Windows, a WDM/WDDM driver would do the trick; for Mac OS, they could include an installer for a kext. On Linux, the best they could do is go the NVidia route and provide an installation tool that attempted to compile the necessary kernel module. Of course, that wouldn’t work if the user didn’t happen to have the kernel headers installed, or the necessary compilers and development libraries.
First of all, he isn’t making a valid point, because only a fraction of a percent of Linux users will roll their own kernels and run into compatibility issues with proprietary drivers. The rest will get their kernels as part of their distribution’s service stream, and the distributor will not release a kernel that breaks compatibility with a major graphics driver without also shipping an updated driver. It falls on the distributor to hide these inconveniences from the user. That’s why they should be lobbying for free software drivers more than anybody.
Many of the people who argue against interface instability misunderstand what the free software community is trying to accomplish. We don’t want any single points of failure in our development community. We don’t want to be beholden to a particular company for updates and fixes. On the other hand, a hardware vendor shouldn’t have to develop their drivers in a vacuum. If a change is needed to run with the latest Linux kernel, then the kernel community would be more than happy to make the change. The graphics vendors are more than welcome to participate in the kernel project, and they do.
There are all of these boundaries between components in any software system. It doesn’t always make sense for developers to draw a line in the sand and refuse to take responsibility for anything on the other side. A little cooperation goes a long way. It’s like diplomacy. If international relations stayed the same, we would have longstanding arrangements and no conflict. But sometimes the relationship has to be renegotiated, and this must be done through bilateral or multilateral cooperation.
You can’t apply the Windows platform model to Linux, or free software systems in general, because they’re based on different theories. Windows is based on backwards compatibility: release once, runs forever. Linux is based on transparency and cooperation: watch the mailing list, try to keep up. You might argue that this makes life more difficult for developers. But it also keeps the development community engaged and somewhat unified. They have to work together and stay on top of changes, which ultimately (I believe) results in software that improves rapidly and forms cohesive systems.
The Windows world encourages complacency and independence. Therefore you get software that doesn’t improve very often or very much, and it comes in these monolithic globs that don’t share code, functionality, or look-n-feel. I don’t quite understand why users prefer a system that makes developers’ lives easier at the expense of progress and consistency to a system that makes developers’ and distributors’ lives a bit harder in order to squeeze out the latest and greatest for the benefit of the users.
Don’t worry about the developers. They’ll stick around through the seeming chaos, because, ultimately, they prefer hacking free software to proprietary software. It’s more fun and more rewarding without any of the hassles associated with proprietary software development. You don’t have to reinvent the wheel just because your employer doesn’t own a particular piece of software (or because your department can’t afford to license it from another department in the same corporation).
It’s the distributor’s job to put everything together and hide the underlying chaos from the user. The user just hits the update button and all is well in the world. That’s all that the vast majority of users want from their OS. They want someone else to make sure that their software stays current and working. Modern Linux distributions accomplish this at least as well as any of the other desktop operating systems.
It’s funny but every time a user points out why Linux doesn’t appeal to them, another Linux user has to answer with a ten page response about why the user is wrong. That is another major problem with the Linux community: your average user isn’t going to be bothered to read through pages of excuses.
It’s funny but every time a user points out why Linux sucks for them another Linux user has to answer with a ten page response about why the user is wrong. The problem with Linux power users is they don’t realize people can’t be bothered to read through pages of excuses.
Just because you don’t like my point doesn’t mean it is invalid. Computers on their own are pretty much useless. It isn’t until you begin to network them with other computers and devices that things begin to get interesting.
There are easily three classes of users: power users, moderates, and simpletons that cause PEBKACs. The intermediate group is larger than the extremely versatile group of users known as power users. They represent the ideas of power users to the less technologically capable, whereas hardcore users can’t often be bothered to discuss things with the less adept. Technology trickle-down, if you will.
If you analyze statements in your response such as “Windows was built for backwards compatibility” you can see that you have absolutely ZERO grounding in reality. To add to that assessment, you say Windows is inconsistent. At least Windows has a few standard “themes” instead of twenty window managers that change feel AS WELL as look. Furthermore, Windows does not encourage independence; it encourages reliance on purely Microsoft technologies, just for the record. Linux has nothing to compare to OLE under MS or Carbon Events under Apple.
If free software really works why is it even necessary to “lobby” for drivers?
Also, as a developer, you don’t speak for all of us when you say we prefer “hacking free software”. I happen to enjoy development on Mac OS X. The OpenStep-based development tools beat the piss out of just about everything out there, IMO, and, as Ruby claims to do, make software development enjoyable. So advanced that the open source community is attempting a clone (GNUStep).
Also, I frankly think we’re only starting to see the true costs of open source software. Bugs introduced are always being patched, and so the user’s system is in a state of almost constant upgrade. I installed Ubuntu on a shoddy box and there were, it seemed, quite a few updates every day to various applications. This is a result of the open/free software development model: too many updates. Even when it is fully automated to the point of prompting users to restart if necessary, it is still a burden. Software that needs fewer updates will be more popular, by default. The “develop by committee” approach that pretty much rules the free/open source world is a bit gimped in this regard.
> Baadger, you say that only a four-line patch is needed to run the latest NVidia driver, and you find that
> acceptable?
Absolutely. I’d rather have the freedom to do that, or have distros do that for me every quarter, than suffer with Vista for 6 months while the NVidia devs catch up with a major new release and driver API.
> In case you’re still not convinced, think about
> another hypothetical situation. Imagine a new
> hardware manufacturer creating a new graphics card,
> and wanting to provide drivers for all major
> operating systems out of the box.
> For Windows, a WDM/WDDM driver would do the trick
And how is the Linux distro/packaging methodology any different from a lot of the crappy OEMs whose drivers are only made available preinstalled with the OS and updated via Windows Update? These kinds of drivers exist.
None of the Linux distros out there really use NVIDIA’s installer as it is; these things are compiled and repackaged. To be honest, I’m sure the open source community would appreciate a tarball, a Makefile and the binary blobs just as much as they appreciate that elaborate installer.
“And how is the Linux distro/packaging methodology any different from a lot of the crappy OEMs whose drivers are only made available preinstalled with the OS and updated via Windows Update? These kinds of drivers exist.”
Of course they do, and I see nothing wrong with Linux distributions distributing drivers in that same way. However, what about new hardware?
“None of the Linux distros out there really use NVIDIA’s installer as it is; these things are compiled and repackaged. To be honest, I’m sure the open source community would appreciate a tarball, a Makefile and the binary blobs just as much as they appreciate that elaborate installer.”
So let’s say our hypothetical company making a brand new graphics card does just that, and contributes the source of the driver for their card to the community. Then what? The company stands back and waits half a year for mainstream distributions to include the driver, and hopes that users upgrade?
Actually, the installer more or less contains a tarball and makefile. Just decompress the installer, cd into usr/src/nv and you can make module && make install. I find it quicker and easier that way when playing with customized kernels.
If I’m trying differently configured kernels of the same version, I don’t even bother rebuilding the module, I just make install from the module src directory when I’ve booted into the new kernel.
Of course, I’d hardly call it the most user friendly method, but it’s quick and efficient, and besides, people looking for user friendliness likely aren’t updating their kernels regularly anyways.
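For anyone curious, here is a rough sketch of that manual route, assuming a 2007-era NVIDIA .run package (the file name is only an example; --extract-only is the installer’s own unpack switch):

sh NVIDIA-Linux-x86-100.14.11-pkg1.run --extract-only   # unpack the installer without running it
cd NVIDIA-Linux-x86-100.14.11-pkg1/usr/src/nv           # the kernel interface sources mentioned above
make module          # build the interface module against the running kernel
sudo make install    # install the freshly built module
sudo modprobe nvidia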
The latest nvidia driver installs perfectly on any stable distro kernel, yet not on the latest kernels from the development tree. And that is acceptable, because hardly any average user wants to compile the latest kernel anyway.
> I really can’t believe you piss and whine and cry about how Linux hasn’t made it on the desktop
> but expect end users to patch the bloody kernel to use a graphics card.
No, I suspect most users will just update their driver along with all the other packages on their system by making like …2 clicks in Ubuntu’s Update Manager (for example).
I patch my kernel because I want to.
“””
I’m not trying to restart the flames, but I tried the SD scheduler the other day (as part of 2.6.22-ck1) on my Gentoo box. During compilation of large packages the mouse cursor became jerky and useless; I’ve never experienced that with the stock scheduler and haven’t so far with CFS (which I’m running now). *In my very humble opinion* the SD scheduler needed more work.
“””
Con would tell you that you are just a corner case and don’t really matter, because all of his other users are happy… except for the few other corner cases like you.
That’s why his stuff doesn’t get merged.
Fortunately, silly ideas like this one about forking the kernel actually involve a large amount of effort by people who actually know something about developing OS kernels, i.e. people who would take one look at the idea of maintaining separate desktop and server kernels, immediately recognize what a bad idea it is, and file it in the round file.
As long as it’s just uninformed people blogging about how “someone should fork the kernel” we have little to worry about.
As long as it’s just uninformed people blogging about how “someone should fork the kernel” we have little to worry about.
Yet, we all allow people to vote even though 95% of the people haven’t a clue about politics or how to run a country.
“””
Yet, we all allow people to vote even though 95% of the people haven’t a clue about politics or how to run a country.
“””
Let’s not confuse a meritocracy with a democracy.
If distros had flocked to Con’s work, incorporating his patches into their own distros, and still mainline didn’t listen, then we *might* (or might not) have a problem.
The thing is, by and large, distros have not been interested in the patches in question either.
It’s inane to consider creating an out and out fork over what basically amounts to a sour grapes reaction by an ex kernel dev who quit because he didn’t get a couple of patches merged.
Distros, both desktop and server oriented, have plenty of variant kernel trees to choose from (minus -ck for now).
That said, anyone, even if unskilled in kernel work, is perfectly free to fork the existing kernel, maintain his own, separately, and cultivate a user base of his own. There’s your democracy if you really want it.
That’s why democracy is dangerous, why they tend not to last very long compared to other forms of government, and why they are prone to being overly fickle. It’s necessary to have a body at the top that can flat out tell the people that they’re wrong, that what they want the government to do is not in their best interests. Hopefully this body doesn’t ignore the people when they’re right and doesn’t play politics for personal reasons, financial or emotional.
Of course, if this stabilizing body abuses its power and harms the nation, then the people will fork the government. But it will take more than just one bad decision to warrant this reaction. It would take a series of bad decisions, abuses of power, and a seeming disregard for what anyone else thinks. Even then, it might be counterproductive in the short term to fork the government.
Now, Gallup doesn’t poll on Linus’ favorables and unfavorables, but I’d hazard a guess that Linus is nowhere near Bush country (about 25% favorable, 65% unfavorable, including 24% “angry”). This scheduler debate had been brewing for a while, both patchsets were as ready as they were going to be, and a decision had to be made. Linus had to be The Decider.
In the CFS corner, Ingo Molnar, kernel hacker extraordinaire, primary author of the O(1) scheduler and countless other core kernel features. In the SD corner, Con Kolivas, patchset maintainer extraordinaire, desktop performance guru with a loyal fan-base of enthusiasts. In campaign lingo, Ingo is the experience candidate and Con is the change candidate.
Ingo relies on the fact that most people are happy overall with the O(1) scheduler and the general direction of kernel development. Con tries to rally grassroots support around the idea that desktop users are disenfranchised by the establishment and deserve a new perspective on kernel development. Con has a “where’s the beef?” problem in that his solution is not very different from Ingo’s. He’s relatively inexperienced, and most people are rather dispassionate about his message.
Linus had to make a decision, and while there was little in the way of objective performance evaluation (see my recent post for more on this), the safe decision was obvious. Ingo has proven his considerable chops in this area. He’s a pragmatist that doesn’t take absolutist positions on anything. He was against plugsched because he worried about scheduler proliferation. Then he wrote scheduler modules to support cascading extensibility. He changed his position on fairness in response to successful results.
Ingo is the kind of guy that you trust with these things in the absence of hard evidence. Linus doesn’t trust him for personal reasons. He trusts him because he’s proven himself time and time again. This is not a “political” climate that’s toxic to the establishment. Yesterday’s news was pretty good, but today’s news is better. If SD wiped the floor with CFS, that would be a different story. But in a decision that’s too close to call, especially in times of peace and prosperity, you have to favor experience over insurgence.
“””
But in a decision that’s too close to call, especially in times of peace and prosperity, you have to favor experience over insurgence.
“””
I would only add that the CFS scheduler has a maintainer who has never threatened to take his ball and go home, despite having had substantial portions of his hard work over the years denied a place in the mainline tree.
The alternative scheduler has no current maintainer. Neither does swap prefetch for that matter. And to the person with Linus Torvalds’ responsibilities, the reliability of the maintainer has to count for a lot.
For more on Ingo’s rejected work, see:
http://lwn.net/Articles/242912/
“Fortunately, silly ideas like this one about forking the kernel actually involve a large amount of effort by people who actually know something about developing OS kernels, i.e. people who would take one look at the idea of maintaining separate desktop and server kernels, immediately recognize what a bad idea it is, and file it in the round file.”
You’re right. This decision requires a certain knowledge that the author making the claim mentioned in the headline might be lacking.
Today’s Linux desktop world offers lots of alternatives – and this is a good thing because of different needs and requirements on the user’s side. At some point, distributions get incompatible at the system tools or applications level. Imagine what a joy it would be if they were incompatible at the kernel level! :)
“As long as it’s just uninformed people blogging about how “someone should fork the kernel” we have little to worry about.”
If someone sees a need to fork the kernel and wants to do it, he’s free to do it, because the GPL allows him to. We’ll see the results later and discuss them. :)
On another topic, I mentioned that today’s desktops do include a lot of server functionality; see http://www.osnews.com/permalink.php?news_id=18333&comment_id=258519 for related information. Please note that most of the desktop vs. server argumentation is about userland, not the kernel.
Servers and desktops would have much in common when it comes down to the kernel level. So a fork would not be needed, but I could imagine different Linux distributions aiming at the desktop or the server user, providing different tools and configurations preinstalled. And, as far as I know, they already exist and do a good job. Some distributions can even be run on older x86 hardware to make a usable desktop that the MICROS~1 guys would need to pay USD 500+ for… :)
I’m not for this at all. Distros take care of the desktop side of things. Sure, this relies on the kernel for implementing certain features (mainly hardware support), but everything else after that is at the application level. (Mind you, this is obviously grossly over-simplified.)
I understand that a kernel made for a server might be developed differently (slower changes, more testing etc) and that a desktop version needs to move faster by nature, but I don’t want to have a “desktop” kernel constantly under HUGE changes that might reduce the level of quality and reliability. My desktop needs to be as reliable as a server system.
I agree, and I’d like to remind everybody that the kernel is a very small part of an operating system. It’s supposed to be very generic. It schedules threads, allocates process resources, manages memory, deals with the hardware, and provides basic system services.
There’s a lot of trade-offs down there, but the seasoned kernel architect will tell you to stay neutral or at least tunable. By all means, exploit fast-path conditions, but don’t optimize at the expense of less probable conditions. What is a “stutter” if not an occasional performance inconsistency? You want the kernel to perform consistently in various workloads. Don’t worry about isolated regressions. They can be fixed in userspace.
Which leads me to the broader argument that most of what’s wrong with the Linux desktop is because of userspace. Obligatory link:
https://ols2006.108.redhat.com/reprints/jones-reprint.pdf
Con argues that it’s hard to dramatically affect desktop performance by working on userspace. Yeah, userspace is massive, and there’s a ton of work to be done. But that’s where Amdahl’s Law says to look. It’s going to require a lot more than one man on a mission to iron out the performance wrinkles in userspace. Profiling userspace is not as glamorous as working on swap prefetch, but it’s more likely to result in the large performance improvements we’re looking for.
Performance will not come from a bunch of enthusiasts trying a bunch of kernels and gushing about how smooth they “feel”. Con’s followers should fire up their profiler of choice and start hunting for dust bunnies underneath their favorite applications. This way they can actually contribute something positive to the enhancement of desktop performance under Linux and other free software systems.
My wishlist:
remove multiple text terminal capability (Ctrl+Alt+F1, …): useless for end user
remove X11 remote host feature (xhost) : useless for a desktop => optimize X11 code for “localhost only”
Forget Indirect rendering: merge OpenGL and X11 layer
Use the staircase scheduler of CK (or, at least, boost the priority of the desktop/window manager)
> remove multiple text terminal capability (Ctrl+Alt+F1, …): useless for end user
You can do this yourself, it’s nothing to do with the kernel as such. I have reduced 7 terminals down to 3, and only ever use 2 (one for X, one to login or use if my frantic experimentation results in X failure or freeze ups).
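For reference, a sketch of how that trimming is usually done on a SysV-init distro of this era, by commenting out getty lines in /etc/inittab (the getty program name varies by distro):

1:2345:respawn:/sbin/mingetty tty1
2:2345:respawn:/sbin/mingetty tty2
#3:2345:respawn:/sbin/mingetty tty3   # consoles 3-6 no longer spawned
#4:2345:respawn:/sbin/mingetty tty4
#5:2345:respawn:/sbin/mingetty tty5
#6:2345:respawn:/sbin/mingetty tty6

Run init q afterwards (or reboot) so init rereads the file.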
> remove X11 remote host feature (xhost) : useless for a desktop => optimize X11 code for “localhost only”
Why? I use X apps across my LAN (using SSH X forwarding) all the time when in bed on the laptop.
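For example (host names are made up, and this assumes X11Forwarding is enabled in the desktop’s sshd_config):

ssh -X me@desktop   # log in from the laptop with X forwarding
firefox &           # the window appears on the laptop's local display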
> Forget Indirect rendering: merge OpenGL and X11 layer
OpenGL direct rendering is already on par with Windows’ performance, and X servers are starting to think about moving stuff (drawing?) over to OpenGL in a backward compatible manner. One of the big snags is the lack of complete open source OpenGL drivers for powerful desktop GPUs.
We also have XCB which is a C API to replace Xlib (but provides Xlib on XCB compatibility)… more optimization for local X server users.
> Use the staircase scheduler of CK (or, at least,
> boost the priority of the desktop/window manager)
I’m not happy with SD as it is, but I like the concept. But as a user I don’t want to have to bugger around with nice levels to get things working smoothly. At the moment my desktop feels responsive under load already.
Of course, all you said is right.
I can modify the OS default behaviour. I can use my laptop as an X terminal. I can stay with a display layer that doesn’t know about modern 3D GPUs. I can sacrifice comfort for performance.
But is it really the right thing to do?
@MORB: I see the difference between kernel and userland applications. But, from a desktop point of view, X11/KDE/Gnome/… are not “user” applications. These are “system” components. And if a system component needs a tweak/hook in the kernel, why not?
I agree, especially since this functionality has *zero* impact on performance.
This is ridiculous. When you don’t want a feature, don’t compile it in the kernel.
The same goes for the whole bloat argument, really. The only thing that’s really bloated with features you don’t necessarily need is the source code distribution, not the kernel binary.
Also, I fail to see what desktop-specific stuff could be implemented in the kernel that would help the Linux desktop. The desktop is the desktop, and the kernel is the kernel. It just seems people are blaming the kernel for the success or failure of the desktop, whereas they are two completely different layers.
Forking is a bad idea, because most of the stuff the kernel does is the same on desktop and server, and that is managing resources.
Yes, maybe it isn’t perfect for the desktop right now (although, using primarily Linux at home, I don’t see what’s supposedly so wrong with the performance), but Windows is far from perfect as far as performance goes either (yes, a fresh Windows install is blazingly fast, but I’m talking about those unacceptable one to ten second delays you experience randomly when, for instance, opening a new Explorer window).
Also, thom holwerda, you advocate microkernels (that is, a kernel that performs as little as possible and lets userspace do the rest), and you also advocate creating a desktop-specific kernel? I fail to see how those two points of view aren’t opposed.
Ever heard the name “BeOS”?
Don’t confuse the aim (performant desktop) with the underlying technology (kernel type, for instance). It’s like asking “you advocate vegetarianism, but also want to eat good food; are you crazy?”. No… Indians have done that for many years.
It’s just that a micro kernel seems to me as something that aims to be as generic as possible, and wanting a desktop-specific kernel seems to be the opposite thing.
But anyway, when I read this article for instance, I don’t see anything concrete. What would people add or do at the kernel level that would improve the desktop? They just keep saying that the current Linux kernel does the desktop wrong, but they don’t seem to be proposing any specific solution.
It’s like they suddenly decided that Linux failing its “year of the Linux desktop” year after year has to be because of the kernel, and that if the kernel is forked into a desktop version, everything will suddenly be rainbows and sunshine on the Linux desktop.
All of this fails to account for the fact that there are quite a few libraries and layers sitting between the kernel and a desktop like KDE – hald and dbus for instance – and they are also very important things that didn’t exist until quite recently, and are bound to result in dramatic improvement of the desktop, for instance when Xorg gets around to providing a dbus configuration interface.
My opinion is that the desktop is a more complicated thing to get right than it appears, and it just takes time. But it’s getting there, just not as fast as people hope it would.
It’s like they suddenly decided that Linux failing its “year of the Linux desktop” year after year has to be because of the kernel, and that if the kernel is forked into a desktop version, everything will suddenly be rainbows and sunshine on the Linux desktop.
Linux is definitely gaining momentum but doesn’t follow the surreal expectations of some people.
Actually I think that many of these surreal expectations could be met if “best desktop on earth” was the goal in the first place.
The truth is that “if the kernel is forked into a desktop version”, the code changes will target a “better desktop” = take care of what desktop end-users want.
What are the rationales behind the code changes in the actual Linux kernel?
I agree with you, I don’t think forking is the answer, but at the same time, why can Windows get pretty decent sound drivers, even high performance ASIO drivers, for their soundcards without having to install a whole new kernel, and Linux can’t? Does it have anything to do with it being monolithic? I can’t possibly see how, but I don’t really know all that much about it. I think performance does have to improve in certain areas, and sound is one of these areas. Having to have a kernel just for low latency, which affects performance on other things and doesn’t have everything else needed compiled in, is annoying, and as a user I don’t think I should have to compile a new kernel every time I want a new feature. The kernel should be more versatile than that.
Neither can I since you do not have to recompile the whole kernel to add drivers.
Honestly you might want to read up on Linux in general and performance related issues in specific.
– Eliminating the additional TTYs would make no difference in performance and take up close to zero resources. 6 might be excessive in a modern system but again they do not take a significant amount of resources and are quite useful at times.
– Changing X11 to remove remote X capabilities has been discussed and generally shot down as a dead end for improving local performance using X. Here, more than removing the extra TTYs, you are cutting off a significant sector of Linux users. Beyond people like many of the OSNews readers (including me) who use remote X applications regularly you are also cutting off linux based thin client solutions.
I meant it in the sense that you have to install a low-latency kernel to get decent performance on sound hardware. Windows doesn’t require this; all that is needed is proper ASIO drivers. Does Linux have an equivalent to ASIO drivers? I wasn’t implying that installing new drivers requires a recompiled kernel. I was saying that in order to get low latency performance, you need to have a low latency kernel. The low-latency kernel has issues with responsiveness when it comes to UI apps and generally feels less “snappy” than its plain vanilla counterpart. Windows has the ability to have both without any measurable decrease in responsiveness.
I’m an Ubuntu user. The only place I use Windows is at work and that’s it. I use Linux and OS X at home, and I just started using OS X recently. In Ubuntu the low-latency kernel works great, but then I can’t watch TV on Linux since the proper modules for my TV card are not compiled into the kernel. Do I have to compile my own kernel just to use tvtime?
Not being a user of tvtime, am I missing something? Why not compile the tvtime module and then do a modprobe?
Not being a user of tvtime, am I missing something? Why not compile the tvtime module and then do a modprobe?
Tvtime is “just” an application that makes use of a loaded TV-card module, which the poster said wasn’t compiled as a module for his low latency kernel.
I apologize if I was/am not being clear. What I am suggesting is that the gentleman compile the specific module for the tvcard. You do not have to build a whole new kernel. Heck, you do not even need the source for the kernel, just the headers and the module you want to add (though this is the harder route). Assuming the card in question is part of the stock source tree, download the source tree, grab your config from /boot, compile the module, move to appropriate directory if need be, then depmod -a; modprobe whatever. For a newbie I grant building a kernel or kernel module is somewhat daunting, but not that difficult.
My point has nothing to do with tvtime in specific but with the fact that you do not have to recompile the existing kernel to compile a specific module and insert it into the running kernel. It is reasonably well documented in the kernel documentation.
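A rough sketch of what that looks like for the bttv case, assuming the matching kernel source sits in /usr/src/linux (the driver’s directory differs between kernel versions, and the option has to be enabled as a module in the config first):

cd /usr/src/linux
cp /boot/config-$(uname -r) .config        # reuse the running kernel's configuration
make oldconfig && make modules_prepare     # prepare the tree for module-only builds
make M=drivers/media/video modules         # build only the directory containing bttv
sudo make M=drivers/media/video modules_install
sudo depmod -a
sudo modprobe bttv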
BTW, I already know how to do all of the things you described. My post was mis-worded. What I meant to say is: should the user have to do all of that? I certainly don’t mind, since I used to do it all the time for the fglrx drivers. Debian (and therefore Ubuntu) has some really cool tools that make it dead easy. What I meant to say is: do I have to compile my own kernel modules? The question was posed more like “why should I have to”, type of thing. Instead I came off like an idiot. Go me!
Yes, I see what you are saying. Sorry, long day, and I’m already a little frazzled by the new Beowulf trailer that dropped at Apple’s site. Freaking movie is going to be the end of cinema as we know it.
Anyway, the bttv module is fairly easy to get compiled and I’m an idiot. Case closed.
Anyway my point was that the kernel should NOT be forked. There is no need. I guess I should have been more clear.
Mostly agreed, also to the part about our OS-wizard TH. Well, it is not true that the kernel has no effect on desktop performance, but there are definitely more important factors to take into account first…
I am sure Con had his good reasons to quit. But as I understand it, he didn’t work on the kernel as a job. It was his hobby. So he can quit whenever he wants and we should still thank him!
Anyway, Con seemed to me to be mostly dissatisfied with the whole desktop paradigm of 1990+ (rather than Linux), which has moved away from optimized hardware design towards cheap all-purpose hardware where optimization means an increase in speed. Well, I can understand that. Dedicated optimized hardware IS superior and damn elegant… but also more expensive, probably often more proprietary, and, as opposed to software, it cannot be built and modified by everyone.
About Linux desktop performance I can only say: it is definitely sufficient, even for an impatient person like me. And a lot has been done WITHOUT FORKING THE KERNEL. The leap in desktop responsiveness (with Gnome) from Ubuntu Dapper to Feisty was remarkable, for example. It definitely compares very well to WXP!
And with Ingo’s patch, the otherwise standard Ubuntu kernel provides latencies for desktop audio applications (like Ardour) which I could never get under Windows with ASIO drivers… around 2ms total latency. With most of my sound cards (even the really expensive ones), ASIO (with Cubase) was a pain in the A to even get to run. Then, it usually crashed a lot. I simply don’t see the dramatic situation which warrants a fork of this all in all sophisticated, versatile, and stable kernel. People can optimize their kernel enough by coming up with an optimized build, adding patches, and that’s enough for me.
Please remember that modding someone down just because you don’t like his opinion is not in line with the forum rules.
Absolute nonsense.
I don’t find them useless. Granted, I’ve never needed six of them (one would do), but it’s nice to have somewhere to turn in an “emergency”. The real question is what would be gained from removing them. An extra megabyte of RAM maybe? Not a great deal, anyway.
Again, a feature that I find useful as an end-user. Having the facility to run a programme on a remote computer and have the GUI pop up on my machine is a fantastic facility, and is widely used in academic and engineering environments for example. I’ve yet to see anyone prove that removing this ability would make X better, and conversely, even Microsoft has moved to a client/server-type graphics architecture for Vista.
I’m assuming you’re talking about Xegl, which is indeed a good idea. But it’s a long road there, and in the mean time AIGLX (along with hardware-accelerated XRender) provides a good solution. In addition, there are (as I understand it, at least) some problems in a 100% OpenGL solution; for example, OpenGL doesn’t guarantee pixel accuracy, which is required for a GUI toolkit.
I don’t know anything about the staircase scheduler, but boosting the priority of the window manager isn’t necessarily a bad idea. That said, I’ve never had a problem with Metacity becoming unresponsive under heavy load, whereas I have with Compiz (even though the latter is meant to be hardware accelerated), so maybe the problem is with the window managers themselves.
Of course, it’s a great feature… but wait: it’s not a feature. It’s the real “core” of X11. That was a real necessity 20 years ago, when using a simple X terminal and a powerful Unix server.
But, for a desktop, I’m ready to sacrifice the performance spent on remote application display, and gain that same performance for local applications.
And for the Vista part: I’m not sure I find the Vista GUI more responsive than the XP one.
I wish people would really stop bringing up the network transparency stuff as a reason for why X is slow. When you are running programs on the same machine as the X server, TCP is not TOUCHED at all. Unix domain sockets, or FIFOs or an even more efficient mechanism is used instead. The only cost is the initial check to determine the best way to connect to the X server. So if you aren’t running any remote apps, then you won’t have any performance slowdown at all. Communication is as fast as possible without having the X server and the client running in the same address space.
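This is easy to verify for yourself; a quick check (xdpyinfo ships with the X utilities, and many distros start X with -nolisten tcp, so the TCP case may simply fail):

ls -l /tmp/.X11-unix/X0              # the Unix domain socket local clients use
DISPLAY=:0 xdpyinfo | head -n 1      # a ":0" display name never touches TCP
netstat -tln | grep 6000             # TCP port 6000 (display :0) only shows up if X listens on it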
Except that you won’t. Network transparency adds no overhead if you don’t actively use it.
I completely agree with your points as I’m one of the end users who do not need these functions.
remove X11 remote host feature (xhost) : useless for a desktop => optimize X11 code for “localhost only”
I asked a similar question on another site and was told that the xhost feature is minimal and doesn’t add any overhead, and that’s one of the reasons it’s left in X11.
>remove multiple text terminal capability (Ctrl+Alt+F1, …): useless for end user
In normal usage, yes, but to troubleshoot an issue with X, having one or two text terminals is quite useful.
Besides, how much memory do you expect to save, 50 KB?
>remove X11 remote host feature (xhost) : useless for a desktop => optimize X11 code for “localhost only”
1)It’s not useless for a desktop.
2)Knowledgeable people have said that the difference is at most 10%, so it’s not much.
For the other point, I don’t know if they’re useful or not.
“2)Knowledgeable people have said that the difference is at most 10%, so it’s not much.”
I’d say the difference is 0%. Local apps use Unix Domain Sockets and shared memory to communicate with the X server. TCP is not used at all.
…is supposed to be doing? Or do they not get down and dirty with the kernel?
Either way, a fork should not be needed, just options to compile in a desktop friendly scheduler.
I was under the impression we already had a variety of desktop, server and embedded DISTRIBUTIONS that pretty much fit most needs.
Some optimize the kernel for whichever market they aim at, others provide a variety of kernels that can be chosen at install time.
Do we really need to fork the kernel itself?
I like the idea of having a kernel designed around the desktop for a change; as you’ve said, it can only help our PCs run with the performance they were supposed to have in the first place. However, I believe the disconnect between developers and users is going to present a huge snag. I think this “disconnect” may have something to do with something Con mentioned in the article; a lot of these developers have egos which get in the way of them ever giving 2 bits about the users’ concerns.
To be fair this issue may also exist because a lot of the work is being done by/for major companies who are more concerned about big iron than desktops. This is why I believe that development of such a kernel could only come from truly independent developers; we won’t be able to ride the coattails of IBM, et al., and (since they probably won’t see anything in it for them) this effort won’t see any big-business funding.
… had any problems/issues using a Linux distro as a desktop machine. Performance has always been at LEAST acceptable, usually quite good. I did not realize there was a crowd of dissatisfied desktop Linux users out there.
Actually, I can’t really complain about ANY desktop OS out there regarding UI performance. I mean, I started out with straight X Windows on a VAX, then moved up to Motif, then Windows and Mac OS 7, etc. That’s over 20 years, and I have never been frustrated by the performance of the computers on which I developed/worked.
[edit]
Also used BeOS and OS/2 for awhile and loved them.
The biggest counterargument to a fork (and not merely yet another branch) is the maintenance burden. Fortunately, this guy is not afraid to write about it anyway and reflect on innovation.
Of course, innovation need not be limited to the usual suspects, such as trying out ideas from other kernels. It’s safe to say there is too much tunnel vision. E.g., the eminent philosopher of technology, Andrew Feenberg, has interesting ideas that could inspire better participation in a new kernel or just less “risky” (from a cost POV) stuff such as KDE or GNOME. This is not at all the usual FSF/GNU/RMS stuff that you might think you know well. It’s a different and useful way of reflecting on stakeholders.
One introductory article on this is “Democratizing software: Open source, the hacker ethic, and beyond” by Brent K. Jesiek. Here’s the abstract, with my emphasis:
“The development of computer software and hardware in closed-source, corporate environments limits the extent to which technologies can be used to empower the marginalized and oppressed. Various forms of resistance and counter-mobilization may appear, but these reactive efforts are often constrained by limitations that are embedded in the technologies by those in power. In the world of open source software development, actors have one more degree of freedom in the proactive shaping and modification of technologies, both in terms of design and use. Drawing on the work of philosopher of technology Andrew Feenberg, I argue that the open source model can act as a forceful lever for positive change in the discipline of software development. A glance at the somewhat vacuous hacker ethos, however, demonstrates that the technical community generally lacks a cohesive set of positive values necessary for challenging dominant interests. Instead, Feenberg’s commitment to “deep democratization” is offered as a guiding principle for incorporating more preferable values and goals into software development processes.”
To properly use SD and CFS, nice levels are an important tool to define what should get most of the CPU time.
Whenever I compile something, it gets started as nice 19 so it only gets CPU time when there is pretty much nothing else for the CPU to do, thus ensuring my GUI and mouse stay responsive.
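A minimal example of that habit (the -j value and the PID placeholder are only illustrative):

nice -n 19 make -j2     # the compile only gets CPU time the desktop doesn't want
renice 19 -p <pid>      # or demote a job that is already running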
And take that one step further. Be sure to nice Xorg and your Window Manager to a nice of -10. Perhaps in the old days “nicing” a program was considered bad because the mainline scheduler was never fair. But with CFS and SD the nice level system has been reworked and is now encouraged. People just need to learn how to use their nice levels.
People still argue they shouldn’t have to nice, but unless the scheduler can read your mind and predict the future, nice levels are the best way to go about optimizing a desktop; since you can guesstimate your workload.
What would I like to see in a “Desktop” linux kernel? Please take the Microsoft approach and put the mouse driver in kernel space with a realtime priority, so it never skips. Maybe the audio pipelines could have some realtime treatment too. It’s not like they can dominate your resources. Even in a catastrophic Windows freeze, often my mouse pointer is still perfectly responsive. heh.
IMHO, linux GUI performance is still slower than Windows XP and GDI+, considering Fluxbox, xfce4, metacity, kde and e17. On my laptop (which used to be dual boot to XP and Gentoo), XP and firefox allowed me to resize a webpage like news.google.com with instantaneous redrawing whenever I dragged a corner of the Firefox window.
On the same laptop, while in any linux GUI environment, redrawing the same webpage definitely lags… I can see the slow updates, and feel the slow scrolling and rendering on complicated websites.
One remedy is Compiz and its OpenGL backend, assuming you have a crazy fast GPU that is well supported and takes the workload off the CPU. And even though my laptop can use Compiz at 65fps with the excellent open source Intel drivers, a simple task like scrolling or resizing a window is laggy and not a realtime experience.
Overall though, Linux has come a long way in the desktop arena, but mostly due to applications and frameworks built on top of the Linux kernel, which is mostly designed with the server in mind. Perhaps GTK, Qt, and X11 are responsible for the lagginess of current Linux desktops, but I wonder how much they can improve considering all the layers in Linux just to draw a window!
so the question is, why is BeOS so responsive on the same hardware? Even SkyOS makes me go “wow” when I see how fast it does some things (firefox under SkyOS is blazing fast for both network and drawing).
so the question is, why is BeOS so responsive on the same hardware? Even SkyOS makes me go “wow” when I see how fast it does some things (firefox under SkyOS is blazing fast for both network and drawing).
One important reason why BeOS is so responsive on the same hardware is that Be, Inc. decided that the time quanta should be 3 ms, unlike Windows (10 ms or more) and Linux (not sure what it is off the top of my head) which has a couple of effects:
1. No single task is working for an extended period of time before the scheduler is called again (makes it noticeably more responsive if the system can decide to reprioritize main tasks that much quicker)
2. It also has a potentially less desirable effect (definitely in the case of a server, this is more of an issue) in that the CPU(s) cache is blown away more frequently from the shorter period. This can get even more pronounced with multiple processors in use, since BeOS doesn’t have a method to set thread/process affinity to any single processor or group of processors, something that Windows has an API to do, and you can manually set that via Task Manager as well, if desired.
Other than scheduling priorities, the time quanta issue is very relevant to the question of whether it is a desktop or a server-oriented system: the desktop is more responsive, but less efficient for throughput than a server-oriented system, where the CPU(s) get more efficient use of cache and time on a given task.
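As an aside, Linux exposes the same kind of affinity control from userspace; a small sketch using taskset from util-linux (core numbers are only an example):

taskset -c 1 nice -n 19 make    # pin a background build to core 1, leaving core 0 for the desktop
taskset -cp 0 <pid>             # or restrict an already-running process to core 0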
> Even SkyOS makes me go “wow” when
> I see how fast it does some things
On top of it AFAIK SkyOS is still compiled in debug mode (which slows things down…)
I don’t think that Linux should be too strongly optimized for the desktop.
I like that Linux has the flexibility to power web servers, desktops, my N800, wireless routers, etc…
so the question is, why is BeOS so responsive on the same hardware? Even SkyOS makes me go “wow” when I see how fast it does some things (firefox under SkyOS is blazing fast for both network and drawing).
Perhaps some people got scared and the project was dumped quickly after being paid substantially.
“Hi my name is Thom Holwerda, and I have linked this article, and then linked my own blog! Isn’t that so professional?”
My take (Thom abuses this, so I’ll do it as well): Thom should stick to just linking articles.
…than jack of all trades, in my opinion. I would love to see the kernel split up and optimized for its relevant roles, as I do find that GUI lag is a PITA.
I’m not a coder so I don’t actually know much about writing code but I have compiled a few kernels in my time and I think that with all the options during configuration, splitting it up is a good thing.
If this does end up happening, which I doubt will be anytime soon, I think that some form of common code base would still need to be kept to avoid duplication of effort in areas where all three kernels coincide.
Just my €0.02
Wasn’t this an issue with the distros before? Where the different distros would merge patches from the 2.5 series into 2.4 and other kernels maintained by developers, and in turn create huge incompatibility across different distros. Hardware wouldn’t work on one but work great on another; performance would suck on one, but be pretty quick on another. With the 2.6 kernel most of these things have lessened quite a bit and I don’t think we should have to relive those days. Not to say that they don’t still happen, but not to the extent that they did before.
I like the idea. Then again I’ve always liked the idea of more focused and modular designs.
I don’t know anything about kernel programming, but here’s my opinion. If splitting the kernel into embedded, desktop, and server versions shows a real potential for vast improvement with each and doesn’t significantly impact the quality of each, then I say go for it. On the other hand, if the return for splitting the kernel isn’t really significant, then I say leave it the way it is and let the distros optimize it for each application.
one person alone decides what happens. That is _good_ if the person is a benevolent dictator who has got a history of making the right choices at the right time. I think Linus is very good at maintaining the kernel – which does not mean one can find no fault in his doings, but overall he has done a supreme job, and I doubt this project would be in such a good state if it had been managed by a bigger group of ‘chief developers’. All the people complaining about either the procedure or the result may be lacking a distanced perspective.

There is only so much computer hardware can do, and one can either optimize for throughput or for latency. I think the kernel as a whole is a really, really great piece of software, and it performs equally well (not talking about numbers here) in most use cases users can come up with, be it embedded systems, huge mainframes, clusters, and yes, desktops.

I for one have been using it here for about two years, and this whole Gent0o (GNU/Linux + KDE ++++) thing I am running right now is the coolest pile of software I have ever used. BOINC is running 24/7 and I can compile huge packages in the background, play a 6Mbit TV stream, browse the internet and my discs – and the newsticker in my kicker panel very persistently flows from right to left. I think there was a bit less frame skipping with the 21 than with the 22 kernel now, but it’s barely noticeable and I am looking forward to the new CFS scheduler.
The only thing I really can complain about is the handling of an out-of-memory situation (which happens, yes, even with 3 GB RAM + 2 GB swap): the kernel takes a ridiculous amount of time until it decides this and that program needs to go (it may be hours), and it usually kills the wrong task (e.g. konqueror instead of emerge). So when I stupidly try to start Google Earth after a week or so of uptime, I'd better kill it quickly, or I have to use the SysRq trick to bring everything down (which always works, meaning the kernel NEVER crashes).
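(For reference, the SysRq trick I mean is the magic SysRq interface; assuming CONFIG_MAGIC_SYSRQ is enabled — it usually is on distro kernels — you can poke the kernel even when userspace is stuck, either with Alt+SysRq+<key> or via /proc as root:)

echo 1 > /proc/sys/kernel/sysrq      # make sure magic SysRq is allowed
echo f > /proc/sysrq-trigger         # ask the kernel to run the OOM killer right now
echo s > /proc/sysrq-trigger         # emergency sync
echo u > /proc/sysrq-trigger         # remount filesystems read-only
echo b > /proc/sysrq-trigger         # reboot, as a last resort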
to sum it up: thank you Linus for coordinating this bunch of nerds, and everybody for the code. FOSS rulez.
I thought that RedHat was the server distro, Ubuntu was the desktop distro … and some other distro addressed the embedded market. Isn’t that the grand Linux plan?
A clone of Apple’s intentions to deliver an “object oriented” operating system that could be configured easily for different markets? I mean … that was Taligent/Pink in the early 90s. Here a decade and a half later it is being purported as a new idea. I am sure it wasn’t original to Apple, either.
Guess that will teach them to try to be all things to all people and screw everyone in the process. I mean, come on, do average users really need ###### virtualization support in the kernel? If you have to build virtualization into the kernel, how good of an OS do you really have?
So you’ve been advocating what has already been done for years, Thom. How forward thinking of you.
> I mean, come on, do average users really need ###### virtualization support in the kernel?
If, for some strange reason, it bothers you, then pick a distro that doesn’t compile it in, or compile the kernel yourself. Everyone else can just not worry about it.
> If you have to build virtualization into the kernel, how good of an OS do you really have?
A very good one. You can run Linux in Xen using the VT hardware support, but you can't do that with Windows.
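(And it's easy to see what you actually have; on most distros the built kernel's config is shipped in /boot, and the virtualization bits are just modules rather than anything forced on anyone:)

egrep -c 'vmx|svm' /proc/cpuinfo               # non-zero means the CPU has Intel VT-x / AMD-V
lsmod | grep -E 'kvm|xen'                      # virtualization support, if loaded at all, is a module
grep -iE 'kvm|xen' /boot/config-$(uname -r)    # or check how your distro kernel was built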
Are we starting down the same road MS blundered on so many years ago? The interface is not the OS. We can never forget this truth. Maybe we have so many windoze users moving to Linux now that they are bringing their own misconceptions with them. I guess next we’ll have to install every rpm/package on every cd/dvd whether we want the functionality or not. Because that’s the way MS does it.
> The interface is not the OS. We can never forget this truth.
It doesn’t matter how capable the OS is if there is no way for the user to make easy use of it. I could have the most well-engineered car in the world and be unable to drive it if I have to deal with a large array of nonintuitive, contradictory controls and control panels.
> Because that's the way MS does it.
Making the system usable for Joe Average and making the system work like Microsoft’s are two different things. Granted, Microsoft might be that much more advanced in system design, what with improved enduser experience being a design goal of theirs ever since customers began complaining about Windows 3.11. However, you don’t have to compromise anything for good design — good design is something that lets you not have to compromise.
> Are we starting down the same road MS blundered on so many years ago?
Yeah, and we all know what happened to Microsoft once they began moving away from the “MS-DOS + Windows desktop manager” model and began making their operating systems integrated and user-friendly, those poor souls.
Actually, you raise a good point. Everyone in the Linux world has to decide whether we want to continue to make a fast, reliable OS that can be used successfully by so many different people in so many different ways, or whether we want to become the next Microsoft. Personally, I think one is enough.
My take: I don't care what a Website Editor thinks about what he would do to the Linux kernel.
> My take: I don't care what a Website Editor thinks about what he would do to the Linux kernel.
Then don’t read OSNews. There’s nothing stopping you from leaving, Beta. OSNews has always had the editors’ opinions attached to stories every now and then, and we will continue to do so. At least on OSNews, we clearly separate the news from the opinion/editorial points of view.
If you really want a smaller audience to soapbox, fine.
Well, no; you combine them with the news stories when it’d be perfectly possible (and far more appropriate) to just write up an article.
> Well, no; you combine them with the news stories when it'd be perfectly possible (and far more appropriate) to just write up an article.
To separate means that it is clear what is one thing, and what is the other. In this case, that line is clearly visible.
In any case, we’re way off topic here. Please continue the discussion about the subject at hand, else I will remove this thread.
Can no one compile anymore? Why would we not just remove the pieces that don't fit, replace them with pieces that do, and make <*>?
Or maybe someone should go pick up ck's branch and maintain it. Then all the desktop-centric distros can run the -ck kernel, and if you can't compile, a binary will be provided for you.
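In fact, running -ck is already only a couple of commands — roughly this (filenames from memory; grab the actual patch from Con's site first):

cd /usr/src/linux-2.6.22                     # a vanilla 2.6.22 tree
bzcat ../patch-2.6.22-ck1.bz2 | patch -p1    # apply the -ck patchset
make oldconfig && make -j2                   # reuse your existing .config, then build
make modules_install && make install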
__forking__ is not the answer.
…what I want is a good Desktop.
We probably could write a good Desktop using the Linux kernel code as a base. And adding the X11 code. And adding the QT/GTK code. And adding the KDE/Gnome code.
The question is, can we write a good Desktop just by stacking all those pieces of code over each other ?
Can we design a good Desktop with the constraint of having only separated, isolated and independent subsystems?
> My take: I have been advocating splitting the Linux kernel up (desktop, server, embedded) for years now.
You are so strong. What the world could do without Thom Holwerda …
Why not focus on other projects for the desktop, such as Syllable, instead of forking?
Obviously, I totally agree with you. We’ve been saying the same thing for years now. You can have a dedicated, GPL licensed desktop OS without forking the Linux kernel. I absolutely see no reason why Syllable running on desktop machines and Linux running the servers those desktops rely upon is not a workable, sensible model.
Maybe instead of forking a kernel it’s better to run the desktop OS from memory like PuppyLinux (and probably others too).
Tried Puppy yesterday and have to say it’s the snappiest Linux I’ve ever worked with.
Applications start instantly and the GUI is really fast.
Yup. Puppy is snappy even on my PPros. It’s nice to see a newer Linux distro which seems to care a little about older boxes.
DSL (DamnSmallLinux) is also pretty snappy on old boxes, BTW.
+1 for Puppy. Of course, if users are willing to switch to a lightweight wm like jwm (Puppy’s wm) instead of a full-blown DE such as GNOME or KDE (or even XFCE), they’ll see much of that speedup
Is there a problem anyway?
Many distros have more than one kernel. Ubuntu has a separate server kernel with PAE etc. enabled, and in addition there's also a low-latency kernel. Most desktop users will hardly ever notice the difference between the various compiled kernels, unless you are a (semi-)professional doing audio or other work that requires low latency.
This is my /usr/src:
2.6.23-rc1-mm1
gradm2
gradm-2.1.10-200702231759.tar.gz
grsecurity-2.1.10-2.6.21.5-200706182032.patch
linux-2.6.21.5
linux-2.6.21.5.tar.bz2
linux-2.6.23-rc1
linux-2.6.23-rc1.tar.bz2
linux-headers-2.6.20-15
linux-headers-2.6.20-15-generic
linux-headers-2.6.20-16
linux-headers-2.6.20-16-generic
linux-headers-2.6.21.5-ph-grsec_2.6.21.5-ph-grsec-10.00.Custom_amd64.deb
linux-headers-2.6.23-rc1-ph_2.6.23-rc1-ph-10.00.Custom_amd64.deb
linux-image-2.6.21.5-ph-grsec_2.6.21.5-ph-grsec-10.00.Custom_amd64.deb
linux-image-2.6.23-rc1-ph_2.6.23-rc1-ph-10.00.Custom_amd64.deb
NVIDIA-Linux-x86_64-100.14.11-pkg2
NVIDIA-Linux-x86_64-100.14.11-pkg2.run
patch-change-recent-kernel-2.6.22-nvidia-100.14.11.txt
pax-linux-2.6.21.5-test9.patch
I hardly notice dramatic speed differences between the kernels listed above, other than for a specific role.
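(If you're curious what actually differs between such flavours, just compare the configs your distro ships in /boot — filenames depend on what's installed; these are only examples from an Ubuntu-style system:)

grep -E 'CONFIG_HZ=|CONFIG_PREEMPT|CONFIG_HIGHMEM' /boot/config-2.6.20-16-generic
grep -E 'CONFIG_HZ=|CONFIG_PREEMPT|CONFIG_HIGHMEM' /boot/config-2.6.20-16-server
diff /boot/config-2.6.20-16-generic /boot/config-2.6.20-16-server | less   # see the full delta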
Furthermore, many distros don't ship with a vanilla kernel anyway. What is perhaps needed are more advanced algorithms targeted not only at the kernel but also at USERLAND. A fast kernel can't make up for a drag of a DE.
You don’t need to fork the Linux kernel. There already is a GPL OS that is firmly directed at desktop users. Come give us a hand with Syllable instead.
But why would we use Syllable, when there are more mature, non-GPL options available?
Because as this article demonstrates, what use is maturity when it doesn’t do what people want? VMS is “more mature” but I wouldn’t recommend it for desktop users.
While I have nothing but respect for Con, this entire forum is the reason why the kernel developers are SO standoffish. EVERYBODY and their brother thinks they know how to write kernel software, or what's best for desktops versus servers, yet only about 1% (less, I am sure) are actually kernel developers. If I had a bunch of users yacking in my ear every day I would be cranky too. The kernel devs probably don't have the time to wade through the sea of input they would get if they opened themselves up to the community as a whole.

However, there is nothing stopping a group of you from starting a web site to consolidate the community's perspectives and submit them to the mailing list in a concise, helpful manner. Of course, that would require people to actually do something, but either put up or shut up, I always say.
As for the kernel and the desktop: the kernel is modular and can be stripped down to fit on a floppy disk. The problem isn't the software or the kernel. You want a snappy system? Try loading a live distro into a RAM disk: things are plenty fast. The problem is that the disk architectures in our systems aren't much faster than they were 10 years ago. Disks running at 100 MB/s are still pretty much the top of the line. Where are all the 10K RPM ATA drives? Still only the Raptors are running at that speed, for the most part. I want to see how Linux runs when it's on a memory-based drive. Then we'll see if it's so slow.
I think everybody has the right to speak about OS development, but there have to be filters. Here at OSNews, a novice user could meet a kernel developer, and the comments of the first could be annoying for the second. But with a well organized communication chain, novice users should be able to talk to experienced ones, who could pass relevant issues to application/desktop developers, who could pass relevant issues to kernel developers. An organized communication hierarchy would be good for overall development, but needs effort and coordination.
And about hard disk speed: it's clearly an issue, but it affects other OSes too. Windows and MacOS run on the same hardware as GNU/Linux, so you can't blame the hard disk for Linux performance when comparing it against them. The performance of GNU/Linux operating systems has to be improved; determining the best way to do so, whether it involves a kernel fork or other means, has to be left to the experts, not the users.
Changes have to be introduced to meet needs, but needs have to be well known first.
Multi-core aware. Intel is dumping the Pentium D, and soon first-generation Core 2 Duos will be dirt cheap.
Leopard is multi-core through and through, and Apple is making its software suite multi-core ready.
As for feeling snappy: when KDE 4 is released alongside GNOME 2.20.x [only speculating, as I haven't verified whether they are multi-core ready], performance will improve if that work is done.
This server/client/embedded split is easy to configure through the distro and/or by building one's own kernel.
Am I the only one who finds it amusing that all the people suggesting ways to improve kernel development are people who have absolutely no involvement, expertise or experience in kernel development? At least as far as sensible searches seem to show. I’m drawing a blank on this Danijel Orsolic guy having done any actual work on any kernel ever. Or, for instance, ever posted to LKML.
99% of the people with no experience in such issues have the good sense to shut up, I don’t know why Mr. Orsolic thinks he’s different.
Or, for instance, systyrant:
“I don’t know anything about kernel programming, but here’s my opinion.”
If you don’t know anything about kernel programming, why do you think your opinion is worth a damn?
I don’t know anything about kernel programming either, which is why I’m definitely not giving any opinions…
I'm typing this on Ubuntu Gutsy Gibbon with stock Linux 2.6.22.8 on very modest hardware, and the desktop interactivity is great, even with videos playing, updates installing, etc. It's much, much better than the horrible interactivity under Windows.
Why is anyone proposing a drastic solution to a non-problem?
Because it’s only *your* very own experience.
> Am I the only one who finds it amusing that all the people suggesting ways to improve kernel development are people who have absolutely no involvement, expertise or experience in kernel development?
It isn't really that amusing. Isn't this the way most forums work: the loudest posters often know the least? I think it's the Murphy's Law of web forums.
More seriously, if -ck worked on a forked kernel of Linux optimized for the desktop, I would definitely download it, compile it, and run it to see if it works any differently from the vanilla kernel I am running now. That is the great thing about open source: if it is GPL'd, you don't need anyone's permission to alter the source, as long as you make your source changes available. If you have a better idea to fix it, then you can! Frustration is oftentimes the mother of invention.

I don't do any kernel hacking myself, but through observation one can see there are problems with GUI events and scheduling on Linux desktops. I have wondered why mp3 playback is sometimes interrupted when I copy large amounts of data in the background. I tried the same thing on XP and it didn't skip at all. Same hardware, different kernel. Something apparently could use some tweaking in the kernel.
Also, the idea of Linux moving to a microkernel, as mentioned by the author, is an interesting one. The Linux kernel is monolithic in design, and it could be argued that a one-size-fits-all approach is just not cutting it for high-end desktop performance. The idea of different modules which optimize for server, desktop or embedded use could be a solution.
>I have wondered why mp3 playback is sometimes interrupted when I copy large amounts of data in the background. I tried the same thing on XP and it didn't skip at all. Same hardware, different kernel. Something apparently could use some tweaking in the kernel.
Yes, because file copying and MP3 playback are kernel functions. </Sarcasm>
What is probably really happening, is that your media player does not load the entire MP3 into RAM before playback. File copying causes high disc contention, so when the player comes to load the next part of the MP3 it has to wait, interrupting playback.
Solution: Use a better media player/a faster hard disc.
Reason it doesn’t happen in Windows: The media player loads the complete MP3 before playback (non-kernel difference) /the disc scheduler gives more priority to the player (kernel difference, but only makes a difference due to userspace).
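(On Linux you can also just tell the disc scheduler the same thing by hand; with the CFQ I/O scheduler, ionice from util-linux adjusts I/O priorities — a rough sketch, where the file paths and player name are only examples:)

ionice -c3 cp /data/huge.iso /mnt/backup/    # run the copy in the "idle" I/O class
ionice -c2 -n0 -p $(pidof amarok)            # or raise the music player's I/O priority
nice -n 19 cp /data/huge.iso /mnt/backup/    # plain CPU niceness helps less, but doesn't hurt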
Well, reading an audio file without drop is actually not as easy as it seems, and the kernel is quite involved in the matter. I don’t think the problem is linked to loading the mp3 in ram or not: even if you have the whole file in the IO buffer (by copying it and having available free ram, for example), you will still get the problem.
The problem really is in the interaction between latency and disk IO: the kernel takes its whole time in IO, while the player needs CPU to say it simply. You can program applications such as this is not a problem (real time scheduling), but it is kind of overkill for playing an mp3. The fact that there is no high level standard to program audio on linux does not help either.
“Well, reading an audio file without drop is actually not as easy as it seems,”
It’s no different from reading any other file without drop.
“I don’t think the problem is linked to loading the mp3 in ram or not: even if you have the whole file in the IO buffer (by copying it and having available free ram, for example), you will still get the problem.”
Have you actually tried this, or are you just guessing that you will still get the problem?
>>”Well, reading an audio file without drop is actually not as easy as it seems,”
>It’s no different from reading any other file without drop.
Which kind of ‘other file’ are you talking about?? I hope you realize that playing an MP3 is quite different from reading, say, a document file.
“Which kind of ‘other file’ are you talking about??”
It doesn’t matter. Reading a file is reading a file. It’s done the same way no matter what the contents are. The filesystem is not aware of whether a file is an mp3 or not.
Well, then "reading a file without drops" does not mean anything. When I said reading a file, I was not talking about file IO, but about the high-level process of decoding an mp3 from the filesystem and getting it to your soundcard. Reading the contents of the mp3 file from the hard drive into memory is indeed no different from reading any other file; but dropouts do not mean anything at that level.

The problem is that with a normally scheduled process and a "standard" kernel, if you do intensive IO somewhere else (say, copy a big file from one hard drive to somewhere else), it will take all the CPU. If you read your mp3 file a first time and then reread it, there is a pretty good chance that the mp3 is in fact in memory, not on the hard drive. This is really likely because an mp3 is small (a few MB, compared to today's standard memory): for example, if you checksum an mp3 file, the first time it takes some time because the file buffer is "cold"; after that it is "hot", because the file is in the IO buffer and read directly from memory.

When you design an audio app under Linux, the common design is to have one real-time thread which does not make any system calls, a thread for IO which feeds the data from the filesystem to the real-time thread, and other threads to do the common work. The real-time thread, by avoiding system calls, avoids any unbounded operations (if you do e.g. a malloc, you cannot know how long it will take), and can be scheduled in real time "efficiently" (class SCHED_FIFO, set by the POSIX function sched_setscheduler). Whether the file data are on disk or in memory (through virtual memory) does not matter anymore, as long as your hardware keeps up, of course.
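(Without writing any code, you can approximate that from the shell with chrt from util-linux — it needs root or RT privileges, and be careful: a runaway SCHED_FIFO task can hog a CPU completely. The player name is just an example:)

chrt -f 70 mpg123 song.mp3      # start the player under SCHED_FIFO, static priority 70
chrt -p $(pidof mpg123)         # inspect the scheduling policy/priority of a running process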
As an example of the influence of the file buffer being hot or cold, here is the time to check the md5 of a ~25 MB file:
first time:
time md5sum bigogg.ogg (25 MB)
real 0m0.505s
user 0m0.088s
sys 0m0.028s
second time (and all after as long as the file is still in the OS IO buffer):
real 0m0.091s
user 0m0.064s
sys 0m0.028s
In the first case, md5sum spends most of its time in IO operations and actual hard drive reads. After that, because the kernel has most if not all of the file in memory (we assume you have some memory left here), the operation is purely a matter of computing the checksum (almost 100% of the time is spent on the actual computation).
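(If you want to get back to the "cold" case without rebooting, kernels since 2.6.16 let you drop the page cache by hand — needs root:)

sync                                   # flush dirty pages first
echo 3 > /proc/sys/vm/drop_caches      # drop page cache, dentries and inodes
time md5sum bigogg.ogg                 # and the first-run timing is back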
A gentle dream without any real supporters.
I read through all the comments here, but I can't see anyone mentioning a single thing that would speed up the desktop and would require any modification to the kernel itself… So, someone who's up for a “desktop Linux kernel”, please, tell me some things that would have to be done to the kernel in order to benefit the desktop? Any? Even one? :O
Seriously, I myself don't see a single reason to fork the kernel. I'm not a kernel dev, but, for example, laggy windows in X have nothing to do with the kernel: it's the X server, and it's being worked on. I often find, for example, Firefox rendering pages a lot faster under Windows than under Linux, but I assume that's because drawing to the screen under Windows is done largely by the graphics hardware, whereas in Linux it's not, at least not as much. Beryl/Compiz etc. do render the whole window using OpenGL and as such are fast, but the app rendering the actual contents of the window is still just as slow (or even slower, as in my case) when Beryl/Compiz is enabled. That's just because the code path for rendering graphics hasn't been optimized enough.
Anyway, enough of my babbling. I'm waiting for someone to mention any real reasons for a fork.
Linus Torvalds didn't lose his virginity until he was 32. Typical nerd.
Some guys have indirectly hit on this already. Unless you single task, your kernel scheduler makes a compromise. It puts off today what it can do tomorrow – but in milliseconds.
It's been mentioned: BeOS had this down well. Overall you couldn't get much better than 70% work out of a CPU, but you could compile and play videos at the same time, and the system knew a 'realtime' stream needed to be 'there by then', while the 'normal' compile would be OK if it waited an extra few cycles. This is what made it feel so fast. FireWire follows this as a model for its data transfer (which is why DV cams use it instead of USB 2).
The Unisys NX mainframe (old Burroughs) also shows how this whole 'nice' idea works out. It too had priority levels, but at 3000 threads at normal priority (30) the whole thing crawled, as it ended up switching more than it worked.
Con is hitting a particular problem, and offers one solution.
Remember this kernel scheduler business has been around since the beginning of the 2.x linux kernel.
Can we please stop? How about we just go back to the editors submitting their own comments as their take rather than plaster them all over the front page. There are plenty of things posted here now which I think are little more than flamebait, and I won't complain about those, but having editors show their bias directly in postings does nothing to help reduce that flamebait image.
OSNews wouldn’t be OSNews without a few flamewars always raging on, you know.. ^_~
“””
OSNews wouldn’t be OSNews without a few flamewars always raging on, you know.. ^_~
“””
True. It would be a superior forum for discussion and a better source of information about the various platforms that it covers.
> Can we please stop? How about we just go back to the editors submitting their own comments as their take rather than plaster them all over the front page.
“All over the frontpage”? What universe do you live in? We rarely use the “My take” option. OSNews has been online since 1997, and we have over 18300 stories. Searching for “My take” puts forth 86 stories.
So please, don’t go making stuff up now.
The whole "my take" stuff didn't start back in 1997, or at least I didn't see it back then (admittedly I haven't been reading since then either, but I have been a reader since at least 2002). But you're probably right, I am exaggerating; my point is that it's irritating, and on controversial topics it just shows bias. I don't really understand why you've decided to do the whole "my take" thing anyway; a simple post in the comments would mean anyone that's interested in your opinion would find it.
“””
a simple post in the comments would mean anyone that interested in your opinion would find it.
“””
I agree that a “just the facts ma’am” style would be preferable.
Commentary is better left to the comments section. And comments by staff accounts really should be mod’able, as well.
Well, I have been here since the beginning, or close to it; I know I have been reading since before 2000. It has been around for a long time. I am surprised the count is as low as Thom mentions, given that Eugenia used to use it quite often.
It is, and has been, part of the commentary style of OSNews. Perhaps it could be done as a comment, but the whinging about it is pointless and annoying. It seems the readership likes to come up with things to complain about, especially in relation to Thom. Is it that hard to ignore the “My Take:” comment if you do not like it?
Seriously, what “desktop-specific” features does the kernel lack that Joe Average needs? Not that I use Linux much, but when I do, whatever complaints I may have aren't really related to the kernel. I can watch movies just fine, listen to music just fine, surf the web (even with Flash) just fine, manage photos just fine, etc.
I'm sure there are niche use cases where low latency really matters (audio/video processing, perhaps), but for the average desktop use case I don't see what the kernel is lacking.
“My take: I have been advocating splitting the Linux kernel up (desktop, server, embedded) for years now.”
I am confused; I am not sure that I understand what you mean.
Are you saying that you are necessarily right, or that your point of view should be taken as absolute truth? Or should we understand it as: we should have listened to Thom Holwerda for years, he was right, he is the one who knows everything…..
Let me make it clear: this is not a personal attack, but I think that you need to come back down to Earth.
As far as I am aware of your activities, nothing tells me that you are involved in any way in kernel development, or that you know how Linux is constructed (the same can be said of the author of the article that you link to). Which means that what you have been saying (even for years) is just pulled out of thin air. Nothing more.
The Linux kernel is a complicated piece of code, and only the people who write it really understand it. This is a fact, and nobody can argue against that.
The idea of splitting up Linux means that for the desktop version of it you will need to write code; I mean, you will need to change the code that needs to be changed to fit desktop use. Well, if we take it as a matter of fact that the Linux kernel has mainly been evolving towards a kernel for server use, then the way it is written implies that there are assumptions and compromises made by the developers to meet that market. I would assume so — I am not a Linux kernel developer, I am more from the Darwin world — but in the process of the design, it has to be.
Those assumptions and compromises also mean that there are a lot of lines of code which depend on each other; one piece of code written for better server use may imply that another piece of code depending on it follows the same assumption.
So in the end, what you have to change is code with dependencies all across the kernel, and making changes is not trivial at all, given the complexity of the kernel.
What I am saying here is just an assumption about what it would require to change the code of a kernel, which again is what you end up doing if you want to create a desktop-centric Linux kernel (and again assuming that the Linux kernel contains code that does not fit desktop use and which needs to be changed).
Given that, any comment which says "do this" or "do that" to a complex piece of code, without knowing exactly what it takes to do it, is not relevant for a single second.
Which leads me to say that what you say is irrelevant, because you are not aware of what it takes to change a kernel, whatever the changes might be. And pretending that what you are saying is necessarily a good thing, or an absolute truth, is just being very arrogant. If you want to discuss it, that's fine, but don't pretend that what you are saying is correct, given that you do not have any clue what you are talking about.
If you had written something like:
“My take: I also think that splitting the kernel up could be a good idea, however the technical challenge (if any) remains to be defined.”
Well, that's fine: you express agreement with the idea the author puts forward, without assuming it is achievable before knowing the technical aspects of the question.
But no, of course you are better than that, right? Instead, you have stated, with great arrogance, that you are the one who knows what is good and why, even though, again, you don't know what you are talking about.
You even added the "for years NOW", which makes your arrogance even more ridiculous and pathetic.
A little bit of modesty from your side would be nice to consider…….
> Are you saying that you are necessarily right, or that your point of view should be taken as absolute truth? Or should we understand it as: we should have listened to Thom Holwerda for years, he was right, he is the one who knows everything…..
?
You are reading a lot there, while in fact, it says very little. All I’m saying is: article A puts forth solution X, and I have been bringing forth the same solution for a few years now.
Nothing more, nothing less.
As the saying goes, there are those who do and there are those who talk. The author of the article sounds like an arrogant jerk who won't do it himself but would like others to work for him for free. Yeah, right!
Some distributions add their own patches to the Linux kernel to make it better suited to end users, so they are already creating this 'fork', except that these aren't really forks, because they keep compatibility with vanilla Linux.
Would a centralised 'fork' aimed at end users be useful? Maybe, but it's not certain that Linux distributions would help maintain it, as they also like to have some advantage over other distributions.
Now, would a real fork aimed at end users only provide a big difference compared to vanilla+patches (like the CK patches)? I doubt it, and it would be hard to see which features should be dropped. Virtualisation? It can be interesting for end users, allowing safe installation of untrusted binaries. The RT patch? It's useful for playing video/audio. So it's hard to see where this 'bloat' he complains about actually is.