“It’s over. The magic is gone. The dream is dead. The egg has fallen off the wall and no amount of ‘sudo’ super glue can put its pieces back together again. I’m referring, of course, to the not-so-recent departure of Con Kolivas from the Linux kernel development community. Con – that champion of all things desktop centric – hung up his keyboard this summer, the victim of an ideological rift within the Linux community.” Update: And the first rebuttal appeared.
I guess I’m going to wipe all my Ubuntu installations at home then, because this Randall Kennedy says Linux is getting nowhere on the desktop without Con Kolivas’s patches.
/sarcasm
You’re right. No one person makes Linux. People will leave over the years and be replaced by others who want, and have the talent, to make the changes needed.
/reality
So every man’s frustration is going to be published. Give the man a pretty paracetamol.
This whole “free” thing was getting on my nerves anyway. People should be TOLD what OS to use and what apps they need. Period.
A lot of people will probably misunderstand your sarcasm and mod you down, but beyond that you do make an interesting point. People do indeed, as a general rule, want/need to be told what to do more and more it seems. It’s becoming ingrained in our society, at least here in the US. Freedom of thought, of speech and just overall freedom to be who we want to be is slowly being eroded, and sadly companies like Microsoft and Apple are watching with pleasure.
I’m not a GPL nut by far; I am happy to see free software whether it’s GPL, BSD or even just freeware, as long as there is diversity and freedom of choice. I’ve paid for open source software in the past, because I felt my money was not only buying me support, but also my only way of giving back to the community as I have little to no talent in programming or debugging.
LOL! I thought I was pretty much telegraphing that one.
Some people are polarized around this desktop issue: they want a forked kernel. What Kennedy doesn’t understand is that most people aren’t. Why? Because Linux works well enough on desktop computers.
Linux works well enough for whom? Typical Linux users?
I’ve been experimenting with Ubuntu and it’s been interesting, but it’s not enough for typical computer users. It’s better than my prior experiences with Linux, but you can only go so far without dropping to the shell prompt for help.
Too bad someone has given up but Linus and others just don’t get a lot of things, nor do they care.
Maybe Linux for the desktop will never really happen. Perhaps, someone can start over with BSD.
Notwithstanding the fact that this is highly debatable, it has absolutely *nothing* to do with scheduling issues. You fail.
Most Windows users I know sometimes have to go Run->cmd to get something done (like network management), or Google for answers to their problems or for drivers for their hardware. This is not exclusive to Linux, but for some reason most Windows users consider a totally useless help system, having to manually download obscure drivers and patches only to be told to reboot the system, not to mention all the various crashes with absolutely no explanation, to be acceptable. But god forbid they ever have to go to a text prompt.
“Most Windows users I know sometimes have to go Run->cmd to get something done (like network management), or Google for answers to their problems or for drivers for their hardware. This is not exclusive to Linux, but for some reason most Windows users consider a totally useless help system, having to manually download obscure drivers and patches only to be told to reboot the system, not to mention all the various crashes with absolutely no explanation, to be acceptable. But god forbid they ever have to go to a text prompt.”
People believe “Windows” is easy because they leave problematic situations (as you describe above) to others – for free, of course, because that’s smart, isn’t it? It seems to be a completely normal situation for “Windows” users to “FORMAT C:” and reinstall every few months, not to wonder about system crashes with no imaginable reason, and to put up with viruses and trojans. And you need to know a friend who fixes all the problems, who knows the system and how to do things. That’s the way computing has to be. Really?
Desktop Linux has come a long way and is still in development, mainly because the uses people intend Linux for are still developing. Users want to do everything with their system. NB: Although the system consists of many parts (kernel, OS, drivers, applications, etc.), people seem to understand it as a single closed unit. And there are always people who insist on doing strange things and then start complaining about Linux while referring back to their “good XP”… Users don’t use a kernel, or an OS, or even an application. They want a certain result, and the computer has to deliver it. Period.
While users of “Windows” seem to insist on employing others to solve their problems, Linux users usually are smart and willing enough to do it on their own. This, I think, is a hint to their superior intellectual and moral education. One could argue here, “I have no time to fix this”, or “If you pay me, I’ll fix it” are common opinions. But still, the openness and choice of tools is what enables Linux users to do so.
Just use the right tool for every task.
In another post, it has been mentioned that people don’t like choice (such as Linux distributions offer). But this kind of choice is essential in order to exercise freedom. There are Linux distributions that run well on a 150 MHz box, while others require 2 GHz to be basically usable. The choice among these distributions allows the user to find the tools that are the best solution for his tasks. The sad truth is… he’s just too lazy to do so.
“The choice among these distributions allows the user to find the tools that are the best solution for his tasks. The sad truth is… he’s just too lazy to do so.”
Who is lazy – Joe User, or the Dev that will not listen to their customers, actual and potential, about usability from the user perspective? How would developers feel to be told they are lazy in this way when they believe they spend a lot of time ‘doing the code’? The ‘who is lazy’ argument is therefore pretty lame, and it makes my blood boil to see it rolled out so facilely in defence of a noble and subtle thing like freedom of choice.
“This, I think, is a hint to their superior intellectual and moral education.”
What did Alexander do with the Gordian knot? He certainly didn’t try patiently to unpick something as deliberately complex and convoluted as that mass of misleading threads; he gave it what it deserved – he took out his sword and sliced it in two. I have tried enough Linux distributions to know that logic and even extrapolations from previous experience are not enough when it comes to any set of things whose rules of operation, no matter how well documented people consider them to be, are necessarily founded on arbitrary systems, internally referential in detail and operation in slightly different ways from distro to distro. Users do not lack logic or stamina; what they may suffer from at times is a sort of sickness at what appear to be 150-plus sets of moving goalposts (and yes, goalposts do generally come in just a few ‘standard’ sizes), and a lack of time – life is too short to have to, effectively, keep making reference to a large grammar book just to be able to communicate on everyday matters.
And this is from someone who is still trying with Linux, and has managed to persuade their IT department to leave aside some spare space so that they can dual boot.
“Who is lazy – Joe User, or the Dev that will not listen to their customers, actual and potential, about usability from the user perspective?”
Going by the meaning of “lazy”, it’s usually the so-called average user who’s lazy. He tries one Linux distribution (sometimes one too old, or one too new for his hardware), runs into a problem and abandons Linux once and for all. “Whose fault is it? The developers’.” is a common opinion then.
The developers need feedback from users who are willing to test software, who try out Linux distributions and who are educated enough to form their opinion into sentences. That way the developers can see where to add work, what to improve and how to adapt to certain requirements.
“How would developers feel to be told they are lazy in this way when they believe they spend a lot of time ‘doing the code’? The ‘who is lazy’ argument is therefore pretty lame, and it makes my blood boil to see it rolled out so facilely in defence of a noble and subtle thing like freedom of choice.”
In no way did I want to say anything against freedom of choice! NB: English is not my native language, so I may still improve in expression. So let me clarify: the work done by developers is important to the development of software. I would never call a developer lazy; I’m a developer myself. That some software does not conform to everybody’s requirements is not an argument either; you simply cannot fulfill every imaginable requirement. That’s impossible. I hope you understand.
“What did Alexander do with the Gordian knot? He certainly didn’t try patiently to unpick something as deliberately complex and convoluted as that mass of misleading threads; he gave it what it deserved – he took out his sword and sliced it in two.”
Wow, this is not a car analogy. :-) Of course it’s possible to solve a problem this way. It is a usual means in software development, and explicitly allowed by the various free licenses. But to come back to your analogy: what if everybody were lazy and just beat on whatever doesn’t work at once? Surely the situation would not improve, would it? Untangling knots is what developers do – not creating them. The knot is a certain problem, a requirement users have. Developers try to deliver a product that solves the problem.
“I have tried enough Linux distributions to know that logic and even extrapolations from previous experience are not enough when it comes to any set of things whose rules of operation, no matter how well documented people consider them to be, are necessarily founded on arbitrary systems, internally referential in detail and operation in slightly different ways from distro to distro. Users do not lack logic or stamina; what they may suffer from at times is a sort of sickness at what appear to be 150-plus sets of moving goalposts (and yes, goalposts do generally come in just a few ‘standard’ sizes), and a lack of time – life is too short to have to, effectively, keep making reference to a large grammar book just to be able to communicate on everyday matters.”
As I stated, the time argument applies in some situations (especially in commercial use of software), but that’s usually why software you can pay for exists. Unfortunately, there’s lots of software out there that is expensive yet still consumes your time to get working. Or there are paid employees whose job is to run the systems so that other people can work by simply using them.
Furthermore, there will always be situations where things won’t work out of the box. That is due to the lack of standards, in most cases.
With a bit of common sense, and the willingness and ability to read and to use common means of gaining knowledge, most people would be able to solve problems on their own. But according to your arguments and the analogy above, they’d better keep beating at the whole thing. :-)
Believe me, I really don’t want to insult anyone. But when it comes to stupidity, you’d be impressed what you can find in German living quarters that include a PC. Maybe it’s different in the US, where people do not suffer from functional illiteracy…
To add a personal experience: my uncle tried one Linux distribution on a completely messed-up PC, including a strange combination of defective hardware, and started complaining that nothing worked. Where does the expectation come from that such a contraption could be handled by a modern Linux OS? If the hardware does not work, how is the OS supposed to do any better?
To come back on topic again: Desktop Linux won’t be a matter of a kernel fork. If someone wants to try, surely, we’ll see the results. But personally, I don’t think this would improve Linux’s usage share because usage share is mostly a matter of standards, applications and GUI design – and does not primarily reside at kernel level.
Hi Doc,
I don’t make a habit of replying to replies, but since you have been so freundlich about it, I’ll try to pick up on some of the things you have said. It’s not my intention to insult anyone either, so I’d be grateful if you took the following points in that spirit. :-)
“But to come back to your analogy: what if everybody were lazy and just beat on whatever doesn’t work at once?”
I was really just trying to undermine your argument, which very much implied to me that, simply because someone refuses (for whatever reason, good or bad) to negotiate a seeming rat’s maze devised by someone else with no apparent reward at the end of it, they are lazy. Alexander was given a rat’s maze, and those who had tried to solve the problem according to the ‘rules’ had failed. Alexander twigged this and, on not getting a definitive reply to his question, ‘does it matter how I do it?’, solved the problem at a stroke. Alexander was not beating up on the knot, but on the assumptions behind how it was to be dealt with.
“The developers need feedback from users who are willing to test software, who try out Linux distributions and who are educated enough to form their opinion into sentences. That way the developers can see where to add work, what to improve and how to adapt to certain requirements… With a bit of common sense, and the willingness and ability to read and to use common means of gaining knowledge, most people would be able to solve problems on their own. But according to your arguments and the analogy above, they’d better keep beating at the whole thing. :-) Believe me, I really don’t want to insult anyone. But when it comes to stupidity, you’d be impressed what you can find in German living quarters that include a PC. Maybe it’s different in the US, where people do not suffer from functional illiteracy…”
You still can’t quite let go of the ‘four legs, good; two legs, better’ mantra (‘educated enough’, ‘comes to stupidity’, ‘a bit of common sense’). There are no brownie points (Pluspunkte), intellectually or otherwise, for being able to rub along with Linux. Until people who develop for Linux, speak for Linux, and want to encourage others to use Linux fully rid themselves of these assumptions, there will always be a terrible culture clash between apparently clueless users and ‘in-the-know’ devs.
“To come back on topic again: Desktop Linux won’t be a matter of a kernel fork. If someone wants to try, surely, we’ll see the results. But personally, I don’t think this would improve Linux’s usage share because usage share is mostly a matter of standards, applications and GUI design – and does not primarily reside at kernel level.”
I wasn’t aware that I had strayed off topic (maybe I was in the rough, and not on the green, but hey): in the main, I would agree with your assertion. I would only add that it’s not standards but standardization that’s key, and applications and GUI design stand or fall on usability criteria (so, ask users), and not primarily on technical elegance, sleight of hand, or subtlety ‘an sich’.
… And this is what – to do with Con Kolivas departure from the Linux kernel (or actually scheduler) development?
/-1 for being completely OT.
– Gilboa
That says something:
– Linux doesn’t suit your needs, or you can’t adapt enough to a different OS,
– you think a typical computer user is like you, or you just generalize too easily,
– you say some of us aren’t typical computer users, which might be true to an extent, but I don’t see a problem with an OS suiting our needs, and I don’t see how that would nullify a “Linux works well enough” clause,
– you don’t have a clue what Kolivas was contributing.
You’d be surprised – I presume – how many people “leave” the Linux field eventually and at the same time how many people join.
I just think you – if you even know what this topic is about – attribute too much importance to the guy. Again, I do _not_ want to devalue his work; I’m just saying his leaving won’t stop the world from spinning.
True, the Linux desktop is not nearly as intuitive as something like OSX or even Windows, however that has nothing to do with the scheduling changes Con Kolivas was trying to make in the kernel. Unless he started coding a new desktop environment that I wasn’t aware of.
To be frank, kernels matter to the desktop about as much as the brand of gas matters to your car. What Linux needs are more user-facing things – and it has come a long way in this department. Today, most things can be configured graphically, it has many great apps, and the desktop environments work well. However, there are still gaps here.
The kernel is the least of desktop Linux’s problems. Sure, it might be important to get MP3s to play a little better – they can skip on Linux due to scheduling – and there are other kernel-level things that do impact desktop performance, but it’s the interface, not the performance, that is keeping people away. These improvements should be on the kernel team’s radar, but they aren’t what is keeping desktop Linux from hitting prime time.
Agreed. Mac OS is built on Unix, and it is easier than most if not all Linux OSes. I’m sure that if you looked at the Windows kernel, it wouldn’t be easy to figure out either. The main problem is installation: we need a one-click install system like Windows has. Heck, PC-BSD has one (.PBI). The main reason I’m still with OpenSUSE 10 is that I couldn’t get 1.3 to work; it had the hardware detection problem.
Edited 2007-09-19 15:47
The performance of the kernel affects the performance of the whole system.
So… one guy leaves kernel development and suddenly desktop Linux is a lost cause?
That doesn’t make much sense at all. Unless the entire desktop is really one giant kernel!
Con Kolivas spent his time trying to improve Linux on the desktop.
Con Kolivas’s ideas were rejected by Torvalds & Co until the point where he stopped developing.
Therefore, Linus Torvalds must hate Linux on the desktop.
It’s a fallacy of generalizing ‘some’ to ‘all’.
Right on the spot. So why his leaving would cause Linux to die is beyond me.
That’s GPL!
Forking is stupid. I much prefer Linus’s idea: having one scheduler that works great on both servers and desktops is better, IMHO. Or if that’s not possible (which I doubt), why not just add another scheduler for the desktop, like there are different preemption options for server/desktop?
Forking the kernel will just cause complexity, user confusion and a lot of unneeded code duplication.
Edited 2007-09-18 20:43
“Forking the kernel will just cause complexity, user confusion and a lot of unneeded code duplication.”
I *don’t* think it’s practical to fork the kernel without a *major* revolt… and that is unlikely to happen.
Although minor *forks* with Con’s patches have been available for years. Forks of the kernel exist for other things too, like embedded hardware, and that’s apart from what you describe above.
The reality is: *what’s the point*? Con gave up because a patch similar to his was included.
Con was pissed off because his patches weren’t included, and I don’t see anything wrong with that; he is free to do whatever he wants. Linus trusts Ingo more, and Ingo made a great scheduler that doesn’t penalize performance in other areas like servers while improving desktop performance as well.
Some people have to understand that Linus wants a good scheduler that works great on servers and desktops without sacrificing performance in either of them, and I think this is a better decision than forking the kernel.
Edited 2007-09-18 21:46
What really should have happened was runtime-selectable, pluggable schedulers. The default can be as neutral as Linus likes, and you can have a ‘desktop’ and a ‘server’ scheduler without harming things. This idea (supported by Con) was rejected a few years ago, but it has gradually appeared and now more or less exists.
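For what it’s worth, mainline never merged fully pluggable schedulers, but per-task scheduling *policies* are selectable at runtime, which is the closest thing that actually exists. A minimal Linux-only sketch using Python’s `os` module (the `SCHED_BATCH` and `SCHED_IDLE` policies are real; the helper names here are made up for illustration – `SCHED_IDLE` arrived in 2.6.23, the same release as CFS):

```python
import os

def set_background(pid=0):
    """Mark a task (0 = the calling process) as non-interactive
    batch work; no special privileges needed to lower yourself."""
    os.sched_setscheduler(pid, os.SCHED_BATCH, os.sched_param(0))

def current_policy(pid=0):
    """Human-readable name of a task's scheduling policy."""
    names = {os.SCHED_OTHER: "other",
             os.SCHED_BATCH: "batch",
             os.SCHED_IDLE: "idle"}
    return names.get(os.sched_getscheduler(pid), "realtime")

if __name__ == "__main__":
    set_background()
    print(current_policy())  # "batch"
```

Not a ‘desktop scheduler’ and a ‘server scheduler’ side by side, but it does let userspace tell the one scheduler which tasks should lose interactivity fights.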
Con’s scheduler had small regressions in certain *theoretical* server loads. These were so small as to be insignificant next to the gains made in other (including server) areas. And maybe they could have been dealt with… if things had been allowed to proceed.
“Fairly allocating processor time when several processes want 100% of the CPU” is not the same as “good interactivity.” The former is what CFS does; the latter is what Con wanted. CFS is superior to all previous schedulers by that benchmark, and that’s good, but it doesn’t mean everything. Apparently it’s hard to build a benchmark which accurately tests interactivity. Con’s scheduler will still *feel* better to desktop users than CFS, and that’s not likely to change.
At least, this is my take on the matter from what I have read on the LKML and elsewhere. I don’t use either scheduler.
His departure does not change the Linux desktop situation one iota. Interactivity is less likely to improve soon, now that no one is looking at it, but that is unlikely to be a key decision point for any user.
..if you want snappy nice desktops, you need to stop using bloated crap like KDE, Gnome, Openoffice, Firefox etc.
Granted, there are no real alternatives to OOo or firefox right now. but kernel latency is certainly not the problem.
Well, of COURSE your computer will run faster if you don’t run software on it.
But then, why HAVE a computer, as you so rightly point out?
Yeah, ’cause a video not skipping when nothing else is being done, and skipping when I load a page in Firefox, is really the fault of “bloated crap like KDE [and so on].” In fact, I was using Fluxbox and Eterm the last time I recall that happening (unfortunately, that’s a clue that I need to install Linux on my main PC again, which I’ve been lazy about).
That bloated crap should not be allowed by the kernel to hog the system enough to cause such things. Firefox should be allowed to be as bloated as its devs want (and it is!), but without affecting any other task’s ability to get some lesser amount of CPU time at regular intervals.
Though, I do think the CK thing is generally way overblown. Linus & Co. are thinking along scientific lines… sometimes it takes a while for such ways to catch up to the accuracy of gut emotional responses and feelings on issues like The Snappy (trouble is, gut feelings lack precision, even when they’re accurate).
Con Kolivas is far less significant than he and his fans think he is. The guy had some good ideas, but lacked the discipline and ability to work with others to get them accepted, and it wasn’t because Linus hates the desktop.
To understand why the Linux desktop is alive and well, all you have to do is look at what Dell and Ubuntu are doing. Dell even put out a remastered and fixed Ubuntu to work better on their laptops.
Sounds like it’s a battle of egos… like little children playing a René Girard “scapegoating” game.
I’ve been using Linux happily on the desktop for over 5 years now, and my MP3s and movies play fine, thank you very much.
This is a non-event. Media hype. But…of course, there are always improvements that can be made!
I don’t see a fork coming….anytime soon….
People don’t listen to anything from Infoworld with regards to Windows… much less Linux AND much less a blog post. While I suppose the blog article could be humorous, it’s not much else.
The kernel is driven by money. Lots of it, from companies building their products. I’d love the kernel to be built by people like Con, but Linus actively campaigns against that. It has little to nothing to do with the desktop. That’s not to say there isn’t an awful lot of mutual benefit. Nor should it be forgotten that the machines that have started to appear on our desks are ones that, not so long ago, represented an investment of millions by large corporations.
I get the feeling complaints like this – and it’s a repeated one – are just *too* late. Linus’s pragmatic mistake over binary drivers looks like it’s being resolved. Ubuntu took the desktop with little to no effort… and suddenly Red Hat is interested again.
The bottom line, though, is that unless a company can make *money* from the desktop, development is not going to be driven that way. Linus has all but driven away those that work on the kernel for either ideological or political reasons.
…but what can you say: the kernel keeps churning out quality code every 2.5 months, *everyone* benefits, and it’s driven by Linus, successfully.
You pretty much hit the nail on the head. Anyone who thinks big business doesn’t drive the direction the Linux kernel goes in, and that big business won’t get its way when it comes to features and performance benefiting them over anyone else (i.e. the ordinary user), is a fool. The vast majority of kernel features over the past three or so years have not really been directed at the ordinary user, but at business.
Dave
Well it certainly explains why so many people are talking about how a “few years ago I was able to….” and simply don’t realize things have been steadily optimizing away from the desktop and reorienting for better server performance.
Money talks.
–bornagainpenguin
Yes, money talks. I mean, big business pays the wages for the lead Linux kernel developers, so I guess, from their perspective, that it’s only fair that they dictate where the development direction goes.
This is why I keep saying that the Linux kernel is no longer the ‘people’s kernel’, but is for big business. This is why I question the lead Linux kernel developer’s honesty in regards to the developmental direction of the kernel, and what really drives it.
As soon as it became obvious that the Linux kernel was able to outperform both traditional UNIX flavours, and the Windows server releases, it became the darling of the big business IT sector, who wanted to mould it to suit its needs.
In all honesty, this is against the spirit of the GPL, which is, generally speaking, that improvements to the code are returned to the community. Let me pose this question: if the improvements/features being introduced benefit big business on the server side of things, but not the desktop user, is that a fair and equitable return of improvements to the entire community? I would say not…
Of course, proponents of the modern development process would argue that without wages being paid by big business to the lead Linux kernel developers, they would not be able to afford to continue development of the kernel. This is baloney – Linus and co. were able to develop the Linux kernel for quite some time before corporate interest invaded the kernel development process.
What we are seeing is a fast takeup of anti GPL ideals. The modern Linux user is no longer interested in free software, as per the FSF ideals, but only in the fact that they’re not paying for the operating system. Because of this, they are greedy, and they don’t consider anything else.
Dave
“What we are seeing is a fast takeup of anti GPL ideals. The modern Linux user is no longer interested in free software, as per the FSF ideals”
It’s gone full circle then, since the first Linux user (Linus) has no interest in FSF’s ideals either.
You could say that, yes, although I think many of the early contributors were probably FSF proponents. They saw the Linux kernel moving faster than the GNU Hurd and jumped ship to the faster/better project (which is quite understandable).
The GNU Hurd will never really take off, because no one wants to work on it. Why work on the Hurd when you have the Linux kernel doing a relatively good job? We have the open source methodology of why reinvent the wheel coming into play here, and that’s not necessarily a bad thing.
It’s a pity, because if the GNU Hurd could become workable and stable, I suspect it would be reasonably easy to get a lot of the rest of the stuff associated with a fully working operating system going OK. Hardware support would probably be dodgy, though, especially since most open source drivers are GPL v2, and most of those developers would not give permission for said drivers to be forked and relicensed under GPL v3, etc.
RMS has had great foresight, and I personally believe his vision is on the money yet again. The problem is that most people are happy with Linux as it is now, and they’ve accepted the commercial interference with it [the kernel] as a necessary evil. The lead developers are happy with it all, after all they get paid to work on what they love doing. The problem is that big businesses’s interests are given higher priority and part of that problem is that most big business can tolerate GPL v2 (and work around the loopholes, destroying the spirit of the GPL IMHO), but cannot tolerate GPL v3, because it closes up said loopholes, and forces them to commit to the community. Big business has never been about community, it’s been about making money. You don’t make money by caring about the community, you make money by screwing the community. Period.
Dave
What the heck am I using then? :O I’ve been using Gentoo for years now, and I was sure I was using it as my desktop OS.
As for the article itself…well, why would a kernel make such an impact? I’ve asked it before and I ask it now: WHAT IS THERE THAT WOULD HAVE TO BE ADDED TO OR MODIFIED IN THE LINUX KERNEL SO IT WOULD BE “DESKTOP” COMPATIBLE?? I STILL haven’t gotten any straight answers to that question! It is still all the frameworks, DEs and apps that need the most work, not the kernel!
PS. Con might be an excellent coder and all…but the world doesn’t revolve around his navel.
EDIT: Thought I’d mention one thing: I use Linux successfully even on my trusty P3 866 MHz with 128 MB RAM and don’t get stuttering sound or jerky video playback unless I’m seriously utilizing the HDDs. But then again, isn’t it quite obvious that it should do that? At least I haven’t met any “normal user” who would be playing music or video while doing something very demanding, or if they did, they at least wouldn’t wonder why the playback stutters somewhat…
Edited 2007-09-18 21:04
WHAT IS THERE THAT WOULD HAVE TO BE ADDED TO OR MODIFIED IN THE LINUX KERNEL SO IT WOULD BE “DESKTOP” COMPATIBLE?? I STILL haven’t gotten any straight answers to that question!
A stable driver API. This is not going to happen and would be bad for Freedom in the long run, but it would be good for the desktop in the short run.
I use Linux successfully even on my trusty P3 866mhz 128mb RAM and don’t get stuttering sound or jerky video playback unless I’m seriously utilizing the hdds. But then again, isn’t it quite obvious that it should do that?
Try BeOS on the same machine. You *will* see a difference, a very *big* difference. In the difference lies the problem: Linux can be smooth, but if you have high disk I/O and high CPU usage, things will stutter. That’s currently a fact, a problem, and a shame. Maybe some day someone will fix it. If you listen to folks who have tried Con’s scheduler, of whom I am not one, they say he already fixed it.
“A stable driver API. This is not going to happen and would be bad for Freedom in the long run, but it would be good for the desktop in the short run.”
http://www.osnews.com/story.php/18304/Linux-Kernel-2.6.23-To-Have-S…
Edited 2007-09-19 01:23
A “userspace” driver API is nothing at all like a proper stable kernel driver API & ABI.
I think he has a point about Con Kolivas not being regarded as he deserved.
Check “http://kerneltrap.org/Linux/Additional_CFS_Benchmarks“. After all the help and collaboration SD still has an edge over CFS.
And Ingo is now involved in another contention, this time with Roman Zippel, again about not giving proportional credit/cooperation to someone else’s ideas. When this is discussed on kerneltrap, all you see is Ingo supporters saying “you can help him”, but the opposite is not happening, i.e., Ingo helping someone else’s work. I know Ingo is probably very busy and perhaps doesn’t have enough time to look at code/ideas from others, but it is not fair to accuse others of what you do yourself. Ingo is, for sure, an impressive kernel developer, but he should improve his social skills in this particular area (of course, not only him); he should not downplay the efforts of other developers. This kind of dispute over kernel features has also impacted BSD development, and I don’t think a fork is the better path to follow.
I hope all these quarrels will, at least, improve the social skills of everyone involved and the Linux kernel development process as a whole.
They just need a non-programmer manager.
Ingo acts as both a player and a referee here; that’s the cause of the unhappiness.
Anyway, because Linus favors Ingo, Ingo wins. Period.
A great headline with no traction..
Everyone is replaceable. If you’ve been in IT for more than a couple of years you know this fact well….
I guess the Google ads are worth the headline for the poster…
Actually, kerneltrap is a bit misleading here.
If you follow both Con’s and Roman’s threads in the LKML you get a different picture.
At least in Roman’s case, the man seems to be overly-agitated and tends to make things personal (bad-idea) while Ingo’s responses are always clean and to the point.
Granted, I have no idea if they had any kind of email exchange that started the fire… but as it stands, I can’t really blame Ingo for having bad social skills.
– Gilboa
Ingo has certain qualities that make him a valuable player:
1. He’s reliable. He’s been doing excellent kernel work for years and years. Has suffered defeats… and just keeps coming back, doing good work, and continuing to maintain those pieces of code he agreed to maintain. When he suffers a defeat, he trudges on. No storming off in a huff.
2. He has a proven track record. Linux’s Unified Page Cache, which we all take for granted today, was an Ingo initiative. The low latency patches. Facilitating multithreading in our filesystems.
3. He works well with others. Others can hurl epithets at him, and he responds in a reasoned and equanimous fashion.
If Linux were a company, Ingo would be just the kind of employee worth keeping, whereas Con was more along the lines of the volatile, trouble-making employee who overvalues his own importance.
So you mean at the end of the day a politician wins…
Somehow your post makes me sad; I don’t know why, but it does.
Edited 2007-09-19 07:57
No, someone who writes good code, has a good track record, can behave socially and isn’t a trouble making primadonna wins.
Actually, “winning” isn’t the right term but anyway.
Edited 2007-09-19 08:00
At the end of the day, the mature, responsible developer, who acted in a diplomatic fashion, “won”. I see nothing sad in that.
Edit: Soulbender… why are we responding to CrazyDude1? :)
Edited 2007-09-19 08:06
“Soulbender… why are we responding to CrazyDude1? :) ”
We’re suckers for pain.
The sad part is that Ingo is not an excellent developer. He is just good. But now his social skills are being used as the criterion for merging his work, rather than his engineering skills.
Linus himself wasn’t very sociable in his early days, and Theo, for example, is not sociable either, but because they have been excellent engineers they produced good products.
It is your engineering ego that makes you perform at your best; once you get diplomatic, quality is compromised.
This is what happens in big corporations, where any asshole who can suck up to his bosses climbs the ladder.
Edited 2007-09-20 16:16
Just when desktop Linux is picking up speed, this Random Kennedy guy declares it dead, because one of the part-time developers quit his hobby? And then in his blog entry, Kennedy concludes that “as the Redmond behemoth has shown, you can have your cake and eat it too.”
I have serious doubts that he has any clue what he is blogging about. And this – among other things – could be one of the reasons why Infoworld had to discontinue its print edition (and lost overall print coverage as well as significance within the IDG group of companies).
Haven’t a lot of CK’s claims been responded to by the rest of the kernel team? And isn’t this article basically taking CK’s original rant and expounding on it?
This again – I guess – is a news discussion item where the content does not matter, but ensuring a lengthy, flamey “discussion” on here does.
Let’s dig up the past & have a flame fest again, because there isn’t anything else out there.
Who is the one modding down valid replies…
With the openness of the whole development also come people with half a clue (especially with increasing Linux users) who spit out their frustration du jour via a blog.
Fact is, part of Con’s work has been integrated into CFS.
He has been credited for his influence on CFS & AFAIK he did not leave because of dev issues.
Yes – maybe things are not going as smoothly as possible – & with maybe two handfuls of subsystem maintainers & more new developers, this potential friction IMO will probably increase.
With bigger Linux use also come new devs onto lkml – some also seem to appear just to create a stir, like the recent discussions between Roman Zippel & Ingo Molnar & some others…
People complain that Windows is “far” better at whatever they want than Linux & find something or someone to blame.
The open development also means that nasty hacks which would be unfair or unworkable on embedded or big SMP systems cannot be sneaked in, unlike in Windows with its closed development.
How many differences there are between server, HPC, desktop, embedded & mobile Windows kernels I do not know – but the mainline Linux kernel has to try to be fair to all the segments it targets.
Microsoft can probably afford to have bigger differences between its different WinOS versions.
Additionally, having to design for varied hardware in the mainline kernel means having to come up with more flexible designs; this often results in more work, but also a better kernel thanks to this necessary flexibility.
In addition to mainline there are all the other specialized development lists, of which Con’s was one – these dev groups can cater to the specialized requirements there might be in embedded, desktop, etc. via patches – which might not go into mainline but which might be picked up by distros & so reach the end user.
So it probably makes more sense to complain to the distro than to flame in public.
CFS is still being improved constantly & also compared against CK & previous mainline kernels.
If people took such an interest in moaning about X (which is probably as important, if not more so, for the desktop experience) & desktop environments – we could have a whole “news” site of grumpy bloggings on why Linux is failing – fortunately people aren’t that interested in those faults yet.
Hooray for the awsum Linux kernel!!
I think it’s funny the author of the crapticle points to Windows NT split of Workstation and Server, considering there was no source difference between the two versions. Even differences between Win2K Pro and Server are small (or XP 64-bit vs. Win2K3 Server).
Tells me the author knows nothing.
I think it’s funny the author of the crapticle points to Windows NT split of Workstation and Server, considering there was no source difference between the two versions.
Strictly speaking, there were two registry keys that were locked, and uneditable, even by Administrator. Those two keys told the kernel to behave differently (not so much performance differences, but numbers of simultaneous connections).
I *do* think more attention needs to be paid to “desktop” use vs. “server” use. I don’t need sustained gigabit throughput on my desktop. It would be nice if a 2.4 ghz CPU didn’t occasionally stutter playing video/audio, though.
A multiplexed audio stream with process-based volume control would also be nice, but that’s not explicitly a kernel function.
There should be (and may be, for all I know) knobs that can be fiddled with to alter the performance of the scheduler, to bias it towards desktop responsiveness or server throughput. A fork is totally unnecessary.
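There are, in fact, knobs along these lines: the newly merged CFS scheduler exposes tunables under /proc/sys/kernel/. A minimal sketch, assuming a 2.6.23-era kernel; the tunable names and sensible values vary by kernel version, so treat both as illustrative only:

```
# /etc/sysctl.conf fragment -- illustrative values only; tunable
# names differ across kernel versions.
# A shorter scheduling period favors interactive (desktop) response;
# a longer one favors throughput on server workloads.
kernel.sched_latency_ns = 10000000
kernel.sched_min_granularity_ns = 1000000
```

Such a fragment can be loaded with `sysctl -p`, or the same values poked on a live system with `sysctl -w`.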
A multiplexed audio stream with process-based volume control would also be nice, but that’s not explicitly a kernel function.
You can have this now with PulseAudio and more. You can have it lower the volume of applications that aren’t in the foreground, if you so choose. Fedora 8 (the development version) just made PulseAudio the default, replacing the venerable esound.
http://www.pulseaudio.org/
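To illustrate, here is a rough sketch of per-application volume control, not a definitive recipe: it assumes a running PulseAudio daemon, the stream index is a placeholder, and the exact subcommands vary by PulseAudio version (older releases expose the same operations through `pacmd` instead of `pactl`):

```shell
# List everything PulseAudio knows about; each application's playback
# stream shows up as a "sink input" with its own index.
pactl list

# Set one application's stream to half volume without touching the
# others. The index 0 is a placeholder taken from the listing above;
# 65536 is PA_VOLUME_NORM (100%), so 32768 is roughly 50%.
pactl set-sink-input-volume 0 32768
```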
“I *do* think more attention needs to be paid to “desktop” use vs. “server” use.”
This is correct, but the difference does not primarily exist at kernel level. Furthermore, many desktop PCs are used as servers today, or, they offer server functionalities (such as file sharing services). These “mixed forms” need to be addressed, too. There could be some kernel parameters that have impact on performance issues designated to a special use, but I think such mechanisms already exist in the Linux kernel. (I am not using Linux on a daily basis, so forgive me if I’m just guessing.)
“I don’t need sustained gigabit throughput on my desktop.”
But users will cry out if they don’t get it, because the Internet is too slow. :)
“It would be nice if a 2.4 ghz CPU didn’t occasionally stutter playing video/audio, though.”
On 2.4 GHz? You must be joking! A P1 at 150 MHz with 64 MB RAM played videos and MP3s while compiling the kernel, downloading something via wget and burning a CD-R (at 4x speed), many years ago. That was a FreeBSD 4.x system. I cannot imagine Linux performs worse on up-to-date hardware… although “modern” software tends to increase hardware requirements in order to deliver the same basic functionality at the same quality as older software did on older hardware… but hey, the differences between BSD and Linux cannot be that big!
I get desktop stutters and little X freezes all the time … while compiling Gentoo updates in the background at nice 15. Yes, even with kernel 2.6.23-rc4-mm1.
It’s a hard problem to solve though. What happens is the disk write queue fills up. No matter how big I make the queue, my poor laptop drive cannot write it out fast enough. Then, when system RAM is full, and X needs a bit more memory, the kernel does a reclaim. That means flushing dirty data out to disk … to a full queue. So, all of X stops while we wait on the poor little disk.
Things seem to work well in Linux when I’m not compiling though.
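The writeback behaviour described above can be tamed somewhat with the VM writeback tunables. A hedged sketch (the values are illustrative, defaults differ across kernel versions, and writing the settings needs root):

```shell
# Inspect the current writeback thresholds: the percentage of RAM
# that may be dirty before background and then blocking writeback
# kick in.
cat /proc/sys/vm/dirty_background_ratio
cat /proc/sys/vm/dirty_ratio

# Lowering them (as root) makes the kernel flush earlier and in
# smaller batches, which can smooth out stalls during big writes:
#   sysctl -w vm.dirty_background_ratio=5
#   sysctl -w vm.dirty_ratio=10
```

Running the compile under `ionice -c3` (the idle I/O class, which needs the CFQ I/O scheduler) is another way to keep a background build from starving interactive reads.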
Well, try that on the same hardware with Windows; I doubt it will be better… If you have a slow hard drive and not enough RAM, there is a limit to what an OS can do. And again, this is not a problem that was addressed by Con’s patches.
linux is dying.
Linux is more alive than ever.
One of the strengths of Open Source is the ability to fork. Let them fork. If their version is good enough, people / distributions will adopt it. The changes will likely be merged back into mainline Linux if this was the case.
However… the reason the kernel hasn’t been truly forked yet is that the maintenance is just *way* too much for one person or even one company. There is also the technical issue of thousands of people testing and developing the kernel.org kernel. This won’t happen for a new fork, and soon enough the two would be very different.
Short of licensing issues (which can’t happen in this case), both “forks” benefit from each other’s changes. Forking isn’t always a bad thing, but in this case it won’t succeed.
“One of the strengths of Open Source is the ability to fork. Let them fork.”
Very few would have the capacity to fork and maintain something as large as the kernel, even if it were sensible.
The only thing that would cause that would be a major discrepancy, like that of X.org. Linux development is actually progressing relatively trouble-free.
The only alternative I can see is that Sun offers a more open kernel, but I personally am happier with the situation of many companies controlling the kernel rather than just one.
Oh dear. This tired old subject again. I agree with you, Cyclops. However, I’m beginning to wonder if the best thing would not be for someone to try to fork as described. I would give it about the same expected lifespan as GoneMe. This would serve the valuable purpose of clearly, and quickly, demonstrating what a bad idea it is, so that we can get on with business as usual and not have to argue this topic every other week. :)
It would open up huge opportunities for Linux detractors to generate FUD, though. Which would be a big negative.
Note that my post was entirely tongue in cheek. Of course anyone who tried to seriously fork the kernel would fail; it is too big.
GoneMe was a perfect example.
The only thing that was disconcerting was Linus saying that Ingo’s CFS was merged into the kernel because he feels more comfortable working with Ingo and has known him for years.
Wow, whatever happened to the “survival of the fittest” of OSS?
Really, survival of the fittest was one of the reasons I always loved OSS, but it seems the same bureaucratic crap is seeping into the OSS world as well.
You want to know what the real problem with the kernel is? Take a look at this: http://kerneltrap.org/Linux/2.6.23-rc6-mm1_This_Just_Isnt_Working_A…
I am amazed at how fast the kernel keeps churning out new features. It rarely seems like they take the time to stop and improve the quality of the code. I don’t think that they should stop development of new features, as there are always people who must have the latest and greatest. But what about people like myself, who want stability? I guess there is no real reason for the kernel development team to slow down. Most distributions release at a frantic pace as well.
Is this really that big of a deal? From the looks of it you are still free to use that scheduler if you have the technical ability. There have been many instances where the more technically superior option was passed over, yet it somehow made a comeback later on.
Maybe some time away from the kernel devs will allow him to regroup and try a different approach; after all, he does have a good number of people who realize the potential. Others are still free to develop it as they have the urge, without having to fork, aren’t they? That’s what branches can be used for, right? (Correct my understanding if I’m wrong.)
Plus, as much as it would have potentially improved the desktop experience, its absence doesn’t seem to have that much of an impact, does it? Where I think there should be some more focus is on cutting down some of the bloat that has been creeping into some of the desktop apps (Firefox, Evolution, OOo… the amount of time it takes for gnome-terminal to start).
When the time is right and people are ready they will do what is right. I hope anyway.
I just came here from reading about this same article over on Slashdot. The responses are telling, with the majority of programmers and Linux powerusers saying one thing and the rest of us saying another. And the division between the two concerns me for the future…
The Linux powerusers and programmers are mostly running the latest and greatest hardware to be had and see no problems with things as they are. Many of the powerusers advise people to just recompile their own kernels and arrogantly declare people shouldn’t worry because the high end hardware will filter down to the home user soon enough so why change?
The rest of us– many of whom have never compiled a kernel in their lives –look on in bafflement and ponder if they’d be better off to just walk away…
What are we supposed to do when it becomes more and more clear this whole thing is more about who you know and who the big bucks are coming from than which patches are more efficient or if it performs better for the desktop?
I just don’t know any more.
–bornagainpenguin
It has always been like this and always will be. There’s nothing shocking or new here.
Who’d you rather work with? Someone you know who writes good code or some random person who you have no past experience with and who may be gone tomorrow?
Because Joe User really cares about low-latency kernels and not at all about managing his photos, writing letters, playing games or surfing the web.
No really, a couple of milliseconds gained in scheduling is going to make all the difference…
Who’d you rather work with? Someone you know who writes good code or some random person who you have no past experience with and who may be gone tomorrow?
As far as I can tell by all accounts Con Kolivas only left when it became clear Linus had chosen to go with the implementation Ingo Molnar tossed together. Never mind that Con had been working on patch sets, submitting bug reports, providing benchmarks, etc… You can see a similar thing going on with Roman Zippel’s patches–only worse because Ingo apparently just made changes without explaining the thought processes behind them.
Linus is certainly free to work with whomever he feels comfortable with, but the ad hominem and after-the-fact attacks on Kolivas’ decision to leave, when it was abundantly clear he couldn’t get a fair reception, do nothing but reinforce the perception that ‘Linux is still controlled by a small group of elitist “prigs.”’ “Insiders” like Ingo are able to bypass documenting or explaining their decision-making process. Like I said in my first post, it becomes more about WHO you know and not WHAT you know… and I thought avoiding this type of thing was one of the key benefits of OSS…
Because Joe User really cares about low-latency kernels and not at all about managing his photos, writing letters, playing games or surfing the web.
No really, a couple of milliseconds gained in scheduling is going to make all the difference…
YES HE DOES CARE!
Exactly, Joe User is going to very much notice that latency when he clicks on a menu and watches the disk thrash around while redrawing the menu several times on the way to starting whatever program he was trying to launch. Or when the MP3 he’s listening to stutters while opening a new web page in a tab. Or while waiting for what he’d just typed to display on the screen after pressing the keys.
Joe User may not know whether it has anything to do with his kernel (or, more on topic, with the scheduler used in the kernel), but he will notice any delay between an action on his part and the time it takes for the desktop to respond.
And he’s not going to be thinking about how wonderful Linux is on the server, or how maybe he might get better performance with another gigabyte of RAM, or even that he maybe should try to do a custom kernel…
No, all he’s going to do is decide ‘this Linux thang isn’t quite ready yet’ and go back to Windows or to Mac OSX.
–bornagainpenguin
Con himself said that wasn’t the reason.
“YES HE DOES CARE!”
No, sorry, but he doesn’t. He cares more about having the applications he needs.
You know what, that actually calms me.
Well, if you know that those “insiders” are not just random people, then “insider” actually isn’t a negative term.
Well, if you know that those “insiders” are not just random people, then “insider” actually isn’t a negative term.
It is when you are unable to be heard, yet IBM or some other big-iron corporation is able to waltz right in and get whatever they want. How long have people been asking for some loving on the Desktop now, and how much attention has been paid to the enterprise?
–bornagainpenguin
Isn’t the current kernel sufficient for the desktop?
What is flawed in the current kernel design?
“How long have people been asking for some loving on the Desktop now”
Seriously, what desktop problems do you have that can be solved in the kernel and that isn’t because of X or application design or some such?
Exactly, Joe User is going to very much notice that latency when he clicks on a menu and watches the disk thrash around while redrawing the menu several times on the way to starting whatever program he was trying to launch. Or when the MP3 he’s listening to stutters while opening a new web page in a tab. Or while waiting for what he’d just typed to display on the screen after pressing the keys.
Just out of curiosity, on which system and distro does that happen to you? Such a thing doesn’t happen even on my ancient P3 450 MHz :O I’m not saying it’s not possible, I’m just wondering… I’ve got 6 PCs and nothing like that happens on any of them.
All those have almost nothing to do with Con’s work, and anyway, if the scheduler were a problem for Joe User, how come Windows, which has a horrible scheduler, is so widely used? There are some things missing on Linux for many so-called average users; the scheduler is certainly not one of them. FWIW, I have not had any stutter problems on Linux for years, which I cannot say for Windows. If you copy one big file on a hard disk, Windows becomes slow as hell and cannot do anything else. And come on, on my Mac mini, the Mac OS X spinning ball keeps appearing all the time. Those are really details.
Things like making dual/triple screen setups work automatically in X, and more generally auto-detection of hardware by X, and sleep/hibernate working: those are examples of things that are much more important for the desktop than the scheduler right now.
if the scheduler were a problem for Joe User, how come Windows, which has a horrible scheduler, is so widely used?
And if we want them to consider using Linux we need to have something better to offer than this, don’t we?
auto-detection of hardware by X, and sleep/hibernate working: those are examples of things that are much more important for the desktop than the scheduler right now.
Those things are definitely important, but given that work is already being done on the scheduler now, shouldn’t we get this right the first time? Given how difficult it was to get all the developers running the latest and greatest, or focused on big iron, to even concede there was an issue in the first place, who knows how long it will be before anyone looks into it again?
–bornagainpenguin
But the scheduler IS, TODAY, better than those of other OSes used on the desktop. When I hear people saying that Linux is too focused on servers and nothing is done for the desktop, this is just hot air based on absolutely nothing. For example, the 2.4 series had some latency problems (which are the root of audio stutter, or at least one core reason for it), and these are much better in 2.6. On recent 2.6 kernels, you get average and worst-case latency better than Windows and on par with Mac OS X for audio users (for whom latency is really important); this is even better if you are willing to use some patches (the RT patches).
Compiling many files causes stutter for some applications? It is mainly the application’s fault, I think, on recent kernels. And anyway, I get lags and stuttering audio in those kinds of situations on Mac OS X too, on my MacBook. Actually, I feel (though I’ve never measured anything in any objective way) that Mac OS X lags all the time compared to Ubuntu on my MacBook.
I think people like you have been a bit… off road for a long time. I mean, you speak of Linux as if it were the only choice for an OS one could use, and you feel, like, lost and betrayed that things don’t go your way. It’s not like you have nowhere else to go from an OS that doesn’t suit your needs.
I know this is a line of thought that will – or at least could – provoke a lot of people, especially what you here call “typical users”, but whatever.
When you speak of powerusers’ arrogance… Well, I could write pages about why some people think that the behavior and opinions of higher-skilled people focused on a narrower segment [of whatever] amount to arrogance. If you as a user feel that your specific needs are neglected, well, think twice about what we have here, what Linux is, how Linux development works and what the main reasons for that are, and you just might realise some day that it’s not powerusers’ arrogance that doesn’t give you your GUI heaven in Linux.
I think people like you have been a bit… off road for a long time. I mean, you speak of Linux as if it were the only choice for an OS one could use, and you feel, like, lost and betrayed that things don’t go your way.
“Go [my] way?”
It isn’t about having my way; it’s about providing the best experience for desktop users in addition to server administrators.
It’s not like you have nowhere else to go from an OS that doesn’t suit your needs.
Hmmm… let’s see, on the one hand we have Vista…
Oh sure I can do what so many other people are doing and stick with Windows XP, but we all know it is only a matter of time before Microsoft turns off the ability to activate that OS and requires users to ‘upgrade’ to Vista. Unless I want to break the law and use the corporate copy floating around I’ll be sunk, as will so many others.
Mac OS X is nice, but you’re required to buy your hardware with the OS, and not everyone can afford to buy a new machine just to change OSes. Not to mention the inevitable cost of switching out all your software for new versions that work on your new OS. Sure, there are always OSX86 Hackintoshes, but that would require me to break the law…
Haiku? Its not ready yet.
BeOS? It’s stillborn as far as I’m concerned these days, due to its lack of overlay support (and with the demise of Be as a company, there will never be support for this), not to mention various other little things like hardware support, applications, etc…
SkyOS? You mean the beta operating system you have to buy just to try? Next!
*BSD? I thought BSD was dying… No, seriously, all jokes aside, the *BSDs don’t have nearly half the support from companies that Linux does, so realistically how much of an option are they?
ReactOS? Not done yet, and still facing the specter of a lawsuit of the type that delayed the *BSDs so long that Linux was eventually able to take their place.
Linux is where the momentum is, Linux is (currently) the best hope for the desktop…IF the developers are willing to give the desktop some more support…
–bornagainpenguin
http://www.mediamax.com/cunninglinguist/Hosted/Chuck33.jpg
I think most of you have missed the point that Con Kolivas is actually not a developer, and the little code he has written is OK at best. He used to offer ideas and create patches that inspired others to develop, and that was great. But just because he is gone doesn’t change a thing. Linux has always been desktop-oriented, and while companies do push their own agendas, without their monetary support Linux would be nowhere. The problem with Linux adoption has very little to do with the fact that the process scheduler is not optimized for preemption. As a matter of fact, Windows has a really bad scheduler compared to the one in Linux, and yet it has huge market penetration and people seem to like it as a desktop OS.
As far as the kernel scheduler goes, Linus is correct to ask for one that works very well in most cases instead of many that each work great in one particular case. Sure, pluggable schedulers are a great idea, but the overhead and complication they add to the kernel are really not worth it (but this is just a guess, as I have never done any experimentation).
Oh and it’s about time for people within the linux community to stop fighting over dumb s$*#. But then again Kolivas is no longer in the community …
Most people here don’t even understand what a scheduler does, and yet they claim “fork it”.
What a bunch of idiots.
So now any random moron’s frustrated and uninformed rants against things they don’t understand are news?
Infoworld.com? Stick a fork in it!
Edited 2007-09-19 02:54
Thom, what made you link this article?
… I ran Con’s kernel a few weeks ago. Yeah. No big deal. Certainly nothing worth writing a rant over.
Nah, scratch that. How about this: didn’t make a f–k of difference.
My take: bulls–t hype, no substance.
I just went through another round of trying out the latest Ubuntu and Fedora distros to see how things have progressed since I last tried Linux a year ago, and he is right on the money… the desktop experience still sucks. And sucks bad. I hate to say this but Vista just blows away Linux on the usability front from top to bottom. And OS X is even better. I know because I have systems running all three sitting here in front of me right now and I use them every day. He pretty much summed up the reasons and I think he is right on.

Linux has really progressed since I first used Red Hat in 1997-ish (it’s been a long time) and it is much, much better, but it still has a long way to go to catch up with MS or Apple on the usability front. And by usability I mean making the damn thing usable, from popping in a DVD, to tracking my photos, to managing my music, to making movies, to playing games, etc… you get the picture. It has gotten much better on the web, email and office apps front, but it still is not easy to do any of the more common home-user things under the latest distros I just tried, and it is really horrible that Linus and company can’t seem to get it through their thick friggin heads that for Linux to appeal to the masses it has to have a great desktop experience from top to bottom. Just having a lot of eye candy is not going to cut it.
“””
Just having a lot of eye candy is not going to cut it.
“””
Actually, the eye candy is a diversion from the real problems.
But regarding the problems that you cite: popping in a DVD, tracking your photos, managing your music, making movies, playing games. Can you describe to me exactly what Linus and company can add to the kernel to solve these perceived problems? Con’s scheduler would solve them all? The swap prefetch patch, maybe? I just don’t see how your concerns fit in this thread.
The issues you cite are “distro” issues, and user space issues, and not kernel issues. They all have good solutions. Maybe they are not integrated into distros well enough yet.
But if you think that forking the Linux kernel is going to help you sort your baby pictures, you are sadly mistaken.
“I hate to say this but Vista just blows away Linux on the usability front from top to bottom. And OS X is even better. I know because I have systems running all three sitting here in front of me right now and I use them every day. He pretty much summed up the reasons and I think he is right on.”
Honestly, I just formatted Vista off my computer after playing around with it for a few days. I played World of Warcraft on it, got GIMP on it, and realized I couldn’t find a good free camera RAW file program. The only thing I liked about Vista was the new graphics; the blur under the Aero glass is a very cool effect. Plus it had some really nice wallpapers.
I formatted Vista Home Premium and installed Ubuntu. And seriously, my frame rates in World of Warcraft almost doubled. I still get the same kind of effects both on the desktop and in Warcraft, the system uses way less RAM, and it is way more responsive. I can import my camera RAW files. Windows sucks, the whole platform: it’s not easy, it’s hard to keep secure, it’s expensive, bloated and slow, insecure crap.
So you have 3 different systems? Oh, let me guess: one dirt-old Ubuntu box, and a brand new Mac and PC. I just tried Vista and Ubuntu on the same box, and Ubuntu smokes it hands down in every way. Except for the cool Aero blurring effect, lol.
“Con – that champion of all things desktop centric”
Seriously, what did he do that someone else working on the kernel team wouldn’t have done? Did he even write any code? He himself said no in this interview —> http://kerneltrap.org/node/465
All he did was apply patches to the Linux kernel that were already lying around, and got Linux to go a little faster. And he made a benchmark program to test things.
Linux is still going places with or without him, but that’s what people do: test it, and test it, and test it. The Linux kernel has thousands of people applying their own patches, or other people’s patches, and testing things. This is just one less tester out of thousands or more… this is open source development. That’s just the way it goes.
I’m at a bit of a loss to understand where all the vitriol about Linux’s scheduling “issues” is coming from. I use Linux exclusively at home and I haven’t managed to persuade Amarok to skip in recent memory – this includes “make -j4” which pegs my dual-core box nicely. And that’s just a plain 2.6.21 kernel with a few unrelated patches (fbsplash and whatnot).
Meanwhile my Windows (XP) box at work is _shocking_. Earlier this week it managed to turn my music into a fart every time a dialog box opened. I was running two CPU-bound processes so the CPU was close to pegged (again, dual-core machine, but double the RAM of my box at home) but surely it could find some spare cycles for a freaking dialog box?
And I have lost count of the number of times it’s ceased responding for up to a minute in similar circumstances (Explorer is particularly bad for this – oddly Firefox remains quite responsive).
So in short, I find vanilla Linux to be head and shoulders above the common alternative. I’ve flirted with -ck kernels in the past and found them good, but I don’t think they’re necessary to beat the competition – Linux left them behind some time ago.
Having read the interview with Con and the thread on LKML discussing the issue, I have to say Con comes off as a thin-skinned prima donna who is just making a big stink because his code didn’t get into the kernel. SD and CFS were put through a few benchmarks and came out with very similar performance for most tasks, each having its ups and downs, but with CFS being more consistent across a wider range of usage scenarios.
Linus has faith in Ingo as a maintainer, and if nothing else, this little debacle weeded out Con early in the game, rather than having something similar happen later, with the SD scheduler already in place. Kernel development is not about instant gratification or feeling good. It’s very dog-eat-dog, and although Linus can be an asshole, he’s usually right.
I think a better solution would be something along the lines of the Kernel settings notebook in xWorkplace for eComStation, where the user has a number of scheduling, multitasking and memory management customizations for the kernel at their disposal:
http://www.os2usr.de/images/xwp/xwp_drv.gif
http://www.os2usr.de/images/xwp/xwp_sysl.gif
(These are the only two decent screenshots I could find of the scheduler settings notebook, but they give you an idea.)
… Oh, Jesus…strike that: Oh, Kolivas !
This is ridiculous. On so many levels, it’s even funny.
One thing is true, though: Con quickly gained a strong following within this particular sub-community. And that’s it. Worshippers need to have a longish rest and then move on. Linux is not Kolivas, and what he has done, although great stuff, is not so essential that everything would fall apart when he stops doing it.
Some people just have too much time on their hands, and when they start writing articles, well, someone help us all.
Solaris has none of this issue. Why not simply copy the Solaris kernel’s design?
Why not simply use Solaris if you want its features?
There are plenty of options for that.
Ingo’s scheduler was better, period. Linus is an excellent engineer and the Linux kernel is his lovechild, so if Linus says Ingo’s scheduler was simply better, I believe him. Not everything has to be some sort of conspiracy.
Sure, server features will get *marginally* more attention, because CODING FREE SOFTWARE ISN’T FREE. Without support from companies like IBM and HP, Linux wouldn’t be nearly where it is now.
And about forking Linux: the kernel is too big a project for just a handful of people to fork.
Creating patch sets for the official kernel has worked for everyone’s needs until now, so I can’t see why it couldn’t keep working far into the future. Leave it to the distro builders to decide what patches they want to add to their distros’ kernels; there simply isn’t a need for two kernels.
I’m not really willing to start discussing this again, but just days ago benchmarks came out in which Con’s SD, which has not been optimized or improved for months, still did a bit better overall than Ingo’s CFS. I don’t oppose Linus’ decision to go with CFS, but saying it is better is at the very least controversial.
Of course, forking is useless with Git and stuff…
As I understood it, Con’s scheduler had a far simpler structure while being almost as good as Ingo’s (if not better). The KISS principle favors Con’s scheduler, then. Right? Why a more complex one, that is plainly not optimal?
“Why a more complex one, that is plainly not optimal?”
Because the people who actually work on the kernel know these things better than any of us who post at osnews.
That doesn’t go for all of us.
This decision was mostly ‘political’: Ingo’s position was simply much better than Con’s. Again, I don’t criticize the decision; Ingo works great with the community and so on, and that’s valuable. But it is a fact that the decision wasn’t (entirely) made on technical grounds.
This decision was mostly ‘political’
No, it was completely political. Go back and read the posts from around the time Con left and witness Linus’ admission that he never reads outside his main list. In other words, if you aren’t already someone Linus knows or wants to know, you have absolutely ZERO credibility with him, and no way to GAIN credibility with him. You’re simply not on his radar.
–bornagainpenguin
That’s certainly a creative interpretation, bornagainpenguin. If you form a clique of people who agree with your views, Linus recognizes it as a clique, which you have formed, of people who agree with your views… and assigns it a value in keeping with that status.
The moral? Do stuff out in the open. Don’t form a clique. If it is kernel related, have the conversations on LKML.
What I have said can hardly be considered a revelation to anyone.
Why should he read anything but the official kernel list? Surely he has better things to do with his life than follow everyone and their grandmother’s tiny mailing list.
Quite a reasonable attitude.
Don’t be silly, of course you can. If there were no way for anyone to gain credibility, he would still be the one and only kernel developer.
This is all much ado about nothing. Con’s patches were, for whatever reason (technical, political, whatever; it doesn’t matter), not considered good enough to replace Ingo’s work. Big deal; it happens all the time. Life goes on.
Con quit Linux; no big deal either. He’s not the first guy to do so and he won’t be the last. It surely won’t have a major impact on the Linux desktop, since what ails Linux is not on the kernel side but on the application side.
I mostly agree with the first things, but your last point about the kernel not being important for userspace is definitely wrong. I do know the KDE developers would love a bit more support from the kernel for userspace; it IS holding us back in some areas (inotify performance and limitations are horrible, to name one thing).
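For what it’s worth, the inotify limits complained about here are runtime-tunable. A minimal sketch, assuming a modern Linux with the standard `/proc/sys/fs/inotify` sysctls (defaults vary by distro):

```shell
# Inspect the per-user inotify limits:
cat /proc/sys/fs/inotify/max_user_watches
cat /proc/sys/fs/inotify/max_user_instances

# A common workaround for desktop apps and file indexers that
# exhaust their watches is to raise the limit (needs root):
#   sysctl fs.inotify.max_user_watches=524288
```

This only papers over the watch-count limitation, of course; it doesn’t address the per-watch overhead the KDE folks complain about.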
“but your last point about the kernel not being important for userspace ”
I guess I should have said “not mainly kernel space” (since that is what I actually meant), as there are of course certain things like ACPI, Wifi drivers and such that are in kernel space.
None of these are what Con was working on, though.
Those are just drivers. The core kernel code (like the scheduler, but also inotify and other things) isn’t really optimized for basic desktop use right now. It’s not horrible, but many other OSes (like some BSDs) clearly perform better. Of course, it’s not only the kernel; the lack of dedication in the lower-level stuff is a big problem for the Linux desktop. We’re also missing out on some things in the graphics area, and in others like the GNU linker.
This isn’t all to blame on Linus, of course; my point was that it’s sad Con left, as he brought a “the desktop matters” voice to the kernel.
I support about 70 Linux/Gnome Desktop users running on XDMCP servers. And I have *never* experienced any of the desktop performance problems that the Kolivas crowd claims that the kernel has. On my largest server, I have 50 Gnome desktops running on a dual Xeon 3.2GHz box. (That’s 25 users per core.) Performance is excellent. The one thing that you can do to kill performance is try to do this with insufficient memory. Roughly, you can do it on 256MB for the first user + 64MB per user thereafter. But then you are always right on the edge of performance disaster. 256MB + 100MB per user is a comfortable place to be.
We just took this server from 4GB to 8GB, which means I have 164MB per user at the moment, and the thing flies like a bat out of hell! :) *
No signs of all these terrible scheduler problems that the Kolivas fan club says we’re supposed to be having. Frankly, I’ve always kind of felt that Con’s work was much ado over nothing.
* I should add that it’s not quite as simple as the picture I painted above. This machine is also running a database server, a web server, a Turbogears application server, a Rails application server, a Django application server, an instant messaging server, and about a hundred sessions of an ncurses based point of sale package. So the per user numbers I’ve given are likely a bit on the high side.
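As a back-of-the-envelope check of the sizing rule above (256 MB base plus roughly 100 MB per user for the comfortable case; the numbers are the poster’s rule of thumb, not a benchmark):

```shell
# Comfortable XDMCP sizing: 256 MB base + 100 MB per desktop user.
users=50
total_mb=$(( 256 + users * 100 ))
echo "${users} users need roughly ${total_mb} MB (~$(( total_mb / 1024 )) GB)"
# prints: 50 users need roughly 5256 MB (~5 GB)
```

Which lines up with the poster’s experience that 8 GB is comfortable for 50 users once the other server daemons are accounted for.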
Well, I think really judging that would take far more knowledge of the real technical issues at hand than I have. But it sure does seem that way, yes. It still is the case that SD is smaller than CFS…
“no amount of ‘sudo’ super glue can put his pieces back together again.”
Nice of him to refer to Ubuntu.
Just use the nice/renice command if your system is slow.
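To make that concrete, a minimal sketch of nice/renice on a background build (the `make -j4` invocation is just an example workload, not anything from the thread):

```shell
# Start a CPU-heavy job with high niceness so interactive apps win:
nice -n 10 make -j4 &
pid=$!

# If it still interferes, push its niceness up further. Raising
# niceness needs no privileges; lowering it usually needs root:
renice -n 19 -p "$pid"
```

Of course, this is exactly the kind of manual babysitting a good desktop scheduler is supposed to make unnecessary.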
I hope all these people that feel the desktop is their mission and Linux is missing ‘it’ will put their energy into Haiku (BeOS born again). This effort is for desktop only.
BeOS had much promise.
"""
I hope all these people that feel the desktop is their mission and Linux is missing “it” will put their energy into Haiku (BeOS born again). This effort is for desktop only.
BeOS had much promise.
"""
The operative words are “had” and “promise”. In the real world, Linux and MacOS X are vying for second place for desktop OSes. BeOS was never even in the running.
What are the chances, do you think, that Haiku will eclipse both Linux and MacOS X and become the number two player?
I’d estimate that it is roughly equivalent to the chance that the number of my grand prize winning ticket in the state lottery will just happen to match the serial number of my newly purchased copy of Duke Nukem Forever.
Well, it was almost in the running, like 10 years ago or so. Too bad it then went the way of Commodore and made some disastrous management decisions (some would blame it on MS, but those folks are delusional).
In the near future? Virtually 0.
I should say that my previous post was a bit on the trollish side. Just because an OS is not likely to meet mainstream success does not mean that it is not worth working on. I do not think that the whole Con Kolivas thing is worth switching teams over. But let there be no question that I think the Haiku guys are doing great work. :)
Oh I agree. Haiku might turn out great but that doesn’t mean it will be in the running, especially not in the near future.
Yes. I know. A strong Linux desktop would help pave the way for a more popular Haiku.
That man is spewing more baloney than a store deli.
I’m tired of the desktop anyway. I have to stare at a “desktop” all day!
Maybe we need a new metaphor. How about…
a den? A study? Basement? Something people can play on. You ain’t supposed to play on a desktop.
Oh by the way – sticking a fork in something usually means it’s ready to eat. When are people going to stop using that stupid saying?
“””
“””
You’ve just described Microsoft BOB. :)