Ingo Molnar released a new patchset titled ‘Modular Scheduler Core and Completely Fair Scheduler’. He explained, “this project is a complete rewrite of the Linux task scheduler. My goal is to address various feature requests and to fix deficiencies in the vanilla scheduler that were suggested/found in the past few years, both for desktop scheduling and for server scheduling workloads.”
Isn’t Linux’s task scheduler the part of Linux that is rewritten the most often?
“””
Isn’t Linux’s task scheduler the part of Linux that is rewritten the most often?
“””
Nice try.
Isaac Asimov answered you far more eloquently than I ever could:
http://tinyurl.com/k9cze
God, how I miss that talented, wonderful man!
nice read!
thanks for bringing it up and sharing it =)
Agreed! An amazing man to say the least!
My favorite author, so I’m not unbiased…
But that was the best response to a post I have seen in many a day… Kudos to you!
Nice one thx
This reply is lame.
Scientists are getting better at collaboration, resulting in fewer errors from consensus.
This paper proves one thing: Science can’t be trusted for producing truth. Ask the ~100 million people who were exterminated last century by people and governments who used “science” as their rationale.
I wouldn’t call this OS stuff “science” in the same way; writing software algorithms is more akin to engineering, since computers are a problem domain artificially limited by the capabilities of the hardware.
That said, there’s nothing wrong with going back and refactoring something occasionally to see if there’s a better way to do it.
I’d say at least half the code I write gets tossed on the trash heap. Sometimes that’s because of an initial misunderstanding of the problem, but mostly because writing the code is part of learning the problem.
reductio ad hitlerum?
The guy is named Ingo Molnár, not “Molna”. Please fix it.
How about Virtualization/Emulation software?
I know that Linux kernels with HZ=1000 option are rather slow in Virtualization environments and I would like to know how this new scheduler will perform in such Virtual (VMware/VirtualBox/…) environments.
I am by no means an expert on any of this, but my guess would be that if a scheduler is completely fair, then virtualization wouldn’t fare too well, as the virtualized box would get one tick / whatever, no exceptions. All the processes in that box would have to run off that one tick (basically).
Of course, I could be completely wrong, in that case ignore me.
When they say “completely fair”, they actually mean fair rather than equal. The processor time is divided in a fair way so that latency-sensitive apps can be prioritised over throughput-oriented apps. What you have described is equal timeslices (except nobody divides it as small as single ticks), which can be used on the current kernel, I believe.
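To make that concrete, here’s a loose user-space sketch (my own illustration, not code from Ingo’s patch; the weights and tick size are made up) of weight-based fair selection: the scheduler always runs the task that has received the least weight-adjusted CPU time, so a high-priority task gets a bigger share without starving the low-priority one.

    /* Sketch of weight-based fair selection vs. equal timeslices.
     * Illustrative only -- not taken from the CFS patch. */
    #include <stdio.h>

    struct task {
        const char   *name;
        unsigned int  weight;    /* higher weight = bigger CPU share */
        unsigned long vruntime;  /* weight-adjusted runtime so far */
    };

    #define NICE_0_WEIGHT 1024   /* reference weight, an assumption */

    /* Pick the task that has received the least weighted service. */
    static struct task *pick_next(struct task *tasks, int n)
    {
        struct task *best = &tasks[0];
        for (int i = 1; i < n; i++)
            if (tasks[i].vruntime < best->vruntime)
                best = &tasks[i];
        return best;
    }

    int main(void)
    {
        struct task tasks[] = {
            { "latency-sensitive (nice -5)", 3072, 0 },
            { "normal (nice 0)",             1024, 0 },
            { "batch (nice 10)",              128, 0 },
        };
        /* Simulate twelve 1 ms ticks: charge the chosen task, scaling
         * the charge down by its weight, so heavier tasks run more often. */
        for (int tick = 0; tick < 12; tick++) {
            struct task *t = pick_next(tasks, 3);
            t->vruntime += 1000 * NICE_0_WEIGHT / t->weight;
            printf("tick %2d -> %s\n", tick, t->name);
        }
        return 0;
    }

Run it and you’ll see the heavy task picked far more often than the batch task, yet the batch task still gets its turns; that’s the “fair but not equal” idea.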
“””
I know that Linux kernels with HZ=1000 option are rather slow in Virtualization environments and I would like to know how this new scheduler will perform in such Virtual (VMware/VirtualBox/…) environments.
“””
I may be misunderstanding your question. But if you are speaking of performance degradation due to excessive timer interrupts when many instances of the kernel are running, I think that the really exciting news on this front lies not with the scheduler, but with the tickless kernel patch, applied to 2.6.21-rc1, and due to be released in a stable kernel possibly as soon as next week.
But the tickless patch itself is only the beginning. Next will follow patches with the batching of interrupts in mind, so that more things get scheduled to be done during one wake up period of the processor (virtual or otherwise), resulting in even fewer interrupts.
Edit: Just a few more notes. Tickless in 2.6.21 will be an option, not the default. It is only available on x86 (32-bit), but should be available on other architectures shortly. FC7 will have tickless on by default. My laptop has gone tickless (2.6.21-rc6) and I’ve had no problems so far.
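For reference, the relevant options look like this in my 2.6.21-rc6 .config (treat the exact set as illustrative):

    # Tickless ("dynticks") idle -- the new option in 2.6.21
    CONFIG_NO_HZ=y
    # High-resolution timers, merged alongside it
    CONFIG_HIGH_RES_TIMERS=y
    # The base tick rate still applies while the CPU is busy
    CONFIG_HZ_1000=y
    CONFIG_HZ=1000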
Anyone know if these rbtrees are per-processor? If they aren’t then synchronization will be a pain.
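(From skimming the patch, it looks like each CPU gets its own runqueue with its own tree, so the locking would be per-runqueue rather than global. Schematically, as my paraphrase and not the patch’s actual declarations:)

    /* Schematic paraphrase of a per-CPU runqueue: each processor owns
     * its lock and its own rbtree of runnable tasks keyed by virtual
     * runtime, so schedule() on one CPU contends with another only
     * when tasks migrate between CPUs. */
    struct rb_node;                        /* stand-in for the kernel rbtree */
    struct rb_root { struct rb_node *node; };

    struct cfs_runqueue {
        int             lock;              /* stands in for a per-CPU spinlock */
        struct rb_root  tasks_timeline;    /* runnable tasks ordered by vruntime */
        struct rb_node *rb_leftmost;       /* cached smallest key = next to run */
    };

    #define NR_CPUS 4
    static struct cfs_runqueue runqueues[NR_CPUS];  /* one tree per processor */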
This is why I like the Linux kernel. Everyone gets to see and try out changes to fundamental pieces of the system. I wish the whole driver situation weren’t such a mess, though.
Research, my friend, that you will not see in commercial OSes. However, I wish the research extended to device drivers, to find a way to simplify them more.
From my understanding, a device driver can only be as simple as the hardware API of the device being driven.
Since there are so many different devices, each with different APIs (some of which are broken), this makes for a lot of mess.
Then take the fact that many device drivers have to be reverse-engineered because the hardware manufacturers are insanely not releasing specs on their hardware.
It’s like trying to simplify the web: we have to get a huge number of people to agree on a standard, and then you still have to implement messy workarounds for the people that don’t. This is one of the reasons web browsers are so big and messy.
– Jesse McNelis
Another problem is we consumers always want newer and cooler features delivered faster and at lower prices.
If everybody had been perfectly happy with the HTML 2.0 standard, most browsers would be fully compatible.
While you won’t ‘see’ this kind of research in a commercial OS, it would be naive to assume it isn’t happening. I’d be very surprised if Microsoft and everybody else didn’t have teams constantly ripping their OS kernels apart and trying all kinds of weird and wonderful things.
Just because most of these things end up not being implemented in a commercial product, doesn’t mean they haven’t been tried and tested.
While you won’t ‘see’ this kind of research in a commercial OS, it would be naive to assume it isn’t happening.
Exactly. This kind of research is going on. What you won’t see in a commercial OS is a release that includes code that is 6 hours old and has even fewer hours of testing. You wouldn’t even see that in a beta release of a commercial OS.
“Break me tender, break me more…”
OMG..What a wonderful slogan for Linux!
I am getting more and more concerned with the development model of the Linux kernel and the arrogance and attitude of the developers. You can’t trust Linux to be the rock-solid kernel it used to be. You have to hope and pray that they didn’t add any new regressions or bugs in each release. Since there’s no longer any distinction between stable and unstable, you really never know what you’re going to get.
Why can’t you trust something that is getting better? I think this is good. The design is out there, and people can have their $0.02 on it. That is a good thing.
> Since there’s no longer any distinction between stable and unstable,
> you really never know what you’re going to get.
Here’s a hint for you: It’s the *KERNEL*. It is not meant to be a finished product. You can start worrying about kernel development when you have your own distro.
Use your vendor provided kernel.
It’s gone through more testing. Expecting the final QA to be done by the kernel devs is a layering violation.
Final QA is a responsibility that rests squarely on the vendors’ shoulders. The developers excel at developing. Let them each do their jobs.
If the distributor does not fulfill their responsibility, then find another.
The landscape has changed, and I perceive that there are many who are unwilling to adapt to this new world in which the kernel developers are not responsible for final QA.
Get a kernel off the street, and the risk is entirely yours.
This does not mean that the distributors are not also responsible for feeding their patches back to LKML. And it does not absolve the kernel devs from the responsibility of taking those patches seriously.
Welcome to the year 2.6.x. 🙂
Edited 2007-04-23 23:13
No other project has this sort of nonsense. If KDE has a bug, then it is KDE’s job to fix it, not Ubuntu’s. It’s great that distros try to fix some bugs, but this is more because they really need a polished product and don’t have time to wait around for upstream to fix everything.

That the kernel devs want to push everything off onto the distros is just laziness and arrogance. It’s not a reasonable development model. For one, it’s going to lead to splintering as each distro has its own patchset, and you are FORCED to use a distro’s kernel since the vanilla kernels are unstable. This is not the optimal situation. If KDE or GNOME or Vi or any other project took this approach, people would be screaming about it. For some reason, when the kernel folks do stuff, everyone assumes that because it comes out of Linus’s mouth, it’s perfectly reasonable.
I, for one, am tired of the kernel devs’ arrogance. It’s producing a kernel that’s getting less stable with time, even when the distros are maintaining it. There have been no real innovations for a while and the development process is a mess. I fear Linux will start to lose momentum in a few years, and it will be because of problems like this.
I’m running v5 of the new scheduler, it seems very good indeed.
I noticed that after patching the kernel, there was an option in menuconfig for automatically renicing X to -19.
I didn’t see anything specifically about it in Ingo’s readme, but there is a bit about the niceness handling in general:
the CFS scheduler has a much stronger handling of nice levels and SCHED_BATCH: both types of workloads should be isolated much more aggressively than under the vanilla scheduler.
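(For context, both knobs are reachable from user space with ordinary syscalls; here’s a minimal sketch of what they amount to. The PID is illustrative and nothing below is specific to the CFS patch.)

    /* What "renice X to -19" and SCHED_BATCH amount to in user space. */
    #define _GNU_SOURCE              /* for SCHED_BATCH */
    #include <stdio.h>
    #include <sched.h>
    #include <sys/resource.h>
    #include <sys/types.h>

    int main(void)
    {
        pid_t x_pid = 1234;          /* illustrative PID of the X server */

        /* The menuconfig option boils down to doing this on X's behalf: */
        if (setpriority(PRIO_PROCESS, x_pid, -19) != 0)
            perror("setpriority");

        /* Marking a throughput-oriented job as SCHED_BATCH
         * (applied to the calling process here): */
        struct sched_param sp = { .sched_priority = 0 };
        if (sched_setscheduler(0, SCHED_BATCH, &sp) != 0)
            perror("sched_setscheduler");

        return 0;
    }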
I wonder if we might see a return to the times of distributions renicing X by default to make it more responsive if this sched becomes the default…
For those who are curious to try the patch, it is available from:
http://people.redhat.com/mingo/cfs-scheduler/
> I wonder if we might see a return to the times of
> distributions renicing X by default to make it more
> responsive if this sched becomes the default…
CFS has not been selected as the next Linux scheduler yet. There are a couple of others which also “compete” for that title. For example, I don’t think that Con’s SD needs X renicing.
Edit: And Linus really hates X renicing, so I’m pretty sure that the next scheduler won’t do it.
“… renicing X is the *WRONG*THING*TO*DO*. Just don’t do it. It’s wrong. It was wrong with the old schedulers, it’s wrong with the new scheduler, it’s just WRONG.” – Linus Torvalds
Indeed, that was exactly my point.
Hence my question: also, *if* this scheduler becomes the default, would Linus et al. change their position on renicing X if it is beneficial to interactivity?
Is there some underlying reason why X should not be treated as a special case, if it is useful to ensure it can run when it needs to?
> Is there some underlying reason why X should not be
> treated as a special case if it useful to ensure it
> can run when it needs to?
http://lkml.org/lkml/2007/4/23/186
There was just a discussion about that very feature.
http://kerneltrap.org/node/8082
I’m not sure if posting from a mailing list without explicit permission is OK, so I will simply say that RJop’s post is a direct snippet of a LONG thread. Ingo and Linus have been going back and forth on this X renice issue. At this particular moment in time Ingo is basically stating… that I really do have a plan, and just please check this out, Linus… (that’s a very, very loose semi-paraphrase, not at all a quote). He goes on to explain that everything in CFS is “economy driven”: everything is held accountable, etc. I won’t try to loosely explain further at the moment. I will say that the information can be found on the Con Kolivas CK mailing list.

So far both SD and CFS are looking pretty interesting. Ingo is a pretty sharp programmer, as most folks know. Con, however, is also pretty sharp, and his SD scheduler is keeping Ingo somewhat on his toes! The end result is that it doesn’t matter which approach wins; we the people will definitely benefit, as both are already in GOOD shape.
Gary
Every time I read one of these articles I am astounded at the infighting and “name calling”-like behavior of these developers. And then there is Linus’ attitude: “I don’t want to be polite.”
This kind of attitude can’t be good for any project/developer in the long run, I would think… Even under the guise of “it fosters competition.”
How about instead of competing, everyone “sits down”, figures out a good way (not even necessarily the best way) to do something, then they all work together to make it happen. Then they enter a process that enables them to review what they have done, propose changes in the future and start the whole process over again in a cyclical, and always “polite and professional” manner?
When I worked on OpenVMS we had regular code reviews (and all patches were subject to peer code review). In those code reviews, the “gurus” (or maybe mentors is a better term) who usually attended would offer advice if you asked, or would ask if they could comment on how you did something. But they usually did not have to ask, as the first thing out of the person whose code was being reviewed was “please offer any suggestions that could improve what I’ve done.”
And you know what? That advice was accepted and more often than not changes were made to incorporate it. That’s how the younger developers learned some of the tricks of the trade. The only competition that existed was in each developer’s desire to improve their own code!!
They are indeed pretty harsh, but that’s the game, and it has worked pretty well until now. You just have to know they don’t hate each other, but are very critical of the work being done…