“The Linux 2.6.31 kernel is still under active development until it is released later this quarter, but the merge window is closed and most of the work going on is to address bugs and other regressions within this massive code-base. Some of the key additions to the Linux 2.6.31 kernel include many graphics-related advancements (merging of the TTM memory manager, Radeon kernel mode-setting, Intel DisplayPort, etc), an ALSA driver for the Creative X-Fi, initial USB 3.0 support, file-system improvements, and much more. To see how the general system performance has been impacted by the new Linux kernel that is in development, we have a few benchmarks today.”
Amazing to see the progress being made on these releases; Linux is the best kernel/OS ever.
I don’t know if you read the benchmarks, but almost every one showed a performance decrease over past kernels.
Also, best GPL-licensed kernel/OS ever, maybe. To claim everything else in existence is inferior would be a stretch, to put it mildly.
If you take a brief look at the graphs you’ll see that throughput has increased, but latency has increased as well. It depends on what you want to achieve when it comes to performance: lower latency with lower throughput, or higher throughput with higher latency?
Unfortunately, those who criticise the Linux kernel have had very little exposure to other operating systems, let alone to computers outside the x86 world. I tend to be more of a *BSD/Mac OS X fanboy though.
Performance decreases are called regressions, and are covered in the summary with: ‘the merge window is closed and most of the work going on is to address bugs and other regressions within this massive code-base’
“Linux is the best kernel/OS ever” is a bit of a stretch though, don’t you think?
Whereas this is not a stretch: there aren’t many (or any) other kernels that work on as many platforms, from large scale (88% of supercomputers) to small scale (RJ45 SoCs, ARM wall-plugs), plus phones, routers and TVs, all with the same codebase. The argument only rests on how you define “best”.
That’s a bit subjective, isn’t it?
Linux is the best kernel/OS ever.
The kernel itself is pretty good: no major issues, stable, and it works on almost anything ranging from your electronic toothbrush to large-scale supercomputers. But I still don’t like that it lacks a stable driver API/ABI. That’s one thing that stops certain manufacturers from supporting Linux. And I still find Linux power management to be flaky.
On the desktop side of things there are also quite a few things that need attention, but I’ll leave that rant for a more appropriate topic.
Any reaction from the kernel devs regarding the regressions that were found? I had a brief look at LKML but couldn’t find anything.
Generally, Phoronix has about as much credibility as the National Enquirer. They are more interested in hits on their website than in whether Linux gets faster or slower.
Real regression tests get posted to the LKML and indicate some level of competence (e.g. a syscall trace or a postulated cause, rather than some buffoon running stock benchmarking tools). Luckily, some folks at Intel run good tests from time to time.
That is rather harsh; I always got the impression that the Phoronix people were just enthusiasts running a website. Although one has to be sceptical of any sort of benchmark, because the results are rarely mirrored in reality (in terms of performance outcomes).
Reminds me of the TPC benchmarks and what a Sun engineer said about them: “they might be a great benchmark if all you intend to use your hardware for is TPC benchmarks all day”.
True, and then there is also the possibility that something changed in the kernel but, because the benchmark is badly written, it shows up as a loss of performance.
Isn’t it too late to report this?
I saw this news many days ago!
Are you using RSS to detect new news?
It isn’t good to be this late!
And why is that? Is the information worth less now? Better late than never.
The Linux kernel is terrible and everything about it is a mess. I use it from time to time [normally I use OSes other than Windows, Linux and Mac OS X], but it’s a complete mess when compared to other OSS kernels.
I hope they will eventually stop and redesign everything from scratch, making it clean and logically structured.
Care to explain? It actually seems fairly well-structured for the architecture it uses and the number of platforms and hardware configurations it must support. And even given that, it still performs quite well and is pretty darn stable these days.
Have you ever come across kernel compilation? It just needs a fix. Tons of different stuff mixed together, unmet dependencies, weird connections between several components. What I am thinking of is the logical structure: “Hard disk” with all the needed subclasses, like HD controllers, then “Multimedia” divided into “sound devices”, “video devices”, etc. It’s pretty messed up now and I’d like to see it finally working the way it should. I won’t even mention the lack of professionalism and the lack of code quality that are typical of the Linux kernel: it just HAS to work, no matter HOW, while I personally think that this “how it works” is the most important thing when it comes to the OS-land. I really have no interest in bad-mouthing Linux, I just want it to be better – I think we all benefit from that, regardless of the camp we’re actually in, whether it’s GNU, BSD or another one.
The kernel menuconfig system is a bit messy, but that’s just one *interface* to configuring the kernel, and they have been doing a lot of cleanups over time; it’s a lot better than it used to be. Still, there are a lot of options and sub-options and things that affect multiple subsystems, so there’s not much you can do to organize that any better. In any case, I would call that the least serious problem with the Linux kernel.
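To give a concrete picture: the menuconfig tree is generated from the kernel’s Kconfig files, which already nest options by subsystem in roughly the way being asked for. Here is a simplified sketch of that structure; the menu titles and option names are only approximations for illustration, not copied from the real tree.

# Simplified Kconfig-style sketch (illustrative names, not the real entries)
menu "Device Drivers"

menu "Multimedia support"
config VIDEO_DEV
        tristate "Video capture devices"
endmenu

menu "Sound card support"
config SND
        tristate "Advanced Linux Sound Architecture"
endmenu

endmenu

So the grouping the poster wants (storage here, sound there, video there) is more or less what the existing menu hierarchy already does; the clutter comes from the sheer number of options, not from a lack of structure.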
As far as your accusations regarding lack of professionalism or code quality go, I guess you really have never followed kernel development or read the rules they have regarding new code and subsystems, etc. They have some very strict policies about that kind of stuff. And it’ll do you well to remember big deals such as reiser4, suspend2 and others that never got merged because of quality and design-incompatibility issues. Doesn’t sound like lack of code quality to me.
“Doesn’t sound like lack of code quality to me.”
Even Linux kernel developer Andrew Morton complains about the declining quality of the Linux kernel. His words:
http://lwn.net/Articles/285088/
Q: Is it your opinion that the quality of the kernel is in decline? Most developers seem to be pretty sanguine about the overall quality problem. Assuming there’s a difference of opinion here, where do you think it comes from? How can we resolve it?
A: I used to think it was in decline, and I think that I might think that it still is. I see so many regressions which we never fix.
Read the rest of your link: the main problem is really testers not doing their job. They can’t fix regressions that they don’t have enough information about.
He also speaks highly of the code review process:
“Q: How would you describe the real role of code review in the kernel development process?
A: Well, it finds bugs. It improves the quality of the code. Sometimes it prevents really really bad things from getting into the product. Such as rootholes in the core kernel. I’ve spotted a decent number of these at review time.
It also increases the number of people who have an understanding of the new code – both the reviewer(s) and those who closely followed the review are now better able to support that code.
Also, I expect that the prospect of receiving a close review will keep the originators on their toes – make them take more care over their work.”
“Read the rest of your link: the main problem is really testers not doing their job. They can’t fix regressions that they don’t have enough information about.”
So what? It doesn’t matter why; only the result matters: Linux kernel code is of varying quality, as Andrew Morton says.
Nowhere in there does he say it’s a mess or needs to be thrown away. That’s the point of the OP. Yes, maybe some parts need some TLC. Perhaps taking some time to look at the dev process and tighten it up a bit would be good too. And maybe it’s having a year or two where things aren’t quite as good as they were before. I won’t deny these facts. I will deny, however, that they mean everything is crap and Linux is terrible and needs to be rebuilt. Can you accept that argument as well?
I guess that no one here was really saying the Linux kernel is complete and utter crap.
It is quite useful, although it certainly LACKS some features, overall code quality, and general ORDER.
I’d suggest some LINUX KERNEL WORKING GROUP that would be responsible for maintaining the base of the kernel code. Jesus, they already have an easy task – some *BSD, Mac OS X and Windows guys out there are managing THE WHOLE SYSTEM SOURCE, where *system* means “kernel + userland”, not only the kernel …
Increase code quality. That is my postulate.
marcp understands my viewpoint. But looking after the quality of the Linux kernel will be difficult.
First, they accept patches from non-experts, leading to lower quality. In fact, Kernighan and other old Unix gurus have examined the Linux code, and they said it was not so good.
Second, the Linux kernel is over 10 million lines of code – that’s a huge amount of code for one KERNEL. The entire Windows NT was 10 million lines of code. How can a single kernel be that big? The more code, the more bugs. Less is more, not more is more. Linus should cut the bloat. There are kernels far, far smaller than Linux, and of better quality. In fact, some people claim that Linux is quite unstable and doesn’t scale well. For single-CPU machines, Linux is good. For small loads, Linux suffices. But when you scale up, Linux bites the dust. Then you have to switch to a real Unix.
“For small loads, Linux suffices. But when you scale up, Linux bites the dust. Then you have to switch to a real Unix.”
That’s very true, indeed. What Linux needs is – in my opinion – a highly modularized kernel with a strict core that is *carefully examined*. You’d then get a very stable core to build upon. If they can’t do it right with all of the kernel, then they need to modularize it even more and review the core very carefully. Just push all of the extra hardware support outside the kernel core area … leave only basic HW support built-in.
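As a footnote, part of that split already exists at build time, since most hardware support can be compiled as loadable modules instead of into the core kernel image. A rough, illustrative .config fragment showing the idea (these are real kernel option names, but the particular built-in/module split below is just an example, not a recommendation):

# Illustrative .config fragment: core pieces built in, extra hardware as modules.
# Root filesystem and boot-critical disk controller built into the kernel image:
CONFIG_EXT4_FS=y
CONFIG_SATA_AHCI=y
# Sound and USB storage drivers built as loadable modules, loaded on demand:
CONFIG_SND_HDA_INTEL=m
CONFIG_USB_STORAGE=m

That keeps the always-running image small while the rest of the hardware support only gets loaded when the device is actually present.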
The Linux kernel also supports a ton of architectures. NT doesn’t. Why are you comparing it to a 10+ year old operating system anyway? Drivers are included in the Linux kernel but Windows has few in-kernel drivers.
Did you just rip quotes from a 10-year-old website? First you compare Linux to NT, and now you’re making claims that aren’t even remotely true today. Linux has been running on big iron for many years now, outperforming a lot of more expensive systems along the way. Claiming that Linux cannot scale beyond a single processor is obviously dumb to pretty much anyone using Linux today.
Again: there are kernels far smaller than Linux’s 10M lines of code. If they can do it, Linux can.
As for Linux scaling badly, we have been through this numerous times. But to recap: Linux scales well on a cluster of nodes. That is horizontal scaling.
http://en.wikipedia.org/wiki/Horizontal_scaling#Scale_horizontally_…
You just add more nodes and Linux scales very well horizontally. But this requires you not to use the vanilla Linux kernel; you use another Linux kernel tailored to this task. (Then I could tailor MS-DOS to this task as well – and claim that MS-DOS scales well.)
But vertical scaling is another thing: you have one big machine with lots of CPUs. In that case Linux scales badly.
Here you have some Linux experts discussing scaling. They claim that “Linux scales badly” is only FUD from Unix vendors, and that Linux scales very well:
http://searchenterpriselinux.techtarget.com/news/article/0,289142,s…
“Greenblatt: Linux has not lagged behind in scalability, [but] some vendors do not want the world to think about Linux as scalable. The fact that Google runs 10,000 Intel processors as a single image is a testament to [Linux’s] horizontal scaling. The National Oceanographic and Atmospheric Administration (NOAA) has had similar results in predicting weather. In the commercial world, Shell Exploration is doing seismic work on Linux that was once done on a Cray [supercomputer].
Greenblatt: With the 2.6 kernel, the vertical scaling will improve to 16-way. However, the true Linux value is horizontal scaling.”
Scaling well horizontally is non-trivial, but scaling well vertically takes decades of trial and error. Solaris’s first versions sucked; it scaled only to ~8 CPUs (just as Linux does now), back when it was named SunOS. Then they redesigned it and renamed it Solaris, and Solaris scales vertically to hundreds of CPUs. Solaris is SunOS’s second try, redesigned.
You can hardly expect Linux to scale to 4 CPUs in v2.4 and to several hundred in v2.6. It takes decades to scale vertically.
Admittedly, Linux scales well – horizontally. But vertically it sucks. That is obvious: how can you develop a Linux kernel that scales well to hundreds of CPUs without some big iron to lay your hands on and test your code on? No Linux kernel developer has access to a big machine with hundreds of CPUs. They have access to large clusters (Beowulf) and servers with ~8 cores. Nothing larger. No hands-on experience with big iron -> no vertical scaling.
But the good thing is that Linux and Unix are very alike. All your investment in learning Linux carries over easily to Unix, and vice versa. You lose nothing by moving from Linux to Unix, and vice versa. I once learned the Amiga well. Then MS-DOS. Then Win 3.11. Then Win95. Then… etc. If only I had stuck to Linux/Unix I wouldn’t have had to relearn; I would be a true guru now. Linux/Unix are very alike; you can move freely between them.
Once they achieve binary compatibility with Deluxe Paint AGA and correct the spelling to Kernal, it will be finished! Woo hoo!
Deluxe Paint AAA would be even better. And it would probably have a better user interface than the GIMP.