After two months of development, Linux 2.6.20 has been released. This release includes two different virtualization implementations: KVM, which provides full virtualization using the Intel/AMD hardware virtualization extensions, and a paravirtualization interface usable by different hypervisors. Additionally, 2.6.20 includes PS3 support, a fault injection debugging feature, UDP-Lite support, better per-process IO accounting, relative atime, a relocatable x86 kernel, some x86 micro-optimizations, lockless radix-tree read-side, shared pagetables for hugetlb, and many other things. Read the list of changes for details.
Wheee... Congrats to all devs, testers and, well, users!
There are a lot of significant new features in this release! Two new virtualization interfaces (paravirt_ops finally made it in), a new architecture (Cell/PS3), and the lockless radix-tree reads (for the page cache) should be just the ticket for scaling to 2048 CPUs and possibly beyond (they’ll have to slow down and wait for the system vendors to catch up 😉).
Basically, there’s a lot of stuff in here that should scare the pants off the big UNIX guys (myself included, but I like to root for the underdog… too bad about the bears).
Here’s a link to the Google cache of the changelog, since the site is down at the moment: http://64.233.167.104/search?q=cache:DeB7M2cAS8gJ:kernelnewbies.org…
The best part of this OS, apart from the GNUish thing :o)
x86 relocatability? I haven’t checked the changelog, but what’s that?
It allows the kernel to be loaded at a different address. For kdump, I think this means you no longer need a separately compiled kernel for your dump kernel (previously you needed one built for a different, fixed base address).
Like Mark said, previous kernel releases needed a second kernel image in order to handle crash dumps. A bunch of releases ago, the Linux kernel added kexec, a method of loading a Linux kernel image from the running kernel and turning control over to the new kernel instance. This allowed Linux to finally have crash dump support via kdump. The new image needs to be able to dump the old kernel image, so it obviously can’t be loaded into the same memory range as the old image.
The new system allows kexec to load the same kernel image into a different memory range, so you don’t need to jump through hoops to get a crash dump. It would be “cooler” if Linux had dump routines that could run from within the crashing kernel, but that requires being very careful and having access to a raw block device dedicated for dumps. You can’t take page faults or service interrupts from dump routines that run inside a crashing kernel, so the Linux kexec/kdump approach is a more conservative design.
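If you want to poke at this yourself, here’s a minimal sketch (my own, not from the release) of checking from userspace whether a crash kernel is currently loaded. It assumes a kexec-enabled kernel that exposes /sys/kernel/kexec_crash_loaded; the usual flow is to reserve memory at boot with the crashkernel= parameter and then load the panic kernel with the kexec-tools utility (kexec -p):

    /* Report whether a kdump crash kernel has been loaded.
     * Assumes the kernel exposes /sys/kernel/kexec_crash_loaded,
     * which reads "1" once a crash kernel is in place. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/sys/kernel/kexec_crash_loaded", "r");
        int loaded = 0;

        if (!f) {
            perror("no kexec support exposed in sysfs");
            return 1;
        }
        if (fscanf(f, "%d", &loaded) != 1)
            loaded = 0;
        fclose(f);

        printf("crash kernel %s\n", loaded ? "loaded" : "not loaded");
        return 0;
    }

With a relocatable kernel, the image you load with kexec -p can be the very same one you booted from, instead of a specially built dump kernel.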
I’m not completely sure if the dump image can continue to run the system after the dump. If a new production image can be loaded via kexec into the original memory range, I believe that this capability would place Linux ahead of all commercial UNIX implementations in terms of downtime due to a crash dump. Very impressive!
hmm, impressive indeed.
Still, there would be the chance of it going from crash to crash, getting nothing done because the kernel is too occupied firing up a new instance of itself to take over from the old one.
But this is fairly simple to test for. If it happens, the system can just die like it does now.
And I just spent the evening downloading and compiling the latest release candidate! Thanx Linus & Co for rendering that effort superfluous!
BTW, I didn’t get the 8.33.6 ATI fglrx 3D drivers to work with RC7. Does anyone know if there’s a driver version that works with 2.6.20?
try this:
http://bugs.gentoo.org/show_bug.cgi?id=161378
There’s a patch for ati-drivers 8.33.6 in Gentoo’s portage. Works for me™.
Does that mean more speed?
Yes, but not much.
small micro-optimizations in x86 (sleazy FPU, regparm, support for the Processor Data Area, optimizations for the Core 2 platform)
Sleazy FPU and regparm seem to be defaults in x86-64 already, and were just ported back to x86.
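For anyone wondering what regparm actually does: on 32-bit x86 it passes the first three integer arguments in EAX, EDX and ECX instead of on the stack (the kernel turns it on tree-wide with -mregparm=3). Sleazy FPU, as I understand it, is separate: it preloads FPU state at context switch for FPU-heavy tasks so they don’t take a device-not-available trap on their first FPU instruction. Here’s a tiny userspace sketch of the calling convention, using the standard GCC attribute:

    /* Demo of the regparm(3) calling convention on 32-bit x86.
     * Build with: gcc -m32 -O2 -S regparm_demo.c and inspect the
     * assembly: add3's arguments arrive in EAX/EDX/ECX rather
     * than on the stack. On x86-64 the attribute is moot, since
     * the standard ABI already passes arguments in registers. */
    #include <stdio.h>

    #ifdef __i386__
    #define REGPARM3 __attribute__((regparm(3)))
    #else
    #define REGPARM3 /* x86-64: register passing is the default */
    #endif

    static int REGPARM3 add3(int a, int b, int c)
    {
        return a + b + c;
    }

    int main(void)
    {
        printf("%d\n", add3(1, 2, 3));
        return 0;
    }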
Now hopefully it won’t take much time for the Debian kernel engineers to roll it out. Otherwise, I’ll just roll my own again.
Just discovered the hard way that the netfilter team has made some structural changes that could impact existing iptables scripts/utilities you may run.
On SUSE 10.2, SuSEfirewall2 was borked as soon as I compiled 2.6.20; the only way to use the network was to disable it altogether.
Take a close look through your netfilter config settings: with 2.6.20, kconfig apparently unsets some of the options due to the transition to a new framework, including some of the modules and targets needed for standard use. (Here’s Linus’ typically, er, diplomatic explanation: http://lkml.org/lkml/2007/1/9/217 ) After a couple of rounds of rebuilding those modules/configs, I gave up troubleshooting what was missing and pretty much enabled everything under the now-deprecated framework. There’s a quick sanity check sketched below.
YMMV.
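If you’re not sure which conntrack framework your rebuilt kernel actually ended up with, here’s the sanity check I’d use (my own sketch; it just looks for the proc files that the legacy ip_conntrack and the new layer-3-independent nf_conntrack register once their modules are loaded):

    /* Which connection-tracking framework is active?
     * Legacy ip_conntrack registers /proc/net/ip_conntrack;
     * the new nf_conntrack registers /proc/net/nf_conntrack.
     * Either file only appears once the module is loaded. */
    #include <stdio.h>
    #include <unistd.h>

    static void check(const char *path, const char *name)
    {
        if (access(path, F_OK) == 0)
            printf("%s: active (%s)\n", name, path);
        else
            printf("%s: not present\n", name);
    }

    int main(void)
    {
        check("/proc/net/ip_conntrack", "legacy ip_conntrack");
        check("/proc/net/nf_conntrack", "new nf_conntrack");
        return 0;
    }

If only the legacy file shows up, the new CONFIG_NF_CONNTRACK options probably didn’t get enabled in your .config.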
Read the reply – need I say, he is a more patient man than I would be in those circumstances.
Just discovered the hard way that the netfilter team has made some structural changes that could impact existing iptables scripts/utilities you may run.
Oh, please, not again. This is, what, the third time I’ve been forced to recheck every option because they screwed around with the config names?
Yet another Linux version – in your face, Eugenia!
Does this correct the issue of wireless speed/connection dropping under heavy load – when ripping from a CD, for example?
There’s a bug like that (wireless drops when the system is under ‘heavy’ load)?
If so, I’d like to know, since I might be affected by it myself.
There could be a hundred reasons why the system slows wireless transfers when there is a heavy I/O load. One could be that the computer isn’t fast enough to do both at the same time. More likely, the DVD burner is running as a very high priority process to prevent turning the DVD-R into a beer coaster at the expense of wireless transfer performance. Processing power and I/O are not infinite.
That is incorrect; the computer is ‘powerful enough’ given that I’ve accomplished the same task under Windows XP without any problems.
As a side note, I was not burning a CD but ripping audio from one – this is on a Toshiba A100, 1.73 GHz Core Duo, 1 GB RAM, etc., so it’s hardly an under-specced machine.
The cause of the problem is more likely a bad scheduler than an underpowered machine.
It’s never happened to me, and I regularly do what you’re describing, ripping from CD/DVD. It sounds like you have IRQ issues; try checking your dmesg output for “use pci=routeirq” errors or something like that. Or switch schedulers if that’s what you think it is – though I assure you that’s a bad assumption.
I’ve just checked it with OpenBSD, and I don’t see the same problem – oh well, I’ll go with Windows Vista until such time as the problem is fixed in Linux; I shouldn’t need to jump through hoops just to get basic functionality up and running.
Can we say, “Try a different distribution?”
Your wireless speed/connection dropping under heavy load – when ripping from a CD, for example – sounds very much like a driver/ACPI/IRQ problem. OpenBSD has good wireless drivers, and so does Windows; there’s a clue to start your search for the source of the problem.
How about Mac OS X? Good drivers, bad, or so-so?
Thinking about getting myself a McMachine, so I’m interested to know.
I suppose it depends on whether the vendor puts an OS X sticker on the box or not.
What you’re saying is stupid: ripping a CD on a 1.73 GHz PC was never a heavy load to begin with. It won’t tax the I/O nor any scheduler. What is this nonsense?
And the scheduler most certainly isn’t bad.
I wouldn’t be surprised if your wireless card driver (or even the hardware) is the culprit here.
Try the same thing you’re doing over an Ethernet cable and see whether your bandwidth drops, instead of spouting such nonsense.
No thanks, like there aren’t already enough faults in the damn thing!
I don’t get your point… Maybe you forgot a smiley in your post? If you didn’t, just relax – no one is forcing you to use Linux!
And it was only a short time ago that 2.2 was announced.
From a long-time Linux user’s perspective, I believe Linux development has grown more in the last two years than in the previous ten years combined.
So, is there a live CD/USB key with this yet? Perhaps a wireless (Belkin F5D7050)/wired NAS, plus a (USB) DVB-T server and a web front end for controlling all the options?