According to Brian Stevens, Red Hat’s director of engineering, version 2.4 is limited to 2 terabytes of storage, but the 2.6 Linux kernel will push the envelope much farther. Read the article at NewsFactor.
What about making the kernel modular? Reading that XFS will be integrated in the next kernel does not get me excited. Filesystems should not be integrated in the kernel at all.
“What about making the kernel modular? Reading that XFS will be integrated in the next kernel does not get me excited. Filesystems should not be integrated in the kernel at all.”
Why? It has been proven time after time that modular kernels are not automagically better than monolithic ones, or vice versa, since each model has its strengths and weaknesses.
Except that a modular kernel reduces the working-set size and allows new modules to be added later. Which is the easier way to add a new device driver when you buy that cool new hardware device: 1) copying in a .o module file, or 2) patching your kernel, recompiling, and rebooting? There goes your precious uptime. Option 1 is easier for users and device manufacturers.
carrier-grade features?
Maybe people will have to wait until version 3 for that kind of power.
I think the threading problems and the lack of a uniform development framework are the biggest shortcomings of Linux right now, especially for people who want to use Linux or develop common-usage software for it.
[i]Why? It has been proven time after time that modular kernels are not automagically better than monolithic ones or vice versa since each model has its strengths and weaknesses.[/i]
I agree with both models and I think that the choice should be left up to the user at configuration time. The user should be able to have it compiled directly in or as a module as they please. Personally, for a file system, I prefer it compiled in as part of the kernel, not a module. If I’m building a performance based system, I don’t want my file system support having to go through another API to exchange data with the kernel.
sigh, some people need to spend less time reading os news and more time reading os documentation.
linux has supported kernel modules for a very long time now. Integrated means linus has accepted xfs into the official kernel tree.
As for modules and performance, you are incorrect. There is no performance gain or loss.
Why ? It has been proven time after time that modular kernels are not automagically better than monolithic ones or vise-versa since each model has it’s strenghts and weaknesses.
The only disadvantage of modular kernels that I know of is the slightly higher overhead when calling the module.
In the filesystem case, I doubt that the slight overhead of a VFS layer will have any measurable performance impact. And I want to be able to use any filesystem without recompiling the kernel.
linux has supported kernel modules for a very long time now. Integrated means linus has accepted xfs into the official kernel tree.
I knew someone would say that. Currently, although you can load a filesystem as a module – you need to have support for that module compiled in the kernel. Same goes for device drivers.
Just check out latest status report on kernel newbies it was updated today, has everything that’s gone into linus tree
http://kernelnewbies.org/status/latest.html
Zele – That’s a load of crap. For instance, the Red Hat kernel does not come with support for NTFS. However, I can either compile the module from source, or download an RPM of the module, and it will work just fine. You don’t even have to reboot.
It is true that most modules must be compiled using the exact same sources as the original kernel, but many are much more robust and are portable across kernel versions.
Linux is a modular monolithic kernel. There is no reason you should have to recompile your kernel to add drivers or filesystems. If you have to then it’s your distro’s fault, not Linux’s.
Currently, although you can load a filesystem as a module – you need to have support for that module compiled in the kernel. Same goes for device drivers
yes but only the hooks for them
I knew someone would say that. Currently, although you can load a filesystem as a module – you need to have support for that module compiled in the kernel. Same goes for device drivers.
Eh? No you don’t. How do you think NVidia’s drivers work, then? No specific support for them exists in Linux.
[QUOTE]i think the threads problem[/QUOTE]
There isn’t anything really wrong with Linux’s thread support, at least not for the sort of market 2.4 is intended for (especially when you compare it with the abysmal FreeBSD 4 threading implementation). 2.6, which will be firmly in Solaris and AIX territory, has a brand new threading implementation which is technically excellent.
Man… are they sure it should not be 3.0? Linux is going to be so much better for the user, it isn’t even funny.
They had a big discussion over that on LKML a while back; Linus basically said he was going to call it 2.6. He isn’t really one for version inflation.
The choice of building a fs as module or integrated depends on how you are setting up your drives. If you want the root fs on XFS for example, you need it integrated otherwise the kernel cannot access the drive to load the XFS module!
I normally compile the minimum into the kernel and build pretty much everything else as modules. It does not matter if you build every single module available; they are not very big, and you can just load the ones you need.
The most interesting development in 2.6 for me is pre-emption. It makes a big difference to the general response of Linux when used as a desktop system, and coupled with a better scheduler should finally put an end to the ‘X is slow’ rants.
The only thing I hate with the current linux kernel model is that anytime I update the kernel I have to go out and get new ltmodem, Nvidia or ATI kernel mods appropriate to my new kernel.
I am actually coming to the opinion for a newb that they should stick with their working kernel and pass on those one every two month updates unless there is something truly substantial in the update.
Somebody also mentioned MOSIX spec for a kernel-based clustering technique that sounded interesting. I did not see anything like that on the list above but that would be so cool.
Looking forward to preempt and better scheduling.
Have not really looked into recent performance specs on SMP scalability, but it used to be kind of rough. Has that been taken care of?
With all the publicity about how Linux is supposed to overtake commercial Unices (HP-UX, Solaris, AIX), this limitation seems to preclude such a ploy. After witnessing the enterprise capabilities of proprietary Unices, I don’t think Linux is ready, yet, to take on the same level of responsibility as a tried and true commercial Unix system.
Maybe in some smaller capacity such as web, file and print and light database serving, Linux could be strong at the moment. But, this is where Microsoft has focused well over the years. If Linux reaches the maturity in code, stability & manageability as some commercial Unix solutions, it could be a drop in replacement. I would take a HP-UX or Solaris solution to manage terabytes of data over Linux at this point.
At any rate, I commend the Linux development community for getting as far along as they’ve gotten so far under varying degrees of scrutiny.
The linux kernel is already modular–that isn’t the problem. The problem is having a stable ABI across more than one version of the kernel, so that you can take a module compiled for, say 2.4, and have it run perfectly with 2.6 and beyond.
Well, I hope that the scheduling algorithm takes better advantage of pre-emption than the one in 2.4 did… I applied the preemption patch, and while performance was OK, it was not drastically better.
If you want the root fs on XFS for example, you need it integrated otherwise the kernel cannot access the drive to load the XFS module!
Use GRUB; it can load the module(s) itself before the kernel boots. On another note, hopefully 2.7 will finally bring a stable driver ABI, although I would not hold my breath.
that Linus did not want to name it 3.0, I bet… I am almost positive that 3.0 will have a stable driver ABI… at least until 4.0.
There isn’t anything really wrong with Linux’s thread support, at least not for the sort of market 2.4 is intended for (especially when you compare it with the abysmal FreeBSD 4 threading implementation)
Jesus christ you Linux zealot, is there any particular reason why you’re taking random stabs at FreeBSD?
Do you know how Linux used to implement threads? clone() essentially creates another HWP. In Linux, threads have process table entries with PIDs, which means all thread interaction goes directly through the process table. There’s no scheduling advantage for threads in Linux… unless you need shared memory, you might as well be using HWPs. Sure, Linux’s new threading libraries have finally corrected this, but for a long time this was the case… Linux didn’t use to have true LWPs, yet another result of Linux’s amalgamated design methodology… no one really knows what they’re doing, and things that require tight interaction between the kernel and libraries end up kludged like clone() did.
As far as SMP scalability goes, FreeBSD’s threading was awful, but that’s because FreeBSD still relied on the Big Giant Lock for virtually everything (similar to Linux 2.2 SMP w BKL) However, at least LWPs in FreeBSD were actually implemented that way.
Performance is actually worse with preemption (marginally, though), and throughput is diminished. But what it does do is decrease internal latency and give better time-sharing.
Basically, it raises the bar on how much load your system can take before things become unresponsive and begin to stutter.
It’s not a magic bullet, but it sure as hell is nice to have when disk load is high, a large job is executing in the background, etc.
It depends on how you measure performance. If you measure pure throughput, then you’re right; if, however, you are talking about user events like mouse clicks, window draws, and other such activities, preemption increases performance.
Jesus christ you Linux zealot, is there any particular reason why you’re taking random stabs at FreeBSD?
You bet. It’s the best way to get you riled up… hook, line, sinker.
Do you know how Linux used to implement threads?
Yup.
Sure, Linux’s new threading libraries have finally corrected this
And boy have they corrected this.
but for a long time this was the case… Linux didn’t use to have true LWPs, yet another result of Linux’s amalgamated design methodology…
And yet Linux’s threading was an order of magnitude better than the userspace-only obscenity in your favorite OS. Funny, that.
no one really knows what they’re doing
You and I both know that’s false. Why lie?
and things that require tight interaction between the kernel and libraries end up kludged like clone() did.
Hardly. Legacy threading in Linux isn’t all that bad – processes are fairly lightweight anyway.
As far as SMP scalability goes, FreeBSD’s threading was awful, but that’s because FreeBSD still relied on the Big Giant Lock for virtually everything (similar to Linux 2.2 SMP w BKL) However, at least LWPs in FreeBSD were actually implemented that way.
Eh? In FreeBSD 4, threads are userspace hack jobs. Threading scalability on FreeBSD 4 is utterly pathetic because of this, the extremely poor quality of the re-entrant libc (something you’re always pimping as superior to glibc), the total lack of kernel involvement in scheduling (resulting in threads getting starved randomly, especially when heavy IO is involved), and the lack of SMP support for threads (the child threads are only runnable on the same processor as the parent. What a fantastic idea!). Giant (which it seems /still/ hasn’t been eliminated from FreeBSD 5, despite your blatherings to the contrary) was just icing on the cake.
Now, whilst Linux threading hasn’t until now been as good as that found in commercial Unixen, it hasn’t been anything like as bad as that in FreeBSD. If this shows the Linux developers don’t know what they’re doing, it shows the FreeBSD developers are even worse…
1) Making the kernel modular has no effect, at all, on the performance. The Linux kernel doesn’t use the ELF DSO format. It links the module into the kernel as a .o file. While it can do this loading dynamically, the end result looks like the module was linked statically. Besides, this is a moot point because most module APIs are called via a pointer in a dispatch table, whether the module is static or dynamic.
2) Things like this need to be in the kernel tree. It’s an organizational thing, not so much a technology thing. Technically, any of these filesystems can be compiled as a module (in fact, most distros ship with a number of different filesystem modules available.)

First, your boot filesystem generally has to be compiled in. Otherwise, you have to go through all sorts of initial-ramdisk nastiness to load your kernel. Since Linux offers you the choice of many boot filesystems (XFS, ReiserFS, ext3fs, each of which have different strengths and make different trade-offs) it makes sense that they should be in the kernel source.

The far more important reason, though, is related to the swift pace of kernel development. Unlike most commercial projects, the Linux devs aren’t afraid of going in and changing around large amounts of code if that’s what’s needed. Between 2.4 and 2.6, Linux got a new VM, new block-IO layer, new scheduler, and massive changes in other subsystems. Up next is a rewrite of the TTY layer and the IDE layer. With all those changes going on, it’s very difficult to write a high performance filesystem that can be released independent of the main kernel source. SGI did this with XFS, but it requires a whole lot of synching between trees because code changes in the kernel proper often affect the filesystem. For example, preemption didn’t really work that well with XFS until both were included into the mainline kernel, at which point developers had to make sure that their changes didn’t break either.
The Linux kernel is modular. I think that you are confusing modular with micro vs monolithic.
The filesystems can also be compiled as modules.
You bet. It’s the best way to get you riled up… hook, line, sinker.
So you’re proud to be a troll?
And yet Linux’s threading was an order of magnitude better than the userspace only obsecenity in your favorite OS. Funny, that.
Have I ever claimed FreeBSD was my favorite OS? Well, for the record, it isn’t.
However, I must also note that here you’re spouting unsubstantiated hearsay. “An order of magnitude *better*”? Is that supposed to have some sort of technical interpretation? You’re quantifying it as such. Does it refer to thread spawn times? Context switching penalties? Do you have any numbers to corroborate your claims whatsoever?
You and I both know that’s false. Why lie?
So you’re saying that there’s tight coordination between the Linux kernel developers and the glibc developers? Perhaps you can explain why Linus curses the name of glibc…
Hardly. Legacy threading in Linux isn’t all that bad – processes are fairly lightweight anyway.
Fairly lightweight? When everything must be routed through the process table? Surely you jest…
Without limits.h hacked, legacy Linux systems would have a system maximum of 512 threads. I wouldn’t call that sufficient.
Eh? In FreeBSD 4, threads are userspace hack jobs. Threading scalability on FreeBSD 4 is utterly pathetic because of this, the extremely poor quality of the re-enterant libc (something you’re always pimping as superior to glibc),
This is completely incoherent drivel which I won’t dignify with a response.
the total lack of kernel involvement in scheduling
Which was fixed by KSEs in FreeBSD 5.0
(resulting in threads getting starved randomly, especially when heavy IO is involved)
Which can be mitigated by the use of a stateful I/O multiplexing mechanism and asynchronous I/O, two kernel features Linux 2.4 lacks.
and lack of SMP support for threads (the child threads are only runable on the same processor as the parent. What a fantastic idea!).
Also fixed in FreeBSD 5.0 through KSEs
Giant (which it seems /still/ hasn’t been eliminated from FreeBSD5, despite your blatherings to the contrary) was just icing on the cake.
Are you referring to the BGL? Your sentence, insult removed, was “Giant was just icing on the cake.”
I’m certainly not in the position to comment on whether the BGL was completely removed in FreeBSD 5.0. Are you? Judging from your current “angry Linux zealot” reply, I wouldn’t say you have any technical knowledge of the FreeBSD 5.0 kernel.
Let me ask you this… can you provide an authoritative answer for if the BKL will be completely gone from the Linux 2.6 kernel?
You might want to try working a few more facts into your posts. Your previous post was something on the order of 90% insults, 10% facts. Of course, I suppose that’s the typical consistency for a Linux zealot. Linux gains mindshare through a grassroots disinformation campaign being waged by people just like you.
Since I am no expert about filesystems can someone explain if XFS is better than the other filesystems such as ext3 or ReiserFS?
My reasoning is that if SGI has used it for so long, it must be good, and more mature too.
Or is the truth, that each FS has its own advantages?
XFS has been used for so long by SGI because it is very good at large files, and SGI machines are used for movie graphics, which means huge files.
EXT3 is used by many Linux distros and users because it is easy to get onto a machine running EXT2; basically, just installing it will do it.
ReiserFS is basically a FS that pushes the envelope and is really exciting… it is fast and does a good job on small and mid-size files, so for a desktop machine it does a good job.
XFS is very mature and stable. It’s got features tuned for streaming huge amounts of data to large numbers of files. Its metadata performance is decent, and its small-file performance is pretty good.
Ext3 is incredibly stable. The codebase has been running on Linux specifically for a long time. It offers backwards and forwards compatibility with ext2. It’ll give you the lowest system latencies, because more work has been done on it to break long-held locks than has been done on the other filesystems. It offers data journaling, which provides integrity for data as well as metadata.
ReiserFS is original and very fast for small files. It’s got great metadata performance and a very fast journaling implementation. It can be a whole lot faster for a huge number of small files. It’s reasonably stable, but it has the newest codebase of any of these filesystems, so some people are still hesitant about it. (On a personal note, I’ve subjected the thing to maybe a dozen improper shutdowns from power loss, in the middle of compiles, when my laptop ran out of batteries, and I have yet to lose any data.)
ReiserFS-4, due out with Linux 2.6, promises to be the be-all end-all of filesystems. Supposedly twice as fast, plugin-based architecture, database orientation, suitability for *really small* (few bytes) files. It looks very promising, but is still in alpha ATM.
XFS is very good for both large and small files. You can build a 9PB file or split it up into separate 1kb files in a single directory and access the data randomly as fast as you can seek the disk, more or less. It was designed the right way the first time. The other filesystems each have their advantages.
But for performance I would choose XFS
For stability/reliability JFS
For forward/backward compatibility ext3
For filesystem-databases and experimental features reiserfs.
Thanks for the informative replies, that is what I love about OSNEWS.
You do realize, don’t you, that there are still times when FreeBSD 5.0 uses the BGL?
Adam
So you compile XFS as a module, and load it if you need it. Big deal.
There are only certain parts of the kernel that have to be compiled in. Everything else you can modularise to your heart’s content. Heck, even your root filesystem driver can be compiled as a module and thrown into the initrd.img, Bob’s your uncle, it should work without a hitch.
Windows NT/2000/XP IS NOT A MICRO OR MACH KERNEL; repeat that again whilst sitting in the lotus position. What Windows NT/2000/XP is, is a hybrid of monolithic and microkernel concepts, which results in a uniquely designed kernel.
As for driver support for Linux, the only time a kernel recompile is needed is when a feature is added that relies on an API call not supported in the current version of the kernel. For example, I can’t simply grab a USB driver from 2.4 and expect it to run on Linux 2.0, because there is no support for that particular device in the main kernel.
that article in full…
– better scheduler
– bigger files
– xfs. maybe.
– uhh…
– …that’s it
– the end
Here is a very good list of what is new in 2.5/2.6.
Including CryptoAPI, VM changes, ALSA, Input layer (this is a big userspace change too!), IDE, etc etc.
http://www.codemonkey.org.uk/post-halloween-2.5.txt
So you’re proud to be a troll?
Not especially, but it’s mildly amusing.
However, I must also note that here you’re spouting unsubstantiated hearsay. “An order of magnitude *better*”? Is that supposed to have some sort of technical interpretation? You’re quantifying it as such. Does it refer to thread spawn times? Context switching penalties? Do you have any numbers to corroborate your claims whatsoever?
I do actually, but I’m not allowed to post them (NDA) and I’m not inclined to repeat the benchmarks with publicly accessible software. Suffice to say that pretty much everyone agrees that 4.x’s threading is abysmal. Just ask the Yahoo boys… (now fully expecting you to focus on this paragraph)
So you’re saying that there’s tight coordination between the Linux kernel developers and the glibc developers?
That isn’t what the quoted response was responding to. You said:
no one really knows what they’re doing,
Whups! The kernel developers /do/. Rather amusingly, NPTL is a demonstration of how the Linux kernel developers and the GLIBC developers can and do collaborate.
Fairly lightweight? When everything must be routed through the process table? Surely you jest…
Not at all. Processes in Linux /are/ pretty lightweight, especially when compared to other operating systems like Windows and Solaris. Very low context switch times and COW see to that.
Which was fixed by KSEs in FreeBSD 5.0
Which is /still/ nothing like production ready. But yeah, they’ve finally fixed it… oh, wait, the userspace isn’t done yet.
Whups!
This is completely incoherent drivel which I won’t dignify with a response.
It isn’t anything of the sort. If you can’t answer it because you don’t have the technical knowledge, just say so. @_@
Which can be mitigated by the use of a stateful I/O multiplexing mechanism and asynchronous I/O
Except it can’t, because neither of those things address the issue.
I’m certainly not in the position to comment on if the BGL was completely removed in FreeBSD 5.0. Are you? Judging from your current “angry Linux zealot” reply, I wouldn’t say you have any technichal knowledge of the FreeBSD 5.0 kernel.
Yes, I am and I do. Giant is still alive, well and (ab)used in 5.0.
Let me ask you this… can you provide an authoritative answer for if the BKL will be completely gone from the Linux 2.6 kernel?
No, the BKL won’t be gone completely from 2.6, as it is still useful to have a global synchronisation lock. It won’t actually be used by very much, though… and let’s face it, Linux scalability has already been demonstrated to be pretty good (e.g. the 64-proc Altix, which runs a fairly stock 2.4.19 with patches that have already been accepted into 2.5); the BKL hasn’t been a bottleneck for quite a while.
You might want to try working a few more facts into your posts. Your previous post was something on the order of 90% insults, 10% facts. Of course, I suppose that’s the typical consistency for a Linux zealot.
Whatever. It still beats the “OMG OMG OMG OMG FreeBSD11!!1!!!” ranting emanating from your direction. Hell, you haven’t even demonstrated that anything I’ve said is false. Way to go!
Linux gains mindshare through a grassroots disinformation campaign being waged by people just like you.
lol. That is a typical sore BSD zealot response – “omg, it can’t be that Lunix is better, it must be an evil plot by the Lunix minions to undermine the other OS’s!!11!!!”. Get a grip. Linux is gaining mindshare by being a technically capable operating system with massive market support.
Filesystems don’t belong in the kernel. And they don’t belong in kernel-linked modules.
Filesystem drivers belong where the file system is, on the disk. It’s quite logical, isn’t it?
I hope you’re making a joke.
OMG! FreeBSD!!!!1 FREBSd!111!!!
But seriously, I’m curious about a few things, and you seem to be fairly informed (anyone else who is knowledgeable about this stuff can answer too, of course):
No, the BKL won’t be gone completely from 2.6, as it is still useful to have a global synchronisation lock. It won’t actually be used by very much, though… and let’s face it, Linux scalability has already been demonstrated to be pretty good (e.g. the 64-proc Altix, which runs a fairly stock 2.4.19 with patches that have already been accepted into 2.5); the BKL hasn’t been a bottleneck for quite a while.
It’s still useful to have a global lock? What about Solaris, Irix, and AIX, do they have their own “BKL”s? And if it’s useful, is it ever truly NECESSARY for 2.6 to use the BKL?
Do you know if linux 2.6’s SMP will truly be at the level of the “big boys” (aix, irix, solaris, etc)? Does it even come close?
I know at least the threading is supposed to be at or approaching the level of the “big boys”.
How does the concept of the Big Kernel Lock and/or Big Giant Lock work? Is it just some mutex that locks everything?
Thanks in advance.