“Linux Kernel 2.6 has been in stable release for months now, which is like dog’s years in kernel time. Kernel releases are exciting times for Linux geeks, because it’s just plain fun to be able to replace the kernel on a system, or have several different kernels installed, and choose among them as the whim strikes. Oh yes, you want to gain improved performance and functionality, too.” Read the article at ITManagement.
“Itanium, Intel’s 64-bit x86 processor”
???
I’m not a native English speaker,
maybe I just do not get it right,
right?
w_Tarchalski,
This native English speaker agrees. That statement is flawed – Itanium is an IA-64 chip, not a 64-bit x86 processor.
Is Linux ……………?
A. Ready for the Desktop
B. More Secure
C. Kernel 2.6 Primed for the Enterprise
D. All of the above
Definitely = B & C
A = 60-70% ready for business use. Who cares about home users.
“A = 60-70% ready for business use. Who cares about home users.”
Hopefully someone starts caring, actually, since the majority of households have at least one computer in them. Business is not the only place where money is to be made. People will use at work what they can use at home. It is not as it used to be, where the company dictated the tools. Currently executives decide what tools will be used, and they mostly determine that by what they use at home. If they can go to CompUSA and buy it, that is what the tool will be, with the exception of highly specialized software. Until Linux is ready for the home desktop with full capabilities, it will not be ready for general corporate use either.

As an example, look at Apple. Macs are used mostly for corporate work, as they are the bomb for graphics. For the longest time there were no applications available, so no one wanted to get one for their home. Now they have applications, but home use has not increased much, as the majority still have PCs, which is where the entertainment applications were for the longest time. Recently that has changed, but I think too late. With Linux, at least the games are starting to be developed, which will help make it more feasible to use Linux as the home PC, in which case it will begin to infiltrate the corporate desktop more as well.
Well, one of my computers has 2.6.3, qt 3.3.0, kde 3.2, alsa 1.0.2, k3b 0.11.5, koffice 1.3 and a little more packages. What I can say is: it’s very stable, very responsive and does a lot of things someone working in a company should need. So, yes, I think it’s ready for business use as workstation.
But for server use, I don’t know. It’s always a good thing be conservative.
“A. Ready for the Desktop”
I guess this is up for debate, depending on who you talk to… I have no reason to switch from Windows, so I guess it isn’t quite ready for my desktop.
Only thing is that most (all?) distributions are still 2.4-centric and don’t make use of the kernel’s best features, for example, NPTL. Once they make use of these features, there will be a REAL difference in speed between 2.4 and 2.6 particularly on the server.
Linux is 100% ready for the server or desktop. You just have to find out where it isn’t ready – this tends to be the case particularly with really huge systems that need the best file system and process management available (think multipathing, system partitioning and the like) that are stable and well-known. Give Linux another couple of years for those things to catch up to the 2.6 kernel and really become anvil-reliable, and Linux will be the *only* platform worth thinking about. FreeBSD is great too… but it hasn’t got the momentum (IMHO) to carry it as far, as quickly.
Wait ’til Reiser4 really stabilizes, GlibC has NPTL by default, administrators really get to understand the new scheduler, UDev becomes stable and… a lot of people out there will be floored by the performance of Linux 2.6… and just to prove it will keep getting better all the time, kernel 2.6.3 with minor patching is giving me visible performance improvements over earlier versions. Visible!
Just look for threaded Java application benchmarks with NPTL threads enabled vs. Linuxthreads if you want graphic proof for what that *one* improvement to the Linux systems means.
If you have NPTL enabled system-wide, your Glibc will look like this:
# /lib/libc.so.6
GNU C Library stable release version 2.3.3, by Roland McGrath et al.
Copyright (C) 2004 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Compiled by GNU CC version 3.3.2 20040119 (Gentoo Linux 3.3.3-7, propolice-3.3-7).
Compiled on a Linux 2.6.1 system on 2004-02-13.
Available extensions:
GNU libio by Per Bothner
crypt add-on version 2.1 by Michael Glad and others
NPTL 0.60 by Ulrich Drepper
BIND-8.2.3-T5B
NIS(YP)/NIS+ NSS modules 0.19 by Thorsten Kukuk
Thread-local storage support included.
Report bugs using the `glibcbug' script to <[email protected]>.
— cut
Note the NPTL 0.60 line.
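A simpler check, assuming a glibc-based system where the getconf utility is available, asks glibc directly instead of parsing that banner:

```shell
# Ask glibc which threading implementation it was built with.
# Prints something like "NPTL 0.60" on an NPTL-enabled system, or
# "linuxthreads-0.10" if you are still on old LinuxThreads.
getconf GNU_LIBPTHREAD_VERSION
```

If that prints an NPTL version, threaded apps (JVMs especially) get the new 1:1 threads without any further work.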
Mostly. It’s now very responsive, even under load, and does well (in my experience) with multimedia – better than Windows. The only remaining weak area, in the kernel, is hotplugging devices. I’d like to be able to swap, say, a mouse, have it recognized and instantly put to use, or plug in any number of USB devices and have them “just work.” We’re getting there, though. Everything else above the kernel? I like Gnome quite a lot, although I hadn’t used KDE in a good long while. Plenty of weaknesses, but it’s certainly getting better. I think it’s ready for the business desktop already, where the little things aren’t as important. Many business desktop users “live in” their applications and are scarcely aware of the OS itself. Gnome or KDE are easy enough for this crowd; all we need are more desktop and business apps. OpenOffice, IMO, is very nice and quite “there.”
Hotplugging a USB mouse should be no problem, but XFree86 has to be set up so that it will use it (usually /dev/input/mice) as well. I don’t think that’s a Linux problem, it really should be solved in userspace. And that’s what hotplug+udev try to do (but they don’t configure X). XFree86 really doesn’t work well with dynamic configuration. It’s meant to be static, and that doesn’t work well anymore, now that people have laptops with sometimes external monitors, sometimes external mice, etc.
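To be fair, X can be told to cope today. A sketch of the usual XF86Config workaround (the section Identifier is just an example) points a static InputDevice at the kernel’s multiplexed /dev/input/mice node, which aggregates every attached mouse, so a hotplugged one starts working without restarting X:

```
Section "InputDevice"
    Identifier "HotplugMouse"
    Driver     "mouse"
    Option     "Protocol" "IMPS/2"
    Option     "Device"   "/dev/input/mice"
EndSection
```

It’s a static-config band-aid rather than real dynamic configuration, but it covers the common unplug/replug case.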
X is slowly becoming dynamic.
I believe that most configurations will just auto-detect the input devices (according to the recent X changelogs).
I believe the question has to change a bit, since I believe that Linux+KDE+ ….. is ready for many uses.
My question for you all is :
Are YOU ready for Linux on the Desktop or kernel 2.6.x on your machines ?
It has slowly moved from the software being ready to the users being ready.
Let’s go to work and teach people, get them used to it, and help them believe in it.
I am going there, not totally there yet.
Alex(BR).
Been using it at work for about 5 years now, and at home for maybe 7 years. YMMV, use what you will, personal opinion only: Linux has long been more “desktop ready” and usable than any version of Windows has yet achieved.
Saying “Linux is not attractive, I just use Windows” is the same as saying “Mac OS X is not attractive, I just use Windows”.
As a Linux user, I can say that Linux fulfills _by far_ my desktop needs. “I don’t use it” does NOT mean it can’t do it.
“Linux is 100% ready for the server or desktop. You just have to find out where it isn’t ready”
Uh, you contradicted yourself there.
“Give Linux another couple of years for those things to catch up”
I for one have been sick of waiting for Linux to play catch-up for nearly half a decade now. There are OSs that do enterprise stuff today, and I’m not about to wait another half decade while Linux keeps playing. I get so tired of hearing about the features the developers are working on when what they already have never works quite right.
“Wait ’til Reiser4 really stabilizes, GlibC has NPTL by default, administrators really get to understand the new scheduler, UDev becomes stable and… a lot of people out there will be floored by the performance”
Ugh. Linux.
You’re damning with faint praise. Linux has ALREADY done amazing things on the Internet, the network and the desktop. These new features are more or less ready now – but they will be *NEXT GENERATION* things that transcend what anyone else has now.. not just catching up.
So don’t give me that. It’s crap.
“These new features are more or less ready now – but they will be *NEXT GENERATION* things that transcend what anyone else has now.. not just catching up.”
1:1 threading: Windows and Solaris have had that for quite a while. M:N threading: Solaris has had it before, and FreeBSD is nearly done implementing it.
Security, I’ll admit that SELinux is very cool, but Trusted OS features have been around for ages, Trusted Solaris being one example, and Flask (from which SELinux is derived), and SecureOS (from the folks that own the patents on the SELinux stuff) being another.
Let’s not forget the fact that Windows (for example) is moving to a managed set of APIs. Linux is only now getting an IPSec implementation, whereas Windows and OpenBSD have had them for ages. Let’s not even get into the benefits of kernel crash dumps and kernel debuggers.
ALSA? Please. Mac OS X still kicks Linux’s butt in low-latency audio applications.
LSB hasn’t done much good bringing the various distributions together. The various Linux filesystem layouts are still a mess, and configuration changes from distro to distro on a whim.
Linux. Hype, hype, hype, bash Microsoft, hype, hype, hype.
NPTL is new in the kernel. But it’s not new to open source – there are a few threading libraries out there now, all improvements on the aged Linuxthreads. There have been patches for replacement threading models for years.
A Trusted OS comes only on read-only media – this is a distribution issue, not only an OS issue. If you have a “Trusted” system that doesn’t boot from read-only media, you don’t have a Trusted system.
IPSec in the kernel is no big deal. It’s been in user space for ages and also available as patches. Some of us are not afraid of patches, some of us are. Granted, this is more stable and much more mainstream now that it’s in the kernel, and granted it should have been in some time ago. But remember, the *nix way of doing things usually isn’t IPSec-centric.
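For the curious, here is roughly what the native 2.6 IPSec looks like when driven by setkey from ipsec-tools – a manual-keying sketch only, with made-up addresses, SPIs and keys:

```
# setkey.conf sketch -- every address, SPI and key below is a placeholder
flush;
spdflush;

# ESP security associations between two hypothetical hosts
add 10.0.0.1 10.0.0.2 esp 0x201 -E 3des-cbc "dummykey-24-bytes-long!!";
add 10.0.0.2 10.0.0.1 esp 0x202 -E 3des-cbc "dummykey-24-bytes-long!!";

# Require ESP transport mode for traffic between them
spdadd 10.0.0.1 10.0.0.2 any -P out ipsec esp/transport//require;
spdadd 10.0.0.2 10.0.0.1 any -P in ipsec esp/transport//require;
```

You load it with `setkey -f setkey.conf`; in real life you would let a daemon like racoon negotiate the keys over IKE instead of keying by hand.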
OSS and ALSA work. Whether you like them or not is immaterial. But – Linux developers haven’t had access to original corporate code whereas Apple knows exactly what hardware they are using, and has access to the original development team for that hardware. Maybe my level of knowledge let me set up ALSA easily enough that it didn’t bother me – I dunno. YMMV.
As for distributions, pick one you like and stick with it. This is a silly argument… even if I somewhat agree that an identical layout would make it simpler to go from machine to machine. That said, no version of Windows has been identical to any other version of Windows when it comes to methods of configuration. Windows is barely more consistent and has far fewer features – you seem to consider that a feature in itself.
oops
NPTL is new in the kernel. There have been patches for replacement threading models for years
Out of box experience is important for non-technical users and people making purchasing decisions. These people don’t want to patch kernels for what should be core functionality that they want to know nothing about.
A Trusted OS comes only on read-only media – this is a distribution issue, not only an OS issue. If you have a “Trusted” system that doesn’t boot from read-only media, you don’t have a Trusted system
I’m not the one without a clue here. Read some of the documentation here:
http://www.securecomputing.com/
Read up on SELinux, Trusted Solaris, and other trusted OSs. Booting from read-only media does not make an OS trusted. I’d laugh if I wasn’t so disturbed by your ignorance.
IPSec in the kernel is no big deal. It’s been in userspace for ages and also available as patches. Some of us are not afraid of patches, some of us are
Good for geeks with time to burn, but not for ordinary folks.
But remember, the *nix way of doing things usually isn’t IPSec-centric
What an odd thing to say, and totally irrelevant.
OSS and ALSA work
Yes they do, but neither is a top performer. That was my point. Linux isn’t the best, and it’s nowhere close. It won’t ever get close because it’s a terrible architecture.
an identical layout would make it simpler to go from machine to machine
Yup. Making and STICKING to standards is a good thing. Both Linux and Microsoft are really bad at doing either one.
That said, no version of Windows has been identical to any other version of Windows when it comes to methods of configuration
That was never an issue of mine. Based on my experience, it’s a correct statement at any rate.
“Any other stupid observations?”
Only yours apparently…
OK, regarding my “ignorance”: a trusted system *IS* booted only from read-only media. Yes, there are many other factors, but I stand by that statement. Trusted Solaris is booted off CD, as I recall. It’s called OS hardening, and it doesn’t just mean locking down RPC, pal. I’ve been a sysadmin for a large network on the Internet for 12 years and have only had security issues on systems where a PHP script from a customer’s shared hosting was compromised (most recently, a copy of PHPDig they had running) – or where a system is prepackaged and I have to depend on the vendor (a Cobalt system, and all the Microsoft cruft). Stick that in your pipe and smoke it.
As for my “odd” comment, it’s not odd at all. *nix admins often use tools such as SSH instead of taking the trouble of building multi-purpose tunnels through (public?) networks. IPSec certainly has its place, but it is more relevant to networks running resources (Windows) that have little security and poor remote management capabilities compared to the *nix counterparts.
Regarding your architecture comment, you’ve shown your true colours: you ARE attempting to be a moron. I don’t think you are one, but you’re making a good stab at proving me wrong.