Security and the way windowing is handled remain two of the diminishing differences between Linux and Windows, according to one of the main speakers at Microsoft’s developer conference.
the article carefully omits what exactly the security differences are…
a very surreal article if you ask me. a microsoftie calling the windows kernel & linux kernel very similar? weird…
i particularly like the part where he claims both technologies began in the 70s and blossomed in the 90s. while linux is an evolution of unix which began in the 70s, i’ve never heard a microsoft fan comparing their kernel to dos-predecessors (which i can only assume is what he was referring to)!
We shouldn’t be surprised that a product competing with Windows (Linux) has grown strengths exactly where Windows has weaknesses (stability and security).
For completeness, Linux now focuses on the UI and applications, and Windows on security and stability.
IMHO, I’d rather have the framework of Linux and make it look pretty than have the UI and usability expertise of Windows and have to go back and make it structurally sound.
Comparing windows’ and linux’s focus on security is like comparing apples and oranges – you have limited accounts, and admin accounts versus a true multiuser system. If you bring security-enhanced linux into the picture, the gap becomes even bigger (programs are security-bounded as well). Windows has none of this – and as far as I’ve heard, it’s not getting it anytime soon (why would the NSA waste money on a closed platform?).
UNIX user and group permissions are not that different from Windows NTFS permissions. XP Pro uses restricted access accounts much like Linux, although I usually give my user administrative access.
If Linux ever does end up on the desktop, you can bet Joe Users will be using superuser and administrative permission accounts on that also.
As far as stability, do you mean as a server? Because as a desktop Linux user myself I can’t say Linux apps are any more stable than Windows apps.
XP as a desktop is more stable than Linux as a desktop. The security advantage with Linux is really only a usability/security trade-off, and at the point of hitting the “mainstream” desktop, that would probably be traded back.
while linux is an evolution of unix which began in the 70s, i’ve never heard a microsoft fan comparing their kernel to dos-predecessors (which i can only assume is what he was referring to)!
From what I understand, what I think the guy in the article was referring to was the fact that modern Windows (NT) was “based” on VMS.
the article carefully omits what exactly the security differences are…
Refresh my memory… when has the Windows kernel ever been compromised due to a security issue?
Oh, and this argument is going to change with SP2, because of the firewall being on by default. Linux still runs many services in a listening state by default. Almost any Linux service installed defaults to on as well; W2k3 uses more secure defaults.
W2K3 also uses a much more secure config of IIS, and is pretty fast.
How do I eliminate most of the junk that gets installed?
Don’t install a distro that’s a testbed for brand-new corporate machines.
Even a minimal command line install is 600MB…
Oh, cry me a f’ing river. You can’t even buy a used hard drive under 2 gigs these days.
I think the nt-kernel is based on some kind of vms-kernel with roots in the 70s.
When the fat penguin goes on a diet it might have a shot at truly competing with Windows. My friend installed Fedora Core 2 and that thing is an absolute pig!
How do I eliminate most of the junk that gets installed? Even a minimal command line install is 600MB…
I somehow suspect the “minimal” install gives you more than you’ll get on a standard Windows XP install.
The Linux kernel is usually only a few megs in size as a binary. The rest is the DEs, the libs, the media and office software. I think you’ll be surprised at just how complete a minimal install is. Try using that 600 MB minimal install and then a minimal XP install. I bet there’s a world of difference in the tasks you can accomplish.
As far as security goes, linux relies on security through obscurity. If everyone thinks it’s safe, then, heck, it probably is! It doesn’t matter if there are formal methods to determine that it’s safe, just “pretend” it’s safe and everything will be okay… Baaa!
WTF? How the hell can you qualify that? Do you even know what “security through obscurity” is, you ignoramus? It’s when your security relies upon people not being able to inspect the problem: the source code. Hence it’s a term applied to Windows, and not to Linux, since anybody can download and inspect the Linux source code to try and find holes.
First of all, just because you don’t understand Linux security doesn’t mean nobody does. I do; tell me if there is something you need explained to you…
Second, you were the one who started the name-calling (use your scroll-wheel’s “up” function to see for yourself).
Third, a freshly installed version of Windows XP, which is about as “minimal” as you can get, takes close to 1 GB. Sure, it has a UI and a calculator, but nothing else… Your Fedora Core 1 gives you that and more in less than 600 MB.
Fourth, what you’re criticizing (if it can so be called) is not Linux but the Fedora Core 1 distribution. Linux in itself, the kernel with modules (drivers), is a few MB, and a functional base system (i.e. with nothing in it except the most basic programs, like glibc and the command line) is under 100 MB. If you don’t believe me, try out Gentoo or Linux From Scratch.
– Simon
I think the nt-kernel is based on some kind of vms-kernel with roots in the 70s.
Yeah, headed up by Dave Cutler, who came from the VMS group at DEC. There’s an interesting book out there called Showstopper that details the development of NT.
That’s not the only way. I have a web/mail/FTP/database server with Fedora under 400 MB.
>How do I eliminate most of the junk that gets installed?
>Even a minimal command line install is 600MB…
I installed a standard Mandrake 9.0 minimal system on a 256 MB CF-card-based system. It had a DHCP server, sshd, tftpd, Perl, OSGi AND a Java 1.4 runtime.
All in all it took 120 MB.
So you’ve done something wrong.
Oh, and this argument is going to change with SP2, because of the firewall being on by default. Linux still runs many services in a listening state by default. Almost any Linux service installed defaults to on as well; W2k3 uses more secure defaults.
This is nonsense. First of all there are many different GNU/Linux distributions out there. Which services, if any, are enabled by default is dependent on the distro you are using. The distribution I use doesn’t install any services by default. Other distros usually ask which services to enable at some stage of the install process. Windows doesn’t ask anything and offers very little control over listening processes. It’s difficult enough to stop some of these processes (some require a registry hack), let alone to bind these processes to a specific IP or interface. Windows has a lot of catching up to do in that area.
As for the firewall in SP2: this will certainly be an improvement. I just hope that this time they start it before any processes start listening and not after, as is the case in the earlier versions. Anyway… I doubt that this new firewall will be remotely comparable to iptables in Linux.
So, TenaciousOne
You said, “My friend installed Fedora Core 2…”
Now that is an interesting point.
I’m led to the questions:
How much do you really know about Linux?
How many Linux installs can you say you have done?
How many of these installs have you tried to use yourself?
Is the one your friend installed the only one you’ve ever seen?
Did you use the install yourself, or is this just what you heard your friend say?
How do we know you actually saw the install?
What color was it?
I once had a friend who said he knew how to ride a Harley because I owned a few. I got tired of hearing it, so one day I challenged him to get on one and ride it. He went five feet and dumped it. When I picked him up, I treated him pretty rough, and, still, even though he’d now been on a real Harley, he didn’t know what he was talking about. I guess that’s just the way some of us are?
http://www.damnsmalllinux.org/download.html
Thanks for all your expert opinions, and sorry about those Linux enthusiasts who get a little zealous.
Perhaps with your permission, TenaciousOne, we can all get back to discussing the attributes of the above article?
From what I understand, what I think the guy in the article was referring to was the fact that modern Windows (NT) was “based” on VMS.
That’s a myth propagated by Microsoft because it helps offer validity to Microsoft’s OS. Fact is, Dave Cutler is the author of the NT kernel and of VMS. But Dave did not base NT on VMS; he clearly carried over many of his own innovations, and his source of inspiration is clear. Having said that, Dave was not given free rein to do as he pleased. Microsoft had to muck things up too. AFAIK, some things that Dave wanted, Dave didn’t get. From what I understand, Dave’s desire to have a good OS versus Microsoft’s desire to have a marketing blurb (OO-microkernel, ya right…what lies) was one of many reasons why Dave eventually went on to other things.
UNIX user and group permissions are not that different from Windows NTFS permissions. XP Pro uses restricted access accounts much like Linux, although I usually give my user administrative access.
The differences may be subtle, but they are huge differences. Along those lines, Linux and many of its filesystems support ACLs, so it’s not as if people are stuck with the more simplistic default permission model under Linux (or most Unix implementations, for that matter).
As far as security goes, linux relies on security through obscurity.
No it doesn’t, and your use of that phrase proves that you have no idea what you’re talking about. Security through obscurity is when you try to keep the security details hidden from public comment and/or consumption. Linux is about as open as open can get. Thus, “security through obscurity”, when applied to Linux, is about as false as false can get. On the other hand, Microsoft has traditionally relied heavily on security through obscurity.
Having said that, there is nothing, in absolute terms, that makes “security through obscurity” bad. The problem is, people often think that encouraging ignorance is a security feature in and of itself. It is not. And that is why the phrase is often associated with poor security.
Linux zealots also used to say “I don’t need a stinkin’ pointy-clicky user interface! I’m a real man, I use the command line!!!”
You’re right, it is funny. But not for the reasons you think. It’s funny because you are showing off your ignorance. Hardcore Unix people tend to use the command line more than anything. Just the same, the command line has nothing to do with the amount of additional software which gets installed. Generally speaking, Linux comes with tons more software, even with a minimal installation, than Windows does. Additionally, back when 10 and 20 gig drives were common, taking up 10%-20% of one was wasteful. Thus, Linux being able to install in a 1%-5% footprint, or thereabouts, was a huge advantage. Now, even older drives are 20-40 gig, and Linux still has an advantage. Furthermore, if you really need to run on slim hardware, you still have tons of options with Linux. How about a fully functional Linux environment, with X, in about 10-20 MB? Exactly.
The only thing you’ve proved is that you’re in over your head and that you’re first in line to wear the “zealot” badge.
Oh, and this argument is going to change with SP2, because of the firewall being on by default.
Yes, finally they will make an attempt to be secure. Not because it’s important to them but because they’ve gotten so much bad press from it.
Linux still runs many services in a listening state by default.
Oh really? Care to cite specifics? Mostly, services run because, during installation, you told the installer that you not only wanted them installed, but you wanted them to run. So, this is not a case of poor security; it’s an argument that idiots shouldn’t be system administrators. If you install a service, tell it to run, and refuse to configure it, that’s not the fault of the OS. It’s the fault of the person that installed it.
Almost any Linux service installed defaults to on as well; W2k3 uses more secure defaults.
Really? Care to cite specifics?
W2K3 also uses a much more secure config of IIS, and is pretty fast.
More secure than what? Its previous low levels of security? Pretty fast? Using what metric? Compared to what?
Oh please, go here and download Slax (188 MB) with a full graphical environment, applications…
http://slax.linux-live.org/download.php
“That’s a myth propagated by Microsoft because it helps offer validity to Microsoft’s OS. Fact is, Dave Cutler is the author of the NT kernel and of VMS. But Dave did not base NT on VMS; he clearly carried over many of his own innovations, and his source of inspiration is clear. Having said that, Dave was not given free rein to do as he pleased. Microsoft had to muck things up too. AFAIK, some things that Dave wanted, Dave didn’t get. From what I understand, Dave’s desire to have a good OS versus Microsoft’s desire to have a marketing blurb (OO-microkernel, ya right…what lies) was one of many reasons why Dave eventually went on to other things.”
Bullshit, he had total control over the kernel development, and it’s a great kernel btw. Don’t be jealous, one day you too shall have an HAL and multiple kernel subsystems. Until then go play with that monolithic piece of crap you have on /boot 😛
http://www.winnetmag.com/Windows/Articles/ArticleID/4494/pg/2/2.htm…
“Don’t be jealous, one day you too shall have an HAL and multiple kernel subsystems. Until then go play with that monolithic piece of crap you have on /boot 😛”
ok, so what do you do with all those HALs? run it only on intel systems, right? then what about the windows graphics subsystem inside the kernel? is that a microkernel?
meanwhile linux supports multiple architectures and platforms unlike windows and does have better modularity in the kernel than windows ever did
The article mentions that David Cutler was one of the creators of VMS Unix system. While I heard about VMS and about Unix, I’ve never heard about VMS Unix. Was there really such a system or is this an error in the article? Just curious…
“The article mentions that David Cutler was one of the creators of VMS Unix system. While I heard about VMS and about Unix, I’ve never heard about VMS Unix. Was there really such a system or is this an error in the article? Just curious…”
He was the manager for the VMS project at |D|I|G|I|T|A|L|; it never had anything to do with Unix. I guess this is just another example of how wrong this article was on so many levels.
ok, so what do you do with all those HALs? run it only on intel systems, right?
Plus x86-64 and Itanic.
then what about the windows graphics subsystem inside the kernel? is that a microkernel?
The NT kernel is *based* on a microkernel design; it is not – and has never claimed to be – a full microkernel. OS X’s architecture is similar.
Interestingly, Microsoft developers are reportedly trying to move NT towards a more microkernel-esque architecture now that hardware performance is so much higher. Some of the reports about Longhorn suggest it will be moving the graphics system back out of kernel space, like NT 3.x had.
The graphics subsystem is not in the kernel, it simply runs in kernel space. This is an extremely important distinction.
meanwhile linux supports multiple architectures and platforms unlike windows and does have better modularity in the kernel than windows ever did
False.
Your questions are bait. If you have any ideas that counter my original post then fire away! So far you don’t seem able to provide a good counter argument.
Your motorcycle analogy is very cute, but quite irrelevant not only to the original article but to my original point.
These are the distros, I’ve tried personally:
*) RH6 through RH9 (shrike)
*) Mandrake 8.1
*) Knoppix (don’t remember the version)
*) Fedora Core 1 & 2.
*) Slackware 8.1 & 9.0
*) Vector 4.0 soho edition
*) Lindows 4.5 Developer
BSDs
*) FreeBSD 4.[789]
RH9 seems to be the one most folks want to try.
You see, I install computer systems for a living, so I have a lot of exposure to Linux, but I definitely prefer Windows for all the reasons I mentioned before. I work with people who want to try Linux, then they quickly switch back when they realize it sucks…
i particularly like the part where he claims both technologies began in the 70s and blossomed in the 90s. while linux is an evolution of unix which began in the 70s, i’ve never heard a microsoft fan comparing their kernel to dos-predecessors (which i can only assume is what he was referring to)!
He’s referring to VMS, not DOS.
IMHO, I’d rather have the framework of Linux and make it look pretty than have the UI and usability expertise of Windows and have to go back and make it structurally sound.
It’s not the “framework” or “structure” of Windows that needs to be fixed, it’s the middle layer of APIs and the out-of-the-box default settings.
A lot of the similarities the article mentions are very superficial. Sure, both NT and Linux are monolithic, but then again, most OSs are monolithic. They are both preemptive kernels, but then again, so is any UNIX based on SVR4.
At a lower level, the two systems are quite different. An overriding paradigm in the Windows kernel is the object, while an overriding concept in the Linux kernel is the file. This results in quite different ways of handling things between the two systems. Because of its object-based design, NT treats a virtual memory mapping as an object, with an API specific to virtual memory mappings. Linux, on the other hand, treats a virtual memory mapping as a file, and uses the same API for it as it uses for files. The Linux approach makes certain things much easier, because it minimizes the number of concepts. Putting a swap-file on your graphics card memory is possible on a Linux system, not because it’s a feature that makes all that much sense, but simply because Linux allows you to combine a small number of primitives in many different ways. In contrast, NT tends to have different primitives for different actions.
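To make that concrete, here is a minimal sketch in Python (whose mmap module wraps the underlying mmap(2) call; the file name is just an illustration): a file-backed mapping and a plain anonymous memory mapping go through exactly the same interface.

import mmap

# Everything-is-a-file in practice: first map a real file into memory.
with open("example.dat", "wb") as f:        # file name is illustrative
    f.write(b"\x00" * 4096)

with open("example.dat", "r+b") as f:
    file_map = mmap.mmap(f.fileno(), 0)     # map the whole file
    file_map[:4] = b"swap"                  # write straight through the mapping
    file_map.close()

# An anonymous mapping (plain virtual memory) uses the very same API.
anon_map = mmap.mmap(-1, 4096)              # -1 means no backing file
anon_map[:4] = b"swap"
anon_map.close()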
The two systems are different in other ways too. NT still has some vestiges of a microkernel, so it has a Win32 server (csrss.exe) running in kernel space. Linux has no such thing. NT handles I/O using a packet mechanism (I/O request packets), while Linux uses a file-based API.
Also, the conclusion that the Microsoftie comes to, that Linux is playing catch-up, is misguided. Yes, the NT kernel is more advanced in certain ways. The NT security architecture, for example, is quite powerful (although so complex as to deter usage). In other ways, Linux is more powerful. ReiserFS (especially v4) and XFS are quite a bit more advanced than NTFS. NT’s VM uses a lot of heuristics and guesswork, making it fragile, while Linux’s tends to use more algorithmically sound methods. If you’re interested in the two systems, I recommend Tanenbaum’s “Modern Operating Systems.” His coverage of the topic is much more detailed and accurate than any Microsoft evangelist’s.
Comparing windows’ and linux’s focus on security is like comparing apples and oranges – you have limited accounts, and admin accounts versus a true multiuser system.
NT is multiuser. Always has been.
If you bring security-enhanced linux into the picture, the gap becomes even bigger (programs are security-bounded as well). Windows has none of this – and as far as I’ve heard, it’s not getting it anytime soon (why would the NSA waste money on a closed platform?).
NT has all of it. Always has. It’s just that very few people *use* it, just like very few people are ever going to use all the bells and whistles added in SELinux.
“As far as security goes, linux relies on security through obscurity.
No it doesn’t, and your use of that phrase proves that you have no idea what you’re talking about. Security through obscurity is when you try to keep the security details hidden from public comment and/or consumption. Linux is about as open as open can get. Thus, “security through obscurity”, when applied to Linux, is about as false as false can get. On the other hand, Microsoft has traditionally relied heavily on security through obscurity.”
Look up the word “obscurity” in the dictionary. Just because open-source “eyes” are looking at it doesn’t mean it’s secure. Please, if you know, tell me what the formalized testing process is for Linux?
What’s wrong, cat got your tongue?
UNIX user and group permissions are not that different from Windows NTFS permissions.
Actually, traditional unix user/group/other permissions are a functional subset of NTFS ACLs.
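As a rough sketch of what that subset means (Python, with an illustrative file name; “alice” is a made-up user): the classic mode bits can only speak about owner, group and everyone else, while an ACL can also name an individual extra user.

import os, stat

# Traditional Unix model: one owner, one group, everyone else.
open("example.txt", "w").close()                                      # illustrative file
os.chmod("example.txt", stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)   # rw-r-----

# Granting just one additional named user write access is exactly what
# this model cannot express; that is what ACLs add, e.g. with POSIX ACLs
# on Linux:  setfacl -m u:alice:rw example.txt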
XP Pro uses restricted access accounts much like Linux, although I usually give my user administrative access.
You shouldn’t. Run as a User or Power User. If you absolutely must, run specific programs under an Administrator-level account via the “Run As” command, but running as an admin *all the time* is a sure way to a system infested with nasties.
After automating LFS and most of BLFS I found that a complete binary install of all the compilers, tools, X Windows, GNOME 2.6.x, KDE 3.2.x and the 2.6 kernel takes up a little more than 600MB. That doesn’t include the servers, an office suite or extra desktop applications, but it does include EVERYTHING you need to work on your system and its software/source code, which takes up around 10GB in both compressed and uncompressed form. So now I’m using 20GB partitions to work on my distribution and find I have a few gigs to spare. I was trying to do it in 10GB, which worked well until I added all that GUI stuff on top of it.
It’s really amazing how little storage space all these ASCII init scripts and config files require. Most of the bloat in my distro is the compilers and other tools to work on all this wonderful software, not the software itself.
“The graphics subsystem is not in the kernel, it simply runs in kernel space. This is an extremely important distinction.”
True, in fact even the memory manager is separate from the kernel itself. Also NT subsystems are very different from Linux kernel modules. The NT subsystems use message passing for communicating with each other, working as separate entities. Linux kernel modules are nothing more than dynamically linked code, forming one big monolithic blob. Windows isn’t crap because of the kernel, it’s crap because of the Win32 API (soon to change with Longhorn).
That’s a myth propagated by Microsoft because it helps offer validity to Microsoft’s OS. Fact is, Dave Cutler is the author of the NT kernel and of VMS. But Dave did not base NT on VMS; he clearly carried over many of his own innovations, and his source of inspiration is clear.
Uh, what’s the myth that’s supposedly being propagated? Microsoft have never said NT was based on VMS code (although the small matter of several hundred million in an out-of-court settlement between DEC and Microsoft might suggest otherwise); they’ve only said it was based on the same concepts and architecture – which, given that it’s true, is hardly a “myth”.
Having said that, Dave was not given free rein to do as he pleased. Microsoft had to muck things up too. AFAIK, some things that Dave wanted, Dave didn’t get. From what I understand, Dave’s desire to have a good OS versus Microsoft’s desire to have a marketing blurb (OO-microkernel, ya right…what lies) was one of many reasons why Dave eventually went on to other things.
Cutler had pretty much free rein with the initial design and implementation of NT and the ongoing low-level details. The parts he was overruled on were mainly at a higher level, to do with the GUI/display system and the Win32 API.
That’s a myth propagated by Microsoft because it helps offer validity to Microsoft’s OS. Fact is, Dave Cutler is the author of the NT kernel and of VMS. But Dave did not base NT on VMS; he clearly carried over many of his own innovations, and his source of inspiration is clear. Having said that, Dave was not given free rein to do as he pleased. Microsoft had to muck things up too. AFAIK, some things that Dave wanted, Dave didn’t get. From what I understand, Dave’s desire to have a good OS versus Microsoft’s desire to have a marketing blurb (OO-microkernel, ya right…what lies) was one of many reasons why Dave eventually went on to other things.
Yeah, it’s such a myth that Digital sued MS over the NT kernel. I’m sure companies are able to sue over urban legends and win, right?
Dave went on to other things ? Then who is the Dave Cutler currently at MS heading the development of the 64-bit version of windows ? Can’t be the same guy can it ?!??
You haven’t a clue dude. Might be a good idea to keep posting as anon.
“The graphics subsystem is not in the kernel, it simply runs in kernel space. This is an extremely important distinction.”
that isn’t a microkernel by any means, so there goes your argument
“meanwhile linux supports multiple architectures and platforms unlike windows and does have better modularity in the kernel than windows ever did”
“false”
true
Linux was first developed for 32-bit x86-based PCs (386 or higher). These days it also runs on (at least) the Compaq Alpha AXP, Sun SPARC and UltraSPARC, Motorola 68000, PowerPC, PowerPC64, ARM, Hitachi SuperH, IBM S/390, MIPS, HP PA-RISC, Intel IA-64, DEC VAX, AMD x86-64 and CRIS architectures.
If you’re interested in the two systems, I recommend Tanenbaum’s “Modern Operating Systems.” His coverage of the topic is much more detailed and accurate than any Microsoft evangelist’s.
An excellent summation by Rayiner Hashem, as always. I second his recommendation of Tanenbaum’s book, which should be a standard fixture on the bookshelf of anyone interested in the theoretical side of computing.
Yes, finally they will make an attempt to be secure. Not because it’s important to them but because they’ve gotten so much bad press from it.
true
Oh really? Care to cite specifics? Mostly, services run because, during installation, you told the installer that you not only wanted them installed, but you wanted them to run. So, this is not a case of poor security; it’s an argument that idiots shouldn’t be system administrators. If you install a service, tell it to run, and refuse to configure it, that’s not the fault of the OS. It’s the fault of the person that installed it.
I know for sure Mandrake does; it is THE most popular Linux distro. Someone else will have to clarify, but I believe Fedora and SuSE also do this. I still don’t see what the user has to do with anything though; you can’t blame the user for an operating system’s faults.
just run a $netstat -l | wc on any linux distro to see what I am talking about, there are a pile of services in a listening state on any given Linux distro, not unlike modern-day XP installs though.
W2K3 also uses a much more secure config of IIS, and is pretty fast.
More secure than what? Its previous low levels of security? Pretty fast? Using what metric? Compared to what?
Win2k ran IIS (80/443) and SMTP on a default install.
IIS 5 was plagued with problems because of script mappings to .htr, .idc, .ida, .idq, etc. that nobody needed but were there anyway; IIS 6 does not map these.
IIS 5 also shipped with many “sample applications” in the script directory that were plagued with holes; they avoided this mistake the second time around.
The default install of 2k3 server does not even run IIS. (you didn’t know this?)
Here are some benchmarks
http://download.microsoft.com/download/0/7/1/0715a190-70f5-4b0d-8ce…
http://www.veritest.com/clients/reports/microsoft/ms_competitive_we…
MS also released a benchmark comparing IIS 4, 5, and 6 that I can’t find the link for. I am sure you will call the links biased, but you are free to present your own benchmarks.
ok, so what do you do with all those HALs? run it only on intel systems, right?
NT was at one time or another available on more than just x86. MIPS, Alpha and PPC versions were produced.
Yes NT is portable.
“I know for sure Mandrake does; it is THE most popular Linux distro. Someone else will have to clarify, but I believe Fedora and SuSE also do this. I still don’t see what the user has to do with anything though; you can’t blame the user for an operating system’s faults.”
mandrake gives an option during installation, and it is not the most popular distro; that is redhat.
” Yes NT is portable”
theoretical portability doesn’t count.
theoretical portability doesn’t count.
I was running NT on an alpha. Nothing theoretical about it.
” I was running NT on an alpha. Nothing theoretical about it. ”
i am not talking about history
true
False.
[meanwhile linux supports multiple architectures and platforms unlike windows]
NT currently runs on x86, x86-64 and Itanic. Previously, it has been commercially available for x86, Alpha, MIPS and PPC. Official support for non-x86 CPUs ended at NT4, but an Alpha port of Windows 2000 was available up until the second beta release. Originally it was developed on the Intel 860. Internal ports have also reportedly been done to the SPARC and HP PA-RISC processors.
Reports on Microsoft’s processes have highlighted how they make a strong effort to avoid any platform specificity in NT’s development.
NT supports multiple architectures and based on its history, I’d say it’s pretty safe to say it has remained highly portable, as designed.
[and does have better modularity in the kernel than windows ever did]
NT’s architecture is modular, layered and designed around the principles of a microkernel. Linux’s is not, as Tanenbaum continually likes to point out, although loadable kernel modules have been subsequently hacked in. One need only look at a low level architecture diagram to see the differences.
Linux was first developed for 32-bit x86-based PCs (386 or higher). These days it also runs on (at least) the Compaq Alpha AXP, Sun SPARC and UltraSPARC, Motorola 68000, PowerPC, PowerPC64, ARM, Hitachi SuperH, IBM S/390, MIPS, HP PA-RISC, Intel IA-64, DEC VAX, AMD x86-64 and CRIS architectures.
Linux has been ported to numerous platforms, but it was never *designed* to be portable like NT was (nor was it designed with SMP systems in mind, like NT was, but that’s another discussion).
“but it was never *designed* to be portable like NT was (nor was it designed with SMP systems in mind, like NT was, but that’s another discussion).”
oh so you cannot rewrite things and make them work?. MS can claim that they design for this and that. in ground reality linux is now more portable and modular
you forgot PPC as still (or should I say again) supportet.
i am not talking about history
You claim that NT is theoretically portable. I point out that indeed NT has been ported to other processors and now it’s “oh, that’s in the past”.
Yeah, I’m sure the design has changed so much that it’s now unportable.
“You claim that NT is theoretically portable. I point out that indeed NT has been ported to other processors and now it’s “oh, that’s in the past”.
Yeah, I’m sure the design has changed so much that it’s now unportable.”
you were the one talking about NT and yes the design has changed and HAL is useless now
“The graphics subsystem is not in the kernel, it simply runs in kernel space. This is an extremely important distinction.”
It’s not at all an important distinction. Anything in kernel space can crash the kernel. Graphics are complicated, and don’t belong in kernel space, because of the dangers of crashing the kernel. Kernel-space graphics don’t speed things up anyway over a proper DRI-style architecture.
True, in fact even the memory manager is separate from the kernel itself.
This is not true. NT never had a separate pager like other microkernel OSs.
Also NT subsystems are very different from Linux kernel modules. The NT subsystems use message passing for communicating with each other, working as separate entities.
Not necessarily. Some subsystems are separate programs (csrss.exe) and many aren’t.
Thank you, TenaciousOne
This will probably get moderated down like most of your posts have, but I feel so much better now.
Sorry, it never occurred to me that you had actually tried Linux. After seeing all those different distros you listed, I believe now that if anyone can speak with authority, it must, of course, be you. I see now that we Linux users have all just been fooling ourselves. “…sucks,” you say. How come I could not see that before you came along? Thank you.
I want to thank you, again, for setting me so very straight on these matters and sharing your wonderful evaluation skill. I mean, who can “argue,” as you would, with such brilliant insightful experience — “argue?”
How can all these foolish Linux users possibly understand? Yeah, like you said, what “…security?” What could possibly have all these people so misdirected? I mean even companies like Novell, and IBM, and Sun — how can they not see what you see? Investing all those millions — those fools. They should have talked to you first — am I right? Aaaa!
Sorry about the motorcycle parable, I should have known it had no point. After all, if you can’t understand it, who can?
Again, thank you so very very much.
Bullshit, he had total control over the kernel development, and it’s a great kernel btw. Don’t be jealous, one day you too shall have an HAL and multiple kernel subsystems. Until then go play with that monolithic piece of crap you have on /boot 😛
How small-minded of you. I’ve read otherwise. So, I guess we could take your word for it, or we can take Dave’s. I’ll take Dave’s word if it’s okay with you. Also, notice that you’re acting like a crackpot. No one said the kernel wasn’t good. I did say that some of Dave’s ideas were not allowed to be integrated because Microsoft had to meet their buzzword quota, which is also true. The reason that Dave no longer actively works on the kernel is that Microsoft did not want to follow the path that Dave laid out for the kernel. The conflict started before the kernel was ever completed and continued to grow over time. This should not be a surprise for anyone that has to function in the business world. Simple fact is, the people writing the checks get the final word. *Gasp* Then again, maybe reality is what you’re jealous of.
The REALLY funny part of your comment, which highlights your sheer ignorance of the subject, is that the NT kernel IS a monolithic kernel. That’s so friggen funny that you think it’s not. Simple fact is, even when MS insisted that it was a microkernel (which everyone but Microsoft rejected), it was a semi-hybrid-microkernel. I say “semi” because the design did have some facets of a microkernel. Just the same, it was, at that time, still mostly a monolithic kernel design. Now, since they’ve continued to refine it to improve performance and reliability, they have a monolithic kernel, pure and simple. If you think otherwise, you’re not facing facts in the least. Clearly, MS’ spin on kernel design is interesting, but it is a monolithic design. Microkernel designers sure won’t claim the design as such, and it’s close enough to monolithic in every way that matters that kernel designers consider it as such. If the sole qualifier is that it passes messages, then I’ll give you that it’s a microkernel. If that’s not the sole qualifier, then it’s a monolithic design. Which, I’m not saying there is anything wrong with.
The rest of your comment only highlights that not only are you completely ignorant, but you’re a zealot too. The combination is what makes you simply want to send flowers to your parents.
My favorite part of MS calling their kernel OO is the fact that an integer is considered to be an object. Well, by their definition, all kernels are OO. Give me a break. If we look back at history, microkernels and OO were both the buzz of the industry. No surprise that MS wanted to leverage as much marketing buzz-hype as possible. After all, that’s what MS does best. Remember, NT’s kernel is in C, not C++. Objects in C are pretty much structures. I remember reading some MS docs, many years back, where they implied that their object abstraction was so great, they considered integers to be objects. I’m being serious. So basically, even if you ignore the “integer” remark, they are as much an OO-implemented kernel as Linux is, which is complete BS. Now then, if they want to say that their kernel supports many object-based (OB) abstractions for their interfaces, I’ll happily buy into that.
Here’s the knee-slapper. According to the link you provided, Linux is a microkernel because it can dynamically load kernel modules and device drivers. But, in reality, we all know and understand that it’s a monolithic kernel, just as the current NT kernel is. In spite of the marketing crap that you’ve obviously and ignorantly bought into, there is nothing wrong with a monolithic design. Linux, Unix, and NT are all good arguments to support that position. On the other hand, QNX is an excellent example of a good microkernel.
The graphics subsystem is not in the kernel, it simply runs in kernel space. This is an extremely important distinction.
I disagree. In what way do you consider it to be important? Do you think the distinction adds security or stability? This is like saying that modules on Linux are not part of the kernel but run in kernel space. In every way that matters, there is no distinction.
Yes, the NT kernel is more advanced in certain ways. The NT security architecture, for example, is quite powerful (although, so complex as to deter usage).
I’m not sure that’s been true for a while now. Linux has had ACL support for a long time now. There have been many other significant security efforts, such as trusted Linux, and so on. Basically, Linux has it all and has had it for a while now.
NT is multiuser. Always has been.
That’s actually incorrect. Microsoft redefined the word, IMO. NT, when it arrived, was a multitasking, multi-non-concurrent-user OS. They are now truly multiuser, but they were not when it came out. Think about the distinction there for a bit. It’s subtle but VERY significant. Now then, think about terminal services and what it really provides. Seriously. Think about what existed before terminal services. Think about how many multiuser, third-party add-ons simply didn’t work right or had problems from SP to SP because of what was missing. Seriously, think about this.
It’s just that very few people *use* it, just like very few people are ever going to use all the bells and whistles added in SELinux.
You have any data to support such a comment? I personally think it’s going to become more and more common as Linux enters deeper into the enterprise.
Look up the word “obscurity” in the dictionary. Just because open-source “eyes” are looking at it doesn’t mean it’s secure.
Do you even have a dictionary? No one made that argument. Just the same, these are two completely orthogonal issues which have absolutely nothing to do with each other. Anyone that argues Linux is “security through obscurity” is an absolute moron, unfit to debate here because they do not live in our reality. Either there is a serious language divide between the phrase as used and its real definition, or you’re a fool.
Uh, what’s the myth that’s supposedly being propagated?
The myth is that NT is *BASED* on VMS. It’s not. In absolute terms, it is NOT *BASED* on VMS. It is, however, strongly influenced and has MANY things in common, for obvious reasons. When people use the word “based”, they seem to imply that it’s VMS reworked. Simple fact is, it’s not the same code, it’s only some of the same ideas. I suppose you could argue that Linux is BASED on Unix as it actually has some Unix code in it. The same is not true for the NT/VMS connection. Thus, saying NT is inspired by VMS is a much, much, much more accurate depiction.
Dave went on to other things ? Then who is the Dave Cutler currently at MS heading the development of the 64-bit version of windows ? Can’t be the same guy can it ?!??
Oh sheesh. Talk about clueless. Since when does “went on to other things” mean that he left MS or even the IT field? You are clueless.
I know for sure Mandrake does; it is THE most popular Linux distro.
You’ve been lied to. Mandrake asks you which services you want to run. Likewise, it optionally asked me which services to install. Mandrake has done this for several major releases.
just run a $netstat -l | wc on any linux distro to see what I am talking about,
Troll or misleading… I’m not sure which to call that. Unix sockets are a common form of IPC on Unix/Linux systems. Just because you see Gnome running (which is a good example of what you’d see represented with your command) doesn’t mean it’s exposed to danger. In fact, unix sockets are only accessible via local access. Your command only shows ignorance of Unix and Linux in general. Processes which use IP and ONLY listen locally would still show up in that list. That list proves nothing, other than that there is a command which allows someone to easily see what services are currently listening on their system. It says nothing about risk or issues of locality.
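To illustrate the locality point, here is a small sketch in Python (the socket path and port number are arbitrary examples): both listeners below show up in netstat -l style output, yet neither one is reachable from another machine.

import os, socket

# A Unix-domain socket: pure local IPC, no network exposure at all.
if os.path.exists("/tmp/example.sock"):
    os.remove("/tmp/example.sock")        # clean up a stale socket file
unix_srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
unix_srv.bind("/tmp/example.sock")        # path is illustrative
unix_srv.listen(1)

# A TCP socket bound only to the loopback interface: it is "listening",
# but only processes on this same machine can ever connect to it.
tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind(("127.0.0.1", 8080))         # port is illustrative
tcp_srv.listen(1)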
Operating System Concepts by Silberschatz and Galvin
It has several case studies including NT, Linux and 4.3BSD.
Damn. An intelligent poster, that knows what the hell they are talking about! Thank you! =D
I did say that some of Dave’s ideas were not allowed to be integrated because Microsoft had to meet their buzzword quota, which is also true. The reason that Dave no longer actively works on the kernel is that Microsoft did not want to follow the path that Dave laid out for the kernel.
Sounds interesting – more details ?
The REALLY funny part of your comment, which highlights your sheer ignorance of the subject, is that the NT kernel IS a monolithic kernel. That’s so friggen funny that you think it’s not. Simple fact is, even when MS insisted that it was a microkernel (which everyone but Microsoft rejected), it was a semi-hybrid-microkernel.
I don’t think I’ve ever seen any *technical* documentation claiming NT is/was a microkernel. However, I’ve seen lots claiming it’s a *microkernel-like* design.
If you’ve an even remotely technical bent, you should treat marketing tripe with the contempt it deserves.
I disagree. In what way do you consider it to be important?
Because it’s a helluva lot easier to move a separate, discrete module than it is to edit a hunk of monolithic code.
Do you think the distinction adds security or stability?
Only indirectly. It improves portability and code manageability.
That’s actually incorrect. Microsoft redefined the word IMO.
The definition of multiuser I was taught and have always understood is the ability to have multiple processes running simultaneously in separate, discrete user contexts.
It does not mean being able to remotely log in, nor does it mean having virtual consoles, nor does it require the ability to have multiple interactive users – all of these things are quite doable *without* being multiuser, and none are required for an OS to be multiuser. You can make Windows or OS/2 “multiuser” if you want to go with those conditions.
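As a concrete (if contrived) sketch of that definition on a Unix-like system, in Python: two processes running at the same time under different user identities. It has to be started as root to be allowed to switch identity, and uid 1001 is just an arbitrary example.

import os

pid = os.fork()
if pid == 0:
    os.setuid(1001)                       # child drops to an unprivileged uid
    print("child running as uid", os.getuid())
    os._exit(0)
else:
    print("parent running as uid", os.getuid())
    os.waitpid(pid, 0)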
NT, when it arrived, was a multitasking, multi-non-concurrent-user OS.
NT has always had the ability to run processes in separate user contexts. It has not always shipped with the ability for remote logins or multiple interactive users, although such functionality has often been available from third parties, or in unsupported form from Microsoft themselves.
They are now truly multiuser, but they were not when it came out. Think about the distinction there for a bit. It’s subtle but VERY significant. Now then, think about terminal services and what it really provides. Seriously. Think about what existed before terminal services. Think about how many multiuser, third-party add-ons simply didn’t work right or had problems from SP to SP because of what was missing. Seriously, think about this.
Whether or not some applications did the right things with regards to storing configuration details, playing friendly with restrictive file permissions or simply understanding the concept of multiple users is irrelevant; the OS is multiuser.
I could write an application for Linux that required write permissions to system directories, kept all of its configuration data under /etc and would only run a single instance at a time, but that wouldn’t suddenly make Linux single-user, just because that one application suddenly was.
You have any data to support such a comment?
Only experience, the observation of how few Linux distributions ship with SELinux even enabled – let alone properly configured – and the traditional belief of Unix sysadmins that user/group/other file permissions is good enough to secure anything.
I personally think it’s going to become more and more common as Linux enters deeper into the enterprise.
I doubt it, for the same reason advanced permissions on NT aren’t – they’re conceptually difficult, a pain to plan, implement and manage and, for the most part, simply unnecessary.
ACLs aren’t required by everything, by any stretch – but when you need them they’re damn handy.
About the only thing ACLs will be heavily used for are those one-off special permissions for certain users that, traditionally in Unix, required much screwing around with multiple group memberships.
The myth is that NT is *BASED* on VMS. It’s not. In absolute terms, it is NOT *BASED* on VMS. It is, however, strongly influenced and has MANY things in common, for obvious reasons. When people use the word “based”, they seem to imply that it’s VMS reworked. Simple fact is, it’s not the same code, it’s only some of the same ideas.
Most every time I see the “NT is based on VMS” statement made, the context clearly indicates that it is “based” on it in terms of design features, concepts and low level details – not code.
Even in the comment you were referring to, it certainly appeared obvious to me the poster was not suggesting any sort of source code legacy.
It is, however, strongly influenced and has MANY things in common, for obvious reasons. When people use the word “based”, they seem to imply that it’s VMS reworked.
I found this in a quick search:
http://www3.sympatico.ca/n.rieck/docs/Windows-NT_is_VMS_re-implemen…
Simple fact is, it’s not the same code, it’s only some of the same ideas.
Again, I’ve never seen anyone seriously suggest NT is worked from VMS code, except while trolling. And, more accurately, it’s *most* of the same ideas.
I suppose you could argue that Linux is BASED on Unix as it actually has some Unix code in it. The same is not true for the NT/VMS connection. Thus, saying NT is inspired by VMS is a much, much, much more accurate depiction.
Linux is an attempt to reimplement Unix. In that sense, it is most definitely “based” on Unix.
you were the one talking about NT […]
Windows 2000, XP, 2003 and Longhorn _are_ Windows NT.
[…] and yes the design has changed and HAL is useless now
Evidence ?
Now, is he talking about the NT kernel? I will assume that’s what he means, because if memory serves, Microsoft bragged about developing a grand microkernel and later admitted it couldn’t be considered that anymore.
Linux started as monolithic (I could definitely be wrong here though), and remains that way.
So who is changing or are they really alike?
I think what he means is that Linux is now starting to get badly written drivers from companies, like Intel’s e1000 driver that will mistake a 100Mb NIC for a 1Gb NIC. But even Mac has these troubles; a friend’s PowerBook has the CPU in constant use (under 10%) by her printer software (HP) (not a driver, but related to hardware at any rate).
He didn’t seem to support his claim at all. He mentions preemption and one other thing. Oh my! Preemption certainly made my machine noticeably quicker (I think not)! Yea… I think this guy is just looking to find his way into the net presses.
Evidence ?
It (Anonymous (IP: 61.95.184.—)) has no trustworthy evidence to provide. All it does is spread lies about how great Linux, Firefox and GNU are, and how bad everything else is, twisting numbers, confusing terms, and downplaying irrefutable facts as not important to “average users.”
I know it’s hard to ignore it, but we should at least try.
“you were the one talking about NT […]
Windows 2000, XP, 2003 and Longhorn _are_ Windows NT.”
nt was portable. the rest aren’t
[…] and yes the design has changed and HAL is useless now
Evidence ?”
the evidence is that hal is supposed to give portability, in reality windows is now solely intel based stuff giving no meaning to hal
any thing else?
“Only experience, the observation of how few Linux distributions ship with SELinux even enabled – let alone properly configured – and the traditional belief of Unix sysadmins that user/group/other file permissions is good enough to secure anything.”
well, since there is not a single distribution that enables SELinux by default (fedora core 2 ships with it disabled), your experience is right, and that’s because selinux is new and requires work to be enabled throughout the system, not because it isn’t a good option. there is a difference between an option that is immature but viable and one that isn’t viable.
“It (Anonymous (IP: 61.95.184.—)) has no trustworthy evidence to provide.”
what trustworthy evidence did YOU provide? i have been backing up every single statement i made. however, because you won’t have a good discussion you decide to troll. if you want to ignore me, just do it. i wouldn’t care, but preaching to others without providing anything additional to the discussion is basically useless
nt was portable. the rest aren’t
Yes, they are.
the evidence is that hal is supposed to give portability, in reality windows is now solely intel based stuff giving no meaning to hal
NT is publicly available on x86, x86-64 and Itanium. It remains portable.
So much FUD and disinformation being thrown around here…
…ah forget it. It’s only a message forum.
“Yes, they are. ”
i don’t see it on any other architecture except x86/x86-64. look at linux, netbsd and stuff. that’s portable. you can’t just declare it remains portable. where is the same evidence you wanted?
“the evidence is that hal is supposed to give portability, in reality windows is now solely intel based stuff giving no meaning to hal
NT is publicly available on x86, x86-64 and Itanium. It remains portable.”
again, you haven’t even read what i said. i said it’s only for intel now and that’s the same thing you are telling me
“i don’t see it on any other architecture except x86/x86-64. look at linux, netbsd and stuff. that’s portable. you can’t just declare it remains portable. where is the same evidence you wanted?”
The Itanium chip is not an x86 chip.
i don’t see it on any other architecture except x86/x86-64. look at linux, netbsd and stuff. that’s portable. you can’t just declare it remains portable. where is the same evidence you wanted?
Which part of “Itanium” are you having trouble understanding ?
Substitute each letter with the letter that follows it in the ASCII chart:
“VMS” turns into “WNT” (Windows NT).
“HAL” turns into “IBM” (from the ‘Space Odyssey 2001’ movie)
Isn’t it proof that WinNT derives from VMS :-)))) ?
Just ask google about ‘IBM HAL VMS WNT’
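For the curious, the letter game is trivial to reproduce; a quick Python one-off:

def shift(word):
    # replace each letter with the one that follows it in the alphabet
    return "".join(chr(ord(c) + 1) for c in word)

print(shift("VMS"))   # -> WNT
print(shift("HAL"))   # -> IBM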
I SAID: ” Yes NT is portable”
YOU SAID: theoretical portability doesn’t count.
I SAID: You claim that NT is theoretically portable. I point out that indeed NT has been ported to other processors and now it’s “oh, that’s in the past”.
NOW YOU SAY: you were the one talking about NT and yes the design has changed and HAL is useless now
I’ll have to remind myself not to waste another post on you because I don’t think you know WTF you are talking about period – NT, OS X, BeOS, your mother’s custom Linux Kernel on your microwave – whatever.
oh so you cannot rewrite things and make them work?. MS can claim that they design for this and that. in ground reality linux is now more portable and modular
Why does it matter ?
If any processor gains massive market acceptance, MS is likely to port NT to it.
I mean this is getting kinda funny don’t you think ?
MY OS IS MORE MODULAR THAN YOURS!
NO ITS NOT! MINE IS!
NO MINE!
MINE!
MINE!
haha. Gotta love geeks.
It’s not at all an important distinction. Anything in kernel space can crash the kernel. Graphics are complicated, and don’t belong in kernel space, because of the dangers of crashing the kernel.
I’d agree with this considering one of the sure fire ways to crash NT is to install a less than stable video driver.
So anyone have any real insight into why this design decision was made ?
*Please no stupid “Cuz M$ is dum” responses anyone. I’m sure Cutler and the rest of the ex-Digital engineers who wrote NT have more brains than you do*
Oh sheesh. Talk about clueless. Since when does “went on to other things” mean that he left MS or even the IT field? You are clueless.
You like to lay it out like Cutler left NT development over problems with MS. He went on to other things. Ok, fair enough, that’s exactly what I do when a project is completed. I’m also inclined to return to projects when it’s time to pick up the development axe again. I’m sure Cutler tried a few other things himself during NT kernel dev downtime.
You failed to mention that Cutler is still leading NT kernel development and that led clueless individuals like me to take your post as misinformed, or as you like to put it, clueless.
So welcome to the club as all of us here are more or less clueless.
the evidence is that hal is supposed to give portability, in reality windows is now solely intel based stuff giving no meaning to hal
any thing else?
The fact that they have settled on providing only NT versions that utilize x86 or Intel-specific processors (Itanium) doesn’t mean the HAL has no meaning. It simply means MS didn’t see a return on the other ‘non-Intel’ ports so they stopped producing them.
“This is not true. NT never had a separate pager like other microkernel OSs.”
If you check out this link:
http://msdn.microsoft.com/library/en-us/dngenlib/html/msdn_ntvmm.as…
you will see:
The Virtual-Memory Manager (VMM)
The virtual-memory manager in Windows NT is a separate process that is primarily responsible for managing the use of physical memory and pagefiles.
You were, however, right about some subsystems being joined together in one larger subsystem and communicating via direct function calls instead of message passing. It’s still a subsystem separate from the kernel though.
“I’d agree with this considering one of the sure fire ways to crash NT is to install a less than stable video driver.
So anyone have any real insight into why this design decision was made ? “
Because x86 in general makes context switching between kernel mode and user mode expensive in terms of clock cycles, i.e. on x86 it’s slower to have userland graphics.
I think it was Andrew S. Tanenbaum who once said (early ’90s) that it didn’t matter because we would all be using RISC SPARCs by 2000.
http://en.wikipedia.org/wiki/Context_switch
“It’s not at all an important distinction. Anything in kernel space can crash the kernel. Graphics are complicated, and don’t belong in kernel space, because of the dangers of crashing the kernel.”
I’d agree with this considering one of the sure fire ways to crash NT is to install a less than stable video driver.
True, but that isn’t the *only* way. You could have the most perfect graphics driver without one bug in it, BUT the actual graphics card *could* be buggy, the AGP chipset *could* be buggy; there are so many parts of the computer that can cause instability that pinning it down to a graphics driver will not solve the problem.
So anyone have any real insight into why this design decision was made ?
*Please no stupid “Cuz M$ is dum” responses anyone. I’m sure Cutler and the rest of the ex-Digital engineers who wrote NT have more brains than you do*
I have a feeling that it was a design decision of weighing up the possible risks against the possible benefits. ALSO, remember, NT was being delivered in a lot of haste, and sure, they *could* have probably developed something like DRI, which Rayiner mentioned; however, Microsoft weighed up the time it would take to implement a DRI-like system, then compared that to the release date and made a judgement call that it was easier to ram it into kernel space rather than doing it right.
IIRC, it was Cutler who was pulling out his hair when this was done. Also, with that being said, Cutler was not the only engineer working on NT; he was not the über coder whose feet everyone worshipped, he was just one person among a large number who were working on it.
With that being said, there is no excuse for Windows keeping the graphics layer in kernel space; it should have been yanked out after NT4 and done properly for Windows 2000. They did have 5 YEARS to do it; it isn’t as though they had a tight schedule.
DRI also needs a kernel module to work; it’s split between user and kernel space. DirectFB and even DirectX work exactly the same way, part running in user space and part in kernel space. Also, NT 3.51 didn’t have a kernel graphics layer.
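For what it’s worth, here is a tiny sketch of that user/kernel split on the Linux side (assuming a machine with a DRM driver loaded and the drm headers installed; the device path and header location vary by distribution). The user-space pieces, Mesa and the X server, still have to cross into the kernel’s DRM module for anything that actually touches the card:

/* Sketch of the DRI/DRM split: user space talks to the in-kernel DRM
   module through ioctl() on a device node. Assumes a Linux box with a
   DRM driver loaded; header path may be <libdrm/drm.h> on some systems. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <drm/drm.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/dri/card0");
        return 1;
    }

    /* DRM_IOCTL_VERSION is handled by the kernel DRM module; with the
       string pointers left NULL, the kernel just reports the lengths and
       version numbers of whatever driver it has loaded. */
    struct drm_version v;
    memset(&v, 0, sizeof v);
    if (ioctl(fd, DRM_IOCTL_VERSION, &v) == 0)
        printf("kernel DRM driver version %d.%d.%d (name is %lu bytes)\n",
               v.version_major, v.version_minor, v.version_patchlevel,
               (unsigned long)v.name_len);
    else
        perror("DRM_IOCTL_VERSION");

    close(fd);
    return 0;
}

The point stands for DirectFB and DirectX too: each has a kernel-resident piece and a user-space piece; they just draw the line in different places.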
True, but that isn’t the *only* way,
Never claimed it was. I said it was a surefire way to crash the system, which it is.
Also, with that being said, Cutler was not the only engineer working on NT; he was not some über coder at whose feet everyone worshipped, he was just one person among a large number working on it.
From what I read there was a team of 20 engineers he took with him from Digital, and they hired more at MS for the project. I don’t know anyone who believes that Cutler wrote the entire kernel himself.
Cutler himself has stated that compatibility was the hardest job in designing NT. They had three 16-bit environments to support, a new 32-bit API (Win32) to build, and they wanted POSIX compliance from NT. Quite a workload.
I have a feeling that it was a design decision made by weighing up the possible risks against the possible benefits.
I’m sure it was. Every software development project is like that.
ALSO, remember, NT was being delivered in a lot of haste, and sure, they *could* probably have developed something like the DRI which Rayineer mentioned; however, Microsoft weighed up the time it would take to implement a DRI-like system, compared that to the release date, and made a judgement call that it was easier to ram it into kernel space rather than doing it right.
MS really didn’t have a release date figured out a good three years into NT development. I’ve heard the schedule was tight because they had a lot to do, and like any good sized project it ran behind schedule.
I’m not sure I’d say they did it wrong. They may have done it in a less than desirable way, but the design did not bring the project to an end and they surely didn’t have to scrap the entire kernel over the design. When you have to start over from scratch you’ve done it wrong, and they haven’t had to do that with NT (yet).
I’m sure Cutler and all the engineers feel they could have done it better, or differently. I don’t know anyone who develops software who doesn’t feel that way when a project is complete.
Because x86 in general makes switching between kernel mode and user mode expensive in terms of clock cycles; i.e., on x86 it’s slower to have userland graphics.
I think it was Andrew S. Tanenbaum who once said (early ’90s) that it didn’t matter because we would all be using RISC SPARCs by 2000.
You know this seems to ring a bell with me. I think I remember reading something about it.
Thanks for shedding some light on it. I’ve read so much over the years that I think I’m starting to flat out forget a lot of stuff.
http://www.cs.vu.nl/~ast/brown/rebuttal/
It’s a follow-up on the Linux/MINIX thing from two months ago.
True, but that isn’t the *only* way,
Never claimed it was. I said it was a surefire way to crash the system, which it is.
So why make such a blindingly obvious and stupid statement? Simply fixing the drivers isn’t going to fix the situation, nor is having 100% perfect hardware. What drivers NEED, and what costs MONEY, is built-in fault tolerance; hence the reason expensive UNIX boxes running commercial UNIXes are built like a brick sh*thouse.
The drivers are developed to take into account as many faults as possible, with ways of tolerating them. This costs money, and ultimately this is another part of the Windows equation people forget.
When you go to Joe’s Cheapskate PC Vendor, this is the price you pay. Want hardware with better-quality drivers? Then you’ll just have to pay the price. Unfortunately, we have legions of end users, Johnny and Janey Cheapskate, who want a rock-solid, stable computer with all their hardware 100% supported, yet they buy all their gizmos from the dodgiest vendors with the crappiest drivers.
This is about the one thing open-source software does have going for it: as problems are found, they’re corrected in the drivers, and most of the time when problems ARE found it is due to a hardware fault, meaning fault tolerance in OSS systems is constantly improving.
Also, with that being said, Cutler was not the only engineer working on NT; he was not some über coder at whose feet everyone worshipped, he was just one person among a large number working on it.
“From what I read there was a team of 20 engineers he took with him from Digital, and they hired more at MS for the project. I don’t know anyone who believes that Cutler wrote the entire kernel himself.”
Cutler himself has stated that compatibility was the hardest job in designing NT. They had three 16-bit environments to support, a new 32-bit API (Win32) to build, and they wanted POSIX compliance from NT. Quite a workload.
Yes, but remember also that he did not have the final say. He designed and explained to upper management: “this is the situation, here are the options, make a choice.” Upper management chose graphics in kernel memory space, and they got it, with all the warts and problems that entailed.
If Microsoft executives ever want someone to blame for that royal balls-up, they can purchase themselves a full-length mirror and have a damn good look at it.
“ALSO, remember, NT was being delivered in a lot of haste, and sure, they *could* probably have developed something like the DRI which Rayineer mentioned; however, Microsoft weighed up the time it would take to implement a DRI-like system, compared that to the release date, and made a judgement call that it was easier to ram it into kernel space rather than doing it right.”
MS really didn’t have a release date figured out a good three years into NT development. I’ve heard the schedule was tight because they had a lot to do, and like any good sized project it ran behind schedule.
But the fact is, they created this feature creep and hype in the IT world; they have only themselves to blame for bringing more hardship onto themselves than was required.
Lord knows, it would have been a heck of a lot cleaner had they used the NT kernel, rammed a UNIX subsystem on top, and been done with it. I mean, NT has only *JUST* become the mainstream desktop; you can’t honestly tell me that 10 years to get vendors to write software for the NT Workstation line would have been a hard ask.
I’m not sure I’d say they did it wrong. They may have done it in a less than desirable way, but the design did not bring the project to an end and they surely didn’t have to scrap the entire kernel over the design. When you have to start over from scratch you’ve done it wrong, and they haven’t had to do that with NT (yet).
I’d definitely say they did it wrong. NT 3.51 still had graphics outside the kernel, and they should have kept to the strict rules and tried to find a solution to the problem rather than using an ugly hack at the expense of system stability.
That is what one of their two goals should have been: stability at all costs. If it means it takes an extra second to process a DB transaction, isn’t that better than having the whole system go tits-up because of a lust for speed?
I’m sure Cutler and all the engineers feel they could have done it better, or differently. I don’t know anyone who develops software who doesn’t feel that way when a project is complete.
But with that said, remember: what the engineers want and what upper management decides are normally at least two football fields apart. Engineers want to go for the *RIGHT* solution; upper management wants to go for the solution they can get out the door, without considering the consequences of those choices.
It kills me how every time I walk into an ‘OS’ forum like this, I have to read through a bunch of childlike garbage before I get to hear “Big People” comments. It always turns into a stupid MS vs. Linux fight. Who cares what OS you run? Just show me PRODUCTIVITY stats! For you kernel-centric evangelists who somehow think you’re smarter than the rest of the millions of programmers here in userland, I don’t want to hear what your OS can do and what mine can’t. I want to hear: “Here is what YOU CAN ACCOMPLISH AND HOW FAST YOU CAN ACCOMPLISH IT” with this OS. What can your OS help me do faster? Can you tell me that? What OS do I use if I want to quickly build a reliable bada$$ website from top to bottom? Or how about if I wanted to create a GUI application in the least amount of time? How can I scale my application effectively? What tools are available on your OS that I can use for these tasks?
Quite frankly, MS Win has way more effective tools than any other operating system available, making it a choice development environment from the jump. I’m not just talking about MS tools, either. Many third-party applications are available on MS Win that are not available on other systems. (Just playing the numbers, fellas, easy now!) Like it or not, if you are a Kazaa, Limewire, or eDonkey network user, you’ve recouped the $$ spent on your Windows installation and maybe even your computer. Who can complain about that? I know there’s file sharing on all systems, but that’s not the point. No other system has so much software available.
On the point of security: there’s an obvious lapse when it comes to Win95, 98, etc. It’s obvious that more bugs are going to be found in a system that has most of the world running it. Sometimes I think open-source users tend to think of Windows in terms of its OS of nearly 10 years ago. Win2k was a formidable OS, XP is better, Win2k3 is bada$$, and Longhorn will probably wipe out any notion that MS is going to budge in the desktop war (if it’s still a war at that point). If MS Win does lose the war, it won’t be due to the Linux community as it stands now.
On the server side of things I’m a BSD/Apache man. PHP is my web language of choice and MySQL is my DB… go fig. Remember, use technology for what it’s best at. I won’t develop web apps on Linux/BSD (graphics, Flash, etc.), but I use it for its strengths… DNS, web serving, email, etc.
Anyway, I gotta go; I’ve wasted enough time on people who are just going to say that MS Win sucks, Linux/BSD is better, or vice versa. Get a grip, play sports, become a cross-platform programmer (and I don’t mean Java), have fun and enjoy today’s technology for what it is – choices of tools for solving problems. Goodbye, ladies.
So why make such a blindingly obvious and stupid statement? Simply fixing the drivers isn’t going to fix the situation, nor is having 100% perfect hardware. What drivers NEED, and what costs MONEY, is built-in fault tolerance; hence the reason expensive UNIX boxes running commercial UNIXes are built like a brick sh*thouse.
Read my post again.
I’d agree with this, considering one of the surefire ways to crash NT is to install a less-than-stable video driver.
I was agreeing with a statement that the drivers in NT reside in kernel space. Install a bad video driver and yes, you can crash NT. I’ve had it happen. That’s all I was saying.
I wasn’t talking about fixing anything.
I wasn’t talking about UNIX at all.
Where you got these ideas is beyond me.
But the fact is, they created this feature creep and hype in the IT world; they have only themselves to blame for bringing more hardship onto themselves than was required.
Find me one company that doesn’t do that. It’s called advertising, and it’s how products are pushed to market.
Lord knows, it would have been a heck of a lot cleaner had they used the NT kernel, rammed a UNIX subsystem on top, and been done with it. I mean, NT has only *JUST* become the mainstream desktop; you can’t honestly tell me that 10 years to get vendors to write software for the NT Workstation line would have been a hard ask.
NT had POSIX, and they also wanted backwards compatibility with existing Windows applications. They also wanted a new 32-bit API to move developers on to.
NT *JUST* became mainstream? Uh, XP was released a good 3 years ago. Where ya been? Under a rock?
Yes, but remember also that he did not have the final say. He designed and explained to upper management: “this is the situation, here are the options, make a choice.” Upper management chose graphics in kernel memory space, and they got it, with all the warts and problems that entailed.
Your point? That’s how things go. There is no “perfect solution,” and all projects have compromises.
I’d definitely say they did it wrong. NT 3.51 still had graphics outside the kernel, and they should have kept to the strict rules and tried to find a solution to the problem rather than using an ugly hack at the expense of system stability.
Well, considering they have 50 billion in the bank and still base all their work on the NT kernel, I’m thinking that beyond anal closet cases like you, no one really gives a rip.
I also doubt it’s a straight-out hack. I’m going to lean toward it being a design decision that neither you nor I understand, because we weren’t on the NT team writing the kernel.
But with that said, remember: what the engineers want and what upper management decides are normally at least two football fields apart. Engineers want to go for the *RIGHT* solution; upper management wants to go for the solution they can get out the door, without considering the consequences of those choices.
Remember? I write software for a living. I’m fully aware of what dealing with management is like.
Upper management in most cases does consider the consequences, but they only have the information that an engineer provides. If a product goes out the door and brings heavy consequences back to the company, then the engineers didn’t present a very strong case on WHY they shouldn’t go about something in a certain way.
Yes, compromises are made; yes, sometimes things don’t go exactly as the engineers want. That’s the game, man. You have two different groups with two very different end goals, and somewhere in the middle they have to meet and agree.
Both parties will agree that at some point, a product must ship. In that quest both management and engineering are forced to give and take.
It kills me how every time I walk into an ‘OS’ forum like this, I have to read through a bunch of childlike garbage
Yet you took the time to post, proving you are no better than the rest of us children.
Buh-Bye
What that article is describing is not a true separate pager. The term “Virtual Memory Manager” in the article does not refer to the same thing as the “VM” in Linux. Instead, it describes the “manager” threads that handle aging, etc., of the page frame database. Linux also has kernel-level threads that do pretty much the same thing. Neither NT nor Linux has a separate pager, because when you do mmap() or MapViewOfFile(), you’re making a system call, not sending a message to a pager.
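A small illustration of that last point (assuming any POSIX system): mapping a file is an ordinary system call that the kernel’s VM code services directly, with no user-level pager process in the loop, and MapViewOfFile() on NT behaves the same way at the API boundary.

/* Map a file and dump it: mmap() traps straight into the kernel, which just
   sets up page-table and VM bookkeeping; the data is faulted in later by the
   kernel's own paging code, again without any user-level pager involved. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);   /* any readable file will do */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return 1; }

    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    fwrite(p, 1, st.st_size, stdout);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}

If you strace it, you just see the mmap() call itself; there is no IPC to any pager.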
I’m really not quite understanding why it’s such a bad idea to have graphics in the kernel of WNT. Are you talking about the GDI calls that happen in kernel space? What exactly is wrong with taking advantage of kernel mode for fast drawing of windows, etc.? I haven’t heard of a security exploit in recent memory that uses the GDI layer. The closest might be Shatter, but that’s a Win32 issue and has little to do with the actual drawing system.
If it’s about stability, it seems like at some point if you’re doing something specialized-hardware-intensive, like drawing to an advanced graphics card, you’re going to have to do a lot of special-purpose communication with a piece of hardware. If you screw up too much, your display goes and you can’t really use your computer regardless of whether or not the graphics subsystem is in the kernel. If you’re running a server, then you’d probably use a standard VGA driver that’s not likely to crash and you won’t run into any potential instabilities.
Am I off the mark? Could someone explain simply why it’s so bad to have graphics in the kernel?
Am I off the mark? Could someone explain simply why it’s so bad to have graphics in the kernel?
Because it places the stability of the OS in the hands of third-party developers.
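A toy way to see why that matters (plain POSIX, nothing NT-specific): the null-pointer bug below kills only the one process, because the fault is trapped while the code is running in user mode. The same bug inside a kernel-mode display driver has no such safety net; it takes the whole machine down with a bugcheck.

/* Deliberate null-pointer dereference in user space: the MMU catches it,
   the kernel delivers SIGSEGV, and only this process dies. The OS carries
   on. In kernel mode the equivalent fault is a blue screen. */
#include <stdio.h>

int main(void)
{
    volatile int *bogus = NULL;

    printf("about to dereference a null pointer in user space...\n");
    fflush(stdout);

    return *bogus;   /* crashes this process only */
}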
Yes, but remember also that he did not have the final say. He designed and explained to upper management: “this is the situation, here are the options, make a choice.” Upper management chose graphics in kernel memory space, and they got it, with all the warts and problems that entailed.
If Microsoft executives ever want someone to blame for that royal balls-up, they can purchase themselves a full-length mirror and have a damn good look at it.
NT destroyed NetWare, took over the desktop market, and gave Unix a big fright, exactly as it was meant to. I’d hardly describe that as a “balls-up”.
The world does not consist solely of academic debates about theoretical scenarios.
Lord knows, it would have been a heck of a lot cleaner had they used the NT kernel, rammed a UNIX subsystem on top, and been done with it.
But that would have made the whole project pointless, since the goal was to *avoid* making Yet Another Unix and make something better.
I mean, NT has only *JUST* become the mainstream desktop; you can’t honestly tell me that 10 years to get vendors to write software for the NT Workstation line would have been a hard ask.
Developers have been writing software for Win32 for nearly a decade.
I’d definitely say they did it wrong. NT 3.51 still had graphics outside the kernel, and they should have kept to the strict rules and tried to find a solution to the problem rather than using an ugly hack at the expense of system stability.
The trouble is the “problem” was hardware that simply wasn’t fast enough.
That is what one of their two goals should have been: stability at all costs. If it means it takes an extra second to process a DB transaction, isn’t that better than having the whole system go tits-up because of a lust for speed?
When you’re talking about workstations, interactive GUI performance is *very* important.
I do agree, however, that they should have left it separated out in the Server variant.
But with that said, remember: what the engineers want and what upper management decides are normally at least two football fields apart. Engineers want to go for the *RIGHT* solution; upper management wants to go for the solution they can get out the door, without considering the consequences of those choices.
That’s because it’s management’s job to sell the product, thus keeping all those engineers employed and insulated from the harsh realities of The Real World. If there’s a choice between releasing a product that’s “good enough” in a year or waiting five to release something better, you pick the earlier release, because five years down the track your company might not even exist. Microsoft haven’t always had 50 billion in the bank and 95% marketshare.
I’d agree with this, considering one of the surefire ways to crash NT is to install a less-than-stable video driver.
Stability isn’t the only thing that’s important in an OS. It’s certainly up there, but it’s not up there alone.
So, anyone have any real insight into why this design decision was made?
Performance. Remember, this happened back when a 486 with a 2MB VLB video card was top of the line and most people still had 386s. I don’t think a lot of people appreciate just how much faster computers are today than they were even as little as ten years ago – particularly if they’ve only been “around” for ~5 years.
NT4 was supposed to make more moves into the Workstation market. However, the existing architecture (and as someone else noted, the limitations of the x86 platform) made the graphics subsystem too slow for “real life” usage. So it was bumped down into kernel mode. This context must be kept in mind – NT4 was a product primarily aimed at the *Workstation space*, where GUI interactivity is extremely important.
It wasn’t put in there in the first place, of course, because back then Intel CPUs were “topping out” and everyone thought RISC was going to take over the world in the early ’90s. Then Intel pulled out the Pentium, followed up with the Pentium Pro, gave them all the big finger, and tied us to x86 for another decade.
This is something that was actually stated by Cutler in an interview I read, way back in the day. I doubt you’ll be able to find it online, though – I only ever saw it in dead-tree format and I can’t even remember what magazine it was (Byte, maybe?).
What they should have done was leave the graphics system running in user mode on the Server variant – it should have been reasonably easy (this is one of the reasons the distinction between “in the kernel” and “in kernel mode” is important). However, I suspect they simply ran out of development time. The Microsoft solution was to just run their standard VGA driver – which was pretty much bulletproof – on servers. This is why you often see NT4 servers running standard 800x600x8 VGA.
Similarly, they probably left it that way in Win2k for the same reason – Windows 2000 was originally supposed to be the crossover product XP became, and since it was heavier on system resources, everything possible needed to be done to keep performance up. XP, of course, was only a point release and made few major changes, as was Windows 2003.
As I said elsewhere, I seem to recall reading that the graphics system was going to be moved back out of kernel mode in Longhorn, now that machines are (more than) fast enough to make it practical. Actually, I suspect Longhorn will make quite a few changes that start getting NT back to the OS it was originally designed to be.
Stability isn’t the only thing that’s important in an OS. It’s certainly up there, but it’s not up there alone.
Nope, security and usability are important too.
Performance. Remember, this happened back when a 486 with a 2MB VLB video card was top of the line and most people still had 386s. I don’t think a lot of people appreciate just how much faster computers are today than they were even as little as ten years ago – particularly if they’ve only been “around” for ~5 years.
Very true, and it seems that M$ has forgotten this. Computers are fast, but that’s not really an excuse to create a bloated, buggy OS that does the same things as the older versions.
Thank goodness for Linux. Well, at least owners of those old boxes can now keep using them!
I suspect Longhorn will make quite a few changes that start getting NT back to the OS it was originally designed to be.
Yep, 5 times as much hardware power just to do the same things that could be done on earlier versions, with all the spyware and viruses to go with it 😀
Like it or not, if you are a Kazaa, Limewire, or eDonkey network user, you’ve recouped the $$ spent on your Windows installation and maybe even your computer.
I’m not sure I read this right…are you advocating piracy? Sure seems like you are…
As far as developing GUI apps fast (and cross-platform), I think you should give Qt a look. It’s one of the best cross-platform solutions out there, IMO.
My opinion about the article: it is mediocre. It’s really promoting a Linux vs Windows discussion.
The really important difference I care to mention is open-sourceness. The Windows kernel is not open source as far as I know (I suppose some important partners may have access to it).
It’s a commercial strategy, and that’s fine. In my opinion it will never fall behind the Linux kernel, because every new idea introduced there is available to everyone (except maybe those that really depend on the overall paradigm, as mentioned above).
One last thing: I think the kind of evolution the Linux kernel has undergone is amazing, given that most of the money is on the MS side. And money is kind of important…
Quoting Fanboy from above (hope you don’t mind): “Get a grip, play sports, become a cross-platform programmer (and I don’t mean Java), have fun and enjoy today’s technology for what it is”
This is why some refutations end up labeled as “reported abuse.” It is more like abusive use of that function when some trolls are unable to prove their points. Sad.
“Refresh my memory..when has the windows Kernel ever been compromised due to a security issue?”
Quite recently; see the MS04-011 security advisories…
“When you’re talking about workstations, interactive GUI performance is *very* important.”
And Mac OS X does it in user space, they say…
And Mac OS X does it in user space, they say…
And pays for it as well, if true – OS X’s interactive GUI responsiveness is atrocious.
Not to mention the design decisions of OS X were made only a few years ago, when machines were _fast_, whereas the decision to run NT’s graphics in kernel mode was made ca. 1994, when computers were *slow*.
And pays for it as well, if true – OS X’s interactive GUI responsiveness is atrocious.
Of course, as ever-faster machines come out, you won’t notice it, and it’s always nice to have the added stability that user-space implementations provide, just like X on UNIX-like OSes.