“Microsoft will move the graphics for its next version of Windows outside of the operating system’s kernel to improve reliability, the software giant has told Techworld. Vista’s graphics subsystem, codenamed Avalon and formally known as the Windows Presentation Foundation, will be pulled out the kernel because many lock-ups are the result of the GUI freezing, Microsoft infrastructure architect Giovanni Marchetti told us exclusively yesterday.”
about time.
They seem to alternate with each version of Windows between having it in the kernel for speed and in userland for stability!
Umm… no they don’t.
It’s been in the kernel since at least Win95 (including the NT line)
It’s been in the kernel since at least Win95 (including the NT line)
Uhm, you mean like, Windows 95 actually had a kernel?
Wow.
Yes, Win95 actually had a kernel; it was just bootstrapped from DOS.
Wrong.
Windows NT 3.x had the graphics subsystem outside the kernel, which resulted in rather lackluster display speed.
NT 4.0 was the first true 32-bit MS OS (Win9x/Me excluded) to have a kernel-level graphics subsystem.
http://www.microsoft.com/resources/documentation/windowsnt/4/workst…
Gilboa
That’s exactly what I said.
NT4 came after Win95.
Windows Vista is going to be great, microsoft is doing a good job it seems. Too bad I don’t use their product anymore.
V.iruses
I.ntrusions
S.pyware
T.rojans
A.dware
Edited 2005-12-16 15:22
Every forum has its own idiot; you are that one on this forum.
And in the dictionary they have pictures next to the definitions of words. Next to the word “dumb” there is a picture of him.
Hahaha
Stephen
Well, this is what they promise to remove with this release, anyway. Whether they will succeed is another question. Time will tell.
You mean it’s going to be great because they’re finally changing their kernel et al. to match standard UNIXy designs, the same things people have been telling them for ten years, undoing the things Microsofties have slandered everyone else about for ten years? They just never learn.
So is this a client/server model…like X11 or X.org? Windows might finally be catching up to Linux or FreeBSD.
No, this will not be client/server. They are trying to shift the Windows kernel design closer to a microkernel. In a microkernel, all of the drivers run in userspace and only a very small portion of code runs in kernel space. A microkernel is the best design in theory, but the implementation is the hardest part.
The closest OS to being a full microkernel is Minix http://www.minix3.org but even it isn’t completely micro. Microkernels are *MUCH* more stable, as you can more or less add and remove drivers on the fly, and nothing serious will break if a driver dies.
1) All microkernel systems are based on client/server. There is the kernel, then there are servers (eg: X or Avalon), then there are clients (applications). You want to draw something on the screen, then you do IPC over an appropriate mechanism to talk to the graphics server.
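The client/server split described above can be mocked up as a toy: a hypothetical “graphics server” owns the framebuffer, and a client talks to it only over an IPC channel (a Unix socketpair here). The message format and names are invented for illustration; this is not X’s or Avalon’s actual protocol.

```python
import socket
import struct

DRAW_RECT = 1  # invented message type

def encode_request(op, x, y, w, h):
    """Client side: pack a draw request into a fixed-size wire message."""
    return struct.pack("<5i", op, x, y, w, h)

def recv_exact(conn, n):
    """Read exactly n bytes from the IPC channel."""
    data = b""
    while len(data) < n:
        data += conn.recv(n - len(data))
    return data

def handle_one_request(conn, framebuffer):
    """Server side: the only code allowed to touch the framebuffer."""
    op, x, y, w, h = struct.unpack("<5i", recv_exact(conn, 20))
    if op == DRAW_RECT:
        for row in range(y, y + h):
            for col in range(x, x + w):
                framebuffer[row][col] = 1
    return framebuffer

def demo():
    client, server = socket.socketpair()
    fb = [[0] * 8 for _ in range(8)]
    # The client never touches fb directly; it only sends IPC messages.
    client.sendall(encode_request(DRAW_RECT, 1, 1, 2, 2))
    fb = handle_one_request(server, fb)
    client.close()
    server.close()
    return fb
```

The point of the design is the narrow channel: a misbehaving client can only send malformed requests, which the server can reject, rather than scribbling over the framebuffer directly.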
2) Minix, IIRC, puts drivers in the kernel. I’d say the closest OSs to being full microkernels are the L4-based OSs. Minix is small, but L4 is positively tiny.
“2) Minix, IIRC, puts drivers in the kernel.”
As of version 3 of Minix, drivers of all sorts are no longer in the kernel.
No drivers are really “in the kernel”, but a few critical ones share an address space (although they are separate threads). Also, some drivers have to be packaged into the kernel “image” (meaning they are loaded with the kernel by the bootloader), or you get a chicken-and-egg problem with the filesystem driver.
I am afraid that there was no mention of a microkernel in the article. Microsoft is not going to shift from a monolithic kernel to a microkernel so close to release. The article clearly describes a client/server model.
He didn’t say they were changing it TO a microkernel. He said “closer to a microkernel”. The NT design is already microkernel-ish(for lack of a better term)/not exactly monolithic.
‘The closest OS to being a full microkernel is minix http://www.minix3.org but even it isn’t completely micro. Microkernels are *MUCH* more stable as you can more or less plug and play add / remove drivers and nothing serious will break if the drivers die.’
No, the OS that’s closest to being a full microkernel is QNX http://www.qnx.com/.
Nahh… BeOS, I believe, used a microkernel approach, and it worked rather well and fast. Actually, I see Vista turning out to be more of a weird hybrid between OS X and BeOS than Linux. However, their new effort, which I believe is called Symbol or Sylibus or something like that, is more of a Unix clone…
Do you mean Singularity?
http://research.microsoft.com/os/singularity/
BeOS was very much a hybrid system (both monolithic and micro).
QNX is also a very small microkernel; not sure whether it’s smaller than the Minix microkernel you mentioned.
It was a joke, just me being cynical. It’s cat and mouse with Windows and the ’nixes. Whatever one does, the other does next, and vice versa. I couldn’t care less as long as it works.
So is this a client/server model…like X11 or X.org? Windows might finally be catching up to Linux or FreeBSD.
You’re right. Perhaps finally Microsoft’s 90%+ market share is catching up with Linux/FreeBSD’s HUGE user base.
As much as I despise MS for their predatory business practices, I have to commend them for once in my life.
Even though I use linux on my 7 computers at home, moving the graphics subsystem outside of the kernel is a huge step in the right direction. A year or two ago, I couldn’t count the number of times I fixed dell notebooks due to buggy Nvidia drivers.
Hopefully MS will get the “Principle of Least Authority” approach right in Vista also. ‘nix has had this for how long now?
Yes, before long Windows will be nearly “ready for the desktop”.
Please, stop trolling, windows is desktop
anon
peace
It looks like people have short memories. Graphics drivers did not run in kernel mode until Windows NT 4. The change was made to boost performance, as graphics were rendered in software. Now graphics are rendered by hardware, so there is no need to run these drivers in kernel mode.
Btw, in Linux, graphics drivers do not run in user mode, as a monolithic kernel requires compilation of drivers against the specific kernel.
Actually, with Linux and X, only a small part is in the kernel: basically enough to talk to the video card and to the X driver. I’m sure Microsoft has the same kind of thing, only standardized for all drivers.
Though XFree86 (and derivatives) needs root privileges to gain I/O port access (and various other things), which is basically the same as a restricted kernel level (the video driver could just as well flood the IDE I/O ports; no protection is done there).
The only reason they need kernel drivers every now and then is for interrupt handling, DMA and the like: stuff that ends up in a kernel trap and needs handling from there on.
There are designs to make that more secure: miniports on WinNT (around for a decade or so, and many drivers actually run in userland already; it’s not that much of a change for Microsoft), BeOS’s accelerants, and KGI on Linux (shot to death before it was really born).
For clarification, graphics *drivers* have always run in the kernel in Win32 systems: the question is how much of GDI/other GUI stuff has run in kernel space, and what percentage of what’s currently being done by a graphics driver in kernel space was done in user space.
BeOS has always had a very sane way of doing this, having a minimal amount of code of a driver in kernel space, and an “accelerant” that’s called from the App_Server (the user space program that controls the GUI and does all drawing) that simplifies writing graphics drivers while adding to stability. The more stuff you can remove from kernel space, the better stability and easier it will be to create drivers, and create correct drivers. It may (perhaps) have some amount of performance overhead, but it doesn’t appear in BeOS to be a major performance eater.
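The accelerant split described above can be sketched in miniature, with invented names (not the real BeOS driver API): the kernel half exposes only raw access, and all the drawing smarts live in the user-space half, so a drawing bug takes down the app server at worst, not the kernel.

```python
class KernelStub:
    """Minimal kernel-side driver: raw access only, no drawing logic.
    This is the small piece that would actually live in kernel space."""
    def __init__(self, width, height):
        self.vram = [[0] * width for _ in range(height)]

    def write_pixel(self, x, y, color):
        self.vram[y][x] = color

class Accelerant:
    """User-space half, loaded by the app server in this model: all
    drawing logic lives here, outside the kernel."""
    def __init__(self, stub):
        self.stub = stub

    def fill_rect(self, x, y, w, h, color):
        # Any bug in this loop crashes only the hosting process.
        for row in range(y, y + h):
            for col in range(x, x + w):
                self.stub.write_pixel(col, row, color)
```

Usage would look like: the app server instantiates the accelerant over the kernel stub and calls `fill_rect` on behalf of clients; only `write_pixel` crosses the kernel boundary.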
Now, what will be very interesting is to see how Microsoft changes things with their move of putting more back in user space, when it comes to 3D acceleration performance. But if you think about it, if the 3D rendering is GPU-limited, even this move won’t make a significant difference with modern hardware.
The performance implications of userspace graphics depend a lot on the underlying hardware. As hardware has gotten more advanced, the effective latency to it has gone up, making more indirect methods of accessing it less expensive.
Consider a primitive 2D accelerator. It has some memory-mapped I/O registers that applications write to directly to draw primitives. Now, in a protected system, you cannot have the graphics driver running in the client’s address space, so what you do is have the client call into the kernel and let the kernel driver do the I/O writes. Putting this in the kernel is more performant in this scenario than having a server, because you don’t have a context switch.
Now, consider a modern 3D accelerator. You can program them via MMIO, but to achieve the best performance, you have to use the DMA engine and control the GPU via command packets. In this model, you can put most of the graphics driver, the part that constructs the actual command packets, in user space. This is very fast (a local library call is faster than either IPC or a kernel call), and still secure (the user-space driver cannot actually bang registers and cause a system crash). Now, you have a kernel call (or an IPC; it doesn’t matter a whole lot, because command packets are big enough that they don’t get sent very frequently) that verifies the command packet, and programs the DMA engine to upload it to the GPU. This overall mechanism is both secure and quite fast: once you’re constructing command packets anyway, a little overhead in moving them around doesn’t matter so much.
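That command-packet model can be mocked up in a few lines: an untrusted user-space “driver” builds packets with cheap local calls, and a small trusted routine validates each one before the simulated DMA engine touches “VRAM”. The command codes and limits here are made up for illustration, not any real GPU’s interface.

```python
CMD_DRAW = 0x01    # invented command codes
CMD_UPLOAD = 0x02
VALID_CMDS = {CMD_DRAW, CMD_UPLOAD}
VRAM_SIZE = 4096   # pretend VRAM size in bytes

def build_packet(cmd, addr, length):
    """User-space driver half: building packets is just local function
    calls; no hardware or kernel involvement yet."""
    return (cmd, addr, length)

def kernel_submit(packets, vram):
    """Trusted half: validate every packet, then let the simulated DMA
    engine consume it. Bad packets are rejected, never executed."""
    for cmd, addr, length in packets:
        if cmd not in VALID_CMDS:
            raise ValueError("rejected: unknown command")
        if addr < 0 or length < 0 or addr + length > len(vram):
            raise ValueError("rejected: out-of-range DMA")
        for i in range(addr, addr + length):  # "DMA" the packet in
            vram[i] = cmd
    return vram
```

The security property is exactly the one the comment describes: the user-space half can build any garbage it likes, but the validation step means garbage never reaches the hardware.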
> Hopefully MS will get the “Principle
> of Least Authority” approach right in Vista
Yeah.. do you mean “right” like a world-readable /root directory in the default Debian install?
or like the world-readable /home directories by default?
I can’t *wait* til they do this!!
AFAIKT their not moving drivers out of the kernel, God I hope not anyway!
AFAIKT their not moving drivers out of the kernel, God I hope not anyway!
The driver package consists of a kernel mode miniport driver and a user mode display driver.
From the WDK:
“The display driver model architecture for Microsoft Windows codename “Longhorn” is composed of user-mode and kernel-mode parts. The following figure shows the architecture required to support the Windows Longhorn display driver model. A graphics hardware vendor must supply the user-mode display driver and the display miniport driver. The user-mode display driver is a dynamic-link library (DLL) that is loaded by the Microsoft Direct3D runtime. The display miniport driver communicates with the Microsoft DirectX graphics kernel subsystem.”
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/di…
Vista will be a lot more like linux. If microsoft makes vista an ad-supported freeware… Nah, it won’t happen…
And when is Linux getting their network stack out of the kernel?
So, microkernels are bad because they are slow: so you put the network stack in the kernel.
But!! Microkernels are great because they can’t crash your whole system: so you run graphics and sound as clients.
Can you understand Linux fans? They make no sense at all.
Linux is NOT a microkernel. Andy Tanenbaum (the guy who wrote Minix and is a college professor) told Linus Torvalds he would fail his class.
How often, Mr. Anonymous, do the network drivers crash the system in Linux? I’ve been using it in mission-critical servers in the travel industry (where I’m a systems admin) with 0 problems in the past few years.
Tell me 1 *real* operating system that has a TCP/IP stack outside of the kernel? As much as Andy touts it, minix is still just a toy. GNU/Hurd is a microkernel and look what it is now, a big flop.
On technical merits, a microkernel is superior to a monolithic one. On implementation and performance, a microkernel is too slow with increased latency to be practical for many applications.
Tell me 1 *real* operating system that has a TCP/IP stack outside of the kernel?
QNX. Now *that’s* an OS for mission critical systems!
Tell me 1 *real* operating system that has a TCP/IP stack outside of the kernel?
QNX is a true, proper microkernel operating system, and is also one of the most successful embedded operating systems available today. It powers medical equipment like fMRI scanners, for instance. QNX also operates the space shuttle’s arm.
On technical merits, a microkernel is superior to a monolithic one. On implementation and performance, a microkernel is too slow with increased latency to be practical for many applications.
That might have been true 15 years ago, when computers had limited resources. However, these days the overhead caused by the extra communication between components outside of the kernel is negligible because of our powerful computers. The overhead argument today in the microkernel vs. monolithic debate is void.
QNX is a real-time system, and real-time systems are not the ones with major throughput. They are fast at handling many small incoming loads but slow with big ones.
Now, modularity is good, but the process isolation provided by current processors carries too much overhead (a new design could help).
There are other techniques to achieve modularity, like this: http://goos.sourceforge.net/overview.php
Looks like the GoOS! project is deprecated.
Replacement project is here:
http://odin-os.sourceforge.net/
Which doesn’t look promising since all I get is a mysql error.
And how about QNX?
http://www.qnx.com/products/rtos/microkernel.html
Microkernel, TCP/IP in Userspace, real… there you go 🙂
AZ
Don’t know about the network stack, but there is one example which fits just as well.
The new audio stack in Vista is entirely in user space, and in fact it is going to be faster than DirectSound, simply because new APIs in Vista allow DMA buffers to be mapped into user space, whereby user-space clients do not have to wade through layers of kernel code before their data ends up on the sound hardware, reducing latency significantly.
Thus, the microkernel principle in this case is actually more advantageous if done right.
The latency issue is not inherent to microkernel design; it is inherent to a bad microkernel design.
Even then, the miniport driver still has to reflect it into user space (a simple MMU operation, but that could have been done with a kernel-space driver, too).
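The “DMA mapped to user space” idea reduces, in miniature, to: map a buffer once, then write into it with ordinary memory writes instead of a kernel call per sample. A toy stand-in using an anonymous mmap (real DMA mapping is hardware- and driver-specific; the function names here are invented):

```python
import mmap

BUFFER_BYTES = 256  # size of the pretend device buffer

def map_device_buffer():
    """Stand-in for an OS call that maps a device's DMA buffer into
    user space; an anonymous mapping substitutes for real hardware."""
    return mmap.mmap(-1, BUFFER_BYTES)

def write_samples(buf, samples):
    """User-space client: plain memory writes, no syscall per sample.
    A real stack would still need an occasional kernel call to tell
    the hardware that new data is ready."""
    buf.seek(0)
    buf.write(bytes(samples))
```

The latency win comes from the hot path (writing samples) being pure memory traffic; the kernel is only involved in setup and the occasional “data ready” notification.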
Hmmm…. Have you looked at QNX?
Now, after a dozen “Have you ever heard of QNX?” comments, I have only one thing in mind: 2006 will be the year of desktop QNX.
Now, for real, back to the topic: why would Linux need to have the network stack outside the kernel?
> “Tell me 1 *real* operating system that has a TCP/IP stack outside of the kernel?”
Microsoft Windows CE
As much as Andy touts it, minix is still just a toy. GNU/Hurd is a microkernel and look what it is now, a big flop.
At ease, SEJeff. Both projects you’re trashing are very important to educational audiences.
The Linux kernel deals with so much tactical detail in supporting its broad hardware range that the teaching points are less accessible.
There is no shame whatsoever in doing something for pedagogical reasons as opposed to business ones.
X Windows or DOS and Win31, again
It seems we are getting just where we were 10 years ago, with a console OS and a GUI…
It’s obviously the best way to go…
Hmmm, what other OS that is STABLE, RELIABLE, and SECURE has gone to this model?
Mac OS. Graphics used to be tightly coupled to the kernel; they saw the error of their ways, and now MS has…
DOS was flawed because they didn’t do what they intended with version (crap!)… I think it was 3.5 or 4.2… At one point DOS was GOING to be a multithreaded kernel, but they nixed that idea for backward compatibility…
So, version 6.0 rolled out with Win95 while they worked on NT…
The *nix community has been working with this layout… Well, forever…
I am just pleased for a number of reasons.
First, the giant has fallen… they realize it will be better this way.
Second, IT WILL be better this way… I am glad to see this. It means that Windows will be even BETTER than it already is…
Yes, you heard it, if you made it this far… I LIKE MS Windows. I don’t like their business practices, but Windows is more user-friendly than any of the *nix UIs…
If MAC and Windows can do it… why can’t the *nix land?
Infighting… That’s why… Not “they’re paid”, not “we have something good”…
No, it’s infighting… Lots of people with lots of ideas (which is not necessarily bad) but no direction.
With direction things happen… There is no direction for the *nix desktop experience… Lots of good ideas but nothing definitive and focused.
And don’t give me… “have you used it?”
YES! I have. I have used FC3 for one year… I have used CentOS 3.4 for 6 months… I had to…
It was mildly annoying but I got used to it…
I was happy to return to even Windows 2k after that…
So, be objective, and take a closer look… Microsoft may be a bastard of a company, but their UI IS better, and pulling it out of the kernel (though amusing, because they finally accepted they were wrong) will make their product THAT MUCH BETTER…
So, *nix crowd, WATCH OUT!!!
I will always stand up for the servers I configure, install, set up and support, which are Linux and BSD based, but with this turn of events… Microsoft MAY actually be able to fight back in the server community…
This announcement was already made and confirmed by ATI a couple of months ago, when ATI announced that their drivers would be more stable in Windows Vista because of the new userspace video drivers. Doesn’t anyone else remember this announcement?
Article reference:
http://www.tgdaily.com/2005/09/22/vista_graphics_drivers_to_be_more…
This announcement was already made and confirmed by ATI a couple of months ago, when ATI announced that their drivers would be more stable in Windows Vista because of the new userspace video drivers. Doesn’t anyone else remember this announcement?
That got posted on OSNews too. But, ATI isn’t Microsoft. This time ’round, it’s MS doing the announcing, making it, well, official.
I read somewhere that the Vista version of IE7 is also being partly or entirely yanked from the kernel for security reasons. Looks like two and two are starting to come together in Redmond. I’m looking forward to Vista.
IE has NEVER been in the kernel, and therefore there is NOTHING to take out of it.
No, IE is integrated with the shell, aka explorer.exe. Security problems with IE often result in a system compromise because of this. Even if you delete iexplore.exe from your computer, your system will be vulnerable to IE attacks, as Explorer uses the same libraries (DLLs) as IE.
And? You just proved my point.
Also, it’s not more vulnerable because Explorer uses those libraries; it’s vulnerable because those libraries are there. And they are on EVERY Windows install.
The exact same thing can be said about Konqueror and KDE, or WebCore and OS X (or even Safari and OS X).
IE is no more integrated than any of these (except Safari).
I read somewhere that the Vista version of IE7 is also being partly or entirely yanked from the kernel for security reasons. Looks like two and two are starting to come together in Redmond. I’m looking forward to Vista.
IE was never part of the kernel.
Giovanni… spaghetti design
Probably he cooks and designs Windows stuff at the same time…
If they really cared about reliability they should stop producing swiss cheese.
Security is a process, not a feature.
Go read “Writing Secure Code” from Microsoft Press. It takes you inside what they learn about security at MS; it is unfathomable how much they have to know.
// A year or two ago, I couldn’t count the number of times I fixed dell notebooks due to buggy Nvidia drivers. //
Ah … so nvidia’s crappy drivers brought down the Windows OS on your laptop … and that’s Microsoft’s fault.
Gotcha.
It’s Microsoft’s fault for designing an OS so susceptible to buggy drivers. Technically, it’s gravity that causes your plane to crash, but that doesn’t change the fact that the aircraft designer is still at fault.
Perfect response man
Rayiner: It’s Microsoft’s fault for designing an OS so susceptible to buggy drivers. Technically, it’s gravity that causes your plane to crash, but that doesn’t change the fact that the aircraft designer is still at fault
LMFAO, that is the dumbest comparison I have heard in my life. By your analogy, I should blame GOD (or whatever f–k you believe in) for creating gravity, which makes my plane fall? I wonder why zealots like you are not even ashamed of your dishonest bias. Grow up.
I don’t see what the issue with my analogy is. Drivers are well known to be a weak point of the system, because they are so numerous and because relatively little effort gets put into them compared to writing kernel code. The fact that drivers in general are buggy is simply a property of the system, if you will. The fact that gravity pulls things towards the ground is similarly a property of the system. An OS that is designed without properly trying to handle buggy drivers is no different from a plane that is designed without properly trying to handle gravity.
So when Linux has a kernel panic, is it not the driver writer’s fault but Linux’s? Right. Most blue-screen errors in Windows are due to faulty hardware or crappy drivers (the name Terratec makes my blood boil). Making an OS that doesn’t crash on faulty hardware or a driver problem would be a miracle.
Exactly what Rayiner said. It’s not Microsoft’s fault that Nvidia made crappy drivers, but if the design were better, it wouldn’t matter so much.
Uh huh. I’m reminded of the STACK issue with Nvidia drivers a couple of years ago.* I guess if the Linux design were better, it wouldn’t have mattered so much.
*Plus, if you go to the Nvidia forums, you can browse many a problem between the Nvidia drivers and open source. Guess design isn’t everything.
From a technical standpoint, yes, it’s MS’s fault. They shouldn’t allow drivers for unstable hardware to have full kernel access. I say unstable, as it is super common to overclock video cards.
From a technical standpoint, yes, it’s MS’s fault. They shouldn’t allow drivers for unstable hardware to have full kernel access. I say unstable, as it is super common to overclock video cards.
That’s a very curious statement that’s completely beyond reason to implement. How on earth do you propose Microsoft prevent drivers for unstable hardware from having full kernel access? Short of whatever version of Windows in question completely preventing the user from installing drivers that weren’t written by Microsoft themselves (which might *still* be stable drivers for unstable hardware), there is absolutely nothing Microsoft can do to prevent unstable hardware from affecting system stability.
I’m waiting for you to spout, “Linux does it!” to which I’d retort, “Prove it!” because it simply isn’t possible to keep unstable hardware from affecting the stability of systems, regardless of the OS, regardless of how *correct* the driver is, and no OS can determine programmatically “This is unstable hardware” or “This is an unstable driver” or “This combination of driver configuring this hardware causes instability” if it has never seen that hardware and associated driver before, and has no data to judge from. For that matter, not every single piece of hardware that’s overclocked is overclocked to the same specs, because the hardware and operating conditions are not identical all over: what’s stable at 10% overclocking on one system may cause it to crash on POST on another.
Because hardware that transfers a lot of data tends to use interrupts and bus mastering/DMA, even going to a pure microkernel setup with zero kernel-space access by that driver does not absolutely guarantee the system will be stable if the hardware overheats and weird things start happening. I strongly suspect you’re just unhappy that Windows (a lot of people will hate to admit or acknowledge this, but it’s true!) is the best gaming-platform OS (in terms of driver support and games available, along with rendering speed: regardless of another OS perhaps having better drivers/rendering speed for a card, it doesn’t do the user any good if no games are available to use it, does it?) and simply doesn’t allow you to defy the laws of computer hardware and software science and protect you from your need for speed, which pushes you into driving your system beyond its designed limits, and thus having it become unstable. Don’t you think that if the manufacturer felt the hardware could reliably run at that higher speed, that would be the default clock speed? After all, bragging rights for having the fastest hardware for sale translate into profit for them, so their best interests are met when they provide the best value for the customer, and a huge part of that is whether or not the hardware they sell performs as advertised, for as long as it should function, short of lightning strikes and people overclocking it into meltdown situations.
If you want to blame anyone for your overclocked hardware leading to machine hardware/software instability, go complain to the graphics card and chip manufacturers for making it possible to ask the hardware to run until it drops, and stop blaming your self-induced problems on innocent parties.
Jonathan Thompson
Check out this 1996 article where Microsoft talks about the “improvements” by moving graphics into the NT executive:
http://www.microsoft.com/technet/archive/winntas/plan/kernelwp.mspx
“One of the side effects of this change is that now Window Manager, GDI, and graphics drivers have the potential to write directly to other spaces within the Executive, and thereby possibly disrupting the stability of the whole system.”
Well, as said before, the reason it was put into the kernel was to speed up drawing and such (which was all done in software). Now everything is rendered on the hardware (including GDI stuff), and having it in userspace does not affect performance.
//Technically, it’s gravity that causes your plane to crash, but that doesn’t change the fact that the aircraft designer is still at fault.//
Interesting. So it’s your contention that 100% of aircraft crashes are due to faulty construction/engineering, and *NOT* due to pilot (user) error.
Hi, I’m earth. Have we met?
So it’s your contention that 100% of aircraft crashes are due to faulty construction/engineering, and *NOT* due to pilot (user) error.
When it’s on autopilot, yes.
So it’s your contention that 100% of aircraft crashes are due to faulty construction/engineering, and *NOT* due to pilot (user) error.
When it’s on autopilot, yes.
You’re ignoring the fact that the pilot still has to specify what the autopilot is supposed to do: if the plane is flying over a mountainous region, and the pilot decides, “I’m going over there!” and programs the autopilot to home in on a beacon that takes it on a path that intersects a mountain, that’s not faulty construction/engineering, that’s pilot negligence!
Thus, your blanket statement is all wet
Jonathan Thompson
if the plane is flying over a mountainous region, and the pilot decides, “I’m going over there!” and programs the autopilot to home in on a beacon that takes it on a path that intersects a mountain, that’s not faulty construction/engineering, that’s pilot negligence!
Yes, but the autopilot really shouldn’t follow a flight path that intersects a mountain. Modern flight control systems already have mechanisms to discard stupid input from the pilot, and something like this should be caught by the computer.
You’re ignoring the fact that the pilot still has to specify what the autopilot is supposed to do: if the plane is flying over a mountainous region, and the pilot decides, “I’m going over there!”
Doesn’t happen with a computer. There are still black boxes you cannot do anything with as a user, and as a pilot; even more so as a user of a computer.
Thus, your blanket statement is all wet
Rather like the crap you’ve written.
Sure, user error can be at fault, but it’s still the engineer’s fault for not designing the system more robustly in the first place. A properly designed system should not allow the pilot to fly the plane into the ground, at least not by accident.
I look at it like this: a plane ALLOWS a pilot to crash it, but takes many, many precautions to prevent it. It’s generally considered safe, but the pilot still must be trained to know what to do. Likewise, there are precautions in XP (driver certification, for one) to prevent some of this, but driver developers are still expected to know what they are doing.
Can you blame Microsoft? Well, I suppose if you want, but more blame lies on the driver developer who wrote the bad code. And Microsoft is now taking further steps to prevent crashes, and they should be commended for it.
Microsoft already did this once and then moved the drivers back to the kernel for speed reasons. Funny how they had it right the first time.
It’s not that they were right to begin with; it’s just that they chose speed with a possible loss in reliability (depending on how good the video driver was). Now there is basically no speed gain from having it in the kernel, so they are taking it back out.
The more like Unix it will become.
Master Foo Discourses on the Unix-Nature
A student said to Master Foo: “We are told that the firm called SCO holds true dominion over Unix.”
Master Foo nodded.
The student continued, “Yet we are also told that the firm called OpenGroup also holds true dominion over Unix.”
Master Foo nodded.
“How can this be?” asked the student.
Master Foo replied:
“SCO indeed has dominion over the code of Unix, but the code of Unix is not Unix. OpenGroup indeed has dominion over the name of Unix, but the name of Unix is not Unix.”
“What, then, is the Unix-nature?” asked the student.
Master Foo replied:
“Not code. Not name. Not mind. Not things. Always changing, yet never changing.”
“The Unix-nature is simple and empty. Because it is simple and empty, it is more powerful than a typhoon.”
“Moving in accordance with the law of nature, it unfolds inexorably in the minds of programmers, assimilating designs to its own nature. All software that would compete with it must become like to it; empty, empty, profoundly empty, perfectly void, hail!”
Upon hearing this, the student was enlightened.
for the right application.
Why not have purpose-built kernels? If you have a server where hardware changes are not frequent but you need optimal performance, why not run the network stack in the kernel? Omit the sound and video drivers and any other devices that are not necessary for the primary operation of the server. Obviously, any hardware drivers written for this kernel must undergo much more testing, as it is not directed towards home PCs. There would be less hardware in total, and it would cost more, but at least it would be rock solid.
Oh, wait a moment, this was already the case with proprietary vendor hardware with minimal hardware but much tighter drivers. I think they got spanked by the cheaper PC model.
Oh well, we have what we have for a reason, I guess.
How times change. The original NT platform made a big feature of user-mode drivers. Only later did the graphics drivers go kernel-mode, as the 9x and NT streams came together.
So what next, bring back the multiple OS personalities with a POSIX (Linux) compatible user environment?
So what next, bring back the multiple OS personalities with a POSIX (Linux) compatible user environment?
This never went away. POSIX was dropped from the default install in Win2k due to lack of demand, but was still provided via Services for Unix (SFU), a free download for a few years now. Default POSIX is making a return however (at least included with the SKU if not installed by default). Windows Server 2003 R2 ships with an updated and enhanced SFU (I think now called Subsystem for Unix Applications [SUA]) and this subsystem is likely to ship with client and server versions of Vista though nothing has been formally announced yet.
Edited 2005-12-16 16:17
These days, when most applications are bash scripts or Python, Perl, and PHP applications (text parsing everywhere, SQL queries, etc.), there is really no reason not to have a microkernel. If you think you'd lose performance, think again: you already gave up that performance when you started running all those scripting languages.
That's not exactly true: scripting languages are intended for scripts (surprise, surprise!), which largely boil down to sequences of orders to lower-level programs. It is these lower-level programs, written to do things fast, that do the bulk of the work, so the speed of the sequencer is not that important.
I advise you to start saving, because you'll need to buy new hardware when Vista comes out; by the look of things, very few drivers will be compatible with Vista.
As for the step itself: a long-awaited one.
How the drivers are compiled has nothing to do with whether or not they need to run in kernel mode. You’re right that a part MUST run in kernel space due to it being a monolithic kernel (and at this point I’ll remind you that NT has not been considered a true microkernel for quite some time).
Also, I do believe the majority of the X distributed drivers run completely in user mode.
My understanding of Windows is that not only did graphics drivers run in kernel mode, but all of the underlying graphics work existed in kernel space as well. The analogy would be the X server running in kernel space.
Speed hacks Microsoft has given up. What do you think? We can list off the ones they’re still using, and the ones they’ve come to their senses and given up.
I don't think any of us on Linux can start saying, "yes, now Microsoft is catching up with us"... how old is X again?
Vista will probably be really cool... but still bloatware, I guess.
Win 98 needed about 500 MB of disk space, XP 2 GB... so Vista will be... 8 GB?
I'm looking forward to it, because by then Linux will probably still be limping behind OS X & Vista with a stone-age graphics system, and it will mean catching up for the rest of us on Linux.
See what happens?
Maybe Windows will already have to catch up with Linux & OS X when it comes out... maybe, maybe not.
Fingers crossed for Linux development.
So they’ve finally figured out X has the model right.
Well, I hope they'll also introduce a feature like Linux's Ctrl-Alt-Backspace to restart X (well, I mean the GUI), and the possibility to log in remotely (like ssh on Linux) to kill frozen tasks, rather than restarting the PC...
Win9x could do this... well, it would just restart Windows instead of the entire PC (and DOS).
As Linux uses modules (BTW, I am a complete idiot, you're warned!), shouldn't that mean that it is in some ways similar to a microkernel design?
Are there any real-world speed comparisons of monolithic vs. micro... a site?
Because, whatever people say, Linux is nowhere near as reliable as something like QNX or VMS.
Linux, I guess, can be compared to things like Windows & Apple, but it is not in the league of guaranteed (SPLN?) 100% stability & security.
What is stopping people from making a Linux microkernel?
Linux uses modules, but modules just bring the code-organization advantages of a microkernel to a design, not the stability advantages. They still run in the kernel’s memory space, so they can overwrite important kernel data structures when they contain bugs.
I’d guess there are real-world comparisons of micro-versus monolithic kernel speed, but most of them are for older microkernels like Mach. Newer ones like L4 don’t have as mature systems built around them, but there are indications that their rigorously minimal design should allow for high performance.
A true Linux microkernel (with all the major subsystems moved into userspace) would be a huge engineering challenge. Linux contains a couple of million lines of code, and restructuring it so drastically would be difficult. However, there are pseudo-microkernel Linux systems, based on L4 or Mach, that run the entire Linux kernel as a userspace server. One such kernel, L4Linux, has shown that it can get within 5-10% of native Linux in terms of performance.
X runs in user mode with memory protection, and lockups are still quite common with buggy drivers. One of the main reasons is that programming the graphics card incorrectly can overwrite random areas of physical memory or freeze the bus. Current hardware provides no protection against this.
I’m willing to bet that most of those 89% of crashes are due to errors of that kind, and migrating the drivers to user mode will do little to help.
If I understand correctly, actual bus plumbing is still going to be in the kernel. User-space drivers will hook into the kernel miniport driver.
What will improve are the chances of incorrect driver code, with complete access to all memory, inadvertently dumping data somewhere unexpected and giving a nice blue screen.
Why are you willing to bet that? Take a look at the size ratio of NVIDIA’s kernel vs userspace drivers for Linux. It’s 1:10. If the bug rate is similar, there are a lot fewer crashes resulting from incorrect programming of the HW than from bugs in the whole GL stack.
Also, I don’t know if MS will do this, or NVIDIA does, but the DRM (DRI’s kernel driver) verifies command buffers before sending them to the hardware. This greatly reduces the possibility of misprogramming resulting in system crashes or security breaches.
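The verification idea can be sketched roughly like this (hypothetical opcodes and register ranges, not the real DRM interface): reject any buffer that contains an unknown opcode or a command touching a privileged register.

```python
# Hypothetical command-stream checker, loosely modeled on what a
# kernel DRM driver does before handing a buffer to the GPU.
ALLOWED_OPCODES = {"DRAW", "BLIT", "SET_STATE"}
REGISTER_RANGE = range(0x0000, 0x1000)  # registers user space may touch

def verify_command_buffer(commands):
    """Return True iff every (opcode, register) command is safe to submit."""
    for opcode, register in commands:
        if opcode not in ALLOWED_OPCODES:
            return False          # unknown or privileged opcode
        if register not in REGISTER_RANGE:
            return False          # would poke a privileged register
    return True

safe = [("DRAW", 0x0100), ("BLIT", 0x0200)]
unsafe = [("DRAW", 0x0100), ("SET_STATE", 0x8000)]  # out-of-range register

assert verify_command_buffer(safe)
assert not verify_command_buffer(unsafe)
```

The real checks are much more involved (relocations, buffer bounds, and so on), but the principle is the same: a small trusted gatekeeper in the kernel lets the large untrusted driver stack live in user space.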
X runs in user mode with memory protection, and lockups are still quite common with buggy drivers. [etc…]
I’m willing to bet that most of those 89% of crashes are due to errors of that kind, and migrating the drivers to user mode will do little to help.
You might lose that bet, though what you say is true. Moving code to user space is no guarantee of stability. Code that manipulates the hardware still has windows of opportunity to wreck the system quite thoroughly. Not long ago, there was a discussion in this forum in which it was stated, in effect, that program correctness was a better guarantor of system stability than user-space execution. I agree with the sentiment, but that doesn't mean we should leave code in the kernel when it would do just as well in user space.
For one thing, even though it's no magic bullet for system stability, user-space execution can still limit the consequences of failures in the code. As we all know, sh*t happens, so I say: why not limit the damage when we can? Just don't bother singing hosannas to yourself for having done so. For another, the more code running in kernel space, the greater its undesirable impact on kernel latency will likely be. Now there's a problem that could stand to be addressed. Windows needs it, so maybe that's the real upside in Vista of moving graphics to user space.
So, it looks like all this internal Linux research is paying off, look at all the clever new os design concepts MS is coming up with *scoff*
If they can make Vista as stable as Ubuntu and as spiffy as OSX I guess I’ll just have to shut the hell up.
To be honest, most of my experiences with bad graphics drivers are now behind me. Since XP, and the tail end of using 2000 on my desktop at work, I've not really had any issues here.
It seems like Microsoft's certification of drivers has eliminated most of the blatantly broken graphics drivers, so if they are all "stable" now, why remove them?
Yes I know that in theory having them outside the kernel is best; but if they crash outside the kernel will the system cope? Will it reload them? And even if it does what about networking drivers? RAID drivers? Sound drivers? etc?
It seems that bad drivers in other areas are still liable to cause problems so I’m not seeing a huge gain here…
Yes I know that in theory having them outside the kernel is best; but if they crash outside the kernel will the system cope? Will it reload them? And even if it does what about networking drivers? RAID drivers? Sound drivers? etc?
Yes, it can reload them or switch to a different driver. The networking and sound stacks are user mode, but sound drivers are still kernel mode, so there's a level of resilience there as well that is above current versions of Windows, though not as resilient as the GPU. Over time, more will transition to user mode or be sandboxed such that they gain similar resiliency even if in kernel mode.
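That reload-or-switch behavior might be sketched like this (a toy model with made-up driver names, not the actual Windows driver framework): retry the preferred driver a few times, then fall back to a basic one instead of taking the system down.

```python
# Toy sketch of driver resiliency: if the preferred (user-mode)
# driver crashes, "reload" it a few times, then fall back to a
# basic driver rather than crashing the whole system.
def run_with_fallback(preferred, fallback, max_restarts=3):
    for _attempt in range(max_restarts):
        try:
            return preferred()
        except Exception:
            continue            # reload the driver and retry
    return fallback()           # e.g. drop to a basic VGA driver

calls = {"n": 0}

def flaky_gpu_driver():
    calls["n"] += 1
    raise RuntimeError("driver crashed")  # always fails in this demo

def basic_vga_driver():
    return "running on basic VGA"

result = run_with_fallback(flaky_gpu_driver, basic_vga_driver)
assert calls["n"] == 3
assert result == "running on basic VGA"
```

The key design point is that a crash in the preferred driver is just an error the supervisor can observe and handle, which is only possible because the driver isn't executing inside the kernel itself.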
Good to see Microsoft moving a tiny step in the right direction. I figure they might as well die off if they’re not going to go the right direction–however, if they do, then great, may they live on to do it right.
Maybe this has a little to do with Butler W. Lampson working over there. I do hope Microsoft considers the fantastic and progressive possibilities found in the EXOKERNEL design (to which Lampson has commented on and should be more than familiar with).
The future of OSes will continue to allow kernels to have less direct management over apps and things in user space, so they can focus more primarily on resource protection.
Microsoft moving the graphics (control/management) out of the kernel is a good step toward allowing apps to manage themselves with libraries at hand to help facilitate this. The options and possibilities of such a move exponentially increase. Updates and replacement of software components are made easier.
I hate to praise Microsoft, because I can’t stand ’em. 🙂 But, to me, this just means those I thought were on the lowest end of the totem pole are actually smarter than some others I tried to convince of certain merits.
—–
Monolithic Kernel – Protects & Manages resources FOR applications. Level of System Calls (between user space and kernel space): HIGH. Slowest and least efficient.
Microkernel – Protects & Semi-Manages resources FOR applications. Level of System Calls (between user space and kernel space): MODERATE. Moderate speed and efficiency.
Exokernel – Only Protects resources. Safely multiplexes hardware to applications, which manage themselves (with help of own libraries or default OS libraries). Level of System Calls (between user space and kernel space): low to NONE. Very fast and efficient.
—–
I saw where Microsoft’s other OS (Singularity) is still based on a Microkernel. I’d love to see them redesign that one on an exokernel, which (if they do it right) would have the ability to run every software program that has run on any of their other OSes, including their other OSes (which the exokernel can run as an app) and on practically any reasonable hardware.
But we’ll see what else they end up doing. My prediction is still that Microsoft is over (or at least as over as IBM was after the 1980s). 🙂
–EyeAm
Genius Insight. Rebel Thinking. Finger To The Status Quo.
I saw where Microsoft’s other OS (Singularity) is still based on a Microkernel.
The exokernel/microkernel distinction is a bit meaningless for a system like Singularity. Given its fine-grained protection mechanism and lack of a hardware kernel-space/user-space boundary, every separate module can be seen either as a microkernel server or as an exokernel library.
I remember when NT4 was in development and they made big deal about how they moved the graphics code into the kernel to improve speed. There was a lot of criticism about how this would increase the lockups in Windows. I guess they do listen to their customers. It just took fifteen years for them to do it.
Other than the fact that NT 4 came out in 1996, so that would mean 10 years, I agree with you
Next version of Windows will be named Winux. ))
…remember how MS said NT 3.51 was much more stable because the driver space (GUI drivers, etc.) was not at ring 0 but at ring 3, and how Novell sucked because NLMs were at ring 0? And how in NT 4 they moved the drivers to ring 0 because they could not get the speed they needed (and dropped the whole ring 0/3 talk)? Well, they (MS) just said they are moving the GUI drivers back to ring 3 in Vista to provide more stability. Too damn funny.
I know that x86 has 4 privilege levels, where 0 is the kernel privilege, where everything runs in the same address space, and 3 is the user level, where everything runs in separate address spaces. What about 1 & 2? Actually, Windows only runs on x86, so why don't they use those levels to put drivers and services there? (Actually, I never heard of an OS that uses them.)
Actually, Windows only runs on x86, so why don't they use those levels to put drivers and services there? (Actually, I never heard of an OS that uses them.)
Windows does not only run on x86. Besides x86-64, it also runs on Itanium currently. NT was designed to be largely architecture independent. Some of the architectures it originally ran on (and may still run on internally) didn’t support more than 2 rings.
Edited 2005-12-16 21:00
“Actually windows only runs on x86 so why they…”
I meant
“Currently windows only runs on x86 so why they…”
Even that statement is false as there is still Itanium. The bottom line is that supporting the extra rings compromises their design principles and could make it harder to support other platforms should the need or opportunity arise. The Xbox OS evolved from NT, for example, and some of their platform work in other markets may also benefit from the current architecture.
Even if you only consider the PC space, if x86 became the only architecture they supported, it’s much better that they only support it for business reasons rather than technical limitations.
Microkernels have independent modules which communicate, usually, through sophisticated channels so that as much as possible can run in user space instead of kernel space.
This adds complexity on modern hardware, and it also adds program complexity (the communication).
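That message-passing style can be sketched in miniature (Python queues standing in for real IPC channels; the "file-system server" is a hypothetical user-space service, not any real microkernel API):

```python
import queue
import threading

# A "file-system server" running outside the kernel: it receives
# request messages over a channel and sends replies back.
requests, replies = queue.Queue(), queue.Queue()

def fs_server():
    while True:
        op, arg = requests.get()
        if op == "read":
            replies.put(("ok", f"contents of {arg}"))
        elif op == "shutdown":
            replies.put(("ok", "bye"))
            break
        else:
            replies.put(("error", "unknown op"))

t = threading.Thread(target=fs_server)
t.start()

# A "user process" makes a system call by exchanging messages,
# instead of trapping into a monolithic kernel.
requests.put(("read", "/etc/motd"))
status, data = replies.get()
assert status == "ok"

requests.put(("shutdown", None))
replies.get()
t.join()
```

Each request is a round trip through the channel, which is exactly the communication overhead (and the extra program complexity) the paragraph above is talking about.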
It’s my understanding that NT was started with being a practical microkernel in mind but ceased being a microkernel before it was ever released for sale.
I believe Mac OS X is using a microkernel.
The biggest problem with microkernels hasn't been their complexity for the computer; it's been their complexity for the designers, as few have been completed to a production level, and those that have, TMK, took a great deal of time and research.
I think it’s an admirable design, but in no way should the user really care. If it’s not crashing, or causing odd behaviour (includes unusually bad performance), the kernel is doing its job.
I don’t think there’s a single piece of software directed at consumers which comes with a guarantee. A clause like “we cannot be held responsible for lost data/finances” is almost ALWAYS included in the license agreement for FOSS and proprietary software alike.
Fools…
PS: Got the facts? And how did you like them?
Cool. So NT4, Windows 2000, Windows XP, and Windows XP 64 were all a design mistake, and we return to the NT 3.51 days.
The reason they can do this now, of course, is that the GUI will all be buffered in memory on the GPU, so it won't be laggy.
Mind you, I miss the way Windows used to redraw "some time later" in NT 3.51. I think it looked cool.
from the article…
“The company has already announced to developers that most drivers, including graphics, will run in user mode – which means that they don’t get access to the privileged kernel mode (or Ring 0)”
Is it just me, or is someone else interpreting the statement
“Vista’s graphics subsystem, codenamed Avalon and formally known as the Windows Presentation Foundation, will be pulled out the kernel because many lock-ups are the result of the GUI freezing, Microsoft infrastructure architect Giovanni Marchetti told us exclusively yesterday.”
..as the admission that a perfectly designed driver running in kernel space could choke up when OTHER threads in the kernel might be causing problems (in this case the GUI which also runs in the kernel.)
The whole reason for having a kernel is to restrict the code which has to have near perfect design and validation away from code which can be more imperfectly designed.
However, the more code you have running in the kernel, the greater the amount of interaction between said code, and the greater amount of task/thread blocking which will occur.
Maybe the kernel mode graphics card drivers have a tendency to fail due to timing issues?
I've also heard that one factor driving designers toward a monolithic kernel design (at least on the Intel x86 platform) is the heavy speed penalty of switching processor modes between ring 0 and ring 3.
Further, I've heard, but not been able to verify, that on the PPC platform a privilege-level switch is on the same order of magnitude in speed as a subroutine call.
Ciao.
People seem to miss some important things about this "move" from MS.
MS wouldn't have made this decision if it were sticking with the old way of handling graphics (like Win2000 and XP). And by that I don't mean kernel vs. user-space drivers; I mean the way the graphics "works".
With Vista, MS will move the graphics to the new "Aero". Then all graphics are handled by the 3D chip in your computer (even your 2D desktop), so moving it out of the kernel doesn't have to be that bad for performance.
But if MS had stuck with the old way of handling the GUI, they would never have moved it from the kernel to user space.
Aero will be processed 100% by the 3D chip and be completely multithreaded.
With Aero they can have the best of both worlds: stability and performance.
This is the way Linux should go too, and then it would have the chance to get rid of the poorly designed X Window System (slow, not multithreaded, and, last but not least, very COMPLEX). This is not an easy task for an open source OS; it requires that the leading graphics chip makers (ATI and NVIDIA) fully cooperate with the open source teams to make the next generation of Linux the best.
Today Linux is a great-performing kernel when it comes to server tasks like file sharing and Web serving. But when it comes to the GUI, it's a joke.
Linux is now losing even more ground. It will take years for the open source world to make something like Aero.
I hope I'm wrong! Now it's time for Red Hat and the other leading Linux distributors to finally show some innovation!!!
To summarise: kernel vs. user space is not what it used to be. There is a new "challenger", the 3D chip...
"With Vista, MS will move the graphics to the new "Aero". Then all graphics are handled by the 3D chip in your computer (even your 2D desktop), so moving it out of the kernel doesn't have to be that bad for performance."
It's the only way for Microsoft to go. Aero is just eye candy, and who gives a crap if your window border is transparent?
"This is the way Linux should go too, and then it would have the chance to get rid of the poorly designed X Window System (slow, not multithreaded, and, last but not least, very COMPLEX). This is not an easy task for an open source OS; it requires that the leading graphics chip makers (ATI and NVIDIA) fully cooperate with the open source teams to make the next generation of Linux the best."
X is not poorly designed; XFree86 just made a mess of it. X.Org is now cleaning it up and has made huge strides in a short time.
"Today Linux is a great-performing kernel when it comes to server tasks like file sharing and Web serving. But when it comes to the GUI, it's a joke."
Linux is the kernel, and the GUI has nothing to do with it. The window managers on Linux have many advantages over Windows.
"Linux is now losing even more ground. It will take years for the open source world to make something like Aero."
Since eye candy is not a priority, why would an Aero-like GUI be at the top? The framework is there in Linux, but you have to remember it took Microsoft years to do Aero, and it was not easy.
"I hope I'm wrong! Now it's time for Red Hat and the other leading Linux distributors to finally show some innovation!!!"
Without Linux there wouldn't be much innovation in Windows; the pressure from Linux is forcing Microsoft to innovate to keep their monopoly.
I thought Mac OS X has been using the graphics chip to render its UI?
Am I wrong?
If the driver is buggy, what are they going to do, switch to the standard VGA driver?
I wonder if this is to protect the system from drivers or from Avalon itself...
How is this going to play with porting Avalon to XP? Any clues?
I'm wondering if a new, more structured, better organized and safer video driver API in Vista will allow the creation of something analogous to NDIS wrappers, at least for those OSes that use a similar model.
The only good design decision in Windows was putting the GUI in the kernel. With this stuff, Microsoft is certainly going to make another big mistake.
Anyone remember Windows Me and all the hype about that? Personally, I think removing the graphics from the kernel is a good thing, as long as there's a way to recover. IE is usually my problem, and then it takes ages to recover. All we need now is an openMosix for Windows and we are set...
Longhorn vista blackthorn.. or something else.. I might buy @ SP4
For once be honest and tell me: have you ever written a driver on Windows? Have you ever gone through the DDK?
Windows has the best documentation and a much more uniform driver model. If you still say it's Windows' fault when a driver crashes, I am speechless. Tomorrow you will tell me that NDIS drivers written for Windows, running on Linux via NDISwrapper, are more stable on Linux.
BTW, there is a saying in India, "Koop Mandkoop": a frog in a well doesn't even know the outside world exists, and if it does, how that world is, so it always thinks its world is the best and doesn't even want to come out of it.
Now to your analogy: if a plane crashes, the factor may not be gravity; it may be human error, a bad temperature sensor, bad weather, etc. How many times do you blame Boeing for that? Even a semi-intelligent person would first identify the reason for a crash and then lay blame where blame is due. Zealotry like yours is really cheap.
For once be honest and tell me: have you ever written a driver on Windows? Have you ever gone through the DDK?
Nope. I know how the driver model works, though.
Windows has the best documentation and a much more uniform driver model.
I don't dispute this (Microsoft's MSDN documentation is indeed very good), but exactly how does this protect against buggy drivers? Linux has a very strong policy of "if this doesn't need to be in the kernel, it gets put in userspace". That's why fundamental stuff like device naming is done in userspace in Linux. Windows doesn't go to the same lengths to protect against buggy kernel code.
If you still say it's Windows' fault when a driver crashes, I am speechless. Tomorrow you will tell me that NDIS drivers written for Windows, running on Linux via NDISwrapper, are more stable on Linux.
Of course not. But X drivers are indeed more stable than GDI drivers!
Now to your analogy, if a plane crashes, the factor may not be gravity, the factor may be human error, bad temperature sensor, bad weather etc, how many times do you blame boeing for that?
You can bet that when a $100m plane crashes, the company does a little bit of work trying to figure out how to prevent such crashes in the future. Even if it's human error or gravity or a faulty temperature sensor, they try to work around it. There is an entire field of cognitive control-systems design, a big part of which is designing control systems so that pilot error is less likely. Again, the things that cause errors are generally fairly well known. It's the job of the system designer to try to foresee these problems and take safeguards against them. The X folks did this, and rightly put graphics drivers in userspace. The Windows folks (even though they knew better, given NT 3.x!) did not.
Even a semi-intelligent person would first identify the reason of a crash and then blame where blame is due.
It’s not a matter of placing blame. It’s a matter of preventing problems. There is no hope in decreasing the bug count in drivers (you might as well try to change gravity!). The best way to ease the problem is to take active steps to work around things you cannot change.
I’ve had X crash on me more often than I’ve had all of Windows (2000 on) crash.
I’ve had X crash on me more often than I’ve had all of Windows (2000 on) crash.
So….
When windows 2000 crashes, your whole system crashes.
When X crashes you just need to restart the X server.
Well, one, either way you still lose all the work you were doing.
Two, XP (the whole OS) takes about the same amount of time to start up as KDE or GNOME.
Can you please be more specific as to what Windows could do to prevent faulty drivers from crashing the system?
I mean, you are making all these statements without once giving any technical information on how this can be done. Windows has the largest driver base, and almost 8 drivers are written every hour for Windows. Due to its popularity, drivers are often written by unskilled people too. I really fail to see why you blame Microsoft here without giving any really good technical reason where Microsoft screwed up.
"For once be honest and tell me: have you ever written a driver on Windows? Have you ever gone through the DDK?
Nope. I know how the driver model works, though."
So be honest again... have you ever written a driver for any OS?
I don't think you have, because what you are criticizing Windows for is just as applicable to Linux. And there is no way around it. Whether you build the driver into the kernel in Linux, or load it with modprobe, it is still running in kernel-land. And it still requires direct access to the hardware to do its job. And any time you give something direct access to the hardware, you allow it to bypass the control of the kernel. And once you have done that, it's no longer possible to prevent a buggy device driver from crashing the system. That's true whether it is Windows, Linux, or any other OS.
Edited 2005-12-18 21:36
Audio, then video, out of the kernel; then what? A less privileged user mode, then a redesign of the networking stack, then... to some this seems like a complete rewrite or redesign of the OS. But is it good enough? Maybe not, at least not with Vista, because I need to hear something about controlling how applications interact, how to keep applications from gaining privileges, about self-healing, crash control, decent system logging, and about refusing to install applications that don't comply with safe computing standards (I mean refusing to install applications, including games, that require being run as an administrator). All MS is doing now is making major changes that resemble Unix and have been overdue since 2000. I need more from them before switching all my machines to Mac/Intel and Linux/Intel by next year.
I need to hear something about controlling how applications interact, how to keep applications from gaining privileges, about self-healing, crash control, decent system logging, and about refusing to install applications that don't comply with safe computing standards
Do a search. Everything you mention has been implemented. Search MSDN for LUA, UAP, TxF, Common Logging File System, Kernel Transaction Manager, Transactional Registry, Windows Driver Foundation, Driver Install Framework, Protected Administrator, Code Access Security, Secure Execution Environment, Device Installation Architecture, Driver Quality Signature, Network Access Protection, Windows Feedback Platform, I/O Cancellation.
I have been testing Longhorn since the early alphas, and so far the logging is so bad, and so little improved, that it doesn't really let you know what is going on underneath your system.
On Vista 5259, the latest build I reviewed, eventvwr.exe didn't give me intuitive, informative info about why the system was hanging, and explorer.exe is very unstable; even letting Windows find solutions for the hang did not solve the problem. Besides, I was able to install incompatible applications that let me mess up the system; this is OK for me, but bad for computer-savvy people who abuse their computers and are the source of most of the instability Windows faces.
Try to install Windows Vista on an incompatible system and you will get a freeze that returns no errors whatsoever; even a BSOD would be welcome in this situation. There are still no informative messages to let me know which component is incompatible so I can replace it with a compatible one (e.g. graphics cards, sound cards, NICs, modems, USB, RAID cards, etc.). Indeed, I need this feature for customers who wish to upgrade their systems to Windows Vista.
Besides all of this, I agree with you that MS is changing the things you mentioned above, but in reality their solutions are not reliable. (Do you remember their security hype, and how their OS would be very secure after implementing the firewall?)
The event viewer is one of a few UI pieces that won’t be fully implemented until Beta 2 (PnP is also still in development AFAIK). I’m not sure if it’s included in current Betas, but you should expect MS to ship an upgrade compatibility checker as they’ve done for some time now with previous OSes. There will also be a hardware compatibility list published online as they’ve done for previous OSes as well. There are Logo guidelines for system builders online already.
If you’re running XP SP2 with the firewall enabled (and preferably AU), it should be very secure. XP RTM should be as well though there’s a case where the network stack could start before the firewall does (fixed in SP2). The security issues w/ XP aren’t from cracking the firewall. Most result from users either not running the firewall (or any firewall) or from manually running malicious programs (compounded by most home users having Admin rights).
Help! Help! I’m trapped inside the kernel. Hey, what’s this Bit do?
This is not about drivers! Even on X.Org there are graphics drivers running in kernel mode (the nvidia module, anyone?). I don't think there won't be any kernel drivers in Vista, at least a very low-level one; in the end it is their hardware, and they (NVIDIA/ATI) are not going to tell anybody how it works.
What they are doing is ripping the GDI out of the kernel, which is best compared to running GTK+ or Qt. Now that is a good idea :)
http://www.microsoft-watch.com/article2/0,2180,1902540,00.asp
TechWorld is reporting that Microsoft plans to move graphics outside of the Windows Vista kernel by pulling the Windows Presentation Foundation (WPF, formerly codenamed “Avalon”) out of the Vista kernel.
We asked Microsoft for clarification. Here’s the official statement: “Because WPF is largely written in managed code on the common language runtime, it never ran in kernel mode. There are elements of WPF (called the MIL) that are written in unmanaged code, but that code also largely runs (and always has run) in user mode. Insofar as WPF needs to touch kernel mode stuff (e.g., drivers), it interacts with them through the existing DirectX APIs. The user mode and kernel mode aspects of the WPF architecture haven’t changed,” a Microsoft spokesman told us. And nothing has changed in terms of Microsoft’s plans for delivering WPF as an integrated part of Vista, the spokesman added.
hraq:
"cause I need to hear something about controlling how applications interact, how to keep applications from gaining privileges, about self-healing, crash control, decent system logging, and about refusing to install applications that don't comply with safe computing standards"
I hope the Linux folks don't forget that there are more stable & advanced OSes out there... compared to Linux itself.
But hopefully Linux will get there one day... if we stay open to new ideas & approaches.
Just IMO.