“TechWorld is reporting that Microsoft plans to move graphics outside of the Windows Vista kernel by pulling the Windows Presentation Foundation (WPF, formerly codenamed ‘Avalon’) out of the Vista kernel.” MSWatch asked Microsoft for clarification. Here’s the official statement: “Because WPF is largely written in managed code on the common language runtime, it never ran in kernel mode.”
So this is a “you had no idea what you were writing about” kinda statement. So Microsoft won’t move graphics out of kernel space in Vista, because WPF was never intended to run in kernel mode in the first place…
On the other hand, the TechWorld article is not entirely wrong – it is a change compared to Windows XP. So yes, they are moving graphics out of the kernel, but not out of the Vista kernel, because it was never inside it, and not out of the XP kernel either, because WPF isn’t written for XP…
That’s what happens when you let “php+mysql @ $3.95/mo” kids write tech articles…
a lot like this site… heh
No, they don’t move graphics out of the kernel: Avalon is a high-level layer, more like an API. It’s likely rendered by some lower-level stuff like OpenGL or the normal Windows graphics drivers.
This isn’t about Avalon. It’s about Vista’s display driver architecture (VDDM). Display drivers written for Vista consist of a display driver and a miniport driver. This is not unlike the Windows 2000/XP driver model, except that in those OSes both driver components and the graphics engine were in kernel mode.
In Vista, the graphics engine and display driver are in user mode while the miniport is still in kernel mode.
Here are a couple of diagrams to make things clearer:
2000/XP
http://msdn.microsoft.com/library/en-us/Display_d/hh/Display_d/Vide…
Vista
http://msdn.microsoft.com/library/en-us/Display_d/hh/Display_d/Disp…
beat me to it Anonymous (IP: 70.70.144.—) 🙂
What is really important to note is that this is the FIRST time the rendering-extensions portion of the driver (the GFX engine components supplied by OEMs) has actually been moved out of kernel mode or OS subsystem address space.
Most people (including nearly every poster on OSNews) misunderstand the NT 4 architectural changes that were made. In that case, the Microsoft-supplied user+GDI windowing and graphics subsystem components were moved out of the user-mode Win32 subsystem (csrss.exe) and into kernel mode as a driver (win32k.sys). The vendor-supplied components (the miniport and the hardware-specific rendering routines) were always either in kernel mode or running in the same address space as vital operating system code (csrss.exe). The code was NEVER isolated in a separate (non-OS-critical) address space.
It is also very important to note that any failure that occurred in csrss.exe, despite it running in user mode, would take the system down with a bluescreen indicating that a fault in a critical subsystem (csrss.exe) had occurred. Csrss.exe is the integral user-mode Win32 application environment subsystem and, as such, the system can’t function without it.
The stability issues with NT 4 were a result of a rushed development process, not fundamental architectural issues. Moving the user+GDI code to kernel mode made tremendous sense, as there was little logic to keeping it in csrss.exe when all sorts of “rule-breaking” and resource-expensive tricks were needed to improve performance (thread pairing, special fast context-switch routines, shared buffers, etc.). Windows Server 2003 is very stable by any measure (the most stable version of NT to date) and is based on a nearly identical graphics driver model to NT 4’s.
Now, what has changed in Longhorn is that some of the hardware-specific rendering routines (the Direct3D-related ones) that used to run in kernel mode have been moved into user mode via the D3D runtime environment, which is used both by D3D applications and by the new compositing display engine (uxss.exe), whose output/back end is entirely Direct3D based.
It is interesting to note that user+GDI applications still call (via stub DLLs) into routines hosted in the KERNEL-mode win32k.sys driver, which in turn calls functions in the new DirectX kernel-mode subsystem rather than the old vendor-supplied routines. This new DirectX kernel subsystem has a GPU scheduler, a memory manager and the ability to render GDI apps either directly to the screen (via the miniport driver, if the compositor is turned off) or to an off-screen texture buffer which is then used as a texture by the compositor (uxss.exe) and placed on top of a 3D surface. This is how Longhorn can switch between composited and traditional modes on the fly.
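To make the redirection step concrete, here is a minimal Direct3D 9-style sketch of the technique (assuming an already-initialized IDirect3DDevice9* and made-up buffer dimensions). It illustrates render-to-texture compositing in general, not Vista’s actual code:

#include <d3d9.h>

// Illustrative only: per-window redirection as described above.
void RenderRedirected(IDirect3DDevice9* dev)
{
    IDirect3DTexture9* backing = NULL; // off-screen backing store for one window
    IDirect3DSurface9* target  = NULL;
    IDirect3DSurface9* screen  = NULL;

    // Create a render-target texture standing in for the window's surface.
    dev->CreateTexture(1024, 768, 1, D3DUSAGE_RENDERTARGET,
                       D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &backing, NULL);
    backing->GetSurfaceLevel(0, &target);
    dev->GetRenderTarget(0, &screen); // remember the real back buffer

    // Step 1 -- redirect: the app's drawing lands in the texture, not on screen.
    dev->SetRenderTarget(0, target);
    dev->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0, 0, 64), 1.0f, 0);
    // ... window contents would be drawn here ...

    // Step 2 -- composite: the compositor samples the texture onto 3D geometry.
    dev->SetRenderTarget(0, screen);
    dev->SetTexture(0, backing);
    // ... draw a textured quad where the window should appear on the desktop ...

    screen->Release();
    target->Release();
    backing->Release();
}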
It is VERY important to note that this new Microsoft-supplied kernel-mode subsystem takes over much of the work the old vendor-supplied routines were responsible for. Only the D3D implementation-specific code is left for the vendor to implement, and that runs in the user-mode uxss.exe process (the aforementioned compositor) or in the user-mode address space of a Direct3D application. It is through the combination of Microsoft taking over a large amount of the hardware control process and moving the vendor-supplied portions of code to user mode that a big gain in stability is expected. (Note: nowadays it is very rare that Microsoft-supplied kernel-mode code is responsible for system stability issues.)
In short, the user+GDI pieces that were moved to kernel mode as of NT 4 are still in kernel mode. Also, the lowest level hardware control driver, the miniport driver, is still in kernel mode. However, much of the code the vendor used to write has been subsumed by the DirectX kernel mode subsystem. What remains, that isn’t miniport, is in user mode as part of the D3d runtime environment.
All of this, by the way, sits BELOW Avalon, which is a display PROGRAMMING model that wraps around the D3D engine to provide an easy-to-use, device-independent, vector-based presentation engine (think of it as a super-wrapper for D3D).
I hope this clears things up.
In short, the user+GDI pieces that were moved to kernel mode as of NT 4 are still in kernel mode. Also, the lowest level hardware control driver, the miniport driver, is still in kernel mode.
I think this is crucial for (mis)understanding the claim that “Vista moves graphics out of the kernel”. The only thing that has moved to userland (ring 3) is SOME of the vendor-supplied driver logic – the user-mode display driver.
Inside ring 0 (kernel mode) there are still:
1) vendor-supplied display miniport driver
2) win32k.sys, which on NT 5.2 and earlier contains the USER/GDI/CON logic, and which on Vista mostly routes this work to the kernel-mode DirectX driver
3) dxgkrnl.sys, which is basically the core of the Vista graphics subsystem – even OpenGL is routed through it – and which is obviously NOT “moved out of the kernel”.
So basically the user-mode WPF components are analogous to what gdi32.dll/user32.dll/comctl32.dll etc. are for USER/GDI today on WinXP.
Also, the WinFX redistributable (which contains WPF/WCF/WWF) will be back-ported to WinXP (you can download the beta now), and I think this very fact dismisses most claims of moving graphics to userland, since WinXP DOESN’T and WON’T support LDDM and you DON’T need to reinstall your GPU drivers after the WinFX installation.
So from how you explained it (ivans), Microsoft has basically taken on more of the back-end work so that the vendor needs to do less to get their driver up and running – in theory that sounds nice, but it simply pushes the responsibility for (in)stability back to Microsoft – it’s a nice way for Microsoft to implicitly state, “the driver manufacturers can’t get their shit together, so we’re going to take on a large portion of the work involved”.
As for instability per se, it isn’t always the driver’s fault; I’ve seen instability that can be put right down to the poor quality of the actual hardware – I’ve run Matrox cards for most of my computing existence, and coupled with their drivers, I’ve never experienced the kinds of instability that I saw with my first (AND LAST!) Nvidia graphics card.
The issue shouldn’t be so much “let’s copy [operating system]” but asking HOW and WHY they can get performance so similar to Windows while their graphics ‘gear’ sits outside kernel space – Windows Vista was the perfect time to let loose with the slicer and remove the crap that was causing problems, but like so many other releases, Microsoft crumpled like a styrofoam cup and scaled back their vision to something resembling a service pack.
“it’s a nice way for Microsoft to implicitly state, ‘the driver manufacturers can’t get their shit together, so we’re going to take on a large portion of the work involved’.”
They did it for that reason, and because they needed to implement a controlled graphics pipeline, video memory management and device virtualization. The new DirectX kernel-mode subsystem is a very sophisticated piece of engineering (I highly recommend reading some of the published documentation – it’s really cool stuff) and, well, the old model just wasn’t going to cut it going forward.
-Mak
RE: makfu
Yes, but what I am saying is this: does it actually *REALLY* fix the inherent flaws that exist within the Windows driver framework, or is it simply a matter of rearranging the deck chairs?
Yes, it’s all very nice to have sexy, sophisticated technologies, but if the underlying structure is completely buggered, is it really smart to build all that on a rotting corpse?
Microsoft needs to have a good long hard look at the MacOS X driver kit, and the same goes for the way they’ve done their 3D and 2D accelerated framework – and it’s not about copying something, but accepting that if something works, why not research it?
The only thing that saves Microsoft from its NIH syndrome is its massive R&D budget – if it weren’t for all the cash they have on hand, their ability to unjustifiably reinvent the wheel each release would be difficult to sustain; what Microsoft needs is some hardship – take the $50 billion off them and maybe they would finally take pragmatic approaches to fixing problems using existing technologies rather than reinventing the world for the sake of it.
“Microsoft needs to have a good long hard look at the MacOS X driver kit, and the same goes for the way they’ve done their 3D and 2D accelerated framework –”
Whatever.
The only thing that saves Microsoft from its NIH syndrome is its massive R&D budget – if it weren’t for all the cash they have on hand, their ability to unjustifiably reinvent the wheel each release would be difficult to sustain; what Microsoft needs is some hardship – take the $50 billion off them and maybe they would finally take pragmatic approaches to fixing problems using existing technologies rather than reinventing the world for the sake of it.
This is the first time the wheel has been reinvented in around 10 years. The $50 billion has nothing to do with it, as MS teams don’t have unlimited resources and often have tighter controls than many smaller companies.
MacOS is currently trailing MS in graphics architecture, and Apple has a different set of requirements. Until recently, Apple only pushed composition out to the GPU, and didn’t worry about accelerated drawing or resolution independence until after “Longhorn” development was well under way. MS’ driver model has been complete for months now, while Apple’s is still in development.
I don’t understand the whole pragmatism comment. How is it not pragmatic to enhance the current architecture and technologies while maintaining compatibility for your customer base? If they weren’t being pragmatic, they’d have just started from scratch, broken all compatibility, and based everything on new concepts just for the heck of it. There’s a difference between taking pragmatic approaches to pushing technology forward and just using the same technology everyone else is and staying in the same place.
1) They had a working UNIX, Xenix, and yet they went off and reinvented the wheel – they claimed they could create a better UNIX than UNIX, one that was going to kill all the UNIXes simultaneously – and here we are, 15 or so years later: the UNIXes have merged and, to be honest, the only one killed through natural economic evolution is IRIX – everything else is going quite peachy.
2) What is wrong with Apple’s approach? They’re gradually bringing technology to their platform as it is required; as graphics processors become able to handle the capabilities, they bring them to the forefront – that is pragmatism.
You also have failed to take the time to ask WHY these steps were made – limited bandwidth between the video card and the motherboard is one reason – there’s no use dumping the whole lot onto the GPU if the bandwidth is so anemically bad that there is a massive performance penalty – and I’m sorry, “massive buffering” in the form of video memory isn’t going to fix the problem. Again, it’s about being pragmatic and realising the limitations of CURRENT hardware rather than taking the Microsoft approach of “wait till the hardware catches up”.
3) The constant flogging of the dead horse: why do they continue to guard their IE core as if it were some sort of “important module”? Wouldn’t it be more logical to use an open-source core rather than holding on to something that is slipping further and further behind the competition?
The same goes for a large number of modules that Microsoft would rather create themselves than say, “we can get a whole heap of free code and simply mould it to our needs” – that’s what Apple has done, and now they’re laughing all the way to the bank, possibly stopping off at the pub for a pint in celebration.
1) UNIX has weakened significantly ever since Windows NT came onto the market, and there are still a lot of Windows technologies that are only just appearing on *n*x or won’t be on *n*x for some time. Windows has basically killed Unix (commercial Unix, anyway), and Linux is eating the leftovers NT left behind.
2) GPUs could’ve handled accelerated drawing of the UI when Apple first started development. There was enough bandwidth to provide a faster UI experience than what was offered until now. Apple didn’t fully utilize the current hardware and did wait (though not for GPUs to catch up – maybe for their lagging AGP support). MS’ approach isn’t waiting until the hardware catches up. They are building a generalized architecture for on- and off-screen rendering and calculation. They are using an extended version of a runtime that is probably 4+ years old (DirectX 9), but they have also designed the architecture for future versions of DX, so as the hardware improves, so will the performance and experience. And again, this is about more than a pretty, fast GUI. It’s about a generalized platform for GPU-accelerated calculation, and it gets really interesting with D3D 10 (currently in beta). This is why they have a scheduler, virtual memory, resource sharing, and more, all for the GPU(s). This is about working with the present while planning for the future.
3) No, it would not be more logical to use an open-source core. There’s absolutely no business case for it, and it’s far from pragmatic to do so. There’s a large base of customers that depend on the services available via IE. If they dropped it and used a different base, they’d have to start from scratch building back in API support.
Apple’s “we can get a whole heap of free code and simply mould it to our needs” approach has also left them vulnerable to some of the same issues as *n*x, because they use a common base in many cases. Plus, you would again have to tailor the code to fit your architectural, integration, and usability needs. Also, tell me: how is it easier for MS to take a bunch of BSD code, for example, and port it to managed code than to write managed code from the start? I guarantee you the ported code will perform worse than code written from scratch. Plus, they’ll have a better understanding of the codebase if they write it themselves rather than bringing in a bulk of foreign code and having to look it over and trust what it does and how it does it.
You have a weird understanding of what’s pragmatic. Taking a bunch of open-source code and NeXT code was pragmatic for Apple because their efforts to do it themselves fell through. It was not a pragmatic decision for customers, because it wiped away everything and didn’t take their business considerations into account. Apple has upgraded several times in a short span, each time breaking some ISVs/IHVs and adding basic features that should’ve been present in the first release, since they were already present in both *n*x and Windows. Their kernel still needs improvements to seriously compete in the server space.
Usually the line of the day is that MS doesn’t innovate. Yet when it becomes obvious that they do, they get urged to just use the same old code everyone else is using. What happened to “Think(ing) Different(ly)”?
Windows basically has killed Unix (commercial Unix anyway)
I don’t think it’s possible to shoot wider than that.
and Linux is eating the leftovers NT left behind
Nope. Linux is taking over commercial Unix’s share, as well as winning over some people running old Windows servers like NT 4.0. New Windows servers are merely replacing old Windows NT or, to a lesser extent, 2000 installations. Sorry, but Windows is doing nothing and going nowhere except where it has always been.
Apple’s ‘we can get a whole heap of free code, and simply mold it to our needs’ approach has also left them vulnerable to some of the same issues
Like what?
I don’t think it’s possible to shoot wider than that.
You do know that NT’s primary competition has been commercial Unix distributions from vendors like Sun, HP, IBM and others, right?
Nope. Linux is taking over commercial Unix’s share, as well as winning over some people running old Windows servers like NT 4.0. New Windows servers are merely replacing old Windows NT or, to a lesser extent, 2000 installations. Sorry, but Windows is doing nothing and going nowhere except where it has always been.
This isn’t even worth arguing. It’s historical fact. Look at commercial Unix share before NT’s introduction and look at it today. Look at recent wins for Windows and Linux that have come at the cost of commercial Unix installations. Look at highly touted Windows-to-Linux migrations that still haven’t happened because the entity didn’t account for the lack of equivalent OS services or software on Linux.
Like what?
Like vulnerabilities in OpenSSL, Kerberos, PHP, Telnet, Sendmail, and more. Apple inherited those directly due to their ‘we can get a whole heap of free code, and simply mold it to our needs’ approach.
Like vulnerabilities in OpenSSL, Kerberos, PHP, Telnet, Sendmail, and more.
Are you suggesting Microsoft is writing software without vulnerabilities? Since when?
If it’s possible for you to fathom this: Apple doesn’t have to spend all their time fixing vulnerabilities in OpenSSL, Kerberos, PHP, Telnet, Sendmail, etc. And they don’t have to spend their time reinventing those wheels we so commonly forget and take for granted today.
Every computer has ssh and telnet and a mail server. Well, almost every computer.
Are you suggesting Microsoft is writing software without vulnerabilities? Since when?
No, but the original question dealt with MS writing their own code vs. just using what everyone else does. The poster asked what vulnerabilities Apple had gained from their approach, so I pointed them out.
If it’s possible for you to fathom this: Apple doesn’t have to spend all their time fixing vulnerabilities in OpenSSL, Kerberos, PHP, Telnet, Sendmail, etc. And they don’t have to spend their time reinventing those wheels we so commonly forget and take for granted today.
But it doesn’t change the fact that they’ve gained the same vulnerabilities as OSS code by basing OS X on the same code. The time they spend fixing such vulnerabilities wasn’t an issue. It was that their approach of appropriating OSS code left them vulnerable to the same exploits and they have more now than they did w/ previous versions of the OS that used their own code.
Every computer has ssh and telnet and a mail server. Well, almost every computer.
But they don’t all use, or derive from, the same implementation, so they aren’t all left vulnerable through a common base.
I decided to finally register.
I wrote the rather long description that you summed up pretty well. The real issue causing so many people to misunderstand the LDDM is that the industry press (populated by people who really don’t know very much about CompSci or OS internals) keeps making sweeping statements that mischaracterize what Microsoft has actually said.
All of this is very clearly (in my mind) publicly documented in the MSDN library.
“Because WPF is largely written in managed code on the common language runtime.” “2 GB is the ideal configuration for 64-bit Vista.”
Sorry for stitching together quotes from two different sources. Of course the situation is a bit more complex than that, but I can imagine Vista’s out-there hardware requirements have something to do with using managed code.
Oh, you mean those fake hardware requirements that were made up at some point, and then subsequently posted all over the Internet as the official Vista hardware requirements?
Yeah, don’t believe it. Vista will run fine on 512 MB — and I know this because the latest betas do, even with all of their debug/unoptimized code.
An irony, of course, is that by the time Vista is sold these fake statements may not be different from the reality.
Today’s budget notebook or PC ships with 512 MB of RAM, and more often than not you can get 1 GB of RAM in a decent new personal computer.
By the time Vista is sold preloaded on new PCs, 1 GB of RAM could be a low-budget option.
An irony, of course, is that by the time Vista is sold these fake statements may not be different from the reality.
That still doesn’t make those statements true as they were usually touted as requirements to run Vista. Just because the mainstream new computers of the time might meet those fake requirements doesn’t mean you suddenly can’t run Vista on lesser hardware that will be detailed in the official requirements.
That still doesn’t make those statements true as they were usually touted as requirements to run Vista.
The requirements to simply run Windows and the requirements to run Windows, actually run applications and get work done have historically been two different things ;-).
If you were to try to run any of the Vista builds as a day-to-day OS and install your usual software on it, you’d find those hardware requirements are more than accurate. In many ways it looks as if they will be quite conservative.
Oh yeah, because you’re a Microsoft engineer now or something? Give it a break.
Vista will run fine with 512 MB — because it already does. Once the debug cruft is cleaned out, and the code optimized, it will run perfectly.
Glad to hear that Vista has improved since beta 1, which needed an infinite amount of memory, depending on how often you felt like rebooting it.
http://www.longhornblogs.com/robert/archive/2005/12/16/15412.aspx
This managed structure doesn’t fit into the scheme.
No one will use it except for basic webpages. I do use a managed form of code, but I’m not locked into managed-only; this shows that .NET isn’t compatible with C++. Too bulky.
For open-source coding, compatibility is most important, I think. Just another large wrapper.
“No one will use it except for basic webpages. I do use a managed form of code, but I’m not locked into managed-only; this shows that .NET isn’t compatible with C++. Too bulky.”
… That statement doesn’t make any sense… Can you clarify it?
What do webpages have to do with it at all? And of course .NET isn’t compatible with native C++. Native C++ doesn’t run as managed code, and that defeats pretty much the entire purpose of .NET.
And now I need you to clarify…what do you mean by “.Net isn’t compatible with native C++?”
It also depends on what you mean by native C++; .NET allows C++ to be used – the only downside is that you have to drop a few unsafe C++ calls and start pulling in stuff from the .NET Framework. That’s no different to how Apple says Cocoa supports native C++ through its Objective-C++ support.
As for managed code, I think people – especially the elitist, ivory-tower-residing coders – need to see it as a benefit that will allow them to tackle larger, more complex problems and leave the mundane, accident-prone crap to be taken care of by a managed environment. It isn’t about dumbing down programmers; it’s about handing the mundane crap over to a manager so the individual can concentrate on the more important things.
“And now I need you to clarify… what do you mean by ‘.NET isn’t compatible with native C++’?”
Well, I think what the original poster meant is that you can’t run native C++ binaries under .NET… Well, duh…
This is a pretty complete and authoritative source on what’s going on:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/Di…
It seems to indicate that everything runs rather like Linux DRI, with a (not so) small kernel-mode component to handle memory and I/O resources and a user-mode component to create the command streams for the graphics card. It seems to be done in this complicated way so that many surfaces can be rendered at once. The scheduling and DMA architecture is pretty complicated.
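To illustrate that split with a toy example (every name below is invented for illustration – none of this is the actual DDI): the user-mode component batches drawing commands into a buffer, and a single narrow call, standing in for the real ioctl/DDI crossing, hands the finished stream to the kernel side.

#include <cstddef>
#include <cstdint>
#include <vector>

enum Op { OP_CLEAR, OP_DRAW_QUAD, OP_PRESENT };

struct Command { Op op; uint32_t args[4]; };

// Stub standing in for the kernel-mode component that owns the hardware.
void kernel_submit(const Command* cmds, size_t count)
{
    // In a real driver: validate, build DMA buffers, schedule on the GPU.
    (void)cmds; (void)count;
}

class CommandStream { // built entirely in user mode
public:
    void clear(uint32_t argb) { Command c = { OP_CLEAR, { argb } }; buf.push_back(c); }
    void drawQuad(uint32_t x, uint32_t y, uint32_t w, uint32_t h)
    { Command c = { OP_DRAW_QUAD, { x, y, w, h } }; buf.push_back(c); }
    void present() { Command c = { OP_PRESENT, { 0 } }; buf.push_back(c); }
    void submit() // the only point that would cross into ring 0
    {
        if (!buf.empty()) kernel_submit(&buf[0], buf.size());
        buf.clear();
    }
private:
    std::vector<Command> buf;
};

int main()
{
    CommandStream cs;
    cs.clear(0xFF000000u);
    cs.drawQuad(10, 10, 640, 480);
    cs.present();
    cs.submit();
    return 0;
}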
Too bad – I thought Microsoft were copying X Windows, BeOS, the Mac and [name your own favourite system] with a user-mode graphics server.
But it seems not this time, or not yet.
With WPF, the rendering pipeline can be marshalled over the wire, so that all the graphics developed with WPF can be rendered on a client machine using whatever capabilities the client has to offer.
So, for example, you can have a terminal session that sends rendering instructions rather than bitmaps to the client, and if the client has a Direct3D 9 capable graphics card, all the work will happen on the client GPU.
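For a rough, back-of-the-envelope sense of the difference (my numbers, not Microsoft’s): a single 800×600 update shipped as a 32-bit bitmap is 800 × 600 × 4 bytes ≈ 1.9 MB, while the equivalent “fill this rectangle with this brush” instruction is only a few dozen bytes.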
So you’re saying that Vista is X11 + GLX?
No offense to Microsoft — if a good idea is available, you’re stupid if you don’t copy it!
Actually, truth be known, X11 was not a good idea. The only reason we got stuck with it is that the alternatives (which were technically superior) were closed, proprietary standards. X11 was open, and so ultimately we got stuck with an inferior system.
First of all, WPF (Windows Presentation Foundation, formerly Avalon) is an API built on top of Direct3D. It is designed to give developers a simple and powerful API for developing future applications. WPF was originally only going to be released with Vista, but it has since been back-ported to XP. I should also point out that the API is backed by an XML-based declarative language, which makes developing tools for UIs extremely practical.
What is new and unique in Vista is the Vista Display Driver Model (VDDM), which moves the graphics driver out of kernel space and into user space. This allows a couple of benefits. The first is that graphics drivers (which cause something like 70%–80% of BSODs) can’t bring down the system anymore. The other is that a driver can be upgraded and installed without having to reboot.
What do you mean, .NET isn’t compatible with native C++? Of course it is, although it requires magic underneath the covers. Look up IJW (“It Just Works”), the name Microsoft gave to the ability to mix native C++ with managed C++ and therefore .NET.
It’s basically Microsoft’s equivalent to JNI. It allows you to call native, unmanaged code from inside a .NET program. This is usually not a good idea unless there aren’t any other options.
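A minimal sketch of what IJW looks like in practice, assuming the Visual C++ 2005 /clr compiler and with all function names made up for illustration (compile with: cl /clr mixed.cpp):

#include <cstdio>

#using <mscorlib.dll>
using namespace System;

// Ordinary native function: knows nothing about the CLR.
static void NativeGreet(const char* who)
{
    std::printf("native code says hello to %s\n", who);
}

int main()
{
    // A managed call running on the CLR...
    Console::WriteLine("managed code says hello");

    // ...followed by a direct call into native code. No hand-written
    // JNI-style glue: the compiler generates the transition thunk itself.
    NativeGreet("IJW");
    return 0;
}

The interesting part is what you don’t see: the managed-to-native transition at the NativeGreet call is emitted automatically, which is exactly the “It Just Works” part.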
Not having tested the betas, I don’t have the experience with which to judge the veracity of the various hardware-requirement claims. I was a bit suspicious when the source claimed you needed double the RAM to run the 64-bit version, though, hehe.
Of course I know not everything on the Internet is true; I’ll take the blame for going with what Google turned up.