MSDN has posted a video interview with Rob Short, the Microsoft VP in charge of the team that “architects the foundation of Windows Vista,” and members of his team. The interview goes deep into kernel development, and they even ask the question that’s on everyone’s mind: “do you ever wish the registry had never been developed?”
For us poor dialup users
(and for search engine indexing…)
This video is a really short overview of the Vista kernel (NT 6.0) features, starring the original designer of the NT I/O Manager – Darryl Havens himself.
The “quality gate” process briefly described is actually a little more than just “plain componentization”. From what I understood, it defines a set of rules imposed on each system component that prohibits calling APIs sitting on different layers, which in the end allows much easier layering and refactoring.
For example, the lowest of all user-mode DLLs inside the native subsystem – ntdll.dll – shouldn’t call shlwapi.dll (Shell Lightweight API), which is a wrapper over the Win32 API that eventually ends up calling the low-level Nt* APIs inside ntdll.dll itself.
Policing such layering violations allows teams to split the engineering process into different source trees, reduces overall integration and testing time, and provides an easier framework for extending different system components.
Larry Osterman wrote something about it on his blog:
http://blogs.msdn.com/larryosterman/archive/2005/08/23/455193.aspx
They also cover UMDF (User-Mode Driver Framework), which is being backported to XP SP2 (if you have XP SP2 installed, check out the wdfmgr.exe process – “Windows User Mode Driver Manager”). UMDF allows some cheap hardware to have its drivers run in user mode (ring 3), so a poorly written driver doesn’t affect system stability or security. It is designed for stuff like digital cameras, media players, cell phones, PDAs and other USB/IEEE 1394 devices, but NOT for storage drivers or drivers that need DMA, interrupts or nonpaged pool memory.
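If you’d rather check for that service programmatically than eyeball Task Manager, a minimal Toolhelp sketch like the following works (nothing UMDF-specific about it):

    /* Walk the process list looking for wdfmgr.exe, the backported
       "Windows User Mode Driver Manager" service process. */
    #include <windows.h>
    #include <tlhelp32.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
        if (snap == INVALID_HANDLE_VALUE) return 1;

        PROCESSENTRY32 pe;
        pe.dwSize = sizeof(pe);
        BOOL found = FALSE;

        if (Process32First(snap, &pe)) {
            do {
                /* szExeFile holds just the image name, no path. */
                if (_stricmp(pe.szExeFile, "wdfmgr.exe") == 0)
                    found = TRUE;
            } while (Process32Next(snap, &pe));
        }
        CloseHandle(snap);

        printf("wdfmgr.exe is %srunning\n", found ? "" : "not ");
        return 0;
    }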
Also, in Vista most of the audio stack and some parts of the display driver stack will run in user mode. Darryl emphasized the move towards a “microkernelish design”: systems today have 1000 times more memory and CPU power available than they had when the original NT design papers were written 15 years ago, so the trade-off between security/stability (having UMDF drivers run in a separate address space and use LPC for communication) and performance is becoming reasonable. However, it’s important to note that NT is still a monolithic kernel – but as you all know, modularity and extensibility have nothing to do with the monolithic/microkernel distinction, only with basic principles of software engineering.
They also mentioned “glitch-free media”, which should eliminate defects such as frame dropping, jitter, etc. Basically, a new Win32 service has been added – the Multimedia Class Scheduler Service – which provides mechanisms for threads to register for scheduling at much finer granularity.
So let’s say a rendering application declares it’s going to need 70% of all CPU resources in any 2-second interval in order to render its frames correctly – the Vista kernel can guarantee such a request, no matter what, via a formal API.
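The client side of that registration looks roughly like this – a minimal sketch against the Vista avrt.dll API (link with avrt.lib; the “Playback” task name is one of the stock MMCSS classes, and the work loop is a placeholder):

    /* Register the current thread with the Multimedia Class Scheduler
       Service. Vista-only; link with avrt.lib. */
    #include <windows.h>
    #include <avrt.h>
    #include <stdio.h>

    int main(void)
    {
        DWORD taskIndex = 0;

        /* Ask MMCSS to schedule this thread according to the "Playback"
           class (the classes live under HKLM\...\SystemProfile\Tasks). */
        HANDLE task = AvSetMmThreadCharacteristicsW(L"Playback", &taskIndex);
        if (task == NULL) {
            printf("MMCSS registration failed: %lu\n", GetLastError());
            return 1;
        }

        /* ... do the time-sensitive rendering/mixing work here ... */

        AvRevertMmThreadCharacteristics(task);  /* back to normal scheduling */
        return 0;
    }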
The multimedia stack has also been enhanced in the form of scheduled file I/O, where a multimedia app can specify that some I/O operations need to be completed within a certain time interval, and the kernel will guarantee IRP completion of such requests with a built-in prioritization scheme.
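From user mode this appears to surface as a bandwidth-reservation call in kernel32 – a minimal sketch, with the file name and numbers made up:

    /* Reserve I/O bandwidth on a media file. Vista-only kernel32 call. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE file = CreateFileW(L"movie.wmv", GENERIC_READ, FILE_SHARE_READ,
                                  NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED,
                                  NULL);
        if (file == INVALID_HANDLE_VALUE) return 1;

        DWORD chunk = 0, pending = 0;

        /* Ask for 2 MB of this file per 1000 ms period; the system replies
           with the read size and queue depth needed to sustain that rate. */
        if (SetFileBandwidthReservation(file, 1000, 2 * 1024 * 1024, FALSE,
                                        &chunk, &pending)) {
            printf("granted: issue %lu-byte reads, keep %lu outstanding\n",
                   chunk, pending);
        }

        CloseHandle(file);
        return 0;
    }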
The memory manager is also covered; the most impressive new feature is a completely dynamic address space that can be configured automatically or on demand (today you need to add the /3GB or /USERVA switch to boot.ini AND reboot). This will maximize resource availability in some common scenarios (desktop heap exhaustion, performance tuning of server applications such as TS, SQL, Exchange…).
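For reference, the pre-Vista way is a boot.ini entry along these lines, followed by a reboot (the ARC path and description here are just an example):

    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB /USERVA=3030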
Memory manager enhancements regarding NUMA support and large pages are also mentioned, as well as support for modern GPUs with rotating VADs (I never imagined that GPUs would have their own scheduler and virtual memory one day…).
I couldn’t really make out what the registry guy said – he was speaking very quietly and indistinctly. The registry on Vista will support transactions, similarly to NTFS, via the Kernel Transaction Manager, plus on-the-fly virtualization of registry keys implemented for the UAP (User Account Protection) mechanism. So if an app behaves as if it needed admin privileges – say, it writes to HKLM or Program Files – those registry keys and directories are virtualized into the current user’s context, so that the app THINKS it’s running as an admin while it’s actually running in a reduced-privilege context.
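Here’s a minimal sketch of what a transacted registry update might look like through the new Vista APIs (the key and value names are placeholders; link with ktmw32.lib and advapi32.lib):

    /* Transacted registry update via the Kernel Transaction Manager. */
    #include <windows.h>
    #include <ktmw32.h>

    int main(void)
    {
        HANDLE tx = CreateTransaction(NULL, NULL, 0, 0, 0, 0,
                                      L"demo registry transaction");
        if (tx == INVALID_HANDLE_VALUE) return 1;

        HKEY key;
        DWORD value = 42;

        /* Changes made through this key stay invisible to other readers
           until CommitTransaction; RollbackTransaction would undo them. */
        if (RegCreateKeyTransactedW(HKEY_CURRENT_USER, L"Software\\MyApp", 0,
                                    NULL, 0, KEY_WRITE, NULL, &key, NULL,
                                    tx, NULL) == ERROR_SUCCESS) {
            RegSetValueExW(key, L"Counter", 0, REG_DWORD,
                           (const BYTE *)&value, sizeof(value));
            RegCloseKey(key);
            CommitTransaction(tx);  /* atomically publish the change */
        }

        CloseHandle(tx);
        return 0;
    }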
There is a whole other bunch of kernel stuff coming in Vista not mentioned here (scalability enhancements, hot patching, SuperFetch, fast hibernation, Windows Hardware Error Architecture, User Account Protection, protection against writes to critical kernel structures such as KeServiceDescriptorTable/IDT/GDT and kernel code pages, dynamic partitioning (allowing hot-add of CPUs and I/O host bridges and hot-replace of CPU/memory..), full ACPI 3.0 support, full PCI Express 1.1 support, EFI support, Session 0 isolation for Win32 services, delayed auto-start for Win32 services, Win32 service hardening and privilege stripping auto-configured in the SCM by means of special service-specific SIDs, enhanced registry filtering, I/O cancellation, Windows Driver Foundation, self-healing NTFS…).
You can find most of these things documented on MSDN and in the “Going Deep” video series on Channel 9.
The driver model makes much more sense to me now. I couldn’t rationalize the prevailing notion that all drivers would be moved to user mode; that wouldn’t make a lot of sense. But for certain devices, user-mode drivers make perfect sense: you don’t want your non-critical devices causing catastrophic system errors. Hopefully this doesn’t lower driver quality just because manufacturers can get away with it.
offtopic-
I went back and viewed the memory manager segment again, thinking the entire time about one question that wasn’t fielded. Sitting down with a Microsoft expert, the question of NUMA was never asked, while “system state” was glossed over for many minutes. My question is: is NUMA a non-factor in OS memory-management algorithms, or is this interview just lacking?
thanks…
Why is it that we see more stuff about the Windows kernel? Well, it’s because Linux is always on about it. I guess Microsoft needs to give the impression of openness on every level.
Maybe because no one knows much about the MS kernel, since they are so damn secretive all the time… Would people be more interested in knowing about something that is open and available for free, or about something that is closed and not open to scrutiny?
Maybe because nobody in the real world gives a crap if it’s open or not and even if it was open, 99.999% of the fanboys who care about “openness” wouldn’t know what to do with it even if they could get their grubby little mitts on it.
You said it.
Except all of those foreign governments/embedded systems manufacturers/HPC establishments who need the security/flexibility/customizability offered by an open architecture. It’s short-sighted to say that nobody cares whether an OS is open. A lot of people are using open systems (everyone from Linksys to Fermilab) precisely because an open architecture affords many “real world” benefits that have nothing to do with philosophy.
You can nay-say all you want, but you cannot deny the reality around you. The reality is that open systems are everywhere, because of the benefits offered by their open architectures. Many of the movies you watch are made on an open system (Linux). Many of the cellphones you can buy at the store run an open system (Linux). Many of the routers you can buy run Linux or BSD. Many of the websites you visit are served by Linux or BSD (or Solaris!). Much fundamental research is done on open systems. If Linux/BSD were just another closed-source OS, they’d never get used in those ways. If you take your head out of your ass and realize that the software world is more than just the shrink-wrapped stuff you buy at CompUSA, you’d realize that openness does count for a lot.
>99.999% of the fanboys who care about “openness” wouldn’t know what to do with it even if they could get their grubby little mitts on it.
99.999% of fanboys also wouldn’t be affected, or care, if phones were tapped without official warrants, but that doesn’t mean it doesn’t matter. It just means that people should wake up and care about freedom.
Wrong. People WOULD care. And it’s a violation of privacy. That’s a really bad analogy, so try again.
Wrong. It’s called the Patriot Act, and people don’t give a rat’s ass. What does it matter if it’s a violation of privacy? No one cares. Besides, it wasn’t an analogy. It was an example of how stupid the original post was. Freedom should not be dependent upon who cares, but rather should be freedom for freedom’s sake. That was my point. You didn’t deserve a mod point for your post, but Merry Christmas anyway.
I know what the PATRIOT ACT is. MOST people don’t care because it has not directly affected them. But there are a LOT of people who do care (mostly liberals). My point is that you can’t compare the PATRIOT ACT to open/closed source.
Being open/closed does not directly affect a user’s use of a product. They are not being violated by using closed source. It is THEIR choice to use open or closed source.
With the PATRIOT ACT and previous wire-tap laws, you do NOT have a choice and you ARE being violated if you are tapped.
There is a huge difference, thus why I think it is a poor example. And to say 99.9+% of people don’t care about the PATRIOT ACT is simply WAY off base.
>MOST people don’t care because it has not directly affected them.
Exactly my point. “Freedom should not be dependent upon who cares, but rather should be freedom for freedom’s sake.”
>There is a huge difference, thus why I think it is a poor example.
Of course there is a huge difference, but it just points out an underlying concept.
>And to say 99.9+% of people don’t care about the PATRIOT ACT is simply WAY off base.
I don’t have statistics, but now you’re just nitpicking. The concept is that “MOST people don’t care because it has not directly affected them.” And that’s still my point. Please reread the original post.
“Exactly my point. “Freedom should not be dependent upon who cares, but rather should be freedom for freedom’s sake.””
I agree. But in the case of open/closed source, the freedom
“I don’t have statistics,”
I understand, but 99.9% is way way way way off and misleading.
You have one incomplete sentence, and then you grossly misquote me, and take it out of context. How does that lead to mod points? Something looks suspicious . . .
Woah. I don’t know what happened there. Text is missing that I typed.
And I got no mod points for it.
No one cares about kernels. Only Linux geeks, and that’s only because at one point RMS caused such a stink about Linux being just the kernel and the rest being GNU, inadvertently raising awareness about it. As a result everyone and his brother now has an opinion on microkernels vs. monolithic, and kernel developers have been elevated from strange sad geeks to heroes for strange sad geeks.
At first I thought the Registry would be a great thing for a hacker to destroy – that it would be a target. That hasn’t happened.
Now the Registry is used as a great hiding place for trojans and adware apps. So it’s been a boon to the black hats.
Y’know, that’s interesting. So I guess Microsoft shouldn’t have stopped using DOS as a kernel for Windows? They shouldn’t have developed NT? Mac OSX shouldn’t exist?
*scratches head*
Okay, well, if you don’t care about kernels, feel free to go back to using your Windows 3.1 and Mac OS9…
Vista: the greatest effort in renovation ever done. Well, can the Titanic be renovated anyway?
“Vista: the greatest effort in renovation ever done. Well, can the Titanic be renovated anyway?”
Wasn’t Linux created to be a cheap knock-off Unix clone?
No, Windows (actually, DOS) was created as a cheap Unix clone. Linux was created as a FREE (as in speech) Unix – and not a ‘Unix clone’, as it IS a Unix. Windows is no Unix (but they are working hard to get it up to the same level, and maybe, one day, they will succeed and bring to Windows all the things Unix has had since it was designed in the seventies).
Unix hasn’t had everything it has now since the 70s.
And actually, even the security of Unix was really crappy until much later.
You mean stuff like O(1) schedulers, asynchronous I/O, hardware abstraction layers, backwards binary compatibility, coherent driver models and a graphics subsystem that can actually detect your monitor’s refresh rates??
> no, windows (acutally, dos) was created
> as a cheap unix clone
Actually, it’s *a lot* more complicated:
* CP/M was created as a cheap Unix clone
* DOS was created as a poor man’s CP/M
* Windows 3.1 was created as a poor man’s version of OS/2 that ran on top of DOS
* Windows NT 3.5 was created as a poor man’s version of VMS that was grafted onto an advanced version of Windows 3.1, with a GUI licensed from Fisher-Price.
* Windows 2000 was created as an advanced version of Windows NT with a more modern version of the GUI that was closer to OS/2
From what I’ve heard in the interview, Vista is an attempt to clean up the complicated legacy mess and move it closer to Unix (e.g. layering and modularization instead of spaghetti interdependency).
The part about the registry concerns me a bit – it seems they don’t get it yet. They mention that programmers sometimes use the registry to pass values between apps, and that perhaps a better solution would be a database for this purpose, to lessen the temptation to put such things in the registry. IMO, this completely misses the issue. This sort of programming is just plain bad practice that should be shunned. If you need to communicate between apps, use a pipe, socket, shared memory, message passing, or some other communications API. Don’t go cluttering up permanent global space with your values.
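To make the pipe option concrete, here’s a minimal sketch of the receiving side (the pipe name is made up and error handling is kept to a minimum; a client would simply CreateFile() the same name and WriteFile() its value):

    /* Receiving side: create a named pipe and wait for one message. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE pipe = CreateNamedPipeW(L"\\\\.\\pipe\\demo",
                                       PIPE_ACCESS_INBOUND,
                                       PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE
                                           | PIPE_WAIT,
                                       1, 512, 512, 0, NULL);
        if (pipe == INVALID_HANDLE_VALUE) return 1;

        if (ConnectNamedPipe(pipe, NULL)) {  /* block until a writer shows up */
            char buf[512];
            DWORD got;
            if (ReadFile(pipe, buf, sizeof(buf) - 1, &got, NULL)) {
                buf[got] = '\0';
                printf("received: %s\n", buf);  /* no registry involved */
            }
        }

        CloseHandle(pipe);
        return 0;
    }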
IMO, the registry *is* good. Linux has several: ProcFS, LDAP, GConf, etc. The key difference from Windows is that each type of registry has a specific purpose and a well-defined API. A unified global registry, where everything from critical low-level configuration to high-level app-specific info to interapp communication gets dumped, is a sure recipe for interdependency hell and corruption. And when you get such corruption in the global space, it’s nearly impossible to fix. If things are separated, as they are in Unix, you at least have a fighting chance of recovering your system. For instance, with GConf, the only things that get stored are your preferences; if the GConf registry gets corrupted, you can just delete it and the default settings will return.
You’re basically right about the registry.
Though your suggestion of using pipes/sockets/shared mem/message passing instead is a bit off. That works if both apps are open and know how to communicate. But what if the second app isn’t open and you want to let it have the information when it’s ready? Registry is decent for that, or just a file.
I personally hate the registry, but it does have its uses. Sadly, it is horribly overused. Most applications that use it would be much better off just using INI files or something similar stored in the application data folder (%APPDATA%).
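As a minimal sketch of that approach (“MyApp” and the key names are placeholders):

    /* Store a preference in an INI file under %APPDATA% instead of the
       registry. Link with shell32.lib. */
    #include <windows.h>
    #include <shlobj.h>
    #include <stdio.h>

    int main(void)
    {
        wchar_t path[MAX_PATH];

        /* Resolve the per-user application data folder (%APPDATA%). */
        if (SHGetFolderPathW(NULL, CSIDL_APPDATA, NULL, 0, path) != S_OK)
            return 1;
        lstrcatW(path, L"\\MyApp.ini");

        /* Plain old profile API: no registry, trivially deletable. */
        WritePrivateProfileStringW(L"Settings", L"Theme", L"dark", path);

        wchar_t theme[64];
        GetPrivateProfileStringW(L"Settings", L"Theme", L"light",
                                 theme, 64, path);
        wprintf(L"theme = %ls\n", theme);
        return 0;
    }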
Also, the Windows registry is actually broken up into five different parts (or hives). If one hive gets corrupted, you can restore just that hive to its default. There are SYSTEM, SOFTWARE, SECURITY, SAM, and DEFAULT.
“Though your suggestion of using pipes/sockets/shared mem/message passing instead is a bit off. That works if both apps are open and know how to communicate. But what if the second app isn’t open and you want to let it have the information when it’s ready?”
Still a message – in intelligently designed systems. Flatten/serialize it and archive it to disk (so yes, a file, but there may be other solutions), then unarchive and instantiate it on the other end.
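Something like this trivial sketch – placeholder file name, and a real format would be versioned and endian-safe rather than a raw struct dump:

    /* Producer flattens a record to disk; consumer picks it up later. */
    #include <stdio.h>

    struct Message { int version; double value; };

    int main(void)
    {
        struct Message out = { 1, 42.0 }, in;

        FILE *f = fopen("handoff.bin", "wb");  /* producer: archive */
        if (!f) return 1;
        fwrite(&out, sizeof(out), 1, f);
        fclose(f);

        f = fopen("handoff.bin", "rb");        /* consumer: unarchive */
        if (!f) return 1;
        if (fread(&in, sizeof(in), 1, f) == 1)
            printf("v%d: %.1f\n", in.version, in.value);
        fclose(f);
        return 0;
    }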
And the per user registry hive.
“IMO, the registry *is* good. Linux has several: ProcFS, LDAP, GConf, etc. The key difference from Windows is that each type of registry has a specific purpose and a well defined API. […]”
So while most people argue that one of UNIX’s strengths is the unified namespace, you argue that one of Linux’s big strengths is that it has disjoint namespaces?
Bzzzt. UNIX’s strength is the unified namespace *and* its philosophy of ‘one tool for one purpose’, which goes against storing everything and the kitchen sink in the same file.
That said, nothing but NIH and legacy precludes reusing the same file format and file hierarchy everywhere in Unixes…
I can see that some of the system architecture and development practices are moving toward the Unix (kernel, API, userland) model.
> Windows 2000 was created as an advanced version of Windows NT with a more modern version of the GUI that was closer to OS/2
Windows 2000’s GUI/WIMP was from Windows NT4/95.
C’mon, y’all. Solaris and Sun have just gone open source. Join the program. But alas, my Linux system can’t even play the damn video. Shucks.
In my opinion, the question about whether or not they think the registry should ever have been created is the least interesting part of the interview… The fact is, the registry was a good idea that was created without thinking about the long-term implications of a cross-session object-dependency store. Seems to me that the future architecture of Windows is a tad more intriguing. “The [registry] question that’s on everybody’s mind” is a pretty ridiculous introduction to the interview… Memory management, (micro)virtualization and componentization are much more interesting concepts.
Despite the fact that the great majority of people don’t care – or don’t realize that there is a kernel running at all – it’s a very interesting interview. If the kernel works great, nobody notices it, but if it fails… Poor kernel developers! 🙂