Ian Griffiths posted to the Longhorn user group detailing some of the kernel improvements that we can expect from Longhorn. Longhorn will feature CPU time reservation and disk access scheduling. These changes are being added mainly for the benefit of the media services; MS wants movies and other media to play without interruptions. Additionally, GotDotNet is featuring a diagram showing how Longhorn is built.
Um, I don't see WinFS, the new file system, anywhere in that diagram. It has NTFS and FAT16/32, but no WinFS in the kernel?
I know these are interesting things, and I think they are good ideas. But am I the only one who worries that these will get overused, with lots of people trying to guarantee CPU time or disk access for their pointless little piece of software that does nothing? While it'd be useful for media stuff, and I could even see it making some sense for games in some situations, I'm worried that this will be too common. I mean, could I "disable" this for specific programs through Task Manager or some setting somewhere?
Just because a feature exists doesn’t mean it should be used all the time. Those of us who have been on the ‘net long enough to remember the <blink> tag well know the horror of this all too well.
WinFS is built ON TOP of NTFS, so it doesn’t need kernel code. It’s the big yellow thing in the middle of the top row.
> gee…they just confirmed what all objective parties suspected…..Windows is not a good Multimedia platform.
Yeah, which is probably why pretty much every major multimedia content creation app and 99% of all games are created for the windows platform. Makes sense. Really.
NOT.
(Just for protocol, I’m mainly a Linux user nowadays. I prefer Unix’ mentality of simple building blocks, open software and the power it gives me. Using Windows would limit me, personally. But let’s get the facts straight.)
Remember y'all, WinFS is NOT a FILE SYSTEM. It's just a little thing in life to help organize it better.
Replace “created” with “available” in above statement, which is what I intended to write. Obviously, many major content creations apps did not start out on Windows, nor were they created for it. But they’re available, and they work. That was my point.
BeOS did this before for multimedia performance. It seems MS is heading the same way.
Nothing new. But it is useful for multimedia.
making an application for the platform and the platform sucking for support of said application have nothing to do with each other.
really though, all this is telling me is that I will REALLY want to buy Longhorn when it comes out, because the platform will basically embody all the greatness of the other platforms out there.
my enjoyment of OS X might drop by the wayside in that case… but that will of course depend on the CLI in Longhorn.
“making an application for the platform and the platform sucking for support of said application have nothing to do with each other.”
Well, it at least implies that the platform is good *enough* for the task (though perhaps far from ideal), or else the developers would have used a different platform instead, out of sheer necessity. So I’d say that while, yes, the planned-for new functionality does highlight shortcomings in the current iterations of Windows, saying that they are “woefully under powered to handle multimedia” is a tad strong.
developers will make programs for windows just because it is windows.
Windows does not have to be good enough…even though I will admit it is good enough, but far from perfect (any person who listens to music or watches movies on it will tell you that)
Why does MS make things overly complex? It seems that their approach to design is CISC vs. RISC (hardware analogy). Prepare for hardware upgrades if you want to run Longhorn.
This is a shame. Their new database layer over the file system is very much in line with the Unix philosophy (the idea, not the implementation). I was hoping for much more progress from Microsoft.
They could be building Longhorn for the IA-64 platform, not IA-32. That's what this model looks like to me. It might read something like this: "will work on IA-32, but best performance is on IA-64." Just a guess… hope I am wrong…
@Eike Hein: Most content-creation stuff is available for Windows, but that doesn't mean that Windows is particularly suitable for it. It's good enough, but not ideal.
@Yan Weng: No! BeOS did nothing like this. BeOS was simply high-performance, it didn’t offer any of the guarantees that this architecture does. Longhorn’s new features are more along the lines of XFS’s GRIO (guaranteed rate I/O) feature, and POSIX.4’s Real Time extensions.
This seems like a bit of a crutch to me. You should *not* need these kinds of guarantees on a regular desktop machine. They can lead to lowered overall performance in cases where you don’t need hard guarantees (by requiring the kernel to be a bit conservative in allocating resources) and could cause loss of responsiveness for poorly written apps. This might be useful for graphics workstations (a market that’s currently Solaris/IRIX oriented and is quickly adopting Linux), where users have fixed requirements and a small suite of high-end tools, but I shudder to think what will happen when craptacular programs like MusicMatch Jukebox have access to these APIs.
Has Microsoft just shown proof that Windows XP and all NT based OSs are woefully under powered to handle Multimedia?
gee…they just confirmed what all objective parties suspected…..Windows is not a good Multimedia platform.
All they’ve shown is that they intend to make it even better for multimedia playback.
How is an improvement to an existing system admitting anything more than that ?
It seems as if Microsoft just throws in everything (including the kitchen sink), regardless of the consequences like how messy or massive the model ends up being.
Did anyone else notice the HTTP listener running in kernel mode? Is that standard practice in OS design? I guess that will be fast… scary! Notice that in the "framework" they have another HTTP handler under the connector area in the communications area (with the transport channels)? Isn't HTTP an application-layer service? Strange, so IIS is actually part of the networking stack?
Anyone else notice the CLR virtual machine sitting just above the kernel? I wonder how much work its memory manager offloads to the real memory manager. If you had a lot of .NET applications it would be pretty much interpreting all code; then I wonder if it even speaks directly to the kernel, or if it has to make normal syscalls to framework modules, etc. Wouldn't this be fairly similar to Java's VM?
There also seems to be a lot of preoccupation with WinFS and synchronization between objects and file information. I wonder how much work they do before they pass off a request to the drivers below to do block I/O. Further, I wonder if the transaction layer interprets the request before the driver does? Are these things normal in a monolithic OS model? Perhaps someone with a lot more knowledge can explain this further?
Did anyone notice anything different about where the graphics drivers are located? No longer in kernel mode. This is at least one step in the right direction.
If WinFS runs on top of the whole kernel stack, what will happen to performance? Also, with the HTTP stacks in there, aren't they just asking for problems?
Does anyone else think this is something that should be axed?
I understand keeping it in WinXP so people could transfer over to NTFS from FAT32. If you're going to be running Longhorn, there is no way you are going to be running FAT32, and I can't see how you would end up with a Longhorn box interacting with a FAT16 partition. Even FAT32 is pretty obsolete at this point, given the size of HDs and the size of partitions people run. I guess I would like to see MS removing old legacy stuff to force migration to newer versions of Windows and so forth.
What about an old HDD with FAT16/32 partitions that one wants to use on Longhorn? Are such sacrifices justified for the "good" of _forcing_ people to a newer FS?
Why not give people a choice? Why not honestly inform people why the newer FS is better? Why would you want to force people to newer versions of Windows?
Besides compatibility with legacy OSes or data on a hard drive with a FAT16/32 partition, devices like key drives and flash memory cards are formatted as FAT16/32, and the OS needs to support those file systems for compatibility with those devices.
No need to worry. Re-read the following.
This lets code (such as media playback code) tell the OS that it wants to be guaranteed a certain amount of CPU time on a regular basis. The OS will make sure that it won't be over-committing itself before agreeing to this.
I would like to see a Linux 2.6 diagram like this.
Perhaps a Longhorn vs. Windows 2003 vs. Linux 2.6 architecture comparison diagram!
Well, Linux would just be all the stuff in the “kernel mode” box. You’d have to add X + a desktop environment for an accurate picture.
Interestingly, it looks like graphics have been pulled out of the kernel in this release. The “DirectX graphics miniport” is in the same position that the Linux/BSD Direct Rendering Manager would be, and the “DirectX graphics” and “graphics drivers” are in the same place that the DRI layer would be. If that’s the case, all I can say is “welcome to the 1990’s Microsoft!”
PS> To address a pet peeve of mine: Anybody complaining about too many abstraction layers in Linux can now officially shut up. Just look at all those boxes between your application and the hardware! Layering is something present in all large software systems — there is no way to manage all that complexity without it. You just notice it much more when the design is made transparent like in this diagram.
Did anyone else notice the HTTP listener running in kernel mode? Is that standard practice in OS design? I guess that will be fast… scary!
Gee, this is already done in Windows 2003 Server and makes perfect sense.
“Gee, this is already done in Windows 2003 Server and makes perfect sense.”
Yeah, from an Exploit point of view.
Your argument is basically performance, and by that account, shouldn't the whole OS run in kernel mode?
Throwing HTTP stuff into kernel mode is just an invitation to all kinds of weird and wonderful cracking. Let the feast begin!
Like we say in the open source sphere, "Show me the code!" Not until I see how cleanly or poorly written that whole complex structure in the diagram is, and whether it really fits together, will that diagram make any sense to me. As far as I'm concerned, the diagram is fake. It is false. It is another marketing gimmick, and there will be more to come.
That's why you wouldn't see stupid diagrams like that in open source Unix. You don't need them. They are essentially meaningless. Yeah, to an uninformed user it looks spiffy and colorful, but to a potential developer or a seasoned professional programmer, it's absolutely useless. How can anyone guarantee me that what you see in that diagram will be implemented exactly the same way in Longhorn? Yep, no one can.
Microsoft should just stick to lecturing developers on how to use their new overly complex tools, and users their new GUI. Rather than showing me a diagram of their kernel that I would never ever get to see the code of. Sheesh, the more I see things like this, the more I assert that I'm done with Microsoft. A diagram of a kernel? Give me a break!
Pfft…I’ll get modded down anyways. :rolleyes:
Yeah, which is probably why pretty much every major multimedia content creation app and 99% of all games are created for the windows platform. Makes sense. Really.
The reason is dominance of the platform. Having worked on media applications on Windows, I can honestly say the platform isn’t well-suited for it, architecturally. It’s gotten better with Windows 2000, but the dominance was already there before that (i.e. in the 95-98 series).
The reason apps could get decent performance out of your machine is because they took over the hardware. For games, this was standard practise, but even applications like GigaSampler replaced core functionality of Windows with their own stuff; basically, you needed to write your own pseudo-OS if you wanted media performance on (old) Windows systems.
A few points in response to questions asked in messages above:
HTTP.sys is a *very* small part of the HTTP stack, and yes, it lives in kernel mode and does some basic processing and routing; and yes, it is done for performance reasons (the same reasons that chunks of the TCP/IP stack live in the kernel on most OSs). However, it isn't the web server that lives in that space!
There are, of course, plenty of reasons to talk about the kernel, and other low level areas of the OS stack. Understanding how the kernel is implemented (even to the extent of licensing the source code) helps a great many software developers build better, more performant user-level code. Not to mention the device manufacturers.
As an example – to get to this ‘glitch free’ performance, *most* device drivers (and many software programs) are going to have to understand the new model implemented by the kernel.
WinFS and performance. WinFS is *not* a filesystem. NTFS is a filesystem and WinFS is a unified indexing, query and storage system, whose data ultimately lives *in* the filesystem. It uses SQL Server Yukon technologies (but isn’t actually SQL Server itself) to provide these indexing and query services. There are also a bunch of standard, extensible schema for things like people, organizations etc., and a model for tying these together in a distributed manner.
As a more general note: one of the things that is interesting about Longhorn is not that it is full of *new* technologies. Avalon, WinFS, new kernel features, Indigo; we’ve seen some of these technologies in other operating systems, in some form. What Longhorn does is to bring them all together in an integrated fashion, each leveraging benefits from the others, so the whole really is greater than the sum of the parts.
> Perhaps a Longhorn vs. Windows 2003 vs. Linux 2.6 architecture comparison diagram!
linux 2.6 is just a kernel, no integrated GUI in it!
Forever X11!!!! (sarcasm) It'll ruin, ruin, ruin Linux…
—
imho, even Linux 100.0 will still be on X11 ;(
…that Longhorn wasn't monolithic. He said it was 'highly modular'; however, that doesn't negate the fact that it's a fat and complicated architecture and shows no signs of simplification. If this diagram is real, then the design makes me truly sick. What a mess.
Everybody agrees that you can play every kind of media under Windows. But it is the individual apps that do the work, not the Windows kernel. I would say Windows has good software support for multimedia, but the OS (i.e. kernel) support for multimedia sucks.
Even with a P4, interruptions of the playback are frequent when you copy files to your disk (or run any program that involves "heavy" disk access).
On the contrary, with an old, dusty P2 and Linux there are no interruptions at all, even if you compile the universe in the background.
So I would say that Windows has much room for improvement in multimedia performance, although I am not sure if those "reservations" are the way to go.
Something missing in that diagram next to NTFS and FAT16/32 is something that would allow third parties to install something for Linux or other OS filesystems.
What I mean is a plugin-type architecture that would make it easy to install something to read ext2, ext3, ReiserFS, or XFS. Microsoft wouldn't have to port anything, just provide some kind of interface.
Or does this all happen on a higher application layer?
Is what I am saying technically possible?
> Remember y'all, WinFS is NOT a FILE SYSTEM. It's just a little thing in life to help organize it better.
So it stands for “WinFS is not a File System” then?
Seems they’ve jumped on the recursive naming bandwagon….
Just as long as I can still type “FORMAT C:”, I’ll be happy.
The sad thing is that Microsoft is beating Linux on both fronts. They are a near-monopoly on the desktop and, along with Sun, in business middleware, and they also attract more students than Linux does to their closed-source kernel. I do not understand the logic of trying to learn the Windows kernel implementation; it is closed source, for God's sake! However, people still do it, and that's because Microsoft does a better job at everything. The fact that they can have people defend their system implementation when it is a closed source product rather than an open source platform is an amazing testament to the total control that they have over people.
Well all this seems really nice. I’m surprised more people haven’t commented on the 2nd comment by MBCook.
It's great that MS is providing new guarantees for media applications, but that is going to rely so much on applications playing nicely. And with RealPlayer, QuickTime, and WMP all fighting it out on my system now over file extensions, I dread to see what happens.
Everyone knows of programmers who use short instead of int, or who allocate exactly the amount of bytes needed for their particular string, to optimize their little program that would run fine on a 286. I wonder what fanciful tricks this will give programmers.
I don't think it will be abused too much, and hopefully either MS itself or the users will have the ability to 'authorize' who gets the special time privileges. I don't want Gator (now something else), or that purple monkey guy, using this stuff.
Yamin
> The fact that they can have people defend their system implementation when it is a closed source product rather than an open source platform is an amazing testament to the total control that they have over people.
So what does that mean, exactly? Why should one not defend some closed source software?
Who said Open Source is by definition better?
Apart from the licencing model, there is no clear advantage that open source has. The "bazaar" style of software development is just ONE more way of managing software development.
The fact that the code is “open”, means nothing. You know why? Because the code is open in CLOSED SOURCE projects also. It is open to the programming team (often of huge size). Everyone in Adobe, say, gets to see the Photoshop code.
The only thing that Open Source adds to this is that *anyone* can see the code, not only the programming team. Also, anyone can be a member of the programming team (or fork the project, as with XFree86).
Now, why exactly, the OSS model where *anyone* can see the source code is better than the *anyone in the programming team* can see the source code? Just because you gain quantity (more eyeballs)? Quantity != Quality.
Just because Eric Raymond wrote some stuff amidst the dot com boom doesn't make it true.
“Something missing in that diagram next to NTFS and FAT16/32 is something that would allow third parties to install something for Linux or other OS filesystems.”
Windows has supported installable file systems for years.
The latest DDK is here: http://www.microsoft.com/whdc/ddk/ifskit/
Will be a good thing for users of products like Cubase, Reason, or any of Sony’s Sonic foundry stuff, dontcha think?
One criticism that Mac and other OS users levelled at Windows is that there is a problem with resource allocation when using these products: it's too slow, it stutters, it has to negotiate with the underlying OS for RAM/hard disk/CPU time, etc.
If you're doing real-time hard-disk mixing, then this sort of arrangement is not ideal. In fact, several computer music mags recommended using Win98 for these programs to get around what they generally bunch together as "latency issues."
Doing a hard CPU commit for media sure would help.
Just checked the diagram, and something struck me as odd-
Why have WinFS at all? It seems to just add needless complexity, since part of it is at the internet protocol/kernel level (XML, in particular), and part of it is essentially communications (ADO, etc.).
Why not just add a middleware layer to facilitate external and internal communications processes, and use the userland and internet services as modules called from that? Turn the whole system into an ASP on a desktop, and isolate it from the kernel via DRM/Palladium/Trusted execution.
That would be easier than creating a whole new set of services.
Just checked the diagram, and something struck me as odd-
Why have WinFS at all? It seems to just add needless complexity, since part of it is at the internet protocol/kernel level (XML, in particular), and part of it is essentially communications (ADO, etc.).
You seem to be reading the diagram incorrectly. XML is just one of the models for WinFS, it’s one method available for information storage and/or retrieval. ADO.net is presented as another part of the data layer, though is also used by WinFS.
Why not just add a middleware layer to facilitate external and internal communications processes, and use the userland and internet services as modules called from that? Turn the whole system into an ASP on a desktop, and isolate it from the kernel via DRM/Palladium/Trusted execution.
That big yellow block labelled "Data", in which WinFS and ADO.Net are two smaller blocks, is just that. It's a layer between the kernel and the end-user (and developers). It's the data layer of the OS: just as most development texts advocate a data layer in data-access applications, there's a data layer in the OS to handle access between applications and the actual data (at the file system level, in the kernel).
ASP.Net is in the presentation layer, which will usually access ADO.Net or WinFS to retrieve information from the file system; in other words, the presentation layer goes through the data layer to get to data. On the other hand, the presentation layer still needs access to the kernel for various reasons, such as GDI/GDI+, DirectX, the Window Manager, and so on (all of which are listed below the presentation layer in the base services). The kernel and kernel-mode software are a very small portion of the diagram (at the bottom, in the base OS services).
That would be easier than creating a whole new set of services.
There are a whole 3 services listed in the WinFS portion of the diagram, Synchronization, InfoAgent, and FileSystemServices (MetaData Handlers). Then you have your schemas and data models, which have more to do with how data is presented and related than the data itself.
That's why you wouldn't see stupid diagrams like that in open source Unix. You don't need them. They are essentially meaningless. Yeah, to an uninformed user it looks spiffy and colorful, but to a potential developer or a seasoned professional programmer, it's absolutely useless. How can anyone guarantee me that what you see in that diagram will be implemented exactly the same way in Longhorn? Yep, no one can.
As a programmer that’s never single-handedly developed anything as large as Longhorn (and how many people have?), I’d have to say that this is exactly the sort of thing I’d like to see at the early stages of a project like this, before a bit of new code is written. Of course, there would have to be additional documentation going into more detail on the individual portions and how they connect, but this gives a good overview that lets everyone working on the code (or on applications and drivers that have to work with the code) understand where each part fits, even if it is a very general overview.
Microsoft should just stick to lecturing developers on how to use their new overly complex tools, and users their new GUI. Rather than showing me a diagram of their kernel that I would never ever get to see the code of.
Obviously you either didn’t look at the diagram or didn’t understand it, because it’s a diagram of the OS, not the kernel (in fact, there isn’t a diagram of the kernel there, just a blob that says ‘kernel’). Without this kind of view of the OS available, when you’re handed several million lines of code, how are you even going to know where to start? You could look at the code all day and have no idea what you’re looking at.
I will answer your sick inquiry as to why open source means better quality. The reason for this is because if you control the factors of production, those being the research and development, than you are in a position to lead. You do not have any control and on the Microsoft product you are forced to specialize, you can not generalize, you can not have control. You are a puppet, and you don't even realize it, but you are being led completely by Microsoft and by a strategy that you are completely ignorant about. You have no will of your own, but for some ignorant reason you believe otherwise. Sure, you can create your solutions, but can you even really do that? I seriously doubt it.
Open source is about learning. The vendors that support open source do not realize it, because the computer industry has been led by vendors whose first priority has always been to control the factors of production. You have no idea, not even a clue, about the type of technology that can be developed through open source. You are totally ignorant of what you could create if you actually had power, because you do not have any power; you never had any power and you probably never will.
> I will answer your sick inquiry as to why open source means better quality.
“Sick inquiry”? This is religious talking. When two people debate something (a development model, of all things) there are no …”sick” inquiries or views. Grow up.
Still, you DON'T answer my sick (sic) inquiry about WHY open source MEANS BETTER QUALITY. Instead you write:
> The reason for this is because if you control the factors of production, those being the research and development, than you are in a position to lead. You do not have any control and on the Microsoft product you are forced to specialize, you can not generalize, you can not have control.
Apart from the horrible syntax, you don’t say a word about the QUALITY of the software. You talk about control, which is another matter altogether.
Not only that, but you failed to see that I ALREADY talked about this issue. I write:
“APART FROM THE LICENCING MODEL, there is no clear advantage that open source has”.
So I covered the “control” issue from the start, and as an advantage of the OS model.
Better luck next time…