Vista will be the last version of Windows that exists in its current, monolithic form, according to Gartner. Instead, the research firm predicts, Microsoft will be forced to migrate Windows to a modular architecture tied together through hardware-supported virtualisation. “The current, integrated architecture of Microsoft Windows is unsustainable – for enterprises and for Microsoft,” wrote Gartner analysts Brian Gammage, Michael Silver and David Mitchell Smith.
It amazes me that there are people in this world who think that “modular architecture tied together through hardware-supported virtualization” is anything other than marchitecture buzz-speak, or that people will pay outfits like Gartner tons o’ cash for long versions of this crap.
damn, too bad the maximum comment score is 5. you’d totally be up to a 10 by now. i couldn’t agree with you more.
I couldn’t agree with you more. If these jerk-offs working for Gartner were any good at predicting the technology future, they wouldn’t be working for Gartner. For the most part, when they aren’t spouting gibberish they spout platitudes and state the obvious.
Gartner’s up to it again…making decisions for other companies.
Excuse me, but I think you forgot to insert the word “poor” between the words “making” and “decisions” in your comment. 😉
How come Apple and the BSD and Linux distros can compete and/or outperform/outshine Vista with their “monolithic” designs?
Gartner is full of it as usual. Microsoft’s problem has nothing to do with monolithic versus “a modular architecture tied together through hardware-supported virtualisation”. Microsoft’s problem is that they’re unable to deliver, period.
That said, virtualization has a lot going for it, but for other reasons than what Gartner likes to think.
Last I read Apple was a Hybrid Kernel..
It is hard to maintain an OS such as Windows with its current design. Some of the code in Windows hasn’t been touched in years, and that makes it a mess to maintain. If the kernel were designed in a modular fashion, parts could be rewritten or updated/maintained more easily, since they would not be so tightly knit into the kernel.
Another point is that hardware is getting faster; the once-significant cost of context switching is becoming less and less of an issue, and by the time Microsoft contemplates switching architectural designs, hardware will be even faster.
This could increase the stability of the OS as a whole, as shown when Vista moved its audio stack into user mode. It also improves responsiveness, since the user-mode servers talk to the hardware directly (or through very little OS abstraction) instead of everything living in kernel mode.
>Last I read Apple was a Hybrid Kernel.. <
I think this is taken out of context; he wasn’t referring to the kernel. I believe he was referring to the monolithic design, as in how the entirety of the hardware and software is all contained in one system.
” Last I read Apple was a Hybrid Kernel.. ”
The NT kernel is also a hybrid kernel. Their problem is probably the complex web of interconnecting DLLs, drivers, Windows services, explorer shell etc.
http://en.wikipedia.org/wiki/Hybrid_kernel#Hybrid_kernel_example_-_…
Gartner is (as usual) full of manure. The problem with Windows is it’s full of cruft – the same problem Apple had with OS 8/9. When the OS X transition happened, they created a pared-down system API (reducing the number of Toolbox functions to about 1/3) called Carbon. Did it break Apple apps at the source level? Yes, it did, and people fixed the apps that were still shipping, using tools provided by Apple. Windows has the same problem – 3 sets of memory management functions, for example, and all three are still there to support “backwards compatibility.” What makes it painful is that all 3 APIs have to work in each release of Windows, even if there is only 1 non-deprecated API.
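For the curious, here’s roughly what that redundancy looks like in practice. A minimal sketch; the Global/Local/Heap families are my guess at the three sets meant, since the post doesn’t name them, and all of them still have to work in every release:

    /* Three generations of Win32 memory allocation, all still supported. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* 16-bit-era family, kept around for backwards compatibility */
        HGLOBAL g = GlobalAlloc(GMEM_FIXED, 256);
        /* its sibling from the same era */
        HLOCAL l = LocalAlloc(LMEM_FIXED, 256);
        /* the "modern" Win32 way */
        LPVOID h = HeapAlloc(GetProcessHeap(), 0, 256);

        printf("three allocators, three 256-byte blocks: %p %p %p\n",
               (void *)g, (void *)l, h);

        HeapFree(GetProcessHeap(), 0, h);
        LocalFree(l);
        GlobalFree(g);
        return 0;
    }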
Another problem is that Microsoft is very bad about bundling things into the kernel that probably shouldn’t be there. If they got down to the bare kernel, with a slimmed-down set of system calls, and partitioned all the other crap (including most drivers) into userland modules (like a micro-kernel), they’d probably be in good shape. (I’m not a huge fan of micro-kernels, but so many 3rd parties produce Windows drivers – with varying quality – that I’m not sure letting them at the kernel is a good thing(tm).) Keeping the kernel free of unnecessary doodads like web browsers is what makes the *nix kernels fairly stable. All the other stuff is layered on top of the kernel, so X11 can fail without bringing down your Solaris box.
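To illustrate the idea, here’s a toy sketch (no real microkernel works exactly like this; the socket path and message format are made up). The point is that a “driver” becomes an ordinary process serving requests, so a crash kills the process, not the machine:

    /* Toy userland "driver": receive request, handle it, reply. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    struct request { int op; int arg; };
    struct reply   { int status; int value; };

    int main(void)
    {
        int srv = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/tmp/toy-driver.sock", sizeof addr.sun_path - 1);
        unlink(addr.sun_path);
        bind(srv, (struct sockaddr *)&addr, sizeof addr);
        listen(srv, 1);

        for (;;) {                        /* the classic server loop */
            int c = accept(srv, NULL, NULL);
            struct request req;
            while (read(c, &req, sizeof req) == sizeof req) {
                struct reply rep = { 0, req.arg * 2 };  /* "handle" it */
                write(c, &rep, sizeof rep);
            }
            close(c);   /* a buggy driver dies here, not the kernel */
        }
    }

QNX and Minix do essentially this with proper kernel-mediated message passing; a Unix socket just keeps the sketch self-contained.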
And while I’m ranting, would they please learn how to move large amounts of data without bringing the UI to a grinding halt half the time! And could we please have more than 10 inbound TCP connections! And oh, while I’m here, what the freak does Windows XP (desktop) do that requires so much disk space and memory?
Linux isn’t monolithic at all. The kernel has some monolithic aspects to it, but the applications and the user interface are mostly collections of modules. Take for example K3b: it is really just a front-end GUI for command-line executables like cdrecord. Windows applications tend to be more monolithic, with the GUI and all the modules tied together. Often this is wasteful, since several applications end up duplicating code and functionality where they could have shared resources if it were open.
It is pretty clear based on how long each version of Windows takes to be released that their current strategy cannot continue or the next version after Vista will take 8 years or more to develop. A new modular architecture is pretty much a given after Vista or Microsoft will face problems with missed schedules, increased cost and code complexity — which is already at a reportedly frustrating and painful level with Vista.
also, when you noted k3b — it’s one of the strengths of this modular design.
As parts are separated, a bug in one part doesn’t always require rewriting other parts.
So, if cdrecord fails — you only change cdrecord. No need to thoroughly retest other parts — k3b’s GUI will still work. Of course, when you add DVD writing to k3b, you may require additional stuff.
One of the biggest pros, to me, is that a GUI does just what it does — you click on what you need and the GUI will build a command line for <insert program here> (even that is customizable!) and execute it.
Another positive effect is that you can still burn a disc on the command line.
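Conceptually, all the front-end does is something like this (a toy sketch, not K3b’s actual code, which is C++/Qt; the device and options here are made up):

    /* The "GUI" assembles a cdrecord command line and runs it. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        /* values a real GUI would collect from the user */
        const char *device = "/dev/cdrw";
        const char *image  = "backup.iso";

        char dev_opt[64];
        snprintf(dev_opt, sizeof dev_opt, "dev=%s", device);

        char *const argv[] = { "cdrecord", dev_opt, "speed=8", "-v",
                               (char *)image, NULL };

        pid_t pid = fork();
        if (pid == 0) {             /* child: become cdrecord */
            execvp("cdrecord", argv);
            perror("execvp");       /* only reached if exec failed */
            _exit(127);
        }
        int status;                 /* parent: wait and report; cdrecord */
        waitpid(pid, &status, 0);   /* itself does all the burning */
        printf("cdrecord exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }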
One big lump of code isn’t easy to maintain, and we’ve probably all seen MS prove that on a daily basis.
The modular design in kernels also is a good thing — you can unload a part and load a new module without rebooting. MS doesn’t even know for sure when it needs to reboot…
There may even come a time when you don’t need to reboot your Linux system to start using a new kernel… MS has a lot to learn from open source.
In fact, I believe that if MS opened up their code, it would grow tremendously in quality and security. (It might very well kill off most other alternative OSes if they did — and lucky for us, MS won’t open up its sources to the public in general.)
I might also add that it’s easy to replace the base system with, say, FreeBSD or AIX. Not as easy as it is to replace one Windows with another (and this point I *will* concede to the Windows fanboys), but easier than replacing either with Windows. Yes, Windows fanboys, Windows *is* awkward to get to grips with if you don’t use it every day.
Linux isn’t monolithic at all. The kernel has some monolithic aspects to it
The Linux kernel is a monolithic kernel. All parts of the kernel — filesystems, drivers, network stack, etc. — run in the same memory space. That is what determines whether a kernel is monolithic (all in one shared memory space) or micro (drivers, filesystems, network stack, etc. are in separate, protected memory spaces). Yes, you can load/unload kernel modules, but there is absolutely no difference in terms of memory use between a driver compiled into the Linux kernel and a driver loaded via a kernel module.
Modular != microkernel. Monolithic != no modules.
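To see both halves of that at once, here’s a minimal loadable module (a sketch, assuming the usual 2.6-era kbuild setup): you insmod it and rmmod it without rebooting, yet once loaded it runs with full privileges in the same shared kernel address space as everything else.

    /* hello.c - build with a kbuild Makefile containing: obj-m += hello.o */
    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/init.h>

    static int __init hello_init(void)
    {
        printk(KERN_INFO "hello: loaded into shared kernel space\n");
        return 0;   /* nonzero would abort the load */
    }

    static void __exit hello_exit(void)
    {
        printk(KERN_INFO "hello: unloaded, no reboot needed\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);
    MODULE_LICENSE("GPL");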
How come Apple and the BSD and Linux distros can compete and/or outperform/outshine Vista with their “monolithic” designs?
How so?
I’m a bit tired of reading things like what you just said. How can you claim that Mac, BSD and Linux “outperform/outshine” Vista? An OS that is NOT even released?
First, BSD and Linux distributions are not “outperforming/outshining” Windows Vista. BSD and Linux are just kernels. On top of them, there are “low level” pieces of code that are for the most part more than 20 years old (shells, command-line tools). On top of them, there’s the X11 architecture that is more than 20 years old too. And actually it’s not great. It’s just a really old design with a billion extensions. And as if that were not enough, over X11 you have some poorly designed window managers + GUI Toolkits. Please, do not compare Gnome and KDE to Windows Vista. And please, do not compare the power of the Win32 API + the Windows Forms + the Windows Presentation Foundation to basic “Toolkits” like GTK, QT. You just cannot 😉
And now, Mac? Not as bad as BSD and Linux distributions but still far from “competing” with Windows Vista.
Keep in mind that I’m NOT saying that people should not use Mac, BSD and Linux. I use Ubuntu and Kubuntu all the time myself (but I don’t use Mac). But I want to make sure that you understand that Windows Vista is revolutionary compared to everything on the market right now.
no it’s not. it might be ‘evolutionary’, but it’s not ‘revolutionary’. there’s nothing IN Vista that hasn’t, in some form or another, been done already. same with any OS.
“And now, Mac? Not as bad as BSD and Linux distributions but still far from “competing” with Windows Vista.”
“(but I don’t use Mac)”
So how can you say that OS X is “far from competing with Vista”?
I can assure you, OS X is very fast, rock-solid, and it hardly needs any maintenance.
The only reason to keep using Windows is the number of applications, but you can’t blame Apple for that.
Hmm, considering that some of these things have been used for 20 years instead of being ripped out and re-written says a lot about the original authors. They work and do what they are supposed to. That’s like saying Windows sucks because it’s been using the same GUI since Win95. You’re right, you can’t compare Windows and Linux. On one hand you have an OS that has cost BILLIONS of dollars to develop and has the worst security history of any OS on record. On the other hand you have two OSes (BSD and Linux), developed largely by volunteers, that rival or surpass any commercial OS ever written. You are absolutely right. There is no way Windows can compare to Linux.
<mods you up to 7>
Please, do not compare Gnome and KDE to Windows Vista. And please, do not compare the power of the Win32 API + the Windows Forms + the Windows Presentation Foundation to basic “Toolkits” like GTK, QT. You just cannot 😉
Are we talking about the same toolkit that didn’t even support auto-layout until not too long ago? If GTK+ is “basic”, every Windows toolkit prior to WinForms 2.0 (which almost nobody uses yet) is positively primitive.
But I want to make sure that you understand that Windows Vista is revolutionary compared to everything on the market right now.
Revolutionary my ass. Aside from hardware acceleration, Squeak does everything Vista is supposed to do. And it’s a copy of Smalltalk systems dating back to the 1980s. What exactly does Vista have that Cocoa doesn’t? Both use the same basic widget model — composited overlapping widgets drawn using vector graphics. Sure, one’s retained-mode and the other is immediate-mode, but that’s a matter of design philosophy. The only thing particularly original in Vista is Loop & Blinn’s vector texture mechanism. Everything else is a rehash of things Smalltalk and NeXT had a decade or two ago.
How so?
I’m a bit tired of reading things like what you just said.
Believe me when I say, we feel the same when reading your posts as well. It’s nice if you enjoy your Vista bread with Vista butter and Vista coffee, but still, the store is full of other breads and butters and coffees. Dismissing them won’t make them less exceptional.
It’s really nice to see MS PR at its peak success working its way into other people’s talk, making neutral or good characteristics sound “bad”, e.g. “old design with a billion extensions”, “X11 architecture that is more than 20 years old”, “poorly designed window managers + GUI Toolkits”, “do not compare the power of the Win32 … to basic “Toolkits” like GTK, QT. You just cannot”.
make sure that you understand that Windows Vista is revolutionary compared to everything on the market right now
I won’t even say anything to that one. You know, there are institutions on this planet where they can cure these kinds of hallucinatory delusions.
“And now, Mac? Not as bad as BSD and Linux distributions but still far from “competing” with Windows Vista.”
Let me get this straight… you are here saying “Mac isn’t as bad as Linux”… I guess that means Linux is “bad” in some way, or does it?
“Keep in mind that I’m NOT saying that people should not use Mac, BSD and Linux. I use Ubuntu and Kubuntu all the time myself (but I don’t use Mac). But I want to make sure that you understand that Windows Vista is revolutionary compared to everything on the market right now.”
Yet you claim to use Ubuntu and Kubuntu (wow, you seem to love both KDE and Gnome, you can’t choose!) “all the time”! So unless someone forced you to use (K)ubuntu (who would do that?), you’re using an inferior product of your own free will?
That’s about as revolutionary as you think Vista is.
“BSD and Linux are just kernels.”
Wrong, wrong, wrong. BSD is not just a kernel. All the BSDs are complete systems, not just kernels. Hence “Berkeley Software Distribution.”
Clearly you have never used any of the BSDs, and clearly they DO outshine anything from Microsoft.
First, Linux is a kernel, BSD is an OS.
On top of them, there’s the X11 architecture that is more than 20 years old too. And actually it’s not great.
Never heard the name Xgl? Is it 20 years old?
“But I want to make sure that you understand that Windows Vista is revolutionary compared to everything on the market right now.”
Revolutionary how? When I look at the Windows Vista site, they say:
– you have a real firewall now
– you are not a system administrator by default now
– IE7 will try to catch up on its lateness
– Windows Vista supports IPv6 now (lol!)
– a WinFX SDK that they want to call .NET 3.0?
Nothing that every other OS doesn’t already have today.
And now, Mac? Not as bad as BSD and Linux distributions but still far from “competing” with Windows Vista.
At least we agree here. Mac OS X is already far ahead of Windows Vista. Vista can’t compete here.
The real revolution in Windows is taking place under XP. It’s the .NET platform and managed code. The new Windows programming models really kick ass.
Most of the firewall and other stuff that you mentioned was left up to third parties in Windows… which is why things were not done right. Just look at Symantec.
Dano
“And now, Mac? Not as bad as BSD and Linux distributions but still far from “competing” with Windows Vista.”
Let me get this straight… you are here saying “Mac isn’t as bad as Linux”… I guess that means Linux is “bad” in some way, or does it?
Yes. I was talking about programming interfaces. At least Mac has Cocoa. Linux has GTK and QT (OK, I know already that GTK and QT have been ported to Mac and Windows).
I was just saying that Win32, WinForms and WPF all mixed together offer me more power than all the others.
“Keep in mind that I’m NOT saying that people should not use Mac, BSD and Linux. I use Ubuntu and Kubuntu all the time myself (but I don’t use Mac). But I want to make sure that you understand that Windows Vista is revolutionary compared to everything on the market right now.”
Yet you claim to use Ubuntu and Kubuntu (wow, you seem to love both KDE and Gnome, you can’t choose!) “all the time” ! So unless someone forced you to use (k)Ubuntu (who would do that?), you’re using an inferior product out of your own free will?
That’s about as revolutionary as you think Vista is.
Yeah, I use Ubuntu and Kubuntu. Like I said, they are NOT that bad. Yes, of course, I don’t like BSD and Linux when it comes to GUI programming. I can’t stand GTK and QT. But that doesn’t mean I cannot use them as a user. And uh, I prefer Kubuntu over Ubuntu (a lot). But yeah, I do use both because I hate mixing GTK and QT apps… even with that gtk-qt-theme-engine…
And hmm, for the record, I don’t own a Mac, but I have used a Mac several times. I just can’t stand it. I think it’s just not for me.
Yes, of course, I don’t like BSD and Linux when it comes to GUI programming. I can’t stand GTK and QT.
Are you sure you’ve never heard of Glade and Qt Designer? They’ve been around for years and years. Both generate XML files to be loaded at runtime. Few people hand-code their GUIs nowadays. If you don’t like C/C++, you can use GTK#, Koala, Java-gnome, pygtk etc.
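To show how little hand-coding is left, here’s a minimal libglade sketch; the file name “ui.glade” and the widget name “main_window” are made-up examples, the real names come from whatever you drew in Glade:

    /* Load a Glade-designed UI at runtime instead of hand-coding it. */
    #include <gtk/gtk.h>
    #include <glade/glade.h>

    int main(int argc, char *argv[])
    {
        gtk_init(&argc, &argv);

        /* parse the XML interface description that Glade generated */
        GladeXML *xml = glade_xml_new("ui.glade", NULL, NULL);
        if (!xml) {
            g_printerr("could not load ui.glade\n");
            return 1;
        }

        /* connect handlers named in the XML to functions in this binary */
        glade_xml_signal_autoconnect(xml);

        GtkWidget *window = glade_xml_get_widget(xml, "main_window");
        gtk_widget_show_all(window);

        gtk_main();
        return 0;
    }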
As for Mac OS X, Vista still doesn’t have the equivalent of CoreImage, CoreAudio, CoreData and certainly CoreAnimation. Although there are fewer changes between OS X releases due to their frequency, I am sure you will find Leopard (to be released around Vista’s time) revolutionary compared to Cheetah (released around XP’s time).
As for Mac OS X, Vista still doesn’t have the equivalent of CoreImage, CoreAudio, CoreData and certainly CoreAnimation.
Windows has a superset of those technologies. The capabilities of Core Audio, Image, and Animation have been covered by DirectX for years. Likewise, ADO has covered the same scenarios as CoreData.
In Vista, all of these technologies and more are simplified and woven into WinFX.
DirectX does *not* do the equivalent of Core*. If you think it does, you’re either unfamiliar with DirectX, Core*, or both…
DirectX does *not* do the equivalent of Core*. If you think it does, you’re either unfamiliar with DirectX, Core*, or both…
The abstractions may not be the same (again, WinFX simplifies things), but what capabilities do the compared Core components cover that DX doesn’t?
Current versions of DirectX don’t tie shaders to the media framework like Core* does. You can achieve similar effects, because DX does give you raw access to shaders, but that’s like saying /dev/dsp is the same as QuickTime. WinFX might get closer, but I haven’t seen any indication that it provides a feature that is a direct counterpart to Core*. Avalon provides the low-level tools to accomplish the effects achievable using Core*, but it doesn’t provide the high-level framework.
On top of them, there are “low level” pieces of code that are for the most part more than 20 years old (shells, command line tools). On top of them, there’s the X11 architecture that is more than 20 years old too.
Automobiles are using this “low level” piece of hardware called “the wheel” which I’ve heard is more than a few THOUSAND years old, and yet this wheel thingie still seems to be a fairly solid foundation for cars and also for other similar transportation devices.
More to the point: I currently work part of the time in a mainframe transaction environment (Unisys TIP1100) that is arguably more than 40 years old, and it still works very well (and does a lot of heavy lifting).
Moral of the story: sometimes older designs are of high quality, even in the software world. It isn’t always about supporting the latest video and sound cards, ya know. 🙂
Linux and BSD, as a platform, are much more modular than Windows, even if they do use monolithic kernels. In *NIX, all the major pieces are not only separate codebases, but are developed by separate organizations in a largely platform-agnostic manner. The GNU userspace is maintained separately from the Linux kernel, as are many of the userspace tools that provide important kernel functionality (eg: udev). The X Window System is a separate project that runs on many different platforms. GTK+ is a separate component still, and also one that runs on many different platforms. On top of all that, GNOME is not only still another separate component, but one that is actually a collection of (often platform-agnostic) sub-components. Cairo, libxml, glib, HAL, dbus, etc, are all separate pieces that are not only independent of their use in GNOME, but largely agnostic to the layers that sit below.
Because most of the components that build up a *NIX desktop are independently developed and agnostic to the components they are paired with, a lot of work has gone into defining strict interfaces that allow the different pieces to be developed independently without worrying about hidden interactions. Take a few interfaces and compare them. The Linux kernel API is public and well-defined. The NT kernel’s private APIs aren’t. The X protocol is public and well-defined, while the GDI’s internal protocols are buried in DLLs. Cairo’s interactions with X use these well-defined protocols. GDI+’s interaction with the GDI uses all sorts of internal hooks.
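As a small illustration of “public and well-defined”: everything the client below does travels over documented X protocol requests that any X server, on any platform, can service (a minimal Xlib sketch):

    /* Talk to the X server purely through the documented protocol. */
    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);   /* connect to the server */
        if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                         10, 10, 200, 100, 1,
                                         BlackPixel(dpy, scr),
                                         WhitePixel(dpy, scr));
        XSelectInput(dpy, win, KeyPressMask);
        XMapWindow(dpy, win);                /* each call = a protocol request */

        XEvent ev;
        do {                                 /* quit on the first key press */
            XNextEvent(dpy, &ev);
        } while (ev.type != KeyPress);

        XCloseDisplay(dpy);
        return 0;
    }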
One of the biggest advantages of open source development is that it doesn’t lend itself to the model of having thousands of programmers all working on the same code under one roof. I’m convinced this model is a recipe for poor software quality, and Windows is example numero uno. When development is done by relatively small groups communicating through public channels, lots of things are made explicit that would otherwise often be implicit (and undocumented). Yes, good management can make up for a lot of this, through code reviews and the like, but the ultimate way to ensure that design is stated explicitly is to remove any alternative. Distributed development over the internet does precisely that.
Very wise, especially the last para.
A key difference is that overstaffing is almost impossible in open source – the team will never end up with more people than it needs making work for each other. Overstaffing is probably MS’s greatest (hidden) underlying weakness.
Windows Vista the Last of its Kind
If only it were true, if only…
It may be reality. Microsoft has been creating Windows Vista for 5-6 years. That’s the same time they needed to get from Win95 to WinXP.
The only way for them is to push Singularity to desktops, because they will need to create something new by 2010.
Maybe if we could just get it to suck the salt out of people… 😉
Otherwise they would know that despite their assertions to the contrary, Windows is actually quite modular: how do you think Microsoft has managed to port it to such varying platforms and still have it retain its Windows-ness? The kernel itself is fairly small: there’s a lot of stuff surrounding the kernel that provides the larger surface area where things get complex.
The problem Windows has in advancing is maintaining backwards compatibility while also providing new and improved functionality. Linux and Mac OS X (as well as pre-OS X Mac operating systems) constantly break binary compatibility, as well as the semantics of how things work, on a very regular basis. As a result of this freedom, they can innovate all they want to, and do it quickly, by simply defining the new mutations as the target for developers and users. By comparison, there is a large number of old DOS applications still being used to run businesses to this day that run on the latest mutations of Windows. Not all DOS applications run, granted, but keep in mind these applications were likely written in the 80’s and are original binaries without the source to recompile. On Linux and the other predominantly open-source OSes, the applications in use are more likely to be recompiled and ported/modified from one version to the next, as changes in the OS break the existing behavior (even though a lot of the ways things break reflect correcting an actual bug in the OS). And that’s largely what’s holding Windows back: the need to maintain all the old behavior the huge number of end-users’ applications depend on, especially the bugs! As soon as Microsoft fixes a bug, invariably there will be at least one application that breaks. If Microsoft fixed all the bugs at once, there’d be a lot of whining.
I believe it would make more sense to make a clean break and start out with something like Singularity, which has benefited from the experience of the designers iterating through several versions of frameworks and allowing a clean framework from the start, without backwards compatibility issues. In that case, they’d still need to run a Windows 32 API classic mode either in a virtual session, or a special WOW mutation for that purpose, for the sake of providing backwards compatibility. Otherwise, the quantum change would be too hard for people to swallow in a reasonable amount of time, even though it is probable that within 10 years, the previous Win32 API OS wouldn’t boot on the then-current hardware. If you question that, look at the fact that you can’t today take a standard original OEM Windows XP installation disk and install it onto an SATA drive without an extra driver disk, and then keep in mind how many new chipsets with new types of devices have come out since XP was released.
Precisely.
Prior to Vista, Microsoft were suggesting their next OS would be developed from scratch, but that didn’t work out. Almost certainly they could create a new OS from scratch and add in a few Win32 compatibility layers – but things like device drivers would all need to be recoded from scratch.
If Microsoft ever did make a clean break their new system would be like Linux was a couple of years ago – great if your hardware was supported, but a real pain otherwise.
Now for business customers this might be tolerable, they could buy Vista+1 with new hardware and be up and running so long as Excel, Word, etc worked as expected.
But 90% of businesses would refuse, and the poor home users would be out of luck too. They need their hardware drivers for things like digital cameras, which are never going to get updated now that newer models are out.
I’d love to see Microsoft start from scratch, but only because then there is a real choice:
* Move to Microsoft’s latest vision.
* Switch to Linux, where your old hardware still works.
Prior to Vista, Microsoft were suggesting their next OS would be developed from scratch, but that didn’t work out. Almost certainly they could create a new OS from scratch and add in a few Win32 compatibility layers – but things like device drivers would all need to be recoded from scratch.
Why even do that? Why not base it on top of OpenSolaris? I know it sounds stupid from the outset, but we’re talking about grabbing a rock-solid core and then bolting a cleanly developed user interface on top of it.
Let’s take this: Microsoft works with OpenSolaris and gets DRI ported to it. Microsoft has said in the past that DirectX is actually platform-independent, that is, it’s not actually tied to Windows per se. Let’s take Microsoft at its word: it could couple DRI plus DirectX with an Xorg server on top of that, using all the information it has to make drivers with Nvidia and ATI. It would give them a competitive edge.
Then, on top of the Xorg server, they could base a whole new GUI around WinForms 2.0, exploiting the benefits of managed code whenever practical.
Then, for backwards compatibility with Win32, provide a virtual machine, and voilà: you’ll have compatibility whilst providing a brand new, stable, open-source base for vendors to write their applications for, written the correct way the first time rather than the ad-hoc manner in which Windows was written.
Why even do that? Why not base it on top of OpenSolaris? I know it sounds stupid from the outset, but we’re talking about grabbing a rock-solid core and then bolting a cleanly developed user interface on top of it.
Indeed. In fact I think if Vista is delayed one more time OR people think it’s crap, MS might just do this (they won’t use Linux). Of course Apple did the same thing: they took “the world’s hardest OS” and used it to create “the world’s easiest OS.”
“Why even do that? Why not base it on top of OpenSolaris? I know it sounds stupid from the outset, but we’re talking about grabbing a rock-solid core and then bolting a cleanly developed user interface on top of it.”
“Indeed. In fact I think if Vista is delayed one more time OR people think it’s crap, MS might just do this (they won’t use Linux). Of course Apple did the same thing: they took “the world’s hardest OS” and used it to create “the world’s easiest OS.””
microsoft doesn’t need solaris or a bsd or whatever.
they have a good kernel in NT; it’s everything beyond that that needs to get scrapped and reworked.
microsoft has some decent stuff going for it: the NT kernel, .NET, VS, DX10 is looking like it will be nice, WinCE isn’t too terrible, debatably MS Office.
one of these years Singularity might even turn into a very cool OS.
they just need to sit back, take a look at what they have, and pare back a little. make the tough decision to maybe lose some things that are profitable in order to focus on things that they can keep competitive.
it’s just that backwards compatibility problem: keeping BC they can’t be as agile as the competition, but throw it away and they lose the biggest argument for Windows (# of consumer apps), and for staying with it.
big dilemma.
they have a good kernel in NT; it’s everything beyond that that needs to get scrapped and reworked
microsoft has some decent stuff going for it: the NT kernel,
There’s the rub: NT is “a good kernel”, but strip all the crap away and it becomes “a good microkernel” rather than the hybrid OS it is now. Good microkernels are ten-a-penny; it’s constructing microkernel servers around them that’s hard, and until you do that you can’t construct a userspace.
I’d say the article’s on the right track in that the monolithic system model is becoming unsustainable – *not* the kernel, but the system as a whole – but it comes to the wrong conclusion with the “modular architecture tied together through hardware-supported virtualisation”. As MS has tightened up the configuration options on its OSes, it has added more and more “flavours” of Windows. XP had enough variants, but we’ve got what, 16 or something, Vista variants coming out, each with a strictly defined profile and basically MS-preselected installation options. MS admitted as much a while ago: for its server line it’s going to have to modularise the OS. Virtualisation might be a module, but not the basis of the modularity (*requiring* virtualisation would be insane). MS is having to compete with Linux/BSD’s configurability, and while for the moment MS is reacting by having dozens of product lines, eventually they’ll collapse back down to a highly configurable & modular base product (like Linux/BSD), simply for simplicity’s sake.
It’s funny, isn’t it, how “fragmentation” is bad, but the current three (at least) XP OSes (Home, Professional, and 2003 Server) are about to replaced by no less than seven, yes seven, Vista versions?
It’s funny, isn’t it, how “fragmentation” is bad, but the current three (at least) XP OSes (Home, Professional, and 2003 Server) are about to replaced by no less than seven, yes seven, Vista versions?
You left out Media Center and Tablet PC Editions. These separate SKUs are gone for Vista. Server 2003 doesn’t apply here, as Vista is just the client release.
Starter and N Editions existed in XP so these aren’t Vista additions. Also, these editions are limited to (or mandated by governments for) certain markets.
This leaves 5 SKUs for the mass market — 2 SKUs for home users, 2 for Business users, and 1 shared SKU:
Business SKUs:
Windows Vista Enterprise (only available via Software Assurance or Enterprise Agreement)
Windows Vista Business
Home SKUs:
Windows Vista Home Premium
Windows Vista Home Basic
Shared SKU (available to home and business users):
Windows Vista Ultimate
This doesn’t fragment the codebase into incompatible products. Each SKU is a superset of the lower SKU, and users may choose based on price/features, and upgrade at any time.
This doesn’t fragment the codebase into incompatible products… [U]sers may choose based on price/features, and upgrade at any time.
Sounds like Linux.
BTW, what the hell is an “SKU”?
People often forget the largest advantage OSS has going for it: motive
I agree Windows has some pretty good development tools, but one feature most 3rd-party software products for Windows seem to have in common is that they are all pretty intrusive. Just look at the task bar of any “joe user’s” PC and you can observe all the crap competing for his money/information/attention.
Linux does not suffer the same problem, because bundled applications are hand-selected from a bunch of competing free alternatives. Some still tend to be a little rough around the edges, but at least they are not intrusive.
What Linux needs most now is a Visual Studio style “RAD” IDE.
What Linux needs most now is a Visual Studio style “RAD” IDE.
Actually, RAD IDEs have been available on Linux for many years. Here are some examples:
Anjuta with embedded Glade
KDevelop with embedded Qt Designer
Netbeans (Matisse is lightyears ahead of anything VS.Net 2006 has to offer)
Eclipse
Has little to do with modularizing Windows. What is basically described is running concurrent versions of Windows in different zones for compatibility. It amounts to little more than an ad-hoc versioning scheme that is dependent upon virtualization. If your program doesn’t work with the current release of Windows, you can just keep every permutation of system files and various DLLs available until the end of time: your program is bound to work in one of them. It would be more like running distinct versions of Windows in VMWare than like having multiple .NET VMs, but I suppose the intent of the analysts is to convey something similar to the latter.
How any of this is meant to make Microsoft’s job of maintaining Windows any easier isn’t obvious. Old broken code doesn’t stop being broken simply because it’s running in its own zone, and it’s no more trivial to provide backward-compatible updates of functionality and correctness with virtualization than without it. Perhaps I’m just too tired to fully grasp what their seemingly-nonsensical jargon means.
Surely the essential thing is the modularity of the design team, rather than the modularity of the OS. (Yes – I know the second rather follows the first.)
The Vista project has 50+ million lines of code, and is reaching (has reached?) the upper limit for a centralised development model, hence the delays. When a project reaches this size, the pyramidal structure of the team, where every change has to be passed up one branch of the tree and down another one (or usually more), and back again ad infinitum (or so it seems), means delays and complexity multiply exponentially.
Apparently Vista takes 24 hours to do a build, which cannot help.
Although the Linux “project” has a kernel of 5.6 million lines of code, equivalent functionality has been achieved more easily by the distributed development model inherent in “open source”, where code in any one “module” (I’m using these terms in a broad sense) can be easily changed, whereas interfaces between modules via the kernel are relatively fixed and stable, requiring a decision by the kernel team, and ultimately Linus.
The total size of the Linux “project” is, I understand, some 500 million lines of code if the applications are included (these are of course to a great degree 3rd party in Windows, which must severely impact on the developers of Vista in their freedoms and cause them massive headaches) – yet, for example, the Debian project despite hiccups carries on producing a functional whole under distributed development. There are Vista developers who will nod their agreement with this analysis, I know.
MacOS too seems to have hit the correct note on this by providing an emulation layer for “old” MacOS software while restructuring for MacOS X, giving them developer freedom with backwards compatibility. (The situation is not the same for Windows, but the answers can be comparable.) I am no MacOS expert, but have used and like my daughter’s Macbook very much.
The article was 100% on the spot; XP should have been the last Windows version already. Because of this monolithic issue, a comparison between Windows and OS X is a comparison between two OSes from two different generations, which is kind of pointless.
……..Microsoft leave the OS market and do what it does better (applications)?
It has been proven that the worst operating systems are those of MS, but they do make good apps like Word, PowerPoint and Visual Studio. The world would be a better place to live if MS concentrated on making their apps available on other OSes and left the OS market to those who know how to make an OS.
AHHH!!! Sorry, it’s true: Microsoft only wants $$$$. They don’t want a better world, nor a more advanced technological civilization; they want to dominate the world even if we have to live in a world that is 20 years behind technologically.
Honestly, they’ve been trying to do this for years. They told us all that Windows 2000 was modular, and it isn’t. Sorry, but Microsoft won’t be able to do this.
Oh, and I’ve never seen anything that involved virtualisation that didn’t overcomplicate things.
Honestly, they’ve been trying to do this for years. They told us all that Windows 2000 was modular, and it isn’t. Sorry, but Microsoft won’t be able to do this.
Modular, they probably meant in regard to ‘components’ which could be glued together with your own code to speed up product development.
The problem with Microsoft is actually twofold: their operating system isn’t modular enough, to the point that there are no clear lines drawn between the different parts; change something somewhere down the bottom of the stack, and all hell breaks loose at the top somewhere.
The second part is the modular way in which software is produced; there is nothing stopping Microsoft from adopting a model similar to that of the opensource world; having lots of little groups working on individual components, and merging these changes back to the tree; the problem that occurs isn’t cultural but technological – as illustrated by the above.
Due to all the interdependencies, if one group does something incredibly radical, it ends up affecting everything that touches it in some way, resulting in chaos.
As much as I would love to use my Irish optimism, I just don’t see the fundamental issues being addressed by Microsoft; eventually it’ll be put into the ‘too hard basket’ and they’ll have to try again.
The problem with Microsoft is actually twofold: their operating system isn’t modular enough, to the point that there are no clear lines drawn between the different parts; change something somewhere down the bottom of the stack, and all hell breaks loose at the top somewhere.
This used to be the case. Part of the work they did in the Vista timeframe was map out the entire architecture, trace dependencies, and eliminate those that went from lower components to higher components except where absolutely necessary. They then put process protections in place to keep such dependencies from being introduced in the future.
http://blogs.msdn.com/larryosterman/archive/2005/08/23/455193.aspx
http://channel9.msdn.com/Showpost.aspx?postid=148820
The second part is the modular way in which software is produced; there is nothing stopping Microsoft from adopting a model similar to that of the opensource world; having lots of little groups working on individual components, and merging these changes back to the tree; the problem that occurs isn’t cultural but technological – as illustrated by the above.
Microsoft has used this model for years. Various teams own and build out their features in their private branch of the build tree, then integrate those features into the main tree after they pass some quality checks.
http://blogs.msdn.com/larryosterman/archive/2005/02/01/364840.aspx
Modularity of Windows is twofold. The architecture itself is modular, but users interpret modularity as being able to uninstall anything they want. MS doesn’t allow the latter because they want to ensure developers have a known default set of functionality which they can code against. MS does seem to be moving back to giving the user some level of flexibility in the components they can install/uninstall. Vista’s components are arranged in separate packages, sort of like in their embedded OSes, and Server Core allows you to have a stripped-down image. Expect more movement in this area in future releases.
Oh, and I’ve never seen anything that involved virtualisation that didn’t overcomplicate things.
IBM mainframes?
Being monolithic does NOT have anything to do with being or not being modular. I’ve seen a lot of comments that seem to confuse these matters. So does Gartner, btw.
Take for example the Linux kernel. It’s monolithic as hell, yet it is also very modular, both on source and on binary level.
Mac users are used to change, but I don’t believe the average Windows user will want to change. They will probably stick with 98, XP or Vista for some time. Looking beyond Vista, the next OS to come out of Microsoft will be in 2015.
The problems with Windows are maintaining compatibility with older programs and a gigantic API with lots of duplicated functionality.
If I were them, instead of that virtualization stupidity I would port a very small subset of the API to 64 bits (or replace it with a new one where that makes sense) and use the 32-bit subsystem as a compatibility layer with no new features added (eventually they could run it inside a virtual machine).
I don’t know what they are doing with the 64-bit port, but I think they are missing a great opportunity.
“I’m a bit tired of reading things like what you just said. How can you claim that Mac, BSD and Linux “outperform/outshine” Vista? An OS that is NOT even released?”
I don’t see what is wrong with comparing Mac, BSD, and Linux with Vista in its current form. It might not be released yet, but the comparison is against Vista as it stands now. Nothing wrong with that IMHO, and to be honest I am sick to death of people like you suggesting people should not compare released OSs with Vista beta/pre-rc, etc.
When will the Vista apps arrive? Remarkably, there isn’t much written for Vista at the moment.
someone, Vista has Core Animation functionality as part of Avalon, their WinFX replacement for GDI built on top of DirectX. It does all the funky GPU accelerated stuff, 3D animation, video, images etc. Here’s a video, http://channel9.msdn.com/ShowPost.aspx?PostID=58925
With regard to the article, Vista is now heavily componentized, removing most of the spaghetti dependencies between services, allowing much more flexible installation options. http://www.windowsitpro.com/Article/ArticleID/47447/47447.html
A lot of work has also been done moving stuff out of the kernel, with the entirely new user-space video and audio technology. Pretty much the only things left in the kernel are the gfx chip/soundcard drivers themselves. There are also managed frameworks for doing all that CoreAudio DSP kind of stuff, etc.
The article sounds like the usual unfounded analyst rubbish. Why would they write it if Microsoft itself denounces it in the article?
The managed frameworks built on top of DirectX do though