“This build did feel more solid overall and was a lot of fun to play with. But users will be looking for value beyond fun factor. If Microsoft can address some of the performance issues we’ve seen, then we’ll feel much more bullish about Longhorn” writes ExtremeTech in their preview of Longhron.
That is the first screenshot I have seen of the ALT-TAB feature. The other reviewers must not use it enough to notice.
http://common.ziffdavisinternet.com/util_get_image/6/0,1311,sz=1&i=…
If they are going to add all that overhead to tie NTFS in with WinFS, I hope they at least give us a search interface with better support for regular expressions.
How many radio buttons do you think it will take to match “find”?
http://common.ziffdavisinternet.com/util_get_image/6/0,1311,sz=1&i=…
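For what it’s worth, you can already get regular-expression file search from the command line today, no radio buttons required; a rough sketch with findstr (the pattern and file mask are just examples):

rem Recursively search text files with a regular expression
rem (/s = include subfolders, /i = ignore case, /r = treat the pattern as a regex)
findstr /s /i /r "f.nd" *.txt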
Well, won’t most of the value, as usual, come from third-party developers?
Longhorn will provide a lot of features that are valuable to developers, and if they are valuable to developers, they will in the end be valuable to the users.
That’s something many people don’t see. People say “the OS doesn’t matter, it’s the apps that matter,” and while that’s true from the user’s perspective, the OS matters a whole lot, because it provides the tools and the mindset for the developer, which has a lot of impact on the final product.
Besides, isn’t it a bit early to make an in-depth review of Longhorn?
[i]Besides, isn’t it a bit early to make an in-depth review of Longhorn?[/i]
I enjoyed reading it, and it does shed some light on the platform. It brings up some very interesting points. His “today’s” system didn’t seem to have problems with Avalon’s heavy UI. The graphics card is mostly idle when using Windows anyway, so maybe it is a good thing that they put it to use.
His concerns with WinFS, on the other hand, are possibly well grounded. Every year we see improvements in memory speed, CPU speed, disk space, etc. HDD read/write speed still seems to remain nearly constant as all the other technologies advance. The HDD is physically moving media, so in short it will always be a major bottleneck for the PC. This is a problem that, if anything, is growing.
So moving into a time where applications take up a gig of disk space and will be processed on dual-core 5 GHz CPUs, is adding overhead to disk I/O really the correct direction to be going?
Sure, applications will be fast, if you don’t mind going to get a drink while they load into RAM!
So moving into a time where applications take up a gig of disk space and will be processed on dual-core 5 GHz CPUs, is adding overhead to disk I/O really the correct direction to be going?
What exactly makes you think that applications in general will take up a gig of disk space? Apps in general still just use 2-20 MB of disk space. There are larger apps, sure, but to say that 1 GB apps will be common in the next few years doesn’t really make sense. What would the apps include to make them that big? Will they be including a lot of libs just for fun?
Okay, outside of all the glitz and glamour and eye candy, what real VALUE does Longhorn bring to the table?
Microsoft needs to:
(1) redesign the OS with security in mind. Security these days is a big concern for everyone, and with all the recent attacks on workstations and the increasing threats that spyware/adware, viruses, dumb users, etc. pose, security must be enforced at every step. Administrative accounts should NOT be used as normal daily accounts, users should not have write access beyond their home folders, installs should be logged for every program to keep track of every single file installed on the system, Outlook and IE and other Microsoft programs should be jailed so that they can only execute within the applications themselves, and many, many others.
(2) ship a much improved firewall. My suggestion for this would be something similar to ZoneAlarm. ZoneAlarm makes it easy to essentially lock down the PC, notifies the user when an unrequested application attempts to make a connection to the Internet, allows the user to disconnect the connection at any time, provides relatively good logging mechanisms, is generally easy to understand and configure, and essentially provides the user with a good level of security.
(3) improve Windows Update so that it is possible to better automate updates and centralize management among many workstations (apply Windows updates to all the PCs that I own from one location)
(4) incorporate in-process security checks (Would you permit me to silently launch my browser so that I can make a connection to download adware and spyware and put them on your computer for you and watch everything you do?)
(5) virus scanner?
(6) incorporate a file monitoring system that precisely tells me when a particular file was created, what created this particular file, who this particular file was created by, (and in the case of DLL files) what particular application or process(es) need to use this DLL file, and file(s) related to this particular file. For instance, let’s say some application threw dja48ala.dll in the WINNT/System32 folder. Tell me, what is this file for?
Look at what Microsoft is doing, look at what Microsoft has done, and tell me what you think Microsoft should be doing?
Look at where Linux/BSD/*nix is going, look at what they’ve got, and tell me which one REALLY offers a better value.
Look at what Microsoft is doing, look at what Microsoft has done, and tell me what you think Microsoft should be doing?
What _I_ think they should be doing? Disappear! But that’s not realistic.
Look at where Linux/BSD/*nix is going, look at what they’ve got, and tell me which one REALLY offers a better value.
Again, for me personally, Linux offers a _much_ better value. Even when Longhorn is ready, Linux will offer me better value, because I consider the price and my freedom valuable. Besides, Linux offers me better tools for most of my work, and it’s free. It all depends on what you want to do.
To some people Windows offers more value, to others DR DOS offers more value.
What is its purpose? Is it so you can sorta see all the windows and then click on the one you want instead of moving them around?
Also, does anyone know if there is a “send to back” via right-click like there was in BeOS? That is a feature I miss greatly. I can’t believe Windows doesn’t have this. The PowerToys only give you “focus follows mouse”, which is a terrible thing.
Concerning the DirectX 9 issue.
UT2004 was the wrong game to test, as it’s a DirectX 8 game, NOT a DX9 one.
Makes it even funnier, I guess! LOL
Hmm, ALT-TAB looks like Project Looking Glass @_@. I think MS is taking ideas from everyone, but damn, they’re actually making it look good. Hell, the black theme looks more professional than the cheesy blue bar in XP @_@. Linux has got some time, but if MS releases this good-looking stuff, it might be harder for Linux on the desktop.
I don’t want to get into or really start a debate on this operating system or that operating system, or which is better or worse. I simply want to point out that Microsoft has fallen short of the bar for many years in delivering a stable, reliable, secure operating system, and I feel that after evaluating Longhorn in its current incarnation, Microsoft has once again fallen short. They are focusing too much on trying to tell users what they want instead of listening to what they need. I just want a fast, stable, secure operating system where I can get my work done, maybe play a game if I have time, do what I need to do, and be happy with my overall investment in Longhorn. Sad to say, the more I work with Windows, the less overall investment I find myself willing to put into it. I find myself becoming more dependent on my Linux/BSD boxes because they are stable, they are fast, and I can do what I need to do, when I need to do it, how I want to do it. Microsoft keeps making the point that Windows provides a pleasant user experience, yet the general majority of people that I talk to are displeased with Windows. I say Microsoft developers need to really sit down with users (not developers, not just power users, but people who have little to no technical knowledge of computers) and really understand from a user perspective what is going on. It’s pretty sad.
“incorporate a file monitoring system that precisely tells me when a particular file was created, what created this particular file, who this particular file was created by, (and in the case of DLL files) what particular application or process(es) need to use this DLL file, and file(s) related to this particular file. For instance, let’s say some application threw dja48ala.dll in the WINNT/System32 folder. Tell me, what is this file for?”
I am pretty sure you can monitor that with the Event Viewer that has been built into Windows NT for ages…
I agree with you that apps, as in the .exe, will probably never be that large. The files that applications read and manipulate, however, are getting to be very large. Not that long ago there were people that used 1.4 MB floppy disks for more than just drivers. Today many people are going with DVD writers because they have passed the 700 MB CD-R limit. Digital cameras are popular now; soon digital camcorders will be the “next big thing”. Maybe not in Jan. 2006, but a few years from now HDD I/O is going to be a noticeable bottleneck unless you’re on RAID.
WinFS is slow because it sits on top of NTFS and a database engine. It will not slow down your system files or applications because those files are still part of the NTFS storage (they are NOT stored in WinFS).
A database oriented approach to storing data (user files, /home, documents, music) is interesting for sure and offers very interesting search, indexing, and organizing features. BeOS could do that a few years ago 😉 but WinFS is a “heavier” database. A similar project for Linux is Gnome Storage.
A few years from now, OSes will run on a RAM-based type of disk, and the current technology will only be used for storage. Disks with 3 GB/s transfer rates already exist; they are just way too expensive still.
Disks are lagging behind other computer technologies in progress due to physical constraints, and to overcome this, the technology must change. This type of change takes time.
Damn, the Gnome Storage project I mentioned seems to have disappeared:
http://www.gnome.org/~seth/storage/
bad bad bad 🙁
It’s probably due to the intrusion when the servers got taken down a while ago. Some things aren’t quite up and running yet it seems.
NewtonOS had a database oriented data store long before even BeOS. While it makes it easier, programming wise, to do complicated application/data integration, it really makes it a major PITA to get data to and from the Newton.
I agree that eventually there will need to be a non-physically-moving replacement for the HDD (and even CDs). Seeing this technology in a few years, however, is a very optimistic guess. RAM is fast but volatile, and VERY expensive in terabyte quantities. Flash drive technology is still much slower than HDDs currently, and even more expensive than using RAM. 1 GB flash drives are about $400, a per-gig cost of about 400 times a standard HDD. And again, flash drives are slow.
It will be at least:
2 years before someone comes up with an idea of how to do it
1 more year to figure out how best to build it
x months for people to agree on a standard
3 more years before it comes down enough in price to reach consumer desktops.
We will first see more RAID solutions (like dual CPUs), then someone will figure out that it makes more sense to add more platters to a HDD and make it bigger than it does to put several HDDs in every machine (think dual-core CPUs).
You might even see a situation where some files like the core OS system files are stored in ROM to make for faster load times.
Or games may go back to the old way (the classic Nintendo) and be plugged in as cartridges rather than read from CD.
It will be interesting anyway.
1. I’m extremely cautious about DRM (DRM was not mentioned).
2. It is not clear why someone NEEDS to define a “castle” in order to connect to non-Longhorn systems.
3. The copy progress issue (stating a remaining time that’s completely off from actual expectations) is quite an old one. When are you going to fix that???
4. I personally don’t like any of the looks (maybe a bit 9x’s… a bit); I’d rather get skinning capabilities in the OS.
5. Performance is an issue. My biggest gripe is that I can not multitask while burning (I mean a CD, in XP) without burning errors.
6. Also, I shouldn’t have to buy a new PC to get decent performance. I bet if I had 1 TB of RAM, Longhorn would “churn it all”.
5. Performance is an issue. My biggest gripe is that I can not multitask while burning (I mean a CD, in XP) without burning errors.
Wow, are you still using a 486? I believe XP’s minimal requirements are somewhere in the Pentium age. Honestly, that isn’t normal behaviour and I’ve never run into such. Maybe wrong priority settings on some of your software? Hardware problems?
– Yak
The first thing you tend to notice is that all the text and icons in the task bar on the right disappear. Also, sub-windows in the start menu don’t handle shadows properly.
This is quite bizarre, and the stuff on the right hand side appears to be something to do with the sidebar. Microsoft always seem to silently take chunks out of their desktop for new features. If you look at a beta, or a CVS version, of any other desktop nothing like this ever happens. Microsoft software always does a lot of strange stuff that never happens elsewhere. Look at the way the Windows desktop redraws after you exit a game, or a memory intensive application. Bizarre.
In its current incarnation, the Desktop Windows Manager begins churning virtual memory – and this, on a system with one gigabyte of RAM. As soon as VM starts to churn, the system can get very sluggish.
I see Windows still has memory problems. The next incarnations of Avalon will not solve this. Historically, this is a fundamental problem with Windows, as we all know. Disk churn at startup, and after you have exited a memory intensive application like a game, is a classic sign. I’m willing to bet that if you ported this to a Linux or a BSD system (on hardware substantially less than what he is running) it would run pretty well. It seems as though people are going to need well, well in excess of 1 gig of RAM to run Longhorn at anything that is bearable. We’re probably looking at 2, 3, 4, 5 or more gigs’ worth of memory here.
Of course, by the time Longhorn ships, 64-bit processors will be fairly common and 2GB will likely be the entry level memory configuration.
Not in many companies they won’t.
On top of that, performance engineers in Redmond will probably be burning some midnight oil to improve Longhorn’s performance.
I doubt it. Windows has always had performance issues that can only be solved by going out and buying yet more hardware.
If you look closely at this view of the castle, you’ll see typical Windows shared folders, but you’ll also see some unfamiliar ones, like “DefaultStore,” “Catalog,” and others that begin with “SQL.” These are representations of the WinFS database file system.
I thought WinFS wasn’t going to be available in a networked environment? Or does this share have the WinFS stuff removed?
We got a dialog box that seemed to indicate that it would take four days to copy these files to the music file store. The counter turned out to be a bit off, but it still took a good twenty minutes.
This happens in every desktop environment.
As with all versions of Windows, Longhorn ships with a version of Internet Explorer. IE in Longhorn pretty much behaves like IE. At least it now has a pop-up blocker and download manager. We saw no evidence of tabbed browsing, though.
That version of IE looks absolutely terrible.
We installed Halo off the CD and applied the patch. When applying the patch, we got an interesting message. Still, it’s interesting to see that Microsoft seems to be thinking about security a bit more.
No, this is about Microsoft trying to control what you install, slowly but surely.
We can certainly understand bugs, but for a Microsoft game to crash with this error seems deliciously ironic. Halo is telling us it wants DirectX 9.0b installed – but DX9.0b is installed.
Don’t see why that should be happening – it should get the version number from the same place.
Okay, so we need to prove that some DX9 game will run. So we pop in our DVD edition of Unreal Tournament 2004 and install it. We fire it up and it runs just fine. We’re fragging happily, but after awhile the hard drive starts to churn. The whole system runs like molasses on a cold day.
Welcome to the Windows world – this isn’t just Longhorn.
The 3D desktop needs some serious performance tweaks and WinFS is unbearably slow, even on basic functions.
A terrible filesystem in NTFS, and now a storage layer on top?! I agree with Hans Reiser on this subject – you need a filesystem that is good in its own right, not just an afterthought. Paul Thurrott says that WinFS is definitely happening. Given that I don’t see any way you can get around the performance issues with how Microsoft is doing this, we’ll have to see. The solution will probably be to buy an expensive SCSI/SATA hard drive – as usual.
It’s all well and good to take advantage of new hardware when it comes out, but Microsoft will be facing a hefty installed base of older systems when Longhorn ships.
Well, quite. With XP today you will not see one corporation doing a massive rollout of systems. We had that with NT4, and 2000 to an extent. No more. With the hardware requirements for Longhorn I can see Microsoft making a rod for their own backs here, because you should be able to do all of what is in Longhorn on a reasonable computer today. A corporate computer is not a 3.2 GHz Pentium with a gig of RAM – it probably has 128/256 MB of RAM at most, with a 500 MHz to possibly 2 GHz processor. I ran a 1 GHz Athlon two/three years ago that is still way better than what most corporates have. Businesses and corporates do not run huge graphics cards either, but they are around. They are not really utilized today, but you should be able to utilize them on the desktop in an efficient, positive way – even the onboard ones. I’m mystified as to how much people are going to have to spend, even in two years’ time.
Take a good KDE desktop, a Mono/Java/Qt choice of programming technologies and all of the KDE desktop infrastructure, the good work happening with Freedesktop and the work by the kernel guys, and people like Hans Reiser, and you’ve basically got all of Longhorn and more. As we’ve seen with Windows 2000 and XP, people are always looking eagerly and anxiously at the next version of Windows – until they actually see it.
Wow, are you still using a 486? I believe XP’s minimal requirements are somewhere in the Pentium age. Honestly, that isn’t normal behaviour and I’ve never run into such. Maybe wrong priority settings on some of your software? Hardware problems?
This is an Athlon 1.5GHz.
I always run “fc /b” on all of my burns to double-check. I don’t get any error messages from Windows, but I do find errors with “fc /b” and it’s always when I have been browsing the Internet or even My Computer at the same time I burn (I mean, the CD burns).
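For anyone who wants to run the same check, it’s just a byte-for-byte compare between the source file and the copy on the burned disc; the paths here are only examples:

rem Compare the original file with the burned copy, byte for byte
rem (fc lists any bytes that differ between the two files)
fc /b C:\staging\backup.zip D:\backup.zip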
2-20 MB for most apps, what are you smoking??
If that is the case, then why do you need 4 CDs for Microsoft Office to get basically 4 apps?
Most of the games installed on my Windows machine take up several GBs apiece. Install UT2k4, then start adding maps, and you burn up gigs in no time. Hell, even Diablo II, which is several years old, takes up 2 GB.
The only time I see 2-20 MB for installed apps is when I use Linux, and that is only because *nix breaks everything down into pieces by design.
Just because an app is only using 2-20 MB of RAM doesn’t mean that is the installed size of that app.
I don’t care about the corporate desktop; Windows NT/2000 is fine for the corporate desktop, and if they’re happy, let ’em stay. But this is the kind of shit I like: eye candy, a bit of fun, productivity aside. Half the time I spend on the computer is looking for something interesting, and they’re providing it. I’ll dump my SUSE/XP config for Longhorn for a while, then I’ll find something more interesting to do. I love this shit, something new every day… (just wish they’d hurry up). The only thing I don’t want is instability… I use my computer at the office for work; my computer is my computer, and the less productive it is, the better; it means I am not doing unpaid overtime.
I agree. Those drop shadows instantly reminded me of Mac OS X. And the Alt-Tab feature reminded me of Sun’s Looking Glass project. I wonder if they’re going to copy more UI stuff from other projects… They do have a tradition of doing so.
Nevertheless it’s kind of good looking for now… But it still doesn’t compare to the sheer beauty of Aqua.
Microsoft exec watching the Exposé demo a year ago: “Damn, I wish I could have thought of that!!! I have to copy that!”
>I see Windows still has memory problems.
>The next incarnations of Avalon will not solve this.
I am guessing memory will be even worse because the new technologies will cause a lot more small memory allocations.
RE: “Of course, by the time Longhorn ships, 64-bit processors will be fairly common and 2GB will likely be the entry level memory configuration.”
Okay, even if some users in two years have switched to 64-bit procs, I still don’t understand this comment. Sure, the AMD 64-bit proc is backwards compatible, but Longhorn is a 32-bit OS, not a 64-bit OS. Microsoft should be making it work properly even on current 32-bit processors like the test system. ExtremeTech found Longhorn sluggish on even a 3.2 GHz Hyperthreaded CPU with 1 GB RAM and an ATI Radeon 9800XT. Current OSes on all platforms typically require 128 MB RAM to run. How did Microsoft leap to 1 GB or more just to run Longhorn? Basically, Longhorn appears at present to be even more bloated than WinXP.
Longhorn looks like another WinME flop, because it will not attract consumers in the way Microsoft hopes to. They still seem to be stuck on dealing properly with memory management in Windows, and on swamping the user with more useless eye candy (i.e., 3D browser view, bulging toolbars, unnecessary system requirements).
I keep hearing about the huge size of Microsoft’s Redmond headquarters with all its resources. Well then, it seems Gates should fire the idiot programmers who create bloated OSes like Longhorn. What consumer are they marketing this to? It can’t be the Joe or Jane end user in the home or typical office. Most of these consumers can’t afford a high-end workstation and settle for low-cost desktops because that suits their needs.
I’d predict that in two years Linux will be even more attractive to consumers than Windows or OSX. I say this because consumer attraction will not be based on eye candy alone but more on offering what the consumer wants and needs in an efficient manner. Longhorn will really turn off a lot of people, including die-hard Windows users. As for Apple, unless they change their marketing by porting OSX and more software to other platforms, they will sadly be pushed aside. Linux is getting better at meeting the needs of all consumers and not only a select few. This should dramatically improve over the next two years, causing consumers to think more about where companies such as Microsoft and Apple are going.
Don’t forget that the current versions of Longhorn are most probably built with full debug information. So it’s no surprise that it consumes huge loads of memory.
To make any certain statements about the performance of Longhorn, one should wait until the beta versions arrive. Those are normally built in release mode.
Today, business desktops generally have low-performing integrated graphics. It is cost-effective since XP, Word, Excel, etc. don’t require a separate card to perform well.
With Longhorn, it seems as if even the OS itself will require a powerful graphics card. Instead of cheap integrated graphics, a separate graphics card will have to be added. This will make business desktops more expensive, maybe 10-20% more expensive. Not good news for businesses that are trying to reduce costs, not increase them…
Okay, even if some users in two years have switched to 64-bit procs, I still don’t understand this comment. Sure, the AMD 64-bit proc is backwards compatible, but Longhorn is a 32-bit OS, not a 64-bit OS.
MS announced just recently, I’m certain, that Longhorn will be available for x86, Itanium, and AMD64. So strictly speaking, it’s not just a 32-bit OS.
“With Longhorn, it seems as if even the OS itself will require a powerful graphics card. Instead of cheap integrated graphics, a separate graphics card will have to be added. This will make business desktops more expensive, maybe 10-20% more expensive. Not good news for businesses that are trying to reduce costs, not increase them…”
Actually, the review talked about how you can shut Avalon off, leaving a standard 2D desktop.
It’s the WinFS feature that sounds dog-slow to me. No one has time at work to wait 10 minutes to copy 10 MB of stuff.
And that long time they spent was on a machine with a 3.2 GHz P4 and low-latency RAM; I can’t imagine it with a cheap motherboard, value RAM, and a 2.4 GHz Celeron.
I hope there is a way to shut WinFS off.
Don’t forget that the current versions of Longhorn are most probably built with full debug information. So it’s no surprise that it consumes huge loads of memory.
Only the in-house development versions of Longhorn will be built with debug output. The preview stuff will not, and is built in release mode. If it were, absolutely no one would be able to run it. Sorry, but Windows, not just with all the stuff in Longhorn, is an absolute memory hog.
To make any certain statements about the performance of Longhorn, one should wait until the beta versions arrive.
Yes I agree, but most of the stuff discussed are known Windows performance issues. How they’re going to solve the WinFS performance issue should be interesting.
It’s the WinFS feature that sounds dog-slow to me. No one has time at work to wait 10 minutes to copy 10 MB of stuff.
It’s too early to be talking about the performance of WinFS, really. I suspect that a lot of things can be done to optimize its performance.
Besides, since WinFS is only a service (AFAIK), it would make sense for it to be optional, but it could mean trouble if certain apps expect it to be there.
Only the in-house development versions of Longhorn will be built with debug output. The preview stuff will not, and is built in release mode. If it were, absolutely no one would be able to run it. Sorry, but Windows, not just with all the stuff in Longhorn, is an absolute memory hog.
Even without debug information the code is not optimized. They won’t address profiling until the end of the dev cycle.
Windows isn’t much of a memory hog. On the contrary, it has a bad habit of paging data to VM when there is plenty of RAM. It’s a side effect of the VMS design philosophy behind NT.
Today, business desktops generally have low-performing integrated graphics. It is cost-effective since XP, Word, Excel, etc. don’t require a separate card to perform well.
I don’t think it will be that much of a problem, since 1) you can turn the eye candy off, and 2) high-end video cards will be cheaper and probably integrated on motherboards by then.
However, this does bring up an interesting point. For how long will companies put up with this upgrade madness? Does it really make sense to upgrade every computer at a company as often as they do now? The computers today are often too powerful for the tasks most people perform. Even if the software evolves to a point where they require more CPU-power a 2GHz CPU will still be enough for most tasks. It’s only in specific cases such as handling a large amount of media or making heavy calculations that they will require faster computers.
So is there really a need to upgrade when Longhorn arrives? Or even 5 years after that?
I’m writing this on a 5-year-old Celeron 400, a box that doesn’t need to be upgraded because it can do most general desktop tasks with good performance. I think it will be good for at least another 3-4 years. I’ll replace it when it breaks, because that’s what you usually do with household machines.
The current upgrade cycle is expensive for companies, and it actually doesn’t need to be these days; they have just gotten used to it.
> I always run “fc /b” on all of my burns to double-check. I don’t get any error messages
> from Windows, but I do find errors with “fc /b” and it’s always when I have been browsing
> the Internet or even My Computer at the same time I burn (I mean, the CD burns).
Seems that your CD burner currently works in PIO mode; try enabling DMA on it.
About users thinking they are seeing similarities with OSX: I believe it’s because OSX is the only OS that uses drop shadows and transparency in any volume. This build of Longhorn does not look similar to Aqua in any way, IMO. The alt-tab reminds me a little bit of Looking Glass though, but I guess GUI windows rotating in 3D space will look similar to Looking Glass no matter how you design it.
Actually, it reminded me of Sun’s Looking Glass. It does seem to have the same kind of thinking behind it as Exposé, though. It’s kinda silly to have these “reviews” of an OS that’s not going to be released for a whole 2 years.
Correct, Microsoft is not copying Apple. Just look at MSIE, and you’ll see that there’s no Safari brushed-metal influence at all.
I’m waiting for the day that MS decides to place the menu bar at the top of the screen. Of course, that wouldn’t be copying, either. It would be the inevitable ‘evolution’ of the GUI.
I am so tired of people who accuse Microsoft of stealing everything; actually, it was Microsoft who made the concept of Apple’s Exposé and Sun’s Looking Glass in 1999. Just because they didn’t patent it, does everyone have to accuse Microsoft of theft this time? Search through the Microsoft Research pages. Even if there was perhaps someone before that, it was NOT APPLE, NOR SUN!
Even without debug information the code is not optimized. They won’t address profiling until the end of the dev cycle.
It depends what you mean by optimized. Even though a release will always be unstable and incomplete, it should be possible to garner what the relative performance is. After all, the core code base of Windows has been around for many years. If Microsoft have to optimize that much, it just shows how utterly terrible their core codebase and ‘production line’ methods of programming are.
Windows isn’t much of a memory hog.
Please don’t make me laugh. Everyone who has ever used Windows in an environment with many hundreds or thousands of desktops knows this. Run a memory intensive game or application (something you can compare), and then exit from that back to Windows, a Linux desktop or a Mac desktop. Rinse, lather and compare. It takes quite a while for Windows to get back to normal, and if you want to do something like burn a CD straight afterwards then your safest bet is to reboot the machine.
On the contrary, it has a bad habit of paging data to VM when there is plenty of RAM.
Why?
It’s a side effect of the VMS design philosophy behind NT.
LOL! NT is certainly not VMS.
“Microsoft exec watching the Exposé demo a year ago: “Damn, I wish I could have thought of that!!! I have to copy that!””
Doesn’t that go right back to Microsoft BASIC? Get rich by copying cool ideas and ruthlessly commercialising them. If possible put the originators out of business.
2-20 MB for most apps, what are you smoking??
I’m talking about the applications. The fact that they include help files and extras that require a whole CD is irrelevant.
actually, it was Microsoft who made the concept of Apple’s Exposé and Sun’s Looking Glass in 1999.
Are you talking about the Task Gallery? I think anyone who has ever thought about how a 3D GUI could work has had the exact same thought; just because MS decided to spend millions in research on an idea that has been dismissed for the past 10-20 years doesn’t mean that they made something no one had thought of before.
This whole idea-stealing thing is just pathetic. I spent time doing research and prototypes of a portable MP3 player in the mid-90s, long before there was such a thing on the market. Does that mean that they stole my idea? No, because an idea is rarely unique; I bet many people had the same idea at the time, only few were able to make it a reality.
It depends what you mean by optimized. After all, the core code base of Windows has been around for many years. If Microsoft have to optimize that much, it just shows how utterly terrible their core codebase and ‘production line’ methods of programming are.
The core foundation of NT, I’m sure, is fine. It’s the new dev frameworks and services that are not optimized. These are what slow the OS down. Remove WinFS and the Longhorn alphas start running a lot faster.
Please don’t make me laugh. Everyone who has ever used Windows in an environment with many hundreds or thousands of desktops knows this.
What in god’s name are you talking about?
Run a memory intensive game or application (something you can compare), and then exit from that back to Windows, a Linux desktop or a Mac desktop.
So basically run an application written to consume all free resources for performance and then take a wild guess that the OS has poor memory management? Sure buddy.
It takes quite a while for Windows to get back to normal, and if you want to do something like burn a CD straight afterwards then your safest bet is to reboot the machine.
That’s the design of the OS. Read about how NT manages memory. It’s very much like VMS, which stood for “Virtual Memory System”. If you have to reboot after unloading a game, you either have a game with a memory leak or something is wrong with your computer. End of story.
LOL! NT is certainly not VMS.
No, it’s not VMS, but its design was led by one David Cutler, whom, if you know VMS, you’ll recognize as the lead architect who designed VMS. The kernel design of NT is so close to the design of VMS that Digital sued MS.
Please learn your OS history before wasting any more of my time thank you very much.
http://www.winnetmag.com/Article/ArticleID/4494/4494.html
Cool name. s/Longhron/Longhorn/, I suppose?
The core foundation of NT, I’m sure, is fine. It’s the new dev frameworks and services that are not optimized. These are what slow the OS down. Remove WinFS and the Longhorn alphas start running a lot faster.
I’m not talking about the new frameworks – memory management is a well-known Windows shortcoming that has been around for years, and years, and years…
What in god’s name are you talking about?
Anybody who has ever used Windows in a production environment knows how much of a memory hog it is, over and beyond even the recommended specs. Obviously you haven’t used it in such an environment.
So basically run an application written to consume all free resources for performance and then take a wild guess that the OS has poor memory management? Sure buddy.
I’m talking about anything remotely memory intensive. Try a BSD, Linux or a Mac system and see what I’m talking about please. Besides, if you can’t use a system at maximum then it isn’t much of a system, is it?
That’s the design of the OS. Read about how NT manages memory.
It’s not very good then, is it?
If you have to reboot after unloading a game, you either have a game with a memory leak or something is wrong with your computer. End of story.
No, with anything remotely memory intensive this happens every time. A Linux, BSD or Mac based system handles memory intensive apps and general memory, network and load stress flawlessly every time. End of story. Either Windows is not up to snuff, the Microsoft development tools are bad or the app developers are doing a bad job every time.
Like Microsoft, you’ll probably blame the latter. Nothing hurts more than real-world experience.
No, it’s not VMS, but its design was led by one David Cutler, whom, if you know VMS, you’ll recognize as the lead architect who designed VMS. The kernel design of NT is so close to the design of VMS that Digital sued MS.
I know the story, but NT is still not VMS because they had to make it totally Windows compatible, as well as re-architecting it for the Win32 API, providing COM hooks into the kernel and the device driver architecture. If they’d kept the layers intact (like a Linux/Unix system), and kept the VMS features intact, they might have had a decent OS. Know your history please, and read your own article:
Gates decided that compatibility with the 16-bit Windows API and the ability to run Windows 3.x applications unmodified were NT’s paramount goals, in addition to support for portions of the DOS, OS/2, and POSIX APIs.
Please learn your OS history before wasting any more of my time thank you very much.
Don’t waste my time or insult my intelligence by quoting rubbish and trying to defend Windows on this, please. Please don’t quote useless winnetmag articles either, because it shows just how much you do know – your name is very apt. Winnetmag is home to Paul Thurrott, and the article you quoted is an attempt to basically tie the history of a decent operating system like VMS to NT to lend it credibility. VMS was designed in a different era for different purposes. The fact that the guy who was responsible for VMS also architected NT, and some similarities can be drawn, is the only comparison that can be made, because of the requirements placed on NT and the changes architected on it over the years. Implementation is everything, whatever the similarities.
NT is not VMS, and please don’t try to lend credibility to NT by labelling it as such. Besides, the essential truth is the fact that Windows today, whatever its lineage, is an OS that handles memory very badly for what it does.
(1) redesign the OS with security in mind.
It is being redesigned. Most of this new functionality is being added through managed code — and the CLR imposes security policy on all managed code.
(2) ship a much improved firewall.
They are. Longhorn will ship the much improved firewall that’s going out with XP SP2 — plus, better filtering capabilities.
(3) improve Windows Update
You can already do this. But you need to set a system policy to choose updates.
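For what it’s worth, this is roughly what that policy looks like in the registry; I’m going from memory, and the server URL is made up, so treat it as a sketch rather than gospel:

rem Point Automatic Updates at an internal update server (e.g. an SUS box)
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUServer /t REG_SZ /d "http://updates.example.local" /f
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUStatusServer /t REG_SZ /d "http://updates.example.local" /f
rem Tell the Automatic Updates client to use that server and to download and schedule installs automatically
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v UseWUServer /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v AUOptions /t REG_DWORD /d 4 /f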
(4) incorporate in-process security checks
They had demos of some of this at PDC.
(5) virus scanner?
MS recently bought a virus scanner company. It wouldn’t surprise me if they repackage the technology for Longhorn.
(6) incorporate a file monitoring system that precisely tells me when a particular file was created, etc…
It’s a good idea. Have you proposed it to them? NTFS will support it. It supports extensible BLOB properties.
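You can fake part of it on NTFS today with alternate data streams, which let you hang extra named streams off any file. A rough sketch (the stream name and the note text are made up for illustration):

rem Attach a provenance note to the mystery DLL in an alternate data stream
echo Installed 2004-06-01 by ExampleSetup.exe> dja48ala.dll:origin.txt
rem Read it back later
more < dja48ala.dll:origin.txt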
“If that is the case, then why do you need 4 CDs for Microsoft Office to get basically 4 apps?
Most of the games installed on my Windows machine take up several GBs apiece. Install UT2k4, then start adding maps, and you burn up gigs in no time. Hell, even Diablo II, which is several years old, takes up 2 GB.”
Wrong, Office fits on one CD, unless something crazy happened since Office XP. There are two extra disks: one is a big training thing, the other is a disk full of clip art stuff. Nevertheless, the four apps fit on one CD.
As far as games go, how do they even remotely apply to applications? They are a freak in this department; they have so much more stuff in them than any real app will ever have. Music-type apps can get big due to tons of samples and such, but that doesn’t have much to do with the app itself.
Comparing Windows MM to other OSes by comparing relative OS performance immediately after quitting a game is difficult, if not impossible.
Windows has vastly more commercial games software written for it than Mac or Linux, so the problem is trying to get a fair comparison by using the same game on different OSes.
Game writers frequently do NOT make the same quality of code on different platforms, and a porting effort is quite often contracted out.
You need to list the hardware specs of the computers tested and name the games you are referring to for any comparison to be made.
Sounds like you might be using older Windows PCs, which always struggle to run resource hogs like the latest games.
You could probably recreate the look with a free window manager in XFree86, and I’ve seen some 3D window managers out there as well (although I haven’t heard of any of them taking off).
I must admit I like the whole Longhorn look, but I get the feeling the novelty would wear off before long.
memory management is a well-known Windows shortcoming that has been around for years, and years, and years…
Please, this isn’t Slashdot. At least know a little bit about what you’re talking about before you post.
So what if Apple or Sun “came up with” those features first. They borrow stuff too. Linux groupies borrow stuff, BSD developers borrow stuff, everybody bloody well “borrows ideas!” Grow the fuck up!
Besides, when Longhorn ships, all of these wonderful features will be available to people who are buying new computers, and for the desktop market at least, I’m sure enough that MS is going to keep their 90% market share, so that means that a great deal of people in that time frame will have this technology at their fingertips.
Bono once said, “We’re stealing from the thieves”… I think this sums up the whole ‘who stole what from whom’ thing.
I honestly hope MS “steals/borrows/re-invents” good ideas from other window managers and OS’s, and vice versa. I also hope they think of new cool things as well 😉 As a programmer, I do this all the time, I’m sure most of us who post here do.
BTW, we are still 2 (probably 3) years from seeing Longhorn, so the issue of speed etc. probably shouldn’t be focused on too much just yet. I grabbed the beta of OS X, then 10.0 when it came out, and was horrified at the speed of Apple’s new OS. 10.3 runs very nicely, and I would almost guarantee that “Tiger” will be faster still.
Steve Jobs actually said of Longhorn (I’ll paraphrase as I can’t remember word for word 😉 that you probably won’t see a finalised version till 2008 based on his experiences with OS X. It wasn’t ready for the mainstream till 10.1, 10.2 or even 10.3 depending on whom you talk to (some might say we still haven’t ;-).
I’m not talking about the new frameworks – memory management is a well-known Windows shortcoming that has been around for years, and years, and years…
The only people I’ve ever heard complain about memory management in NT are people who don’t know how to program worth a crap. In my 11 years working with NT I’ve run across a couple of guys who like to bitch about it. I also don’t consider them very good coders to begin with.
Anybody who has ever used Windows in a production environment knows how much of a memory hog it is, over and beyond even the recommended specs. Obviously you haven’t used it in such an environment.
I use quite a few Windows machines in a production environment and I don’t see what you are talking about. NT was designed to keep as much physical RAM available as possible and it does that quite well. In fact, some people feel that it’s a shortcoming in the kernel design, considering how cheap RAM is these days.
I know the story, but NT is still not VMS because they had to make it totally Windows compatible, as well as re-architecting it for the Win32 API, providing COM hooks into the kernel and the device driver architecture.
Win32 and COM sit on top of the kernel. Sure, there are some hooks for performance into the kernel, but these systems are abstracted for the most part.
NT and VMS still to this day share the exact same threading model and task scheduling system. They also share a driver model that is more than a coincidence.
You are right, NT is not VMS; it just happens to have a kernel design that is damn near identical.
A Linux, BSD or Mac based system handles memory intensive apps and general memory, network and load stress flawlessly every time
lol. You said it all right there. “flawlessly” – no such thing when it comes to computers. Every design has its pluses and its minuses.
I’m talking about anything remotely memory intensive. Try a BSD, Linux or a Mac system and see what I’m talking about please. Besides, if you can’t use a system at maximum then it isn’t much of a system, is it?
I do memory intensive tasks on NT every day and have been doing it for the last 10 years! I don’t need to screw around with my Debian box to know you are so full of it that it’s coming out your ears, dude.
Please don’t quote useless winnetmag articles either, because it shows just how much you do know – your name is very apt. Winnetmag is home to Paul Thurrott, and the article you quoted is an attempt to basically tie the history of a decent operating system like VMS to NT to lend it credibility.
Oh, this is rich. The article is stating fact. Search the web sometime. MS was sued by Digital over NT; as a result, NT was ported to the Alpha. David Cutler was the lead architect behind both operating systems. This isn’t a fairy tale on Slashdot, my friend. It’s for real.
Besides, the essential truth is the fact that Windows today, whatever its lineage, is an OS that handles memory very badly for what it does.
Its lineage isn’t unknown, as it’s well documented.
If you don’t want your intelligence insulted, then maybe you shouldn’t post here, eh?
It’s the WinFS feature that sounds dog-slow to me. No one has time at work to wait 10 minutes to copy 10 MB of stuff.
From my experience with the current alphas, once the database has been created, it’s really not all that slow.
[i]I hope there is a way to shut WinFS off.[/i]
You bet there is. It’s a service, and like most of them, it can be disabled quite easily. I doubt that they will make it a mandatory service in the final release.
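Assuming it shows up as an ordinary service (the name below is a guess; check the Services snap-in for the real one), disabling it should be something like:

rem Stop the service now and keep it from starting at boot
sc stop WinFS
sc config WinFS start= disabled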
Please, this isn’t Slashdot. At least know a little bit about what you’re talking about before you post.
This isn’t Slashdot. Read the post before you reply, and speak with some experience. Anyone who has used Windows in any sort of environment knows that Windows does not handle memory well at all – end of story.
I see various people are trying to defend what they know is a Windows problem – hence all the posts.
Comparing Windows MM to other OSes by comparing relative OS performance immediately after quitting a game is difficult, if not impossible.
Games are a bit more of an extreme example because of their intense memory use, but this comes from experience working with and supporting various applications in an environment with thousands of desktops. Try it.
The only people I’ve ever heard complain about memory management in NT are people who don’t know how to program worth a crap. In my 11 years working with NT I’ve run across a couple of guys who like to bitch about it. I also don’t consider them very good coders to begin with.
A hole in one! Microsoft: “There’s nothing wrong with NT4. It is all you application and device driver developers!” Is everyone doing it wrong? If it walks like a dog, barks like a dog, and runs like a dog – it is a safe bet that it is a dog. Wouldn’t you say?
In fact, some people feel that it’s a shortcoming in the kernel design, considering how cheap RAM is these days.
You’ve mentioned cheap RAM – which means you know this to be true. Nice try.
Win32 and COM sit on top of the kernel. Sure, there are some hooks for performance into the kernel, but these systems are abstracted for the most part.
Do they really? Microsoft have never really abstracted anything because that would mean that competitors could find and use those layers. In Windows, everything depends on everything else. Now I really know that you’re not objective. I’m sorry that I’m attacking your baby, but nevertheless, it is a big problem with Windows.
NT and VMS still to this day share the exact same threading model and task scheduling system. They also share a driver model that is more than a coincidence.
A howto on running vanilla VMS applications on NT would be good. You can do this with BSD applications on a Mac – not with VMS and NT.
lol. You said it all right there. “flawlessly” – no such thing when it comes to computers. Every design has its pluses and its minuses.
LOL. Yes designs do have their pluses and minuses, it is just that on a Unix-like system you pretty much never see the minuses for the jobs you want it to do – and that’s a heck of a lot, hence flawless. They are there in all their glory in NT, which necessitates buying more RAM, bigger processors, faster hard drives etc. Microsoft: “Oh, put more memory in it!” For someone with experience of Windows you’re not terribly familiar with Microsoft’s strategy on this.
If you know you’re wrong, talk about pluses and minuses and say that everything isn’t flawless.
I do memory intensive tasks on NT every day and have been doing it for the last 10 years! I don’t need to screw around with my Debian box to know you are so full of it that it’s coming out your ears, dude.
You might actually want to try doing the same, comparable tasks with that Debian box as with your NT one. Then compare. I love how people who defend Windows to the hilt on various important issues say they’ve got a Linux box somewhere. Certainly, talking about Windows over the last ten years, particularly NT 4, as a serious OS is very funny.
David Cutler was the lead architect behind both operating systems. This isn’t a fairy tale on Slashdot, my friend. It’s for real.
I can’t think why this is rich, because it is deviating from a valid, known point I have made about Windows – ITS MEMORY MANAGEMENT IS CRAP. Considering that Microsoft stole their lead architect, I’d be surprised if Digital didn’t sue. This happens all the time, so who cares?
I’m not particularly interested in NT’s lineage, because ITS MEMORY MANAGEMENT IS CRAP. That’s my point, however NT came to be. You seem to be using the supposed VMS heritage as some sort of excuse – I can’t think why.
MS was sued by Digital over NT; as a result, NT was ported to the Alpha.
Well that is a big success these days, isn’t it?
Its lineage isn’t unknown, as it’s well documented.
Doesn’t have anything to do with what I’ve written. ITS MEMORY MANAGEMENT IS CRAP. Got that?
If you don’t want your intelligence insulted, then maybe you shouldn’t post here, eh?
Probably the best thing you’ve written.
These posts are very amusing. You touch a very raw nerve with what is a very, very, very, very well-known Windows shortcoming (one that keeps it out of many mission-critical systems on the server side, but presents problems elsewhere) and all sorts of people come out of the woodwork.
As the title says… it looks ugly.
[quote]MoronPeeCeeUsr – NT was designed to keep as much physical RAM available as possible and it does that quite well. In fact, some people feel that it’s a shortcoming in the kernel design, considering how cheap RAM is these days.[/quote]
True. I did some registry hacking to let more system-level code stay in RAM; I don’t want to have everything in a pagefile. Windows SHOULD detect my 1 GB RAM and stop paging my programs.
Keeping as much RAM as possible available is nice when you own a 128 MB machine, but the OS SHOULD consider the amount of available memory and make choices based on that.
That being said, after my registry hack I’m very much satisfied with my Windows XP laptop.
I love Linux memory mgmt too – I can choose multiple swap files, and even swap partitions. Windows won’t let me do that.
have a look at this:
http://www.winnetmag.com/Articles/Print.cfm?ArticleID=42035
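For anyone who wants to try the same thing, the hack usually cited is the DisablePagingExecutive value, which keeps kernel-mode code and drivers in RAM instead of letting them be paged out; I’m assuming that’s the one meant here, and as always with registry edits, at your own risk:

rem Keep kernel-mode system code and drivers resident in RAM (reboot required)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v DisablePagingExecutive /t REG_DWORD /d 1 /f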
True. I did some registry hacking to let more system-level code stay in RAM; I don’t want to have everything in a pagefile.
With a well-designed system, no system-level code should ever need to be paged out to swap.
Keeping as much RAM as possible available is nice when you own a 128 MB machine, but the OS SHOULD consider the amount of available memory and make choices based on that.
Never understood this ‘keep physical RAM available’ argument. The system should be able to fully utilize what is there, and only use swap and page when necessary. Physical RAM is only good if you actually use it efficiently, and there’s no getting away from that.
I love Linux memory mgmt too – I can choose multiple swap files, and even swap partitions. Windows won’t let me do that.
Linux doesn’t use swap files, and you will only see the swap partition get used under extreme circumstances – such as compiling a very large desktop environment. You will never need multiple swap partitions.
It’s good to see that you really are an uninformed tool!
Never understood this ‘keep physical RAM available’ argument. The system should be able to fully utilize what is there, and only use swap and page when necessary. Physical RAM is only good if you actually use it efficiently, and there’s no getting away from that.
Seems even the Linux devs disagree with you on that one. The 2.6 maintainer has been writing about how stupid it is to keep RAM full, and to barely touch the swap space.
Linux doesn’t use swap files
Bullshit. You can have swap partitions (yes, multiple), and you can use swap files, or both, or neither. Even though I’m not a tremendous fan of Linux, it seems funny to me that I actually seem to know more about it than its own overzealous fans! Linux is not quite so limited as your understanding of it, and the larger world about you.
You will never need multiple swap partitions.
Once again, you are the one making assumptions, and incorrect ones at that.
@ David: I like your comments, but Linux can make use of swap files for sure 😉 It’s nice if you don’t like extended partitions or repartitioning.
@ Anonymous: So someone does like to fill up more space in the swap, but WHY? Please explain the argument – I still believe moving pages from RAM to swap when it’s not absolutely necessary is a bad idea. It slows down the system (both for moving the pages and because paged programs become slow) and makes my system less stable (page errors).
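On the swap file point, for anyone who wants to try it, creating one on Linux only takes a few commands; the size and path here are just examples:

# Create a 512 MB file, format it as swap, and enable it
dd if=/dev/zero of=/swapfile bs=1M count=512
mkswap /swapfile
swapon /swapfile
# To make it permanent, add this line to /etc/fstab:
# /swapfile  none  swap  sw  0  0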
So someone does like to fill up more space in the swap, but WHY? Please explain the argument – I still believe moving pages from RAM to swap when it’s not absolutely necessary is a bad idea.
Keeps memory free for tasks that need it. Chances are, if something has been swapped out, it’s not entirely critical. At any rate, I’m looking for the 2.6 maintainer quote that I referred to in my last post. There were not only his own arguments, but also numbers to back up his reasoning.
It slows down the system (both for moving the pages and because paged programs become slow) and makes my system less stable (page errors).
Although I generally hate the “well, it’s your hardware” argument, in this case it sounds like a real possibility. Although I wouldn’t put it past the Linux kernel to screw something like that up, as the VM system in Linux (regardless of the flavour) has always been one of its sore spots. It’s just plain bad. Even with Matt’s help.
Oh boy, now my windows can have shading! That is pretty advanced, considering almost every other OS has had it for a LONG time. But I will say that the ALT-TAB thing is VERY nice; that is something not a lot of people would think of. But personally, Exposé would work more efficiently, because you can see ALL of the window. Like, if you had 10 Word documents open, that side view would just show the right margin… useful, huh, a bunch of white. But I think with a little work, it should come out nicely. I also think that the theme(s) look SO much better than XP; the bright blue was getting to me, and the title bars are HUGE in XP, how could you miss the close ‘X’? The black theme looks very professional, but the green theme looks very nice, too.
I understand that you guys feel that “Windows (or Linux) has crappy memory management,” but I haven’t seen any evidence of either of these things from Googling around. If these are well-known problems I should have found something, right? Where’s the beef?
I agree that Windows sometimes takes a long time to become responsive after closing a large memory-intensive program (if 20 seconds is a long time), but this seems to be because it’s reloading all of the paged-out memory from other programs. NT doesn’t crash, though, so I don’t see how this is flawed. Maybe NT sucks because you don’t agree with the way it was designed, but do you have any data? I can’t say I know enough about running memory-intensive apps on Linux, and I have never hit the swap, so I haven’t seen how Linux does.
Seems even the Linux devs disagree with you on that one. The 2.6 maintainer has been writing about how stupid it is to keep RAM full, and to barely touch the swap space.
Mmmm, no – I’m afraid not, sorry. Using physical RAM efficiently is always better than paging and swapping to disk – you know, the hard disk is slower than memory (can’t believe I’m pointing that out). A set of links to relevant kernel archive posts would be good here. Besides, physical RAM under Linux is never really full – it caches efficiently. Using swap space is good, especially for pages that sit untouched in RAM for some time – which is exactly what Linux does anyway – but using it excessively just slows everything down. The material point is that whatever Linux is doing, it is doing it better than Windows.
Bullshit. You can have swap partitions (yes, multiple), and you can use swap files, or both, or neither.
Yer… bullshit. As anyone who knows anything about anything would know, Linux can use swap files, but it is never, ever recommended, nor should anyone with a modicum of common sense ever use a swap file (it is far slower, for one thing) – I mention no names. Swap files tend to be used on smaller devices where a separate swap partition cannot be used for whatever reason – usually space restrictions. Looking at “Linux Installation and Getting Started” by Matt Welsh may be a good place to start for you. I fail to see how Linux would be limited in not allowing a swap file in most circumstances, but when you know about nothing else…
Even though I’m not a tremendous fan of Linux, it seems funny to me that I actually seem to know more about it than its own overzealous fans! Linux is not quite so limited as your understanding of it, and the larger world about you.
Oh, the overzealous thing – we seem to be running out of ideas. You might want to do some reading around first before embarrassing yourself because the stuff that has been pointed out is far from being zealous.
Once again, you are the one making assumptions, and incorrect ones at that.
Oh yer, if you don’t like it, just call it an assumption. Make my day and give me an example then, and justify this as an assumption. Under what scenario are you ever going to, realistically, need multiple swap partitions – except when you are installing and re-using an existing one? I’d like to hear your thoughts on that topic, but maybe not.
Hint: Use the subject line for what it was designed for, please.
At any rate, I’m looking for the 2.6 maintainer quote that I referred to in my last post. There were not only his own arguments, but also numbers to back up his reasoning.
Sorry, must have missed the link to that one.
must have missed the link to that one
I believe that the article was linked to here on OSnews as well. FWIW.
I believe that the article was linked to here on OSnews as well. FWIW.
Err, where? I’m not going to search for it.
Err, where? I’m not going to search for it.
And you expect me to spoon-feed you? Sure, you’re going to blather on about how I’m making stuff up, and that I have no proof, but if you get off your lazy metaphorical ass and do a simple search for yourself, you might well learn something.
But I suspect that you will not, and you seem to like spreading any misinformation that will make Linux and anything anti-Microsoft look far better than they actually are. I’ve seen nothing in my entire life that required the same level of hype to sustain itself as Linux and its misguided movement.
Linux’s VM has been much improved in 2.6, offering better performance.
Meanwhile, if you’re going to make claims, the least you could do is provide links. But then again, it is typical of anti-Linux posters to put more emphasis on FUD than truth.
And you expect me to spoon-feed you? Sure, you’re going to blather on about how I’m making stuff up, and that I have no proof, but if you get off your lazy metaphorical ass and do a simple search for yourself, you might well learn something.
If you said you’ve got a link to an article that you’ve quoted, then you provide it – that’s common practice. I can’t possibly know exactly what article you are referring to, because funnily enough, I don’t do telepathy well enough yet. If you can’t, then people quite rightly assume that you’re lying.
But I suspect that you will not, and you seem to like spreading any misinformation that will make Linux and anything anti-Microsoft look far better than they actually are.
Well, that’s not my problem, is it? All they have to do is read.
I’ve seen nothing in my entire life that required the same level of hype to sustain itself as Linux and its misguided movement.
Think what you like, it doesn’t alter what’s been written and what Linux is currently being used for in the world today. I can do nothing for your frustration – sorry.
I can do nothing for your frustration
Nor I for your ignorance
Nor I for your ignorance
We’re still waiting for that link.
Nor I for your ignorance
Oh yer, anyone who reads this long line of posts will see that. Your posts are getting shorter, which is a plus.
It never ceases to amaze me how people who really religiously support Windows or anything Microsoft exclusively can be put down quite easily by comments from people who use Linux, Windows and everything else in the real world. Mind you, Linux has its fair share of zealots. Unfortunately for you, I’m not one of them. In this case, Linux (or anything non-Windows) is just better.
“Although I wouldn’t put it past the Linux kernel to screw something like that up, as the VM system in Linux (regardless of the flavour) has always been one of its sore spots. It’s just plain bad. Even with Matt’s help.”
Buddy, the Linux 2.6 kernel is one of the best.