Microsoft will end extended support for the 8-year-old OS at the end of 2004. Customers who still want support will have to sign up for a custom contract.
While I’m not a big fan of Microsoft (since Ubuntu came into being I don’t use Windows anymore), I can totally understand why they don’t want to continue supporting an EIGHT YEAR OLD operating system. In the GNU/Linux world, people get upset when a distro has a six-month release schedule and only supports 2-3 versions back.
I can also relate personally. I had a web design customer pay me $500 up front to do a site for him. I had everything finished and he promised to get the content together and then I never heard from him again. Just recently (four years later) he called and is interested in doing a site again. I haven’t heard from him again since I told him we’d be starting from scratch on the design and the finances. In the computer industry, even a year can be a lifetime.
First pay for the crappy OS, and then for the support! This probably means forcing companies to get new hardware and a new OS, as it would make more sense to get new stuff than to pay for support and updates. I wonder how many companies still use NT4, though.
The big problem probably won’t be new hardware or a new OS; the big problem is that they may have applications that aren’t supported on the current Windows platform. That is probably the main reason why people still run NT4.
But then again, why upgrade if you have a functioning system? My guess is that an upgrade from NT4 to Windows XP/2003 would not provide any additional business value even if your applications didn’t need to be upgraded. The only thing you get is cost: costs for deployment, testing and training. Not to mention that the upgrade isn’t free.
This is why Microsoft has to shut down support or charge you through the nose for it. If they didn’t, people wouldn’t need to upgrade. This is what’s usually referred to as the Microsoft tax.
“While I’m not a big fan of Microsoft (since Ubuntu came into being I don’t use Windows anymore), I can totally understand why they don’t want to continue supporting an EIGHT YEAR OLD operating system. In the GNU/Linux world, people get upset when a distro has a six-month release schedule and only supports 2-3 versions back.”
The difference is that the Linux update is free. Linux is also a much more mature OS than Windows in the sense that things don’t really change all that much from one version to the next the way they do in Windows. It is a Unix clone, so it will look like Unix, and Unix has looked quite similar for the last 30 years.
If you installed Red Hat 6.2 back in the day, the settings files etc. will be very similar to what you see in Fedora Core 3 today. Upgrading from one version to the next usually isn’t harder than applying a service pack in Windows.
“The difference is that the Linux update is free.”
I guess that depends on which distro you’re talking about, yes? How many distro makers that charge for their product would let you upgrade for free when the last version you bought was released 8 years ago?
Anyway, it figures people would bitch about this, as if 8 years wasn’t long enough. Hell, even if MS supported this OS for 40 years and then dropped it, people would still find something to complain about.
Red Hat provides 7 years of support for its OS.
But on the desktop it’s different; people want to be on the cutting edge, so distros like Ubuntu provide 12 (or 18) months of support, and after that you don’t have to pay to get the newer version.
Fedora is the same, as are Debian, Gentoo and Mandrake.
you only have to download the ISO image and upgrade when you have the money for some blank CDs.
And on some distros you can upgrade with only two commands.
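For instance, on a Debian-style system the whole upgrade really is two commands; a minimal sketch, assuming /etc/apt/sources.list already points at the new release:

    # refresh the package lists for the new release
    apt-get update

    # pull every installed package up to the new release,
    # resolving changed dependencies along the way
    apt-get dist-upgrade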
“No please go back to your LinSux shell…and come back when LinSux has 5% market…”
On desktops, maybe, but on servers Linux is already the number-two OS.
NT4 was a bad idea as a server OS when NetWare and Unix were the kings.
I wonder why linuxsucks.org uses Linux, if it sucks?
http://uptime.netcraft.com/up/graph?site=linuxsucks.org
This makes perfect sense; it probably costs a lot of money to support such an old platform, money down the drain as it doesn’t result in new deployments.
And here you can actually get a custom support contract if you want to pay for it. I don’t even think that is possible with end-of-life products from Sun, for instance.
And no, upgrading is never free. Even if the software is free, it always takes time to do the upgrade and test that everything is working. That is only free if you live somewhere where wages are low, which I believe most OSNews readers don’t.
This is also why a lot of companies don’t want Linux distros with a six-month release schedule and only 18 months of support, and why Red Hat Enterprise Linux makes a lot of sense; as it stands, you can kinda add Debian to that list too 🙂
>Linux is also a much more mature OS than Windows in the sense that things don’t really change all that much from one version to the next the way they do in Windows.
Define “more mature OS”??? I think it’s the other way around, my friend!!
Are you really being honest when you say things don’t change in Linux? I think the Linux kernel changes way more than the Windows kernel. Windows device drivers are compatible across many versions, whereas that’s not at all the case in Linux.
I was thinking in terms of education needed.
You are right, the kernel has gotten major improvements at regular intervals (usually every 2 years). But older kernels are still supported, though perhaps not by your distro.
Sysinternals has a Filemon driver to monitor filesystem activity on Linux, and it broke going from the 2.4 to the 2.6 kernel. Talk about compatibility. Windows provides the best compatibility.
I would think that one or two Windows service packs have broken things as well. You don’t have to go back further than Windows XP SP2 to find several such examples. Regardless of the OS, this kind of thing is really annoying.
In terms of advances, would someone name some features of the Linux kernel that are better than Windows? Linux doesn’t even have proper asynchronous I/O support in the kernel.
What do you mean by proper?
I thought this was added in 2.6.
Sure: mandatory access control, as of Linux 2.6.x. This means there is no longer a root user with all privileges to do whatever he wants. E.g. you could create a security policy that prevents anything touched by your web browser or your mail client from being executed by root. Or you could prevent programs from modifying themselves or other programs and libraries, even if some bug lets permissions be elevated to root. You can have users log in with different security roles for different tasks. This works as an extra security layer, independent of the standard Unix-like permissions.
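A rough sketch of what that looks like in practice on an SELinux system (the domain and type names in the policy rule are illustrative, not taken from a real policy):

    # every process and file carries a security context, visible with -Z
    id -Z
    ls -Z /var/www/html

    # a type-enforcement rule in a policy source file reads roughly like
    # this: the web server domain may read its content files and nothing
    # else; even a process running as root inside httpd_t is bound by it
    #
    #   allow httpd_t httpd_content_t:file { read getattr };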
The comments on articles like this always turn into flamewars. I bet there are going to be ~40 more posts or more by the time I get back home today.
I’m not a big fan of Microsoft myself, but I think this was a good move. NT is very old by now, and they should have pulled the plug on it years ago. In my own business I’ve had clients who moved to Server 2003 because they didn’t want to deal with old software. On the other end, I migrated many systems from NT to Linux.
WinNT 4.0 is obsolete by today’s standards. It was stable and fast, I will give you that. Win2000 and WinXP Professional, however, support new equipment that NT cannot touch.
For once Microsoft has this policy right. Now I wish they would drop support for WinME and Win98. The ’90s are gone; the architecture must be rebuilt.
Mostly addressed to Wolf, who really doesn’t seem to have a clue what he’s talking about…
Companies like Red Hat charge for support from day one. At the time NT4 came out, LinSux sucked a$$ too. NT4 was much better for its time… now please start the Mac propaganda…
Ever used WinNT 4? It’s a piece of junk. Unless you’re very, very careful with it, it’s highly unstable, fairly slow, and generally doesn’t work properly. If correctly set up, with all service packs and patches installed, and with constant, careful maintenance, it possibly becomes useful as a server OS. Maybe. If you’re lucky.
Linux, at the time, wasn’t all that great either. However, it was never as much of a pain as Windows NT 4. And at the time the Linux kernel was around five years old, while Windows NT was around 8 years old and had many times the number of developers.
By the time WinNT 5 (Windows 2000) was released, the system was mature enough to actually be usable. The kernel is pretty much rock solid, although not bulletproof. So was Linux.
“Windows device drivers are compatible across many versions, whereas that’s not at all the case in Linux.”
“Sysinternals has a Filemon driver to monitor filesystem activity on Linux, and it broke going from the 2.4 to the 2.6 kernel. Talk about compatibility. Windows provides the best compatibility.”
Windows drivers are not generally compatible across multiple versions. Windows NT 4 used different drivers from Windows 2000. Windows 2000 and Windows XP use the same drivers, but the two systems are virtually identical, so that’s no surprise. Windows 9x used different drivers again, with the exception of WDM drivers (from Windows 98 onwards). The only exception is printer drivers, which are implemented in user space anyway. Or, more accurately, should be implemented in user space, but printer manufacturers seem to put a kernel-space component in the driver anyway. Longhorn will use a different driver interface yet again.
Linux kernel drivers tend to change with every major release. Not surprising, considering the amount of internal change between major versions. So what? Binary compatibility has never been a goal, and drivers for virtually every piece of supported hardware reside in the kernel source tree anyway, so even source compatibility is not an issue.
As for the sysinternals thing… Although they may know their way around the Windows kernel, they obviously do not know their way around the Linux kernel. Such a thing need not be implemented as a kernel module, and in fact should not be implemented as a kernel module. It should be implemented as a user space program. Oh wait… it is implemented as a user space program – strace. That can trace all system calls, and therefore all filesystem access.
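For example, something like this (a sketch; the PID is made up):

    # trace only the file-related system calls a command makes
    strace -e trace=file ls /tmp

    # or attach to an already-running process by PID
    strace -e trace=file -p 2345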
No one can prevent it. Hence, bring back some technical argument for the superiority of LinSux, even though we all know it’s technically inferior in design and implementation to Windows.
That’s purely academic anyway. The design of the Windows NT kernel is overkill for what it’s used for. The microkernel design gives debatable benefits for the maintainability of the system, but none of the other benefits are exploited in Windows NT, and likely never will be. There’s a lot of stuff it inherited from VMS, just sitting there doing absolutely nothing. Aside from which, it’s no more a pure microkernel than Darwin.
The design of the Linux kernel, on the other hand, is perfectly adequate. It’s intended to be a free Unix-compatible kernel, and that’s exactly what it is. Nothing more, nothing less. A minimalist, modular monolithic design is much simpler than a complex microkernel, and therefore simpler to implement, although it’s really something of a hybrid, much like Windows NT is.
That’s generally typical of Microsoft software compared to most open source software. Microsoft’s software tends to be over-engineered and extremely complex, while open source software tends to be fairly simple, yet perfectly adequate for the job, and is then simply discarded or replaced when it isn’t good enough anymore. Linux could not have existed if it were designed the same way as Windows NT, because we do not have the kind of resources to write software that’s far more complex than it needs to be – have a look at the HURD, for example. Brilliant design, but it’s never going to get anywhere, for exactly the same reasons that Windows NT took almost ten years to become a viable operating system, yet it has a much smaller development team.
“Define ‘more mature OS’??? I think it’s the other way around, my friend!!”
The internal design of the kernel matters not. What does matter is the application programming interface, and the way user programs interact with the kernel.
From that point of view, Linux is far more stable. It has not really changed since it was first written. Software written for Linux versions prior to 1.0 will likely still work on a modern system if you install the appropriate ancient versions of the libraries. The basic API just doesn’t change, hasn’t changed for a long time, and is actually an implementation of the POSIX standard. Windows isn’t any kind of standard – it’s just whatever Microsoft feel like doing this time.
“Define ‘more mature OS’??? I think it’s the other way around, my friend!!”
By mature, I meant that it has found its form. In fact, Linux had found its form long before the first Linux kernel was even combined with all the wonderful GNU tools to form GNU/Linux. By that time, Unix-like systems had been around for 20 years.
A person who was fluent in UNIX 15 years ago would feel right at home in a modern Linux distro. How useful would Windows knowledge from 15 years ago be in WinXP?
If you speak of the real age of the Linux vs. the NT kernel, NT is only one year older than Linux. Linus set out to make a clone of an existing OS and, by doing so, to take advantage of the experience of its predecessors. Microsoft lost valuable time converting VMS into something totally new and different.
“NT4 was a bad idea as a server OS when NetWare and Unix were the kings.”
Yes, and that must explain the ass-pounding Novell took in the ’90s, and I’m sure NetWare being king is what prompted Novell to buy SuSE.
Windows filled a real server need. Companies that had basic serving or group-management needs but didn’t need the full-on power of a UNIX server found a real deal in NT4.
It was about performance and usability at a good price. The same thing that is driving Linux on the server platform today.
Just blow it off. Give no credence to the other zealots. You cannot please 100 percent of the people 100 percent of the time.
Just my humble opinion rm6990.
“Windows isn’t any kind of standard – it’s just whatever Microsoft feel like doing this time.”
NT has been using the same API since the first release: Win32. Even with the .NET Framework it’s still available, and it can be used like it’s always been used. NT contains a Win16 subsystem; with a little tweaking you can get many Windows 1.0 and 2.x apps running (not that you’d want to, but it’s possible).
“A person who was fluent in UNIX 15 years ago would feel right at home in a modern Linux distro. How useful would Windows knowledge from 15 years ago be in WinXP?”
On what terms? As a developer or a user? A user would find much of the same core functionality: windows, a mouse, minimize and maximize functions. All the “OG” Windows utilities are still there: Paint, Calc, etc.
A developer would find that the tools have gotten vastly better. The API, while a little different, is an extension of the one they would have been using 15 years ago (Win16).
“Linus set out to make a clone of an existing OS and, by doing so, to take advantage of the experience of its predecessors. Microsoft lost valuable time converting VMS into something totally new and different.”
Using this logic it’s safe for me to say that Linus wasted valuable time reinventing the wheel with Linux, while David Cutler had previous experience writing operating systems, so Microsoft was able to take advantage of that experience and create something new instead of just rehashing the same shit done 30 years ago. Riiight.
Bottom line is that I don’t think you or Anon know jack about Windows beyond maybe some experience as end users.
You surely didn’t develop on the platform for any length of time (that’s obvious from the claims about how Windows development is such a moving target with constant changes), and no, I don’t consider ‘hello world’ to be real application development.
I know Deutsche Bank still uses Windows NT (at least my division does) on both the client and the server, although they are upgrading to WinXP on the client in January and to Win2003 on the server later in 2005.
But I can tell you that Windows NT is very stable. Sure, it doesn’t have all the latest bells and whistles, and it is difficult to make a USB drive work, but it is very stable. We hardly ever see system crashes (I can’t remember the last one). People here who claim that the OS is a joke have obviously never used it for more than 5 minutes.
But it is definitely showing its age.
“The big problem probably won’t be new hardware or a new OS; the big problem is that they may have applications that aren’t supported on the current Windows platform. That is probably the main reason why people still run NT4.”
Well, that is why they are introducing Virtual Server, which gives the ability to run old legacy server applications on Windows Server, and why they are embedding a VirtualPC-based VM in Longhorn for running legacy applications (yes, your MS Office XP will be a legacy application when Longhorn arrives). Win32 is about to be terminated.
“NT has been using the same API since the first release: Win32. Even with the .NET Framework it’s still available, and it can be used like it’s always been used. NT contains a Win16 subsystem; with a little tweaking you can get many Windows 1.0 and 2.x apps running (not that you’d want to, but it’s possible).”
Instead of introducing new APIs they have made major changes with every release, which is why you might not be able to run NT4 applications on W2K, or W2K applications on XP; with W2K3 it was estimated that up to 30% of the available server applications were unable to run unpatched. Well done…
There are actually people here saying that Linux has better backwards compatibility than Windows? People, I’m no Windows fan, but say what you want: Windows provides backwards compatibility that no other OS can. When one upgrades from, say, ME to XP, the ME applications will still work. Contrast that with Linux, where a point release of some obscure library can screw up the entire system.
Of course Windows has its flaws (security being one of them), but saying that Linux provides better compatibility over time is just plain ignorant.
This is the normal time to EOL NT4, just like other operating systems. It’s the company’s policy. No one is forcing people to upgrade to XP (or 2k), so if they want to use it, fine. If they still want support, then they have to pay for it.
“But I can tell you that Windows NT is very stable. Sure, it doesn’t have all the latest bells and whistles, and it is difficult to make a USB drive work, but it is very stable. We hardly ever see system crashes (I can’t remember the last one). People here who claim that the OS is a joke have obviously never used it for more than 5 minutes.”
One of the large problems is Redmond’s tendency to throw everything but the kitchen sink into the kernel for optimization reasons (that sucks). Redmond has several times attempted to build up a certification program for this reason, but has only very recently had success.
This illustrates what can happen when a company is dependent on proprietary software. There is no way for the company to buy support from anybody but the original seller. If the seller decides to end support or make it very expensive, you will have to pay.
Well, 8 years IS a very long time so I must say that I don’t blame MS for this decision. There are software companies that are far worse…
“I wonder why linuxsucks.org uses Linux, if it sucks?”
http://uptime.netcraft.com/up/graph?site=linuxsucks.org
ROTFLOL, tears in my eyes … that is SO funny.
That said, NT4 was a good stab at the server market, pretty clean (at first). I understand why these folks don’t want to upgrade; the tiny little “tools” in the 2k–2k3 MMC are nuts and esoteric.
hylas
I am not flaming or trolling by saying that NT is stable, albeit slower on older equipment, and that it suits companies with little budget for upgrading equipment or software.
There is always a difference between the “ideal” and the “real” situation in an organization. Beyond these forums and our idealism or lack thereof, many organizations just don’t have the cash to upgrade to the latest and greatest from hardware or software providers.
Most organizations that need this so-called support are mid-range to large in size. They are not generally small organizations or entrenched, closed Redmond shops.
1. While Windows has had asynchronous I/O longer than Linux, the M$ implementation is not kernelized (KAIO).
2. A common error is to assume that all Windows NT capabilities were working fully when it shipped.
This is patently false. I consider NT4 to be usable as of Service Pack 3. For an OS that had fine-grained security built-in “from day one” it sure was easy to crack.
3. WDM, while similar to the NT4 driver model, is not identical, and was implemented in Win2K.
4. Probably true overall, but it depends on your hardware – I had problems with my Savage MX-IX that were resolved by adding a single parameter in X, but that required locating and installing an unsupported driver for stable operation in Windows XP.
5. True, but it works well enough for me in Mandrake, even with .wmv files, and Xine has capabilities that Media Player lacks.
6. True. But this will quickly change (I hope).
7. Poppycock. LinuxThreads has provided kernel threads since ’97.
8. Can’t really comment on this, but doesn’t this only matter for hard real-time?
9. This appears to be true.
Here’s a question I’d like answered. If Windows NT4 was so well-designed and advanced, why was it so goddamn UNSTABLE and SLOW??
The first superior OS, in my opinion, to come out of Redmond was Windows 2000.
I personally think Windows XP is quite an achievement, and if Microsoft had released a similarly stable system back in 1998, chances are I, for one, would never have used GNU/Linux.
Windows 98se was a scourge and bane upon my computing life.
Many of the above poster’s arguments were at one point quite legitimate, though I encourage him to try out a user-friendly distribution.
And there are kernel threads in Linux, and have been for a while. SELinux is fundamentally different from the default Windows NT access control features.
“Multimedia support in Windows rocks, try to do same in LinSux. I used redhat 9 and xp both were there around same time. redhat could never make my sound card work. Please don’t tell me that i need to keep on trying every damn new LinSux distro and hope one will work. ”
Oh my God. You still use a discontinued version of Red Hat and say that Linux sucks?! Red Hat is also the worst distribution in the multimedia area.
I can watch more movies and play more multimedia files on Linux with MPlayer and XMMS than on Winsux with Winblows Media Player. Try to listen to an Ogg file or play an XviD-encoded movie on Windows and you will see that Windows without third-party programs is bad as a multimedia station.
“8. Linux kernel was cooperative pre-emptable till 2.4 which means kernel code could be only in one cpu at a time. ”
Linux 2.4 is the past. Try to use your Windows XP Home (or even Professional) with 4 or 8 CPUs and you will see that Linux 2.6 is much better.
“4. X- Sucks…Windows graphics are much more smoother and responsive ”
Try to run an application remotely on a normal Windows computer (without Terminal Services, which is very expensive and comes with stupid licenses).
X is not slow. The video adapter drivers for Linux are less mature than their Windows counterparts because hardware makers don’t invest much money and time in developing them. I can run Doom 3 perfectly at high quality on my Linux box with an Nvidia FX5700 video adapter.
“6. Wireless support in windows is much better ”
And so what? This is insignificant for servers and for most people in the world nowadays. It’s a problem of missing drivers, because hardware makers don’t write Linux drivers.
“2. Even though NT supported multi-user from day one they should have made it possible for normal users to run programs as smoothly as in Linux. Somehow Windows made admin-user logins a common practice, and so we see tons of viruses and malware getting installed all the time.”
This is a big problem and it is Microsoft’s fault. NT applications also are not made to work properly in a simultaneous multi-user environment. Because of this, there are several Windows applications that don’t work with NT Terminal Services.
The NT kernel is serious (because it is a copy of VMS), but Microsoft destroyed the project when they put the Win32 API on it and GDI at kernel level.
See NT technology:
http://www.computerjokes.net/162.htm
I’m just kidding… 🙂
“…GDI at kernel level.”
Didn’t MS have to do this because of the slowness of the PCI/AGP x86 Intel architecture? I read somewhere that this was done on purpose to speed things up and compensate for the defects/latency of x86.
> There are actually people here saying that Linux has better backwards compatibility than Windows? People, I’m no Windows fan, but say what you want: Windows provides backwards compatibility that no other OS can.
> When one upgrades from, say, ME to XP, the ME applications will still work. Contrast that with Linux, where a point release of some obscure library can screw up the entire system.
Windows has better backwards compatibility than Linux. It’s far from providing “backwards compatibility that no other OS can”, though. Just because one OS is better than another in some area does not make it the best of all operating systems in that area.
Actually, this story isn’t about getting free updates to your OS; it’s about getting support for your old OS. Do any Linux distros, or Sun, or Apple, support their OSes from 8 years ago? I’m pretty sure they don’t.
Personally I don’t think MS is unjustified in ceasing support for Windows 95/98/ME as well, but that’s just me.
Asking a Linux distributor to support an OS for 8 years when you downloaded it for free is asking a bit much, don’t you think?
I think Debian is the best bang for the buck, considering it is free. It has an incredibly slow release cycle (before the current lag between XP and Longhorn, Microsoft was releasing new Windows versions about as fast as, or faster than, Debian was releasing new versions of its OS), which is awesome for servers. Plus, it is incredibly stable. I ran the testing tree and never had to reboot, and it never crashed on me. I can only imagine how stable the stable tree is.
SuSE costs a fortune now, and they don’t even provide real support for their *current* version. Just look at the list of things for which they still charge via *expensive* phone numbers… they practically exclude every possible problem… As if this wasn’t enough, support for a version ceases 30 days(!) after a new version is released…
Why isn’t anyone complaining about that?
“NT4 was a bad idea as a server OS when NetWare and Unix were the kings.”
Right. Which explains why NT wiped the floor with NetWare, and why Novell just bought SuSE to try to get hold of a product to keep them in business.
“Ever used WinNT 4?”
I have. I switched from OS/2 to NT4 beta 2 in February 1996 and used it solidly until Windows 2000 replaced it.
“It’s a piece of junk. Unless you’re very, very careful with it, it’s highly unstable, fairly slow, and generally doesn’t work properly.”
I can count the number of times my NT4 systems crashed on one hand, and most of those crashes are directly attributable to hardware failure or poorly written low-level software.
No other OS I’ve used performed as well with equivalent functionality on equivalent hardware. Certainly not Linux, which was still doddering along with FVWM as “cutting edge”.
“If correctly set up, with all service packs and patches installed, and with constant, careful maintenance, it possibly becomes useful as a server OS. Maybe. If you’re lucky.”
NT4 was an excellent corporate and power user desktop OS.
“Linux, at the time, wasn’t all that great either. However, it was never as much of a pain as Windows NT 4.”
Bollocks. Its GUIs sucked, the applications were worse, performance was barely average and SMP support was atrocious.
A lot of that “cutting edge” stuff that has only relatively recently gone into Linux, like ACLs and an O(1) scheduler, NT4 (and NT 3.x, for that matter) had way back then.
“Windows drivers are not generally compatible across multiple versions.”
[…]
“Linux kernel drivers tend to change with every major release.”
Linux drivers are not generally compatible across *minor point releases*.
“The internal design of the kernel matters not. What does matter is the application programming interface, and the way user programs interact with the kernel.”
“From that point of view, Linux is far more stable. It has not really changed since it was first written.”
O_o
Wow, it’s not often I’m speechless, but that really takes the cake.
Binary compatibility under Linux is practically nonexistent. You’re lucky if low-level programs that interact with the kernel are compatible across minor point releases (x.y.1 -> x.y.2), let alone major releases. Binary compatibility is a *long* way down the list of priorities held dear by Linux developers.
Indeed, the poor API stability of Linux is one of the major reasons its commercial support has been so poor.
“Software written for Linux versions prior to 1.0 will likely still work on a modern system if you install the appropriate ancient versions of the libraries. The basic API just doesn’t change, hasn’t changed for a long time, and is actually an implementation of the POSIX standard. Windows isn’t any kind of standard – it’s just whatever Microsoft feel like doing this time.”
Windows software dating from Windows 2.x in the late ’80s still works on NT. You have no idea what you’re talking about.
“The difference is that the Linux update is free.”
Not if you’re using a commercial Linux.
Not if your time costs money.
“Linux is also a much more mature OS than Windows in the sense that things don’t really change all that much from one version to the next the way they do in Windows.”
Rubbish. Linux distributions change substantially between releases and differ substantially from each other.
“It is a Unix clone, so it will look like Unix, and Unix has looked quite similar for the last 30 years.”
This sounds reasonable in theory, but doesn’t work in practice.
“A person who was fluent in UNIX 15 years ago would feel right at home in a modern Linux distro.”
I sincerely doubt that. If you pulled a Solaris admin off a 5-year-old Solaris system and plonked him on a modern Linux distro, he’d probably have several days’ worth of familiarising to do.
“How useful would Windows knowledge from 15 years ago be in WinXP?”
About as useful as 15 year old Unix knowledge would be trying to use Ubuntu.
“Actually, this story isn’t about getting free updates to your OS; it’s about getting support for your old OS. Do any Linux distros, or Sun, or Apple, support their OSes from 8 years ago? I’m pretty sure they don’t.”
“Personally I don’t think MS is unjustified in ceasing support for Windows 95/98/ME as well, but that’s just me.”
I don’t blame them either. From a business perspective this is the only sane thing for MS to do. What I can’t get is why their customers keep buying their systems from them.
Computers and their software are mission critical to most businesses today. Why accept having your information held hostage by one single firm? If MS decides to discontinue a product, they can do so. If MS decides to charge for support, they can do so, and you will have to pay whatever price they decide. You simply shouldn’t have just one vendor able to supply support for business-critical systems.
If they had made their source code available in a way that let other companies give you support, it would have been another matter. Then you could go to somebody else if the MS price was too high. Just the fact that there would have been competition would probably have lowered the MS price in the first place.
MoronPeeCeeUSR:
“NT has been using the same API since the first release: Win32. Even with the .NET Framework it’s still available, and it can be used like it’s always been used. NT contains a Win16 subsystem; with a little tweaking you can get many Windows 1.0 and 2.x apps running (not that you’d want to, but it’s possible).”
Technically, Win32 is not the NT API. It’s just another subsystem server, same as the Win16, OS/2 or POSIX servers. The only difference is that it’s the primary one. The NT kernel API hasn’t remained anywhere near as stable as the upper level APIs have. Nor, in fact, have the libraries that run on top of the Win32 APIs. One minor change to a shared library can screw up almost the entire system, and until very recently there has been absolutely no mechanism in place to prevent that from happening. Heard of DLL hell? Same problem applies on any system that uses shared libraries.
The exact same thing is true on Linux. The kernel has changed dramatically, but the important functionality has remained largely the same. The standard libraries use the exact same API they always have, and if they don’t, then you can install multiple versions of the same library, each exposing a different API. Each time binary compatibility for a library is broken, the major version number is incremented, and you just install a newer version. Even Microsoft does that with the MSVC runtimes and MFC. Glibc is even better – it has independent versioning on every symbol exported by the library, so applications will simply use the version they were linked against.
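You can see that per-symbol versioning directly; a sketch (the exact paths and output vary per system, and the sample output line is illustrative):

    # list the dynamic symbols glibc exports, with their version tags
    objdump -T /lib/libc.so.6 | grep ' printf$'

    # typical output: printf is tagged with the glibc version whose
    # ABI it preserves
    #   000551c0 g   DF .text  00000039  GLIBC_2.0  printf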
The upshot of this is that any Linux app in ELF format using Glibc will work just fine on any modern Linux system, and will keep working pretty much forever, as long as the appropriate library versions are present. ELF binaries using the older libc5 will work if you install the libraries it requires. a.out format binaries are more work, but with a little tweaking you can get those running on a modern Linux system as well.
And those older libraries will still work on Linux. The kernel-level ABI has not changed in a way that has broken binary compatibility yet. More importantly from a user perspective, the X11 protocol hasn’t changed in a way that’s broken binary compatibility either. You can run an X11 Linux app from as far back as X11 worked on Linux, and it’ll still run exactly as it used to. It’ll look severely out of place, but so do Win16 apps on a Win32 system.
“David Cutler had previous experience writing operating systems, so Microsoft was able to take advantage of that experience and create something new instead of just rehashing the same shit done 30 years ago.”
Well, the NT kernel is pretty much a rehashing of VMS. It shares the exact same internal architecture, up to a certain point. That point is where Microsoft insisted that Cutler do things their way. That means that it has lost some of the capabilities that VMS had (full, native, kernel-level support for clustering, for example), and gained a few design stupidities, such as putting the GDI in the kernel.
drsmithy:
“I can count the number of times my NT4 systems crashed on one hand, and most of those crashes are directly attributable to hardware failure or poorly written low-level software.”
I can say the same about Linux, from around 1997 to present.
“Bollocks. Its GUIs sucked, the applications were worse, performance was barely average and SMP support was atrocious.”
“A lot of that ‘cutting edge’ stuff that has only relatively recently gone into Linux, like ACLs and an O(1) scheduler, NT4 (and NT 3.x, for that matter) had way back then.”
The GUIs and applications have sod all to do with Linux itself.
As for the “cutting edge” stuff… ACLs are a massive improvement over OS/2, Windows 3.x or Windows 9x, which had absolutely no security measures whatsoever. They aren’t such an improvement over standard Unix permissions. It’s possible to do virtually anything you can do on one with the other. The O(1) scheduler is just a different way to handle a problem, and although better than an O(n) scheduler under load, it’s certainly not advanced or cutting edge. Same deal with kernel preemptability – it’s just a side effect of having a fully reentrant kernel, isn’t an advanced feature, pretty much comes for free with an SMP capable microkernel, and doesn’t give any actual benefits if the rest of the kernel is correctly designed.
“Binary compatibility under Linux is practically nonexistent. You’re lucky if low-level programs that interact with the kernel are compatible across minor point releases (x.y.1 -> x.y.2), let alone major releases. Binary compatibility is a *long* way down the list of priorities held dear by Linux developers.”
“Indeed, the poor API stability of Linux is one of the major reasons its commercial support has been so poor.”
First, you should not be writing programs that interact with the kernel on that level. It’s a fundamentally bad idea, and you are just begging to have it break as soon as the kernel is changed.
Drivers? Perhaps. They don’t usually break across minor point releases, though they likely won’t remain binary compatible. But we weren’t talking about drivers, were we? We were talking about user-level programs. User-level programs always go through the rigidly defined external API (the system call interface), and they make use of the standard libraries. The system call interface has never broken backward compatibility. The shared libraries have, but you can always install multiple versions of them.
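Concretely, the side-by-side story looks something like this (a sketch; the library and program names are made up):

    # two incompatible major versions of a library installed side by side
    ls /usr/lib/libfoo.so.*
    #   /usr/lib/libfoo.so.1.2  /usr/lib/libfoo.so.2.0

    # the dynamic linker resolves each binary against the major version
    # it was linked with
    ldd ./old-app | grep libfoo
    #   libfoo.so.1 => /usr/lib/libfoo.so.1.2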
It’s quite possible to upgrade the kernel from 2.2 to 2.6, and everything will keep working. You will have to replace some of the really low level system tools or user space helper programs, but those programs are the only programs that you need to change.
“Windows software dating from Windows 2.x in the late ’80s still works on NT. You have no idea what you’re talking about.”
Windows software from the late ’80s is working through a compatibility layer. Of course it still works, but only if you screw around with it first. Hell, it works on Linux too, using WINE. There’s no real difference – the Win16 subsystem sits on top of the Win32 subsystem, and allows Win16 apps to run by translating between the two, and providing the required functionality. Similarly, WINE sits on top of Linux + X11, and allows Win16 apps to run by translating between the two, and providing the required functionality.
Yes, Linux apps from as long ago as there were Linux apps will probably still work. There may be a few that don’t. But there are old Windows apps that don’t work on current systems too.
“I sincerely doubt that. If you pulled a Solaris admin off a 5-year-old Solaris system and plonked him on a modern Linux distro, he’d probably have several days’ worth of familiarising to do.”
Do you have any experience with Solaris or Linux? I doubt it. I have run them both. If you go only as little as five years back, they are almost identical to administer.
E.g. if you want to change from file-based login authentication to some directory-based system such as LDAP, you would edit the same files in the same way on both systems. Have him configure which nameserver to use: in both cases he would edit /etc/resolv.conf. If he wanted home directories to mount automagically when users log in, he would edit /etc/auto.home and /etc/auto.master, just as in Linux. Both systems’ startup procedures begin in /etc/inittab. They have the same run levels. They have extremely similar directory structures. They have the same system for starting and stopping services at boot. If a disk gets corrupted, he would use fsck on both systems. The way to configure which programs start when a user logs in would be the same… I could go on and on.
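To make that concrete, these are the kinds of files in question, with the same syntax on both systems (the addresses and paths are illustrative):

    # /etc/resolv.conf -- which nameservers to query
    nameserver 192.168.0.1
    search example.com

    # /etc/auto.master -- automount map for home directories
    /home  /etc/auto.home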
Now, most of this would apply even if he had his Unix knowledge from 10, 15, or even 20 years ago.
Sure, he would have some familiarising to do, but not more than if he switched to the AIX of five years ago, or the AIX of today. I would say that the way you run Unix has found its form. This really does make a difference when it comes to education costs.
“Technically, Win32 is not the NT API. It’s just another subsystem server, same as the Win16, OS/2 or POSIX servers. The only difference is that it’s the primary one. The NT kernel API hasn’t remained anywhere near as stable as the upper level APIs have.”
Nor does it need to, given the tiny proportion of things that actually have any justification for using it.
There’s a very good reason why the documentation on the “native” NT API is thin on the ground – it’s because developers aren’t really supposed to use it, they’re supposed to use one of the personalities like Win32 or POSIX – precisely so Microsoft can make changes to it as requirements demand, with little fear of breaking things.
“Nor, in fact, have the libraries that run on top of the Win32 APIs. One minor change to a shared library can screw up almost the entire system, and until very recently there has been absolutely no mechanism in place to prevent that from happening.”
When are you measuring “very recently” from? Mechanisms have been in place since _at least_ the mid ’90s.
“Heard of DLL hell? Same problem applies on any system that uses shared libraries.”
“DLL Hell” largely disappeared in the late ’90s as the last bits of 16 bit Windows software died away. I can’t even _remember_ the last time I saw a DLL conflict.
“The upshot of this is that any Linux app in ELF format using Glibc will work just fine on any modern Linux system, and will keep working pretty much forever, as long as the appropriate library versions are present. ELF binaries using the older libc5 will work if you install the libraries it requires. a.out format binaries are more work, but with a little tweaking you can get those running on a modern Linux system as well.”
The difference is that under Linux, to get this benefit, you need to do a great deal of fucking around: finding the right libraries, compiling them, finding the dependencies they inevitably have, compiling those, installing it all, and finally crossing your fingers and hoping it works.
On Windows, I just install the thing and run it. The chances of it not working are extremely small.
“The GUIs and applications have sod all to do with Linux itself.”
Yes, they do, because a kernel on its own is about as useful as tits on a bull.
“They aren’t such an improvement over standard Unix permissions. It’s possible to do virtually anything you can do on one with the other.”
Well, not really. You can sorta-maybe cover some of the functionality of per-user ACLs with Unix groups and the u/g/o permissions model, but it very quickly becomes horrendous to manage, and in some Unixes you start running into restrictions on the number of groups users can be in, the total number of groups allowed, etc.
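For what it’s worth, Linux itself had grown POSIX ACLs by 2.6, which cover exactly that case without minting a new group; a minimal sketch (the username and file are made up, and the filesystem must be mounted with ACL support):

    # grant one extra user read/write access to a single file
    setfacl -m u:alice:rw- report.txt

    # inspect the resulting access control list
    getfacl report.txt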
“The O(1) scheduler is just a different way to handle a problem, and although better than an O(n) scheduler under load, it’s certainly not advanced or cutting edge. Same deal with kernel preemptability – it’s just a side effect of having a fully reentrant kernel, isn’t an advanced feature, pretty much comes for free with an SMP capable microkernel, and doesn’t give any actual benefits if the rest of the kernel is correctly designed.”
Well, according to the Linux crowd all this stuff was the bee’s knees back when it was getting shoehorned into Linux, while they conveniently ignored the way other OSes like NT and Solaris had had them for years.
“Drivers? Perhaps. They don’t usually break across minor point releases, though they likely won’t remain binary compatible.”
To most people – pretty much anyone outside the Linux community, really – that’s “breaking”.
Linux probably has the _worst_ reputation of just about every remotely mainstream OS for backwards compatibility and legacy support. A dubious honour the community has worked hard to achieve, I might add.
Your comments would apply well to the BSDs, where the developer community *does* consider backwards compatibility and legacy support an important issue, and design and development are carried out in accordance with that. But they are poorly suited to Linux, where backwards compatibility is at most considered an incredibly annoying afterthought (an attitude largely influenced, IMHO, by the GPL zealots).
“But we weren’t talking about drivers, were we? We were talking about user-level programs.”
You were talking about drivers.
I’ll add that just about everyone else – Microsoft, Sun, the *BSDs and Apple – manages to keep low-level binary compatibility, including with drivers, between minor and sometimes even major kernel revisions.
“Windows software from the late ’80s is working through a compatibility layer. Of course it still works, but only if you screw around with it first. Hell, it works on Linux too, using WINE. There’s no real difference – the Win16 subsystem sits on top of the Win32 subsystem, and allows Win16 apps to run by translating between the two, and providing the required functionality. Similarly, WINE sits on top of Linux + X11, and allows Win16 apps to run by translating between the two, and providing the required functionality.”
The difference is, I can install Windows XP and out of the box – or with extremely minor additions usually made automatically by the program installer – be able to run 90%+ of all the Windows programs ever written (and probably ca. 70% of all the DOS programs). The same applies to other OSes like Solaris, OS X and FreeBSD with their relevant legacy applications.
On Linux, you have to screw around for hours, if not days (if not weeks), tracking down old libraries and their dependencies to get the same levels of legacy support. Out of the box, you’d be lucky to run any applications more than a few years old, and anything remotely low-level wouldn’t even be worth trying.
The real hypocrisy is, IMHO, that the well-earned “DLL hell” label Windows had back in the 3.x days had all but disappeared by the late ’90s, right about the time the fragmentary nature of “Linux” gave it a whole new meaning, yet the Linux zealots *still* give Windows stick about “DLL Hell”.
“Do you have any experience with Solaris or Linux? I doubt it.”
About 6 – 7 years worth.
“I have run them both. If you go only as little as five years back, they are almost identical to administer.”
A 5-year-old Solaris system is rather different to admin – and even more different to “just use” – than a modern Linux system.
Of course, the Solaris admin would fare a lot better than the Linux admin going in the opposite direction…
E.g. […]
Now let’s consider some other stuff: device names, RAID setup, package management, shell-scripting environments, standard tools, X configs, system installation procedures, commercial software support.
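Package management alone illustrates the gap; a sketch (the package names are made up):

    # Solaris: System V packages
    pkgadd -d ./tool-1.0.pkg

    # Red Hat-style Linux: RPM
    rpm -ivh tool-1.0.i386.rpm

    # Debian-style Linux: dpkg/APT
    apt-get install tool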
Then there’s the stuff that will really annoy the Solaris dude – patches breaking things, binary incompatibilities, the patchwork quilt and bloat of the GNU userspace, “DLL hell”, etc.
Sure, a Solaris admin is going to be able to “get by”, but it’s probably not going to be an enjoyable experience, and he isn’t likely to be adminning the Linux system the way it’s “supposed” to be adminned; he’ll be shoehorning in his Solaris admin techniques or just rolling his own tools to do stuff.
As an aside, most Solaris admins – particularly old-skool unix hackers – I’ve met *hate* having to use Linux systems. They much prefer the BSDs.
“Instead of introducing new APIs they have made major changes with every release, which is why you might not be able to run NT4 applications on W2K, or W2K applications on XP; with W2K3 it was estimated that up to 30% of the available server applications were unable to run unpatched. Well done…”
A few things have been deprecated since NT4, and for good reason. A few of the deprecated API calls actually map to the newer functions.
As for Win2k3, the heavy changes came in the security model and IIS. The API itself had some additions, but application-wise things didn’t change that drastically. ASP applications took a big hit, because they were the most affected by the security changes. ASP also comprises the vast majority of ‘server applications’ on Windows.
You can’t have it both ways. You can’t on one hand ask MS for better security, and then on the other flip out when the attempt to make that change breaks some of your *scripts*.
Most of the changes that came in Win2003 that broke things weren’t even in Windows itself; they were IIS changes.
“As an aside, most Solaris admins – particularly old-skool unix hackers – I’ve met *hate* having to use Linux systems. They much prefer the BSDs.”
Exactly why Linux is way ahead of any BSD system. Who cares about old-skool conservatives? Hell, they even complain about something like Project Utopia because it auto-manages their CDs, USB devices and digital cameras, claiming the only true way is to mount things themselves, even though they still have the option to do exactly that.
Kill the old-skool conservatives…
“A 5-year-old Solaris system is rather different to admin – and even more different to ‘just use’ – than a modern Linux system.”
“Of course, the Solaris admin would fare a lot better than the Linux admin going in the opposite direction…”
So, you say Solaris is rather different to use or admin than Linux. Let’s buy that for a second and apply your perception of “different” to the differences between the first version of Windows and the current Windows XP. Then you have unbearably different.
If you had started out using Windows from the start and, being a good boy, upgraded to each new Windows version as soon as it arrived, you would have experienced “unbearably different” several times over the years. And I guess “unbearably different” probably means either that you are very unproductive or that you need to ask your employer to pay for education.
“So, you say Solaris is rather different to use or admin than Linux.”
I say a 5-year-old version of Solaris is different enough from a current version of Linux to cause some short-term pain for anyone making that sort of leap.
“Let’s buy that for a second and apply your perception of ‘different’ to the differences between the first version of Windows and the current Windows XP.”
Well, now you’re talking about a twenty year difference, not a five year one.
“If you had started out using Windows from the start and, being a good boy, upgraded to each new Windows version as soon as it arrived, you would have experienced ‘unbearably different’ several times over the years.”
Perhaps 3 times. There were major changes in the UI at 1.x -> 2.x, 2.x -> 3.x and 3.x -> 9x/NT4/2000/XP.
“And I guess ‘unbearably different’ probably means either that you are very unproductive or that you need to ask your employer to pay for education.”
More than likely. But the same would apply to a unix admin or user stepping out of a ca. 1984 unix system onto something like Ubuntu or Mandrake.
“NT has been using the same API since the first release: Win32. Even with the .NET Framework it’s still available, and it can be used like it’s always been used.”
Of course…
Looks like you never programmed on Windows… I, for one, always find it funny when some Windows API doesn’t do what it was supposed to do from one version to another, because a patch or a new version of the development framework changed something.
Like those rather good .NET CF APIs that simply don’t exist (but are documented!), exist (but don’t work), or something in between (I’m especially fond of the crypto minefield between .NET CF and standard .NET).
Stable API… NOT.
“Looks like you never programmed on Windows…”
Of Course…
Looks like you are talking about something a lil’ bit different from my post. I’m not talking about .NET and the Compact Framework; I’m talking about the true-blue old-school Win32 API.
Compact Framework? Pardon me, but WTF does that have to do with Win32 compatibility across the NT product line?
“I, for one, always find it funny when some Windows API doesn’t do what it was supposed to do from one version to another, because a patch or a new version of the development framework changed something.”
Well, .NET is a bit of a different beast, I’ll agree. And the Win32 API itself has gone through some changes. However, it is not some rewrite-everything ordeal with every point release of the OS, like a lot of people like to claim.
It just simply isn’t that bad.
“The difference is that the Linux update is free.”
Irrelevant. We’re talking about support here. The distros won’t support code more than 2 years old. It’s a fact. Deal with it.
“Linux, at the time, wasn’t all that great either. However, it was never as much of a pain as Windows NT 4. And at the time the Linux kernel was around five years old, while Windows NT was around 8 years old and had many times the number of developers.”
You’re comparing apples and oranges. At the time NT4 was released, Linux was little more than a college-quality experiment. It couldn’t compare to NT4. It didn’t support SMP. It only supported a narrow range of hardware and CPUs. So comparing today’s operating systems to an 8-year-old OS is just ridiculous.
“with W2K3 it was estimated that up to 30% of the available server applications were unable to run unpatched. Well done…”
There’s a very good reason for that. W2K3 security is seriously good now — and it enforces restrictions on apps that didn’t exist on previous versions of their server software. Apps took advantage of that laxity to shoot themselves in the foot. W2K3 won’t let them do that anymore. And there’s nothing wrong with that. If some of you Linux folks were told that you had to compromise security in order to accommodate old apps, you’d start whining and crying about it… Don’t blame MS for doing the same thing that you would have done in their shoes.
“Looks like you never programmed on Windows… I, for one, always find it funny when some Windows API doesn’t do what it was supposed to do from one version to another, because a patch or a new version of the development framework changed something.”
Example?
“The design of the Linux kernel, on the other hand, is perfectly adequate. It’s intended to be a free Unix-compatible kernel, and that’s exactly what it is. Nothing more, nothing less. A minimalist, modular monolithic design is much simpler than a complex microkernel, and therefore simpler to implement, although it’s really something of a hybrid, much like Windows NT is.”
Uh, regardless of its simplicity, I’m going to have to defer to Andrew Tanenbaum — widely regarded as “the” expert on OS architecture and design — on that issue. And he’s of the opinion that Linux’s monolithic kernel design is a dated kludge. Even Torvalds admits that microkernel design is preferable.
See http://people.fluidsignal.com/~luferbu/misc/Linus_vs_Tanenbaum.html for details.
> > “Multimedia support in Windows rocks, try to do same in
> > LinSux. I used redhat 9 and xp both were there around same
> > time. redhat could never make my sound card work. Please
> > don’t tell me that i need to keep on trying every damn new
> > LinSux distro and hope one will work. ”
>
> Oh my God. You still use a discontinued version of Red Hat
> and say that Linux sucks?! […]
When was RH9 released, when was it discontinued? And now who rants about Windows NT being discontinued after 8 years?
“There’s a very good reason for that. W2K3 security is seriously good now — and it enforces restrictions on apps that didn’t exist on previous versions of their server software. Apps took advantage of that laxity to shoot themselves in the foot. W2K3 won’t let them do that anymore. And there’s nothing wrong with that. If some of you Linux folks were told that you had to compromise security in order to accommodate old apps, you’d start whining and crying about it…”
Linux folks? Who, me? Wait a minute, I’ll have to find a GNU/Linux distro to install.
“Don’t blame MS for doing the same thing that you would have done in their shoes.”
You’re quite right; the first thing I would do is introduce an 18-month API morphing cycle and refuse to disclose the full specs.
“Uh, regardless of its simplicity, I’m going to have to defer to Andrew Tanenbaum — widely regarded as “the” expert on OS architecture and design”
Oh yeah, because Minix is such an incredible piece of work. Anybody can talk, but when Tanenbaum creates an OS better than Linux and actually has something to show for his efforts besides writing a textbook, then we’ll talk about his “genius”. Until then he gets lumped in with all the people who never thought Linux would even get this far. Meanwhile, some of the biggest corporations in the IT industry are either selling Linux or contributing to OSDL. They’re all wasting their money and time? Maybe you should tell them that.