Last week we talked about what Linux (well, okay, X) could learn from Windows Vista and Windows 7, focusing on the graphics stack. A short article over at TechWorld lists seven things Windows 7 should learn from the Linux world. Some of them are spot-on, a few are nonsensical, and of course, I’m sure you have a few to add too.
The article starts off by stating that Microsoft should adopt more frequent release cycles. This is probably something we can all agree on here – it would be nice if Microsoft released Windows versions a bit more often to keep the operating system up to date and ready to face the competition. A more incremental approach to development would also make it easier for businesses and home users to keep their computers up to date with the latest Windows version – which means more money for Microsoft.
The second point is a bit silly to me. The author states that Windows should adopt a saner release versioning scheme, but I don’t really see why. Of course, it would be preferable if Microsoft stuck to the simple number scheme from now on (Windows 7, 8, 9, and so on), but in the grand scheme of things, who really cares? As long as they don’t start using silly alliterating adjective/animal combinations, I’m happy.
The author also advocates online operating system upgrades. That is very welcome indeed, and should tie in with the first point about more regular releases. If you have more regular releases, breakage between different versions is less, and that should make upgrading easier. As for keeping third party applications up to date – please don’t implement something along the lines of .deb or .rpm. While these systems have their obvious strong points, they are still flawed and archaic, just like any other popular program installation system. If you want to improve in this area, then do it right.
Better web application integration. Microsoft, please don’t waste your time on this. The social networking world is a fickle one, and people will jump from one shiny object to the next like a bunch of hyperactive magpies. I’d rather have Microsoft fixing bugs or something than wasting time on integrating the social network du jour.
The author also talks about supporting open development environments, stating Microsoft should deliver bindings for open source programming languages as well as the tools to use them. Now, I’m no programmer, but I always thought that Microsoft provided fairly decent support for things like Python and Ruby, for instance within .Net. I’m sure some of you more educated folks out there can detail this a little better.
Another suggestion is that Microsoft should slim down for the mobile world, and I concur quite strongly. While Windows 7 performs much better on netbooks than Vista does, it still takes up way too much hard drive space, making it problematic to run 7 on netbooks with small SSDs (4-8GB). It would be nice if OEMs and users were given more control over Windows’ components – contrary to what the author states, Vista and 7 have seen major efforts by Microsoft to componentise the operating system. I’d love to be able to assemble my own Windows installation in a blessed way – instead of relying on third-party tools which I don’t trust.
Lastly, the author suggests better device support, stating that users should no longer have to hunt down drivers separately. I guess the author hasn’t installed Windows 7 yet, because that’s exactly what Microsoft has been doing. I’ve installed Windows 7 on four different systems so far (not very scientific a sample, but still, they range from Atom single and dual-core, to Pentium IV-era, to Phenom X4) and all hardware was detected either out of the box, or via Windows Update. I’m sure it’s still not up to par with Linux, especially when it comes to printers, but it’s already improved epically.
So, what would you like to add to this list?
The possibility to buy any computer without Windows, perhaps?
Thom, which are nonsensical? They all made sense to me.
Read the article,
Thom thinks the point about version naming, and the one about better web application integration are nonsensical. I agree with that.
Well, the actual version number of Windows 7 is 6.1. That’s pretty stupid if you ask me. 😉
I found the original article (not Thom’s) really stupid; it just shows the blindness that today’s “tech journals” have about the standard user. Most people don’t change to a newer OS until they really need to, usually because their computer breaks. A faster release cycle would only harm Microsoft, because most people don’t want to change how things work. We might have liberal thinking, but our habits are very much conservative.
What I would want to see is Microsoft building platforms that would allow other firms to bring content. Steam is a good example, but it’s too much under Valve’s control. Make a platform that would bring game news from news sites and game demos from publishers, and allow people to buy games online. Sure, the Internet pretty much has this, but all the content is scattered across hundreds of sites. Same thing with a Marketplace and upgrading.
Seems to me that your sentence (in bold) implies that even with faster release cycles MS would still change the way the OS works in the style of the XP->Vista change, which I don’t agree with.
Faster release cycles combined with online OS upgrades could enable MS to make small incremental improvements to its OS without drastic changes to the way things work.
Most Linux distributions seem to manage that.
MS could also implement a subscription-style payment scheme to enable people to choose to either buy the OS “as it is” (as in today’s situation) or to buy the OS with a subscription to yearly updates and improvements beyond the standard security updates and service packs.
I don’t disagree about the conservative nature of most users’ minds.
Change is often scary
They already have a subscription scheme. Actually they have tons of them.
Most of them are not for single home users (MSDN, OpenLicense, etc). However, there is at least one that may appeal to the public. If you can get TechNet at a discounted price ($100/year), you’ll practically get “everything” MS produces.
for testing purposes only.
On the matter at hand, I don’t really see a quick update cycle anytime soon. Nobody I know wants to update their system on their own, so add a $50-100 installation fee for a technician + $50-150 for the OS itself and you have nobody upgrading. (Online upgrade is nonsense to me – what if you want to reinstall it? Sorry, no go for most people.) Microsoft stated after Vista that they wanted to maintain the two-year release cycle they had before the whole Vista fiasco.
Having some sort of “app store” would be great, but Microsoft is a monopoly and they would get sued all the way, and the number of applications developed for Windows is just huge – nobody could manage that flow of apps … but the idea is great, just not realistic.
Make programs transferable without being tied to apps & settings/registry/Windows docs – make them portable. Microsoft already has this technology available to big enterprises, so why not use it? (Like they should’ve done with Shadow Copy, by integrating a nice interface.) Every time I install Windows I have to go through the 1h+ process of reinstalling everything (except Steam, which reinstalls itself – the service – just by launching it, which is just great: no more reinstalling games!)
The topic is great, but the points presented (I didn’t have time to read all the comments yet) aren’t very realistic from the perspective of Microsoft and their intended customers, unfortunately.
People won’t pay for a subscription like that.
Microsoft has already found that out the hard way from their Volume License customers. When they made the switch, 66% of their VL customers upgraded to the new license (the other 33% dropped it); of those 66%, only 33% renewed their contract when it came time.
And that is only the commercial, large-volume world – speaks nothing of the SMBs, SOHOs, or home users; of which, most don’t have the money to spare for such subscriptions.
I’ve installed Windows 7 on four different systems so far (not very scientific a sample, but still, they range from Atom single and dual-core, to Pentium IV-era, to Phenom X4) and all hardware was detected either out of the box, or via Windows Update.
Windows 7 is detecting all hardware now because it’s something new. I wish that it stayed that way via Windows Update. One can only hope…
It’s funny how some things have drastically changed.
Consider the following quote:
“all hardware was detected either out of the box, or via Windows Update. I’m sure it’s still not up to par with Linux”
It would have been complete nonsense a few years ago.
Regarding application installation, I think that the end-user experience of installing applications on linux is really great and something that windows should learn from.
In windows you need to open a web browser and google the application’s name and then navigate the app’s website to locate the correct package, and then execute it, and then answer questions from the installer.
In linux, you pretty much just need to open the package manager UI front-end, input the app’s name, press a button and… That’s all. The relevant packages are summoned and installed for you.
So even if the various package systems used on linux have their faults, lack standardization and the filesystem layout is an archaic mess, the end-user experience is very simplified and very good (and thanks to package kit, the front-end to manage packages is getting standardized, even if the underlying package management systems are not).
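As a rough illustration of that simplified flow (assuming a PackageKit-based distribution where the distro-neutral `pkcon` command-line client is available, and using `vlc` purely as an example package name):

```
pkcon search name vlc   # find the package in the configured repositories
pkcon install vlc       # resolve dependencies, download and install
```

The same two steps work unchanged whether the distribution underneath uses apt, yum or urpmi – which is exactly the standardization PackageKit is after.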
I find that searching for an application on Google, then downloading and installing it, is much easier than using package management on Linux. I’ve been using Linux for a long time – I’m even a network administrator whose main OS is Linux.
When using a package manager, you don’t know where the application data goes after installation, and you must know the exact name of the application. It’s also a waste of bandwidth: on Windows, when you download an installer, you can save it and give it to your friend, for example – just copy and run it on another PC. On Linux it was a pain in the ass for a novice user: copy the cache, copy the RPM, run the package manager installer, configure that… everything is a mess.
Linux really could learn from Windows, or even better from OS X, in this case.
And why should anyone really care where the application data goes? We’re talking about ordinary home users here, not geeks who read OSNews. Besides, all personal settings for the majority of desktop applications (the whole KDE stack, office, multimedia, etc) go into /home/user/.kde, /home/user/.openoffice, etc. It isn’t much different from how Windows does it; at least XP stores settings in several places. C:\Program Files is just one, then you have C:\Documents and Settings with 1) All Users 2) Default User (hidden) 3) Username (that’s 4 places).
That’s complete nonsense. All modern desktop Linux distributions (let’s compare oranges with oranges, shall we?) offer search by name/category/description. In Mandriva you even have everything neatly categorized (Multimedia, KDE, etc), but you can search for RPG and it will bring up all the role-playing games in the repository.
And how common is that, really? Are we talking about “sharing” our application CDs, or your downloaded share/free/crapware executables? I have an application directory for people I install Windows for; the total size of everything I can think of is less than 200 MB. These include freely available (e.g. downloadable from the internet) software – things I consider useful for your average user (Picasa, Google Earth, VLC or SMPlayer, OpenOffice, etc.). Oh, and I forgot to mention the risk of getting some nice virus while you copy application X from your USB stick…
That was like 10 years ago… copy cache??? I don’t even know what you’re talking about. Repositories are auto-configured, and it’s a one-time job (takes 1 minute) if you want more than what’s available by default. “Copy rpm” – again, no idea; you don’t have to copy anything manually on any of the modern distributions (Mandriva/OpenSUSE/Fedora/*buntu). It’s all search-select-click-install.
I just finished setting up an Eeepc for my mom (1000HE) – and installing all the software + updates took 6 hours, including about 20 reboots. I wanted everything set up as a normal user, and it was such a pain in the ass in every regard. Lots of unnecessary icons on her desktop, I had to hunt down their location as Admin, same with the start menu (I wanted something simple for her). Even though as I said, I have a Software folder just for cases like this, for every application I needed (and wanted an up-to-date version of) I had to go on the internet, Google search for the downloads, wade through the individual websites for the latest versions, download them one by one, deal with several different installers, click half a dozen EULAs (either on the website or in the installer) – and you’d like to have the same on Linux?
Were it not such a pain in the ass (and if I had more time right now), I would have installed linux (but setting up Mandriva spring on a USB didn’t work for me). With linux, you write a short list (openoffice, skype, smplayer, google earth, firefox, etc.), select them in the package manager UI, and install, latest & greatest in like 10 minutes…
//With linux, you write a short list (openoffice, skype, smplayer, google earth, firefox, etc.), select them in the package manager UI, and install, latest & greatest in like 10 minutes… //
And then spend two years explaining how to open an Office 2007 document.
Uhm, same as on Windows … just double-click it in the e-mail program (attachment) or file manager and watch it open in OpenOffice.org 3.1.
Or (my preferred method – can’t stand document-centric, I prefer app-centric computing) you start OpenOffice.org, and use CTRL+O or File -> Open and find the file.
You don’t need to know what files have been installed; it’s managed for you. If you do need to know, it’s available in at least Synaptic, and I’m sure elsewhere too. Giving the rpm/deb package to your friend is the wrong thing, because it won’t be kept up to date. All they need is the name; then they can install it from the package manager themselves and it’s kept up to date. If it’s not in the repository, tell them to add the repository it is in. If bandwidth is an issue for a network of many Linux machines, you can use a repository mirror/cache, like Apt-Cacher.
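For the bandwidth point, a sketch of what that looks like on each client machine (the hostname `apt-cacher.lan` and the port are assumptions – substitute whatever box runs your cache): the sources.list entries simply route through the cache instead of hitting the internet mirror directly.

```
# /etc/apt/sources.list on each client: fetch packages via the local
# Apt-Cacher proxy; the cache downloads from the real mirror once and
# serves every other machine from disk
deb http://apt-cacher.lan:3142/ftp.debian.org/debian lenny main
deb http://apt-cacher.lan:3142/security.debian.org/ lenny/updates main
```

After an `apt-get update`, every machine installs and updates as normal, but each package crosses the internet connection only once.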
I miss repositories when stuck on Windows, especially when building open source code. Without package management you’re left manually resolving all the dependencies to the right versions of libs. Argh, it sucks. No, all OSes will have package management in the future. I mean, what is the iStore if not a pretty package manager? Windows will have to catch up some day.
Netbooks with small storage space are finished; nobody wants them. Slimming down Windows 8 (it could never happen with Seven) for something that is already a thing of the past is foolish.
Well to start off:
1. Windows detects all hardware… as far as I remember it has been like that since the XP era: the drivers were there for the current hardware on the market. With Linux it was the same… it worked with current HW. Want an example? The Huawei E220 – no system had drivers for it; you plug it in and install the driver…
2. Package management. Linux has some software in its repositories, but if you use specific things – for example Visual Paradigm, or MagicDraw, or other application frameworks – you still need to go to their site and download the product… just like Windows. So… Windows is fine as it is.
3. Quicker release management? For god’s sake… it would end up like it does with Linux: a release every 6 months, breaking backward compatibility. Windows delivers updates, security fixes and functionality with service packs, without breaking software (at least most of the time).
Just my thoughts and experience.
So, in 1% of the cases, Linux is as hard as Windows. Therefore Windows is just fine?
That makes no sense.
Unless it’s one of the countless SATA, SCSI or RAID cards that XP doesn’t support. Then you have to have a floppy disk with the drivers on it to actually install. Which is quite fun when the computer you are trying to install XP on doesn’t actually have a floppy drive (what with it not being the 90’s and all). So then you have to go to a different computer, download some software and build a custom Windows XP ISO before you can actually install. Doable, but hardly “out of the box”.
Like I said… it detected current hardware, not new hardware. As far as I know, no OS (Linux, Windows, Leopard, Solaris, AIX and so on) knows about the future.
Windows does, through Windows Update. Microsoft adds drivers to it all the time. And technically, adding drivers to the repositories in e.g. Ubuntu is the same thing.
The big difference is that Linux lets me regularly download install CDs with all the latest drivers included for free, while Microsoft does not. Also, most Linux distros offer you a number of ways (like grabbing them over a network) to add drivers during install, while XP only lets you use floppies. The problem isn’t so much that XP doesn’t have drivers, just that they are an epic pain to install.
It seems that everyone forgets that XP is 7 YEARS OLD or more. Try installing Vista or 7 and you have that option too!!!
But it seems that most people here use Ubuntu; the ones that remember how Linux was 7 years ago know you would need to grab the source code and compile it yourself… if you want a taste of that, try Gentoo.
Or if you want to go a bit further back, when KDE was not even in anyone’s plans and GNOME was still in its pre-alpha – to configure X you would need to know the exact specs of your monitor.
But anyway, getting back to the present: yep, Windows doesn’t have a repository for software, but as for drivers, it gets them with Update. Same as Linux.
By the way, yeah, if I change a system board, of course Windows won’t boot… the device path changes. Linux boots because it uses sda or sdb or whatever, while the underlying device path in Windows is a bit more complicated than that. It’s similar to Red Hat when you replace a system board and you lose the network – and why does that happen? Because the eth? interfaces are bound to the MAC address. In Windows it’s the same with the disk device path, and in some other systems too. I would say Unix in general, since they are picky with that.
As for the release cycle again… a 3-year cycle for me is more than enough; at least I’m certain that my software will work for 3 years without updating anything. Whereas with Linux, after 6 months there’s a new release, and if I want to use a new browser and I’m a really dumb user, I have to install the new version or wait for the release of the package for my distribution.
Anyway… argue what you want. I’m stubborn as hell, and sleep seems nice for now.
If I can be pedantic for just a moment…
Gnome was created in response to KDE…so that should probably be the other way around.
http://en.wikipedia.org/wiki/GNOME#History
Well, when was the last update to Windows XP? Oh, just a few days ago… So yes, I agree, your vanilla Windows XP install CD shouldn’t know that much about hardware released 6 years after its inception, but the problem is that a Windows XP SP3 CD is just as crap at recognizing anything – even ethernet cards nowadays – as it was 7 years ago…
Huawei’s 3G dongles have the required Windows drivers stored in their flash rom.
You may still need to install the Motherboard Drivers first for USB, as Windows 7 RC1 didn’t detect my Intel drivers. Thom is right about Ubuntu and printers.
I think the article implies that driver accountability lies with the Operating System and not people digging in driver forums so much.
UNIX and Linux were always great at detecting wired networking cards, but needed ndiswrapper for Windows wireless cards.
OpenBSD managed to rewrite wireless driver code for security flawed version 4.
I love OpenBSD’s code auditing scheme, but Microsoft seems to extract bits of BSD for its subsystems without knowing the base system. Why doesn’t it extend its own network auditing system to programming and clean it up? KISS principle.
I recently changed the motherboard in my computer and Windows refused to boot.
It turns out that Windows can’t properly detect hardware changes – yet linux can.
If you changed chipsets in the board swap, primarily the drive interface (from IDE to SATA, or from ICH7 to SB700), then whoops… you did it wrong. The OS can’t load if it doesn’t have drivers to… well, load itself. It’s pretty easy to do a full board swap in Windows: you just uninstall some drivers, turn the PC off, swap out, boot up, and done. It’s how I’ve upgraded my parents’ PC with the same install of Windows for five years.
Hi,
For some versions of Windows (I’m not too sure if it’s only OEM versions or what), you are only allowed to change 3 pieces of hardware before the OS decides it’s running on a “different” computer (rather than the computer it should be running on), and then refuses to boot because of that. If you change the motherboard, then chances are you changed memory, CPU, hard disk controllers, USB controllers, ethernet, onboard sound, etc (they’re all counted as separate devices), so you went past the “3 changes” limit.
If this is the case, then reinstall Windows from scratch and it’ll probably detect everything.
-Brendan
It doesn’t “refuse to boot”, it just pops up a screen saying you’ve changed a lot of hardware, and it needs to be reactivated. You read a bit of text, make sure you have a network connection, and click a couple buttons. 30 seconds later, you’re back in a working Windows install.
I just went through this with Windows XP on my mom’s computer. Windows would BSOD on normal startup, but worked in Safe Mode. Removed all the non-generic chipset drivers, moved the harddrive into another computer, booted, re-activated, waited for all the new hardware to be detected and drivers installed, and was able to fix the issue.
Then did the process again to move the drive back into the original computer.
I’ve had the opposite happen, I’ve changed the MB in a computer that dual booted Debian Etch and Windows XP, and Etch kernel panicked on the way up, XP trundled along, the screen flashed a few times, and it managed to allow me to login.
After login, it continued to grind, then it said it was finished installing new hardware, and wanted me to reboot. When it was done rebooting, XP ran fine. I had to reinstall Etch.
Odd, considering you say this;
That you didn’t know you could just boot into a liveCD, chroot into the etch install, and reconfigure everything.
What part of Etch crashed? Just kernel panic, X die, what? I’ve switched hardware many times, and have never had such a problem. Generally if it crashes on a motherboard setup after a swap, then it just won’t run on it at all (due to buggy hardware / driver or whatever) The fact that you were able to re-install on it says otherwise though. Could have just been a module option that was set that messed up on the new motherboard.
XP and pretty much all versions of Windows that have any sort of ‘Plug and Play’ have had the issue of either being completely unstable after a major hardware swap, or simply don’t work.
One thing that could have messed up your Debian install was AHCI vs Legacy. AHCI has been kind of a harsh nail for all operating systems (for example, to get AHCI support in Windows at all you have to reinstall to use it, but for the most part it’ll emulate the legacy mode, which Linux doesn’t seem to like too much.)
It kernel panicked on boot, before loading the filesystem – it was pretty much immediate. As it was my home system, it was faster to reinstall than to start down the long road of fixing that error, then the next one, and so forth. I assume it was the new SATA controller that caused the error.
I certainly could have used a live CD, but I wasn’t prepared to deal with it because I expected XP to barf, not Etch.
At work, I don’t run into those types of problems because only a fool would swap a MB from a production box with a new, totally different MB. I administrate the boxes, which means I want them up and working.
“As for keeping third party applications up to date – please don’t implement something along the lines of .deb or .rpm. While these systems have their obvious strong points, they are still flawed and archaic, just like any other popular program installation system. If you want to improve in this area, then do it right.”
Heh *wow* … nice arrogant attitude you have
A serious question to Thom then, what is the right way?
There’s a link in there…?
Apologies… I missed that the second time round when reading from the quote.
Last weekend I finally found the time to try out my Win7 RC1, just to see what it’s like. Not that I’ve used it much, but 2 things stood out for me:
First, and very predictably, the Windows installer still doesn’t have the good manners to keep its paws off the master boot record. At the very least, it could scan for non-FAT and non-NTFS partitions. If it finds some, it should have the grace to *ask* if it may overwrite the MBR, or if it should confine its startup code to its own partition, warning the user he’s responsible for updating his boot manager.
The second is something I’ve noticed before when occasionally using a recent version of Office or IE on older Windows versions as well: I find its font rendering highly unpleasant. This is due to the coloured artefacts that the subpixel rendering leaves around the edges of the characters. Perhaps I’m overly sensitive to those: I have fairly strong glasses which create some kind of “rainbow effect” at big contrasts somewhat removed from the center of my vision, and I really don’t need the font rendering to add to that effect.
I’ve gone through the ClearType wizard several times, and I did manage to mostly get rid of it. I’d say that as a result, the text is still slightly more fuzzy than what I get on my beloved Gnome desktop (with font rendering set to “Best contrast”). Worse though, as soon as colours come into play, the coloured edges re-appear. And since Windows has a knack for using dark-blue text and/or coloured backgrounds for captions, headers etc (like in many configuration screens, but also lots of web pages), there’s still a lot of that going around.
I realize taste in this matter may differ from person to person. But for me, the font rendering in Windows is really grating on my eyes.
Not wanting to only whine, I’ll end on a somewhat more positive note. I’m actually quite impressed with the smoothness of how it runs on my aging machine (Pentium 4 2.6GHz, 1.5 GB, nVidia FX5200). Granted, I haven’t installed background programs like a virus scanner, but it starts quite fast, and seems nicely responsive. I’ve never tried Vista on that machine, but there could well be some truth to the claims Win7 uses less resources than Vista.
1) How to make a drop-dead easy installer. From the first second, Linux is significantly easier than Windows.
2) Configurable 3d effects. Aero has a nice look to it (I actually do really like the frosted glass bit), but without any way to tweak it beyond basic colors, it’s kind of lame.
3) Unified application update system. Probably the single best feature of linux today.
You haven’t installed windows 7 have you?
The only thing you need to do is choose your hard drive and enter your key (which it allows you to skip).
How is that NOT drop dead simple?
Because, like all Windows installs, you then must install all the other drivers – hopefully you’re able to do so without too much bundled crap being installed too. If not, you can either take the bloat, or rip it out manually. Fun fun fun.
Install all the applications you need, from the required discs, downloads, etc etc. If you’re doing it the true closed/windows way, each app might have its own key/copy-protection-thingy. In Linux, all but the problematic closed graphics drivers are automatically there, bloat free and all the software you need is in one place, the repository.
Windows Vista had 8 times the drivers in Windows Update that XP did, and Windows 7 has twice the drivers that Vista did. You have to have some pretty obscure hardware to have to go downloading drivers.
Maybe, but without a system updating those out-of-the-box drivers, you will soon end up downloading drivers again. Plus any “old” hardware that hasn’t passed some kind of cost analysis won’t be supported. Which is why you can find many old devices that can only be made to work with Linux (maybe BSD too), if anything at all.
Old drivers still work with the latest versions of Windows. Most drivers that worked in 2000/XP continue to work in Windows Vista or 7, even though in some cases a new driver interface was introduced.
And old hardware might not even work in Linux; if interest is lost in maintaining the driver, and something changes that breaks that driver, you’re on your own. This has happened to me in the past.
I’ve experienced multiple regressions in Linux with recent hardware. On one of my systems, audio works (partially – via headphone jack only) in RHEL 5.3. On Fedora 10, I hear nothing unless I stop using ALSA and install OSS. Fedora 10 also broke Intel graphics 3D acceleration.
On Windows, I could solve either of these issues by using older drivers. This is not a trivial thing to do in Linux.
The lack of driver interface stability seems to be a Linux problem. I believe Solaris has binary driver compatibility going back quite far as well. Because the driver interface in Linux changes, vendors like Red Hat and Novell have to invest time in backporting drivers from the latest kernel versions to their supported kernel versions.
NO. You won’t have to do that for a while. It doesn’t ask for drivers for hardware it can identify, and being so new, it should support almost all MBs in use at launch.
It’s only the case that you have out-of-the-box support for a small window relative to the OS’s life. On top of that, it will only support hardware that is economically worth supporting at launch, which isn’t the same as what is in use. That’s fine for most people (gamers, businesses, new PC buyers), but not for a build-your-own-PC, upgrade-parts-as-required kind of user.
So you’re making a complaint about something that you know nothing about.
Sorry for the following rant, it’s a raw nerve at the moment.
Most of the real lessons from Linux are really Unix lessons that MS has been willfully ignoring, then having to backtrack on, leaving nasty legacy issues.
* Make mounting universally possible at any folder in any FS. Not just NTFS.
* True universal sym links and hard links. Not just NTFS.
* Use a single file hierarchy, not many different ones (OS object/device one, registry one, each drive ones, and faked explorer’s namespaces semi universal one). Applying KISS to everything to do with files/objects/keys/devices/etc.
* Expose OS functionality via the filesystem so any programming language has access without the need for custom APIs.
* Custom filesystem namespaces so with the above you can create mash up computers of other computers on the network. See Plan9.
* Support for many filesystem formats out of the box. Not just a handful of MS controlled formats.
* Make writing drivers as easy as possible, ideally with access to source for real life drivers which would also encourage reuse (never going to happen for a closed OS).
* Encourage drivers to be invisible, not hulking great monsters that must be installed by the user and can’t (easily) be installed without crappy bloated branded bundled software.
* Include package management so software/drivers can be installed from (and updated from) trusted repositories, instead of users doing one time installs from random, untrustworthy, locations.
* Enable the system to be stripped down to next to nothing, i.e. just the kernel, a few drivers, and a few GUI-less applications.
* Use an X11-style client/server model so applications can be run over the network, making the desktop a seamless mash-up of applications running locally and remotely.
* True POSIX, to join the rest of the OS world. Or at least some API other OSs can agree on. (Win32 is not good.)
* Use a single type of path slash! Two types make the path handling/caching code uglier, and therefore more likely to be buggy.
* Move case insensitivity from the kernel into Explorer. This only needs to be done at the user level, not all the way down. Case insensitivity is a pain because the moment you do stuff with paths you must strip the case but keep a copy with it, etc. It makes the path handling/caching code uglier, and therefore more likely to be buggy.
* Make a temp folder mean a temp folder, i.e. one that gets emptied each boot.
* Use a swap partition, not a fragmentable file.
* On the subject of fragmentation, don’t try and pack the files tightly together, leave them room to grow!
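The “expose OS functionality via the filesystem” item above is easy to illustrate: on Linux, any language that can open a file can query the kernel through /proc, with no custom API. A minimal sketch (Linux-specific; assumes /proc is mounted):

```python
# Query the kernel by reading plain files from /proc -- no custom API,
# just the ability to open and read a file (Linux-specific).

def read_proc(name):
    with open("/proc/" + name) as f:
        return f.read()

# /proc/uptime holds two floats: uptime and idle time, in seconds.
uptime_seconds = float(read_proc("uptime").split()[0])

# /proc/version starts with "Linux version <release> ...".
kernel_version = read_proc("version").split()[2]

print(f"up {uptime_seconds:.0f}s on kernel {kernel_version}")
```

The same trick works from a shell script, Perl, C, whatever: if it can read a file, it can talk to the kernel.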
I’m sure some old dude with a beard and sandals can add much more to the list. 😉
Listen to your elders. What’s most amazing about computers is how much hasn’t changed despite the ever faster hardware. Be very suspicious of the young “genius” who thinks they can ignore all that has gone before.
Personally, I’m not sure MS can do anything with Windows to make it a good (well designed) OS, not without breaking so much compatibility it would be useless. It’s strangled by legacy. That’s what happens when a limited desktop OS for limited hardware is grown into a “real” OS. Linux has the advantage of being a Unix system, having been designed as a “real” OS from the start.
The moment I can, I’m dropping Windows for work like I have for home. I can deal with Linux’s lack of productization.
I feel better now I’ve vented. 🙂
What makes the Unix or Linux way of doing things the right way? I think this is a limited way of thinking about operating systems. Some of your suggestions wouldn’t make Windows actually work any better; it would just make it more like Unix or certain Unix implementations. Others would provide functionality that’s already available via third-party (and often open source) software… not that it wouldn’t be nice to have many of these features.
The lack of the features you list doesn’t really make Windows any less of a “real” OS, anyway – it’s just not Unix-like, and not like Linux. Windows still has the important features of a modern OS: memory protection, preemptive multitasking, multiprocessor/multicore support, IPC, networking, threads, file system support, power management, a graphics system, multi-user support, access controls, etc. Even if you think some of those features are poorly implemented, Windows still has them.
Take a look at OS X: it’s a certified UNIX operating system, but the parts of OS X that make it worth using as a desktop operating system aren’t really from Unix. The windowing system is unique, as are the Cocoa and Carbon APIs. The fact that OS X is based on UNIX means little to me as an end user.
What makes the Unix way (or better, the Plan 9 way) better is that it’s simple. “Keep It Simple, Stupid” is the most important lesson. Yes, Windows has the features of a modern OS shoehorned into it, but that’s not the same as being designed properly in the first place. Windows is far too complex, a big part of which is the legacy of its history, and the lower you develop on it, the more that slaps you in the face. You might not think this affects the everyday user, and I admit the user doesn’t care about the aesthetics of the OS design, but it does make the whole system less flexible, slower, and more unstable, which affects everyone. OS X gains hugely from its UNIX underpinnings, which is why it’s based on UNIX, and the lower the user/developer goes, the more thankful they will be for this.
Windows NT was a modern OS to begin with. It just happens to support the same/similar API as classic Windows, and shares some conventions with DOS (such as drive letters and backslashes in path names.)
Simple is not inherently better. For example, the simple Unix permissions (rwx for user/group/other) is rather limited, and modern Unix operating systems have implemented flexible ACLs in addition to this. Complexity can be advantageous. For example, X11 could be simpler if it did not have network support; but it does, and this network support is an advantage.
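To make the contrast concrete, here is a small Python sketch of the classic nine-bit model: everything chmod can express fits in one integer, which is exactly why ACLs had to be bolted on separately.

```python
import os
import stat
import tempfile

# Create a scratch file and give it the classic Unix mode 0o640:
# owner read/write, group read, others nothing.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o640)

mode = os.stat(path).st_mode
# The whole user/group/other model fits in nine bits of st_mode.
listing = stat.filemode(mode)
print(listing)  # -rw-r-----

os.unlink(path)
```

Anything finer-grained than this (per-user grants, inheritance rules) is outside the model, which is what setfacl-style ACLs add on modern Unixes.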
Can you point to any specific things in recent Windows NT that make it more unstable or slower compared to other operating systems?
Some of you are thinking too critically.
Read Simplicity by Edward de Bono, inventor of the term Lateral Thinking and author of The Six Thinking Hats.
It is worth the time and effort looking for simpler methods that increase value by 500%.
Only use your Socrates heritage with the Black Hat.
If you are looking at just the core kernel, you can argue that. But an OS is not just the kernel, and certainly not just the core of the kernel. You can’t remove the Win9x (and ultimately DOS) legacy from Windows, even if it’s just old interfaces. Drive letters, Win32, registry keys, and a whole bunch of other stuff are inherited.
“As simple as possible, but no simpler.”
There comes a point when something is too simple to do its job properly. I would argue that Linux couldn’t change the file permissions because of its own legacy issues, this time Unix’s.
It’s pretty well documented that writing Windows drivers is harder than on other platforms, and it’s also pretty well documented (at least by MS) that most BSODs come from drivers, not the OS. Please look at the source for a Windows IFS and then look at a filesystem driver on a Unix.
Besides drive letters and backslashes, the only DOS heritage in NT is the virtualization that allows DOS programs to run. This is not a crucial part of NT, and is only present in the 32-bit version.
Win32 originated with Windows NT, before Windows 95 was released. The registry was also in early NT.
I only use NTFS, FAT32, ISO9660/UDF and CIFS on Windows. Most Windows users don’t use anything else either. These filesystems have been stable for me, as have ext3/4 on Linux and HFS+ and FAT32 on OS X.
In the past year, I’ve had more kernel panics in OS X than similar events in Linux or Windows. Most of them were caused by the OS X smbfs client driver.
HW driver crashes are not unique to Windows. I’ve had ATI graphics driver failures on Linux, Windows, and OS X systems; the Linux and OS X ones resulted in a panic, lockup, or loss of the current session, whereas most of the Windows graphics crashes resulted in the GPU driver recovering without an issue. I’ve also experienced lockups in Linux caused by Intel graphics code.
I was on RiscOS only until about ’97, so I didn’t really experience Win3.1 and DOS. I was just guessing what was from where. Win32’s design is based on Win16, though, isn’t it? I bump into Win16 call implementations sometimes when looking at the Wine source, and Win16 and Win32 have so far been pretty similar.
There is a whole separate argument about how right it is that all the r/w formats supported out of the box are MS-controlled (see TomTom & FAT32).
SSHFS is probably the filesystem I explicitly use the most. You must see the advantage of a tmpfs type of filesystem combined with mount-anywhere-in-the-tree?
I’m working on virtual filesystems on Windows right now, and I wish I was on a Unix machine. Dokan is good, but it’s not as good as FUSE, and that’s not Dokan’s fault.
Of course Linux drivers cause problems too. But it’s quite hard to say that complex APIs don’t make for more bugs.
I’ve been using/administrating/developing Windows for about 20 years, and using/administrating Linux for about 10, and I have to say, other than exposing OS functionality through the filesystem, I have never found any real need for most of the stuff in your list with Windows.
I don’t really see the need for directory-mounted FSs (and you can do it with NTFS, so there ya go). The only time I ever use it on Linux is for mapping SMB shares, and that’s because I am lazy and can put it in fstab.
Case sensitivity: don’t need it, never needed it, and it constantly messes normal users up when using Linux. They don’t care enough to want it, or to understand why textfile.txt is different from Textfile.txt.
X11’s network functionality frankly sucks; it’s slow and decrepit, and RDP is much more usable over the internet. Trying to use X over the internet is like watching a PowerPoint presentation. Even VNC is a better solution.
POSIX? Ha ha ha. The standard barely anybody really follows, with so many optional things like FS locations that from distro to distro you can’t be sure where half of your stuff goes.
Windows only uses one kind of slash externally, the “\”; the things that use the “/” are internet and network apps that follow the Unix convention.
As far as package management goes, MS doesn’t control most of the thousands of apps on Windows, so really, how do you expect them to properly build and integrate package management? Especially since, if they were to try, they would probably get smacked down hard for anti-trust.
I like Linux, and I use Debian constantly at home, but that doesn’t mean I want Windows to turn into Linux, I prefer Windows to be Windows, not just another distro.
The main reason why I think your rant is way off base is that the 900,000,000 other people that use Windows shouldn’t need to relearn all their skills just because a handful of Unix/Linux geeks don’t like having to do things a bit differently from what they are used to.
If you are writing cross-platform code, the easiest way is to use /. It works with Win32 syscalls.
Using \, the common escape character, is clumsy and annoying. Sort of like the \r\n line separator.
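A quick illustration of the point, using Python’s pathlib (pure path classes, so nothing touches the disk): Windows path rules accept either separator, while POSIX treats the backslash as an ordinary filename character.

```python
from pathlib import PurePosixPath, PureWindowsPath

# Windows path semantics: / and \ are interchangeable separators.
fwd = PureWindowsPath("C:/Users/me/file.txt")
back = PureWindowsPath(r"C:\Users\me\file.txt")
print(fwd == back)  # True -- two spellings of the same path

# POSIX path semantics: \ is just another character, so the whole
# string collapses into a single path component.
posix = PurePosixPath(r"C:\Users\me\file.txt")
print(len(posix.parts))  # 1
```

This is why sticking to / in cross-platform code is the path of least resistance: it means one spelling and no escape-character headaches.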
You can tell a diehard Windows user by the fact that none of this offends their sensibilities, and they probably don’t mind spaces and unicode characters in file names either.
Diversity is good. It’s also good that the OS juggernaut is not perfect – Open Source could be more of a fringe movement if Posix didn’t attract users by being a refuge of good taste, as opposed to being merely about Freedom.
Diversity is good; that’s why I think that Windows is fine just the way it is, from a functional standpoint, and that’s why I said I didn’t want it turned into Unix. It’s not Unix; it’s different from Unix. It has its problems, but so does every other OS on the planet.
I’m not sure I want Windows to be a Unix, but I want it to at least match its features and simplicity. Right now I just want its ugly ass to disappear; it’s of no interest to me.
I agree wholeheartedly! For the love of god, get rid of the damned C:\ crap and use mount points! Partitions should be invisible to the user, once the system is set up.
For everything from programming to trying to find a file through the command prompt, drive lettering is a pain. Much simpler to use /media/disk or /media/disk-1, or, even more specifically, /media/Windows, /media/Stuff, etc. That’s what I have set up under Linux, and guess what? It actually keeps that info if you name the hard drive, so Ubuntu always mounts it under /media/Stuff when you install! What a novel idea!
Seriously, even typing the \ instead of / is more annoying. Especially on those older keyboards where it is next to the backspace key!
Anyhow, enough of my angry ranting; it’s just that / is standard among ALL operating systems except DOS and Windows. Hell, I think even CP/M, which DOS ripped off, doesn’t use a / or \ at all, just ‘cd a:’, as it shows here http://en.wikipedia.org/wiki/CP/M
You’re forgetting the evolution of the Internet. In 1994 we all had to get used to forward slashes instead of backslashes, and to case sensitivity, when entering Web addresses. We still do today.
Unless there is an accessibility extension for Firefox etc. that translates MS Format into Unix Format.
Do users still get confused by this issue?
Quite a bit.
Then you’re missing out. You haven’t got friends and family with FAT32 still for C:? You haven’t got friends and family you are trying to get to separate programs and data for backup? Learning the Unix way has opened my eyes to the fact that “Documents and Settings” should be a separate partition mounted there. Now, writing filesystems, I notice that each Windows filesystem needs support for sub-mounting built into it. If this weren’t the case, we could sort out friends and family by mounting the shiny new hard drive at “Documents and Settings”. Next Windows update/install, we could keep their data without blinking. The ability to mount at any folder is incredibly useful. That’s why Windows fakes it with the namespace system (it’s fake because the paths don’t exist; it’s just a GUI thing).
This can be hidden from the user in the GUI. It’s a GUI thing, not an every-level-of-the-system thing.
Try X forwarding over ssh some time. For whole desktops, sure, VNC is better, but not for single applications. I frequently run Firefox on the desktop with X forwarding to the craptop, and it’s faster than Firefox running entirely on the craptop. Sounds like you haven’t really played with X.
I think you are confused by NT POSIX. You should read:
http://www.tuxdeluxe.org/node/296
POSIX done right works just fine; that’s why it’s in such wide use on almost every OS bar MS’s, and there are multiple POSIX interface wrappers for Windows. (A DLL of one could well be sitting on your machine somewhere right now for ported Unix apps.)
CreateFile can be called with either slash type. Some things work with both, some only with \.
They’d have a package management system (in their case probably including a payment system) with their own applications in it, but with an easy way of adding 3rd-party repositories. For instance, Adobe’s repositories would come with it. Easy.
I think you clocked me wrong. I’ve gone.
RiscOS -> Windows -> Linux
I program Windows for a living. I know the platform well.
As for the millions of users, they are GUI users; for them the system only differs as much as the GUI does. Besides, they cope, mostly, with new versions of Windows.
Create a separate partition for the OS, and one for data. Use GPO/document properties to point user documents and settings there, and you’re done. Not much different than mounting a filesystem on a directory, from a functional standpoint, and you get the same effect.
Doesn’t help them learn the proper Linux way as users, does it? The parent post wanted Windows to adopt the conventions of Unix. Hiding case is not the answer to either his wants or anybody else’s.
I used to use ssh tunneling all the time, but it was still slow as hell over the internet, which is where it is most useful. Face it, X sucks, and there isn’t much you can say to change that fact. I tend to use FreeNX; it’s faster and easier to use. Much easier for non-techies too, once it’s set up.
Right… and everybody does it right, all the time. Sorry, if that were the case, there would be a lot more compatibility between *nixes.
I said externally. Say it with me now: EXTERNALLY. Normal Windows users don’t have much need to be calling CreateFile now, do they?
Regardless of whether it’s technically possible, even if they got their competitors to use their wonderful package management system, they couldn’t exert one ounce of control, or they would get sued out of existence by anti-trust regulators from the EU, South Korea, and Japan. Why would they bother?
But those are not the grand changes the op was talking about, so I’ll leave it at that.
If you had mounting, it could be completely invisible to the “normal” user. It could even be the default install option to use different partitions, perhaps even different filesystems. You know, a versioning filesystem for the user data or something.
No, that is what I said right from the beginning: “Move case insensitivity from kernel into explorer”.
I’ve not used X over the internet, only locally, where it’s been very useful. Over the internet I’m more likely to use VNC anyway, so I can show what I’m doing and don’t have to do it again.
I’m a programmer, and it’s as a programmer that I was complaining; I don’t expect users to give a stuff or even notice.
It’s worth bothering for keeping everything up to date, knowing you are installing from a trusted source, and having a system to resolve dependencies and conflicts: all the other reasons they are so common and normal users get so worked up about the iStore.
The things I listed that Windows could learn from Unix were from a programmer’s perspective. Normal users need not notice. If I was unclear about that, sorry.
Having used Windows 7 since its RC, I can safely say that most of this article is inaccurate or inane. It’s odd – there are a lot of articles on the net lately that are just blatantly wrong.
1. More frequent release cycles
Microsoft is supporting a 20+ year ecosystem. That means, in short, there is a lot of software to test for any change. Should releases be more frequent than Vista’s (5 years)? Yes. Should they be every 6 months? Not necessarily; not by a long stretch. The amount of software built on a Microsoft OS is mind-blowing compared to Linux.
2. Sane release versioning.
Most users are not confused by this.
3. Online OS upgrades.
Fair enough. Windows and upgrades are not friends. They would have to do both – make upgrades work better, and allow for this option online.
4. Better Web app integration.
Web apps are pretty silly. There’s a million ways to transfer information. I may have to take a pass on this, I don’t really use a lot of the social website stuff.
5. Support open development environments.
I don’t understand why a for profit company needs to do this.
6. Slim down for the mobile world.
They have already done this. In particular, this statement:
And with Windows Vista being the resource hog that it was, Windows 7 has a big task ahead to match the nimble Linux.
is incredibly inaccurate. Microsoft noticed that Vista sucked and was a pig, and Win 7 uses a 400 MB footprint to boot (default services) instead of Vista’s 700 MB.
7. Better device support.
Windows has the best driver support out there.
I’m all for comparing operating systems, but this article wasn’t Nobel laureate material.
Morglum
> Windows has the best driver support out there.
That is very debatable. It depends greatly on what hardware we are talking about. If you look at the total number of devices supported, of all ages, Linux dwarfs any OS ever made. If you have some old device, chances are it will work with the latest Linux but not the latest Windows.
http://www.linuxfoundation.org/collaborate/publications/linux-drive…
But if the latest device in a new desktop machine doesn’t support Linux, well, yes, there is a problem. But increasingly less so.
It also depends on how you define “best driver support.” Is it the overall number of hardware devices that are supported, or the completeness of the support for individual devices? If it’s the former, then Linux wins easily – if it’s the latter, then Windows probably gets the crown (if only by virtue of Windows drivers having been produced by, or in collaboration with, the same company that made the hardware).
I think the most useful real world definition is “If I go out and buy a new blah-blah card at Best Buy, how likely is it that it *won’t* work when I get it home?”. By that I mean that it is the hardware being sold today which matters most, and it is the failure rate and not the success rate which is important. If 10% of the time, the hardware doesn’t work because it is too new, that’s bad. People notice when things *don’t* work, and not when they do.
And if some 2400 baud ISA modem from 1985 works, that’s cool. But who really cares? (Other than Greg Kroah-Hartman, of course, who will gleefully tally it up and brag about the total.)
Should I remind you that their profit is a direct result of having applications and developers of those applications? Even Steve “Monkeydance” Ballmer agrees.
Last time I checked, most netbooks come with 1 GB of RAM, so 400 at start-up is 200 too much.
Nope. Not even close. Windows is supported by most consumer device manufacturers and nothing else.
There is a new man at the wheel of Windows 7’s development, so I’d say that many of the problems are going to be addressed, but not at breakneck speed. Just like Office 2007 and eventually 2010, there will be a logical progression towards a product that is coherent, but it’ll take time due to the size of it.
With that being said, I think there is a lot the Linux world can learn from not only Microsoft but the commercial software world in general; specifically, for programmers to focus on user-oriented issues that exist in their software rather than harping on about XYZ ‘evil corporation’.
As Linus pointed out, “hatred of Microsoft is a disease”.
Yes. Let’s fight the major cause of this disease.
Windows? Or the big-wigs at MS?
I am not quite sure about the release cycle. I think that term has meaning when the cycle is fixed, or takes place at nearly fixed intervals. Those who release distributions are not, in general, the same people who write the software included. So, I think, the distribution creator checks what is available and decides whether it will be in the next release or not.
Microsoft is the sole company behind Windows, so when they have something, they release. They don’t wait for anyone else. It took quite long to release Vista, and, apparently, it was still too early. I have a feeling that Windows 3.11 was just an upgrade (or even a bugfix) for Windows 3.1. The same goes for Win98 and Win95, and WinXP and Win2000. Actually, there are too many releases and too few upgrades, I think. Perhaps some of those “upgrades” should have been available free of charge to users who bought the “base” version, because they fixed problems in previous releases.
1. More frequent upgrades
Ok, so long as they only make MAJOR changes to the OS every few years. I’m kind of the designated ‘computer guy’ for all my friends and family… the last thing I want is a new version of Windows coming out every 6 months to a year and breaking people’s sh*t.
3. Web upgrades
If you’ve used Windows for any length of time, you would know that this is the worst idea in the history of bad ideas. Windows is seemingly designed from the ground up to make bad things happen when you upgrade one version of Windows over another. A clean install of Windows is the only way to go. This is the way it’s been since at least 1993 (as long as I’ve been using it), and I don’t see it changing anytime soon. At the very least, if you’re going to do this kind of upgrade, requiring users to uninstall ANYTHING from Norton should be mandatory, cuz you know that’s gonna crash and burn.
I’m not too impressed by the list. It’s OK but some of the listed items are silly or just don’t make sense.
1. More frequent release cycles. (Good)
2. Sane release versioning. (Definitely)
3. Online OS upgrades. (Would Be Nice)
4. Better Web app integration. (Huh?)
5. Support open development environments. (It’s Getting Better)
6. Slim down for the mobile world. (Bad Comparison)
7. Better device support. (What The Heck?)
Off the top of my head:
1. No fork()-like system call. See:
http://en.wikipedia.org/wiki/Fork_%28operating_system%29
Very useful for multitasking and security, and not present in Windows. Cygwin emulates it using CreateProcess and a lot of hackery, but it’s much slower than on Linux and other UNIX systems.
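For readers who haven’t met it, here is a minimal sketch of the fork() pattern (Unix-only; Python’s os.fork is a thin wrapper over the system call):

```python
import os

# fork() duplicates the calling process. The parent's call returns the
# child's PID; the child's call returns 0. Both continue from here.
pid = os.fork()
if pid == 0:
    # Child: starts with a copy of the parent's memory and open files.
    print(f"child {os.getpid()} doing its work")
    os._exit(0)  # leave immediately, skipping interpreter cleanup
else:
    # Parent: block until the child exits and collect its status.
    _, status = os.waitpid(pid, 0)
    exit_code = os.waitstatus_to_exitcode(status)
    print(f"child {pid} exited with code {exit_code}")
```

The cheap copy-on-write duplication is what makes the classic Unix shell/daemon idiom (fork, then tweak the child, then exec) so natural, and it is exactly what CreateProcess does not give you.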
2. KDE has virtual desktops (also known as virtual workspaces). I never saw a Win32-based virtual desktop solution that worked properly. Maybe it’s better in Vista and Windows 7, but I’m not sure.
3. KDE and E17 have a different wallpaper per virtual desktop.
4. A sane command line. The CMD.EXE shell sucks. cygwin is slow (but still quite usable, I admit), I don’t understand or care to understand PowerShell, and Services-For-UNIX (SFU) has been neglected lately.
5. A sane default console window. CMD.EXE is absolute hate. The open source Console-2 is getting there, but still much more annoying than KDE’s Konsole.
6. Fewer signals (the UNIX kill()/signal() system calls) are supported in Windows.
7. Like the author of the original article, I swear by such package management systems as Mandriva’s urpmi, Debian’s apt-get, and RedHat’s yum. They make software installation and maintenance much easier than the Windows Download->Execute->Confirm Exception->Next->Next->Install dance.
8. One can change to a different user or run a command as a different user in UNIX very easily using “su”, “sudo”, and similar tools. I was told it’s very lacking in Windows.
9. The Windows GUI library is awful and incredibly inconvenient. MFC is not much better. One should note that one can use the LGPLed Qt or wxWidgets (or perhaps Gtk+ or Tk), which are better and will also run mostly natively on X11 and Mac OS X.
Maybe Avalon ( http://en.wikipedia.org/wiki/Windows_Presentation_Foundation ) is better, but I was told it is very complex.
10. I do wish the default Windows distribution was a bit more featured. Recent Windowses ship on DVDs, and I get more functionality from a single Linux CD. You can install a lot of high-quality open-source software on Windows (or non-open-source software), but it’s still time-consuming, as there are no decent pre-installable Windows distributions like there are for Linux, and I still wonder what MS is wasting all that space on.
11. The open-source nature of GNU/Linux and other open-source UNIXes (such as the BSDs and OpenSolaris) allows me to adapt the programs to my needs, fix bugs or pay someone to fix them, study the programs, and redistribute my changes. Recently I fixed a bug in PySolFC by editing its Python source code, and it’s something I’m legally allowed to do.
———————————
I’m not saying Linux is perfect or that Windows does not have any advantages over it. But I still find working and developing on Linux a better experience than I do on Microsoft Windows.
Regards,
— Shlomi Fish.
It’s too bad that 4nt.exe never got purchased by MS and integrated as a cmd.exe replacement. 4dos.exe and 4nt.exe were excellent shell programs for Windows 9x and NT/2K/XP. I used to use 4dos on all my Win9x computers, and 4nt on some of my 2K boxes.
Yes, a proper console window would be nice. Along with proper command-line utilities, and a built-in SSH server. Would make remote administration so much simpler.
There’s the Run As… context menu item for all executables. And a command-line runas.exe app. Would be nice if there was a separate GUI for it like kdesudo/gksu, but it’s certainly usable.
It’d be nice if MS would just go through everything included on the Windows disc and update *everything* to use the same widget set. Just standardise on one already, and *use it*. Win7 is better, but there are still too many little apps and GUIs sprinkled throughout that use ancient widget sets and just stick out.
Amen. I still have an old copy of 4NT on my XP machine; it, combined with the unxutils package, is about the only way I can stand using the command line in Windows.
While runas.exe does exist, its command syntax makes my brain hurt. E.g.
sudo app filename
compared to…
runas /env /user:[email protected] “notepad \”my file.txt\””
I stopped reading when I reached this statement:
“I don’t understand or care to understand PowerShell”
How in hell can you possibly know its level of usability or “saneness” if you refuse to learn it or try it?
It just killed any credibility you had.
Hi!
I’m sorry for mis-speaking about PowerShell. This entry started from some stuff I said on IRC, which I extended before posting it here, but I forgot to remove this part.
PowerShell looks interesting, but I was told it tends to be verbose (and so not very effective as a shell), depends on the .NET run-time, and has not been ported to non-Windows platforms. It was also supposed to be part of Vista, but eventually was excluded (along with Avalon, XAML, and most other promising stuff).
A usable shell is not too critical for me, as I tend to use it only interactively, in which case cygwin should be good enough; and if a script becomes overly complex, I tend to convert it to Perl. You may naturally choose Python, Ruby, or whatever instead.
In any case, my criticism of PowerShell does not detract from the rest of my comment, which contains other faults I find in Windows, unrelated to it. So please take the time to read it.
I agree with items 1-9, with the addition of my own.
10. Do not treat users as criminals. The only people that activation stops are normal users; the determined will always find a way around protection (ask Apple about Hackintoshes).
11. If item 10 is not reasonable, at least provide security updates to the non-activated. This would make the Internet better for everyone. Upgrading IE 6 to 7 should also be included.
12. Actually implement POSIX correctly, fully and without fanfare.
13. Allow the user to configure the gui to allow any window to stay on top (other than Task Manager). Most window managers allow this.
14. Like others have said, move to a case-sensitive file system. While you’re at it, make it not fragment.
15. Create or sponsor some sort of online repository for trusted .NET apps. This doesn’t necessarily have to be run by Microsoft or be a store.
16. Deprecate all of Win32 in favour of modern APIs (like .NET) until no major application uses it, then remove it. Painful, but Win32 needs to die.
17. Put Midori on the fast track to be what replaces Windows 7.
———————–
Not everyone who uses Linux uses it because they’re raving GNU zealots. Some base their decisions on technical merit as their first priority.
Hi,
WARNING: Sarcasm follows!
I agree – Microsoft should create a minimal version of Windows specifically for small/mobile devices. Maybe they could call it Windows CE, or Windows Mobile, and maybe one day (after Windows 7 is actually out of beta and officially released) they could release Windows Mobile 7.
I wonder why Microsoft haven’t thought of this already….
-Brendan
– get rid of drive letters
– case sensitive file system
– remove file locking (allow delete/rename of files in use)
– default dev tools (compiler, debugger) installed
– reliable command line tools (cygwin etc. aren’t it)
– transparency (make system understandable by non-specialists)
– “apt-get source”
– non-idiot user community
1. More frequent releases may make the computer enthusiast within us salivate, but those of us who maintain a large number of systems also dread them. A five year release cycle may be acceptable. A two year release cycle means that many institutions will be supporting three or more versions of the operating system. (Linux sort of gets away with short release cycles because most distributions are free and most of the rest are sold with support contracts. Windows does not fulfil the former, and rarely fulfils the latter.)
2. Number versions sequentially, so that people know what they are dealing with. Numbers tend to be selected sequentially because version numbers were designed to change sequentially. Names are rarely selected in dictionary order, and the names are often contrived when they are selected sequentially (Ubuntu, anyone?). I’m sorry, but I’ve reserved far too many neurons to remember the cute names for Mac OS X when those neurons should be reserved for important things.
3. Online updates, never mind entire upgrades, have to be thought out carefully at both the vendor’s and the customer’s end. Vendors may be able to buy copious capacity because they are being paid for the service, but there are still limitations, because they would need far more capacity during the actual release than during the intervening periods. That does one of two things: it either drives up the overall cost, or some customers are prioritized over others. (Well, there is a third option: a free-for-all that diminishes reliability and performance.)
They also have to look at the customer’s end. A customer with a single desktop isn’t much of an issue. A client with dozens or hundreds of desktops is. Managers of large networks will need to conserve bandwidth, which means that upgrades need to be cached on local servers (something most vendors don’t like). It also means defining policies about which machines get upgraded and when. From a more paranoid perspective, these managers may have to be concerned about forced updates/upgrades. Because, let’s face it, sometimes the vendor pushes stuff out that the client isn’t ready for (for either security or marketing reasons).
4. The web services point I actually agree with.
5. Microsoft’s purpose in life is pushing its own products, or the products of vendors that reflect its purposes. How would pushing open source development environments (or even their respective runtime environments) help them? Not at all is the most obvious answer. Indeed, it may even be counterproductive. Just look at Linux: while it may be a developer’s paradise, the proliferation of development environments (both in languages and libraries) makes the entire system a heck of a lot harder to support. Particularly when it comes to one of Microsoft’s strongest points: backwards compatibility.
6. Slimming down is easier said than done. There are basically three ways to slim down: trim the resources, reduce the number of bundled programs, and strip the API. A tonne of disk space and memory could be recovered by simply trimming resources like graphics, sound, and documentation. (How do you think the tiny Linux distributions can be shrunk so much, yet still be extraordinarily capable?) Alas, neither the marketing department nor the customer will support that. Killing bundled applications may help in some cases, but it is a decision that one customer will cheer and another will hate. Worse yet, some parts of the operating system are only accessible via those bundled applications (think of the system management stuff that only a handful of people use). Trimming the API is the absolute worst, though, since it means that one version of the OS will not be compatible with other versions of the OS, even though they are shipping concurrently.
7. Windows could do the driver bit better but, as many have pointed out, it’s difficult to develop drivers for products that don’t exist yet. You can do it in some cases (USB mass storage, PostScript, etc.), but would you really want your high-end video chip to run on generic drivers simply because dedicated ones weren’t available when the OS was released?
I would welcome that, but our company would not. I think they are still at XP SP 2.
It’s not bizarre animal names that MS needs, just ones that we can remember.
To say “Microsoft Windows Vista Home Premium” was quite a mouthful. As Home Premium will be the main home-use operating system, they should just shrink the name to “Windows 7 Home”.
I can’t think of much I’d want from Linux. Userspace filesystems; that’s pretty much it.
Those are product names / SKUs, not OS versions in the numbering sense.
What this item is about is the way the OS has been versioned internally vs externally:
Windows 1.0
Windows 2.0
Windows 3.0
Windows 3.1
Windows 3.11
Windows 95
Windows 98
Windows 98SE
Windows Me
And then the NT family:
Windows NT 3.1
Windows NT 3.5
Windows NT 3.51
Windows NT 4.0
Windows 2000
Windows XP
Windows Vista
Windows 7
There’s no continuity in the names. Some use the real version number, some use the year, some use catchy names. If you saw “7, XP, 2000, Vista, 3.1”, could you put those in the right order? (Well, we here could, but could “random guy on the street”?)
Then there are the internal version numbers where Win95 is 4.0 and WinNT is 4.0, and Win2K is 5.0, then WinXP is 5.1, and Vista is 6.0, but 7 is 6.1.
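To make the mismatch concrete, here is a small illustrative sketch using the internal version numbers the comment above lists. Sorting by the internal version produces an order the marketing names alone would never suggest:

```python
# Marketing names mapped to internal (major, minor) Windows versions,
# as listed in the comment above. Sorting by the internal number shows
# how little the marketing names tell you about release order.
internal_versions = {
    "Windows 95": (4, 0),
    "Windows NT 4.0": (4, 0),
    "Windows 2000": (5, 0),
    "Windows XP": (5, 1),
    "Windows Vista": (6, 0),
    "Windows 7": (6, 1),
}

by_version = sorted(internal_versions, key=internal_versions.get)
print(by_version[-1])  # "Windows 7", which is internally just 6.1
```

Nothing in the string “Windows 7” tells you it sits between 6.0 and some future 7.0, which is exactly the continuity problem being described.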
Yes, all the variations on the SKUs adds to the confusion, but the real issue is below that.
When I moved from XP64 to Vista I made a specific decision _not_ to search for drivers. Everything had to come off the install media or through Windows Update.
My hardware RAID, my built-in sound card, my chipset drivers, FireWire and eSATA, network cards, printers (photo and laser), web cam (and its mic), and both video cards were all supported out of the box or through Windows Update. It has proved very useful for maintenance as well, as I have received regular updates for all that hardware straight through Windows Update without needing to locate or download updates for specific items.
When all was said and done, I ended up downloading two things by hand:
1) My external USB DAC/ADC/recorder, which is a rather specialist piece of equipment.
2) My mouse.
The mouse worked, of course, but didn’t have full button programmability without installing its app.
W7 by all reports is even better, but I think even at the Vista level this is a bit of a spurious complaint. With Linux, the question of “will it work” and the effort needed to make things work are always bigger, and usually have to be tackled very early, to the point of selecting specific hardware before installation.
One of the things that makes me sick under Windows is that it wants to reinstall all drivers if you plug a device into another port. Since XP it’s mostly a behind-the-scenes process where you no longer need to insert the actual disc (Windows keeps copies of the installation files), but it still takes quite some time until the device is ready to use.
A few weeks ago I bought a USB hub and plugged all my USB devices into it (mouse, card reader incl. an SD card, external HDD, game pad). Under Linux everything was ready to use right away. A bit later I booted Vista to play Street Fighter 4 a bit.
New Device Found: USB Hub –> Initializing –> Ready to Use.
New Device Found: Mass Storage Device –> Initializing –> Ready to Use.
….
You get the idea.
Overall it took Vista roughly 2 minutes until I could even move the mouse cursor!!! WTF!?!?!
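For contrast, Linux (via udev) identifies USB devices by attributes such as vendor and product ID rather than by the physical port path, which is why replugging a device into a different port or hub triggers no reinstallation. A minimal illustrative rule; the filename, IDs, and symlink name are all made up for the example:

```
# /etc/udev/rules.d/99-gamepad.rules  (hypothetical example)
# Match on the device's vendor/product ID, not on which port it is
# plugged into, so the same rule fires wherever the pad is attached.
SUBSYSTEM=="input", ATTRS{idVendor}=="054c", ATTRS{idProduct}=="0268", SYMLINK+="input/gamepad"
```

Because the match is on the device’s own identity, moving it behind a hub changes nothing from the system’s point of view.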
“Grace Under Pressure”
I don’t mean continuing to smile while being sat on. I mean the same problem I’ve had with every version of desktop Windows I’ve ever used, including Windows 7 RC.
It’s this. I install Windows and it runs fine. I install a few programs and stuff and Windows still runs fine. But two or three months later, by which time I’ve installed quite a few programs and more stuff, Windows is going clunky on me. The logs are starting to fill up with worrying messages about failures and no responses, and one or two things are no longer working properly. In the case of Windows 7, sleep and back-up have stopped working. With Vista, it was bluetooth and some multimedia stuff.
I simply don’t get this with Linux. Individual programs may foobar but the underlying OS is very, very resilient. It does not go clunky on me. The only real problems I’ve had in more than eight years of using Linux is a few hard locks with X. But switching from ATI to Nvidia graphics has sorted that.
I don’t claim to have any answers. I’m just pointing this out, though I suspect that Linux’s ruthlessly modular design has a lot to do with it. Microsoft Windows plus Microsoft Office is fine, but go much further than that and you’re on your own. I just don’t have this with desktop Linux.
Install hardware drivers without restarting computer!
>In fact, the Linux kernel ships with more device drivers than any other operating system.
Yeah, quantity vs. quality. Many drivers aren’t worth mentioning.
“As long as they don’t start using silly alliterating adjective/animal combinations, I’m happy.”
Oh, I totally loved that one! I don’t like the way it is done in Ubuntu either.
“Another suggestion is that Microsoft should slim down for the mobile world”
That’s almost impossible. MS doesn’t target the mobile devices market [Windows Mobile and CE … well, that’s a joke]. The only reasonable way of putting Windows on a device without sacrificing all of its power and disk space is to use Embedded XP or another embedded version of Windows, but, for now, creating such an installation takes a whole truckload of cigarettes, 5,000 gallons of coffee, and loads of time. The procedure for preparing Windows Embedded is completely $@#$@ up and far too complicated.
P.S. I don’t care about the wars between Windows and Linux, because there are a bunch of other OSs in the world. I only pointed out some of the obvious things that some folks have to face.