Three years ago I came across an interesting paper written up by a Microsoft employee, Kent Sullivan, on the process and findings of designing the new user interface for Windows 95. The web page has since been taken down – one reason why I’m a bit of a digital hoarder.
It specified some of the common issues experienced with Windows 3.1’s Program Manager shell and looked at the potential of developing a separate shell for ‘beginners’. Admittedly, my first inclination was that this was possibly inspired by Apple’s At Ease program, which was reasonably popular during the System 7 days. I remember At Ease well from my primary school years; it was installed so kids couldn’t mess with the hard disk in the Finder.
So here’s what Kent had to say, verbatim, in his paper titled “The Windows 95 User Interface: A Case Study in Usability Engineering”, so it’s not lost altogether.
However you feel about Windows 95, there’s no denying that its user interface is probably one of the most iconic and well-known user interfaces ever designed and developed. Literally everyone knows it and has used it, and it singlehandedly defined what a personal computer’s UI should work like. It’s incredibly fascinating to read about the thought processes behind its development.
“Literally everyone knows it and has used it”
That’s not even remotely true.
Even if you stretch the meaning so that all Windows versions count (people born in 2000 are 18 now; it’s entirely possible that adults who can legally vote and drink first used Windows at Windows 7).
Consider Africa, where a lot of people have used phones and smartphones as their first and only computing devices.
That aside, this is still fascinating
I remember years ago an article that tried to make the case that it was the attention to UI design and the iterative UX testing that Microsoft did that allowed it to triumph over OS/2 Warp.
I think that’s a bunch of UX nerd talk, personally. The examples they gave were pretty obscure. If people had actually had a chance to compare and use them side by side, I think a lot of people would have been won over by OS/2’s stability and its ability to work with lesser hardware. I had friends whose families had saved up for a new computer and purchased a machine loaded with Windows 3 that turned out to be incompatible with Windows 95, released less than a year later. But these machines worked fine with OS/2. I’m not sure how common that was. Still, the bad UX of Windows 3.1 was a big mover in people wanting to upgrade, while the stability problems of Windows 95 didn’t seem to deter people.
I remember Win95 on a 386SX with 4 MB of RAM. The base OS worked fine – it just didn’t have enough RAM for applications. OS/2 wasn’t really any better; I used it on 4 MB and 8 MB 386s too. The difference was that Win95 had applications and OS/2 didn’t, so Win95 benefited from hardware that the OS/2 ecosystem couldn’t fully exploit. That’s not a statement about the operating system, though.
Maybe it was just me, or my memory has been clouded by time, but I really remember OS/2 flying on a 386 with 4 MB. It had plenty of apps! (Er, well, all the 16-bit Windows apps and the DOS apps!) At the time, that was more than enough.
I think software developers tend to massively underestimate how important UX is to the average user.
As a side story here, a side-by-side comparison was performed at the place I worked in the era there was a multitude of operating systems on the market. Up to that time, there was a bit of a mixture of mostly DOS, Win 3.11 WFW, and VMS (VAX).
The systems compared were AmigaOS, Windows 95, and OS/2 Warp 3.0. For this, a small network environment was set up for each, and a group of selected users explored each one and performed their daily tasks.
I don’t remember much of the details. The executive sponsoring the evaluation program was using an Amiga for non-work-related tasks. Since AmigaOS was not x86-based at the time, it faced a huge hurdle in the evaluation. Nevertheless, it allowed the executive to ask the right questions.
Coming from that evaluation, Windows 95 became the “corporate standard”. And the rest was history in my work place.
Odd (even considering the Amiga-loving executive) that AmigaOS was even considered for evaluation in 1995 or so, coming from a company that was dead by then… (really dead? This calls for the following link! http://www.bbc.com/news/uk-england-manchester-28483217 )
OS/2 had TWO major problems. It needed very powerful (for the time) hardware and had almost no native software.
And couldn’t print for sh!t, and installation failed even in some of IBM’s own systems, and the apps could freeze the entire UI.
Half-an-OS indeed. This is what I tell the linuxeros who accuse Windows of allegedly being an inferior product. Windows survived the real desktop OS wars of the 90s: MacOS, OS/2, NextStep, and even Amiga still had a foothold in the early 90s. Windows won because all these other products were either unaffordable or worse.
I don’t think they were worse, just different. Windows had major selling point of compatibility with DOS software – nearly unlimited library. Every company that used/wrote for DOS could trivially run their stuff on Windows, or in worst case start e.g. Norton Commander from autoexec.bat and begin from there.
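To illustrate the trick mentioned above, a typical autoexec.bat of the era might have ended like this (a sketch only; the paths and the Norton Commander install location are illustrative):

```
@ECHO OFF
PATH C:\DOS;C:\NC
SET TEMP=C:\TEMP
REM Drop the user straight into Norton Commander at boot
C:\NC\NC.EXE
```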
MacOS required hardware which almost nobody had – probably because it wasn’t compatible with PC. I remember some crappy Apple network at school which died because 2 or 3 people wanted to copy file between Macs. PCs used cheap BNC network which worked quite well.
NextStep – I never saw any of it anywhere. Maybe it was different in the USA? The hardware was probably too expensive. Maybe if one could have run it on a PC back then?
OS/2 – who needs it when you have Windows? I don’t remember anything about OS/2 in computer magazines back then – except maybe some one-paragraph corners. I tried OS/2 around 2000 and it was kicked out faster than it installed – I couldn’t stand the UI.
Amiga – maybe it was too expensive for common people, but many people still used it. I remember outrageous prices of $5000 for crappy PowerPC “accelerators” which looked like an ugly tumor. Again, for that price you could buy a far more powerful PC. Technology developed so fast at that time that the day after you bought a computer it was obsolete, so it was probably a better idea to buy an upgradeable PC cheaply and upgrade it gradually than to throw $10000 at superior hardware that became obsolete the next month, because a comparable or better PC component would be selling for a fraction of that price.
NeXTSTEP 3.1 ran on i386 and was released in May 1993.
It was a viable alternative to running on m68k NeXT-branded machines, but a bit picky about hardware. Since most development was done in Interface Builder, it was relatively easy to rebuild your applications from prior releases. Fat binaries allowed one to build for multiple architectures simultaneously.
I still miss the “Tear Off” Menus and Scroll Bars on the Left….
Or too late… (BeOS on x86.) But I wonder: why did NextStep on x86 fail? It was out two years before Win95… Or maybe contemporary Win 3.x already had too much momentum? Was it too expensive, so hardly anyone bought it and OEMs didn’t ship PCs with it? Were its system requirements too high, necessitating an expensive PC? Did the lack of DOS compatibility kill it? Some other reason?
Everything I’ve read said that one of Windows 3.x’s greatest competitive advantages was its combination of DOS application compatibility and not being IBM’s attempt to chain everyone back to their proprietary licenses.
Likewise, later versions of Windows leveraged that same effect relative to earlier ones, except that none of the competitors had comparable application compatibility and a company the size of IBM pushing them, so Windows’s dominance was even more secure.
I used NextStep on a 486 Gateway 2000 for many years. On the same machine Win 3.1 and 95 were often unstable and crude tools in comparison. I don’t think there is any single reason for its standalone failure. Depending on the “use case” pick one or more of the following reasons, or add your own.
1) WordPerfect but no MS Word
2) Often didn’t run on bare-minimum hardware. Needed more memory and disk space than the ‘average’ buyer was purchasing. Also required a decent bitmapped display card (e.g. ATI Mach 32) and often a 17-inch monitor for regular use.
3) A bit slow on IDE; much better on SCSI controllers
4) No compatibility with MS-DOS software
5) Lotus Improv but no MS Excel
6) No Rolling Stones Kickoff
7) Bundling of the MS OS with new hardware
8) Steve’s return to (takeover of) Apple.
9) No drivers for many devices, especially a) those where the device had limited “intelligence or controller” and the PC did all the computing, and/or b) those where the device had a proprietary or evolving interface, requiring the vendor’s support to author a driver.
Hm?
All of that went the way of the Dodo from Windows 8 onwards.
The introduction of Metro/Modern/Tiles (call it what you will) was the beginning of the end for MS in my eyes.
Then the [redacted] Ribbon.
After 20+ years of developing software primarily (40+ in total) for the Windows platform I threw in the towel and retired.
Just my worthless $0.02, so it can be ignored.
I think the reasons Windows 95’s UI was so well received are:
1) Simple language: “Network neighborhood” instead of “network topology map”. “My Computer” instead of “File System explorer”.
2) A button called “Start”. The problem most people had back then when it comes to using a computer was that they didn’t know where to start from. Having a button called “Start” helps in that regard.
3) Each drive has its own root. I know this will annoy linuxeros and Unix people, but the concept of your main hard drive being the “root” and other hard drives becoming subfolders of your main hard drive is bewildering to the average user. You are exploring the directories of your main hard drive, you enter that special directory on your main hard drive, and suddenly you are browsing the files on your USB drive. So, am I still on the main hard drive? Did my USB drive’s files get copied to my main hard drive automatically? That behaviour should either not be the default, or should not be visible to the user and be abstracted away at the UI level.
4) And last but certainly not least: the Windows 95 UI experience was complete. By that I mean that all tasks could be accomplished from the UI. Even complex tasks like device driver installation, and installation of the OS itself, could be done exclusively from the UI, using various wizards that guide the user (wizards implement the “UI actions must lead to closure” design rule), with buttons in visible places that lead the user to the wizard. In that regard, desktop Linux is still behind Windows 95. For example, I tried to install some proprietary WiFi drivers on my Acer Aspire One running Ubuntu. The UI driver tool didn’t work, and I had to drop back to the terminal to run dpkg manually. For most users, the absence of a UI button for a task, or a button that doesn’t work, leads to the impression that the task cannot be done.
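For anyone curious what running “dpkg manually” looks like, here is a sketch of the usual sequence (the package filename is a hypothetical placeholder, not the actual Aspire One driver package):

```shell
# Install a locally downloaded driver package (filename is illustrative)
sudo dpkg -i wireless-driver-example.deb

# dpkg does not resolve dependencies; pull in anything missing afterwards
sudo apt-get install -f
```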
Which brings me to Metro: the Metro UI experience is not complete for laptops and desktops. You constantly have to drop back to the traditional desktop. Despite Metro acting like a cancer on the traditional desktop, i.e. slowly devouring parts of it, it is not a complete experience, which makes for a schizophrenic OS that constantly changes UI styles. Still better than having to run dpkg manually for the average user, though, because the user doesn’t even know something like dpkg exists in that DOS-box-looking thingie called a terminal.
I think your last point, that the Windows 95 user experience was complete, was a major one. Many users at that time “hated” DOS because of the requirement to use batch files and typed-in commands for many tasks.
The fact that the Linux distributions still require the use of the command line interface for many basic system configuration tasks is likely a stumbling block for many.
Oh come on, stop spreading this FUD. Unless you’re doing something unusual, you don’t have to use the CLI at all. And that’s been the real state of things for at least a few years on Ubuntu (you can even install non-free drivers with a click). You use the CLI only if you want to, just like on Windows.
As I previously stated, I had to use the Terminal to install proprietary WiFi drivers in my Acer Aspire One.
Define ‘unusual’. Does that include wanting to install something outside of the default package manager? Or updating the entire system?
Both of those can be done via the GUI (KDE, GNOME, or XFCE, have no experience with other DEs, but should be similar).
Download the .deb package from whatever website you want, and double-click on it. Similar to installing something on Windows via .msi, .exe, or similar installers.
Ubuntu and Debian both include GUI system upgrade tools that appear in the system tray when a new OS release is available. You can also manually run the GUI package tools and do an OS upgrade from there. No different from Windows Update.
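For reference, the command-line equivalents of those GUI tools are the standard Debian/Ubuntu commands (a sketch; the release-upgrade step is Ubuntu-specific):

```shell
# Refresh package lists and apply all pending updates
sudo apt update && sudo apt full-upgrade

# On Ubuntu, move to the next OS release
# (roughly the CLI counterpart of the GUI release prompt)
sudo do-release-upgrade
```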
There are tons of Linux things which can only be configured using the CLI. Stop being in denial about reality. What you’d be better off doing is using a Linux machine for a week or a month and documenting every time you had to access the CLI.
True, but quite sad at the same time. Since this was the situation 10–15 years ago with several distributions, it really shows the backwards step Ubuntu inflicted on Linux usability.
You’re obviously not a Linux user so you probably shouldn’t comment on things that you clearly don’t know much about. Linux distros do not in fact require the command line interface for basic system configuration tasks. Sure, one can use it if one wants but most mainstream Linux distributions have had fully GUI desktops for some time now (in fact a multitude of different choices of desktop environments exist for Linux distros such as Gnome, KDE, Cinnamon, XFce, LXDE, Pantheon and Enlightenment…just to name a few.) System configuration is done on Linux just like it’s done on Windows or Mac, through a control panel or a system configuration applet.
To be fair, it depends on the desktop.
In KDE or GNOME, sure. You’ll probably be able to configure everything graphically.
Under LXDE, you’re likely to need at least one visit to a raw configuration file to get everything just right.
Wrong. Installing WiFi drivers is a basic system configuration task and I had to use dpkg to get it done.
What distro did you use? Most would have a GUI for that, but if you had one that was less mainstream, or a less supported card, you may have entered the realm of “unusual”
WiFi is definitely an outlier in that respect. It and printers by companies like Lexmark are throw-backs to the bad old days of driver support and the only situation I’ve run into with modern Linux where you’re likely to need to leave the GUI for “average person usage” on a major desktop like KDE or GNOME.
So what? Just because you had to manually install a driver doesn’t mean everyone will need to do that. I haven’t had to monkey around with wi-fi drivers for my various laptops on Linux since the early 2000s. In fact Linux’s plug-and-play driver management far exceeds Windows in my opinion.
At no point, during the installation of a WiFi driver on Windows, will you need to open PowerShell and start copying & pasting commands from a post you found on an obscure forum from a link on the fifth page of results of a Google search.
Just sayin’
Try to install e.g. MS Project with a standard license next to MS Office 365, or migrate your Exchange server: welcome to the command line!
(all software being from MS itself)
Migrating an Exchange Server? Are you comparing that to a WiFi driver install?
Of course you are going to use PowerShell; PS was designed for exactly that kind of task. Two years ago I migrated 7 SBS 2011 Standard boxes, and having PS automation was a godsend. No one would want to do that kind of task using a GUI. That’s no work for a common user.
However, trying to reset a wireless network and force it to forget the network (and something else I can’t recall now) will require you dropping to a command prompt and running esoteric “net use” commands. Had to do that on two Windows 7 laptops to get them back online after a failed driver upgrade.
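For what it’s worth, forgetting a saved wireless network on Windows 7 and later is normally done with the `netsh wlan` family rather than `net use`; the commands recalled above were likely along these lines (the profile name is a placeholder):

```shell
REM List the saved wireless profiles (Windows command prompt)
netsh wlan show profiles

REM Forget a saved network ("HomeNetwork" is a hypothetical name)
netsh wlan delete profile name="HomeNetwork"
```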
There are GUI tools for installing .deb packages; gdebi and kdpkg are the two main ones. And simply double-clicking on the .deb file in a GUI file manager will start them automatically.
Sometimes.
Sometimes, though, it’s through visiting a webpage on localhost and some obscure port number, as it is with CUPS and SAMBA.
Sometimes, it is provided through your desktop environment. Sometimes, it isn’t.
Sometimes, it is provided through your distribution’s custom configuration utility, which may or may not interfere with a utility that your DE includes.
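For reference, the “obscure port numbers” mentioned above are at least well-established ones: CUPS serves its web configuration UI on port 631 (the IPP port), and Samba’s old SWAT admin tool historically used port 901 where it was enabled.

```shell
# CUPS web configuration interface (standard IPP port 631)
xdg-open http://localhost:631

# Samba's historical SWAT web admin tool (port 901, where enabled)
xdg-open http://localhost:901
```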
The major distros have used GUI-based configuration tools since the early 2000s, e.g. YaST and Red Carpet.
I’d argue Win95 was a bold step towards UNIX in this respect. Win3.1 exposed each drive uniquely in File Manager and the open/save dialog. Win95 exposed a tree rooted at “Desktop”, and under that “My Computer”, and under that drives. If /mnt were renamed “/My Computer”, the two would look similar. The difference is in Win95 the user saw this hierarchy but developers needed to think in terms of drive roots, whereas in UNIX the developer sees something closer to what the user sees.
Nope. Hierarchy-wise, the files on the CD-ROM drive did NOT appear under the C: drive’s root. The weird thing about the Unix file system is that your main hard drive is “/” and other drives go under “/mnt”, inside the main hard drive. Good luck explaining to the average user why the main hard drive does not have a mount point in mnt but every other drive does, and why mnt sits under the main hard drive hierarchy-wise, which is actually “/”.
Let’s be honest, the whole thing was a giant hack to quickly kludge multi-drive support onto Unix without going through the effort of making the OS support multiple drive roots. The fact that each filesystem is a tree, and hence the natural way to represent them is as a forest, did not occur to these guys, because clever hacks matter more than usability in unixland.
Not everything is /mnt, and what is the problem with “/” being your computer and everything below it being other storage attached to it?
This is insightful. Good work getting inside the inexperienced user’s mind. I remember the first time I had to do a stats lab. It was a long time ago; I used an amber terminal (similar to a VT100). A text screen, no idea what I could do, no onscreen hints as to how to bootstrap my experience. In the end I just followed a recipe from the lecturer with no understanding. I felt lost, and needed to go find a manual for HP-UX (as it was then).
A button labelled ‘Start’ seems stupidly simple, but if you know absolutely nothing you can click it and start looking around.
Reading this paper drives home the importance of iterative UI design, and indeed UI design and usability testing in general. Without having performed the studies they did, they would have never discovered the deficiencies of Windows 3.1’s design and never have been able to come up with something demonstrably better. Rather it would have just been a matter of Developer X or Designer Y insisting one solution is better than another (which is unfortunately still what happens in all too many software projects to this day).
By the way, the “infamous” MS Office ribbon was created using similar methodologies and despite its detractors it’s the demonstrably superior solution for casual users to grasp the complex functionality of Office, even if it’s perhaps not the best solution for every conceivable application.
https://channel9.msdn.com/Events/MIX/MIX08/UX09
Which is what happened with Metro. Microsoft let some hipsters loose to do what they wanted. Even on touch platforms Metro looks ugly, and it was initially full of mysterious gestures, so even if you assume they were fine with burying the desktop to advance a mobile-first agenda, it was still awful.
Reminds me of UX articles I read about touch gestures serving the same role on mobile as the CLI does on desktop.
(Non-discoverable, but high-efficiency once learned.)
It’s sad that this is the state we’re at in 2018 when there is so much more nuanced stuff we’ve known for so long.
For example:
http://uxmovement.com/buttons/why-ok-buttons-in-dialog-boxes-work-b…
(There’s good reason why, to this day, when I’m poking at old OSes for retro-hobbyist reasons, pre-OSX versions of MacOS have a magnetic pull on the rare occasions when I interact with them, despite my having much more reason to spend my hobby time in Windows 3.1x and 9x. Classic MacOS was a BIG pile of UX R&D discoveries merged together.)
But at least it’s not as high as often claimed… https://9p.io/wiki/plan9/mouse_vs._keyboard/index.html
I don’t think the problem with the Ribbon was that it is a bad UI concept; the problem was that it was a different UI concept introduced into the middle of the older toolbar concept, and it caused too much confusion.
No, the problem with the Ribbon is that it takes up too much screen space while showing even fewer options than a regular old-fashioned drop-down menu that worked flawlessly. Now, for the hidden options, you have to click the little arrow in the lower right corner of each section, only to find the old dialog box. An improvement, really?
I think there may be some arguments that there are issues with discoverability, but on reflection the “Click the arrow to show options” is no worse than a nested menu item.
Likewise screen space may have been an issue when they were introduced but it’s hardly an issue these days.
Well, since laptops are still sold with 15-inch 1366×768 screens, it is still an issue. The toolbar icons were 16×16 pixels, which was perhaps a bit too small; 24×24 might have been a compromise. But the Ribbon?
“We heard many stories from educators (and witnessed in the lab) how users caused the computer to run out of RAM by starting multiple copies of a program instead of switching back to the first copy.”
Still see this a lot at the Help Desk I work at. People will have multiple instances of Outlook open, for instance.
That’s misconfiguration on the part of the sysadmin, as Outlook by default doesn’t allow starting multiple copies of itself.
Either that or they are doing it on purpose for multiple Profiles.
http://www.oed.com/view/Entry/109061
The figurative use of the word is literally in the OED.