The IT sector today is a complete mess. End-users rarely understand this, but most insiders reach a point where they realize that things should be different. The problems are numerous, but they all reduce to one basic principle: IT and consumer electronics companies are more interested in money than in helping people solve their problems. Of course companies need to make a profit, and nobody denies that. They should, however, make money by helping people, not by creating more problems for them.
Driving a car versus using a computer
Forget everything about IT for a moment. Go back to your driving lessons. During your first steps, driving a car was full of small details: turning the wheel, hitting the brakes, using the clutch and so on. After a while, however, as you accumulated experience, all these things became minor. An experienced driver thinks at a much higher level: “I use the car for transport”. A driver enters a car thinking that he/she wants to get from point A (e.g. home) to point B (e.g. work). Using the car is a temporary low-level task which fulfills the high-level need of transportation.
Now come back to computers. Think of the user. If you are reading this, you may be more experienced than the user I am talking about. Think of your grandmother or aunt who uses a computer for basic things (email/surfing/word processing). This is the type of user I mean. Such a user always boots a computer for a high-level task. “I will prepare my presentation for tomorrow”. “I will search the Internet for information on the Roman Empire”. “I will write a mail to my cousin”. No user boots a computer in order to partition a hard disk or download new security updates. Unfortunately, every user soon realizes that things are not so simple and that in order to reach these high-level goals there are many boring details that need to be dealt with first.
Thus the user is forced to learn basic things like screen resolutions, folders and files, formatting floppies, installing programs, menus, double clicking and a bunch of other details that (guess what) nobody cares about! Even simple actions like opening a document in an application are more complex than they seem. One has to remember the exact location of the file (its position in the file system) in order to retrieve it. Search functions and “recently opened” lists partially solve this, but the problem never disappears. Imagine a user who inserts a DVD full of digital images in the drive and wants to show a specific image to his/her friends. The search function will be slow (4.5GB holds many images), and the “recently opened” list doesn’t apply (since the DVD has never been inserted before). So the user will spend several minutes finding the requested image manually. If the user is smart enough, the descriptions will be hard-coded in the names (John-and-Mary-beach.jpg) or the folders will have some meaning (d:\holiday\beach\whatever.jpg). In most cases, however, the images will have cryptic names like DSC06458.JPG and the directories will show just the model of the camera. The user ends up manually searching all the images, previewing them in one of the zillions of programs that exist for this kind of boring operation.
Users think in high levels
The situation I just described is totally unacceptable. Users don’t have to be tortured like this. It is true that graphical user interfaces (GUIs) have improved computer usability, but this is certainly not the end of the story. Additionally, all these flame wars between Linux zealots and Windows fanatics are completely useless. I hope that after reading this article you will agree with me that all operating systems are inefficient. And that includes the Apple stuff too. Just think of the amount of information a naive user must digest before finishing any work. I don’t mean the usual suspects (defragging? drivers? DivX codecs?) but more basic things: the whole concept of different programs, and of windows that need to be resized and moved (isn’t this the job of the window manager?). The whole interface is a mess. The WIMP interface has been criticized long before this article and will be in the future. Ask the users themselves. Most of the time they will surprise you. The classic one is “why do I need a thing called Nero to write a CD and can’t just drag and drop the files onto the CD as I do with the floppy?”. Yes, I know that Windows XP does this, but the process still takes two steps (1. drop files, 2. write CD). This is something completely strange to the naive user. (Other solutions which involve different CD-writing technologies are unknown to the general public.)
If you write down all the complaints from casual users, most of them follow two principles.
1) Users work with a high level goal in mind. Every task which is not directly related to this goal is a distraction which needs to go out of the way as soon as possible.
2) Users expect computers to be intelligent and to make decisions in the background for them. They get very frustrated when they realize (as they become experienced) that computers are stupid machines that need to be told what to do.
I haven’t said anything really new at this point. Most readers who know anything about human-computer interaction or have programmed GUIs know all this. The usual answer for the situation is that computers are complex machines, blah blah, they are not a VCR, blah blah, or a washing machine, blah blah. Basically, that since computers have multiple roles, their interface cannot be too simple, and so on and so on. Well, I disagree.
Shifting the workload to the computer
Nobody expects computers to act like a washing machine or require zero experience. After all, if you want to drive a car legally you need to have a driving license, which is obtained after driving lessons. The car, however, makes some decisions behind your back. For example, the onboard embedded computer automatically controls the fuel injection process. It regulates the fuel flow to the engine in order to minimize harmful emissions and maximize performance. This happens transparently to the user. No driver cares about such low-level decisions. All drivers want to reach their destination safely and quickly.
In a similar manner, computers need to do some things automatically. We don’t have to create the ideal user experience overnight, but we can take small and important steps slowly and gradually. Why have a defragging application on its own? Why doesn’t the computer defrag itself when idle? It is not a technical problem (see most UNIX filesystems). Automation is not something exotic or new. If you spend some time calculating how much time you spend in front of the computer doing actual work versus how much you spend on unneeded management, administration and maintenance, you will be surprised.
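To make this concrete, here is a minimal sketch, in Python, of the kind of self-maintenance loop I have in mind (the job name and thresholds are made up for illustration; nothing here is a real OS facility). Housekeeping runs only when the machine looks idle, so the user never has to think about it:

    import os
    import time

    IDLE_LOAD = 0.2        # below this 1-minute load average, call the machine "idle"
    CHECK_INTERVAL = 300   # seconds between idleness checks

    def defragment_disk():
        # Stand-in for any housekeeping chore (defrag, updates, backups).
        print("defragmenting in the background...")

    def maintenance_loop():
        """Run housekeeping chores only while the machine is otherwise idle."""
        while True:
            load_1min, _, _ = os.getloadavg()  # Unix-only load averages
            if load_1min < IDLE_LOAD:
                defragment_disk()
            time.sleep(CHECK_INTERVAL)

    if __name__ == "__main__":
        maintenance_loop()

The point is not the specific chore but the shape: the decision to run it belongs to the machine, not to a menu item the user must remember.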
Why do we have a save function in Word 2003? The same function existed 9 years ago in Word 6. Thousands of users have lost their documents during blackouts. Thousands more will lose their work in the future. Saving can be automatic. I am not talking about partial solutions (like Vi and Emacs offer) which protect the user from losing work. I am talking about the whole idea of saving. Why torture the user with the save function at all? The application should save the document at all times, keeping different versions and revisions. The whole .doc file should contain all user actions on the document (think CVS in a single file). Opening the file would then be a simple question: open the latest version, or edit the version of a specific date/time. The Word application should not have a save menu/button anywhere on the interface. The user doesn’t care about this. (OK, OK, maybe a “save as…” which just relocates the document file, but you get the idea.)
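For the curious, the versioned-document idea can be sketched in a few lines of Python (the JSON-on-disk format is purely illustrative and has nothing to do with real .doc internals). Every edit is persisted immediately as a new revision in a single file, and opening the document means asking either for the latest version or for the version as of a given moment:

    import json
    import time

    class VersionedDocument:
        """Save-less editing: every change becomes a revision in one file."""

        def __init__(self, path):
            self.path = path
            try:
                with open(path) as f:
                    self.revisions = json.load(f)
            except FileNotFoundError:
                self.revisions = []

        def edit(self, new_text):
            # Persist immediately; there is no Save button anywhere.
            self.revisions.append({"time": time.time(), "text": new_text})
            with open(self.path, "w") as f:
                json.dump(self.revisions, f)

        def latest(self):
            return self.revisions[-1]["text"] if self.revisions else ""

        def as_of(self, timestamp):
            # "Edit the version of a specific date/time."
            older = [r for r in self.revisions if r["time"] <= timestamp]
            return older[-1]["text"] if older else ""

A real implementation would store deltas rather than full snapshots, but the user-visible contract is the point: nothing is ever lost, and nobody ever clicks Save.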
Making money by adding complexity
Now we reach the key point of this article. The situation is horrible simply because the companies behind the scenes are greedy. I am not talking about Microsoft (only). I mean the mafia of software and hardware companies which act as they see fit. Money, money and more money. Helping the user is an afterthought. What really pushed me to write this article is the new digital photography era.
It is no secret that with the boom of digital cameras a lot of people bought computers in order to edit and store their photos. The simplest approach (from the user’s point of view) is to have some kind of “disk” where photos are stored. Digital cameras write this “disk” and then computers can read this “disk” in order to edit the photos. The “disk” is universally accepted. Joe User can take his “disk” to Jenny User and insert it into her computer. No fuss, no problem. Now take a look at the real situation. We have CompactFlash, SD, SmartMedia, Memory Stick and so on. Each format is supported by different companies. Why, oh why? Why make the life of the user a living hell? Why make money from all the adapters that have flooded the market? Why do I have to buy a “4 in 1” card reader? Why?
This is a classic situation which shows that things are organized around companies and not around people. Coming back to computer interfaces, the situation is similar. Each operating system is just a platform. Each company creates different applications which have a different purpose. There are thousands of applications and thousands of file formats. The user needs to find the correct application for his high-level task. Sometimes two or three applications are needed for one high-level task. The whole IT sector is centered around companies and not around the end users. Things work so that companies make money, while in reality users get little work done.
Google mail: one small step …
The “high level computing” dream is not hard to achieve. We can reach it with simple steps which make the user suffer less and give more work to the computer. The latest example is Google’s email service (Gmail).
Gmail offers 1GB of storage. Everyone is impressed by this number. Some people have already created several utilities for accessing this space remotely (a 1GB Internet drive). This, however, is not the important news. The side effects are more critical for the users. By giving away 1GB and encouraging users not to delete emails but to archive them instead, the end users have one less constraint. Think Aunt Tillie. No more “you have blah, blah space left in your mail account”. No more “I have to free some space in my email account”. Mail management with Gmail moves one level higher: “I send and I get emails”. There is no “I delete emails” in the picture, nor “I monitor my account space”. One less problem for Aunt Tillie. This is the “high level computing” I am talking about!
The same approach can be applied to user interfaces, consumer electronics, compression algorithms, image formats (do we need all of them?) and most other areas of IT and computing in general. Think the user first!
I could write a load more about autonomous computing, the amount of money companies make from technical support, or even several other interesting ideas that have recently appeared (ratpoison, ion and friends), but this is just a simple article and nothing more. Food for your brain…
About the author
Kapelonis Kostis is a computer science graduate. He believes that all operating systems suck and envisions a day when computers work like those in the movies. They are fast, simple and easy to use. They are user-centric and they help users get on with their lives rather than wasting time on trivial details and low-level decisions.
Do you really think Aunt Tillie is going to have an easier time with revision control management than with saving a file? What if she has to merge changes or back out changes? People understand saving documents.
On the flip side, some GNOME devs have been talking about automatic saving, but I still think this can be a bad idea. Sometimes I’ll open a document, make some changes, and then decide to scrap them by closing and not saving. With automatic saving, this might be difficult to do.
I think there are really two separate issues. The first is preventing loss of work in a crash. To address this, a program should keep a constantly “automatically saved” version in the background. The second is revision control, but unless it’s very simple, many users won’t be able to use it effectively.
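The first issue needs no revision-control concepts at all. A minimal sketch in Python (the shadow-file naming and the 30-second interval are arbitrary assumptions): a timer periodically writes the unsaved buffer to a recovery file, which the editor offers to restore after a crash and deletes on a clean exit:

    import os
    import threading

    def start_recovery_snapshots(doc_path, get_buffer_text, interval=30.0):
        """Periodically write the unsaved buffer to a shadow file."""
        recovery_path = doc_path + ".recovery"

        def snapshot():
            with open(recovery_path, "w") as f:
                f.write(get_buffer_text())   # current, possibly unsaved, contents
            timer = threading.Timer(interval, snapshot)
            timer.daemon = True              # don't keep the process alive on quit
            timer.start()

        snapshot()
        return recovery_path

    def clean_exit(recovery_path):
        # A normal quit means the shadow copy is no longer needed.
        if os.path.exists(recovery_path):
            os.remove(recovery_path)

The user still decides what the “real” file contains; the shadow copy only exists so a crash never costs them work.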
Man, this discussion is passé. It’s boring too. Some people are apparently still living in the early 90’s.
“Sometimes I’ll open a document, make some changes, and then decide to scrap them by closing and not saving. With automatic saving, this might be difficult to do.”
Well, if the file contains information about changes, then you’ll just have to press the undo button.
But I agree that it could be difficult to persuade users that the new ways of doing things are actually better. They will have to re-learn.
First off, the car analogy is a stinking carcass that should never be brought into this conversation (and, yes, it’s been had before). Secondly, your basis is flawed — and since you’ve brought this awful analogy into the conversation, we’ll just stick with it for a moment.
When people get into their cars, they aren’t just magically transported from point A to point B. You have to put gasoline in the blasted thing. And oil. Then you have to check the brake fluid, transmission fluid, brakes, wipers, tire pressure, mirrors, etc, etc. Failing to do this necessary maintenance on your vehicle will result in a short-lived mode of transportation.
Are you about to tell me that “The situation I just described is totally unacceptable. [Drivers] don’t have to be tortured like this”? No. You’re going to tell me that it’s widely expected that the driver be competent enough to either A) learn how to do some basic car maintenance themselves or B) pay someone who will do it for them. Being too stupid to do either will result in your car dying. But you’re not going to go whining to the car manufacturers and demand that they take “driving a car to a higher level of abstraction”.
Don’t be that evil…
I’d say that even though the car analogy really sucks, as the author has described it, in the end he’s right about a few things. Stop thinking of your super-customized Linux box (in case you have a Linux distribution), or even the beautiful Mac OS X, or (why not) the XP box. Computers are still tools that require far too much training to use. Put it this way: there are too many factors in a computer. Abstracting them all is, with the OSes we have, impossible; a lot of time will pass before a computer is “as simple as a microwave oven”.
The author has that idea in mind; it’s not incorrect, but it’s not yet possible with today’s technology.
First of all, this conversation is not passé; second of all, you’re missing the point.
The point being:
Microsoft (and computer manufacturers) have been selling a complex operating system *as though* it were a washing machine. End users are told their computers are easy to use, and they simply are not. At least cars come with a manual of sorts that explains basic upkeep. What manufacturer adequately informs its users that IE is the chief cause of spyware installation and virus propagation on their computer? And supplies an alternative?
Yes, computers are complex and wonderful things, but until either a) users are taught what they are *actually* in for as opposed to what they are told they will be able to do (generally more complex than the manufacturer lets on) or b) the manufacturer actually creates an OS that is as simple as an appliance, this debate will continue.
Bottom line: Computer manufacturers lie about their products. Period. You can be an elitist snob if you want, but your grandmother was not told she was getting a complex device, she was told she was getting a tool she could “do the email and look at pictures” with.
I couldn’t agree more on this: “IT and consumer electronics companies are interested more about money than helping people solve their problems”
I just read today about the Isle of Man administration converting itself from Novell/Unix/Sun to Win Server 2k3, with this complaint from the administration’s side: “I believe open source solutions are all point solutions. It is really about technology for technology’s sake […] He says, in slightly guarded tones, that Linux people tend to be more technology, than business focused.”
Gee, whatever.
Even though car maintenance is complex, a lot more people are comfortable with at least adding oil and checking tire pressure than they are with, say, burning a CD or downloading music. This is because cars have been around for over 100 years. Just wait until the 100th anniversary of the PC. (Of course, by then we’ll have high-res retinal displays, ‘memory card’ readers built into our skulls, and a permanent wireless Internet-4 connection wired to our brains. (Or at least, I hope we will! (: ))
while the metaphor has some flaws, you nay-sayers are focusing too much on the technicalities (pun _not_ intended) rather than the point.
it can all be boiled down to one thing: automation is not a fancy new idea. let my computer handle the more basic necessities… and let it do it without telling me every damn second. useless dialogs get in the way of productivity.
i don’t need 10 million disk formats. i was told years ago that the market would dictate the leader. instead, i have a 12-in-1 card reader. bull.
“End users are told their computers are easy to use, and they simply are not.”
soooooo true. even the most advanced users run into pitfalls, no matter the operating system.
i, too, want my touchscreen membrane-keyboard sci-fi easy-to-use computer. today’s computer is a cluttered crock.
i have a love/hate relationship with technology.
Why do you need a card reader? Just plug your usb cable into your camera/pda/whatever.
All the devices I’ve used work in Linux w/o any card readers.
Embedded computing could solve the majority of these issues. Personally, I found it easier to operate ROM-based Macs, Amigas, and Ataris than the modern computer. You turned your computer on with the operating system in ROM. You put your programs in and they ran, and so forth. In order to provide the scenario that you desire, we would have to do away with all the options and make computers for specific tasks instead of having all-in-one machines. I believe that we will see the return of the ROM-based machine. Heck, we already have in the mobile market.
Jim
… not all devices support USB/FireWire. Old ones, in fact, did not.
Computers are the most complex tools we use. They’re complex because…well…the tasks we want to accomplish with them are complex.
Even something as common as ‘writing’ a word document. Think of all the things people want to do with it. When people work with a word processor, they’re not just thinking ‘I want to write a document’.
They’re thinking
‘I need a table here’,
‘I want this cell to be red’
‘I want all sentences automatically started with capitals…except this one…I like the small i here’
‘I want a 3 column look here’
‘I want it to spell check…’
…
you get the idea.
By all means, I think the crux of the article is right in that we should be making life nicer for people. But let’s not forget the basic law of engineering: everything is a tradeoff. I like the Google example. Gmail is by far the easiest and sleekest webmail program to use. Yet look at the tradeoffs it has made. Yes, the UI is much simplified…but how easy is it to ‘reply’ to or ‘forward’ a message? Now it’s hidden under ‘other options’, or at the end of the message. (Still IMHO the right decision by Google.) Now I’ve got this new concept called a ‘label’ and ‘starring’ a message.
Sometimes automating things is more harmful than doing them manually. Take spell checking, a very easy candidate for automation. Do I really want it auto-correcting things? What if it chooses the wrong alternative…now I won’t know about it. Do I simplify the spell check to only include fix/skip options…how about add/ignore/ignore all/change/change all…
Computer programs are complex because they are the systems we interact with that have the greatest number of inputs and options and permutations. If we’re worried about Aunt Tillie, perhaps the generic computer is not for her. Maybe she needs a basic system that auto-updates/defrags itself, where you cannot install any applications, and that comes standard with a web browser, a Gmail account, Word, Excel, and a My Documents folder.
Even take Firefox, which I’m using right now. Take a look at its options menu. There’s a lot of crap in there. Yet I just went through it…there’s not much I’d want taken out.
If we’re willing to deal with less flexibility, by all means, let’s Simplify maaaan.
Things of course would be much simpler if we simply made word processing like a virtual typewriter. Or, if we wanted, we could actually use the car analogy: either learn to change the oil yourself or be sure to take it in for servicing. Buy a new computer…warranty only valid if you don’t install new programs, and you take it in for servicing every X months (they will clean out the cache, defrag, wipe out temporary files, adjust program menus to suit you, install new apps/extensions for you…).
No one is arguing that there shouldn’t be complex tools for users who do complex things. Why not have a simple interface for, say, a word processor, as well as one that does everything and the kitchen sink (i.e. MS Word)? While OS X is not perfect, the Apple iLife apps have gone a very long way in terms of design (not just automation) in making things usable by Joe Bagadonuts. Very little in the Windows world even comes close to the simplicity and usefulness of iTunes or iPhoto. Yes, they are simple programs, but advanced users have options (like Photoshop) to suit their own needs as well.
Again, the point is not that computers need to be simplified; it’s that they shouldn’t be sold as simple devices when they are in fact not.
“You can be an elitist snob if you want, but your grandmother was not told she was getting a complex device, she was told she was getting a tool she could “do the email and look at pictures” with.”
Actually, my grandmother and her husband both know more about their computers than a fair number of the technophiles on this list. They’re pioneers in the field, so don’t be so presumptive.
And what is this? 19-freaking-90? OF COURSE you’re buying a complex piece of machinery when you buy a computer. People know this before they even come to terms with the idea of owning one. The reason many other folks DON’T own one is because they know computers are complex and don’t want to deal with it! It’s old news. Quit giving the consumers so little credit. They’re smart. They generally know what they’re getting into — even if they’re still frustrated by the experience. And (horror of horrors!) they KNOW that Microsoft overstates the craftsmanship of its products. It’s called marketing, and every blasted company since the dawn of time has done it.
What does “what is this, 1990” even mean? Have you ever supported someone who bought a computer at Best Buy? Have you ever removed spyware from a computer?
I’m not being presumptive, I was obviously using a generalization to make a point.
I love this kind of tunnel vision where people can’t seem to realize that simply because *they* know something, everyone else should too.
Anyone who has ever supported a PC user (even those with a little knowledge) knows that they are ill-equipped to deal with the OS beyond the absolute basics. It has nothing to do with intelligence; it has to do with what they expect versus what they get, and there is a HUGE disconnect. Even if they do expect something difficult to use, they are not given the proper tools to learn how to use it effectively.
As for misleading marketing, you may think that Microsoft (and AOL and Dell and Gateway) have every right to market as they do, but then you have absolutely no right to bitch when users aren’t “smart” enough to use their computers properly. Again, they are being lied to.
I can certainly understand your frustration, and it only gets worse after scrolling through comments people have already left. I’ve been ranting about this particular topic recently and even started re-evaluating my own endeavors. I believe companies are rarely interested in the well-being of their customers and anti-corporate types only copy or over-complicate the process. Anyone working in the industry for a handful of years sees this first-hand. People are missing the bigger issue behind your analogies.
It’s funny, because my step dad JUST bought a digital camera out of the blue and wants advice/help with it. What can I tell him without sitting him down with some specific set of programs? Plugging these things into the computer inevitably comes up with some disparate driver with its own stupid software, or a built-in wizard, or (even worse) just shows up as a “removable” drive.
Heck, even I wish I had to do less. We’re telling the computer to do things that it should do on its own. And I love it when people say, “Well, have you tried software X or application Y?” That just reinforces the entire problem.
I think the data can remain relatively the same. Data can change as algorithms get smarter, faster, and achieve better compression. The way this data is treated internally needs to be improved, and maybe that will be the catalyst that causes everything else to improve. I can only hope.
#1 The author has some valid points about computer interfaces. Why can’t disk defragging be done automatically in off hours? The same can be said for the whole issue of backup. However, these are very low-level issues that almost everyone would rather not have to deal with. As you move up the work hierarchy, you will run into differences of opinion on what is good automation and what is interference. The multitude of comments here on auto-save are a good example. The same can be said for window management. I change my window layout constantly to meet my needs. An OS that did it automatically would probably get in my way.
#2 The author’s complaints about the lack of standards in flash memory systems are a little more disturbing. And the cure may be worse than the problem. Why so many different form factors for camera memory? Simple: this industry is immature. Manufacturers are trying different ideas in their efforts to build cameras to fit various market segments. In cameras, one size does not suit all users, and one memory form factor does not work well with all camera sizes.
Now imagine what it would be like if in 1998 some standards body had mandated that all cameras must use the SmartMedia standard. High-end camera users would not have the capacities afforded by CompactFlash-based microdrives. What if in 1982 some standards body had decided that all computers must use CP/M as the OS? Or maybe in 1984 that all computers must have two 1.2MB 5.25″ floppy disk drives. Or maybe in 1995 an international standards body said all computers must use Windows 95.
There is one major flaw in this commentary, and that is that the consumers are in charge here.
As long as consumers (companies and people alike) buy this stuff and even ‘demand’ the feature upgrades from the vendors, it shouldn’t be a surprise that the industry looks like this.
The major problem is that people primarily look at the price and what features they get (more is better!). Support and quality are things you encounter only after the sale.
I think there will be a shift where consumers demand higher quality from their vendors. But that will only happen when most people are tired of features. Relax, give it some time, and it will change.
Making things easier is not always the way to go. The best example I have is iPhoto.
I bought an iMac a few months ago. I was very happy with it. I transferred all my digital content (only photos, actually) from my x86 to my iMac (I believe I used my network). Then, I opened iPhoto to order all my photos and pictures. So far, so good.
However.
On my x86, I organized my photos per directory: “Backgrounds”, “Going out, 21-6-2004”, “My Car”, etc. Here comes the problem: iPhoto doesn’t do that! iPhoto creates a database which isn’t based on directories. This means that when I wanted to add “Car Front.jpg” to the “My Car” album, I had to a) add it to the iPhoto album, and b) add it to the “My Car” directory. On my x86, on the other hand, I simply dropped the photo into the right dir and was done.
“But that’s just the way iPhoto works!” I hear you say. “Don’t use iPhoto!” The problem is that if it were up to the author, all programs would behave the way iPhoto (and iTunes) do, leaving me with a mess of photos.
Then some friends of mine from Berlin came over. They took shitloads of photos. We imported those photos using iPhoto. We burnt them onto a CD using iPhoto, only to realize that iPhoto burns a disc which only seems to work when again using iPhoto. Those friends did not own Macs. So we had to find the dir in which iPhoto cryptically stored the photos.
What I’m trying to say is: the author’s suggestions make sense when one uses one application “to do photos”. Well, most people simply don’t. The author’s ideas are nice, but they wreck interoperability.
PS: I replaced the iMac with an iBook a few days ago, btw. And I now simply use Finder + Preview to manage my photos; it’s much, much easier. Finder shows previews, and Preview opens them. Perfect.
“By giving away 1GB and encouraging users not to delete emails but to archive them instead, the end users have one less constraint.”
There seems to be only one obstacle. If I want to save my e-mails, I think I have better search capability on my local computer than remotely on the mail server, as the situation stands today.
Well, companies clearly care about money; that’s why they outsourced. And after everyone is fed up with not being able to understand them, there won’t be a point in having tech support, which will save even more money.
Sorry for not RTFAing
Exactly, Sean. I don’t know anyone who isn’t able to see through marketing.
“Again, the point is not that computers need to be simplified, its that they shouldn’t be sold as simple devices when they are in fact not”
I’d actually say the opposite is true. Too many people think computers, and managing their own computers, are too complex. People do know what they’re getting into. Anyone who has a friend/parent who isn’t computer literate can attest to the fact that they’re very hesitant to try/install/buy anything without someone there. They don’t trust the industry at all. They don’t trust anything to work correctly. Have you ever had someone ask you if their new mouse will be compatible? It’s not something I think about anymore.
The computer companies are actually guilty of under-marketing, IMHO. If people were actually being sold an easy-to-use product, they’d be up in arms at the lack of warranty/support out there. Instead, they actually expect things not to work. They actually expect to do maintenance and repair on their own. Now that’s a sad state of affairs.
I’d say the auto manufacturers have done a far better marketing job. They sell you an item worth tens of thousands of dollars, and sell you on the idea that it’s simple to own.
If Aunt Tillie wants a no-worry solution, she should buy it from a reputable company with a service station close to home…just like a car. And she should demand they configure/install everything she wants under a warranty. And she should pay for that warranty/servicing she wants.
The other complaints, like the different memory cards…are more standards-related. This exists in any industry. Roughly 4 different kinds of digital camera storage: that’s actually pretty good. Not to mention that adapters actually exist for all the types. Ever take a stroll through the auto parts section at an auto store? I’m sure people get confused buying tires. P195/75R14: what on earth do those numbers mean, and how are those compatible with my car? (I had to learn that at one point.) Aaah, screw it, I’ll just take my car in to the auto shop.
I definitely agree that trying to deal with iPhoto (or even iTunes) cross-platform can be trying, but ultimately I think that iTunes and iPhoto are a *better* way of working, and they make the lives of Mac users much easier by taking away the idea of organizing by file and instead letting the user create their own ways of organizing the data on top of the iPhoto or iTunes db. Yes, it does create cross-platform hell, but in the long run, both Windows and OS X are heading in this direction.
Btw, with iPhoto you have the option of exporting the files to your desktop. You could have selected sets from iPhoto, exported them into individually named directories, and burned the CD with Apple’s built-in CD burning capability.
This solution is of course outside of the “simplicity” that iPhoto was meant to instill, but your problem is decidedly outside the needs of an ideal Mac user.
Yes, yes, I know that many people have Macs and PCs; if you do, you should have no problem doing what I just said instead of burning the photos directly from iPhoto.
…who, yes, did expect that his iMac would do things like organize his stuff *for* him, or at least that he would be able to figure out where that stuff went, how to open documents/programs, etc., etc….
And he still has no idea of the distinction between the Internet and his local machine – I just get blank looks whenever he’s trying to “save” a Web page and mail it to someone, and I tell him just to copy-and-paste the link instead. The link? Yah, that thing in the address bar….
Occasionally when he’s venting his frustration, I ask him what crystal radio sets were like when he was a kid – fiddly damned things, weren’t they? And now you just mash a button and your station starts playing. Though Dad almost certainly won’t get to see it, the push-a-button-and-get-what-you-want-instantly phase of computing is coming, I think.
I agree with the comment re embedded devices. I’m not at all sure the general-purpose computer will be with us a decade or two from now. More likely game machines will be separate from media controllers will be separate from handheld phone/chat/email/Web devices (that will do word processing via dictation).
I would like to say that complex doesn’t mean it has to be hard. If I do a complex job, it doesn’t mean that I need to have a hard time doing it. It will be hard if I do not have the proper tool. So the point of the article is to improve the tool that we use to accomplish a complex job.
“I’m not being presumptive, I was obviously using a generalization to make a point.”
I KNOW you were making a generalization. MY POINT is that such generalizations in general tend to be undeserved.
“I love this kind of tunnel vision where people can’t seem to realize that simply because *they* know something, everyone else should to. “
Where in any of my posts did I make a claim to superior knowledge? What I did was take exception to the “computers should be like automobiles” parallel that keeps getting dredged up. It’s just not a good analogy. It doesn’t fit. Period.
“but you also have absolutely no right to bitch when users aren’t “smart” enough to use their computer properly. Again, they are being lied to.”
Again, show me where I bitched about someone not knowing as much as I do about computers. I don’t take exception to folks who don’t care to learn about their machine. They happily pay me to go through that pain for them. There’s nothing wrong with that. So once again you’re being presumptive. Or you’re not actually reading what I’ve said before you pop off at the mouth — even worse.
“I agree with the comment re embedded devices. I’m not at all sure the general-purpose computer will be with us a decade or two from now. More likely game machines will be separate from media controllers will be separate from handheld phone/chat/email/Web devices (that will do word processing via dictation).”
I hope so. We’ve let the computer industry get away too long with giving us generic solutions and forcing us to configure our way to a suitable ‘product’.
For sure, some concepts will still need to be learned. Even with the radio, you still need to know the idea of different stations and how to locate them. Presets and scanning can make things easier…but ultimately, you still need to know the concept of a ‘station’. A computer user should know the concept of a directory (at this time) and a ‘file’. The searching and such is getting better with more processing power/effort: Google Desktop Search, WinFS, whatever Linux solutions there are.
It’s a balance between portability, flexibility, performance and usability.
Like all tools, you have to know how to use them and practice good methods. Using a band saw without practicing good safety will lose you a finger. Using a chainsaw improperly can lose you an arm.
Driving a car is a complex task, but you don’t think it is because you’ve learned how to do it. Sure, computer users only care about writing their document, but when I drive, I only care about getting from point A to point B. While driving though, I still have to know how to operate the thing. I need to be instructed on proper use of the gas pedal and brake, I need to know how to monitor when I’m running out of gas and how to fill it up. I need to monitor oil levels and such. And most importantly (and often overlooked), I need to know how to deal with the other drivers and traffic and the road conditions that I’ll need to face while driving. If I don’t know how to do these things, I’ll run into trouble.
So a computer is also a tool, like these others. You need to know how to use it. I don’t know why people keep insisting that computers should be smart enough to know what you want it to do when clueless users often don’t even know what it is they want to do.
Now, I’ve learned to operate a car, and I can do so rather thoughtlessly. I’ve also learned to operate a computer, and can do so rather thoughtlessly.
As for computer hardware/software manufacturers being greedy, never attribute to malice what can be explained by incompetence. Many computer makers would love to make a trouble-free system. You know why? It would sell. Some of the arguments between companies over standards are connected to a desire to own the standard, but some are honest disagreements over “what makes a good standard”. Regardless, whatever it is that sells, companies will be rushing to put that on the market. That’s how greed works.
While the author brings up numerous valid points, I also agree with other respondents who point out that the computer is a multi-task device which of necessity requires more complexity than a single-task device such as an automobile or microwave.
Back in 1980 you could buy a word processor which basically was a glorified typewriter. It was fairly easy to learn how to use that device, and you could edit and correct your spelling mistakes before printing what was basically typewriter output. Current word processing software allows you to do much more with formatting, to the point that you can actually typeset your document before printing it.
But of course, having more capability necessitates more complexity. You have to make decisions about what type of formatting, type faces, margins, gutters, etc before you print the document. You could, of course, create “style” sheets that load all this information automatically, but you still have to create the style sheets beforehand. And even if the software could do this for you, you’d still have to make the choice of which pre-configured template you want to use. And, of course, the computer can do many other things in addition to this!
Yes, there are many “housekeeping” tasks that could be automated and require little if any user input: defragging is a good example; autobackup is another, both of working files and the entire file system. I think we are moving in the right direction and I don’t think software and hardware companies are purposely making things harder for users just to make an extra buck.
Just a couple of quick examples from my own experience. My wife and dad are both typical non-technical users. My wife has a digital camera and takes hundreds of photos which she sends to her family and friends via email. She uses a SuSE 9.0 system which permits her simply to plug in the camera via USB. A camera icon appears on the desktop and she simply copies the photos to her hard drive. She then attaches them to her web-based emails. She has learned to do these things relatively painlessly and seldom has any problems. She has even learned how to resize and rotate photos in GIMP (no small task!).
My dad uses an XP system and was continually complaining about the popup ads while using IE. Even though he has only a dial-up connection, his computer also got infected multiple times by viruses and spyware. This required more than one trip to the local shop to have his system cleaned and restored. Finally, at my prompting, he downloaded and installed Firefox. Now he browses relatively safely and sans popups. He is contemplating a second computer purchase and says he may try installing Linux on one of the two!
Both my wife and my dad need help using their computers for stuff they are not yet familiar with, but they both learn because they want to do things on their own. Yes, many things are more difficult than they could be, and many things are just too difficult. But nothing is impossible.
If we had a dedicated computer for every task we would want a computer to perform, yes each computer would probably be a lot simpler to use. But then we’d have to learn how to use many different computers, each one requiring different input from us. So I don’t think that is the way to go. As I said, I think computer-user interface is moving towards much better interaction. It’s not easy enough yet, but it will get easier over time. I don’t think it will ever be totally trivial, however, Star Trek fantasies notwithstanding.
“Even simple actions like opening a document in an application are more complex than they seem. One has to remember the exact location of the file (its position in the file system) in order to retrieve it. Search functions and “recently opened” lists partially solve this, but the problem never disappears.”
Of course it doesn’t, because the “problem” is inherent to the whole concept of storing things for future retrieval (be they computer information or physical items) – they have to be put somewhere.
Search functions _are_ a solution, because they make the locating operation – inherent to the concept of store and retrieve – quicker (do you have a search function to help you find that book you left lying somewhere around the house?).
“In most cases however, the images will have cryptic names like DSC06458.JPG and the directories will show just the model of the camera. The user ends up manually searching all the images, previewing them in one of the zillions of programs that exist for this kind of boring operation.”
This is why things like “thumbnail view” were put into file managers back in the mid 90s (if not earlier).
You’re not identifying a “problem”, you’re describing aspects of the whole concept of storing and retrieving things. Moreover, you’re not even offering possible solutions at a conceptual level, let alone a technical one.
“The whole concept of different programs […]”
The concept of “programs” driving the UI was largely eliminated back in the mid 80s with the Mac and has been refined ever since. That some (most) people insist on keeping it alive is not the fault of the computer.
MacOS, Windows and most others have had interfaces that stressed the concept of primarily operating on “documents” and “objects” (rather than opening programs and then “loading” files) for quite some time now. Obviously in some contexts this analogy can’t work (eg: Calculators) but in most of the cases it can, it does.
“[…] and windows that need to be resized and moved (isn’t this the job of the window manager?).”
No. Windows can be resized and moved by the user because that is a *good* thing. There might be an argument that window sizing and placement algorithms could use some improvement, but the concept of being allowed to move and resize windows is quite sound.
“Yes I know that windows XP does this but still the process takes two steps (1. drop files, 2. write CD). This is something completely strange for the naive user.”
It’s all in the explanation. The two-step process is attributable to the physical limitations of the technology (CDs are WORM devices). If you explain to the end user that CDs can only be written to once [0] and that because of this they must first “copy” the files to CD and then “confirm” (or “commit”) them, the confusion largely disappears.
In short, no amount of UI improvement or “automation” can circumvent physical limitations.
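The stage-then-commit shape of the XP burning UI can be captured in a few lines. In this Python sketch (the class and method names are invented for illustration), the reason for the second step is visible in the code itself: staging is cheap and reversible, while the burn is not:

    class StagedDiscBurner:
        """Two-step model for write-once media: stage freely, commit once."""

        def __init__(self):
            self.staged = []
            self.burned = False

        def drop(self, filename):
            # Step 1: dropping a file only updates a staging list; reversible.
            if self.burned:
                raise RuntimeError("disc already written; WORM media cannot change")
            self.staged.append(filename)

        def commit(self):
            # Step 2: the irreversible burn; this is the step that needs confirming.
            print("burning to disc:", ", ".join(self.staged))
            self.burned = True

No amount of interface polish can remove commit(); it can only decide how loudly to ask before calling it.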
“Why have a defragging application on its own? Why doesn’t the computer defrag itself when idle? It is not a technical problem (see most UNIX filesystems).”
Firstly, unix filesystems don’t defragment themselves.
Secondly, for almost all users (certainly the ones being identified by this article) defragging is not something they have to worry about. Fragmentation is a vastly overblown “problem” that only has a meaningful impact in a very small list of corner cases.
Thirdly, defragging programs (and procedures, for those platforms that lack specific software tools) exist because in some situations it *is* useful to be able to do.
Again – like your “programs” distinction – this falls into the category of people making things more complex than they need to be, generally because they either don’t know better, or can’t be bothered learning.
For example, the new Start Menu in Windows XP is a vast improvement on the old one in terms of functionality and efficiency. Yet the number of *experienced users* who immediately change it back to the old way because the new way sucks [1] is enormous.
“Automation is not something exotic or new. If you spend some time calculating how much time you spend in front of the computer doing actual work and how much time you spend on unneeded management, administration and maintenance you will be surprised.”
The level of automation in OSes like OS X and Windows is _massive_. In many cases when people like you are complaining about things that should be automated, they often already are and you just don’t want to use the new methods.
“Why do we have a save function in Word 2003?”
Because there needs to be a way for the end user to tell the computer to commit their changes.
“The same function existed 9 years ago in Word 6.”
Your reasoning is stupid. The Bold and Italics features were in Word 9 years ago as well; do they need to be removed too?
“Saving can be automatic. I am not talking about partial solutions (like Vi and Emacs do) which protect the user from losing work. I am talking about the whole idea of saving. Why torture the user with the save function at all? The application should save the document at all times keeping different versions and revisions. The whole .doc file should contain all user actions on the document (think CVS in a single file).”
Word already does this. Revision tracking is only enabled by default in Word 2003, but the facility has been around for a decade or more.
The big UI problem with your scheme is that it becomes frustrating to deal with “confirming” all those changes.
“Opening the file would be a simple question. Open the latest version or edit the version of a specific date/time.”
For someone who professes not to like the separate application paradigm, you seem to be advocating it here.
“The Word application should not have a save menu/button anywhere on the interface. The user doesn’t care about this.”
The purpose of the “Save” function is to tell the machine “all the changes I have made up until this point I am happy with, please keep them”.
“(Ok, ok maybe a “save as…” which just relocates the document file but you get the idea).”
No. The “Save As” functionality should *NOT* be in programs at all. “Save As” is a *file management* operation and should be performed from the shell.
“Why do I have to buy a “4 in 1” card reader? Why?”
Because we operate (ostensibly) in a “free market”. That means there isn’t some central authority deciding which technology should be used. Most people consider this to be a good thing.
I wonder if you are similarly annoyed at being able to pick from a plethora of OSes to put on your computer as well?
“The same approach can be applied to user interface, consumer electronics, compression algorithms, image formats (do we need all of them?) and most other areas of IT and computing in general. Think the user first! I could write a load more about autonomous computing, the amount of money companies get from technical support or even several other interesting ideas that have recently appeared (ratpoison, ion and friends) but this is just a simple article and nothing more. Food for your brain…”
I think you need to work through some of your examples and ideas first. In particular, UI paradigms with regards to “applications” and the problems of getting people to actually use the functionality they already have.
[0] Yes, I know about multisession disks. No, I don’t think it’s relevant to this context.
[1] 9/10 times people can’t say why it sucks, they just say it does.
“Why have a defragging application on its own? Why doesn’t the computer defrag itself when idle?”
Windows XP does this as far as I know. Mac OS X does something similar. Defragging is only really needed for piss-poor filesystems like FAT.
“Why do we have a save function in Word 2003? The same function existed 9 years ago in Word 6.”
Agree on this one.
“The classic one is ‘why do I need a thing called Nero to write a CD and can’t just drag and drop the files on the CD as I do with the floppy?’”
Uhh, limitations in the technology. CD-Rs are write-once, can’t get around that.
“Imagine a user who inserts a DVD full of digital images in the drive and wants to show a specific image to his/her friends. The search function will be slow (4.5GB holds many images), the “recently opened” list doesn’t apply (since the DVD has never been inserted before).”
This is unfair. No way around this either. Come back in 2030* when we have optical discs that can read 4.5GB in a second or so.
* Rule number one of prediction: don’t make predictions!
“This is unfair. No way around this either. Come back in 2030* when we have optical discs that can read 4.5GB in a second or so.”
This makes no difference to his argument, which is not that searches are slow, but that we shouldn’t have to perform searches at all.
“This makes no difference to his argument, which is not that searches are slow, but that we shouldn’t have to perform searches at all.”
How many times have you put your keys down and later forgotten where you had put them? You had to look for them, didn’t you?
The same thing happens with computer files. Only with the computer, you can tell the machine what you’re looking for and to go find it for you. You can only do that IRL if you have children.
I know it’s sad, but not every task is as easy as opening a beer bottle.
“How many times have you put your keys down and later forgotten where you had put them? You had to look for them, didn’t you?
The same thing happens with computer files. Only with the computer, you can tell the machine what you’re looking for and to go find it for you.”
Um, that was kind of the point I was trying to make – that you _can’t_ eliminate the search or organisation processes (only optimise them) as they are inherent to the very concept of storage and retrieval.
Computers used to come with manuals for their upkeep, but since Microsoft’s elimination of commercial competition on the desktop, the quality of the product has seriously slipped and prices are high.
I remember the manuals that came with Amigas, Archimedes machines and Apple Macs.
It is not true that automatic functions and wizards and such are always easy and a good approach.
I have a 40-year-old illustrator friend who breaks into a cold sweat every time he needs to use a computer. We took a course on Dreamweaver. He did not understand it; mousing around in menus was complicated, offering lots of different choices with little info, and he got scared. He wanted to understand, not just do what he was told. I tried to explain and help, but he just stared at me blankly.
So I taught him plain HTML with a simple text editor in 15 minutes. And that was easy for him to understand, because now *he* was in control, not the fscking machine!
I think systems should be more transparent about their inner workings, allowing users to understand.
…but the number of responses about the car analogy not working are absolutely correct. Computers aren’t like cars, just like they aren’t hammers or fishing rods or shotguns. Computers are not specialized devices; they are general purpose tools. How many other pieces of equipment can do your taxes, surf the web, typeset documents, play games, automate machinery, etc?
No offence meant to the author, but although he acknowledges a fairly well known fact (i.e. computers are general-purpose machines and don’t work amazingly well at any one particular task), he doesn’t offer anything substantial to the reader. Anybody who has been in this field for any amount of time already knows this.
The implementation of automatic disk defragging in all operating systems (like Mac OS X 10.3’s albeit limited implementation) would be a good first step though!
Excellent points. I agree with most of them.
The article opens with the claim that computers are too complicated because that’s what makes people money. The arguments are mostly BS, as you rightly pointed out, but I agree with the original premise.
Computers are too complicated, but the problems are much harder to solve than getting rid of a save button. They are always getting better though. Mount Rainier will eventually make floppy-like access to CDs possible. Rapid searching of large bundles of end users’ data will eventually get easier. Coupling of metadata with binary data will get easier. As computers get faster, gain more resources and expand into new roles, they will keep getting better at the things that people use them for.
There is, however, a fundamental problem with software whose income depends on selling upgrades: the upgrades must have new features and look different, or nobody will buy them. That is annoying for users who learned how to run the old versions. F/OSS has the advantage that, if it’s not broken, people usually leave it alone.
“For example, the new Start Menu in Windows XP is a vast improvement on the old one in terms of functionality and efficiency. Yet the number of *experienced users* who immediately change it back to the old way because the new way sucks [1] is enormous.”
It does suck. It sucks because it takes up highly valuable first-tier space with “My Pictures, My Music, My Documents, and Set Program Access and Defaults”, all of which I almost never access (yes, I know you can remove them). It sucks because it violates object permanence. Unlike most people, I purposely organize my start menu and I want certain things in certain places. It sucks because I can’t put only the things I want in it. It has 2 main advantages: the 2 columns and the recently used list. Neither of them is worth giving up the ability to customize the menu.
The answer is to provide UIs for the different types of user, ones that eliminate jargon that a new user wouldn’t know. We need user distinctions such as newbie, computer literate, and power user at the very least.
To some extent this is happening today. I am heartened that my CD burning software has wizards if I want them, although I bypass them and get straight to business. Even software installations allow you to choose the default, or go into greater complexity if you care to.
Complexity needs to be hidden, yet accessible if you want it. I know that in Windows, menu items will disappear if I don’t use them, or menus will only contain what is absolutely required. If I want the full set I can maximize the menu.
Graphical UIs and window managers came about because people didn’t want to learn obscure CLI commands. Perhaps it’s time to invent something new so people don’t have to learn obscure mouse operations.
Perhaps programs ought to provide hooks so that their UIs can be circumvented, so that developers can provide an even simpler UI to newbies. FOSS should consider this, not by directly dictating the easier interface, but by allowing the hooks to be there.
This may be off topic, but part of the problem is that IT makes money from IT. If IT weren’t complex, there would be no IT. Take MS for example. The API gets more and more complex, requiring you to learn new ways of doing things with ever-increasing complexity. They invent a new-fangled technology that really does something you could do all along, but make it more complex, requiring more techos to operate. All OSes get bloat that isn’t really needed, and the techos need to learn more to do the same things they could always do. Just look at MSDN and tell me I need half that crap. It’s like these vendors want things to remain complex so they can continue to say they are innovating whilst making money off the IT ecosystem.
The other problem, which may also be off-topic, is that the OSes are evolving as fast as the hardware, such that the increases in the hardware are swallowed up by the complexity of the software. I feel constrained just keeping up. Google Mail providing a gig of storage is only remarkable as an instance of hardware progressing faster than the software’s ability to render that space essential. Win95 might be considered a POC, but I bet it loads in a second on modern hardware and takes up barely any space in memory. I could still run the same kinds of productivity software as on XP at a fraction of the hardware requirements. It’s like software is becoming more complex just to swallow up the space, and this is part of the complexity problem. Why don’t my freaking windows open in milliseconds after all the money spent on upgrades?
Ok. Scampering back to my happy place now.
It does suck. It sucks because it takes up highly valuable first-tier space with "My Pictures, My Music, My Documents, and Set Program Access and Defaults", all of which I almost never access (yes, I know you can remove them).
The reason you never use them is probably because you have your own directory structures and are set in your ways, as I said. Remember, for the typical ignorant user, their documents *will* be in My Documents, their music *will* be in My Music and their pictures *will* be in My Pictures. Hence that first-tier space isn’t “wasted” because it offers quick and direct access to frequently-(or what should be frequently) accessed objects.
Personally I moved my data file structure under My Documents once I saw the new setup. Accessing my data now is quick and easy.
The “Set Program Access and Defaults” I’ll agree with, though. I suspect it has to be so prominent because of the antitrust case.
It sucks because it violates object permanence.
That’s the nature of recently used lists. Personally I think the recently used programs list is worth keeping the new Start Menu on its own.
Unlike most people, I purposely organize my Start Menu and I want certain things in certain places. It sucks because I can’t put only the things I want in it. It has two main advantages, the two columns and the recently used list. Neither of them is worth giving up the ability to customize the menu.
It’s worth pointing out that the Start Menu *is* meant for “most people” and you may not fit that mold. However, I really suspect the big problem you’re having is trying to use the new system like the old one, not trying to use the new system how it is supposed to be used.
I spent a week or two using the new Start Menu and now I can’t live without it. The recently used programs list, quick access to My Documents, Control Panel, Favourites, My Computer, etc. It’s so much better than the old one going back is frustrating.
Computers are different than microwaves in that a microwave does one thing, and only one thing, so manufacturers can make it do that thing well. Cars are more complex than microwaves, but they still are only meant to allow the user to drive between points.
Computers are meant to provide one thing to the user: computation. Chips are meant to execute instructions, and execute them well (indeed, with advances in pipelining, caches, etc. this has been the case). In this respect, computer hardware manufacturers follow the same principles as microwave/car manufacturers, except that computer manufacturers have seen a shitload more innovation in the last 2 or 3 decades than any other industry has in history.
Operating Systems designed for PC/Desktop use are supposed to abstract away the hardware and give the users some metaphors to work with (files, processes, etc.). More than that, OSes are supposed to make it easy for developers to work with the hardware at a high level, through device drivers, and to do privileged tasks through system calls. It’s true that a lot of users would prefer a higher level of abstraction, but consider how high the abstractions have gotten! The raw hardware of a computer only needs to execute instructions, so the original computer users were programmers (punchcards and all)! Nowadays computer systems are used by artists, architects, newspaper editors, you name it. And they aren’t just making popcorn or driving around town–they are making their livings in the world of computers! As Steve Jobs once said, computers are meant to be the bicycle of the _mind_, and indeed they have been in many respects. OSes do suck, you’re right, but not because they don’t provide enough abstraction. That’s not the OS’s job (see exokernels, etc. for what OSes should really be doing). It’s the job of software developers in userspace to abstract away the computer, and indeed the trend has been in that direction. So don’t worry, author, your day will come. It’s a top priority for many developers to make computers “just work.”
The actual point in this discussion (in my opinion) is that there is still much to be improved upon in modern-day operating systems. True, the role of the operating system has changed somewhat, but it still does not augment human intellect or maximise our information-processing potential. Computers still do not know much about our environment and context, whilst we struggle to update our conceptual models of how our computers are functioning.
I think computers and OSes need more practical ‘intelligence’, and I am not talking about AI or Wizards or hiding functionality from users, just the opposite, I want computers to be more supportive to the task at hand so I can achieve my goals easily.
For example, when I want to upload a graphic image to a website (a forum, etc.) and I select a 40 MB BMP file, the OS / browser could offer me an option to convert the file to a web-savvy format (say, a downscaled JPEG image). That way, nothing of the old functionality is lost, but users who do not know all the details can achieve the same tasks as the pros. The OS should be more sensitive to the context of my task. Of course this is not a perfect example, but it does illustrate the problem: developers focussing on the technical and task-based side of their applications instead of the user’s goals.
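As a rough sketch of what that conversion could look like behind the scenes, using the Pillow imaging library (the file names and size limits here are assumptions, not anything a real browser does today):

    from PIL import Image

    def make_web_friendly(path, max_size=(1024, 768), quality=85):
        """Downscale an image and re-encode it as JPEG for web upload."""
        img = Image.open(path)
        img = img.convert("RGB")      # JPEG has no alpha channel
        img.thumbnail(max_size)       # shrinks in place, keeps aspect ratio
        out_path = path.rsplit(".", 1)[0] + "-web.jpg"
        img.save(out_path, "JPEG", quality=quality)
        return out_path

    # A 40 MB BMP becomes a file small enough for a forum attachment:
    print(make_web_friendly("vacation.bmp"))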
I think a distinction needs to be made between something being “complex” and something being “convoluted.” Computers can do complex tasks but they do them in a convoluted way. The entire GUI system of drag and drop, whilst making simple tasks easier, has made the system even messier when large volumes of data are involved. How much time is spent maximising and minimising and shuffling icons around and reconfiguring screens? How long do programmers now spend on the design of a screen, and how unusable does a program become when too many options are available or the GUI is inconsistent?
How hard is it to provide ‘phone support when trying to describe where to click in screen after screen?
How hard is it for a user to fix a problem when the GUI has abstracted the way a computer works via “desktop” metaphors to the point that they do not understand what breaks?
To many, the CLI is a step backwards, but think about it, taking Linux/Unix as an example:
1. Configuration data is stored in plain text files. They can be cryptic but at least there is a consistent method of configuring the machine. Open the file in the text editor and modify.
2. Use of languages like LaTeX allows for consistency in creating files and allows them to be easily transported from platform to platform, again using the text editor as the interface. Yes, the initial learning curve is steep, but so too is that of learning to drive, since we are using the analogy. Most still find editing HTML easier than using graphical web design programs, especially when troubleshooting code.
3. The lack of abstraction means that the user understands the reason for basic maintenance.
4. It is easier to read instructions down the telephone support line and request a user type these in than try to remember the layout of screen after screen.
5. Piping provides consistent ways of manipulating files (see the sketch after this list).
6. One more input device (the mouse) is eliminated. Despite claims that these have made computing easier, they have not. Many users never even use the right-button and are unaware of its function.
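On point 5, the reason piping composes so well is that any program which reads standard input and writes standard output slots into a pipeline. A trivial Python filter as a sketch (the pipeline in the final comment is hypothetical):

    #!/usr/bin/env python3
    # A poor man's grep: keep only lines containing a given word.
    import sys

    def main():
        word = sys.argv[1] if len(sys.argv) > 1 else ""
        for line in sys.stdin:
            if word in line:
                sys.stdout.write(line)

    if __name__ == "__main__":
        main()

    # Usage (hypothetical):
    #   cat /var/log/messages | python3 filter.py error | sort | uniq -c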
Yes, there are limitations to the CLI but it is grossly under-rated. There is an initial learning curve but it pays far greater dividends than the GUI ever will.
Looking forward to hearing your thoughts.
No, no, no! I’m sorry, but the different-levels-of-complexity-for-different-users dealie is bollocks.
Take a look at the “simplified” Windows XP control panel for example. The old version, although somewhat cluttered, is familiar to a number of people who have used the Microsoft Windows interface over the years. The new version completely screws everyone up. Options are buried, and the whole task of relearning for the people in the intermediate category of users leads to them having to learn a whole new layout or find a way to revert to the previous system. And that’s not to mention having to field tech support calls where the UI could be one of (in your case, three) categories: newbie, computer literate, or power user.
Remember that there are some people who, no matter how explicit the instructions are, will still manage to screw things up or get lost.
Even worse are the labels attached to each of these categories. Would a new user be taunted for being a newbie? Oh, and I’m a software programmer by trade. Should I really be limited to the power user category? I vote for a fourth option for programmers! How about a fifth category for kernel hackers? Or a sixth category for musicians?
In all seriousness, though: is this really a way to make computer usage simpler?
The continuous saving thing is an interesting idea; it doesn’t have to be per-character but if it wrote the doc back to disk every paragraph or so it’d be acceptable – you’d never lose a lot of work, no more than you do at present anyway.
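A minimal sketch of that paragraph-granularity autosave, assuming an imaginary Editor class standing in for a real editor’s buffer; the write-then-rename step is what keeps a crash from ever corrupting the document:

    import os
    import time

    class Editor:
        """Imaginary editor buffer, for illustration only."""
        def __init__(self, path, flush_interval=30.0):
            self.path = path
            self.buffer = ""
            self.flush_interval = flush_interval
            self._last_flush = time.monotonic()

        def type_text(self, text):
            self.buffer += text
            paragraph_done = self.buffer.endswith("\n\n")
            timed_out = time.monotonic() - self._last_flush > self.flush_interval
            if paragraph_done or timed_out:
                self._flush()

        def _flush(self):
            tmp = self.path + ".tmp"
            with open(tmp, "w") as f:
                f.write(self.buffer)
            os.replace(tmp, self.path)   # atomic rename
            self._last_flush = time.monotonic()

    doc = Editor("letter.txt")
    doc.type_text("Dear Fred,\n\n")      # paragraph break triggers a save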
The issue I see with this is users have to break their bad habits – closing and reopening a file to lose unsaved changes. This is a bad habit, especially since it doesn’t work everywhere – works in Word and Excel but not in Access. Obviously Access isn’t used as much, but you certainly figure that one out pretty early on.
The defragging thing – hell yes! Why does Linux defrag automatically but Windows doesn’t? Why doesn’t Microsoft hire a few programmers to make it do that – it can’t be all that difficult!
And not only is it something the user shouldn’t have to worry about, it _is_ significant to performance.
Windows XP doesn’t do it by default – I’ve seen a machine that did do it at startup, but it had been set to do that specifically via some piece of software, and that’s masking the problem in a nasty fashion anyway – startup now takes 10 seconds longer each time.
It’s not just a FAT thing either; NTFS fragments rather nicely too, and I’ve heard HFS/HFS+ do too (can’t comment though).
GMail is an improvement to the webmail problem, but it presents an interesting issue: if grandma uses GMail and exchanges lots of pictures with her knitting group, she may actually fill it up (over some years). Now she has to figure out why it doesn’t work any more and learn something new.
Admittedly it’s not likely to happen much…
One thing I’d like to see lost is the damn Recycle Bin/Trash thing. With all these flash undeletey things about (System Restore etc) it shouldn’t actually be necessary. And frankly it’s a bloody nightmare; you delete some documents because you know your disk will get full. Sooner or later it does anyway, and you discover you’ve got 500MB in your Recycle Bin – so you have to tell it to delete all these files *again*. Not totally intuitive.
The worst offender in this regard seems to be OS X; I don’t know how it managed it, but I saw a 128 MB USB key a while back that was “full”, despite having nearly nothing on it. Some investigation revealed a .Trashed folder or some such, with 100 MB of crap in it. Obviously removing this solved things; but why was it there in the first place? Why wasn’t it overwritten automatically when we tried to copy more files to the key? No doubt “Emptying Trash” would probably have solved it, but that wasn’t made totally obvious.
The shadow files didn’t help either; every single file was accompanied by a hidden counterpart. Which destroys interoperability; on a Windows machine you’ve got twice as many .jpg’s, and Windows thinks they are all valid jpg’s. One more minefield for the casual user to navigate…
On the other hand, I can’t say I’m all for too much of this; “ease of use” generally entails a lack of control. You let the music program “manage” your music, and suddenly it’s moved the files everywhere. The wizard decides you can’t do something, so it won’t let you – despite the fact that you’d really like it to try anyway.
Possibly computers are too hard for the average person to use. Let’s face it, they’re not really getting any easier; we replace CLI’s with GUI’s (yeah, they are easier for most people… but a good CLI should still be available) which makes it easier, but now we’ve got spyware and viruses which make it harder. We can update the software over the interwebby thing, but on the other hand the user now has to worry about updating their software. Maybe it’s like a car; you can get in it and not worry about the processes going on too much, but you have to know what the rev counter means and what to do with the gear lever or you’ll break it.
As an aside on the software update thing; anyone else think that stinks? You can’t make it nice for the users (*especially* not with the disaster Windows makes of it).
Does anyone remember patching their copy of Win 3.11? Neither do I…. Okay, we’ve got an internet to worry about security threats from now, but that just shows bad planning; what’s the point of having an internet to make finding information easier if it’s a trial going near your computer?
> I think systems should be more transparent about their inner workings, allowing users to
> understand.
Very good point and *the* reason why I for one chose not to use Windows.
Mostly the problem is this: the system as it stands now is app-centred, not task-centred.
Haystack is approaching the right idea. Rather than having to think about using a browser to view the web and a mail program to send mail, you should have an "address" area, similar to a browser's, that is always present. Write an email address into this and the main window would show a "new mail" GUI. If you want to write a new document you should be able to have a "New…" entry on a menu; select "document" there and up pops the GUI for that task. On one side of the screen (the bottom is totally the wrong place to put a true task bar, as it has to do more than list the windows that are currently open) you have a task bar that shows two scroll arrows, and every task that you have not saved, or sent, or in a similar way completed is shown by a big box showing the file name and some basic buttons (save and discard changes for documents of different kinds, send or discard for mails, and similar). I can already hear the Mac fanatics shouting, but please…
Basically the task bar should do more than just list the currently active apps. It should be the primary interface for basic interaction with the tasks at hand. No need to look for a save button inside the GUI as it's always on the task entry in the task bar.
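To make the address-area idea concrete, a toy Python sketch; the open_* functions are placeholders for whatever task GUIs the system would pop up:

    import re

    def open_new_mail(addr): print("new-mail GUI for", addr)
    def open_browser(url):   print("browser for", url)
    def open_document(name): print("document GUI for", name)

    def dispatch(entry):
        """One always-present input box; the system picks the task GUI."""
        entry = entry.strip()
        if re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", entry):
            open_new_mail(entry)             # looks like an email address
        elif entry.startswith(("http://", "www.")):
            open_browser(entry)              # looks like a web address
        else:
            open_document(entry)             # fall back to "new document"

    dispatch("cousin@example.com")   # -> new-mail GUI
    dispatch("www.osnews.com")       # -> browser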
Saving once per change is a bad idea. Someone once told me that the imaging app that ships with Windows saves the image every time you change something in it. The problem is that if you save a lossy format many times over, it becomes a mess over time. Just use that app to rotate an image and you will see the problem.
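The degradation is easy to demonstrate with the Pillow imaging library (file name illustrative): every save re-encodes already-degraded pixels, so the artifacts accumulate.

    from PIL import Image

    img = Image.open("photo.jpg").convert("RGB")
    for _generation in range(20):
        img = img.rotate(90, expand=True)        # the "edit"
        img.save("photo.jpg", "JPEG", quality=75)
        img = Image.open("photo.jpg").convert("RGB")
    # After twenty cycles the blockiness is plainly visible. Keeping edits
    # in a lossless master (or rotating the JPEG losslessly) avoids this.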
Right now the computer desktop can be task-centric, but only as long as all the stuff on it is made by the same company that made the OS. Look at how MSN Messenger hooks into Outlook Express and the address book. This can't be done by any other app (at least I have yet to find another IM client that's able to do so). Basically, to get true task-centred work one will have to allow code from different makers to interact so fully that the user can't tell them apart. The OS isn't an application all its own; it's a framework. Any mail client should be able to share addresses with any IM program, basically by hooking into the traffic pipes and telling the OS "hey, I'm a mail app and I need to be able to handle any mail address accessed by any app and share addresses with any known address book". And then an IM program could do a similar thing: "hey, I want to share my addresses with any known address book". The "Set Program Access and Defaults" dialog in Windows is a step in this direction, but sadly very few apps use it (the only entries I can find are from Mozilla, outside of the normal IE apps), and the very fact that it was put in to get Microsoft out of an antitrust jam shows how little interest there was in this from their side. And the number of tasks that can be taken over is limited. Any function the OS has built in should be interchangeable with an outside source, and listed in a single place, so that if I install Nero I can go to that place and tell it what functions I want Nero to handle, and so on.
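A minimal sketch of that registration idea, with entirely hypothetical names: apps announce their capabilities to a shared registry, and requests are routed without the caller knowing (or caring) who the provider is.

    capabilities = {}   # capability name -> list of (app name, handler)

    def announce(app, capability, handler):
        """An app from any vendor declares what it can do."""
        capabilities.setdefault(capability, []).append((app, handler))

    def handle(capability, payload):
        # Route to the user's preferred provider (here: first registered).
        app, handler = capabilities[capability][0]
        print(f"routing {capability!r} to {app}")
        return handler(payload)

    # A mail client and an IM client, from different vendors, plug in alike:
    announce("AnyMail", "mailto", lambda addr: f"compose window for {addr}")
    announce("AnyMail", "address-book", lambda q: ["Mary Jones"])
    announce("SomeIM",  "address-book", lambda q: ["Fred Smith"])

    print(handle("mailto", "fred@example.com"))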
The comparison between a car and a computer is not the best one. A lot of people have already pointed that out.
But why?
A car serves a specific task: transportation
Therefore the main focus of usage is to provide an easy way to do exactly that.
Pretty much the same is true for cell phones. The main purpose of cell phones is (tada): to call someone.
These are all special devices designed around or for that purpose. But if you look at today’s top-of-the-line products you also have huge complexity in cars or cell phones, yet the main function is still the dominant task, and therefore most users have no problem using it.
If you compare these devices to a PC, you have to account for the complexity of uncountable purposes in one device, which was and is a big reason why the PC is so popular.
E.g. you want to play computer games on a PC. You have to make sure you have the right OS, drivers, graphics card and whatnot. But if you extract the game part of a PC into a dedicated device, AKA a game console like the PlayStation or Xbox, then it’s just as simple as plugging the console into the wall, putting a CD/DVD into it, pressing “power on” and starting to play. But that’s about it: no burning CDs, no writing letters, no support for a digicam.
Wasn’t it Bill Gates or somebody else who said years ago that the PC we know today will not be the PC of the future? The PC will be split into smaller devices designed specially for a specific task and therefore not as complex as one device that does it all.
But do people really want that?
I seem to recall there was, sometime back, an HTML editor on Windows which did implement this. I can’t remember its name, but I tried it once.
You had a simplified interface for total newbies: if you didn’t know much about HTML, you used that. Later you could change to a second interface in which you had some more buttons for more actions. There was also a third interface which looked a bit like HomeSite 4 or whatever.
In this particular case it was done quite nicely, but I’m not sure this kind of solution could apply to everything.
The defragging thing – hell yes! Why does Linux defrag automatically but Windows doesn’t?
Linux does not “defrag automatically”.
And not only is it something the user shouldn’t have to worry about, it _is_ significant to performance.
I’d be more than willing to bet a few thousand bucks the typical user couldn’t perceive the difference if you defragged their hard disk.
One thing I’d like to see lost is the damn Recycle Bin/Trash thing. With all these flash undeletey things about (System Restore etc) it shouldn’t actually be necessary.
Firstly, undelete tools and things like System Restore are for _completely_ different purposes.
Secondly, “undeleting” is notoriously unreliable.
The Recycle Bin and Trash are meant as a temporary location for files that aren’t wanted anymore.
And frankly it’s a bloody nightmare; you delete some documents because you know your disk will get full. Sooner or later it does anyway, and you discover you’ve got 500MB in your Recycle Bin – so you have to tell it to delete all these files *again*. Not totally intuitive.
Actually it is intuitive and consistent with the metaphor. Think of the Trash can under your desk – you put stuff in it, but until it gets emptied you can still pull that stuff back out.
As an aside on the software update thing; anyone else think that stinks? You can’t make it nice for the users (*especially* not with the disaster Windows makes of it).
You go to http://www.windowsupdate.com. You hit “Express Install” (IIRC). You wait.
It doesn’t get much simpler than that.
Does anyone remember patching their copy of Win 3.11? Neither do I….
Nevertheless, some updates and fixes were released.
Okay, we’ve got an internet to worry about security threats from now, but that just shows bad planning; what’s the point of having an internet to make finding information easier if it’s a trial going near your computer?
It’s not as long as you take simple, basic precautions.
Mostly the problem is this: the system as it stands now is app-centred, not task-centred.
Most GUIs these days can be considered *both* “application centric” and “document centric”. It’s just a matter of which way you choose to use them.
Rather than having to think about using a browser to view the web and a mail program to send mail, you should have an "address" area, similar to a browser's, that is always present. Write an email address into this and the main window would show a "new mail" GUI.
Already done, at least in Windows.
Right click your taskbar, Toolbars, Addresses. Chuck in a WWW URL or something like mailto:[email protected] (the latter is somewhat clumsy, I’ll agree).
This has been around since IE4, back in 1997 (’96 if you count the betas).
If you want to write a new document you should be able to have a "New…" entry on a menu; select "document" there and up pops the GUI for that task.
Right click a folder (or the Desktop), New, pick your document type. This dates back to Windows 95. OS/2 did it even earlier with templates.
On one side of the screen (the bottom is totally the wrong place to put a true task bar, as it has to do more than list the windows that are currently open) you have a task bar that shows two scroll arrows, and every task that you have not saved, or sent, or in a similar way completed is shown by a big box showing the file name and some basic buttons (save and discard changes for documents of different kinds, send or discard for mails, and similar). I can already hear the Mac fanatics shouting, but please…
This is getting a bit intrusive. Not _everything_ you do is task oriented. More importantly, not everything you do can be easily assigned a “start” and “finish” point, particularly such that the computer would understand.
Basically the task bar should do more than just list the currently active apps. It should be the primary interface for basic interaction with the tasks at hand. No need to look for a save button inside the GUI as it's always on the task entry in the task bar.
This sounds like turning the taskbar into a combination of the Taskbar and the Mac’s single menu bar.
Right now the computer desktop can be task-centric, but only as long as all the stuff on it is made by the same company that made the OS.
Untrue. The interfaces to do things like put entries into the “New” context menu are public.
Look at how MSN Messenger hooks into Outlook Express and the address book. This can't be done by any other app (at least I have yet to find another IM client that's able to do so).
That no-one else does it does not mean it can’t be done.
Any mail client should be able to share addresses with any IM program, basically by hooking into the traffic pipes and telling the OS "hey, I'm a mail app and I need to be able to handle any mail address accessed by any app and share addresses with any known address book".
AFAIK every major OS/GUI has a central address book style applet that can be queried from any application. In other words, this is already possible.
First off, you had 2 (two) examples of where an OS could improve: that they don’t defrag when idle (hint: Mac OS X does), and that Word shouldn’t have a save button (hint: auto-saving is easy to enable, and IT’S NOT AN OS).
And as an example of how computers should act, as an example of computing where the user doesn’t have to deal with “boring things like menus and double clicking”, you give GMAIL?
WHY? Because it has lots of space?
you are a VERY stupid person. Try a little googling next time.
http://www.osxfaq.com/DailyTips/10-2004/10-08.ws
Panther automatically defragments frequently used files less than 20 Meg in size. The process is recorded in file ‘/.hotfiles.btree’ (not human-readable).
The only (de)fragging you need do is in Unreal and Quake. 🙂
http://www.apple.com/macosx/tiger/spotlight.html
Pretty much the same is true for cell phones. The main purpose of cell phones is (tada): to call someone.
This is actually a *great* example, because as time passes, mobile phones are picking up more and more features and we can watch what happens as it happens.
Consequently, they are becoming less reliable and their UIs are becoming increasingly complex, with steeper and longer learning curves (IMHO the pinnacle of mobile phone UI was reached by Nokia on phones like the 5110 and 3210 – a keypad, two arrows, an “enter” and a “cancel”).
you are a VERY stupid person. Try a little googling next time.
http://www.osxfaq.com/DailyTips/10-2004/10-08.ws
Panther automatically defragments frequently used files less than 20 Meg in size. The process is recorded in file ‘/.hotfiles.btree’ (not human-readable).
The only (de)fragging you need do is in Unreal and Quake. 🙂
AFAIK this doesn’t happen continually in the background, it only happens when that file is initially accessed (eg: opened for editing).
So I taught him plain HTML with a simple text editor in 15 minutes. And that was easy for him to understand, because now *he* was in control, not the fscking machine!
I think systems should be more transparent about their inner workings, allowing users to understand.
Agreed! I would never have been able to use computers so effortlessly if I had started with Windows instead of the Acorn Electron and DOS. God help people born into the world of Windows. I like to start from the bottom, going up. It’s like programming: the best programmers know assembly, and can use it when necessary (not very often).
OSes do suck, you’re right, but not because they don’t provide enough abstraction. That’s not the OS’s job (see exokernels, etc. for what OSes should really be doing). It’s the job of software developers in userspace to abstract away the computer, and indeed the trend has been in that direction.
Yes. An OS that wouldn’t suck would be nothing more than a transparent layer to compose address spaces, multiplex execution contexts and provide shared hardware access (interrupt handlers, I/O ports and address ranges, etc.). It would be tiny and verifiable. I think whatever goes on top is a harder problem, but it should be changeable. Linux is kind of like this but provides too many abstractions (files, processes, device ‘files’).
Well contrary to what the majority of hecklers have to say about the article… I found it very interesting. I do wish though that OSNEWS could implement some form of age verification as I am sure the majority of OSNEWS readers must be 14 year olds.
If someone writes an article that you do not agree with, you are more than welcome to disagree, but do so in a polite and honorable fashion. Don’t start your post with a direct insult… put yourself in the author’s shoes… if you went to all the time and trouble to write an enlightening article on a given subject, you would certainly not appreciate having someone slap you in the face with insults about your work. I would highly recommend that some of you hit Google and look up a term known as “constructive criticism”. Don’t just slam an article… analyze it… think about it… and then provide feedback as to how the author could have made his argument better. And make sure that you thank him for his article… if it weren’t for contributors like the author and many others, OSNEWS would not be the site that it is.
Thank you for presenting this article. I personally agree very much with what the article is saying at its root. The fact of the matter is that operating systems and software are nowhere close to being as easy or as usable as we seem to think they are. Alan Cooper’s “The Inmates Are Running the Asylum” is all about this topic and certainly provides some great food for thought. Users don’t need more features in an already feature-packed application… what they need is usability. I can’t believe that after as many revisions as Office has had, it is still as annoying to use as when I first started using it with Office 97. Sure, they’ve added more features… but it’s just made it bloaty. I don’t need a quarter of the features they have available. This could be easily fixed by asking you a few questions during the installation and then having it tailor the install to your particular needs. Not only that, but the UI itself is immensely distracting. There are a million little buttons that have icons no bigger than 5px by 5px. How is an elderly gentleman without his reading glasses supposed to use this? Right… the menus. OK… so I hit the Format menu. Lo and behold, there are at least 20 menu options… hmmm… which one do I want… wait… I’m not fully competent in using a mouse yet… so I accidentally move over to the View menu… View? What is this? I want to format…
And so the gong show that is the average user’s life continues. Apple is certainly on the right track but there is still vast room for improvement. OS X Tiger will fix the search issue (by using Spotlight) that the author of this article mentions… and will blow the socks off anything coming out of Redmond.
I highly recommend going and looking at what Tiger is going to show off next quarter.
http://www.apple.com/quicktime/qtv/wwdc04/
After you watch this.. you will realize just how much Windows and Linux are being left in the dust as far as usability goes. Apple still has things to improve on… but they have their head in the right direction.
This revolutionary interface was actually developed based on how the human mind works. It is the interface that will eat up the Microsoft desktop share if it ever becomes mature. Buy the book, contribute to it if you’re a Python programmer, get it done so I can finally use an interface that makes sense and doesn’t require me to do stupid things like start programs or resize windows.
http://humane.sourceforge.net/the/
“So I taught him plain HTML with a simple text editor in 15 minutes. And that was easy for him to understand, because now *he* was in control, not the fscking machine!
I think systems should be more transparent about their inner workings, allowing users to understand.”
“Agreed! I would never have been able to use computers so effortlessly if I had started with Windows instead of the Acorn Electron and DOS. God help people born into the world of Windows. I like to start from the bottom, going up. It’s like programming: the best programmers know assembly, and can use it when necessary (not very often).”
Exactly my point! Hence the need to start with command-line systems and understand why they are doing tasks, not bury them beneath layers of abstraction and various metaphors such as the desktop.
I also agree with Paul Hahle. Raskin’s THE is exactly where we should be heading. It may not be suitable in all circumstances, but it is a step in the right direction and a move away from the dead end that GUIs have become. A tool like THE will also make it an easier path to eventually use voice interfaces.
I want to see a day when I can “telnet” into my machine anywhere from a mobile ‘phone via a secure link and have it read e-mails or web pages on my request. (I live in hope.)
Exactly my point! Hence the need to start with command-line systems and understand why they are doing tasks, not bury them beneath layers of abstraction and various metaphors such as the desktop.
Commandlines and GUIs work at the same level of abstraction. Double clicking an icon to run a program is at the exact same level of abstraction as typing a command to run a program. As is dragging a data file onto an icon and piping a data file into a binary.
The only difference is the method of using those abstractions.
I want to see a day when I can “telnet” into my machine anywhere from a mobile ‘phone via a secure link and have it read e-mails or web pages on my request. (I live in hope.)
You can do that *now*.
Instead of getting stuck in specific examples, focus on the overall message. While current interfaces may be fine for expert users, for the majority of novices, as well as users for whom computers are a secondary tool, most current interfaces are too complex.
Ideally we’d have the Star Trek interface, also used in most other sci-fi movies, where people simply speak commands in a plain human language (English). That way we could simply ask for an ABC report or info on XYZ without worrying about disk directories, HTTP, FTP and SOAP protocols, etc. Of course, we are still well short of this ultimate goal.
1) AI R&D is pathetic
While universities are doing Artificial Intelligence research, there is little to no development and use in mainstream apps. The IT and software industry is completely neglecting AI (with some exceptions).
To make computers easier to use we need to make them “smarter”, so that they can understand our instructions without having to specify a great number of parameters; we need AI.
2) Little focus on automation, most efforts on features
New CPUs are starting to add multiple cores, so there is plenty of capacity for executing many threads. Software could/should create extra threads to automate mundane tasks, such as disk management. Given a certain configuration, Outlook automatically archives messages in the background. Simple to program, and it saves me a lot of management work.
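Something like this sketch, for instance; the folder names and the 90-day policy are invented, and a real version would live inside the mail client rather than a standalone script:

    import os
    import shutil
    import threading
    import time

    def archive_old(inbox="Inbox", archive="Archive", max_age_days=90):
        """Move files older than the cutoff out of the inbox folder."""
        cutoff = time.time() - max_age_days * 86400
        os.makedirs(archive, exist_ok=True)
        for name in os.listdir(inbox):
            src = os.path.join(inbox, name)
            if os.path.isfile(src) and os.path.getmtime(src) < cutoff:
                shutil.move(src, os.path.join(archive, name))

    def housekeeping_loop(interval=3600):
        while True:                  # runs for the life of the application
            archive_old()
            time.sleep(interval)     # once an hour is plenty

    # A daemon thread does the work in the background, never blocking the UI.
    threading.Thread(target=housekeeping_loop, daemon=True).start()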
Instead, the IT and software industry is focusing on adding new features, which of course have to be invoked manually. After all, it is features that sell a new version, that make a new version more valuable than the old one. If AI and automation were more aggressively used you’d actually remove features. How do you sell a new version that has fewer features?!?!?
The result is code bloat: a simple, mindless piling of features, forms, windows and dialogs on top of one another, to the point where software is far too complex. The industry simply seems not to care.
There are no technical impediments to implementing at least some simple automation/AI features. A file manager could perhaps “observe” user actions and “learn” about repetitive tasks, then perform (or offer to perform) them automatically. There are many possibilities, but the industry is not looking at them.
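A toy version of that “observe and learn” idea; the three-repetition threshold and the action names are invented for illustration:

    from collections import Counter

    history = Counter()

    def observe(action, src_ext, dest):
        """Record an action such as ('move', '.jpg', 'Pictures/')."""
        key = (action, src_ext, dest)
        history[key] += 1
        if history[key] == 3:
            offer_automation(key)

    def offer_automation(key):
        action, ext, dest = key
        print(f"You've {action}d three {ext} files to {dest}. "
              f"{action.capitalize()} the rest as well?")

    observe("move", ".jpg", "Pictures/")
    observe("move", ".jpg", "Pictures/")
    observe("move", ".jpg", "Pictures/")   # third time triggers the offer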
As the author suggested, it comes down to money. There is more money in bigger, more complex apps. More support and more features can be sold for more money, etc. OSS will have to take the lead, as users for the moment are not rebelling.
Dr Smithy,
Yes, I agree that there is a level of abstraction in using the command line, in that we are not dealing directly with binary code or manipulation of data at its most fundamental level. Yes, we also still use metaphors such as “electronic mail.” However, the next layer of abstraction, that involving metaphors pertaining to the desktop, such as “trash cans” and so on, is eliminated.
Directly coding pages via HTML or LaTeX is a better learning method than using WYSIWYG.
There needs to be some degree of abstraction as we do not think like machines, so an interface is needed, but not one whereby we lose sight of how the machine works.
On your second point, you mention that we can use a mobile ‘phone to connect to a computer now and read e-mail. What I was referring to was doing this via voice. In other words, connect via a remote portable device to a computer sitting in a house, have voice pattern recognition as a password substitute and then say, “Open Mutt. Retrieve e-mail.”
The computer would respond with voice output: “Five new messages.”
I would then state, “Filter for new messages from Fred Smith. Read.”
It would then read the messages from Fred to me down the ‘phone line.
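For what it’s worth, the command-parsing half of such a system is the easy part. A toy sketch that turns recognised text into mailbox operations; the speech recognition and text-to-speech at either end are assumed away, and the mailbox is faked:

    import re

    mailbox = [{"from": "Fred Smith", "body": "See you Tuesday."},
               {"from": "Mary Jones", "body": "Invoice attached."}]
    selection = mailbox

    def handle(command):
        """Turn one recognised spoken command into a mailbox operation."""
        global selection
        m = re.match(r"filter for new messages from (.+)", command, re.IGNORECASE)
        if m:
            selection = [msg for msg in mailbox if msg["from"] == m.group(1)]
            return f"{len(selection)} messages from {m.group(1)}."
        if command.lower() == "retrieve e-mail":
            return f"{len(mailbox)} new messages."
        if command.lower() == "read":
            return " ".join(msg["body"] for msg in selection)
        return "Sorry, I did not understand."

    print(handle("Retrieve e-mail"))
    print(handle("Filter for new messages from Fred Smith"))
    print(handle("Read"))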
Does such a technology actually exist already? Obviously, the various elements are there, but I have not heard of them being put together in this way. If so, please supply me with more information.
Yes, I agree that there is a level of abstraction in using the command line, in that we are not dealing directly with binary code or manipulation of data at its most fundamental level. Yes, we also still use metaphors such as “electronic mail.” However, the next layer of abstraction, that involving metaphors pertaining to the desktop, such as “trash cans” and so on, is eliminated.
Not really. Using “rm” to delete a file is no less an abstraction than dragging an icon to the trash, selecting it and hitting the delete key, or right-clicking it and hitting delete. Certainly, the *methods* are different, and the metaphors are different (a “trash can” or “delete key” vs an arbitrary couple of letters), but fundamentally both people are doing the same thing: using a process (“drag to trash”, “rm”) to tell the machine to delete a file (the file’s icon, a filename).
Directly coding pages via HTML or LaTeX is a better learning method than using WYSIWYG.
Learning method, perhaps – but not everyone wants (or needs) to learn, they want to *do*. When all someone wants to do is bang out a quick letter or put a few pictures onto a web page, a tool with more automation is all that’s required.
People shouldn’t have to learn how to write an HTML page in Notepad to put a few pictures onto the web. There’s no justifiable reason they should.
There needs to be some degree of abstraction as we do not think like machines, so an interface is needed, but not one whereby we lose sight of how the machine works.
Say what? The perfect abstraction would be when we could forget completely about the underlying machine and how it works, and concentrate solely on the tasks at hand.
Does such a technology actually exist already? Obviously, the various elements are there, but I have not heard of them being put together in this way. If so, please supply me with more information.
Well, all the necessary components certainly exist and have for some time. I’m not aware of anyone who has actually glued them all together though, probably because of lack of demand. Personally, I think the system you suggest sounds like a horribly frustrating and inefficient way of remotely accessing email with little to no practical improvements in ease of use. Here’s a few reasons why:
1. It would be slow.
* Most people can read a lot faster than they can listen.
* Long emails would take substantial time to read end to end. When you’re reading, you can skim.
* Retrieving messages is inherently interactive.
2. It lacks persistence.
* If you need to check some aspect of a message, you need to go through the whole rigmarole again.
3. It’s way too reliant on certain aspects of the underlying technology.
* It will be a long time before there is pervasive, perfect, crystal-clear mobile phone reception. This potentially breaks your authentication method, your ability to instruct the machine what to do and your ability to listen to the information retrieved.
A far better solution for this sort of thing is the technology that exists today – PDAs, email-enabled phones, devices like the Nokia communicator, Treos or Blackberries.
Consider the advantages:
* You get a written message you can quickly skim through for important information.
* The message is stored locally for quick and easy review
* Messages can be retrieved in the background and read at leisure
* The lower negative impact of poor connection quality with regards to transmitting and receiving text means a more reliable, more efficient system.
Thanks for your response, Doc Smithy. These are my last words on the subject:
Perhaps I was wrong to use the term “abstraction.” Maybe the actual “metaphors” themselves are the problem. The fact that we use terms such as “desktop,” “trash can” and “file” is, of course, to provide comparison to “real world” objects. However, this disguises the fact that we are not really placing a “file” in a “trash can”; we are in fact erasing a collection of magnetic markings on a disk.
No real vocabulary of computing to describe precisely what we are doing has emerged (hence unwieldy descriptions such as “erasing a collection of magnetic…) By vocabulary, I do not mean industry jargon but a precise relationship between signifier and signified. History shows that language develops over time to meet these needs.
Onto your second statement, regarding HTML or LaTeX coding, here I do disagree with you. I really think there is a necessity for everyone to learn HTML. I believe the concept that came in with the Mac that manuals could now be dispensed with is a fallacy. Education is still a key concept for learning to use a computer properly.
Finally, regarding the mobile ‘phone interface/voice system that I am hypothesising, I definitely agree that the idea needs more thought. You have focused on the negative aspects and there would certainly be challenges to overcome, but whilst retrieval of e-mails may be slow (though I am surprised by your assertion that listening is slower than reading, unless one is skimming), input (via dictation) would be faster than the majority of typists. (Of course, I acknowledge the current limitations of ViaVoice and Dragon would need to be overcome.)
It would also benefit many people with certain types of disability.
Knowledge reserves such as the internet could be accessed from one’s belt.
I agree that searching and navigation would be technical challenges and there would definitely be a need to avoid a system where rigmarole would be constantly repeated. That would annoy me as much as it would anyone else.
Really this idea is still very much abstract but I feel it would have potential once the limitations you point out are overcome.
Thanks for your response, Doc Smithy. These are my last words on the subject:
Even so, I am compelled to respond to clarify my stance on a few points.
Perhaps I was wrong to use the term “abstraction.” Maybe the actual “metaphors” themselves are the problem. The fact that we use terms such as “desktop,” “trash can” and “file” is, of course, to provide comparison to “real world” objects. However, this disguises the fact that we are not really placing a “file” in a “trash can”; we are in fact erasing a collection of magnetic markings on a disk.
Perhaps, but the overall point you’re trying to make is that you think a CLI offers a “lower level” view of the system than a GUI and hence is a “better teaching tool”.
I don’t believe this is correct. I can’t think of any reasonable argument as to why a command like “rm filename” should be considered any “lower level” than dragging an icon to the trash.
No real vocabulary of computing to describe precisely what we are doing has emerged (hence unwieldy descriptions such as “erasing a collection of magnetic…) By vocabulary, I do not mean industry jargon but a precise relationship between signifier and signified. History shows that language develops over time to meet these needs.
I think “deleting a file” is good enough, don’t you?
Education is still a key concept for learning to use a computer properly.
Millions of competent drivers out there who barely know enough to continue putting fuel into the vehicle would suggest otherwise.
Indeed, so would millions of computer “gurus” who haven’t a clue about things like machine code, formal logic, silicon chip fabrication, etc.
It *is* possible to competently (if not expertly) use a tool without having a clue how it works. I direct your attention to an entire generation of children who are able to communicate with each other despite having a grasp of grammar and spelling that would have been considered borderline illiterate fifty-odd years ago as another example.
You have focused on the negative aspects and there would certainly be challenges to overcome, but whilst retrieval of e-mails may be slow (though I am surprised by your assertion that listening is slower than reading, unless one is skimming), […]
Well, it is – in general, at any rate. The typical person can read, say, a page of writing far faster than they would be able to listen to it read at a “normal” speed.
This might not be true amongst people who haven’t completed significant parts of their schooling, and incidents of people who can’t read (or write) very well are certainly snowballing due to modern schooling techniques, but on the whole the typical person on the street is far more likely to be a quicker reader than listener.
[…] input (via dictation) would be faster than the majority of typists.
In this you are correct. Even more so when you consider the awful input capabilities of the sort of tools under discussion.
It would also benefit many people with certain types of disability.
Another excellent example of where voice recognition has a lot of potential.
I agree that searching and navigation would be technical challenges and there would definitely be a need to avoid a system where rigmarole would be constantly repeated. That would annoy me as much as it would anyone else.
I don’t think it’s so much “technical challenges” as sheer physical limitations. There’s only so fast you can have understandable text read to you.
Really this idea is still very much abstract but I feel it would have potential once the limitations you point out are overcome.
Don’t get me wrong, I think voice recognition is technology that has a lot of potential in some areas, but trying to use it for some things – the things I pointed out – is very much square peg in round hole type stuff. Right tool for the right job, etc.
Voice recognition and voice control are not a silver bullet, they’re just another tool. Don’t get sucked into the “it solves all our problems, and does the dishes as well” attitude.
Although when watching Star Trek I was always more impressed with the way the computer could supposedly grasp the *meaning* of the commands it was given, rather than the mere fact it could recognise the words being spoken.