“To be truly free in the 21st century, we have to ignore the flashy graphics and really get inside our computers,” says the Guardian, while Slashdot runs a story titled “Tangible Interfaces for Computers”.
dylan almost inspires me to give up friends, family, fresh air, seeing the sun, and ever seeing the low side of 200 lbs….
all to get inside the code.
ok maybe not. i need a life! 😉
i think he’s a bit too smitten with the matrix stuff….btw i enjoyed the movie. better than the second. no rave parties…thank the heavens.
I kind of wonder if this Evans guy ran out of a *Terminator* show screeching that the end was nigh. Does he ever stare at his monitor from across a dark room and silently plan to kill it before it kills him? Please. Yes, computer literacy is important, as is understanding computers. But do we truly have to see a hot blonde in a stream of green HTML characters in order to avoid becoming “manual labor”? The icons (always those infernal icons!) are not our blinking little enemies. I’ll leave the coding to the expert hackers out there (thank you guys!!) and simply use my machines for productivity…which, as I recall, is why I paid big bucks for the software in the first damned place. (Mind you, “big bucks” is a relative term, but still….)
I think it’s good for people to get to know what’s under the hood but…
But the graphical interface has so many advantages over a text interface, and text is just another interface anyway; even languages like C++ were invented as an interface so we can communicate more easily with the machine.
‘dir’, ‘ls’, and ‘cd’ are also just ‘icons’ we use to make controlling the machine easier. To really get to the basics we’d have to learn binary code and understand how it is processed by the CPU and controls the hardware…and isn’t even that just an interface?
I know the ‘under the hood’ stuff, but I can certainly say that if I had a choice between a text and a graphics interface, I’d choose graphics. All sorts of operations are done many times faster with the click and drag of a mouse than by typing commands.
The article paints an impression that human beings are somehow better suited to tactile, highly visual interfaces because they mimic real objects. Of course, that ignores one of the most powerful capacities of the human brain — the capacity for abstract thought.
Tactile and real-world interfaces rely on the more primitive processing centers of the brain. In contrast, more abstract interfaces rely on the brain’s well-developed centers for speech and other symbolic communication. The former are very specialized. For their limited range of uses (object recognition, spatial location, etc.) they can process information extremely quickly. In contrast, the more abstract centers are much more versatile, but more limited in what information they can process.
Interestingly, one of the most powerful examples of the above phenomena is the keyboard. It’s completely physical — depending only on muscle memory to transfer (relatively) large amounts of data from the user to the computer. Further, its reliance on the more primitive physical centers of the brain allows keyboarding to proceed in parallel with more abstract processes. Something like a speech recognition interface, in contrast, competes with the more abstract centers of the brain, which means that it cannot occur in parallel with abstract thought. Because of these powerful features, my guess is that you will not find a fundamentally better abstraction for data entry than the keyboard, at least until you can come up with a more direct computer -> human connection.
I’m going to go out on a limb and throw my two cents in there. The interface of the future will not be tactile. That would not be an efficient use of the brain’s resources. Instead, it will be a mix of a tactile/visual interface and an abstract (yes, think CLI) interface. The visual interface will be leveraged for things it is good at (object recognition, pattern finding, etc.) while the abstract mechanisms will be used for general communication. Real innovation will come from making information readily available for use in the abstract interface. Just as your mind can readily associate many relations with a single concept, the “CLI of the future” will use advanced data indexing schemes to relieve the human memory from having to manage vast amounts of data manually. (A toy sketch of what I mean by indexing follows below.)
This interface will not be easy or intuitive. But in the future, computers will be so important, they won’t have to be. Knowing this interface will be like knowing how to speak or write properly, or how to do math. None of these are easy or intuitive, but schoolchildren everywhere learn them because otherwise they cut themselves off from the vast collective knowledge of humanity. For such a basic skill, efficiency and power trump all else.
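(To gesture at what the “advanced data indexing schemes” above might look like, here is a toy sketch in Python: an inverted index over some made-up snippets. It is only an illustration of the idea, not a claim about how a future interface would actually be built.)

# Toy inverted index: the machine remembers where things live so the user doesn't have to.
from collections import defaultdict

documents = {                       # made-up snippets standing in for memos, mail, notes...
    "memo-12": "budget review for the apollo project",
    "mail-03": "apollo schedule slipped two weeks",
    "note-77": "grocery list and weekend plans",
}

index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.split():
        index[word].add(doc_id)

def lookup(word):
    """Return every document that mentions the word."""
    return index.get(word.lower(), set())

print(lookup("apollo"))             # {'memo-12', 'mail-03'} (in some order)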
The graphical user interface can be limiting to the power user. Yes, it is great for single-task operations. But when you need to combine multiple tasks and commands to achieve a customized result, the textual interface is exponentially more powerful. It is certainly to your benefit if you sacrifice the time to learn it. Well-designed applications provide textual interfaces as an alternative, for scripting and much more flexible control.
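To make the point about combining tasks concrete, here is a minimal, purely illustrative sketch in Python (the reports folder and file names are hypothetical): it gathers every report changed in the last week and stitches them into one summary file, a job that would take many clicks in a file manager.

# Purely illustrative: compose several small steps (find, filter, combine)
# into one scripted task, the kind of composition a GUI makes tedious.
import pathlib
import time

SRC_DIR = pathlib.Path("reports")          # hypothetical folder of .txt reports
OUT_FILE = pathlib.Path("weekly_summary.txt")
ONE_WEEK = 7 * 24 * 60 * 60                # seconds in a week

def recent_reports(directory):
    """Yield .txt files modified within the last week."""
    cutoff = time.time() - ONE_WEEK
    for path in sorted(directory.glob("*.txt")):
        if path.stat().st_mtime >= cutoff:
            yield path

with OUT_FILE.open("w", encoding="utf-8") as out:
    for report in recent_reports(SRC_DIR):
        out.write("== %s ==\n" % report.name)
        out.write(report.read_text(encoding="utf-8"))
        out.write("\n")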
the Matrix analogy that is.
The perfect interface *will* be like the Matrix, however: not because everyone has adapted to how the computers talk, but vice versa. The computers will be able to translate into stimuli our brains naturally understand. You won’t be reading lots of assembler; the instructions will be signals that you’ve grokked since birth.
Uhmmm… computers evolve in the direction of becoming more intuitive and easy to use. At first you had to use physical gates to enter the program… all in binary. After that we got punched cards, text entry, the CLI, the graphical interface -> so why should it go back to a hard-to-learn computer?
For most people, computers are just a tool and nothing more, like a hammer. You can use a hammer without having to be a PhD in math from MIT, yeah?
Computers keep getting more and more power to process things, so more of that power is free to present things in a way we can understand easily.
Abstracting things is a good way to work with big amounts of data – otherwise, in a few years we’ll be in the Matrix manipulating a big library with an old system, and the only thing that will have changed is the space it takes up. Stupid.
The guy who likes the CLI is crazy, and it’s not really how computers work… computers still work in binary, right??? So should we all learn assembly commands in binary (it’s been a long time since I’ve done any ASM too :p)? That would slow things down a lot.
He sounds like a guy who’s just installed Linux or learned how DOS works or learned to write HTML. He wants to tell everyone how important it is for them to do the same.
He isn’t making a convincing case at all. “Writing code will become essential for professionals of every stripe”? Ridiculous. Writing code is quickly becoming pretty much optional even for programmers (I think RAD in VB is coding in the same way that, say, a pencil is a stabbing weapon. It can be, but it’s not what one means when one uses the term ‘stabbing weapon’).
It’s a stretch to call those that can’t write code ‘illiterate.’ Writing code is to computer use as knowing about the history of morpheme development in Eastern European languages is to literacy.
Interestingly, one of the most powerful examples of the above phenomena is the keyboard. It’s completely physical — depending only on muscle memory to transfer (relatively) large amounts of data from the user to the computer. Further, its reliance on the more primitive physical centers of the brain allows keyboarding to proceed in parallel with more abstract processes. Something like a speech recognition interface, in contrast, competes with the more abstract centers of the brain, which means that it cannot occur in parallel with abstract thought.
You’re comparing two different standards. You’re comparing the mechanical actions of hitting the keys to the entire act of forming coherent thoughts and speaking them. You should be comparing hitting keys to making sounds. Coherent input via the keyboard requires just as much – if not more – abstract thought as voice input does. More importantly, using the keyboard requires (for most people at least) learning and mastering *additional* skills.
Because of these powerful features, my guess is that you will not find a fundamentally better abstraction for data entry than the keyboard […]
I would be amazed if you can type faster than you can speak. Heck, most people can’t even type faster than they can *write* without serious amounts of practice.
A keyboard is a pretty poor input device, really. It’s ergonomically awful and requires significant amounts of training before using it becomes automatic. “Chording” type keyboard replacements are usually _vastly_ more efficient, have better ergonomics and require about the same amount of training.
This interface will not be easy or intuitive. But in the future, computers will be so important, they won’t have to be.
I disagree. (General purpose) computers won’t become essential or integral to everyday life *until* the interfaces have become simple and intuitive. My bet is on *real* natural language recognition making this possible.
Knowing this interface will be like knowing how to speak or write properly, or how to do math.
This is how it works _now_ and it sucks. Why should we have to learn additional skills so we can interface with the machine? Surely it’s more intelligent to give the machine the capability to interface with us, using skills we all have to learn anyway.
For such a basic skill, efficiency and power trump all else.
If that were true, English wouldn’t be the de facto language of the most advanced, knowledgeable and educated civilisations on the planet.
Humans have spent their entire existence modifying their environments to make their lives easier. I can’t see that pattern changing with computer interfaces. Indeed, one of the reasons computers are becoming more popular is because their interfaces are moving away from the model you are advocating.
Hopefully, one day, we’ll have Matrix-esque interfaces and, more importantly, education systems. I doubt it will happen in my lifetime though.
For most people, computers are just a tool and nothing more, like a hammer. You can use a hammer without having to be a PhD in math from MIT, yeah?
>>>>>>>>
You need to get out of your box. We’re not talking about computers now. We’re talking about computers in the future. A future where anybody who can’t efficiently use a computer is like someone who can’t read. A future where a computer isn’t just a tool, but an integral part of human society. We’ve already seen part of this happen. The internet has connected the whole world. Engineers do most of their work in virtual space. Doctors are starting to do surgery through robots. Their applications are unlimited. Anywhere you need to think, you can use a computer to help you, or to think for you so you can think about more important things. A computer is most definitely not a hammer, not now, and certainly not in the future. It’s something much more fundamental than that — something, I’d argue, that is worth spending years learning (think of the 12 *years* of studying kids do today in preparation for the real world!) in order to fully harness its power.
Computers keep getting more and more power to process things, so more of that power is free to present things in a way we can understand easily.
>>>>>>>>>
Easy != efficient. Human speech is not easy. Mathematics is not easy. But they are so fundamental that it doesn’t matter, because the cost is worth it.
The guy who likes the CLI is crazy, and it’s not really how computers work… computers still work in binary, right???
>>>>>>>>>>
It’s not about how close it gets to how the computer works. It’s about efficiency. In a future where everything is computerized, point-and-click is not going to cut it. We’re going to want something that allows us to command the computer as fast as we can think. Since a direct neural link (with the required level of fidelity) is probably very far into the future, we have to do something that comes close. So far, the CLI is as close as we have gotten. It embodies the principle that there is no need for cumbersome real-world metaphors — the computer is not the real world! Instead, it’s direct. You think something, and you tell the computer to do exactly that. Of course, the interface of the future won’t be anything like the CLIs of today. CLIs of today don’t take enough advantage of our visual capabilities, and don’t work well for some things that are inherently visual. They put too much of a burden on the operator’s memory and familiarity with the system.
The interface of the future will be based on fundamental axioms that can be taught in school, which the user can extend to any scenario he might find himself in. This is exactly how mathematics and writing/reading are taught today. It will incorporate natural language processing (to relieve the human of having to be as precise as a computer) and extremely powerful database and search capabilities, to free the user from having to remember a large amount of information.
Come on, I wrote an article on OSNews on this a while back. CLI doesn’t cut it AT ALL for abstract data display/editing. Sure, maybe deleting a file is faster in CLI, but, say, word processing and spreadsheets are unusable without GUIs (Note that, say, Vim and Emacs are both GUIs in a sense, because they are not direct text-entry arrangements. Ever tried using ed? That’s the ‘ultimate efficiency’ of the command line for you).
The author really doesn’t know what he’s talking about. In any case, any tasks that the layman will be trusted to do will not involve or be helped by command lines.
Also: What’s gonna be taught in school? C? Pity the child that is forced to code C. Perl? Better teach ancient Sumerian. VB.NET? That loses the whole point of ‘getting to the bottom of computers’.
Those of us who have read Barthes’ Mythologies can see that ‘the revolt of the machines’ is merely yet another myth of capitalist society. It hearkens back to communism, the ultimate nightmare–the rebellious proletariat being replaced by technology. Besides what ifs, I cannot see any reason at all to believe that computers or cars or ice dispensers will try to take over our world.
You’re comparing two different standards. You’re comparing the mechanical actions of hitting the keys to the entire act of forming coherent thoughts and speaking them.
>>>>>>>>
No I’m not. It’s a very well-known phenomenon. You can look up the research yourself if you’re interested. Voice is not something innate to humans. Their thoughts are completely abstract — they don’t really talk to themselves while they are thinking. After a thought is completed, it takes more mental resources to form words and voice them than it does to type them. More importantly, the resources required to voice a thought are the same resources that the brain could be using to come up with the next thought. In contrast, the resources used to key-in a thought are different from those used to form the next one. This basic principle is the reason you can generally type and think at the same time, but not talk and think at the same time. It’s also why fast readers don’t voice the words to themselves as they read, because it slows them down and uses up resources that could be better used towards comprehending the material.
I would be amazed if you can type faster than you can speak. Heck, most people can’t even type faster than they can *write* without serious amounts of practice.
>>>>>>>>>>>
First, you can speak several times faster than you can type. However, you can think and type at the same time, but you can’t speak and think at the same time. Unless you’re reading pre-prepared material, the thinking process is going to be the bottleneck. Speaking just makes that bottleneck worse, because it contends for the same resources as thinking. Second, we’re not talking about today. Typing classes are already widespread in schools. As computers become more important, they’ll become required parts of the school curriculum.
“Chording” type keyboard replacements are usually _vastly_ more efficient, have better ergonomics and require about the same amount of training.
>>>>>>>>>>>
I’m using “keyboard” very loosely. The fundamental concept of chording devices is still the same as that of a keyboard. My point is it will be a very manual interface (maybe twitches of your fingers or something else?), not something like speech.
I disagree. (General purpose) computers won’t become essential or integral to everyday life *until* the interfaces have become simple and intuitive.
>>>>>>>>
By all accounts, computers today are *not* simple and intuitive. But many people do nothing all day but sit in front of a computer. The benefits of computers are too great to waste just because people don’t want to learn. Corporations, seeing the massive productivity increases possible, will force employees to learn, and schools, wanting to prepare children for the real world, will force students to learn. Again, just like math and writing.
This is how it works _now_ and it sucks. Why should we have to learn additional skills so we can interface with the machine?
>>>>>>>>>
Because machines will become so important that you won’t have a choice. Our current skillset (specifically, reading, writing, and speaking) is about as efficient as you can get for something designed to communicate with other humans. But computers aren’t other humans. They’re essentially a tool for your mind. If you want to use this tool to its fullest potential, you’ll just have to suck it up and learn another skill. I didn’t want to take 3 years of calculus, but I had to. You’ll get over it. And if you don’t, you’ll just get left behind, like people who can’t read or write or speak properly.
Surely it’s more intelligent to give the machine the capability to interface with us, using skills we all have to learn anyway.
>>>>>>>>>>
It’d be easier, but it wouldn’t be *efficient*. And if computers do become so widespread, it *will* be a skill you have to learn anyway. I think you’ve got the wrong idea. It’s not a matter of changing your behavior to make it easier for the computer to understand you. Rather, it’s about making the best possible human -> computer interface. A computer has a fundamentally different place than another human being. Logically, it’s not something external that you communicate with, it’s something internal that you command. Thus, the challenge is to devise an interface optimized for this sort of communication, one that deals with the fact that, physically, computers are still external to your mind.
The title “Smash the Windows” is very ill-chosen on the remembrance day of Kristallnacht (November 9, 1938).
No I’m not. It’s a very well-known phenomenon. You can look up the research yourself if you’re interested. Voice is not something innate to humans.
I’d be very interested to read about it – do you have any pointers ?
And I’d argue with the idea that speech is not innate to humans. Last I heard it was one of the primary reasons we were so much more successful than other species.
Their thoughts are completely abstract — they don’t really talk to themselves while they are thinking.
Are you trying to say we type?
Incidentally, I know *heaps* of people who talk quietly or move their lips while they’re thinking. Covering a whole range of different intelligence levels.
After a thought is completed, it takes more mental resources to form words and voice them than it does to type them.
I find this exceptionally difficult to believe. The “work” in the mental forming of words should be completely independent of whether those words are going to be spoken, written, typed or whatevered.
More importantly, the resources required to voice a thought are the same resources that the brain could be using to come up with the next thought. In contrast, the resources used to key-in a thought are different from those used to form the next one.
I fail to understand why the physical act of manipulating the voicebox requires more “thought” than the physical act of manipulating the fingers.
This basic principle is the reason you can generally type and think at the same time, but not talk and think at the same time.
Any semi-decent public speaker or conman is capable of speaking and thinking at the same time.
It’s also why fast readers don’t voice the words to themselves as they read, because it slows them down and uses up resources that could be better used towards comprehending the material.
What, are you trying to say that typing it as they read *wouldn’t* slow them down ?
I’m using “keyboard” very loosely. The fundamental concept of chording devices is still the same as that of a keyboard. My point is it will be a very manual interface (maybe twitches of your fingers or something else?), not something like speech.
I will agree such interfaces will probably be commonplace in situations where speed is critical (like in the military). I can’t imagine them being taught in schools alongside maths and English.
By all accounts, computers today are *not* simple and intuitive. But many people do nothing all day but sit in front of a computer.
Many, not all – probably not even the majority. And for the vast majority of them the difficulty lies in the usability of the interface, not its efficiency. I can’t imagine this situation changing. The person is *not* the bottleneck.
Because machines will become so important that you won’t have a choice.
No, the tasks they do will. The machine is a tool and it will be designed so as to make the task easier. Simplification of *interface* is the driving force behind most tool refinements.
Our current skillset (specifically, reading, writing, and speaking) is about as efficient as you can get for something designed to communicate with other humans. But computers aren’t other humans. They’re essentially a tool for your mind. If you want to use this tool to its fullest potential, you’ll just have to suck it up and learn another skill. I didn’t want to take 3 years of calculus, but I had to. You’ll get over it. And if you don’t, you’ll just get left behind, like people who can’t read or write or speak properly.
Not going to happen. Just like people don’t need a PhD in English to communicate with their peers, they aren’t going to need a multi-year education in computers just to be able to use them.
You are basically trying to say all trends in interface design are going to reverse. I can’t say I think that will happen.
It’d be easier, but it wouldn’t be *efficient*.
Efficiency has never been a driving force in society.
And if computers do become so widespread, it *will* be a skill you have to learn anyway. I think you’ve got the wrong idea. It’s not a matter of changing your behavior to make it easier for the computer to understand you. Rather, it’s about making the best possible human -> computer interface. A computer has a fundamentally different place than another human being. Logically, it’s not something external that you communicate with, it’s something internal that you command. Thus, the challenge is to devise an interface optimized for this sort of communication, one that deals with the fact that, physically, computers are still external to your mind.
As I said, I can see how the sort of interface you describe will appear in places where it is justified, but it’s not going to become commonplace. People are lazy – nearly everyone will take easy over efficient. Secretaries aren’t going to need to learn a whole new “computer language” just so they can take a memo.
For the past 50 years or so programming skills have become increasingly less valuable. In 1950 you wouldn’t be allowed near a computer without a PhD. Now five year old children use computers freely.
In a decade or so, many IT jobs will be based in low-wage countries. Many IT skills are commodity products unable to command high salaries.
Speech is not what makes us the most powerful species. Abstract thought is. Take a wolf pack: they hunt together and play together. They have to think together. They communicate ideas through howls, body position, and looks.
We have the ability to use tools; so do the other apes. Apes in the wild can be seen using rocks as hammers to open up food. What you won’t see an ape do is use a vine, a stick and a rock to create a better hammer.
Simple experiment: go find a secretary who can type 60 wpm. Give her a document, tell her to read it aloud, and time it. Guaranteed she will make a lot of mistakes the first time around (which is why most speeches are partially memorized, to fix errors). Have the same person type a document into a computer. It will be done with a lot fewer errors, and much faster. The keyboard allows one to type, think and work at the same time.
It is that kind of separation that allows an artist’s work to flow from mind to hand. They may not be able to describe what they are doing, but they can do it.
Speaking requires an enormous amount of thought. Think of how many different languages there are, and remember not one of them is in the language of the human mind. You have to translate thought into speech, but hands can do that faster.
Just something I would like to point out since it kinda fits with this “tangible interface” malarkey.
Remember the operators in Zion in Matrix Reloaded? They were moving things around on a 3D screen in front of them… Now it might just be me, but I would consider typing on a keyboard with my hands resting a lot more comfortable than waving my arms around all day.
For the past 50 years or so programming skills have become increasingly less valuable. In 1950 you wouldn’t be allowed near a computer without a PhD. Now five year old children use computers freely
One thing is to program, to create. Another thing is to use. Yes, today some 5 y.o. kids can use a computer, but that doesn’t mean they can program or create complex stuff with it. You still need some knowledge and neurons to program an OS or a financial system; a kid can’t do that. IT skills are becoming a commodity for dummy tasks, but you still need some smart guys to analyse the problem and design a solution.
In his article, Dylan Evans writes:
Paradoxically, it is only by learning the language of the machines, by adapting to their logic, that we can free ourselves from their dominion. …Remember DOS or the ZX-80, or the old BBC computer? Not much in the way of fancy graphics. Just lots of text, and strange words like DIR and CD.
Someone needs to inform Mr. Evans that typing DIR and CD is not learning the language of the machine; that would be the electronic signals that we represent as 1’s and 0’s. I don’t think any sane human beings want to go back to that, and assembly is only the slightest bit better.
Even writing this much gives this intellectual train wreck of an article more credit than it deserves. Move along, folks, nothing to see…
“One thing is to program, to create. Another thing is to use. Yes, today some 5 y.o. kids can use a computer, but that doesn’t mean they can program or create complex stuff with it. You still need some knowledge and neurons to program an OS or a financial system; a kid can’t do that. IT skills are becoming a commodity for dummy tasks, but you still need some smart guys to analyse the problem and design a solution.”
The financial value of a skill depends largely on how great the barriers to entry are. In 1915 it was vastly cheaper to train a military pilot or surgeon than it is today. A 1915-vintage fighter aircraft was no more expensive to purchase or run than a decent sports car. A modern jet fighter costs at least $20 million to purchase and $20,000 an hour to operate. During WW1 it took a few weeks to train a fighter pilot – now it takes around five years. In 1950 most programmers were PhDs from elite universities, because a high level of mathematical skill was needed to write very efficient algorithms due to the astronomical cost of raw processing power. Computers are now so cheap that even students in poor countries have access to them. There are countries where very highly educated people earn very low wages. Places like Vietnam have professional salaries 1/20th of those in western countries.
The very best IT people (top 1%) will probably continue to earn high salaries. Most of the other jobs will go to countries like India and China.
Also increasing processing power and development tools means that people with lower skills will be able to program effectively. A simple analogy is woodworking. A hundred years ago only someone very skilled could make a quality wooden table. The availability of a wide range of modern tools, presawn timber, varnishes etc means that a person with reasonable skills can now make the same table.
I was 7 in 1985. That year I learned BASIC, on a TI-99/4A.
Within 2 months I was programming racing games and role-playing games. Games I created. I was programming at 7. I learned about overloading memory when I created a program that wouldn’t load because it took more than the 32 KB of RAM that the machine had. From that I learned modular programming. I never got into it afterwards, so it is not my job, but I did it, and I still have the 360 KB-formatted 5.25″ disks with the proof.
I didn’t skim all the comments, so sorry if I’m just reiterating anything already said, but here’s my $0.02…
I know enough about my car to change a tire, change the oil, drive it, work the radio, A/C, and so forth. That is ALL I need to know about the vehicle. I respect the engineers that know the “under the hood” stuff, but I don’t care to know all they know because it is unnecessary for me.
I can write in many different computer languages (like I’m sure many of us do). However, I don’t expect my family or friends to know what I know to use their computer. I very much believe it would make using computers easier for them, but I don’t believe it’s a necessity for them to know their computers “well enough.” This man’s article has missed the mark imho.
This is getting off topic, but I’ll throw in my thoughts. I studied linguistics in grad school, specializing in theoretical syntax/morphology and the history of language. The comment made by R.H. that speech, because it takes longer than typing to produce, is not innate to humans is false. Yes, speech certainly does take a lot of resources to produce, more than typing. But so does squatting twice your body weight. Look up the research: large muscle-group movement under heavy loads takes more brain processing than does simple manual tasking…typing included. However, no one would argue that moving large amounts of weight is less intrinsic to human mental mechanics than typing.
Just because something takes more resources does not mean it is any less “natural” or “innate.” Speech is a complicated process because–in short–one must appropriate organs not originally intended for speech to produce it. The tongue, lips, cheeks, and larynx have been appropriated from other duties and stretched to their limits to allow us to speak. This is precisely like the human lower back: it has become S-shaped and reconfigured to allow us to stand upright–a definite bonus for us as a species–but it is not truly optimized for it…hence the profusion of human back problems. (And no, this is NOT due just to inactivity. Even among physically fit individuals, the lower back is one of the commonest sources of problems. It can be said that we as a species “stood up” too fast.)
Typing, on the other hand, is simple manual dexterity. That’s it. Animals have been doing it for millions of years, and many do it even better than we do. The parts of the brain that do this have been dedicated to it and highly optimized since long before the first utterance.
Not only that, but there is somewhere between three and four times more real estate in the brain devoted to controlling speech organs than to manual control (depending on whose research you follow). That alone, really, can account for much of the research figures. It has been shown that the greater the brain area involved in physical movement, the (slightly) longer the action/reaction time.
So the research numbers R.H. points to do not prove that speech is somehow less a part of our natural human makeup than is typing/writing. In fact, that’s what “baby talk” is: it’s not babies practicing sounds as was thought for decades; rather, it’s babies instinctively exercising their innate wish to speak but being unable to produce a viable product. (Again, look up the research. This is the model currently accepted throughout the community of developmental linguists and neurologists.) Instead, this research shows that speech takes more brain power…and nothing more.
The author doesn’t know what freedom means. Free from what? Advertisements? The requirement of knowledge prior to using an expensive and complex tool? What exactly? Certainly not free from oppression and law.
In the west, at least, illiteracy is practically a thing of the past.
You’re living in a dream world, Neo. There are still many illiterate AND computer illiterate people here in the west. Wake up!
Remember the operators in Zion in Matrix Reloaded? They were moving things around on a 3D screen in front of them… Now it might just be me, but I would consider typing on a keyboard with my hands resting a lot more comfortable than waving my arms around all day.
Depends on what you’re doing. Mice and their equivalents aren’t really ideally suited for making things with programs. I for one would much rather wave my hands all day than point and click (carpal tunnel syndrome ain’t a picnic).
What makes us the most powerful species is our genetics. We are 99% identical. And we all have this ability to create new things by imagining them in our minds. Our minds separate us from the rest of the species. And we all have the same ability to learn.
IQ, literacy, etc. is a way people like to stereotype each other as being stupid. But the truth is we’re equally capable of learning the same things. Some people learn faster because they have more interest in the information they are learning. The rest of us are just lazy.
Get interested!
I did not imply that because speech is slower/more resource intensive than typing it is not innate. In fact, innate was probably a bad choice of words. My point was that formal speech is not something that just develops (like the capacity to manipulate physical objects) but something that must be actively taught. Further, speech is not fundamental to humans — they can think without speaking. Thus, converting thoughts to speech takes work, and that work is more than what is required to type that thought in. Either way, the two points are logically separate.
What do you think is a faster path to carpal tunnel — waving your hands, or moving your fingers? Either way, I highly doubt interfaces of the future will be so manual. I’m guessing that, not too far in the future, it will be practical for machines to detect minute movements of the fingers, and we can use that, coupled with a chording-style keyboard system, to replace physical keyboards.
I’d be very interested to read about it – do you have any pointers ?
>>>>>>>>>>>
This is a transcript of an interview with some professors at the University of Maryland. It and the referenced papers should answer most of the points you made about speech vs typing:
http://discuss.washingtonpost.com/wp-srv/zforum/02/washtech_hcil050…
Many, not all – probably not even the majority. And for the vast majority of them the difficulty lies in the usability of the interface, not its efficiency. I can’t imagine this situation changing. The person is *not* the bottleneck.
>>>>>>>>>>
The person isn’t the bottleneck, the *interface* is the bottleneck. When I’m writing code, or writing a paper, I have to manage a dozen different data sources at once. Let me give you a real-world example. A couple of years ago, I needed to write a paper on the status of TB research. Every few minutes, I’d need to call up statistics and data to continue writing. This would require completely breaking my train of thought as I had to fumble between the windows, Google searches, and various papers and articles. Contrast this to an ideal human -> computer interface. You’d just think about a given topic, and data would be delivered to you as an extension of your own memory. Since that’s very sci-fi at the moment, we need to get as close as we can with the technology we have. Cumbersome, slow, “real world” interfaces are not a candidate for this — we simply don’t think that way.
No, the tasks they do will. The machine is a tool and it will be designed so as to make the task easier. Simplification of *interface* is the driving force behind most tool refinements.
>>>>>>>>>>>
Things should be as simple as possible, but no simpler (Einstein). If a harder-to-learn interface is twice as efficient, then it is worth it to learn the harder interface. As computers become more widespread, they will become critical components of more peoples’ jobs. When that happens, corporations will *demand* that their employees be able to use computers as efficiently as possible, because of the huge productivity gains that are possible.
Not going to happen. Just like people don’t need a PhD in English to communicate with their peers, they aren’t going to need a multi-year education in computers just to be able to use them.
>>>>>>>>>>>>>
I’m not saying that people are going to need the equivalent of PhDs in English to use computers. I’m saying that they’re going to need the equivalent of a high-school degree. Employers just won’t hire people who can’t read or write, and if computers become as important as most people think they will, they just won’t hire people who don’t know how to fully utilize them. Multi-year education sounds involved, but it’s really not. Children spend more than a decade learning how to read and write. They spend almost as long learning how to speak properly. Hell, they spend years (in the US, anyway) learning about Native Americans! They can certainly fit computer use into the curriculum.
Efficiency has never been a driving force in society.
>>>>>>>>>>
Hello? Capitalism? If your workers can use their tools twice as efficiently, you can bet that your company is going to be more competitive. This will become even more noticeable in the future, because the trend is towards fewer physical jobs and more intellectual ones.
Secretaries aren’t going to need to learn a whole new “computer language” just so they can take a memo.
>>>>>>>>
What about when her boss asks her to compile a summary of major memos circulated in the company concerning a particular project? If she can access those memos more efficiently, she can do her job better. What about a paralegal who needs to write a research summary for a lawyer? If an advanced interface makes the dozens of different databases involved easier to juggle, it will be worth it to her. What about an insurance agent writing a report about the accident history of a particular category of drivers? Wouldn’t his job be easier if he could access all the relevant information without breaking his train of thought?
To figure out whether waving hands can cause carpal tunnel, do a study on people who use sign language every day. Not only are they waving their hands but also moving their fingers. Since sign language is a set of routine movements, it is very similar to routine mouse movements.
To help speech recognition the development of focused microphones will be needed. Similar to the technology behind focused speakers. There are some uses for speech recognition but in a limited manner.
A lot of the communication between humans is done through voice and gestures. So if a proxy could be built to communicate with humans in voice and then translate our wishes to a machine using the machine’s language, that would be nice. But that is also very sci-fi.
I did not imply that because speech is slower/more resource intensive than typing it is not innate. In fact, innate was probably a bad choice of words. My point was that formal speech is not something that just develops (like the capacity to manipulate physical objects) but something that must be actively taught.
If I’m not mistaken research has shown that even in the absence of formal teaching, young humans (and apes) develop languages and communicate.
Further, speech is not fundamental to humans — they can think without speaking. Thus, converting thoughts to speech takes work, and that work is more than what is required to type that thought in.
You haven’t explained why speaking a thought is somehow fundamentally different to typing it. You still have to get from “strange brain-internal language” to “arbitrary communication language” whether you’re speaking or typing.
You’re mistaking communication for speech. The former is very concrete and widely used by many organisms. The latter is very abstract and limited (so far) to human beings. Also, many current theories hold that the latter is more learned than innate. According to the theory, there is a “critical period of language acquisition” during which language can be taught. Thus, human beings are born with the capacity for language, but the actual skill is something that must be taught over a period of many years.
You haven’t explained why speaking a thought is somehow fundamentally different to typing it. You still have to get from “strange brain-internal language” to “arbitrary communication language” whether you’re speaking or typing.
>>>>>>>>>>>>>>>
The translation process is more complex for brain -> speech than for brain -> typing. Human beings have developed a lot of special machinery that we use to control the complex process of forming words and voicing them. In contrast, many primates have the fine-motor control necessary to accomplish something like typing. Unfortunately for us, the specialized functions necessary for speech seem to share brain space with abstract thinking processes.
We are doomed! Maybe not us, but our children for sure. Gates is investing in other things that we should be more afraid of in the long term.
http://www.guardian.co.uk/microsoft/Story/0,2763,1063776,00.html
As with the success of Windows, I can just anticipate another success…
We are going fast towards the idea of a movie called The Green Product. For the ones that didn’t see it: the green product was the only food product left on the planet, and it was made from dead bodies.
The movie you’re referring to is called Soylent Green.
http://www.sciflicks.com/soylent_green/
This is like saying everyone needs to know how to do a frame-off restoration of a car, how to perform heart surgery, or how to build a house. I do agree that most people are ignorant when it comes to their computer. I have family members that call their system their “hard drive,” and most do not know the difference between the Web and the Internet.
Most people I know under-utilize the software on their systems because they don’t take the time to learn what it’s capable of doing. For instance, how many people purchase a computer with some kind of office suite and don’t know how to use a spreadsheet to create an amortization or loan schedule? Or don’t know how to use mail merge with their word processor?
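(For what it’s worth, the amortization schedule mentioned above is just the standard annuity formula applied month by month. Here is a rough sketch in Python of what such a spreadsheet computes; the loan figures are made up for illustration.)

# Rough sketch of a loan amortization schedule (the figures below are made up).
def amortization_schedule(principal, annual_rate, months):
    """Yield (month, payment, interest, principal_paid, balance) rows."""
    r = annual_rate / 12.0                                   # monthly interest rate
    payment = principal * r / (1.0 - (1.0 + r) ** -months)   # standard annuity formula
    balance = principal
    for month in range(1, months + 1):
        interest = balance * r
        principal_paid = payment - interest
        balance -= principal_paid
        yield month, payment, interest, principal_paid, max(balance, 0.0)

# Example: $20,000 borrowed over 5 years at 6% APR
for row in amortization_schedule(20000, 0.06, 60):
    print("month %3d  payment %8.2f  interest %7.2f  principal %8.2f  balance %10.2f" % row)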
How many people buy a computer and don’t have the ability to organize their files, and then can’t find anything when they need it on their system? How many people use Outlook Express because it came on their system, and don’t know that there are other e-mail programs out there that are more flexible?
Most people look at a computer the same way they look at a car or a lawn mower. It’s designed with a specific task in mind. They don’t want to know how it works, or what all the buttons do. When they don’t need to use it, they want to put it away and forget about it. The majority of people’s lives do not revolve around their computer and all the things it could be used for. It’s just a tool.
The article is about future interfaces. Unless we grossly under-utilize the potential of the computer, they will soon be a major factor in everyone’s lives. Consider that already, most office workers spend much (most?) of their time in front of their computer — to do paperwork, to communicate with colleagues, to do research, etc. Already, things like AIM have started to replace the phone as the preferred means of communication between young people. Already, e-mail has started to replace letters as a means of messaging between everyone. Already, it’s nearly impossible to get any work done at major universities without access to all the online resources professors put on the inter/intranet. Already, engineers spend much of their time in a virtual world designing products. Already, managers have moved to programs like MS Project to manage project schedules. Then consider online stores, online bill-paying, online banking, even online food-ordering!
The computer is not a tool, but a meta-tool. It’s a way of accessing the tools you need to do your job. It’s quickly becoming apparent that, in the near future, most skilled jobs will require a great deal of computer use. Even personal life will require some level of computer use. Thus, the situation is not at all analogous to saying that you don’t need to know how to rebuild an engine to use a car. Rather, it is analogous to saying that you need to be able to read and write, even if you don’t plan to become a novelist!
What do you think is a faster path to carpal tunnel — waving your hands, or moving your fingers?
I can’t quote any studies on this, but first-hand experience has taught me to avoid using the mouse for very long without breaks.
Waving your hands is a very natural thing to do. I’d go so far as to say that that’s the way hands are meant to be used. Just look at what other apes do all day.
It’s a pity that the human animal is so willing to adapt. I don’t see how ending up atrophied in front of a computer is in any way good or necessary.
I don’t disagree with what you are saying; you are correct on every point. I’m just saying that the avg person looks at the computer as a tool. Maybe I’m wrong, maybe it’s just the avg person 45+ years of age that lives in western civilization.
But very few professional people I know, barring those who have chosen computers as their profession, look at the computer as anything more than a tool after they have graduated from college. I would like to see them utilize it better, but it’s going to be quite some time before you see it being used to its full potential.
I work for a division of General Dynamics, and with a company like that you would expect that the employees would utilize the computer more than, say, a company like GM. But they don’t. I have friends and family that work at NCR, Standard Register, Lexis-Nexis, IBM, SAIC, and various other technology-driven companies. And I can tell you first hand that these companies do not utilize computers and related technology in this manner. E-mail is the number one used tool, most have a policy against IM, and forget video conferencing; you are lucky if you can get them to use virtual conference rooms. Your phone/voice mail are not integrated into a collaboration suite; hell, most of them are still using 10/100 LANs, and it will be 2-3 years before you see Wi-Fi networks in most of these places.
Universities are awesome places to implement these technologies, and they should because they are educating the next generation of professional workers, but in the real world most companies don’t have bottomless pockets of cash. They look at how to keep their expenses low, and most equate changes to core technologies with a major expense. They have to pay for the hardware, software, educating employees on how to use the technology, the cost involved in supporting the technology, and, if they become dependent on it, the cost of lost opportunities when the technology fails.
Many people don’t rely on these technologies to make it through their daily lives. Some do – I do – but most still look at it as a tool/utility, the same way they look at their TV, washing machine, or toaster. These are the same people who won’t use ATMs, pay-at-the-pump to fuel their cars, or self-serve checkout lines at the store.
To assume that all people will use these things just because they are available is just as much a misconception as thinking everyone has gone out and bought a DVD player because it’s better than VHS, or replaced their 35mm camera with a digital one. Things will be different 10 years from now when the old guard is replaced by technology-savvy people, but there are still people in the US that don’t have electricity or indoor toilets, and those have been around for more than 100 years.