A couple of days ago I read a blog post by Stephen Ramsay, a professor at the University of Nebraska-Lincoln and a Fellow at the Center for Digital Research in the Humanities. In it, he mentions that he has all but abandoned the GUI and finds the command line to be “faster, easier to understand, easier to integrate, more scalable, more portable, more sustainable, more consistent, and many, many times more flexible than even the most well-thought-out graphical apps.” I found this very thought-provoking because, like Ramsay, I spend a lot of time thinking about “The Future of Computing,” and I think that the CLI, an interface from the past, might have a place in the interface of the future.

Ramsay goes on to describe scenarios, such as building a music playlist (“create playlist ‘country plus katy’ from last 11”), wherein a simple typed description would be much faster, and in fact much easier (even for a n00b), than all the clicking and dragging and menu work it would require in a GUI. He gives this example:
Ever play Pictionary? Pictionary is a brilliant game. You get a card that describes some concept—say, “return address.” You have to communicate that idea to other people, but here’s the thing: you can only use your index finger (extended, McLuhan-style, with a pencil). Before long, people are laughing. “That’s not an envelope!” “Yes! Look, that’s a house, and that’s a letter, and that’s an arrow!” Much laughter ensues.
In the real world, we’d say, “I’m thinking of a return envelope.”
I realize the analogy is a bit strained, but my point is simply this: the idea that language is for power users and pictures and index fingers are for those poor besotted fools who just want toast in the morning is an extremely retrograde idea from which we should strive to emancipate ourselves.
He’s absolutely right that sometimes a few words are worth a thousand pictures (to reverse the common aphorism). I think most nerds could probably relate most easily to the kind of CLI we’re really talking about by thinking of the voice interface on the starship Enterprise. When Captain Picard orders his tea, he just tells the computer what he wants. In Star Trek IV, Scotty tries to talk to the 1980s PC, then realizes that he has to type, then, improbably, actually knows how to type. Even if the computer didn’t have to understand our spoken language, what if it could understand what we typed in, and did that?
Unfortunately, speech recognition is probably more advanced than machine interpretation of natural language, so by the time we can tell a computer to find every document related to our comparative literature class from 1998, it’s going to be just as likely to understand us speaking it as typing it. There are some computers that can understand human language pretty well, like the one that beat Ken Jennings on Jeopardy!, but it’s probably going to be a little while before your iPhone will be able to.
And unfortunately, as long as the CLI means remembering what “ls -l” does, it’s not going to have mass appeal. Only by letting people describe actions in their own words, and by understanding those descriptions, is a command-type interface going to catch on. On the other hand, I’m a big fan of hybrid interfaces. I’m a dedicated Quicksilver fan, so a lot of what I do on a computer involves keystrokes and descriptions. On Windows 7 and iOS, I usually prefer to search for an app rather than poke around through the menus for it.
And Star Trek notwithstanding, I’m pretty sure that speaking to our computers is always going to be a secondary computer interface, reserved for when our hands are occupied, such as while driving, cooking, or working with tools. I doubt the keyboard is going anywhere.
The command line, the CLI, with its usually somewhat arcane commands, is a different concept from natural-language interpretation. Treating them as very related (as the article above does, mentioning the CLI repeatedly and contrasting both with the GUI…) is possibly counter-productive.
As a side note, IIRC there was also some research suggesting that, in many scenarios, the CLI is merely perceived as faster than the GUI (supposedly because the CLI requires greater attention, which masks the perception of time); but actually timing some common tasks suggests otherwise.
And scifi “computers” are mostly just a storytelling tool, to woo the audiences with cheap tricks…
PS. WinPhone 7 is a more real-life example to consider. Very text-based. But TBH… I’m not entirely convinced it fits the way our minds work (particularly with the early demonstrations of WP7, I can’t get rid of the feeling that the presenters appeared relatively lost; people who were meant to promote the product and show how nice it is, hence supposedly familiar with it). It took us some time to get decent ~WIMPy paradigms, and maybe they are what works for humans… (obligatory car analogy: kinda like it took some time to develop the steering wheel; since then, no “refinement” has been able to replace it; maybe autonomous cars can change that, maybe a central “swinging joystick” controller would be a fit in them – but that would be only because of a very different overall approach, almost like “computer, do what I want to be done, so that I don’t have to (authorisation: 8295 somebody-who’s-not-quite-sure-why-he-is-needed-onboard)”)
zima,
“As a side note, IIRC there were also some research suggesting that, in many scenarios, CLI merely is perceived as faster than GUI (supposedly because CLI requires greater attention, which masks perception of time); but actually timing some common tasks tells otherwise.”
This is desperately asking for a citation.
My thought is that it is true that CLI apps have a higher learning curve, but knowledge of them pays off very quickly.
To the extent that GUIs are inherently a superset of CLIs, then in theory a GUI should be every bit as good as a CLI. However, in practice GUI programs are incredibly difficult to automate. It’s usually quite trivial to perform automation on top of a CLI, and incredibly difficult with GUI apps.
We could always say that this is a weakness of the specific GUI program, which failed to incorporate macros, rather than a problem with the GUI model in general. However this is something many CLI apps get for free.
Even something as simple as renaming many files is error prone and painful without CLI tools.
You might need to replace “Company” with “Company, Inc” in all documents where it hasn’t already been changed. If your documents are only accessible through a GUI interface, then you (or the intern) might need to spend several hours doing it manually.
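A minimal sketch of that job from the shell – assuming plain-text documents under a hypothetical docs/ directory, and GNU sed (the idempotent pattern is my own choice):

  # Rewrite "Company" to "Company, Inc" in every .txt file under docs/.
  # The pattern also matches the already-updated form and rewrites it
  # to itself, so files changed earlier come out unchanged (idempotent).
  find docs/ -name '*.txt' -print0 |
    xargs -0 sed -i -E 's/Company(, Inc)?/Company, Inc/g'

The intern gets the afternoon back.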
Don’t make it out to be anything more than what I wrote and you quoted (which includes “side note”, “IIRC”, “suggesting”, and “many” (!= “most”, for example)); that is essentially everything I remember about it (well, plus that it was some research with a downloadable PDF, I think; and perhaps that it covered more real-life, poorly-structured, end-user file management).
(So it’s also not about finding a near-absolute edge-case counterexample – a simple, almost “atomic”, highly repeatable text edit of every file at hand, which is pretty much what batch editing is about.)
Seeing the GUI as a superset of the CLI is probably also not particularly helpful when tons of the data people care about doesn’t really exist in textual form, but in graphical form (sure, pedantically we might go down to how it is represented under the hood, but…)
That’s kinda related to what I was pointing at: unnecessarily rigid distinctions and/or groupings of concepts. For example, is Google – or, even better, GMaps – a graphical or a textual UI?
Well… both. It would be mostly quite horrible when presented and manipulated in the usual pure-CLI fashion. But you do input textual commands (there’s just no reason for them to be very CLI-like); and routine automation is even expected to be hidden behind a nice GUI, here and there.
You do realise he said “superset” not “subset”?
Well, GUI is not inherently a superset of CLI – it’s a COMPANION. Meaning, each is better at some tasks than the other, so they go together like sugar and kool-aid, and I wouldn’t be without either. Why must everything be turned into a pissing contest?
When it comes to GUIs, some have said how it’s easier to pull up files in a CLI, like when you need to copy all files that are older than a certain date and with a certain extension. But what if you need to copy about 50 random files from a directory of 300 files that don’t have any specific pattern to them? In that case, it’s easy to CTRL+CLICK select the ones you need and drag them wherever. Similarly, the article mentions how it would be easier to manipulate music playlists, but what if you’ve got a playlist with 300 songs, and you want to custom sort them? Again, it’s going to be easier to just drag stuff where you want them.
Of course, one could list dozens of examples where CLIs would be more useful… as I said before, it’s not about which one is ‘better’, because that depends on the context.
Actually, no it isn’t… at least on Windows, it’s quite easy:
http://www.autoitscript.com/site/autoit
The only exception to this is if apps don’t use standard widgets, which is one of about 486472343 reasons why I’m a fan of standards in this regard. Of course, on Linux, this would never work, since the general assumption seems to be that having 900 different toolkits is a good thing. So if GUI automation is hard, it’s probably because of a piss-poor implementation, or someone hasn’t written the tools to do it. But doesn’t that work the same way on CLIs? I mean, SOMEONE has to build in pipes and stuff if you really want to do something useful with it …
Not necessarily… it would work, but you have to choose the solution made for the toolkit that your app is built upon. Plus sending key and mouse events might work the same on all toolkits (but the recognition of widgets is not so portable).
Plus Qt offers an easy way to integrate scripting into your app (if it is open source or you are developing it).
Relying on widget types and IDs for automation is inherently error-prone and extremely brittle. It can never be more than a band-aid in the absence of a real automation framework.
All types of problems crop up: when suddenly an error message is displayed, when the dialog flow changes because of some unforeseen state, when timing issues crop up. Not to mention how enormously difficult it is to have robust error control in such an automated script. Or when a new version is released and IDs or widget types change, when drop-down boxes don’t contain the expected (and hard-wired) values, etc. etc.
I wouldn’t fault Linux for this. You were never supposed to rely on widget types and/or ID in the first place. The only ones who can reliably use those are the very developers of the apps who can use it for automated testing. The rest of us should stay away from it.
A much better solution is being developed with PowerShell. PowerShell “cmdlets” are suitable for sharing implementation with GUI application “command” patterns. Indeed this is what Exchange and several other MS server apps do now: They implement the cmdlets *first* and then build a GUI on top of those. The GUI then becomes the composition tool for PowerShell command pipelines.
The Exchange admin GUI will even show you the very PowerShell command pipeline it will execute and let you cut-and-paste it for automation.
PowerShell cmdlets are designed (unlike the typical *nix shell) for in-process execution, hence they can consume and produce strongly-typed in-process objects, which makes them much more usable for actual GUI automation. In essence, the GUI becomes a way to combine and design CLI commands.
I’ve never had an issue with timing when using ControlSend to a certain widget/ID. But, as you state, you may have to make some minor modifications if/when a new version is released and the developer changes things that directly affect your script – but wouldn’t that be the case for CLI apps as well, if they change the command-line syntax?
Of course, GUI automation isn’t the ideal way to get things done; I was just pointing out that it’s not exceedingly difficult either, and certainly not impossible. In fact, I do it all the time.
The difference is that there is absolutely no obligation on the developer to keep widget types and/or IDs stable across versions, much less behavior.
The developer of a CLI command has the option of introducing new parameters as non-breaking changes, or an entire new command if breaking changes are needed. That can keep the old command running.
More importantly, the developer of a CLI tool understands that she has an obligation to carefully avoid breaking changes as much as possible. Having some script depend on the tool is the *norm*. That is not the case with GUI keyboard/mouse-automation.
I have no doubt that it is possible (with some work). But as I pointed out it is not something you can build stable scripts (stable over time) upon. And the scripts tend to be horrible and have very poor error handling.
http://www.youtube.com/watch?v=R8gF0KTfMrQ
My apologies in advance for being the first to bring up apple, but I think it’s relevant!
In this video it is mentioned that one guy spent 45 minutes writing an email to his friend with this MacBook Wheel, and then they show a two-sentence email…
Also the Hummingbird battery coming with this $2600 laptop can power the computer for almost 20 minutes…
One must really be an Apple fan-boy to call this progress…
Maybe this video is a joke and not portraying an official Apple product…
Maybe this video is a joke and not portraying an official Apple product…
Bingo!
I think the problem is that you cannot embed a youtube video inside the <sarcasm> and </sarcasm> tags
you thought that was real!?!? haha it is not, it is a joke video made by the onion news network, look them up, they have some pretty funny stuff!
Granted, it’s fake, but they really did a great job with it. I love my Apple products, but getting rid of the keyboard and replacing it with a click-wheel was a classic.
Along those lines, sometimes you see these “how do I fix XXXXX in iTunes” questions, and half the time the answer is “hold cmd-option while launching it” or “press option while clicking on the Update button”. Uh… I know it’s just me, but when did having invisible options become easier than just putting them under an “Advanced” options pane somewhere for special tasks that most people don’t need to see? Totally hidden is easier? Really?
And that’s where the Onion video got it so right!
See the date it was uploaded: April 1, 2009
Love the ending part.. “not just dicking around”
Also possibly relevant : http://www.youtube.com/watch?v=GdIanqM_RZ8
It may be some different switch, or some longer expression, but it’s always going to be specific, nit-picky, and pedantic. It will also be equally “undiscoverable”, which is the primary complaint against the CLI.
GUIs are basically discoverable for most domains. Specific vertical applications (e.g. 3D modelers) have so many new concepts that outside training and information are required simply to understand what options might be available in the interface. Without that domain knowledge, discoverability isn’t useful.
But with that knowledge, the GUI opens up many of the options by presenting them in front of the user. Click around and find what you want to do, or discover something new.
Most modern command lines don’t have that, the Unix one especially. Not easily.
If I know a command, I can learn more about it via its man page. But I can’t readily do “man commands” to get a list and short summary of all the commands on the system. This could be done, it’s just not.
But either way, regardless of how you learn about an action, representing that action to the machine will be a detailed request. There are always defaults (‘ls’ vs ‘ls -l’), but the last thing that you want to do with a computer is have a conversation with it. You don’t want it asking you questions, validating your choices. That’s what makes the command line efficient. It does what it’s told. But to tell it correctly, you have to be very specific.
Consider examples from the real world.
I eat at “In-N-Out”, which is a burger stand. I know exactly what I want, and I’ve honed my order to work within their system to make it as efficient as possible. “Double double, lettuce, tomato, grilled onions only.” I’ve learned this USUALLY expresses exactly what I want with little interpretation. I could order it “double double, no sauce, grilled onions”, which is the same end result, but they almost always come back with “you want ketchup or mustard instead?”. A needless question, but obviously I’m not clear enough.
Still, I occasionally get the server who asks the ketchup question.
Similarly, with a generic drive-through, I follow a specific regimen for the order. I make the basic order: “#1 please”. Then I wait for the questions. Why don’t I just say “#1, hold the cheese, large fries, and a coke”? That’s what I want. But a combination of the order speaker, the noise inside, and the skill of the server trying to work the machine (which is very specific) inevitably overloads them and they just ask these questions anyway. So, I let them pace the order. In-N-Out tends to have a simpler system and better training; most restaurants don’t.
Finally, when ordering eggs, order them “scrambled well” rather than “scrambled hard”. “Hard” gets conflated with “over hard”: you say “scrambled hard”, and either the server hears “eggs hard” or the cook misinterprets it, and you get the wrong result.
The point of this diversion is simply that even in human communication, you have to be very specific. And computers are much worse at interpreting people; more so, folks hate “interacting” with computers more than anything (witness the general “praise” for voice-response applications).
The command line is great when you know exactly what you want, as it’s very precise and concise. The problems with “-l” etc. are really secondary, because folks in time come to want those abbreviations.
Most any user would loathe being forced to routinely type:
list files with full description sorted by modification date in descending order.
Maybe some context would help, maybe always doing what it did last time: remembering options all the time, having really good defaults, and simple setting, resetting, and overriding of those defaults. But then everyone’s experience becomes different. When your “ls” is different from my “ls”, expect support to explode.
Computers are hard to use. They’re awful. Truly awful. But, then, so is everything else. Communication is hard, ask any married couple. How can you expect better from complete strangers and machines?
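(For reference, the terse Unix spelling of that whole verbose request is short indeed – assuming GNU ls:

  # long listing, sorted by modification time, newest first
  # (descending is already the default for -t)
  ls -lt
  # the long-option form reads a little closer to the English:
  ls -l --sort=time

which is exactly the kind of abbreviation people come to want.)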
You can do ‘info’.
> list files with full description sorted by modification date in descending order.
Excellent example. When was the last time you *needed* to list all files sorted by modification time? Because I can’t remember such a time. I’m always amazed how people browse their files by *manually* searching the list presented by their computer. I thought computers were meant to do the searching for you?
Many don’t even know they can just start typing the first characters of a file’s name if they know it. By just browsing and clicking on names, they also usually don’t know what the files and folders are called; they only learn to recognize them when they see them.
For most purposes, I find GUIs horribly slow and inefficient. I use them for graphics work and browsing, I didn’t find good CLIs in those areas or they were too expensive to learn (e.g., I still use GIMP only once a year, so I’m not going to learn the shortcuts there..).
I use it quite often. If you can’t remember the name of the file but can remember the time you used it, then modification time helps. It even helps when you cannot remember exactly when you last used it by looking for it as an offset relative to a file that you do remember modifying.
Just because you don’t use it, doesn’t mean it isn’t valid.
All the time – it’s totally useful for development. Say I’m exporting from a graphics app and then need to copy to another directory – I can easily see which files have most recently been updated (both source and resources) and know that things are in the right state. It’s useful for photos if you browse the directory containing them, documents, etc. Name is useful if you know the name, but by last-modified is incredibly good to have too.
And speaking of command line, I just needed to concatenate multiple .pvr graphics files into a single file, and the GUI tools I have don’t want to do that. Pity the person who doesn’t know how to cat 1.pvr 2.pvr … > merged.pvr. Just sayin’.
How about using the “apropos” command? Or man -k?
http://en.wikipedia.org/wiki/Apropos_%28Unix%29
Oh yeah? Try
man -k .
(don’t forget the . at the end, it’s important)
But given that you didn’t know this I think this really only backs up the poor discoverability argument.
If you think about it, any CLI is just yet another programming language. Sure, there are problems that can be easily solved by a CLI. But to think that a text-based approach will quickly solve all problems is a bit naive.
CLI is very good at being very specific about what you want, and more importantly chaining (e.g. pipes) existing tools to create a new procedure. Again, this is just yet another programming language…
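A tiny sketch of that chaining, with some invented .log files to hand:

  # four single-purpose tools composed into a new procedure:
  # the five most frequent ERROR lines across all logs
  grep -h 'ERROR' *.log | sort | uniq -c | sort -rn | head -5

None of those tools knows anything about “log analysis” – the pipeline is the program.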
But if everyone uses GUIs nowadays, it’s not just because they are too stupid to understand the CLI. A GUI can present multidimensional information much more easily, and the user can interact with it very intuitively. Think about Google Maps on a touch-based device – how do you do that in a CLI?
Another example I can think of as a programmer is GDB versus Visual Studio Debugger. Another would be SVN CLI versus TortoiseSVN. Each time the GUI version presents the information in a better way to the user, and ironically also provides more features.
If you’re looking for a quick procedural approach to solve a problem then CLI is great. If you have to deal with any form of multidimensional information that’s not quickly and easily filtered, then no, CLI is not what you want.
Having used both, I’d say they are equally powerful, as each has faults that are the other’s strengths.
For simple debugging, VSD is a great tool. But you can’t get the same kind of in-depth diagnostics or extensions as GDB.
VSD doesn’t have a scripting language, or even an API that many others can use – i.e. you’re stuck with MSVS, WinDebugger, or one of the few other GUIs provided by MS to utilize VSD – and you can’t extend it very easily at all.
GDB, OTOH, has a full API that a number of different applications use to provide a GUI on top of it – some of which are very similar to VSD.
But even if you want to stick with the CLI, GDB has a fully programmable interface through an internal scripting capability, and even the ability to support Python additions.
So where in VSD you would have to see that array[1].d->array is a string (to borrow from Qt’s QString), in GDB you can add support for such types via one of two interfaces so that array[1] just shows a string when you call ‘print’, and you can access its functions too – e.g. array[1].size(). VSD has no equivalent extension capability – where it can be extended, it requires extensive work through customized DLLs.
This is more a case where the main project (SVN) does not want to implement certain features, but enables derived projects (e.g. TSVN) to do so easily, and those derived projects decide to do so.
In other cases, it is simply that the GUI makes it easier to follow a work flow process.
That said, I still tend to use the SVN CLI over TSVN for more complex things as it is simply more straight forward. (And I started using SVN via TSVN.)
Sometimes yes, sometimes no. In both your examples they both have trade-offs, strengths, and weaknesses – some of which you seem to have overlooked.
So I’d still have to agree with TFA: the more advanced functionality that even non-power users would find very useful is typically in the CLI program – GDB and SVN – and in easier-to-use form than the GUI provides, should the GUI support it at all.
VSD is also extensible via scripting and plugins. For example, Boost publishes a text file that allows it to better visualize some of Boost’s common classes (e.g. shared_ptr, optional, etc.).
I’ve never seen any developer truly using the extension capabilities of either GDB or VSD.
It’s the Hello World problem all over again. No matter how powerful a language or an interface is, if it takes 200 lines to print a ‘hello world’, it won’t get popular.
CLI vs GUI is for me the same problem. It doesn’t matter how powerful CLI can get, if 99% of the base usages are less efficient.
Take the Google Maps example and the SQL-like CLI that Alfman posted below. Sure, it’s extremely powerful, but it fails to account for the information consolidation that such an interface lacks:
– in Google Maps I can easily see where I am and where I roughly want to go. I can drag and drop the source and destinations with the tip of my finger. I can go into street view just by two finger-pushes, and find my view around town, visualizing the neighborhood before I even got there. Oh, and why not make a restaurant reservation via Google Map while I’m at it. Once I’m done, I can email it to a friend with 3 clicks.
There is no way a CLI can get close to the efficiency of a GUI here, as the GUI can provide tighter integration of several multidimensional concepts than a CLI can (at least not so easily).
The reason I cited TortoiseSVN was similar. Sure, the CLI is more powerful. But in a realistic situation, the information integration provided by the GUI is simply superior for most usages.
As another user posted, the CLI requires you to know exactly what you want and how to express it precisely. However, most of the time problems are fuzzy. You roughly have an idea of what you want, but you’re not fully sure how to get there.
Taking TSVN as an example again, the GUI makes it a lot easier to manage the working copy and its associated server-side repository, as the user does not have to remember all the URLs by heart. He can browse to the location he wants and then directly access the available operations on it.
The human brain retains 9 times more information when visually presented or visually supported (in contrast to text or speech). Knowing that, it’s hard to defend that CLI is superior to GUI in general.
Yes, extending VSD is not very common.
Extending GDB is far more common than you may realize, as it’s part of how you use GDB efficiently.
Maybe for VSD. But it’s quite easy with GDB – it’s a simple function – so maybe 4 lines of code to do ‘hello world’.
At the same time, GUIs are only as powerful as they are made to be, and often are not powerful enough.
Google Maps is certainly one thing that is designed for and can only really be operated by GUI, and one that can be augmented through scripting (ala its JavaScript APIs).
In all, there are typically four realms of software:
1. Software that has no user interface
2. Software that only has a CLI
3. Software that only has a GUI
4. Software that has a CLI or GUI but is really only good with one of them (which may be either).
Google Maps certainly falls into #3.
Whether or not a CLI or GUI is more efficient is wholly dependent on the design and interface of each.
Many times a GUI is simply a wrapper around a CLI, dumbing down the CLI for people that don’t want to deal with all the work – example: cd/dvd/bd burning on Linux typically is a wrapper around a series of command-line programs that make it easy to gather the data to pass to the program.
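(Under the hood, such a GUI burner is often doing little more than this sketch – assuming the cdrkit tools, and a folder and device path of my own choosing:

  # build an ISO image from a folder, then burn it to the first drive
  genisoimage -r -J -o backup.iso ~/backup/
  wodim dev=/dev/sr0 -v backup.iso

The GUI's real job is collecting those arguments for you.)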
However, CLIs can also be very easy to use and extremely efficient. That is the nature of the interfaces on a *nix system – that’s how *nix systems are tied together. Some programs (e.g. sed, perl, grep, find) are more complicated while others (e.g. if, [[, ]], then, test) are extremely simple, and yet others (e.g. bash, ash, dash) provide the glue to bring it all together.
Now comparatively, DOS Batch vs. PowerShell is a whole other world. PowerShell is way too complicated to do anything useful. DOS Batch was rather simple, but well underpowered – largely due to the limitations imposed on it.
SVN CLI is very, very simple. TSVN is also very, very simple. And yes, TSVN is great for 90% of use cases; but there are times (e.g. some merges) when it is just easier to use the CLI – and far fewer steps too.
SVN CLI is very efficient.
However, unlike other things – like TortoiseCVS – TSVN is not simply a wrapper around the SVN CLI. SVN is, rather, by design a library that is integrated into multiple things directly. So TSVN uses the library and provides one style of interface, while SVN CLI uses the same library and provides a different interface. But that’s simply the design of SVN and why it is so extensible.
(Other software, like eSVN, WinSVN, etc sadly do not make proper use of the library and have become simple wrappers around the SVN executables.)
That, again, depends on the CLI interface.
As I said – CLI vs. GUI very much depends on the design of the interface of each. Searching is rather fuzzy, and hard to get right in a GUI interface. It’s equally hard with a CLI interface; and neither really do it right.
That same info is easily available from the SVN CLI too – pretty much in the same manner. Whether using ‘svn list’ to look at the repository, or ‘svn info’ to look at a working copy. Better yet, ‘svn propedit’ will use your favorite editor – whether vi, notepad, emacs, nano, etc. TSVN, OTOH, only lets you select different diff & merge tools.
However, you can’t script a GUI nearly as well.
Sure there are some things out there that can do it (e.g. DejaGNU can for testing GUIs), but it just doesn’t work very well.
Scripting CLIs can work extremely well when you have the right environment – which the Windows Command Shell is not.
Take finding a file on disk for example. A GUI interface can provide a basic search functionality, even integrate RegEx. But you’re limited by the abilities of the GUI.
Comparatively, with a CLI you can pipe numerous aspects together – include this, exclude that, apply this filter, etc. This is something that even the most advanced GUI searches are unable to do. Yet CLIs do it extremely efficiently.
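For example – a sketch, assuming GNU find/xargs and a made-up photos/ tree (and file names without newlines):

  # include this (recent .jpg files), exclude that (the cache dir,
  # thumbnails), apply a filter (keep the five largest survivors)
  find photos/ -type f -name '*.jpg' -mtime -7 ! -path '*/cache/*' |
    grep -vi 'thumb' |
    xargs -d '\n' du -h | sort -h | tail -5

Each stage can be swapped or extended without anyone having anticipated the combination.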
Ultimately, neither the CLI nor the GUI is necessarily superior for any given task, with few exceptions (e.g. Google Maps) – it’s a matter of your tools. As such, arguing that one or the other is superior is an endless and useless argument.
gpsnoopy,
“But if everyone uses GUIs nowadays, it’s not just because they are too stupid to understand the CLI. A GUI can present multidimensional information much more easily, and the user can interact with it very intuitively. Think about Google Maps on a touch-based device – how do you do that in a CLI?”
For consuming visual media, obviously you need a visual interface, I don’t think anybody’s claiming otherwise.
But for interacting with and manipulating that information, some people may still prefer a CLI. Using your example, let’s suppose we have a command line tool to query Google Maps (allow me to make this up as we go along):
select route from roads where route.orig=’san fransisco, CA, USA’ and route.dest=’new york, ny, usa’ order by route.time
Hmm, not as nice as the GUI, but what else can we do with it?
select concat(a + b + c) from roads a, flights b, roads c where a.orig=’…’ and c.dest=’…’ and b.fare < 150 order by concat(a + b + c).distance, time
Clearly we’d need to rethink the SQL metaphor, but the point is this would be incredibly powerful. We could write very complex queries to implement use cases which the GUI designers had never anticipated (or would have found too difficult to represent visually).
Would this CLI have a large learning curve? Absolutely, but those who learn it would have so much more power over the information at their hands.
Edit: Just to clarify, my use of “CLI” here refers to a command-oriented interface, whether or not the interface is part of a larger GUI.
I actually miss the days of viewing images with Alchemy in DOS. It was like magic to see what was always an ASCII/ANSI text environment showing full-colour photos.
These days, there are times when I’d really like to just pop open a browser, video, or image without having to load into a full GUI.
(I know viewing video can be done, I just haven’t looked into it yet.. more interesting projects ahead of it on the list)
I am sure someone anywhere in the world finds the radio “faster, easier to understand, easier to integrate, more scalable, more portable, more sustainable, more consistent, and many, many times more flexible” than the television.
Sure, the military shares this view too.
It’s way faster to say “Mayday” or transmit “SOS” than to send a video of a sinking ship or whatever, but I personally have never used them (and expect it to keep going this way). So I keep enjoying my TV.
Have you tried watching the telly whilst driving your car or jogging?
Why is it that some comments arguing against the CLI are modded down? Most are valid points of debate against a culture that is inevitably going the way of AM radio because, face it… the CLI is dying. Eventually, all the dinosaurs that grew up banging away at keyboards for 2 minutes just to begin a process that will last (at most) 30 seconds will retire/conform/die or simply go away.
I’d guess that advocating a ‘you only need a GUI’ approach to people who actually need to do more with their computers is what’s getting you modded down.
I happened to post this already above before seeing your comment, but as an example, I needed to generate multiple mipmap levels for a .pvr file the other night. The GUI tool outputs nicely dithered versions of those files, but doesn’t give you multiple levels as an output option, and doesn’t actually complete the ‘chain’ back up to the smallest size at all – the tool won’t save the smallest sizes that are needed. There’s a command line tool that will do those final sizes in a merged .pvr file. So I needed to merge the output from the GUI tool with the output from the other tool to get a complete .pvr with all mipmap levels.
Go GUI! Uh… no, not really. Try ‘cat 1.pvr 2.pvr 345.pvr > merged.pvr’. Done. The GUI tools don’t do what I want in this case, and I wouldn’t waste my time looking for a GUI app that will take a few files as drag-drop input to write out a merged file when I can do a command line hit in a second. And script it to make it part of my workflow if I need to do it repeatedly.
So it depends on what you’re doing, but I can certainly guess why people disagree with you if you think one size really fits all. Not to mention that there are cases where the 30 second process that took 2 minutes to string together would take hours to do by hand or write in some other language.
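That scripted version might be nothing more than a loop – the directory-per-texture layout here is my own invention:

  # merge each texture's mipmap levels into a single .pvr
  # (relies on lexical glob order for the level files)
  for dir in textures/*/; do
    cat "$dir"level*.pvr > "${dir}merged.pvr"
  done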
PowerShell, the MS CLI scripting language/environment, was sorta kinda developed to be easier to use than good ole bash. It has longer command names to make things easier, and works with objects instead of text. All things that make sense.
… Except I don’t like it. Maybe it’s because I’ve been using bash forever. Maybe it’s because, in trying to be easier to use, it’s lost its simplicity in use. I.e. faster to learn but more complex in the data structures = frustration somehow. I can’t really explain it. It’s like trying to explain why 1+1 is not always two. Maybe I’m in a funky vector space with a non-positive-definite metric.
One thing PowerShell has right is that it extends interprocess pipelining beyond simple text streams. If I had the time, I’d love to take Bash, strip out some of the more arcane syntax, and add the concept of structured data pipelining; imagine what you could do if you could get common commands to emit, say, objects as JSON data and then manipulate them.
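You can fake a taste of that today by gluing in jq – a sketch assuming GNU stat and jq are installed, with the JSON-emitting step standing in for what such commands would do natively (and glossing over the escaping of awkward file names):

  # emit one JSON object per file, then filter on a *field*
  # instead of counting whitespace-separated columns
  stat --printf '{"name":"%n","size":%s,"mtime":%Y}\n' * |
    jq -r 'select(.size > 4096) | .name'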
I would actually argue that the non-use of simple text streams is exactly what is wrong with PowerShell.
While I’ve used Bash for quite a while, I’ve tried to pick up PowerShell (namely to start replacing Batch scripts) and find it far easier to just use WSH+JavaScript than to interact with PowerShell.
As with much of Microsoft Technology, it’s way too complex to actually be useful and efficient.
A stream of objects can be a stream of strings – which is how “legacy” commands are integrated into PowerShell. You can work that way if you prefer.
However, the object-oriented streams add tremendous robustness, simplicity and readability to scripts and scripting. There are far fewer surprises than with *sh scripting.
It is always hard to learn new skills. However, PowerShell is well worth it.
If you spend a lot of time on Windows, that is. I learned it just before I left the platform, and I was pretty frustrated that it didn’t come standard on Vista RTM. At the time it was a great scripting language… that didn’t exist on anyone’s computers. Has that changed? Does Windows 7 come with any version of the .NET framework and/or Silverlight?
It comes with the runtimes. So to get PowerShell working you still need to download a bucketload of stuff (at least on consumer versions of Windows).
No, PowerShell v2 is an integral part of Windows 7. You do not need to download anything. You do not need to turn anything on. In fact, you cannot turn it off 🙂
Now, if you want to use certain modules, like SQL Server administration you may have to download a snapin or a module.
Sort of agree there. PowerShell is a much better fit for Windows than other OSes, since the Windows API is already almost exclusively object-oriented (handles, WMI, COM, .NET).
With Windows 7, PowerShell is an operating system component, since several tools which come with the OS are based on it (e.g. troubleshooting scripts like the one that runs when you click “troubleshoot problems” on a faulty network connection). Several new server products from MS have advanced functionality which can only be reached through PowerShell.
PowerShell is a system component since Vista SP1 – although it had to be “turned on”. Since Windows 7 / Server 2008 it is a mandatory component.
Now, simple text streams don’t eat much memory, and they make it easy to interrupt things once you’ve found what you need, etc.
Like looking in some database thing and grepping out of it. It’s faster with bash, because it uses small pipes instead of buffering the whole thing; otherwise, even if the text is already there, it would only be presented at the end.
The biggest drawback of that buffering is that large datasets will eat up all memory.
PowerShell at its best is a small subset of what scripting can do under Unix-type operating systems.
Ahem! PowerShell objects are *streamed* as well; they are pushed through the pipeline one by one. Because it is all in-process, what is pushed through the pipeline is actually only an in-memory pointer. That is arguably both *faster* and more memory-efficient than constantly serializing to/from strings.
I don’t know where you have gotten the idea that PowerShell would buffer everything until the end?
Once an object has been passed on through the pipeline it is eligible for garbage collection. Large datasets will not need to be allocated all at once.
PowerShell is much more capable than unix/linux *sh style shell scripting, more consistent, more robust and more secure.
I think PowerShell may be a better SCRIPTING language, but I don’t know if it’s a better COMMAND LINE language. The terseness and expressiveness of the generic sh command line is very powerful. The additional length and complexity of the elements in PowerShell warrant more of an edit/run lifecycle than the simple type-it-and-go, one-off style that sh promotes.
I am curious as to why you find PowerShell to have “additional length”. I do a lot of scripting in PowerShell and I find that it is almost *always* shorter and more concise than the bash equivalent. Do you have any examples?
Also, I find PowerShell *a lot* more consistent (naming conventions, parsing rules, parameter passing), with fewer specialized single-purpose features. Don’t be fooled by PowerShell examples which (for instructive purposes) use the long-name versions of common commands. For the common commands there’s often a short alias (and these are true aliases rather than bash “macro” aliases).
I like that it’s object-oriented, which opens up lots of cool things, but I find it too wordy and obtuse as well.
I don’t think they went for simplicity with Powershell. I think they did a 180 from the DOS like cmd.exe, and created an alien shell with a bunch of complex tools. Powershell is Swiss Army knives and Erector set pieces, and the Unix shell is kitchen utensils and Legos.
The CLI is great in some cases. The GUI is better in others.
The most basic scenario: if I’m doing something infrequently, I want a GUI. Discovery and recognition mean that I will spend a lot less time looking things up before applying them. On the other hand, if I do something a lot, a CLI allows me to cut to the chase and just do it (i.e. no digging through menus and dialog boxes). CLIs also provide much better facilities for automation.
The task matters too. If I’m doing something ‘linguistic’ (e.g. word processing, programming, file management), using a GUI means that the brain is constantly context switching between linguistic and visual/spatial processing. On the other hand, trying to express something spatial in linguistic terms creates a similar problem. If you ever tried to use POV-Ray for 3-D modelling you’ll know what I mean.
Finally, there’s the person involved. Some people are more linguistic, while others are more spatial/visual. Some people have excellent recall (great for CLIs), while others have a wonderful ability to recognize visual cues (fantastic for GUIs). Neither skill set is better overall, but each skill set is better in a certain domain.
Now that is a complete work.
If in doubt, type help.
Totally hierarchical.
Plenty of examples in the help
What more could you need?
Answer…. Nowt.
Unix Shell? Pah
DOS? You must be joking.
In VMS DCL you could build a complete system installer.
Then the file system could handle OOTB indexed sequential files.
All from 1977. Years ahead of its time, methinks.
Yeah, I spent 20 years working on VMS for DEC. Nothing I have come upon since is anywhere near as powerful as DCL.
VMS has always been the one OS I haven’t been able to get my hands on that I’ve really wanted to.
I’ve heard some really good stuff about it, but I’ve never been able to experience it first hand.
It’s cool that you hang out here.
If you want to try out VMS, then you can create an account on the Deathrow Cluster – http://deathrow.vistech.net/
I recently read a comment from a legendary Mongolian horse trainer:
In Mongolia we teach our children to think like a horse. A horse can never learn to think like a human.
In the Western context, we should likewise teach people to understand how computers function (file hierarchy, commands, etc.) and not waste enormous effort on trying to make computers understand us.
Back in the mid-80s, when I was using DOS, I was probably much faster using the CLI and function keys than I am now on a GNOME GUI.
I love the commands in AIX. Mostly prefixed with:
mk
ch
ls
rm
e.g. mklv, chuser, lsgroup, rmvg
It depends upon the task as much as it depends upon the user, as much as it depends upon the tools in question.
More interactive, trial-and-error oriented activities will obviously benefit from a GUI. Users unfamiliar with the tools will benefit from both graphical toolkits and a common syntax/set of utilities, from which they can draw upon the domain-general knowledge a user should have.
The real weak link (ignoring for a moment the existence of the computer illiterate) is the tools, which tend to be horribly inconsistent, regardless of whether they’re graphical, or terminal-based.
I’m reasonably mixed in my usage patterns. Without even thinking about it, I often find myself using both a graphical file manager and BASH (for the purposes of file management) at the same time. I don’t generally feel that one offers a benefit over the other but then I practically use my file manager like I would an interactive CLI tool, typing to navigate and employing keyboard shortcuts to invoke file operations.
Try Quicksilver for Mac OS X and then come back to comment!!!
There is a definite problem for those of us who are keyboard-challenged. I can type at 50/60 wpm, but accuracy is in the dumps. I have used the CLI, but by the time I have checked a command for accuracy (is that one or two spaces? is that a 0 or an O, a 1 or an l? or a typo?), I feel that longing for a GUI which just asks me what I want.
“Computer.”
Picks up mouse, raises in front of face.
“Computer!”