Ramsay goes on to describe scenarios, such as building a music playlist ("create playlist 'country plus katy' from last 11"), in which a simple typed description would be much faster, and in fact much easier (even for a n00b), than all the clicking and dragging through menus that a GUI would require. He gives this example:
Ever play Pictionary? Pictionary is a brilliant game. You get a card that describes some concept—say, “return address.” You have to communicate that idea to other people, but here’s the thing: you can only use your index finger (extended, McLuhan-style, with a pencil). Before long, people are laughing. “That’s not an envelope!” “Yes! Look, that’s a house, and that’s a letter, and that’s an arrow!” Much laughter ensues.
In the real world, we'd say, “I'm thinking of a return envelope.”
I realize the analogy is a bit strained, but my point is simply this: the idea that language is for power users and pictures and index fingers are for those poor besotted fools who just want toast in the morning is an extremely retrograde idea from which we should strive to emancipate ourselves.
He's absolutely right that sometimes a few words are worth a thousand pictures (to reverse the common aphorism). I think most nerds could relate most easily to the kind of CLI we're really talking about by thinking of the voice interface on the starship Enterprise. When Captain Picard orders his tea, he just tells the computer what he wants. In Star Trek IV, Scotty tries to talk to a 1980s PC, then realizes that he has to type, then, improbably, actually knows how to type. Even if the computer didn't have to understand our spoken language, what if it could understand what we typed, and just do it?
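To make the idea concrete, here's a toy sketch of Ramsay's playlist command from above being turned into an action. The grammar, the field names, and the whole scheme are my invention for illustration, not anything Ramsay (or any real player) actually implements:

```python
import re

def parse_command(command):
    """Parse a typed request like:
        create playlist "country plus katy" from last 11
    and return a structured action, or None if it isn't understood.
    This one-pattern grammar is purely hypothetical."""
    m = re.match(r'create playlist "([^"]+)" from last (\d+)$', command)
    if not m:
        return None
    return {
        "action": "create_playlist",
        "name": m.group(1),          # playlist title
        "count": int(m.group(2)),    # how many recent tracks to include
    }

result = parse_command('create playlist "country plus katy" from last 11')
# result == {"action": "create_playlist", "name": "country plus katy", "count": 11}
```

The point isn't the regex, of course; a real system would need to understand many phrasings of the same intent, which is exactly the hard part discussed below.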
Unfortunately, speech recognition is probably more advanced than machine understanding of natural language, so by the time we can tell a computer to find every document related to our comparative literature class from 1998, it will be just as likely to understand us speaking the request as typing it. Some computers can understand human language pretty well, like IBM's Watson, which beat Ken Jennings at Jeopardy!, but it will probably be a while before your iPhone can do the same.
And unfortunately, as long as the CLI means remembering what "ls -l" does, it's not going to have mass appeal. A command-style interface will only catch on if it lets people describe actions in their own words, and understands them. On the other hand, I'm a big fan of hybrid interfaces. I'm a dedicated Quicksilver user, so a lot of what I do on a computer involves keystrokes and descriptions. On Windows 7 and iOS, I usually prefer to search for an app rather than poke around through menus for it.
And Star Trek notwithstanding, I'm pretty sure that speaking to our computers will always be a secondary interface, reserved for when our hands are occupied, such as while driving, cooking, or working with tools. I doubt the keyboard is going anywhere.