“In Vernor Vinge’s sci-fi novel A Fire Upon the Deep, he presents the idea of “software archeology”. Vinge’s future has software engineers spending large amounts of time digging through layers of decades-old code in a computer system — like layers of dirt and rubbish in real-world archeology — to find out how, or why, something works.” Good stuff over there from UI designer Matthew “mpt” Thomas.
None of this makes any sense. “Save” and “quit” attributed to… Macintosh programmers? What about command line utilities with this same functionality which existed decades before?
He writes off “save” as some sort of time-saving relic… from when writing to permanent storage was so time consuming it couldn’t be done continuously. But it’s much more than that… it provides a buffer that lets you safely make temporary changes to something before committing to them permanently. Otherwise a massively complex and persistent revision control system would be necessary, and is that really any simpler than a “save” button?
As for quitting, what about someone using a program like Photoshop… who has finished editing one picture and is ready to move on to another? Another instance of Photoshop would have to load, and this is time consuming due to Photoshop’s weight.
Honestly, I see much worse issues with UI design than the ones being addressed here… why is this noteworthy?
Eugenia, your comments on the subject are desired 🙂
MPT, as always, is being like me: caustic, but true to his beliefs.
Most of his remarks do make sense, but in some cases a lot of new software will need to be implemented to carry out these changes. For example, a database-based fs, like the new ReiserFS or the new NTFS 6.
As for the “Save” command, Microsoft Word does autosaving, no? If it works for Word, it can work elsewhere too.
And I do agree with him on the lack of UI/usability innovation in the open source development cycles. Mostly because developers don’t listen to the project’s UI designers (if they have any).
“1. In the 1970s and early ’80s, transferring documents from a computer’s memory to permanent storage (such as a floppy disk) was slow. It took many seconds, and you had to wait for the transfer to finish before you could continue your work. So, to avoid disrupting typists, software designers made this transfer a manual task. Every few minutes, you would “save” your work to permanent storage by entering a particular command.”
This was probably the best point of the article – the only one I want to fix. With computers so fast, and hard drive space so cheap, I don’t see any reason not to have continuous storage with the “SAVE” feature making snapshots. Think of being able to go back to any point in a document’s history – always able to undo back to the first keystroke.
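The idea above — continuous storage, with “save” reduced to marking snapshots you can roll back to — can be sketched in a few lines. This is a hypothetical toy model, not any real editor’s API; all the names are made up for illustration:

```python
class Document:
    """Toy model of continuous storage: every edit is persisted
    immediately, and "save" just records a named snapshot the user
    can roll back to later."""

    def __init__(self):
        self.text = ""
        self.history = [""]    # every keystroke-level state, persisted
        self.snapshots = {}    # label -> index into history

    def type(self, s):
        self.text += s
        self.history.append(self.text)   # "continuous storage"

    def save(self, label):
        # "SAVE" no longer writes anything -- it just marks a point
        self.snapshots[label] = len(self.history) - 1

    def undo_to(self, index):
        # undo back to any point, even the first keystroke
        self.text = self.history[index]
        return self.text

doc = Document()
doc.type("Hello")
doc.save("draft 1")
doc.type(", world")
doc.undo_to(doc.snapshots["draft 1"])
```

A real implementation would obviously store deltas rather than full copies, but the user-visible model is the same.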
Almost all the other points miss the point that context is important. Saying that programs should drop open and save dialogs because FileManager could handle it ignores the subject of context – I am working within an application and do not want to leave it for another application, which is what a file manager is.
“4. Trouble is, this system causes a plethora of usability problems in Windows, because filenames are used by humans.”
The metaphor of folders to sort volumes of information goes back to the real world organization of sorting paper the same way. Until you can come up with a better system of organization, this is the one that humans will WANT to use.
If I rename a document, the computer could keep track of it, but how is it supposed to know whether I want all copies to be renamed, or whether I just wanted an extra copy of it? Keep it simple!
The point about moving programs in Windows breaking them is right on the money. The Mac does this much better. We used to have programs that could be moved without breaking anything. That was before the Registry and shared DLLs. If everything a program needs is within its directory there is no reason you could not move it – except that the menu pointing to it would be broken.
Lastly, I hate articles that just complain and don’t offer real advice to fix it. Some advice is given here, but not enough.
“The point about moving programs in Windows breaking them is right on the money. The Mac does this much better. We used to have programs that could be moved without breaking anything. That was before the Registry and shared DLLs. If everything a program needs is within its directory there is no reason you could not move it – except that the menu pointing to it would be broken.”
This is definitely true. In fact, OS X takes it one step further and really shines: the application folder appears as a single icon that the user can click on to run it. Not only can it be dragged around anywhere, but they do not have to figure out what to click on or which executable to choose (or which files are executables at all, for that matter).
There is really no excuse for any OS to not handle things this way. An application should be a single object, nothing more.
> the application folder appears as a single icon that the user can click on to run it.
This is a nice touch, but there are complex and big applications that need to have other files around them, not inside the Contents folder of the application. Photoshop, Mozilla, Opera, Painter, Maya are some of the examples I see on my MacOSX machine. So really, for small, simple apps, this is a great idea. But for big apps, which need to live inside another “real” folder that includes both additional files and the application’s content folder, it is nothing different from the way we get it on all the rest of the OSes. Also, the way this works on MacOSX, it is a pain to launch apps from the terminal (you need to navigate three folders deep into the application to reach the “real” binary). So, this OSX “trick” is both good and not-different at the same time.
I do agree that applications should not quit when closing a file. Photoshop is a great example that I had not thought of; I personally just like to run iTunes from the dock and close the main window. But the point he has is that it is confusing for new users. That user may follow logic like, “If you can’t see it and it isn’t covered, shouldn’t it be gone?”
However, I think that not closing the app does give a decent boost to productivity. Thus, to accommodate both, this could be a good option to add. Perhaps “Quit when all windows are closed” or something to that effect.
The screwy Win copying scheme is due to FAT being horribly broken. That it persists in later versions of Windows is just a testament to “backwards compatibility.” IIRC, in OS/2, if you were running HPFS, the filesystem acted pretty much the way a Mac does.
Windows 95 did much to kill good interface design, because it tried to please everyone, and ended up keeping all the bad features of OSes now dead.
The original Mac did things right, but it’s become more like Windows. OS/2 did things right, but it’s really a non-player these days. *nix/GNU does whatever you tell it to, except things like logical folder naming. It still follows the /bin, /sbin, etc. scheme (can’t remember what it’s called at the moment), so navigating the filesystem with graphical tools often doesn’t make much sense. Where’s Mozilla? Oh, that’s easy! /usr/bin/mozilla-bin. And the libraries are in /usr/lib/mozilla. Umm, yeah. They ought to be in their own folders. Come to think of it… why don’t Unix programmers do that more, and just make symlinks to the system dirs? Apple’s sorta gotten this one right with the Program.app thing.
And I do agree with him on the lack of UI/usability innovation in the open source development cycles. Mostly because developers don’t listen to the project’s UI designers (if they have any).
Gnome2 has a Human Interface Guide, made by professional UI designers, and most developers are paying attention to it.
Biggest problem is that a lot of amateur UI designers are dreamers. They just want to throw the entire UI away (major mistake 1), and do something completely different – more because of how cool it looks, and how cool it sounds, than actual research (major mistake 2). Of course, whether or not it can be implemented within a couple of months/years is absolutely not important… (major mistake 3). Or they focus on silly details that barely help usability, when there are more important things to address.
You have to accept that software design is evolutionary with the occasional relatively small revolution. It is not constantly revolutionary. (that pretty much applies to everything)
I’m not a professional interface designer, but from using my machine every day, here’s what I think of his points:
1) I like the manual save “cruft.” About half the time, I don’t want to permanently make changes to the document. Having the machine auto-save for me would be disastrous. It would be mitigated somewhat by a versioning filesystem (a feature we actually do need), but it would still be a hassle. Autosaving should be done at the application level, not at the system level, to be of any use. And current applications already do this (Word, for example, allows you to do autosaves).
2) I like manually closing applications. I have a PocketPC, and it drives me crazy because there is no close button on programs. The reason for this is twofold. First, the biggest bottleneck (for me) is window management. I’ll often be heavily multitasking. While working on research papers, I’ll have several word processor documents open, dozens of web browser windows (Google, in particular, makes it very easy to make your browser window count blow up) as well as various research programs (encyclopedias), and if it’s a science paper, I’ll probably have a scientific app or two open at the same time. Even with tabbed browsing and multiple workspaces, the clutter often becomes very hard to manage. Without manual closing of windows, I’d go insane. The second problem is that, despite what the author thinks, current machines are not powerful enough to automatically manage windows. We don’t have the CPU and memory to keep all those programs open simultaneously, and we don’t have algorithms that would do a decent job of automanaging those windows. Having computers do automanagement for humans is a very embryonic field. We still don’t have algorithms (for example) that optimally separate interactive processes from non-interactive ones, or ones that can optimally manage a camera in a video game (as evidenced by the fact that many new 3rd-person games are moving towards a more manual camera model). Managing a user’s windows is an order of magnitude more complicated than either of the above problems. Because of this, it’s better to put the burden on the user than to have the computer trying to second-guess him/her.
3) I *hate* drag and drop. It goes back to the window-management bottleneck. Drag and drop requires me to interact with two windows simultaneously instead of two windows sequentially. The latter is easier for humans, and the lack of really good window management just compounds the problem.
4a) This is an arguable point. Read the BeOS filesystem design book for the discussion of paths vs inodes.
4b,d) Again, trying to guess what the user means. A recipe for disaster. The easiest way, actually, to avoid this is to get rid of the idea of program directories entirely. The user really doesn’t care where the program is, and certainly has no reason to move it around. I think Portage and APT have solved this problem nearly perfectly. Every application is available from one repository; if you want to install it, you issue one command to make it available, and it appears in your menu. Want to get rid of it? You issue another command and it disappears. It’s the perfect example of not mixing your metaphors. Documents are physical objects to be manipulated. They appear in the filesystem as such. Programs are processes, and should not be physically manipulated. They’re either doable (installed) or not doable (not installed).
If you care to hear what I think are the biggest sources of cruft, feel free to look it up at http://heliosc.home.mindspring.com/ui.html. Note, I am not a web designer, so this is plain-text, hacked up with the little HTML I remember from 6th grade
> Gnome2 has a Human Interface Guide, made by professional UI designers, and most developers are paying attention to it.
No, you are wrong, most don’t. At least this is what someone on the Gnome UI team told me *DIRECTLY*. Most OSS developers don’t pay attention to HIGs, they just do things. They put together the code, it works, they ship it. They pay very little attention to UI. And when the UI designer is pissed at them for not following the HIG, they reply back saying things like this: “hey, give me a break, I do this for fun on my free time, I get no orders from no one”.
And then, I review these DEs and naturally they suck, I give them bad grades, and then people get pissed at me, instead of getting pissed at these developers who don’t listen to anyone.
And the whole thing continues in circles.
Eugenia,
even big apps like the MS Office apps (Word, Excel etc…) or Photoshop LE are “one container apps”. Here on my Mac it’s like that; don’t know why it’s not on yours. The apps you mention, like Mozilla, are cross-platform apps and not written only for OS X. Therefore they have a different file layout. You can develop “big” apps that run in one package – that’s no problem. On MacOS X, it’s just a decision of the developer to do so or not. Office has just some libs shared between the applications, and MS decided to put them in an extra folder. But they could, if they wanted, organize this a different way; they decided not to do so.
..and by the way, you can launch apps via terminal without browsing into the app-folder. There is an “open” shell command. Just type “open <Appname>” and the app is launched.
Ralf.
>even big apps like the MS Office apps (Word, Excel etc…) or Photoshop LE are “one container apps”
Yes, I know that, you misunderstood. These big apps also use the standard NeXT/OSX container format. What I am saying is that for these big apps, you STILL have an application directory that includes other files relevant to the application, so it defeats the whole point of having the application binaries in containers.
For example, the Mail application and Chimera only have *1* icon that you launch.
But with Photoshop and OfficeX and Mozilla and Opera and _many_ others, you will need to open their *application folder* and THEN double-click the container binary. This is no different than the way Windows and Unix and BeOS and all other OSes currently do it.
So, for small apps, the container trick is good, but for big-big stuff, it just doesn’t work; developers are still using hierarchical application folders that the user needs to navigate in order to load the binary.
>Just type “open <Appname>” and the app is launched.
Oh, how obvious was that. Not..
I think the Dock alleviates a lot of the problems with applications coming in a folder that has to be opened. If you’re using it often enough for this to be an issue, you’re probably going to keep that icon in your Dock, at which point it no longer matters where you move the application to, since the Dock menu can find the application if you really need to know where it is.
I checked my copy of Photoshop 6.0 for Windows. The only directly launchable programs are Photoshop and ImageReady. The rest are readmes or stuff that belongs in a MacOS X style bundle, except for the plugins.
But I have an idea.
Have a properties dialog for the app that exposes any folders in the bundle that a user should need to access. Show things like plugins here. Let them drag new plugins onto the list or remove unneeded ones. If you want to share plugins, let them drag a folder alias to the shared plugins into that list. You could even present these properties as a folder (since that is what they really are), but it might be better not to re-use that metaphor.
Rayiner,
Who said you would have to leave *windows* open all over the place? You can open and close windows whenever you want. The system isn’t going to randomly close windows on you. The argument is that you shouldn’t need to separately manage some abstract entity known as a “program” via Quit or Exit commands. The system should do that for you.
So now we get down to the real argument, which you do hint at. Can we trust the computer to know which programs should hang around in the background, knowing they’ll be needed again soon, and which ones should get completely out of RAM, virtual or otherwise? Right now we have to do this management manually, or let primitive mechanisms like VM handle it for us (but then we clutter our taskbars and docks with buttons and icons of waiting programs). One possible answer for large, slow-loading programs is to documentize the concept of the large program being open and ready. Make it a workspace, or a project (like in a programming IDE). Another would be a more task-based UI (an idea I have floating around in my head involves taskbar grouping (a la WinXP) by task instead of program, combined with the idea behind OS/2 Work Area windows).
PS: I’ve seen several people over time mention that Windows (since Win98) decides what running programs to swap out to VM based partially on whether they are minimized. At least they are thinking about the problem.
David said:
Saying that programs should drop open and save dialogs because FileManager could handle it ignores the subject of context – I am working within an application and do not want to leave it for another application, which is what a file manager is.
You’ve got to look at this from a docucentric viewpoint. You aren’t “in” one of two programs. Your document is in a folder. A representation is open on the screen. (If you want to take it to a radical extreme, your document is not in that folder while it is open, but it does remember where to go by default when you “put it away”.)
The document already exists so you don’t need to do the initial “save as”. If you don’t want it (say you just used it as a scratchpad), use the discard command on the document. After a confirmation, it will go away. Normally you want to keep your work.
Maybe you want to branch off by making a copy (“save as” again). Barring any version control system, you could just go to the folder containing the doc and copy it, then open that copy. The folder is likely to be open since that is where you opened the original from. If not, use the “open containing folder” (needs better wording) command on the document to open it.
Opening or creating a document are handled in the folder they will live in (or the desktop or some scratch directory if they are expected to be discarded). Think templates (os/2, or (poorly) Windows) or stationery (Mac OS classic).
This leaves instances where you need to just refer to a folder or file in a document. Things like inserting images into a WP doc can be handled by drag and drop. Other instances (maybe defining where a log file goes and what its name should be) would have to be creatively redesigned. You may still have to navigate a little mini-file system in these instances though. But not as often, one can hope.
The people that are horrified that a constant save mechanism will overwrite their precious file contents need to know that a good persistent undo system is critical to the no-save scheme. Unfortunately, MPT left this out of his article (I’m hoping he does know this).
Let’s break it down into a simple example with no HD-hogging multiple revision control system features. You have a file. In that file are two versions, the one you are working on and the one you know you want to keep. How do you tell the computer you want to keep your current edits? Commit them (takes the place of “save”). If you close your document you don’t get a “Do you want to save your changes?” (or is it “Do you want to discard your changes?”, I forget) dialog. Most people want to keep their work. It just closes with both your current edits and “last version” intact. It also keeps an undo history between these two points. If you want to go all the way back to the original at any time you use the “revert” command (and you would get a confirmation dialog here, I’d imagine). From this base, you can go hog wild with undo checkpoints, multiple versions, and the like (VMS users will tell you they had file versions in the file system years ago).
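That two-version scheme is simple enough to sketch directly. This is a hypothetical model of the idea described above (committed version, working edits, undo trail between them), not any real application’s API:

```python
class Document:
    """Sketch of the two-version scheme: the file holds the committed
    version plus the current working edits, with an undo trail between
    the two points. All names here are illustrative."""

    def __init__(self, committed=""):
        self.committed = committed    # the version you know you want
        self.working = committed      # current edits, kept even on close
        self.undo_trail = []

    def edit(self, new_text):
        self.undo_trail.append(self.working)
        self.working = new_text       # persisted immediately; no save prompt

    def commit(self):
        # takes the place of "save": current edits become the keeper
        self.committed = self.working
        self.undo_trail = []

    def revert(self):
        # all the way back to the last committed version
        self.working = self.committed
        self.undo_trail = []

doc = Document("v1")
doc.edit("v1 plus changes")
doc.revert()   # working copy is "v1" again
```

Closing the document would simply persist both `committed` and `working` as-is, so there is no “save your changes?” dialog to answer.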
Of course a subtle change like this would throw everyone (or maybe just “power users”, but they are adaptable, aren’t they?) into a hissy fit. For awhile. Then they’d get used to it. Even rely on it.
I recommend reading “About Face: The Essentials of User Interface Design” by Alan Cooper. This book has a whole chapter on this concept, as well as a number of other valuable insights.
“This was probably the best point of the article – the only one I want to fix. With computers so fast, and hard drive space so cheap, I don’t see any reason not to have continuous storage with the “SAVE” feature making snapshots. Think of being able to go back to any point in a document’s history – always able to undo back to the first keystroke.”
Drive space may be cheap, but you can still fill it up fairly quickly if you are working with image files of around 70 Megs each. One of the problems with these UI thinkers is that they assume that all documents are word-processing documents.
Autosave should be done by the program, not by the OS.
Personally, I prefer manual saving. How hard is it to hit Ctrl-S after a couple of sentences?
The “document-centred” concept is also wrong, IMO. It is tied to the idea that each program has a custom file format, as Word does. That is real cruft. Good file formats are open, and the data can be edited or used by a variety of programs for different purposes. How can the OS know which program you want to edit a JPG file in? Just because it was in Photoshop ten minutes ago doesn’t mean you want it in Photoshop now.
Where did he get the idea that programs close when you close one document? I have only one (very old) program that does this. Normally, if you close all the documents, the program’s screen, with menu bar and tool box, is still there, waiting for you to open a new or old project – at least, that’s what happens on my computers.
Most OSS developers don’t pay attention to HIGs, they just do things.
While using Fileroller, Nautilus, EOG2, rhythmbox (alpha), galeon2 (alpha), heck, even gedit, I have to disagree with you. It is pretty clear those developers paid attention to the HIG. It’s not perfect yet, but *duh*, those are the first releases using a new development framework, with a very recently released HIG. Most of them are still beta, slated for release when Gnome2.2 or even Gnome2.4 comes out. Fact is that they are paying attention to it.
Getting rid of “Save” is an amazingly good idea. As has been pointed out before, you need some form of basic “version control” (actually, undo levels saved as a separate file stream should be enough) for it to be really useful.
However, implementing this on the filesystem level is probably a bad idea. All the filesystem could do is to simply store all past versions of the file, which would soon fill up even the largest disks. Yes, there are binary diffs and the like, but they don’t work with images.
So this undo-level storing needs to be done at an application level. And yes, it can also work for image files. Obviously, you wouldn’t want to store a complete copy of the image after every single operation. Instead, you store the _operations_ that were used to create/manipulate the image. Storing “Filter F with parameters A and B” takes a lot less space than storing a whole copy of the image for before and after the filtering. Obviously, this will increase the amount of processing power you need when reconstructing older versions of the image.
Image file history will still grow big quickly. So what? Just implement an option called “Finalize” or something which drops the undo history. You could even have an option to drop the undo history up to a given date or something.
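The operation-log scheme described above is essentially the command pattern. A minimal sketch, with a fake one-dimensional “image” and made-up filter names purely for illustration:

```python
# Two toy "filters" standing in for real image operations.
def brighten(img):
    return [min(px + 10, 255) for px in img]

def blur(img):
    return [px // 2 for px in img]

class ImageHistory:
    """Store the operations applied to an image, not copies of it.
    Older versions are reconstructed by replaying the log."""

    def __init__(self, original):
        self.original = list(original)
        self.ops = []                 # the operation log

    def apply(self, op):
        self.ops.append(op)           # cheap: one entry per filter

    def current(self):
        img = list(self.original)     # reconstruct by replaying ops
        for op in self.ops:
            img = op(img)
        return img

    def undo(self):
        self.ops.pop()                # undo = drop the last operation

    def finalize(self):
        # the "Finalize" option mentioned above: keep the result,
        # drop the undo history
        self.original = self.current()
        self.ops = []

h = ImageHistory([100, 200])
h.apply(brighten)
h.apply(blur)
h.undo()            # blur is gone; only brighten remains in the log
```

Note the trade-off the comment points out: undo is free, but reading any version costs a replay from the original.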
The “document-centred” concept is also wrong, IMO. It is tied to the idea that each program has a custom file format, as Word does.
The document-centric concept is tied to the idea that you shouldn’t have to use more than two programs per document type: a viewer, and an editing program.
The “programs” used for editing a document should be simple frames which can be filled with different tools to manipulate the document (plugins, if you want). I remember reading an article by Eugenia about that not so long ago…
The document-centric approach is basically saying that applications are irrelevant. The user should not have to worry about different applications for different types of document.
(from reading that page)
I agree with your final point that the difference between _what_ you can do (to a certain object) and _how_ you do it (is it the 1st or the 2nd entry in the context menu) should be separated, and things like this do begin to appear.
KDE has the KAction system, for example. Basically, applications no longer define menu bars or toolbars, instead they just provide a number of actions that can be performed. Then, a number of different mechanisms are used to build the actual menu bar layout etc.
For example, common actions (like Cut,Copy,Paste) are always put into the same menu.
The layout for less common actions can be defined in a separate (non-source) file.
This is definitely a good first step. I doubt that computers will soon (ever?) be able to do the layout of less common actions automatically, because that’s basically an AI-complete problem (I think).
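The separation being described — applications register actions, and the menu/toolbar layout is defined elsewhere — can be sketched roughly like this. This is a hypothetical Python illustration of the concept, not the actual KDE KAction API:

```python
class ActionRegistry:
    """Applications register named actions; the menu layout that
    arranges them is supplied separately (in the real system, from a
    non-source file), so it can change without touching the actions."""

    def __init__(self):
        self.actions = {}

    def register(self, name, callback):
        self.actions[name] = callback

    def build_menus(self, layout):
        # layout maps menu names to ordered action names,
        # e.g. {"Edit": ["copy", "paste"]}
        return {menu: [(name, self.actions[name]) for name in names]
                for menu, names in layout.items()}

reg = ActionRegistry()
clipboard = []
reg.register("copy", lambda: clipboard.append("text"))
reg.register("paste", lambda: clipboard.pop())

# The layout lives outside the application code and could be swapped.
menus = reg.build_menus({"Edit": ["copy", "paste"]})
name, callback = menus["Edit"][0]
callback()   # invoke the "copy" action through the built menu
```

The point is that common actions can be placed consistently by the framework, while only the less common ones need a hand-written layout.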
Yeah, good call. Yet another user interface “expert” trying to force *his* preferences on everybody else. I HATE drag and drop, thanks, and file-open/save dialogs make perfect sense here (and, apparently, to just about everybody that uses 99% of application software out there).
EVERYBODY is a user interface expert — they know what *they* like. God *damn* these people like Jef Raskin who tell you you’re wrong for liking things the way you like them!
PLEASE stop posting these troll-bait articles by “user interface experts” who offer absolutely no solutions to THEIR own problems… as I can’t help responding to them…
Talk talk talk talk talk talk talk…..
Someone tell me when any of this will actually change?
I mean, in the mainstream form of computers, of course, because I use a computer that has a lot of these “new ideas” already in place and working just fine. It’s called a Palm Pilot (yeah, a Pilot, it’s that old).
There’s almost never a need to manage files. The concept of “Files” isn’t even in there for normal users. The only time you think in those terms is when you need to fix a technical problem, like delete a program or its datafile because the normal way failed for reasons usually related to bad app design, and that doesn’t happen all that often, in my experience. Even in those cases, it’s not about shared this and registry that – it’s still easy to manage, with little “technical thought.”
So when will this kind of sensibility be put into computers? Apple is moving in the opposite direction. Any sane person can compare the contents of their hard drives with OS 9 and OS X and see that this is so. Microsoft doesn’t seem to understand the concept. Palms may be changing too, with the next OS version, but my bet is they will still be the most simplistic and rational.
Stupid articles that point fingers at problems and do nothing to solve them piss me off. Oh yeah, that’s all anyone is able to do. There’s not really any way a non-multi-billionaire could institute a completely new system and get people to actually use it. So I guess I’ll just write stupid comments that point fingers at the problems and do nothing to solve them…
What do these young whippersnappers know about how to interact with a computer, dammit? I can do everything in text without a problem on my TRS-80 and it’s just fine! We’ve been doin’ it that way since the mid-’70s on text terminals and I can navigate those keystroke menus faster than your fancy-schmancy dropdown widget whatsits! What makes some mouse jockey think they know anything about good interface design? Why do I need windows? I can do everything in full screen and it’s just fine, I tell you. We’ve been doing it that way since mainframes! What idiot thinks they could possibly improve on MVS?
And don’t get me started on UNDO. Undo commands are for sissies who can’t make up their minds. If you didn’t want to put that in your file, you shouldn’t have typed it.
And for that matter, what’s with hierarchical file systems, anyway? You’ve given all your files unique names, and do you know how many combinations there are with that eight-character filename limit? You’d fill up your hard drive without repeating, I tell you! You should be able to organize ’em just using logical drives.
Yeah, you definitely shouldn’t post articles like these! Next thing you know some damn fool will be trying to claim it’d be easier to do graphic layout with some ninny program like Adobe InDesign instead of good ol’ troff. Can you imagine?
“And yes, it can also work for image files. Obviously, you wouldn’t want to store a complete copy of the image after every single operation. Instead, you store the _operations_ that were used to create/manipulate the image. Storing “Filter F with parameters A and B” takes a lot less space than storing a whole copy of the image for before and after the filtering. Obviously, this will increase the amount of processing power you need when reconstructing older versions of the image.”
The things you do to image files are not, in general, reversible. For example, if you scale or blur an image, information is lost permanently. Likewise if you adjust the tonal balance (gamma, contrast, “curves”, and similar operations).
It _is_ possible to save only the selected area, if the operation was done on a selected area. This is presumably what is done in Photoshop’s “history” feature. You can also save a whole file and keep a specified number of .bak versions.
Just to clarify… I do think the article and those it links to are very good. The content is excellent, actually. The problem is that very few people actually REACT to them (other than knee-jerking and such). Therein lies my frustration. So again…
When?
I like articles like this, even if they are, well, not perfect. They get the brain juices flowing.
But, it is a difficult area to talk about as it is hard to visualize some of these things. A link to an application like YeahWrite is not especially helpful. It does what he says and the program does what it sets out to do, but is so ugly looking it almost looks like junior high school students did the UI as a class project 🙂
Okay, so commands only won’t work. Then what about saving a baseline image followed by N commands that have been applied, and then the resulting final image? That’s still better than N images.
The computing time necessary to re-calculate all those commands seems to be the worst problem here. Maybe it’d be possible to balance it a bit by inserting additional “key-frame” versions of the image when N gets too big.
Just some further ideas…
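The key-frame idea sketches out naturally as an extension of the operation log: replay from the nearest checkpoint rather than from the original, inserting a new checkpoint every N operations. A toy illustration with invented names and a fake list-of-pixels “image”:

```python
CHECKPOINT_EVERY = 3   # the "N" being discussed; tunable

class KeyframedHistory:
    """Operation log with periodic key-frame images, so reconstructing
    a version only replays the operations since the last checkpoint."""

    def __init__(self, original):
        self.checkpoints = {0: list(original)}   # op count -> image
        self.ops = []

    def apply(self, op):
        self.ops.append(op)
        n = len(self.ops)
        if n % CHECKPOINT_EVERY == 0:
            self.checkpoints[n] = self.version(n)  # insert a key-frame

    def version(self, n):
        # replay only from the nearest checkpoint at or before n
        base = max(k for k in self.checkpoints if k <= n)
        img = list(self.checkpoints[base])
        for op in self.ops[base:n]:
            img = op(img)
        return img

h = KeyframedHistory([10])
for _ in range(5):
    h.apply(lambda img: [px + 1 for px in img])
# version(5) replays only ops 4-5 on top of the key-frame at op 3
```

This trades a bounded amount of extra storage (one image per N operations) for a bounded replay cost, which is exactly the balance suggested above.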
> >Just type “open <Appname>” and the app is launched.
>
> Oh, how obvious was that. Not..
Um, this is a command line we’re talking about, not a GUI. “Obviousness” is not the name of the game. If you want “obvious”, use the Mac’s GUI, not its Terminal.
Obviousness is even more important in a command line interface. In a graphical user interface I can easily poke around and figure out what things do. While possible, this is not as easy with a CLI.
“Obviousness is even more important in a command line interface. In a graphical user interface I can easily poke around and figure out what things do. While possible, this is not as easy with a CLI.”
There is no way to be “obvious” when there is only the command prompt. The cues present in graphical interfaces are simply absent. Even previous CLI experience doesn’t help much except perhaps with some conventions like using TAB for command line completion, or knowing the “man” command. It’s pretty much a matter of you either know it or you don’t; one has to rely on documentation and/or training.
The lack of “obviousness” in CLIs is a big part of the reason GUIs exist.
My personal favorite program as far as the UI goes these days is the Gimp. I like how all the major functions are fractured off into their own windows that I can easily manage with the window manager. I like the big ass context menu that comes up when I right-click. I like the plethora of key commands. I like all the little handles for viewing the graphic around the edges of the graphic I’m working on. The only thing I don’t like is that graphics windows need tabs (a la Galeon) so I can stack up multiple graphics that I’m editing (note: I can get this functionality using certain window managers, but it oughta be built in, IMHO).
One thing that puzzles me no end is why, after such a great example, no other GNOME project does it like this. I should be able to yank all of Galeon’s, Pan’s, AbiWord’s, and Gnumeric’s menus and tool bars and put them in a window that can be shaded, iconified, or expanded as necessary. You can almost do this by just dragging them off the window, but it’s not quite enough (needless to say, people who don’t like to work this way should be able to ‘merge’ these into one big window, out of the window manager’s purview).
I like the basic Unix paradigm of working with small programs that talk to each other via the project you are working on, rather than one big monolithic program. To me, that’s what the ‘document centric’ interface means. It means that when I hit ‘save’, the program should be able to talk to my version control program of choice (no need to build it into the filesystem) and the appropriate dialogs should appear. Likewise with ‘open’, ‘revert’, ‘close (commit?)’, etc. Emacs seems to have the right idea about this style of communication.
It’s a shame that the *nix GUI has so far to go before it catches up to the UI principles of its command line.
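The "save talks to your version-control program of choice" idea could look something like this. A minimal sketch, assuming a pluggable hook: the `Document` class, the hook signature, and the example RCS command are all illustrative inventions, not a real editor API:

```python
# Sketch: "Save" delegates to a user-configured version-control hook
# instead of versioning being baked into the filesystem.
import subprocess

class Document:
    def __init__(self, path, vcs_hook=None):
        self.path = path
        self.text = ""
        # vcs_hook: callable invoked after each save, or None for no VCS
        self.vcs_hook = vcs_hook

    def save(self, message="checkpoint"):
        with open(self.path, "w") as f:
            f.write(self.text)
        if self.vcs_hook:
            self.vcs_hook(self.path, message)

def rcs_hook(path, message):
    # Example hook shelling out to RCS's "ci" (check-in); any version
    # control program the user prefers could be plugged in here instead.
    subprocess.run(["ci", "-l", "-m" + message, path], check=True)
```

The application never needs to know which version control system is behind the hook; ‘open’, ‘revert’, and ‘commit’ could each dispatch to their own hook the same way.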
Most people seem to prefer an integrated window, so that the layout of elements on the screen is handled automatically for them, e.g. when they resize a window. You simply have less stuff to move around.
On the other hand, many toolkits have functionality like this built-in. I don’t know about Gnome, but in KDE, you can detach all toolbars from the main window or hide them. Sure, it may need some refinement, but in principle it’s there.
BTW, I don’t think The Gimp is document-centric. When you hit the close button on the “main” window, all document windows are closed. In a truly document-centric system, closing the “main” window would do just that and leave all other Gimp windows open. Obviously, you’d need some option to open the “main” window again (possibly via an entry in the context-menu).
“Firstly, if you’ve used computers for more than six months, and become dulled to the pain, you may well be objecting to one or another of the examples. “Hey!”, you’re saying. “That’s not cruft, it’s useful!” And, no doubt, for you that is true. In human-computer interfaces, as in real life, horrible things often have minor benefits to some people. These people manage to avoid, work around, or blame on “user stupidity”, the large inconvenience which the cruft imposes on the majority of people.”
I agree with most of his points. Especially getting rid of the document ‘Save’ idea. So much so, I’ve begun to incorporate that, and a document versioning scheme, into mozOffice (http://mozoffice.sourceforge.net). We are so used to the way things work, and have so forgotten the learning curve we went through, that we forget most people out there are not power users.
Don Cox said:
“Personally, I prefer manual saving. How hard is it to hit Ctrl-S after a couple sentences?”
At work, we released a program that changed a process to include two extra mouse clicks. And you know what happened? The user base absolutely went nutty! They complained of inconvenience, extra work, and our stupidity. In the back of my mind, I was thinking, “Well, geez, it’s only two mouse clicks.” See my point? Why should the user have to press Ctrl+S (which can change from application to application, if users even know about hotkeys, which most don’t)? Or go File->Save? Or find the little disk icon on the tool bar?
Why ask users to do anything that can be handled programmatically?
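Handling it programmatically can be as simple as persisting on every change instead of waiting for Ctrl+S. A minimal sketch (the class name is invented; a real editor would debounce or batch the writes rather than hit the disk on each keystroke):

```python
# Sketch: remove the "Save" step by persisting every change as it happens.
import os

class AutoSavingDocument:
    def __init__(self, path):
        self.path = path
        self.text = ""

    def edit(self, new_text):
        self.text = new_text
        self._persist()          # no Ctrl+S: every edit reaches the disk

    def _persist(self):
        # Write to a temp file first, then rename over the original, so
        # a crash mid-write never leaves a corrupted document behind.
        tmp = self.path + ".tmp"
        with open(tmp, "w") as f:
            f.write(self.text)
        os.replace(tmp, self.path)
```

The user never sees a save step at all; the on-disk copy is simply always current.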
Nicolai, I don’t think the Gimp is particularly document centric either. It’s the illusion of it that I like.
That said, I think it’s possible to be too document centric–something like the “lifestreams” UI I read about. I think I would like something like that as a view option in a more general file manager/launcher thingy (it’s a great idea for a launcher UI) but I don’t think everyone would want to have to use it all the time.
The power of computers lies in their flexibility and adaptability. I think it’s a mistake to limit the expression of this flexibility because of some currently fashionable notion about ease of use. If we make our applications modular, communicative, and oriented to the task at hand, I think we can let the interface evolve to suit us as individual users.
(The very word “document” implies that the normal data file is text.)

Suppose I use a paint program to create an image. I may want to save it with either lossless or lossy compression, or both.

Now I want to use that image in a print publication. Instead of being a whole document, it is now a component in a larger document. I am now applying a different program to that document.

Next day, I want to use the same image in a web site. This time I want to use the JPG version, not the TIFF. This time, the output is a bundle of files and directories, not a single document.

The next day, somebody accesses the web site. They want to view the JPG in a browser. They may want to download it and include it in a print publication.

To me, it makes sense to select your application first, depending on what kind of job you are doing, and apply it to the “document”. The idea that each document can be linked to only one program doesn’t match how I work at all.
I think that you’re 100% correct in this context. I think these kinds of tasks (content creation) will still need to be done in a more “expert” environment, where you can control file types and store multiple versions of a file.
BUT… Maybe not.
I think that the proposed systems (not saving, revisioning and all that) can probably still be applied even to these kind of “expert” processes of content generation, modification, serving, etc. It’s a matter of changing the whole process itself from one end all the way to the other. We have to separate you from your existing process before a new one can appear workable.
It’s a matter of developing the right system, getting it into place, and getting people to learn these new processes. Taking away complexity need not take away a developer’s sense of control over his or her data.
First step: all computer systems and software should fully support one open and standard “super” file type for each kind of data (image, text, audio, whatever).
Imagine: all web sites use only one image type (let’s just use PNG as an example). All software is able to fully manipulate that format. Then we give the content provider an interface to edit the image data (say, a new-wave Photoshop). When the image is done, it is stored in the user’s storage area as an original. If he/she wants a different version (maybe a different color scheme), it’s a matter of telling the software “New Version, Name: Blue Scheme” or something like that.

Finished with the image, the content developer then goes to his/her web site editing tool, accesses the appropriate content, updates changes from the server if need be, and goes about making changes. Perhaps he/she chooses to “add a new image” (or maybe “change properties of existing image”). Maybe he/she wants to change the compression level (the results of the compression settings are previewed in real time). Then press “Update server side” and the intelligent server drops the HTML file and any other updated materials (like the updated image) into the appropriate location.
The content creator still has the local “simulation” to work with but needs not keep track of all the individual files, including variations on image formats and compression types. One super-type is all that’s needed for original storage, while the intelligent server and web page design tools handle all the rest of the garbage transparently.
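The "one master format plus named versions, with derivatives generated on demand" scheme could be sketched like this. Everything here is illustrative (the store, the version names, and the stand-in edit and compression functions); real format conversion would of course be far more involved:

```python
# Sketch of "one super-type original + named versions + on-demand exports".
# The "image" is plain data here; edits and compression are stand-in
# functions, not a real imaging or server API.

class AssetStore:
    def __init__(self):
        self.originals = {}      # name -> master data
        self.versions = {}       # (name, version) -> edit function

    def store_original(self, name, data):
        self.originals[name] = data

    def new_version(self, name, version, edit):
        # e.g. new_version("logo", "Blue Scheme", recolor_blue):
        # only the recipe is stored, not another copy of the file.
        self.versions[(name, version)] = edit

    def export(self, name, version=None, compress=None):
        # Derivatives (format, compression level) are produced on demand
        # from the single stored master -- never kept as extra files.
        data = self.originals[name]
        if version is not None:
            data = self.versions[(name, version)](data)
        if compress is not None:
            data = compress(data)
        return data
```

The content creator deals only with originals and named versions; the “garbage” of per-format, per-compression files exists only transiently, at export time.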
In your example of someone viewing the image on the web and wanting to use the image in a print publication, well, they can’t unless they want to accept the poor quality they will get with a screen-resolution image. In that case, they just drag and drop or copy and paste this image into their page-layout environment. If they really want a print-resolution image, they contact the creator for the original material.
It’s not that we’d be abandoning different file versions or types. We just need to develop a new process to manage the data. You would have to have open standards for documents and you would stop thinking so much about which program creates or edits those documents. Something that works universally instead of only on this or that OS or hardware. Maybe the OS is a system of plug-ins like Eugenia suggested (good idea, horrible chances of it being made any time soon, sadly). Instead of companies providing competing “total solutions” they provide competing plug-ins to manage data in different ways. Surely the sheer number of competing products would be reduced, since the file format lock-in would be eliminated and everyone should be saving and reading data the same way (at least to the eyes of the viewer).
Such a system of standards… that’s the real problem. MS likes to think that computers would be 100% simple if they controlled EVERY aspect of them. Maybe they have the right goal in mind (if you believe they really want to make things better), but I won’t trust ONE corporate entity to be the provider of that perfectly integrated system (especially since MS, as an example, cannot even properly integrate its own products together without adding complexity and intermittent problems).
I know… how to get all these new systems developed, agreed-upon and into place??? 🙂
“I agree with most of his points. Especially getting rid of the document ‘Save’ idea. So much so, I’ve begun to incorporate that, and a document versioning scheme, into mozOffice (http://mozoffice.sourceforge.net). We are so used to the way things work, and have so forgotten the learning curve we went through, that we forget most people out there are not power users.”
I think he missed something when he was going through that whole tirade about document saving, though. By default, Office (and I’m sure most other office application suites) will save the current document every 5 minutes (the interval can be set in the options dialog), usually with the first line of text as the name (which could be cryptic depending on what you’re working on, but is better than some random naming scheme). If you’ve already saved the document, it uses the document’s name plus some odd characters, and it deletes this autosave copy when you save the actual document; the copy remains on disk, though, if the application or OS crashes.
Frankly, saving something to the desktop as the article implied adds extra steps for me: drilling down from the root of the file system to the folder I want it saved under (the desktop is the root of the file system in Win2k/XP), then using either a cut/paste or drag-and-drop operation (which includes the other cruft he mentioned; I usually use right-click-drag. As an aside, I never use menus the way he says they’re meant to be used, i.e. click-hold-release on an item). I’d much rather use the save dialog, which will usually drop me into the documents folder, than have crap stuffed onto my desktop for me to filter through at a later time. Besides that, the current Windows interface guidelines suggest that all saved files go in the documents folder, so automatically saving files onto the desktop would violate the guidelines. Whether you consider the guidelines right or not, it makes the system as a whole easier to use if application behavior is consistent from one app to the next.
“Obviousness is even more important in a command line interface. In a graphical user interface I can easily poke around and figure out what things do. While possible, this is not as easy with a CLI.”
The most obvious thing in the world for a CLI to do is provide useful information when the user types either ‘help’ or ‘?’. It doesn’t matter what the conventions are for the system, whether you have man or whatever, a user that’s never used the system is likely to consider both of these options before much else. Entering an invalid command should also refer to one of those as an available option. Applications should also provide helpful information if the user inputs invalid switches at the command line, or provides no switches or data when the application expects them. Those are all fairly simple things to ask, even if typing help simply brings up the man pages for the shell in use.
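The convention just described is easy to sketch: answer "help" and "?", and steer every invalid command toward them. The command set below is purely illustrative:

```python
# Sketch of a CLI that honors the "help"/"?" convention and points
# unknown input back at it, instead of just failing silently.

COMMANDS = {
    "list": "show the files in the current directory",
    "quit": "leave the program",
}

def respond(line):
    word = line.strip()
    if word in ("help", "?"):
        return "\n".join(f"{name} - {desc}" for name, desc in COMMANDS.items())
    if word in COMMANDS:
        return f"(running {word})"
    # Invalid input should refer the user to help, not just error out.
    return f"unknown command '{word}'; type 'help' or '?' for a list"
```

Even if `help` does nothing more than bring up the man page for the shell in use, a first-time user gets a foothold instead of a dead end.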
Yes, yes, I know. MS already has Autosave. But the concept of ‘Save’ is still there. That is what mpt was talking about. Sure, you should be able to save a copy, but the document itself should be perpetual.
And you’re right. Automatic saving of a document to the Desktop is a pain in the neck. I keep only a small number of icons on my desktop. So, what I would like to see is this: if you hit ‘New’, you are prompted to select your folder and a document name. If you just want a ‘scratch pad’, there would be a button to bypass those two steps, and the perpetual doc would be disabled until the user selected ‘Version’, at which point a file and path would be established.
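That workflow could be sketched roughly as follows. A toy model only, with invented names: a document starts as a scratch pad with no path, and versioning switches on only once the user picks ‘Version’:

```python
# Sketch of the "scratch pad until the user picks 'Version'" workflow.

class PerpetualDoc:
    def __init__(self, path=None):
        self.path = path            # None means scratch pad
        self.versions = []          # persisted snapshots
        self.text = ""

    @property
    def perpetual(self):
        # Versioning stays disabled until a file and path are established.
        return self.path is not None

    def set_version_path(self, path):
        """The user chose 'Version': establish a file and path."""
        self.path = path

    def snapshot(self):
        if not self.perpetual:
            raise RuntimeError("scratch pad: pick 'Version' to enable saving")
        self.versions.append(self.text)
```

Nothing ever lands on the desktop; a document either has a home the user chose up front, or it is explicitly throwaway until promoted.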
Just my thoughts…
Chris
“Yes, yes, I know. MS already has Autosave. But the concept of ‘Save’ is still there. That is what mpt was talking about. Sure, you should be able to save a copy, but the document itself should be perpetual.”
I think what you have in mind is what is called “orthogonal persistence”. In effect, there is no hard drive space as such; it is all allocated as virtual RAM. Then all your data is permanently in “memory”.

This sounds like a neat revolution to start with, but IMO will end up with some over-complicated system for indexing and accessing data. For example, I have several thousand fonts kept in a reserve directory (because the DTP programs don’t provide good requesters for selecting one font from several thousand). If I want one to become available, I install it in the active Fonts directory.
In other words, a hierarchy of accessibility is useful. You do want to store things away for possible use later, not have everything out on your desk at once.