Mark Weiser wrote a fascinating article questioning just how desirable new computing environments, like VR, “AI” agents, and so on, really are. On the topic of “AI” agents, he writes:
Take intelligent agents. The idea, as near as I can tell, is that the ideal computer should be like a human being, only more obedient. Anything so insidiously appealing should immediately give pause. Why should a computer be anything like a human being? Are airplanes like birds, typewriters like pens, alphabets like mouths, cars like horses? Are human interactions so free of trouble, misunderstandings, and ambiguity that they represent a desirable computer interface goal? Further, it takes a lot of time and attention to build and maintain a smoothly running team of people, even a pair of people. A computer that I must talk to, give commands to, or have a relationship with (much less be intimate with), is a computer that is too much the center of attention.
↫ Mark Weiser
That’s one hell of a laser-focused takedown of “AI” tools in modern computing. When it comes to voice input, he argues that it’s too intrusive, too attention-grabbing, and a good tool is supposed to be the exact opposite of that. Voice input, especially when there are other people around, puts the interface at the center of everyone’s attention, and that’s not what you should want. With regards to virtual reality, he notes that it replaces your entire perception with nothing but interface, all around you, making it as much the center of attention as it could possibly be.
What’s most fascinating about this article and its focus on “AI” agents, virtual reality, and more, is that it was published in January 1994. All the same questions, worries, and problems in computing we deal with today were just as much topics of debate over thirty years ago. It’s remarkable how you could copy and paste many of the paragraphs written by Weiser in 1994 into the modern day, and they’d be just as applicable now as they were then. I bet many of you had no idea the quoted paragraph was over thirty years old.
Mark Weiser was a visionary computer scientist with a long career at Xerox PARC, eventually becoming PARC’s Chief Technology Officer in 1996. He coined the term “ubiquitous computing” in 1988 – the idea that computers are everywhere, in the form of wearables, handhelds, and larger displays – very prescient for 1988. He argued that computers should be unobtrusive, get out of your way, and help you get things done that aren’t managing and shepherding the computer itself, and most of all, that computers should make users feel calm.
Sadly, he passed away in 1999, at the age of 46, clearly far too early for someone with such astonishing forward-looking insight into computing. Looking at what computers have become today, and what kinds of interfaces the major technology companies are trying to shove down our throats, we have clearly strayed far from Weiser’s vision. Modern computers and interfaces are the exact opposite of unobtrusive and calming, and often hinder the things you’re trying to get done more than they should.
I wonder what Weiser would think about computing in 2025.

I loathe the idea of anthropomorphizing computers. Tools can be invisible, as he puts it well in the article, but they should remain tools. There’s a huge complexity difference between a walking cane or eyeglasses and a computer: the eyeglasses and the walking cane ALWAYS work, and computers fail.
My boss is pushing hard for us to use Copilot, so I tried it today on some scripts I was trying to refine. It kept giving me suggestions for improvement when I asked for none – I just wanted to find out what the problem was. So I asked it not to give any suggestions or pat me on the back about how awesome my idea is, and to just point out the problem. It turned out that the code I pasted was fine. The problem was an open quote left unclosed 10+ lines earlier, which I couldn’t see anymore after working on the problem for many hours. =)
Now… I digress.
A computer should be a reliable tool, always behaving as expected. When you pick up a screwdriver, a soldering iron, or a drill, you don’t expect the shape of the tool or its basic paradigms to change. No one goes around inverting screws or deciding that clocks should go counterclockwise now. You can slowly evolve the interfaces and find better ideas, but the fundamentals should remain. An “X” should always close a window. Shutdown should always shut down and give me a fresh start – it should not be a hybrid shutdown or hibernation or deep standby or whatever.
Also, status messages should be useful. “We are upgrading your computer”. Who is WE? What is happening? Is it downloading a file? Is it installing something? If it freezes or fails, I want to know what the last step before the failure was. And what’s the deal with the spinning wheel on webpages? It keeps spinning even when the page isn’t loading anymore! (Plus it eats 40W+ of GPU power, depending on which GPU you have in a desktop, to convey no useful information.)
VR could even be fine, but you need to know what is going on, and it has to be consistent, useful, and precise. A keyboard is precise. A mouse is precise. It allows me to interact with the computer in a transparent way (like a walking cane). Waving my hands while wearing VR goggles triggers the command I expect maybe 80% of the time. This is also why I never used Siri or the like to manage even a simple shopping list or basic calendar entries. If it fails 3 out of 10 times, then I’d rather just unlock my phone and set what I want. Also, I tend to type faster than I speak, so voice commands never made sense to me anyway.
And I can’t invoice anyone for my wasted time when I turn my computer on and find out that a program suddenly behaves in a very different way, and I need to relearn it after an update. Ugh.
So please, computers should be tools.
I’m way more worried about human beings behaving like computers.
Acting human has long been a goal for artificial general intelligence. But at the same time, many of us see and use AI as a tool and not merely as a human substitute. When I have work to do, I want tools that help me solve my problems – being human-like isn’t a criterion I care about. Where LLMs shine is that those problems can be expressed in human terms, without needing to translate our thoughts and instructions for the computer. As such, LLMs do offer significant value in terms of human interaction, but LLMs aren’t the be-all and end-all of AI tooling.
LLMs can actually be quite bad at performing specific tasks themselves, but as an interfacing layer, I think LLMs may prove to be the ultimate Rosetta Stone of AI, connecting all of our tools together and exposing them in a friendly way.
O/T:
Recently, every time I log in to OSNews, I am forced to authenticate via email. Furthermore, this process forces me to click on a third-party link at “…sendibt2.com”, which appears to be a third-party tracking farm. I’m not sure if OSNews has made this change intentionally, but is it possible a plugin got installed without anyone realizing it’s doing this?
Looking into it. No idea.