Mark Weiser has written a really interesting article about just how desirable new computing environments – VR, “AI” agents, and so on – actually are. On the topic of “AI” agents, he writes:
Take intelligent agents. The idea, as near as I can tell, is that the ideal computer should be like a human being, only more obedient. Anything so insidiously appealing should immediately give pause. Why should a computer be anything like a human being? Are airplanes like birds, typewriters like pens, alphabets like mouths, cars like horses? Are human interactions so free of trouble, misunderstandings, and ambiguity that they represent a desirable computer interface goal? Further, it takes a lot of time and attention to build and maintain a smoothly running team of people, even a pair of people. A computer that I must talk to, give commands to, or have a relationship with (much less be intimate with), is a computer that is too much the center of attention.
↫ Mark Weiser
That’s one hell of a laser-focused takedown of “AI” tools in modern computing. When it comes to voice input, he argues that it’s too intrusive and too attention-grabbing, and a good tool is supposed to be the exact opposite of that. Voice input, especially when there are other people around, puts the interface at the center of everyone’s attention, and that’s not what you should want. With regard to virtual reality, he notes that it replaces your entire perception with nothing but interface, all around you, making it as much the center of attention as it could possibly be.
What’s most fascinating about this article and its focus on “AI” agents, virtual reality, and more, is that it was published in January 1994. All the same questions, worries, and problems in computing we deal with today were just as much topics of debate over thirty years ago. It’s remarkable how you could copy and paste many of the paragraphs written by Weiser in 1994 into the modern day, and they’d be just as applicable now as they were then. I bet many of you had no idea the quoted paragraph was over thirty years old.
Mark Weiser was a visionary computer scientist with a long career at Xerox PARC, eventually becoming its Chief Technology Officer in 1996. He coined the term “ubiquitous computing” in 1988 – the idea that computers are everywhere, in the form of wearables, handhelds, and larger displays – very prescient for 1988. He argued that computers should be unobtrusive, get out of your way, and help you get things done other than managing and shepherding the computer itself – and most of all, that computers should make users feel calm.
Sadly, he passed away in 1999, at the age of 46 – clearly way too early for someone with such astonishing forward-looking insight into computing. Looking at what computers have become today, and what kinds of interfaces the major technology companies are trying to shove down our throats, we have clearly strayed far from Weiser’s vision. Modern computers and interfaces are the exact opposite of unobtrusive and calming, and often hinder the things you’re trying to get done more than they should.
I wonder what Weiser would think about computing in 2025.

I loathe the idea of anthropomorphizing computers. Tools can be invisible, as he puts it well in the article, but they should remain tools. There’s a huge complexity difference between a walking cane or eyeglasses and a computer: the eyeglasses and the walking cane ALWAYS work, and computers fail.
My boss is pushing hard for us to use Copilot, so I tried it today on some scripts I was trying to refine. It kept giving me suggestions for improvement, when I asked for none. I just wanted to find out what the problem was. Then, after I asked it not to give any suggestions or pat me on the back about how awesome my idea is, and to just point out the problem, it turned out that the code I pasted was perfect. The problem was an open quote left unclosed 10+ lines earlier, which I couldn’t see after working on the problem for many hours. =)
Now… I digress.
A computer should be a reliable tool, always behaving as expected. When you get a screwdriver, a soldering iron, or a drill, you don’t expect the shape of the tool or its basic paradigms to change. No one goes around inverting screws or deciding that clocks should go counterclockwise now. You can slowly evolve the interfaces and find better ideas, but the fundamentals should remain. An “X” should always close a window. Shutdown should always shut down and give me a fresh start – it should not be a hybrid shutdown or hibernation or deep standby or whatever.
Also, status messages should be useful. “We are upgrading your computer”. Who is WE? What is happening? Is it downloading a file? Is it installing something? If it freezes or fails, I want to know what the last step before the failure was. And what’s the deal with the spinning wheel on webpages? It keeps spinning even when the page is not loading anymore! (Plus, depending on what GPU you have in a desktop, it eats 40W+ of GPU power to convey no useful information.)
VR could even be fine, but you need to know what is going on, and it has to be consistent, useful, and precise. A keyboard is precise. A mouse is precise. They allow me to interact with the computer in a transparent way (like a walking cane). Waving my hands while wearing VR goggles triggers the command I expect 80% of the time. This is also why I never used Siri or the like to manage even a simple shopping list or basic calendar entries. If it fails 3 out of 10 times, then I’d rather just unlock my phone and set up what I want. Also, I tend to type faster than I speak, so voice commands never made sense to me anyway.
And I can’t invoice anyone for my wasted time when I turn my computer on and find out that, after an update, a program suddenly behaves in a very different way and I need to relearn it. Ugh.
So please, computers should be tools.
Insightful post, thank you for taking the time to share it…
I totally agree.
In the last 10 years, every deep-pocketed tech venture has become useless, unreliable, or a shitshow: Facebook, Twitter, NFTs, crypto, the Metaverse, TikTok, Instagram, Alexa, Google Home, even good ol’ Google search, and now CheatGPT and AI Slop (let’s not talk about the Microsoft Windows 11 trainwreck or the messy state of macOS).
Is there anything in tech done right as of late? Everything is greed-driven bullshit.
I’ve been a programmer for 30+ years and I’ve never been more detached from tech than I am right now. It’s all rotten to the bones.
I’m way more worried about human beings behaving like computers.
Acting human has long been a goal for artificial general intelligence. But at the same time, many of us see and use AI as a tool and not merely as a human substitute. When I have work to do, I want tools that help me solve my problems – being human-like isn’t a criterion I care about. Where LLMs shine is that those problems can be expressed in human terms, without needing to translate our thoughts and instructions for the computer. As such, LLMs do offer significant value in terms of human interaction, but LLMs aren’t the be-all and end-all of AI tooling.
LLMs can actually be quite bad at performing specific tasks themselves, but as an interfacing layer, I think LLMs may prove to be the ultimate Rosetta Stone of AI, connecting all of our tools together and exposing them in a friendly way.
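To make that concrete, here’s a minimal, entirely hypothetical sketch of the idea. Every name in it is made up, and pick_tool() is just a keyword-matching stand-in for a real LLM call; the point is only that the model translates a free-form request into a call against ordinary, deterministic tools, which do the actual work.

```python
# Toy sketch of "LLM as interfacing layer": the model's only job is routing.
# All names here are hypothetical; pick_tool() is a stand-in for an LLM call.

from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Register a plain, deterministic function as a routable tool."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("convert_units")
def convert_units(request: str) -> str:
    # A real tool would parse and compute; this placeholder just echoes.
    return "convert_units would handle: " + request

@tool("search_notes")
def search_notes(request: str) -> str:
    return "search_notes would handle: " + request

def pick_tool(request: str) -> str:
    """Stand-in for the LLM: map free-form text to a registered tool name."""
    if "convert" in request.lower() or "how many" in request.lower():
        return "convert_units"
    return "search_notes"

def handle(request: str) -> str:
    # The deterministic tool does the work; the model only chose the route.
    return TOOLS[pick_tool(request)](request)

print(handle("How many inches are in 3 meters?"))
print(handle("Find my notes about Weiser"))
```

The friendliness lives entirely in the routing layer; the tools themselves stay boring and predictable.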
O/T:
Recently, every time I log in to OSNews, I am forced to authenticate via email. Furthermore, this process forces me to click on a 3rd-party link at “…sendibt2.com”, which appears to be a 3rd-party tracking farm. I’m not sure if OSNews made this change intentionally, but is it possible a plugin got installed without anyone realizing that it’s doing this?
Looking into it. No idea.
Thom,
Btw, thanks for fixing my account issue. It is nice to be back.
Alfman,
I have learned to disable all write actions for LLM tools. Run a command? Ask me first. Edit a file? Show me the diff (see the sketch below). They very easily make a mistake, or completely misunderstand intent.
“Hey, this code does not seem to compile, I have removed the offending parts”
: facepalm emoji :
“Gemini, please do the opposite. The broken code is the better design, but incomplete. I need you to move the rest of the code to that format, not the other way around”
“Sorry, I will do as you said”
This happens all too often. But they are immensely useful when they get the intent right. I can save hours by spending a few minutes babysitting the LLM.
Bonus: Being angry at them sometimes gets me to the solution much quicker, and more correctly. I do not know why, but they work better under pressure.
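For what it’s worth, the “show me the diff, ask me first” gate I mentioned above can be sketched in a few lines of Python. This is just an illustration of the pattern – the function names are made up, and it isn’t any particular product’s API:

```python
# Hypothetical sketch of an "ask me first" gate for LLM write actions.
# The model's output is treated strictly as a proposal; nothing touches
# disk without an explicit human "y".

import difflib
from pathlib import Path

def confirm_write(path: Path, new_text: str) -> bool:
    """Show a unified diff of a proposed edit and ask before applying it."""
    old_text = path.read_text() if path.exists() else ""
    diff = difflib.unified_diff(
        old_text.splitlines(keepends=True),
        new_text.splitlines(keepends=True),
        fromfile=f"{path} (current)",
        tofile=f"{path} (proposed)",
    )
    print("".join(diff) or "(no changes)")
    return input(f"Apply this edit to {path}? [y/N] ").strip().lower() == "y"

def apply_llm_edit(path: Path, proposed_text: str) -> None:
    """Apply an LLM-proposed edit only after human confirmation."""
    if confirm_write(path, proposed_text):
        path.write_text(proposed_text)
        print("Written.")
    else:
        print("Rejected; file untouched.")
```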
I’m not sure I agree, Thom. In the end, it’s a philosophical difference in how to approach tools. People who are “techies”, like us in this discussion, probably agree with you. But I think most people don’t want to have to think about how a tool works or how to use it; it should be that natural.
For example, I just completed a master’s degree in history. Most of my classmates and professors are definitely in this latter category! They want tools, but they are frustrated with the learning curve that using these tools requires. In my digital humanities course we covered a lot of tools and concepts that are familiar to us techies: git, HTML, algorithms, etc. DH is the application of technology to doing history, which can mean creating new presentations like Web sites, or applying new techniques like text analysis. It was second nature to me, having been in the tech world for 25 years, but most of my classmates struggled with this different mentality, which I suppose can ultimately be reduced to science vs. art. They already dislike the idea of history being (re)turned into a science (long story!), and this was just another layer of a tech-oriented mentality that they don’t have.
Robots – machines that can do physical labor for us – are the end goal of tools for humans who think like this. Rather than forcing the user to adapt to the tool, these robots should be ordered about like a human, which is how we evolved to interact with entities that “think”, such as other animals. The person giving orders shouldn’t have to do more than the minimum to adapt to the tool. Obviously, that means the computer driving it must accept human speech, understand context, etc.
Our problem right now is that we’re in the long and painful process towards that end. I don’t have much sympathy for the companies pushing today’s “AI” on us, but I understand their perspective. The problem is that the tools are immature, for now, and so are the users, but these corporations (we cannot ignore profit as a motivator!) see this as the cost society must pay for that end. I disagree completely with their methods, but I understand the mentality, and I think these users would welcome a carefully controlled version of AGI (by which I mean simply computers that can “think” at human levels).