Mark Weiser has written a really interesting article about just how desirable new computing environments, like VR, “AI” agents, and so on, really are. On the topic of “AI” agents, he writes:
Take intelligent agents. The idea, as near as I can tell, is that the ideal computer should be like a human being, only more obedient. Anything so insidiously appealing should immediately give pause. Why should a computer be anything like a human being? Are airplanes like birds, typewriters like pens, alphabets like mouths, cars like horses? Are human interactions so free of trouble, misunderstandings, and ambiguity that they represent a desirable computer interface goal? Further, it takes a lot of time and attention to build and maintain a smoothly running team of people, even a pair of people. A computer that I must talk to, give commands to, or have a relationship with (much less be intimate with), is a computer that is too much the center of attention.
↫ Mark Weiser
That’s one hell of a laser-focused takedown of “AI” tools in modern computing. When it comes to voice input, he argues that it’s too intrusive, too attention-grabbing, and a good tool is supposed to be the exact opposite of that. Voice input, especially when there are other people around, puts the interface at the center of everyone’s attention, and that’s not what you should want. As for virtual reality, he notes that it replaces your entire perception with nothing but interface, all around you, making it as much the center of attention as it could possibly be.
What’s most fascinating about this article and its focus on “AI” agents, virtual reality, and more, is that it was published in January 1994. All the same questions, worries, and problems in computing we deal with today were just as much topics of debate over thirty years ago. It’s remarkable how you could copy and paste many of the paragraphs Weiser wrote in 1994 into the modern day, and they’d be just as applicable now as they were then. I bet many of you had no idea the quoted paragraph was over thirty years old.
Mark Weiser was a visionary computer scientist, and had a long career at Xerox PARC, eventually landing him the role of Chief Technology Officer at PARC in 1996. He coined the term “ubiquitous computing” in 1988, the idea that computers are everywhere, in the form of wearables, handhelds, and larger displays – very prescient for 1988. He argued that computers should be unobtrusive, get out of your way, help you get things done that aren’t managing and shepherding the computer itself, and most of all, that computers should make users feel calm.
Sadly, he passed away in 1999, at the age of 46, clearly way too early for someone with such astonishing forward-looking insight into computing. Looking at what computers have become today, and what kinds of interfaces the major technology companies are trying to shove down our throats, we clearly strayed far from Weiser’s vision. Modern computers and interfaces are the exact opposite of unobtrusive and calming, and often hinder the things you’re trying to get done more than they should.
I wonder what Weiser would think about computing in 2025.

I loathe the idea of anthropomorphizing computers. Tools can be invisible, as he puts it well in the article, but they should remain tools. There’s a huge complexity difference between a walking cane or eyeglasses and a computer: the eyeglasses and the walking cane ALWAYS work, and computers fail.
My boss is pushing hard for us to use Copilot, so I tried it today on some scripts I was trying to refine. It kept giving me suggestions for improvement when I asked for none. I just wanted to find out what the problem was. Then I asked it not to give any suggestions or pat me on the back about how awesome my idea is, and to just point out the problem. It turns out the code I pasted was perfect. The problem was an open quote that hadn’t been closed 10+ lines earlier, and I couldn’t see it after working on the problem for many hours. =)
Now… I digress.
A computer should be a reliable tool, always behaving as expected. When you get a screwdriver, a soldering iron or a drill, you don’t expect the shape of the tool or the basic paradigms to change. No one goes around inverting screws or deciding that clocks should go counterclockwise now. You can slowly evolve the interfaces and find better ideas, but the fundamentals should remain. An “X” should always close a window. Shutdown should always shut down and give me a fresh start – it should not be a hybrid shutdown or hibernation or deep standby or whatever.
Also, status messages should be useful. “We are upgrading your computer”. Who is WE? What is happening? Is it downloading a file? Is it installing something? If it freezes or fails, I want to know what the last step before the failure was. What’s the deal with the spinning wheel on webpages? It keeps spinning even if the page is not loading anymore! (Plus it eats 40W+ of GPU power, depending on what GPU you have on a desktop, to convey no useful information.)
VR could even be fine, but you need to know what is going on, and it has to be consistent, useful, and precise. A keyboard is precise. A mouse is precise. They allow me to interact with the computer in a transparent way (like a walking cane). Waving my hands while wearing VR goggles triggers the command I expect 80% of the time. This is also why I never used Siri or whatever to manage even a simple shopping list or basic calendar entries. If it fails 3 out of 10 times, then I’d rather just unlock my phone and set what I want. Also, I tend to type faster than I speak, so voice commands never made sense to me anyway.
And I can’t invoice anyone for my wasted time when I turn my computer on and find out that a program suddenly behaves in a very different way and I need to relearn it after an update. Ugh.
So please, computers should be tools.
Insightful post, thank you for taking the time to share it…
I totally agree.
In the last 10 years, every deep-pocketed tech venture has become useless, unreliable, or a shitshow. Facebook, Twitter, NFTs, crypto, the Metaverse, TikTok, Instagram, Alexa, Google Home, even good ol’ Google search, and now CheatGPT and AI slop (let’s not talk about the Microsoft Windows 11 trainwreck or the messy state of macOS).
Is there anything in tech done right as of late? Everything is greed-driven bullshit.
I’ve been a programmer for 30+ years and I’ve never been more detached from tech than I am right now. It’s all rotten to the bones.
I found out I still love tech, and if you dig hard enough, you will find nice stuff. Self-hosting has been bliss. Email, Nextcloud, all running from my home office for years. FreeBSD is very nice, and I enjoy that things get better but don’t change for no good reason.
I upgraded my Librem 5 to 5G and a new wifi module, and I am just now 3D printing a new back cover and a TPU protector, because my phone has been dropped way too many times. Via some clever mounting acrobatics, I now basically have a 1TB phone. =) The community is sometimes too passionate and Purism is not necessarily the best at PR, but there’s a lot of passion for the product.
Speaking of printing, the carcasses of upgrades that I did on my 3D printer remind me that, well, not every company sucks and not everyone is all into forced obsolescence.
Windows 11, which I unfortunately need to use, is not so awful when you debloat it (Hyper-V is great!) and look beyond the settings-app schizophrenia. VMs allow me to basically daily-drive 5 OSes depending on what I am doing – always the best tool for a task. Qubes OS is great, and my corebooted ThinkPad is wonderful with it – tech that I feel I own, and it gives me the sense of having a superpower!
If you code, it’s trivial to make nice smart home gadgets that can do useful things for you. I put together a flame detector that beeps at us if we forget the stove lit, and I am now planning to integrate it with Meshtastic and attach a LoRa transmitter to the serial reader of my Librem 5. =)
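For anyone curious, the core of such a gadget really is tiny. Here’s a minimal sketch of the idea in MicroPython – the pin numbers, the sensor polarity, and the alarm threshold are all made up for illustration, so adjust for whatever module and board you actually have:

    # Hypothetical wiring: flame-sensor digital output on GPIO 4, buzzer on GPIO 5.
    # Many cheap flame-sensor modules pull their output LOW when they see a flame;
    # flip the comparison if yours behaves the other way around.
    from machine import Pin
    import time

    flame = Pin(4, Pin.IN, Pin.PULL_UP)
    buzzer = Pin(5, Pin.OUT)

    ALARM_AFTER = 30 * 60      # seconds of continuous flame before we start nagging
    flame_since = None         # when we first saw the flame; None while there is none

    while True:
        if flame.value() == 0:             # flame currently detected
            if flame_since is None:
                flame_since = time.time()
            burning_for = time.time() - flame_since
            buzzer.value(1 if burning_for > ALARM_AFTER else 0)
        else:
            flame_since = None
            buzzer.value(0)
        time.sleep(1)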
It can still be fun! My daily work with the MSFT stack and Azure pains my soul but, once the lights are off at work, there’s tons to play around with.
I’m way more worried about human beings behaving like computers.
I’m worried about humans behaving like humans 😉
Acting human has long been a goal for artificial general intelligence. But at the same time, many of us see and use AI as a tool and not merely as a human substitute. When I have work to do, I want tools that help me solve my problems – being human-like isn’t a criterion I care about. Where LLMs shine is that those problems can be expressed in human terms without needing to translate our thoughts and instructions for the computer. As such, LLMs do offer significant value in terms of human interaction, but LLMs aren’t the end-all be-all of AI tooling.
LLMs can actually be quite bad at performing specific tasks themselves, but as an interfacing layer I think LLMs may prove to be the ultimate Rosetta Stone of AI, connecting all of our tools together and exposing them in a friendly way.
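To make the “interfacing layer” idea concrete, here is a toy sketch in Python – everything in it (the tool names, the canned model reply) is hypothetical. The model’s only job is to turn a free-form request into a structured call to ordinary, deterministic tools we already trust:

    # Toy sketch: the LLM as an interface layer, not as the tool itself.
    # call_llm() is a stand-in for whatever model or API you actually use;
    # the tools are ordinary deterministic functions we already trust.
    import json

    def set_thermostat(celsius: float) -> str:
        return f"thermostat set to {celsius} degrees"

    def add_calendar_entry(title: str, when: str) -> str:
        return f"added '{title}' at {when}"

    TOOLS = {"set_thermostat": set_thermostat,
             "add_calendar_entry": add_calendar_entry}

    def call_llm(request: str) -> str:
        # Stand-in so the sketch runs: a real model would map arbitrary phrasing
        # to a structured tool call instead of returning this canned answer.
        return '{"tool": "set_thermostat", "args": {"celsius": 21}}'

    def handle(request: str) -> str:
        call = json.loads(call_llm(request))        # the model picks tool + arguments
        return TOOLS[call["tool"]](**call["args"])  # deterministic code does the work

    print(handle("make it a bit warmer in here"))   # -> "thermostat set to 21 degrees"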
O/T:
Recently, every time I log in to OSNews, I am forced to authenticate via email. Furthermore, this process forces me to click on a third-party link at “…sendibt2.com”, which appears to be a third-party tracking farm. I’m not sure if OSNews has made this change intentionally, but is it possible a plugin got installed without anyone realizing it’s doing this?
Looking into it. No idea.
Thom,
Btw, thanks for fixing my account issue. It is nice to be back.
Alfman,
I have learned to disable all write actions for LLM tools. Run a command? Ask me first. Edit a file? Show me the diff. They very easily make mistakes, or completely misunderstand intent.
“Hey, this code does not seem to compile, I have removed the offending parts”
: facepalm emoji :
“Gemini, please do the opposite. The broken code is the better design, but incomplete. I need you to move the rest of the code to that format, not the other way around.”
“Sorry, I will do as you said”
This happens all too often. But they are immensely useful when they get the intent right. I can save hours by spending a few minutes babysitting the LLM.
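To be clear, the “ask me first” part isn’t any particular tool’s setting – it’s a discipline. A toy sketch of the kind of gate I mean (entirely hypothetical, in Python):

    # Hypothetical approval gate: nothing the assistant proposes touches the
    # system until a human has seen it and said yes.
    import difflib
    import subprocess

    def confirm(prompt: str) -> bool:
        return input(f"{prompt} [y/N] ").strip().lower() == "y"

    def run_command(cmd: list[str]) -> None:
        print("Assistant wants to run:", " ".join(cmd))
        if confirm("Run it?"):
            subprocess.run(cmd, check=True)

    def apply_edit(path: str, new_text: str) -> None:
        with open(path) as f:
            old_lines = f.readlines()
        diff = difflib.unified_diff(old_lines, new_text.splitlines(keepends=True),
                                    fromfile=path, tofile=path + " (proposed)")
        print("".join(diff))          # show me the diff before anything is written
        if confirm("Apply it?"):
            with open(path, "w") as f:
                f.write(new_text)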
Bonus: being angry at them sometimes gets me to the solution much quicker and more correctly. I do not know why, but they work better under pressure.
This is why you run them in a project that is stored in git and run the commands in Docker (probably with the app’s source bind-mounted into the container). No dependency problems, very little overhead on a Linux laptop/desktop, and progress is tracked in git.
I’m not sure I agree, Thom. In the end, it’s a philosophical difference in how to approach tools. People who are “techies” like us in this discussion probably agree with you, Thom. But I think most people don’t want to have to think about how a tool works or how to use it – it should be that natural.
For example, I just completed a master’s degree in history. Most of my classmates and professors are definitely in this latter category! They want tools, but they are frustrated with the learning curve that using these tools requires. In my digital humanities course we covered a lot of tools and concepts that are familiar to us techies: git, HTML, algorithms, etc. DH is the application of technology to doing history, which can mean creating new presentations like websites, or applying new techniques like text analysis. It was second nature to me, having been in the tech world for 25 years, but most of my classmates struggled with this different mentality, which I suppose can ultimately be reduced to science vs. art. They already dislike the idea of history being (re)turned into a science (long story!) and this was just another layer of a tech-oriented mentality that they don’t have.
Robots, machines that can do physical labor for us, are the end goal of tools for humans who think like this. Rather than forcing the user to adapt to the tool, these robots should be ordered about like a human, which is how we evolved to interact with entities that “think”, such as other animals. The person giving orders shouldn’t have to do more than the minimum to adapt to the tool. Obviously, that means that the computer driving it must accept human speech, understand context, and so on.
Our problem right now is that we’re in the long and painful process towards that end. I don’t have much sympathy for the companies that are pushing today’s “AI” on us right now, but I understand their perspective. The problems are that the tools are immature, for now, and so are the users, but these corporations (we cannot ignore profit as a motivator!) see this as the cost society must pay for that end. I disagree completely with their methods, but I understand the mentality, and I think these users would welcome a carefully controlled version of AGI (by which I mean simply computers that can “think” at human levels).
Having a computer able to speak, use natural language, and perform tasks on its own is an ACCESSIBILITY question. We’re all on this thread, preening and pruning our feathers on what computers shouldn’t do, because we’re only enjoying this site as a group able to use it. If you’ve ever seriously used accessibility tools – let me tell you – the experience is wretched.
Why should someone with motor control issues be required to install 30 pounds of bulky control hardware simply to operate a mouse proficiently? Why should a blind person be required to crawl through a page with damn near unintelligible TTS because the services they need don’t use semantic HTML?
The fact of the matter is, a lot of people need personal assistance because they can’t do it all on their own. Their input device IS OFTEN ANOTHER HUMAN. So a device able to “perceive” using AI breaks down a massive barrier to entry. A home-ready robot gives them a guilt-free care provider (guilt can sometimes be a thing for the handicapped, and robots don’t give the same loss-of-independence feeling as relying on a person).
That an accessibility tool can benefit the general population is simply a massive win all-round. And honestly, AI-based computers will be the future in some very good ways. Instead of taking 30 minutes to book a concert ticket using keyboards, mice, and screens, you could simply ask, maybe taking 30 seconds to confirm the details as a natural voice guides the process. An experience equally good for someone with full faculties as for someone impaired.
That said there are aspects I agree with. I don’t think it’s a good idea – at all – to make a computer act human, and I mean that in the sense that computers should not have personality. Your computer should not “act”. It should not act happy to see you, it should not act sad when it failed, it should not act apologetic – because it’s none of those things. It’s a lie running on x86 silicon. At no point should anyone have an _intimately_ personal computer, because it will never be real.
Mark was concerned that a “relationship” with your computer would, essentially, be a maintenance task. My concern is the opposite; it will not be a maintenance task. The computer will always be happy to see you – no matter how abusive and neglectful the owner is. Someone socially illiterate, who really needs to gain experience with other humans, will be rewarded with success even as they indulge their worst impulses. Why date and interact with people when it’s so much work? “My AI loves my ‘quirks’ and never tells me no.” A user might say “Get me a f**king coffee” and their AI won’t even flinch.
That tech companies are trying to make every computer “intimately personal” with little characters and canned personalities is the death knell. As soon as we get to the point where people are f**king the plastic, we’re done as a species – literally. When your partner is something that can’t reproduce, birth rates will plummet even more sharply, and human interactions will be more strained without an AI filter. Once every person is able to create their own “perfect” lover who makes sure they get asked all the right questions every day, the need for human relationships is dead. Look at Grok – it’s gooner bait. There are people out there RIGHT NOW who are not having HUMAN relationships and interactions because an Nvidia graphics card somewhere in the desert calculated the words “I love you.” THAT is the problem.
Kver,
I find your post extremely interesting, but for the sake of discussion I think we ought to consider that the desire to use AI for relationships may be the consequence of (rather than simply the cause of) a widespread loneliness epidemic. This has been well documented. Even though AI relationships are fake, someone might still find simulated relationships help them cope better with the fact they may not have them in real life. I’m not really advocating for it, but only pointing out this was completely absent from your take.
Let me just throw this here for context…
Warning: if it’s not clear from the url, the link depicts historical dildos.
https://www.thearchaeologist.org/blog/the-surprising-history-of-dildos-humans-have-been-using-them-for-over-28000-years
” I bet many of you had no idea the quoted paragraph was over thirty years old.”
The promises of both VR and AI have long been in development and are slowly becoming useful.
Spoken text as an interface is imprecise, but often good enough, especially if we can get LLMs to have better context (memory), so that shorthand like “this and that” or “John” (the John who fits the topic) works immediately.
Obviously, many technical tasks do need precise language.
Lennie,
In my use of LLMs, I also notice they have a short memory. However, I am not running the models at full precision, and I am not positive whether this could be an artifact stemming from model reductions like quantization. My hardware is a few generations old, so obviously it’s not going to be able to run the best LLM models. It’s probably fixable, but nevertheless, I do notice it as a shortcoming for me.
Yes, spoken languages like English have a lot of ambiguity. This isn’t just problematic for computers deciphering human text; it’s responsible for human failures as well. For now, humans probably retain an edge simply because we have more domain-specific context – sometimes we can come into a project with more intuition about what a specification means as opposed to what it literally says. When you are new to a domain, there’s often a lot of confusion about what the clients are really asking for. There’s a learning curve, and inexperienced LLMs are likely to hit it.
I think that technologically LLMs can eventually get good at this, maybe even better than humans, but they’ll likely need additional domain training. A lot of this would-be training set is proprietary and out of public view. This may pose a challenge for today’s LLMs, since they may not be able to get adequate training from public sources like Wikipedia or the open web.
In non-training (i.e. inference) mode, LLMs are completely stateless, like an HTTP request (sure, the browser can visit a website and include a cookie so the server can load session data when processing the request and use that to personalize the output).
An LLM normally sits behind an HTTP API, and the message you send as a user is combined with the system prompt and the previous messages (from the LLM and the user) into what is called the context; the LLM generates output based on that input. There are some parameters, like temperature, which allow for some variance in how the output is made (0 is almost completely deterministic). So any memory is external and needs to be included in that context somehow beforehand, like the content of the PDF you uploaded to the chat interface, or a fact stored in a database that seems related and is therefore automatically included (RAG).
The hardware isn’t as relevant for good memory management.
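A minimal sketch of that round trip, just to make it concrete – the endpoint URL, model name, and payload shape below are placeholders, not any particular vendor’s API. The server keeps nothing between calls; the client re-sends the system prompt plus the whole conversation every single time:

    # Hypothetical chat client: all "memory" lives in this list on *our* side.
    import requests

    API_URL = "https://llm.example.com/v1/chat"    # placeholder endpoint
    messages = [{"role": "system", "content": "You are a terse assistant."}]

    def ask(user_text: str) -> str:
        messages.append({"role": "user", "content": user_text})
        resp = requests.post(API_URL, json={
            "model": "some-model",     # placeholder
            "messages": messages,      # system prompt + full history, on every call
            "temperature": 0,          # ~deterministic output
        })
        answer = resp.json()["content"]            # placeholder response shape
        messages.append({"role": "assistant", "content": answer})
        return answer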
Lennie,
I wouldn’t say that, because LLMs absolutely do have session state. I believe you are making a subtle point about how LLMs work internally, but IMHO calling them stateless would be as misleading as calling the OSNews website stateless. It’s like claiming a PHP website is stateless because the PHP program itself forgets everything after each request gets processed. That’s true, but we’d be ignoring the fact that PHP programs typically do have access to session variables and persistent stores, keyed by cookies, that hold information across requests.
A “stateless” algorithm + a persistent medium can nevertheless turn the algorithm into an endpoint that remembers stuff.
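In other words, something like this contrived sketch – it’s the same trick a PHP site plays with sessions. The function itself forgets everything the moment it returns, yet the endpoint as a whole remembers:

    # The "algorithm" is stateless; the dict is the persistent medium.
    sessions: dict[str, list[str]] = {}

    def stateless_reply(history: list[str], message: str) -> str:
        # Knows only what it is handed; forgets it all the moment it returns.
        return f"This is message #{len(history) + 1} from you: {message}"

    def endpoint(session_id: str, message: str) -> str:
        history = sessions.setdefault(session_id, [])   # load state by "cookie"
        reply = stateless_reply(history, message)
        history.append(message)                         # persist state for next call
        return reply

    print(endpoint("alice", "hi"))       # This is message #1 from you: hi
    print(endpoint("alice", "again"))    # This is message #2 from you: again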
Why does this matter to you?
Humans also have cells acting as a persistent data store, and this medical case shows us what happens when they don’t work. As a computer scientist this is very fascinating to watch and it reveals so much about human memory and intelligence. It proves that even human intelligence can be made to work transactionally like an LLM.
“The Man With The Seven Second Memory”
https://www.youtube.com/watch?v=k_P7Y0-wgos
I’d certainly agree our hardware configurations might have a lot more room for optimization.