I’ve used thin clients at home for quite a while – both for their intended use (remotely accessing a desktop of another system); and in the sense of “modern thin clients are x86 boxes that are wildly overpowered for what they run, so they make good mini servers.”
Recently, I saw a bulk lot of Sun Ray thin clients pop up on Trade Me (NZ’s eBay-like auction site) – and with very little idea of how many clients were actually included in this lot, I jumped on it. After a 9 hour round-trip drive (on some of the worst roads I’ve seen!), I returned home with the back of my car completely packed with Sun Rays. Time for some interesting shenanigans!
↫ catstret.ch
I was unaware you could still set up a Sun Ray environment with the latest versions of OpenIndiana, and that has me quite interested in buying a few Sun Rays off eBay and following in the author’s footsteps. It seems like it’s not too difficult, and while there’s some manual nonsense you have to do to get everything to install correctly, it’s nothing crazy.
To this day, I firmly believe that the concept of dumb thin clients connected to powerful servers is an alluring and interesting way of computing. I’m not talking about connecting up to servers owned by massive technology corporations – I’m talking about a few powerful servers down in your own basement or attic or whatever, serving applications and desktops straight to basic thin clients all around your house. These thin clients can take the shape of anything, from something like a desktop setup in your office, down to a basic display in your kitchen for showing recipes, setting timers, and other basic stuff – and everything in between.
Sun Rays could ‘hot desk’ using personal smart cards, but of course, in this day and age you’d have your smartphone. The thin clients around your house would know it was you through your smartphone, and serve up the applications, desktop, tools, and so on that you use, but everything would be running on the servers in your house. Of course, my wife would have her own account on the server, as would our children, when they are old enough.
None of this is impossible with today’s tools and computing power, but it wouldn’t be easy to set up. There are no integrated solutions out there to make this happen; you’d have to scrap it together from disparate parts and tools, and I doubt such a house of cards would end up being reliable enough not to quickly become a massive annoyance and time sink. On top of that, we live in a rental apartment, so we don’t even have a basement or attic to store loud servers in, nor are we allowed to drill holes and route Ethernet cabling for optimal performance.
Anyway, there’s no chance in hell any of the major technology companies would build such a complex ecosystem in a world where it’s much easier and more profitable to force people to subscribe to shitty services. In my ideal computing world, though – a server in every home, with cheap thin clients in every room.
Unfortunately I don’t think Wayland supports this sort of remote display connection. And at least in KDE, the RDP implementation is not fully baked yet. VNC could work, of course, but with Wayland I think you’d still end up needing a separate VM for each “session”, since I don’t think (could easily be wrong, haven’t researched it much) you can really start multiple Wayland sessions for different simultaneous users. Happy to be proven wrong.
I do like the idea…but practically, even if I did live in a multi-story house, I can’t think of many cases where this would be that useful. Either I’m gaming, in which case I’m sitting at the main system anyway, or I’m mostly browsing the web, and a tablet/phone works just fine for quick lookup stuff. Especially with bookmark sync and the like these days.
> I don’t think Wayland supports this sort of remote display connection sort of thing
https://gitlab.freedesktop.org/mstoeckl/waypipe
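For the curious, usage is roughly like this (a minimal sketch, assuming waypipe is installed on both machines; “homeserver” is a placeholder, and foot is just one example of a Wayland client):

    # Run a Wayland app on the server, rendered by the local compositor
    waypipe ssh user@homeserver foot

The application runs remotely while its Wayland surfaces are proxied over SSH, so it behaves a lot like X11 forwarding.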
Really? Wayland is so good it has to emulate the stuff it claims to be superior to? It’s like Windows 11 defaulting to Program Manager from Windows 3.1.
Slightly off topic, but I’ve talked previously here and on Mastodon about the HP T620 thin clients I picked up basically for free a couple of years ago. They aren’t quite fast enough to use as a standalone desktop PC (though I’m doing just that with one of them with OpenBSD installed). For what Thom was talking about though, they would make for excellent clients in a home server/client environment. They are passively cooled and use solid state storage, so they’d be perfect to hide away around the house as needed. They can be found on eBay and similar sites for around $25 each, or in bundles that bring the cost down to $10 each or so.
Of course I’m not using mine for that kind of setup because, as Thom said, it would be a hassle to maintain; and as Drizzt321 said, I’d have to be sure to use an X11 WM/DE instead of Wayland, since the latter doesn’t support X11’s seamless networked windowing. No, I’m using them in place of what most people would use a Raspberry Pi for: a Pi-hole DNS server, a slow but functional desktop PC, and a classic console emulator via Batocera Linux.
I wonder if they’d be fast enough to be a dumb Steam streaming frontend for the TV room, with your main machine with its honking GPU in another room somewhere.
More than fast enough I’m sure. I haven’t tried it but they are (on paper) nearly as powerful as a Raspberry Pi 4 which can do Steam streaming with no issues. I have the dual core AMD GX-217GA variants, there were also GX-415GA quad-core units that were effectively twice as performant.
Morgan,
I had lots of issues with Steam streaming on the RPi 5, so much so that I gave up on it. It was just too buggy for the kids to use. A lot of remote play users reported that disabling hardware acceleration fixes a lot of display issues, which turned out to be helpful advice. However, I never managed to consistently fix joystick issues. Some games worked fine; in others the joysticks were totally unusable (only when remote, no problem locally). I actually don’t know if these bugs are strictly on the RPi client side; they could be bugs on the host side with specific games being incompatible. I’m not sure if streaming from a Windows host, which I don’t have, would make a difference. The RPi is running Armbian, so maybe I need to try something else. I bought the RPi specifically for this use case for the kids and was frustrated it didn’t just work out of the box. I ended up buying 30 ft HDMI/USB cables to avoid the bugs in remote play.
Despite having been out for a couple of years now, the RPi 5 is still a bit half-baked when it comes to software support. You might indeed be better off with the official Raspberry Pi OS. A few years ago I built a Steam streaming box out of an RPi 4 and had zero issues with it running the default Raspberry Pi OS and installing Steam Link via apt.
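If I remember right, getting it going was as simple as this on Raspberry Pi OS (package name as per Valve’s Raspberry Pi instructions at the time; things may have changed since):

    sudo apt update
    sudo apt install steamlink

After that, Steam Link appears in the menu and streams from any host on the LAN running Steam.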
> Sun Rays could ‘hot desk’ using personal smart cards, but of course, in this day and age you’d have your smartphone. The thin clients around your house would know it was you through your smartphone,
That is silly when, in most use cases, letting the phone itself be the client would be far more convenient.
We used to be able to set this up in the Mac world (the original iMac being an under-the-radar NC). OS X Server had a nice turnkey GUI option to configure hosting user accounts on the server so you’d have your home desktop wherever you logged in, or you could netboot the whole OS on a driveless Mac.
Back in the day (early 2000s), this was how Purdue University in the US ran their CS labs. Around 20 students would log into CDE desktops on a Sun workstation that had 4 UltraSPARC CPUs and around 2 GB of RAM at the time.
I ended up finding a Sun Ultra desktop for my apartment to complete my homework on back in the day as well. I really miss Sun hardware and operating environments. Such nostalgia!
This makes sense if you want to have one “real computer” for your whole family and administer it alone, knowing your spouse and kids can’t get root.
This just sounds like a minicomputer and terminals with extra steps. It’s hardly the “future” when it’s literally a technology dating back to the earliest days of computing.
Not saying it’s a bad concept, or should be ignored. It’s definitely a concept that could (and maybe should) still be used in modern settings. But the “future” is hardly the future when it was the present for at least 30 years.
Anyway, it’s something pretty trivial to do with a couple of Raspberry Pis, a minimal Linux distro, and some remote desktop sessions, as sketched below. I know many universities and colleges use thin client software to operate remote desktop sessions for their students.
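For illustration, the client end can be as little as this (a sketch, assuming FreeRDP is installed and a kiosk user auto-logs into a bare X session; “homeserver” and “alice” are placeholders):

    # ~/.xinitrc: boot straight into a fullscreen RDP session
    xfreerdp /v:homeserver /u:alice /f +clipboard

A Pi that starts something like this at boot is about as close to a classic thin client as modern off-the-shelf parts get.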
Yes!!! This is how technology should have evolved, and we were actually on the right track in the late 90s. Video conferencing was P2P (i.e. Microsoft’s NetMeeting did not need a server). Remote control software didn’t depend on centralized services. Directory services existed for games, but the games themselves were genuine P2P and typically didn’t rely on centralized servers. P2P file sharing didn’t require companies to provide any infrastructure/bandwidth (legally problematic, but regardless, the technology was proven and it scaled much better than centralized providers could).
But then…
🙁
I honestly think we were evolving fine without centralized services…but the companies saw dollar signs and wanted to shift control away from owner-enabling tech and onto services they control instead. Even today this stupid shift makes it a pain to access files from local shares on my Android phone. This is such an important feature for owner autonomy, yet both Apple and Google refuse to provide native support for it. Innovation that empowers user independence needs to be avoided at all costs. The systematic killing of owner autonomy is taking place everywhere. Even the owner’s ability to set up a local login at install is being curtailed. Many of us are holdouts calling out this crap, but it doesn’t matter, because we are the outsiders now and the onus is on us to throw up the white flag and give in if we want to collaborate with others who’ve already accepted this fate.
A play on the Star Trek meme: resistance is… lonely.
I didn’t do a good job acknowledging that sometimes there are decentralized FOSS solutions for those who seek them out.
This already exists, to some extent – it’s called a web browser.
If you set up a home NAS (you can easily just buy a Synology, for instance; they’re still by far the most user friendly despite the recent hard drive debacle), it’s possible to easily install all the web-based applications you need (even the ones that aren’t officially supported can usually be installed as Docker packages). The Synology “desktop” UI itself acts as a launchpad for the apps, and it supports multiple users, although it won’t be easy to get the non-Synology apps to work with single sign-on. Regardless, for many of the use cases listed – office work, recipes, media sharing – this is a perfectly adequate way of centrally providing apps and data storage to an entire household without resorting to cloud providers.
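As an example, getting an unsupported app running usually boils down to one command over SSH (a sketch; the image name, port, and volume path are purely illustrative, and Docker has to be enabled on the NAS first):

    # Run a self-hosted recipe manager that survives NAS reboots
    docker run -d --name recipes \
      --restart unless-stopped \
      -p 8080:8080 \
      -v /volume1/docker/recipes:/data \
      some-recipe-app:latest

Synology’s Container Manager then shows the running container in the same desktop UI.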
Moochman,
That’s a fair point, especially in the context of the article’s focus on thin clients. Although I actually like having local applications for things and not just using the web browser for everything.
The people curious about this type of architecture have no idea why oldies like myself put the kibosh on it: how much work it took to deliver the tailored experience everybody needs from a centrally managed, big-brother-type resource. Ever been in one of those organisations where the boss has everybody organise their desk the same way?
@Alfman
I prefer native applications but I am surprisingly happy with full virtual desktops via a web browser.
When using a window manager that has workspaces, I sometimes make one or more of the workspaces full-screen web browser windows to a remote desktop.
Sometimes I use these remote desktops for long-running tasks. Perhaps I am converting a large video to AV1 and it will take many hours, or maybe I am running an AI task or a large compile. I can switch to a virtual desktop which is actually a VM running on my more powerful home server. Not only can I switch back and forth to this desktop to check on progress, but I can view this desktop from another computer, or shut down my laptop without interrupting the long-running session. Another “long-running” task might be torrents, which of course I prefer native clients for. Not only can the “torrent” desktop be long-running, but from the same remote desktop I can use web browsers and other desktop apps related to this task. It has all the convenience of working on a local desktop, with the benefit of being the same experience from any of my environments, and being able to be left running if I have to go do something else or even power down the computer I was working on.
But another kind of “long-running” task for me is one that I come in and out of. I may be running a hobby project on a remote desktop, and I may be using native applications for that. Perhaps I have a spare 45 minutes. It is nice to jump on a remote desktop and find things as I left them. If I have to stop, it is nice to leave things as they are without having to clean up. Admin tasks like finance can be handled this way as well.
An example of pure playing would be using a remote desktop for the Ladybird browser. I like to keep tabs on the progress of Ladybird, so I have a virtual desktop just for it. I can jump into it, pull down the latest code via git, maybe read the log to see what has changed, and kick off a build of the latest code. Then I can go back to something productive. When I get another bit of free time, I can switch back to that desktop where the newly built version of Ladybird will be waiting for me. One of the things I do in Ladybird is visit sites like OSnews, so a lot of the comments you read from me are written in Ladybird running on a remote virtual machine via a full-screen web browser window. So, I am running a remote browser in my browser. I can step away from this Ladybird desktop mid-comment if I need to and it is all just sitting there waiting for me to come back. In the meantime, it is also out of the way while I do something else.
When I travel, I can access these remote desktops that are running in my home. So, I can bring a low-end laptop (think light and expendable) but get access to all of these remote desktops including my home NAS and fast Internet. The “local” Internet just has to be fast enough to stream the screen. I can also be confident in the security of my home network when considering the data that I am interacting with since, for my local laptop, it is all just pixels. As long as I am not running Recall, it is going to be hard to spy on. I did this over the free WiFi at an airport just a few days ago.
I could be using RDP or something else but accessing Proxmox VMs via a web browser works so well that I cannot be bothered to set it up. Even not having to have an RDP client everywhere I want to use this is nice. Everything has a web browser on it these days.
Anyway, this is all way more than people want to know about my setup. But my point is, if you like the command line and regular desktop GUI apps, remote access over the web can be very powerful and pleasant.
Server in every home? Yes. Thin client for everything? No.
Centralizing compute power is the same bad idea as IPv4 putting fragmentation logic on the routers, which implied checksumming logic on routers as well. IPv6 does path MTU discovery ONLY on the client side.
I used a Sun SPARC thin client at uni; it was slow and ugly, and completely uncontrollable by the end user.
What should the home server be for? Data storage and other stuff that should be “always online” (for instance personal mail server, jabber server…)
Serafean,
Agree!
Also, I think we should be flexible with the “server in every home”. If someone wants to store their data at a friend or relative’s house, that’s fine; technology should make it so that it doesn’t matter. The point is to have the freedom to choose where your data is, and it should be fully encrypted regardless.
It would be awesome to live in a world where all software/services evolved from this concept….but this does not seem to describe the vendor locked world we live in today. 🙁
It’s really hard.
I do self-host everything: mail, jabber, nextcloud, cgit, homeassistant, tvheadend; just yesterday I brought up an Ampache instance.
In my ideal world I’d host all this on the router (I do have the beginnings of an experiment with the Turris Omnia – homeassistant, tvheadend)
Serafean,
I self host a lot of services too.
I don’t think it needs to be intrinsically difficult. Ideally we’d have a standards-based application server platform that all software publishers could target out of the box. There’s no reason that installing/using self-hosted applications should be any more difficult than installing/using mobile applications. It would be just as easy to self-host as it would be to use a centralized platform. The barrier to this isn’t technological, but business models. Corporations have learned that the money is not in customers who are free to leave or self-host, it’s in having customers who are dependent and don’t have good migration options. This is why things are the way they are. I don’t know how to fix the financial incentives that encourage corporations to control us.
Isn’t this what Dell does with ThinOS and their thin clients?
The littlest of the little guys: https://www.youtube.com/watch?v=LADsGBvB_ns
I’ve contemplated this idea over and over again, cuz we somehow seem to have a lot of laptops in the house, and I want to be able to use a spare one if the kids are using my daily driver. But the challenge becomes — what happens when I want to take said daily driver or spare on the road with me, possibly in a place with expensive or no connectivity? Suddenly that thin client becomes a paperweight.
I’m curious if anyone has thoughts. My naive solution would be to replicate the home folders from the server to the client, then frequently rsync (maybe triggered by inotify) from logged-in sessions back to the server. That way, the client is still usable without the server, and hopefully sync conflicts don’t happen too often. Oh, and I guess LDAP for logins or something.
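For what it’s worth, here is a minimal sketch of that sync loop (assuming inotify-tools and rsync on the client and SSH access to the server; “homeserver” and the paths are placeholders):

    #!/bin/sh
    # Block until something changes under $HOME, then push it to the server
    while inotifywait -r -q -e modify,create,delete,move "$HOME"; do
        rsync -az --delete --exclude '.cache' "$HOME/" "homeserver:/srv/homes/$USER/"
    done

You would still want a server-to-client sync at login, and something smarter than --delete to avoid clobbering edits made from another machine, but that is the shape of it.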
I have had good luck with TwinGate though things like Tailscale work equally well:
https://www.twingate.com/
In a nutshell, I can be on my home network from anywhere.
@skeezix
Apologies. I re-read your comment and I realize you are talking about disconnected workflow. If you are talking about files then, yes, the solution would be replication.
I use remote VMs as remote desktops. If I was expecting to be totally disconnected, I guess one option would be to copy the VM image itself to the machine I was taking with me. That would enable me to launch the full desktop in a VM locally, though with reduced performance compared to what I would get at work or at home.
Not really the same thing but a big part of my home workflow is simply Proxmox and I can do a lot of what Thom describes.
I have several “desktops” running as virtual machines on a central server and can access the full-screen GUI for any of them via a web browser. It works well enough that I sometimes get caught up working in a fullscreen VM and forget that I am not actually working on the local computer. Just a few days ago, I stuck a USB stick into my laptop and was wondering why it did not show up in the file manager, when I realized that what I was looking at was a remote desktop being served up by Proxmox.
The actual computer I am using can be old, like the 2009 Macbook Pro I am typing on, but I get the processing power of the machine I am connected to. Not just the CPU but the storage and network speeds as well when dealing with large files. The WiFi on this computer is quite slow but I forget all about that when using a VM that has access to wired high-speed networking.
The screen, keyboard, and trackpad on this old machine are still a joy to use so, overall, the experience is quite pleasant.
I find that I create VMs for different tasks. For example, I created one recently for a hobby project. I do not have much time to work on it, so it is nice that, when I do, the desktop (including open applications and files) is waiting exactly as I left it. I can also get that desktop from any machine in the house (and I have several). Using things like TwinGate, I can even get access to that desktop (running locally in my home) from anywhere in the world. I was on the other side of the country two days ago. After the family had all fallen asleep, I pulled up my hobby desktop and spent an hour on it before bed.
Very handy.