Surprisingly, yes! It’s hard to judge how bad the performance really is, since it’s in a virtual machine, but all the software that I tested was definitely usable. It’s somewhat slow, but that’s exactly what you’d expect. As we used a lot of unsafe hacks (disabling dependency and file conflict checking, for instance) to get this to actually work, I wouldn’t recommend using this system for anything other than proving it’s possible.
Now is this useful? The short answer is no. The long answer is also no. I can think of exactly zero uses of this experiment (and I must be pretty crazy for doing it).
This is the kind of nonsense computing I can get behind.
So sort of the opposite of “rm -rf /” in a vague kinda way.
I’d love to see a YouTube channel devoted to such things. We often see garbage like “I put 1,000,000 orbeez in a pool and you’d never believe what happened!” so why not a series of experiments like this?
I get an “Unable to connect” using the link.
As for a use case, what about a distro designed to work this way? All the packages would be preinstalled upstream in the distro’s file system, and end-user installations would retrieve them via OpenAFS or some other distributed caching file system. During a network outage you could obviously only access the cached portions, but think about it: you’d never have to install anything, since everything would already be preinstalled upstream. The distributed file system would just synchronize it to your machine. And if you have a network of machines, they could all take part in the distribution, rapidly and efficiently deploying updates.
Maybe a silly idea, but food for thought 🙂
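For what it’s worth, the idea above could be sketched with stock tooling. This is only a rough illustration under made-up names: the cell `example.distro.org` and its volume layout are hypothetical, and a real distro would need far more plumbing (read-only snapshots, release volumes, and so on).

```shell
# Hypothetical sketch: the distro publishes its entire /usr tree in an
# OpenAFS volume, and clients mount it instead of installing packages.
# The AFS cell name and paths below are invented for illustration.

# Install and start the OpenAFS client (Debian/Ubuntu package names)
sudo apt-get install openafs-client
sudo systemctl start openafs-client

# Bind-mount the upstream-maintained /usr over the local one.
# OpenAFS caches whatever files you actually touch, so during a
# network outage only those cached portions remain readable.
sudo mount --bind /afs/example.distro.org/usr /usr
```

The nice property is exactly what the comment describes: “installing” software becomes a no-op, because the upstream file system already contains it and the client-side cache fills in lazily as you use things.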