Red Hat’s Havoc Pennington has announced the “Stateless Linux” project for Fedora, an interesting new philosophy for how software should be developed and behave. Here is the PDF with all the details.
It’s an interesting idea, and it doesn’t even seem that insane to implement. However, coming from the perspective of a desktop user, I can’t really see why I would want this. I can imagine it would be useful for large network deployments, though.
Red Hat is primarily a server company, so there is no reason to put this news into a desktop perspective at all.
There are lots of good qualities to stateless Linux listed in the overview that will also benefit regular desktop users. For example, the point about users never needing root. There’s an example of plugging in a printer: the printer should be automatically configured right there and then. And why shouldn’t it be? It’s not as if you plug in a printer by accident. Yet this is not the case today.
Same thing when you plug into a network: it should Just Work. Those who follow the GNOME developers list will have heard of the coolness that is Red Hat’s NetworkManager, or read http://lists.gnome.org/archives/desktop-devel-list/2004-August/msg0…
“However, coming from the perspective of a desktop user, i can’t really see why i would want this.”
You could put a computer in your children’s room(s), and not worry about them breaking into it and doing whatever they want when they turn 12 and know Linux like the back of their little hands. After all, there’s no local root password for them to change.
It could also revitalize old desktops – pop in the live CD, tune your server a bit, and you now have a nice, fast computer again.
Stateless sounds like it could also be used as a way to carry your data from computer to computer without having to do it physically. In a house with a lot of computers that don’t “belong” to people (we have 6 or so), that’s damned handy.
There are some good reasons for it right offhand, I think.
(Havoc needs to work on getting indirect rendering working, though.)
-Erwos
1) Windows is a competitor to Red Hat’s Linux. You think they shouldn’t mention how their setup is better than their competitor’s? As if Microsoft doesn’t mention Linux?
2) Talk about rehashing features. Is the Volume Shadow Copy Service (VSS) in Windows 2003 the first time NT has gotten filesystem snapshots? Because many UNIX OSes have had this feature for ages. Linux, which got it with LVM2, was actually quite late; Solaris and IRIX have had it for years.
3) VSS and Ghost have absolutely no relation to what “Stateless Linux” is supposed to do. Maybe you shouldn’t have stopped reading the paper at the first page…
For the last couple of years I’ve been keeping track of a German insurance company that deploys several thousand cached Linux clients (on laptops) to its offices around the country. They also actually allow their people to work from home half of the time, allowing the company to grow without requiring additional office space.
The clients update themselves when they detect that the server has an updated distribution available and they are connected via LAN. They can also operate over GPRS for authentication and data exchange with the servers. All authentication is against an LDAP server, and the clients use a smart card for strong authentication.
Currently they deploy a home-grown Linux distribution, and just recently (around Q1 2004) they made a deal with Red Hat to develop a business desktop for them.
The ideas I see in this document mesh perfectly with this deployment scenario. I believe that Red Hat is picking up all the right market signals here. This is a very big deal for enterprise deployment, and therefore excellent leverage for their server products.
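A minimal sketch of the self-update behaviour described above, assuming an rsync-able image and a published version file; the server URL, paths, and interface name here are all hypothetical, not the company’s actual setup:

```python
#!/usr/bin/env python3
"""Hypothetical client-side update check: compare the version the server
publishes against the locally cached one, and only pull the new image
when connected over LAN (never over a slow GPRS link)."""

import subprocess
import urllib.request

VERSION_URL = "http://updates.example.com/dist-version"    # hypothetical
LOCAL_VERSION_FILE = "/var/cache/dist-version"
RSYNC_SOURCE = "rsync://updates.example.com/dist-image/"   # hypothetical
IMAGE_CACHE = "/var/cache/dist-image/"

def on_lan() -> bool:
    """Crude heuristic: a default route via the wired interface means LAN."""
    routes = subprocess.run(["ip", "route", "show", "default"],
                            capture_output=True, text=True).stdout
    return "eth0" in routes

def server_version() -> str:
    with urllib.request.urlopen(VERSION_URL, timeout=10) as resp:
        return resp.read().decode().strip()

def local_version() -> str:
    try:
        with open(LOCAL_VERSION_FILE) as f:
            return f.read().strip()
    except FileNotFoundError:
        return ""

if __name__ == "__main__":
    new = server_version()
    if on_lan() and new != local_version():
        # rsync transfers only the files that changed between versions.
        subprocess.run(["rsync", "-a", "--delete", RSYNC_SOURCE, IMAGE_CACHE],
                       check=True)
        with open(LOCAL_VERSION_FILE, "w") as f:
            f.write(new)
```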
Stateless Linux is an attempt at creating another thin client, and thin-client server system, for Linux.
“Stateless” refers to the state, i.e. the “settings and files” that your “computer” is using. Since you are using a thin client, these things are not stored on the hard drive but rather in other places: mainly the network, RAM, wherever. Hence “Stateless Linux”.
Adding that to the title would have been easier than having to read the PDF to find out.
Stateless Linux is a hybrid thin-client / thick-client approach. You can separate the notion of where data is stored from where the authoritative data resides (i.e. where the state resides). Thick clients have the problem of burgeoning state at the edges of the network (the desktops themselves), which leads to a constant maintenance nightmare, security problems, etc. Thin clients have central state, but run into huge performance problems and don’t benefit from the cost advantages of lots of cheap, slower hardware. Stateless Linux stores the state centrally, like a thin client, but is more like a network-updated thick client (the norm now) in terms of resource demands.
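To make that split concrete, here is a toy sketch of a client that reads and writes locally but treats the server as the authority; the class and method names are invented for illustration, not anything from the actual project:

```python
# Minimal sketch of the stateless-client idea: the server holds the
# authoritative state; the client executes locally against a cache and
# writes changes back when the server is reachable.

class StatelessClient:
    def __init__(self, server):
        self.server = server          # authoritative store (a dict here)
        self.cache = dict(server)     # local copy: fast, survives offline
        self.pending = {}             # writes made while disconnected
        self.online = True

    def read(self, key):
        # Reads are always served locally, so performance is thick-client.
        return self.cache[key]

    def write(self, key, value):
        self.cache[key] = value
        if self.online:
            self.server[key] = value  # state stays central, thin-client style
        else:
            self.pending[key] = value # queue for the next reconnect

    def reconnect(self):
        self.online = True
        self.server.update(self.pending)  # push offline changes back
        self.cache = dict(self.server)    # refresh the local cache
        self.pending.clear()

# Usage: work offline, then sync.
server_state = {"wallpaper": "default.png"}
c = StatelessClient(server_state)
c.online = False
c.write("wallpaper", "beach.png")   # still works, against the cache
c.reconnect()
print(server_state["wallpaper"])    # "beach.png": the server is authoritative again
```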
Let’s compare:
Thin client: application runs on server
Stateless: application runs on desktop
Thin client: can’t detach from network
Stateless: laptop can be detached from network
Yep, it’s exactly the same!
This is a very nice synthesis of ideas that have been kicking around for some time, combining features of fat clients, thin clients, bootable CDs, and more. You get a single disk image for all systems, centralized control and updates, but with the speed of local execution and data caching. I like the way that they are bringing laptops into the picture; keeping laptops up to date automatically is a big win.
Stateless is not just another thin client. A stateless client can have a hard disk, and a stateless machine can work disconnected from the network.
“One of the goals of the stateless Linux project is to move towards a ‘best of both worlds’ hybrid between thin and fat client” (from the PDF).
Yes, they are taking the best of both worlds, but my reason for saying “thin client” is that I am not interested in such technology. The article header does not describe the technology; rather, you had to read the PDF to get any gist of it, and even that was not written all that well.
How would you have explained the subject in two sentences so that I would understand they are talking about thin/thick/stateless client/server technology, and could move on and not be involved in a discussion I care very little about?
Looks like something businesses might like. I know mine would. They work very hard at locking desktops down.
Personally, I like the coining of the term “Instantiation”.
If you want to see this concept in action, look at OS X Server and NetBoot (and/or NetInstall). It is *exactly* this approach, and it works beautifully. I run an office of 20 Macs this way and have eliminated 100% of the headaches from our old “fat client” days, when each machine had a quirky OS 9 installation that the users could muck up.
All machines boot or get installed from the exact same disk image (including all the apps they need), and the users’ home dirs live on the server, so I only have one system to back up. When there is an update to deploy, I update one disk image with it and have everybody reboot at the end of the day. The NetInstalled machines are all partitioned so there is some “unbacked-up” storage room they can load up with iTunes stuff, so it doesn’t fill up the server and I don’t have to be responsible for it.
Now very little goes wrong, I just have to maintain the server, and deal with “how do I … ?” questions.
It’s very easy to learn (with a little persistence mere mortals can even do it), and it also does mail, web (apache), mysql, php, file sharing, dns, firewall, and lots of other stuff.
It’d be great if Linux got this ability too (in a way that’s easy to set up), particularly on the PPC architecture. It’s what I wanted to do, but it’s just too hard the way PPC Linux is right now. We were going to deploy PPC Linux, but the installers were too flaky (at the time, at least; I don’t know how they are now), and you could not have “one disk image to rule them all” because each Mac would need a different XF86Config.
(Yeah, I know: if only they had OS X for x86, this might be useful to you.)
The OS X thing sounds pretty cool, but can they unplug their system (a laptop, for example) from the network and still use it as though they were connected to the network? That’s the really cool part about the stateless Linux thing. Before unplugging, you cache everything to your local machine and you can keep using it. The next time you plug it back into the network, everything syncs back up with the servers.
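In outline, that cache-then-sync step could be as simple as two rsync passes; a minimal sketch, with a made-up server name and paths:

```python
#!/usr/bin/env python3
"""Sketch of the cache-then-sync idea: pull the network home directory
into a local cache before unplugging, and push local changes back after
reconnecting. Server name and paths are made up for the example."""

import subprocess

REMOTE = "homes.example.com:/export/home/alice/"   # hypothetical server
CACHE = "/home/alice/"                             # local cached copy

def go_offline():
    # Populate the local cache so the laptop keeps working unplugged.
    subprocess.run(["rsync", "-a", REMOTE, CACHE], check=True)

def back_online():
    # Push local changes back; --update avoids overwriting any file
    # that is newer on the server than in the cache.
    subprocess.run(["rsync", "-a", "--update", CACHE, REMOTE], check=True)

if __name__ == "__main__":
    go_offline()   # run before unplugging
    # ... work disconnected ...
    back_online()  # run after reconnecting
```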
“Instantiate” isn’t new. It’s part of the Object Oriented Programming lexicon.
I’m familiar with it from Mentor Graphics EDA tools, where the terminology of the developers sort of leaked into the user interface and documentation. For example, when editing a schematic, when you place resistor symbols on the sheet, you are instantiating instances of a single resistor symbol object from a library. Each instance has unique properties, such as the reference designator and connectivity, but they all inherit other properties (value, part number) from the library object.
Havoc is using the term correctly to describe the creation of a unique instance of an OS image object on a client from a parent object on the server. Most of the instance is identical to the parent, but some properties, such as the IP address, will be unique to that instance. It’s all about objects and inheritance.
Did this make any sense?
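A toy illustration of that instantiation idea, with invented class names; each client instance inherits everything from the parent image except the few properties that must be unique:

```python
class OSImage:
    """The parent object kept on the server."""
    def __init__(self, version, packages):
        self.version = version
        self.packages = packages

class OSInstance:
    """A per-client instance: anything not set locally falls through
    to the parent image, like symbols inheriting library properties."""
    def __init__(self, parent, hostname, ip_address):
        self.parent = parent
        self.hostname = hostname      # unique per instance
        self.ip_address = ip_address  # unique per instance

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails, so unique
        # properties win and everything else comes from the parent.
        return getattr(self.parent, name)

golden = OSImage(version="FC3-test", packages=["gnome", "firefox"])
client = OSInstance(golden, hostname="desk42", ip_address="10.0.0.42")
print(client.version)     # inherited from the parent image
print(client.ip_address)  # unique to this instance
```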
@Scott:
I haven’t had a need to do this but supposedly there is some kind of “floating profile” thing?
Poke around Apple’s stuff and see: http://www.apple.com/server/macosx/
My quick peek says they’ll have that when 10.4 Tiger comes out.
In the meantime, I’m sure you could make a slightly different NetInstall image for laptops and devise a way to sync the home dir with the server.
Scott,
yes, 10.4 will have it.
Marketing text from the Apple site:
Home Away from Home
With Mobile Home Directories in Mac OS X Server v10.4, you can centrally manage the home directories of your portable Mac clients and yet allow each user online and offline access from the office and the road. When a user goes offline, her home directory goes with her, so she can continue to work just as she would back at the office. In addition, her public folder remains accessible to the network while she’s away, so her co-workers can still drop files into her folder as well as see her public files. When she reconnects her iBook or PowerBook to the network, Mac OS X automatically syncs up the home directory with the one on the server. You have the best of both worlds with this feature — you centrally manage your users’ home directories and they have full desktop mobility.
I’m an MCSE who hates the enormous amount of effort that goes into the daily maintenance of Microsoft Windows-based workstations. I’ve been a sysadmin since about 1998, and have been a closet Linux user for the last few years (although I’ve only really leveraged Linux in a server environment, and even then only for certain things — I’m still too addicted to Exchange / Outlook to break away).
The idea of a real, usable, truly stateless OS is one of the _coolest_ concepts I’ve heard discussed in a long time.
I work with several nonprofits. Microsoft’s fees have never been an issue for us, because most of the time the software is almost free to us anyhow.
However, if I could get Linux to deploy in this “stateless” fashion, I would eliminate a ton of maintenance and be able to focus my time on actually deploying solutions, rather than replacing machines and reinstalling Windows when it fills up with junk that users somehow downloaded from the internet. I would guess the same would be true of many small to midsize organizations.
If Havoc and his team manage to get this working reliably and “out of the box”, I believe it might just be the thing that gets MCSEs like me to deploy Linux at their companies en masse.
And I, for one, would find the flexibility brought about by this change very exciting.
Organizations would still face problems with “must have” Windows apps, but throwing a single Windows server and Citrix into the mix would fix most problems. (I have yet to be convinced that constantly tweaking Wine to work with your regular Windows apps is more cost-effective than just buying Windows in the first place.)
Anyhow, I’m one MCSE who’s hoping that Microsoft’s delay in introducing WinFS will be just the competitive edge that Linux needs to make inroads on the corporate desktop here in the USA.
Red Hat has been not-so-quietly gobbling up desktop developers over the past 6 months or so, gearing up to push the Linux desktop. Between them and Novell, the Linux desktop is gonna happen (or not) in the Boston area in the next couple of years.
Can someone enlighten me on the difference between this “stateless Linux” and IBM’s WorkSpace On-Demand? (Sorry for the roughness.)
For a good article about WSoD, read the following:
http://www.sundialsystems.com/articles/workspaceondemand.html
It basically has the same features the StatelessLinux.pdf paper describes:
– Centrally managed
– Applications run locally
– The HDD is used only as a cache
OTOH it’s using OS/2 instead of Linux and is old technology (from 1997, IIRC).
Having the possibility of doing this in Linux is nice, though (BTW: an interim solution for Linux is zeroinstall, but it doesn’t have all the features of WSoD/StatelessLinux).
Isn’t this similar to the “files only” behaviour of the Amiga, where the OS is just made up of files and backing up the entire OS is simply a case of copying across all the files in “SYS:”?
See Ruggedness at
http://ros.rubyforge.org/wiki/wiki.pl?BestFeatures/AmigaOs
If you allow disconnected operation, there is a possibility of synchronisation problems: a user has a laptop and also modifies some files from another station; when the user reconnects the laptop, there is a synchronisation conflict.
The same thing happens if several users modify their “cached copies” of a file while the server is not available (a server or network problem).
Being able to work even if the network or server is down is very cool; unfortunately, there is a price to pay.
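To make that conflict concrete, here is a small sketch of the decision a sync tool has to make on reconnect; the timestamp bookkeeping is a stand-in for whatever change tracking a real implementation would use:

```python
from dataclasses import dataclass

@dataclass
class FileState:
    mtime: float         # current modification time
    last_synced: float   # mtime recorded at the last successful sync

def sync_action(local: FileState, remote: FileState) -> str:
    """Decide what to do with one file when the laptop reconnects."""
    local_changed = local.mtime > local.last_synced
    remote_changed = remote.mtime > remote.last_synced
    if local_changed and remote_changed:
        return "conflict"   # both sides changed: needs a merge or a human
    if local_changed:
        return "push"       # only the laptop changed: upload to the server
    if remote_changed:
        return "pull"       # only the server changed: refresh the cache
    return "in-sync"        # nothing to do

# Example: edited on the laptop while offline AND on another station.
print(sync_action(FileState(mtime=200.0, last_synced=100.0),
                  FileState(mtime=150.0, last_synced=100.0)))  # "conflict"
```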
We need async clients that can be patched when they attach themselves to networks containing their updates.
We need better overall control from a centralized interface, preferably web-based or at least X-based, so we can re-IP a group or network of hosts quickly without dealing with downtime.
Hosts should autodetect what networks are available and possibly have multiple configurations to keep themselves on a working network at all times.
And it might be possible for an async network to use hosts, like laptops, that connect to the internet to download updates for the rest of the systems on their async ‘net.
Updates would be more efficient as binary diffs, IMO.
And other stuff. I like these ideas. Sounds like we’re headed in the same direction.
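On the binary-diffs point: a rough sketch of what that could look like with the bsdiff/bspatch tools (the file names are placeholders):

```python
import subprocess

# On the update server: produce a small patch between two releases.
# Clients then download app.patch instead of the whole new binary.
subprocess.run(["bsdiff", "app-1.0.bin", "app-1.1.bin", "app.patch"],
               check=True)

# On the client: rebuild the 1.1 binary from the cached 1.0 copy.
subprocess.run(["bspatch", "app-1.0.bin", "app-1.1.bin", "app.patch"],
               check=True)
```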
Every computer must have its programs on the hard disk.
It’s silly to download a program that I use every day.
Every user must have root access without problems.
Just replicate a disk with all the requisite apps onto all your machines, lock them down with something like DeepFreeze, and have a Windows 2000/2003 domain with Active Directory and roaming profiles. Voila! And Windows can handle the synchronization without much difficulty if someone disconnects.
You can even give the Domain Users group Power User (or higher) privileges if you want, and you’re fine as long as they don’t figure out the password for DeepFreeze and disable it.
I mean, it’s nice to see more pre-packaged support, but as the PDF states, plenty of administrators are doing this already on their own (on whatever system).