Linked by Nik Tripp on Mon 2nd Mar 2009 21:40 UTC
IT solutions companies have been generating lots of buzz regarding thin clients basically since the early 1990s, but these devices have yet to really penetrate many suitable environments. These relatively cheap computer appliances carry broad promises like energy efficiency, space efficiency, and centralized maintenance and data storage. These claims could sound like the computer industry's equivalent of snake oil. Kiwi-LTSP, a combination of KIWI imaging technology and the Linux Terminal Server Project, is one open source solution for thin client servers.
phoenix
Member since:
2005-07-11

What I call an Ultra-Thin-Client, you call a Dumb-Terminal. Let's call them UTCs for short. As far as I know, there is no vendor offering UTCs other than Sun with the SunRay. All vendors' thin clients are essentially weak diskless PCs with a 1 GHz CPU and 256 MB RAM, making them unusable for heavy work.


No. Thin-clients, by definition, don't run anything locally: no local applications, no local storage, no locally installed OS. They boot off the network, load their OS off the network, and configure themselves to act as an I/O hub (keyboard/mouse events go to the server, graphics come back from the server). That's it. A thin-client is a dumb terminal. Period.

Some vendors (like HP, NeoWare, WySE) have "hybrid" thin-clients, which use a local CPU/RAM/OS (usually WinCE) to boot into a local GUI that then runs an rdesktop client. These things are worthless PoS that are over-priced, under-powered, and give thin-client computing (such as it is) a bad name. Once the OS is loaded and the rdesktop client connects, it is back to being a dumb terminal. But, yes, the boot times for these things are horrible, as are the graphical capabilities, which is why we stopped looking at them after testing two variations. A standard P2 266 MHz w/256 MB of RAM performed better.

When I talk about one quad core driving 40 clients, I mean one quad core driving 40 UTCs. One user normally requires 1-2 GB RAM and 1-2 GHz of CPU, so it should be near impossible for one dual-core server with 4 GB RAM to drive 30-40 UTCs. That is why I doubted your claims (hence the misunderstanding).

Of course a dual-core server with 4 GB RAM would suffice for 30 diskless PCs; there is no doubt about that, since the server would essentially just act as a file server. Any OS would suffice for that task, even Windows. But I am talking about dumb terminals. There is no way a dual-core server with 4 GB RAM can drive 40 dumb terminals.


I don't know how many times I can say this: a dual-P3 system with 4 GB of RAM **DOES SUPPORT** 30 thin-clients, where Firefox, Java, OpenOffice.org, and Flash are all running on the server, with just the display being shot back to the client. WE DO THIS EVERY FRIGGING DAY!! WE HAVE BEEN DOING THIS FOR 7 YEARS ALREADY!!! THIS WORKS!! Get it yet?

So I point out that a quad core can drive roughly 40 SunRays (i.e., dumb terminals). Of course you need lots of RAM for driving 40 SunRays. Each SunRay user needs 256-512 MB RAM on the server, which is really good considering how much memory the user would require if he used a dedicated PC instead.


Which is absolutely horrible! But, you are running Windows, and we're running Linux (for the clients), which is probably where the disconnect is.
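
(For what it's worth, here is the back-of-envelope arithmetic behind that reaction, as a quick Python sketch. It uses only the figures quoted in this thread -- the per-user numbers are claims from the discussion, not measurements I've made -- so treat the output as illustrative.)

def server_ram_gb(users, mb_per_user):
    # Total server RAM implied by a per-user working-set estimate.
    return users * mb_per_user / 1024.0

# Our setup: 30 thin-clients sharing a dual-P3 with 4 GB total, i.e. roughly
# 4096 / 30 ~ 136 MB per logged-in user. That only works because every user
# shares one copy of the kernel, the X libraries, and the Firefox and
# OpenOffice.org program code on the server.
print(4096 / 30)               # ~136 MB per user

# The SunRay figure quoted above: 256-512 MB per user on the server.
print(server_ram_gb(40, 256))  # 10.0 GB for 40 users at the low end
print(server_ram_gb(40, 512))  # 20.0 GB at the high end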

Regarding thin client vs. diskless PC: I consider them more or less the same thing. Same same, but different. Both use a rather weak CPU and have little RAM. The diskless PC has slightly better specs, but the thin client has an OS to patch and maintain. As I mentioned, someone told me yesterday that an HP thin client booted in 7 minutes.


No, no, no, no, no, no and NO!!! You do not understand the difference between a thin-client (dumb terminal) and a diskless PC.

In a thin-client setup, no processing is done on the client: no CPU work, no RAM for applications. Everything is done on the server. The client is just an I/O hub: mouse and keyboard events are sent to the server, video is sent back to the client. That's it. The local CPU/RAM is only used to boot the client. Nothing else.

In a diskless client setup, you have a standard PC, with a normal CPU, a normal amount of RAM, a normal videocard, a normal NIC, etc. It's a normal PC. The only difference is that there is no HD, CD, DVD, floppy, etc. The client boots off the network, loads the OS off the network, mounts network shares. The OS runs locally, using the local CPU/RAM. Applications are "downloaded" off the network and run locally. Except for the boot process, there's no difference between using a normal PC and a diskless PC.

Do you see the difference yet?

One runs everything on the server, requiring a massive server and an even more massive network, as everything is pushed down the pipes to the display.

The other loads apps off the server, but runs them locally, allowing you to do anything a normal PC can do (even play 3-D games). But there are no moving parts, no harddrives, and no local OS installs to worry about.
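
(To put a rough number on "an even more massive network": a quick Python sketch. The resolution and redraw rate are assumptions picked purely for illustration; real remote-display protocols only send the regions that changed and usually compress them, so this is a worst-case ceiling, not a measurement.)

width, height = 1024, 768        # assumed desktop resolution
bytes_per_pixel = 3              # 24-bit colour
full_frame_mb = width * height * bytes_per_pixel / 1e6
print(full_frame_mb)             # ~2.4 MB for one full-screen redraw

redraws_per_sec = 2              # assumed average while users are active
clients = 40
mbit_per_sec = full_frame_mb * 8 * redraws_per_sec * clients
print(mbit_per_sec)              # ~1500 Mbit/s if every client redrew fully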

The *ONLY* similarity between a thin-client setup and a diskless setup is that everything is managed from the server. Need to install new software? Do it on the server and all clients get it instantly. Need to upgrade the client OS? Just upgrade the server, and everyone instantly gets the update. Add a user on the server, and they can log in from any client station and get their personal desktop.

After a few years you have to upgrade the thin clients/diskless PCs, because they cannot handle the new OS and new software versions.


Only for diskless clients. You never *have* to upgrade thin-clients. By definition, nothing is run on a thin-client; it's all run on the server. Hence, the local hardware *DOESN'T MATTER*. Period. The only time you replace a thin-client is when the hardware dies. You don't "upgrade" thin-clients.

Yes, for a diskless setup, where you run apps on the local hardware, you may need to upgrade. However, this is where planning ahead comes in: you make sure that your initial roll-out can handle the apps you will be using for the next 3-5 years. Or you find a hardware configuration that is so low-end it's basically a disposable appliance (like we did -- at $150 each, we don't bother repairing them).

It is much cheaper to upgrade one server than to upgrade all the diskless PCs. It is much cheaper to administer one quad-core server than to administer 40 diskless PCs. In the future, the servers will be dual octo-core with 128 GB RAM, and then the SunRays will be extremely fast. SunRays are future-proof. Diskless PCs are not.


No, no, no, no, and NO! Administering thin-clients and diskless clients *IS THE SAME*. There is nothing to administer on the clients themselves; everything is done from the server!! They are identical in pretty much every way ... except where the application runs (on the server vs on the client).

For some uses, yes, thin-clients are future-proof. But not for all applications, as the network and server disk are the bottlenecks.

Diskless PCs also suck as much energy as a normal PC does.


No they don't, as there are no HDs or optical drives sucking power and requiring cooling. And you can build diskless clients using low-power CPUs, chipsets, and videocards.
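
(A rough per-seat power sketch in Python. The wattages are typical figures I'm assuming for the sake of illustration, not measurements of any particular hardware, but they show where the savings come from.)

normal_pc = {
    "cpu_and_board": 60,   # mainstream desktop CPU + chipset (assumed)
    "hard_drive": 8,       # spinning 3.5" HDD (assumed)
    "optical_drive": 3,    # idle DVD drive (assumed)
    "video": 20,           # discrete or hungry onboard video (assumed)
}

diskless_client = {
    "cpu_and_board": 25,   # low-power CPU and chipset (assumed)
    "hard_drive": 0,       # no local disks at all
    "optical_drive": 0,
    "video": 5,            # modest onboard video (assumed)
}

print(sum(normal_pc.values()))        # ~91 W per seat
print(sum(diskless_client.values()))  # ~30 W per seat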

You can use SunRay over the Internet -- one at work and one at home -- and you will log in to your work environment from either.


We can do this as well, thanks to NX. It's one of our key selling points to the schools, as they always have access to their school desktop and files, even from Windows machines. That includes "suspend", where you log in from one machine, suspend the connection, and reconnect from another machine.

Thin-client solutions like the SunRay have their place. But they don't compare to diskless solutions when you leave the realm of simple web browsing and office documents.
