Problem: Even the most powerful PCs become unresponsive during resource-intensive computations such as graphic design, media work, image rendering and manipulation. The traditional solution has been to upgrade to a faster computer and throw more computing power at the problem to lessen the wait time. But there is a simpler approach that uses multiple machines without resorting to grid or clustering technology. For now it involves a hack, but how hard would it be for an OS vendor to streamline this process?
Imagine you are using a resource-intensive application like Photoshop and you have four mainstream PCs in your department. If you have ever worked in the design or media industry, you know that in each department or office the newest and most powerful PC is usually reserved for the high-end Photoshop work, while the other, older and slower machines handle typesetting, vector illustration and other less resource-hungry applications. Sometimes that “blessed” machine is a high-end Wintel box, sometimes a high-end Mac or SGI machine.
However, no matter how fast and capable your machine might be, there will be times when you are waiting for your application to finish a certain task, such as saving a large file, rendering, or some other bandwidth-intensive job.
The inaccessibility of the classic solution
Technically, the only ways to reduce this delay are to add extra resources to the machine, or to use grid/clustering technologies to borrow resources from other machines. Both of these solutions are expensive, hard to deploy, or both.
My solution is rather different and relies on a human being as the conduit. The fact is, while you are waiting for the application to finish a certain task, you can switch to other tasks. However, despite the multitasking capabilities of all modern operating systems, most of them lose their normal level of responsiveness and even become unusable during image rendering and processing, or while saving and opening large files. Likewise, working simultaneously on two disk-heavy tasks, especially on IDE drives, more than doubles disk access delays. In a nutshell, heavy computing tasks turn your modern multitasking operating system into a single-tasking, MS-DOS-like operating system, no matter whether you are running on MIPS, SPARC, PowerPC or x86.
The workaround
Now here’s my recipe: one master PC running Windows XP with a RealVNC client, and several low-cost Linux machines running Fedora Core 2, each equipped with a built-in VNC server, CrossOver Office, and Adobe Photoshop running on top of CrossOver Office. You can run several copies of Photoshop and monitor all of them from your Wintel PC without the applications competing for resources. From the user’s point of view, it is as if he or she were running several instances of Photoshop on the Windows machine while using the other machines’ disks and RAM for the computing, without any real grid or clustering method. Now the designer can apply a complex Photoshop filter to a large image, then switch to another copy of Photoshop to work on an illustration while the first instance does its calculations on a remote machine.
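For orientation, here is a rough sketch of the plumbing. It assumes the Fedora boxes have the stock vnc-server package installed and the Windows master runs a RealVNC-compatible viewer; the host names and display numbers are only examples, and Photoshop itself is launched through whatever launcher or menu entry CrossOver Office created for it.

  # On each Fedora “slave”, as the designer’s user:
  vncpasswd                                    # set the VNC password (stored in ~/.vnc/passwd)
  vncserver :1 -geometry 1280x1024 -depth 24   # start an X session on display :1
  # Edit ~/.vnc/xstartup so the session starts a window manager and
  # the CrossOver Office launcher for Photoshop.

  # From the Windows XP master, open one viewer per slave:
  #   vncviewer fedora-box-1:1
  #   vncviewer fedora-box-2:1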
Real world experiments
I tested this configuration over a 100 Mbit Ethernet LAN and experienced a usable working environment with no noticeable performance penalty. The fact is, VNC uses very little bandwidth and is very easy to install and configure. In fact, all you need is the Linux machine’s name and a sign-on username and password; you don’t even need to tweak the Windows XP SP2 firewall settings to make it work.
Since this article is intended for power users, I will omit the step-by-step configuration details.
Here are the links for downloading, installing and configuring the required software.
CrossOver Office:
http://www.codeweavers.com/site/products/cxoffice/
RealVNC:
http://www.realvnc.com/download.html
Editor’s Note: Of course, this hack could be done in several different ways, in all Linux, all Windows, and all Mac environments, but wouldn’t it be relatively easy to make an integrated, more elegant method of farming out resource-intensive calculations to other machines on the network on a case-by-case basis (without the permanence of a dedicated grid)? Does anyone know of any software that already exists for this purpose? Some companies, like Apple, are striving to make grid computing more accessible. This strikes me as a feature that would be a good idea for Apple to implement in a future OS X release. — David Adams
—
About the author: Kasra Yousefi is the creative director at Persiadesign Inc
So why are we dismissing grid computing again? Sure, I can see the benefit of this hack now, while we don’t have accessible grid computing, but why would a vendor want to streamline this process when they could offer the much more general and elegant solution of grid computing, where you could actually get the task done faster as well?
> Even the most powerful PCs become unresponsive during resource-intensive computations, such as graphic design, media work, image rendering and manipulation
You should use “nice” then. 😉
I for one can see this as cheap and effective. I use an 8-port KVM myself and had to make my own cables, since the price of 50-foot cables was way out of the ballpark. I’ve got 7 machines, and the biggest boy is the main lead, so for task processing I’ll use all 7, hooked up to my D-Link 10/100 Mbps switch.
That method can be very expensive, though, and leaves wires running all over the place.
Now I for one would like to know how come I didn’t think of using VNC with CrossOver.
Great idea; I might just put it to the test.
I use VNC every day to remotely administer many different servers (Windows/Linux), and it’s a wonderful tool, but it’s just too slow if you want to do some “real” work. And that’s even on a 100 Mbit local network.
“The fact is, VNC uses very little bandwidth and is very easy to install and configure”
But my experience is that VNC consumes a lot of CPU power while you are connected to the remote host.
VNC is an excellent tool, but for real remote control you should use a “real” program like the Terminal Services client if it’s a Windows host.
Remote terminal access is nothing new; IIRC it dates back to the early ’60s or ’70s.
Now, just imagine that instead of running VNC on 8 remote machines and having each compute a task individually, you had them all working on the same task over your network. You’d be able to finish your computation in probably close to a quarter of the time (some time is required for sending/receiving/synchronizing, etc.).
I think the OS makers are better off investing their time in grid computing than in remote terminal services; after all, remote terminal services are already available and only a few clicks/commands away in most OSes.
> … you had them all working on the same task over your network. You’d be able to finish your computation in probably close to a quarter of the time (some time is required for sending/receiving/synchronizing, etc.)
That’s right! Then we could talk about evolution. This is not available in the big mainstream OSes that we use today (Windows/Linux/Mac), but QNX is a great example of how it could be done.
Quote from QNX:
Remote Process Creation:
Any resource or process can be accessed uniformly using standard messages, across the message-passing IPC, from any location on the network — without having to write code connectors to enable resources to communicate. Similarly, processes on one computer can be started or spawned on another network computer. The result is true distributed processing across multiple network computers with no code modification, and no incremental hardware costs.
Check this out for more information
http://www.qnx.com/tech_highlights/microkernel/
VNC is slow and a very messy way of getting remote GUI access to a UNIX workstation. Try this on your WinXP box: http://xlivecd.indiana.edu/
Years ago we used X Windows to run resource-intensive programs on “compute servers”. The funny part was that these compute servers were just headless workstations fully loaded with RAM.
(At times the software license fees were tied to the server type. Often a workstation-class system was equivalent in CPU speed to a server, but the software license was considerably cheaper.)
The real question these days is “what do you want to use for your ‘rich media system’ that you actually sit in front of?”
With cheap access to wireless and VNC, you can put your (home) server rack just about anywhere. But more and more often these days the urge is to just collapse all the systems down to one more powerful system, and be done with the noise and heat.
I thought they were working on something like this: monitor, keyboard, mouse, and a little pad. You take out a credit-card-sized thing and just throw it down on the pad and bam, you are in Windows. Throw down another card and bam, you are in Linux (pick your distro). Throw down another and bam, you are using Mac OS X. Each card has an OS assigned to it, and the pad picks up which one it is and remotely signs you into it to do what you want. I saw a demo video of it: the guy just threw one card down on top of the other, and the pad picked up the newest card and the screen switched to match it.
This guy is describing nothing more than setting up a bunch of remote terminals and having them do the work, so that his PC is not being taxed but all the other computers are. So did he really solve anything? Nope. Each computer can still fail and take down whatever process it was running. Now you just need to buy as many PCs as you have tasks to run.
If the Operating Systems we were using had decent schedulers then this would not be a problem. Let me explain.
If the OS detects that a process is really compute-bound and interactive users/processes are suffering from lengthening response times (e.g. to keystrokes), it should dynamically reduce the priority of the CPU hog to let other users get a slot. This is not rocket science. I learned this 30 years ago when studying real-time operating systems. One great example at the time was the RSTS/E operating system that ran on PDP-11s. Hawker Siddeley Aviation (home of the Harrier VTOL aircraft) was using an RSTS system in its aerodynamics lab. They had many users doing lots of CPU-intensive work, and they all got a response. The OS kept any one process from taking too much CPU: it would switch from a timeslice-based scheduler to a priority-based one at times of high CPU usage. (That was how a top guy at DEC explained it to me at the time.)
Naturally, this sort of thing could never be included in any Microsoft OS, as IMHO they don’t know the meaning of the word “scheduler”; but that, as I say, is just my humble opinion.
Enough of the historical reminiscing. It’s Friday night and I’m off down the pub.
Yes, you might be able to multitask between different machines, but for an average “designer” it will never work. It’s too complex, and your data ends up spread across different machines. Central file storage is not a solution unless you go to Gig-E or 10Gig-E, which is still expensive (network-wise), never mind the cost of the actual file storage itself, or the cost of multiple licenses for a single app just so you can save yourself 3 minutes, and then there’s the confusion of organizing the files. It just looks like it would be a big mess.
For any task that takes less than 5 minutes, the solution is one workstation, beefed up as much as possible. For complex tasks that take more than 15 minutes, or hours, grid computing/distributed computing is the best solution.
“You should use “nice” then. ;-)”
“nice” doesn’t help much if a process is only taking up 10% CPU but is thrashing your hard drive.
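For what it’s worth, CPU priority and I/O priority are separate knobs on Linux. A minimal sketch follows; the script name and PID are made up, and ionice only exists on later 2.6 kernels with the CFQ I/O scheduler, so treat that line as optional:

  nice -n 19 ./batch-render.sh    # start a (hypothetical) batch job at the lowest CPU priority
  renice 19 -p 4242               # lower the CPU priority of an already-running process
  ionice -c 3 -p 4242             # put the same process in the idle I/O class, if the kernel supports it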
I’d use TightVNC, which is considerably faster than the one linked in the article.
In a sense, what this guy is saying is that you’d have multiple users “logged on” via VNC, and that instead of the main computer being taxed, you’d push the work out to the machines running the VNC servers. It’s brilliant but not original; it’s been posted about before. But it is a cost-effective answer and one that a home user or small business can implement for minimal cost.
Photoshop under CrossOver is fairly buggy if you try to do more than the basic things; the ImageReady half of it is broken. Also, what happens to your color calibration and display settings over VNC? As far as I know, VNC uses compression (e.g. JPEG) and sends these compressed images over to the client. I wouldn’t exactly call it efficient if you had to make corrections in a native Photoshop install after edit jobs done on a CrossOver/Photoshop Linux client. You’re also looking at an additional license of CrossOver and Photoshop for each Linux client added. IMHO, Adobe should implement something like the Sony Vegas Video network renderer for Photoshop. On the other hand, SMP, dual-core CPUs and SATA with command queueing will come down in price eventually, so maybe such a solution is not needed.
This is related to one of my biggest disappointments with Linux. BeOS has ~150 different priority levels, and each thread is executed at an appropriate level: media processing is given a higher priority than background tasks. Why can’t Linux standardize on appropriate nice values and execute tasks at those values by default? This might even be doable without any software modifications, but I haven’t run across any Linux distros that do this.
The problems arise with disk-bound tasks, in particular when you run out of real memory and have to swap. When you try to multitask on a machine where one app is consuming a lot of memory, not only is the app you are trying to use starved of RAM, it also competes with the first app for the disk I/O needed to swap.
Almost inevitably, poor performance in these scenarios can be improved by buying sufficient RAM for the primary task to run in memory. For Photoshop this could easily be a couple of gigabytes.
VNC? Well, you might consider some better alternatives, although TightVNC isn’t half bad. The problem is that when you do real work you don’t want VNC showing the picture differently from how it really is, which is what happens once you turn on compression. I think the following two solutions are also applicable:
* Use Windows 2000 Server, Windows XP Pro or Windows 2003, which come with an RDP server (a quick sketch of that route follows after this list).
* Use NoMachine’s NX, whose X compression tricks achieve high compression without using many resources.
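For the RDP route from a Linux or Unix desktop, the usual client is rdesktop; a minimal example, where the username, resolution and host name are just placeholders (from a Windows machine you would use the built-in Remote Desktop Connection instead):

  rdesktop -u designer -g 1600x1200 -a 24 winxp-box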
As for Photoshop: you say “inexpensive solution” and mention Photoshop. If you want an inexpensive Photoshop alternative, try Paint.NET, GIMP or PSP. Or buy an SGI O2 or Octane and put Photoshop 3 on it; Photoshop 3 is almost the same as the later versions, but this puppy runs natively, on inexpensive hardware, and remote X works flawlessly. This solution is much cheaper than starting with Apple. I’m not sure if NoMachine runs on IRIX; I still have to figure that out.
Just a note: the procedure described above is the way all Unix-based systems (including Linux) already work. Any X Windows environment can route its display, keyboard and mouse to a remote machine; I’ve been doing this for over two decades now. The only difference here is bringing Windows into the mix. 🙂
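For anyone who hasn’t seen it, the classic recipe looks something like this; the host and user names are examples, and GIMP stands in for any X application:

  # The easy way, with ssh X forwarding (run from the machine you sit at):
  ssh -X designer@compute-server
  gimp &                           # runs on compute-server, window appears on your screen

  # The old-school way on a trusted LAN:
  xhost +compute-server            # on your workstation: allow the remote host to connect
  ssh designer@compute-server
  export DISPLAY=workstation:0     # on the remote machine: point X at your display
  gimp &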
Microsoft has the Terminal Services solution for running Win32 applications on Windows servers, but TS CALs (per-terminal licenses) are abusive. M$ sets the TS CAL price so that Windows dumb terminals never end up being the cheaper option…
I use very old machines (Pentium 100 with 32 MB of RAM) as X terminals with the Thinstation (http://thinstation.sf.net) solution, which I recommend. I use it mainly to run Firefox and OpenOffice.org on Pentium 4 or Athlon XP servers running Linux. Thinstation also comes with an rdesktop client for M$ servers.
For spare Windows machines I install Cygwin/XFree86 (http://x.cygwin.com/) to access the Linux servers and run graphical applications.
I don’t think Photoshop uses a lot of CPU, but it uses a lot of RAM and disk resources. I use Photoshop every day. I have 2 GB of RAM and two 10,000 RPM WD Raptor SATA drives in RAID 0; that is what it takes to use Photoshop and work fast with high-resolution graphics.
I doubt the average Photoshop user has ever heard of Linux, Fedora, or even Wine. Installing Wine is painful, and only “power users” are able to do it.
Unfortunately “power users” don’t use CAD or programs such as Photoshop.
I don’t see how this is inexpensive. With multiple PCs over VNC you would need multiple licenses of Photoshop, plus CrossOver if you are running it on Linux.
I would like to share resources from multiple PCs with something like openMosix, but unfortunately the apps I want to run don’t run on Linux, and openMosix doesn’t work on Windows.
Try offering this solution to your boss; what do you think will happen…
4 computers with Photoshop is 4 licences.
4 computers with Photoshop means installing it 4 times. Repeat for every other heavy program you want to use.
4 computers is 4 users killing your app.
My experience with users is they don’t want to know and don’t want to be bothered. It needs to be very transparent to them.
Ciao
Peter
This is a very obvious and moronic idea. Why even go through all of the trouble? Why not just hook all these machines up to a KVM switch, sit at a desk and work, instead of complicating things? This is just a very poor idea.
Photoshop does use a lot of CPU; just think of the filters you apply and play with.
Except for the filters, you are right… very little CPU usage.
But filters can serve as benchmarks…
They hammer the processor like any other intensive operation: rendering, compression… your favourite D3 game.
And so on…
I’m sorry, but installing Wine is not painful. I’m not a Linux advocate and I’m even fairly new to it, but I would not call installing Wine painful. Using apt, all I type is “apt-get install wine” and voilà, it’s installed. Easy.
There is an app for 3ds Max called Backburner that sets up rendering tasks for all networked computers with Backburner installed. I have used this several times for huge renders that even my AMD64 would take an age to deal with; loaded onto the three spare computers I have, of varying CPU and memory, it works a treat.
The computers all log into my main computer, collect a task, render it, then send back the result. For an animation each one will take a frame, or for a single-frame render, a field.
It runs sweet, and I’m guessing this is more what Kasra is looking for; pity it isn’t available for apps in general.
… would be the LTSP, which works reliably. You can use diskless stations with it.
Correct me if I’m wrong, but isn’t this what Apple is trying to achieve with xGrid? Easily configurable distributed computing for persons without the administrative skills or technical know-how seems tailor-made for the situation the author describes. Can someone who knows more about this give me some more information? It seems to me that xGrid in conjunction with Automator could be a powerful combination, and I’m intrigued by the upcoming OS X Tiger.
RealVNC?? You could at least try TightVNC or NX. Photoshop users usually need larger resolutions; 1600×1200 is common. RealVNC couldn’t handle real-time brushing with a larger brush.
Colors?? Even if you forget the lossy compression, there’s color downsampling to optimize throughput.
Colors again? There is still no way to calibrate Photoshop under X.
Let’s say you did it anyway. Now name the Photoshop operations that actually take time. Most of the work is color correction and cleaning up scan errors, which require the user to be present.
Waste of money and waste of time.
Renicing a process may well just be hiding a problem. For a time, distros shipped with XFree86 running at a higher-than-normal priority (a negative nice value) to help with responsiveness.
But now, with 2.6, the default scheduler monitors how often a process enters a wait state for more user input and adjusts its priority accordingly. Therefore a CPU-heavy task will get a lower priority than tasks like word processors and others that are centred around user input. Nice levels are still respected, though, so that the current user’s task gets a slightly higher priority than a background task, to maintain a sense of responsiveness.
But as someone else here stated, disk access and the amount of RAM available matter just as much. The moment the OS has to start moving stuff to or from swap is the moment the system grinds to a halt…
Forgive me, but I thought this article had some strength; it turns out otherwise. What is so innovative about using VNC? It’s been around for ages. And even if VNC is used against the slower machines, using Windows XP doesn’t really help, as it is a single-desktop environment. CrossOver on Linux is dead slow and flaky at best. And how does running multiple copies of Photoshop on Linux solve the problem of excess HDD activity?
Would someone please buy Kasra Yousefi a KVM?
I don’t know what the author processes with Photoshop, but I often work with files of several hundred MB without any slowdown. If things start to get slow (heavy swapping), then add more RAM. Filtering/batching images generally takes only a few seconds (Athlon T-Bird 1400), too short to switch to other work. Apart from that, VNC is too slow for Photoshop to be practical, even on a LAN; I know this because I use VNC all the time. Remote Desktop Connection, on the other hand, has a much better response rate.
Forgive my cynicism, but I doubt whether the author really tried it.
Dear Mike,
I have some KVM switches in our own office.
And one little thing about KVM switches is that you can’t get very far away from your datacenter. If KVM were the answer, then why are people using TCP/IP and the Internet?
Why are people writing and using extensive remote management software?
I didn’t mean to “invent” a new remote management system. It was a hack to show how we can distribute resources over a network while grid solutions remain hard to come by.
Sorry if I sound impolite, English is not my native language. I enjoyed your post so much.
Regards- Kasra
First, a little explanation of the idea behind my post.
I know what I proposed is nothing new and is based on 30-year-old standards. The reason I proposed it, and intentionally used the most diverse configuration possible (I knew I could have done it all on a Windows-only or Linux-only setup and avoided criticism over the licensing and performance problems of CrossOver Office), was to show the possibilities of distributing processing and disk bandwidth across a small network.
I also like clustering/grid technologies (I follow Plan 9 closely) and certainly prefer them over my own awkward hack.
A word about performance:
I’m surprised to see some replies saying that VNC is slow. (I’m not trying to defend VNC, just to clarify the issue.) I did test this in the real world, under heavy load, and my machines are not particularly powerful.
For your information, my master PC is an Athlon XP 2200+ with 1 GB of RAM and a cheap (no-brand) 32 MB graphics adapter.
My Linux slave is an Athlon XP 2200+ with 512 MB of RAM, running Fedora Core 2, and it also serves as a staging Apache/MySQL server hosting around 40 websites.
Both machines are connected via a 100 Mbit 3Com OfficeConnect switch.
Thanks for your thoughtful feedback! I learned a lot from it. Regards – Kasra
http://synergy2.sourceforge.net/
This program might be useful here. It’s like a software KVM, without the video.
> I use VNC every day to remotely administer many different servers (Windows/Linux), and
> it’s a wonderful tool, but it’s just too slow if you want to do some “real” work. And
> that’s even on a 100 Mbit local network…
What kind of “real” work (one pair of quotation marks hardly suffices, by the way) are you talking about? Right now I use VNC to access my workstation 400 km away over a DSL internet connection, and even so I find it amazingly responsive.
I don’t understand why Windows lets programs hog the CPU. I always find it so frustrating when I’m multitasking and one program suddenly gets 99% of the CPU; I can’t even minimize the window! It seems so simple to think of this problem beforehand and program the OS in such a way as to keep it responsive. If it’s going to use 75% and take a bit longer, fine with me, because then I can check my email in the meantime. Or is that too much to ask of an OS?
> I don’t understand why Windows lets programs hog the CPU. I always find it so frustrating when I’m multitasking and one program suddenly gets 99% of the CPU; I can’t even minimize the window! It seems so simple to think of this problem beforehand and program the OS in such a way as to keep it responsive.
It isn’t so simple.
First, Windows includes kernel time in the CPU figure, so your 99% may mostly be waiting for disk I/O.
Second, waiting for disk I/O doesn’t by itself make Windows slower (unless the HDD is badly configured). But if another of your programs needs, for example, to read something from the same disk (or just from the swap file), then the system becomes unresponsive.
Play with a program that uses only CPU (Prime95 is a good example): 99% CPU usage doesn’t affect other programs at all.
What you need is more RAM and two or more hard disks that can be accessed independently.
> Why are people writing and using extensive remote management software?
So that the IT department can troubleshoot and repair software issues on a network without leaving their office.
And so that people can access their office and home PCs while out on the road.
What you’re describing just seems unnecessary.
Why would a 3D graphics designer use “a cheap (no brand) 32 MB graphics adapter” in a computer equipped with an Athlon XP 2200+ and 1 GB of RAM? Doesn’t the cheap video card hold back the other components? Maybe that’s one source of problems too.
“I don’t understand why Windows lets programs hog the CPU. I always find it so frustrating when I’m multitasking and one program suddenly gets 99% of the CPU; I can’t even minimize the window! It seems so simple to think of this problem beforehand and program the OS in such a way as to keep it responsive. If it’s going to use 75% and take a bit longer, fine with me, because then I can check my email in the meantime. Or is that too much to ask of an OS?”
Just go to Google and type:
windows xp Win32PrioritySeparation
You’ll find many opinions on the net about optimizing Windows memory and CPU priorities.
The default for Win32PrioritySeparation is 2, but it can be lowered to 0; the valid range is 0-38. You might not just have a CPU hog; it might be a memory hog as well.
You can also go to Windows Task Manager, click the Processes tab, right-click the desired process, go to the Set Priority menu and select a priority level.
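If you prefer the command line, that value lives under the PriorityControl registry key. A quick sketch using XP’s built-in reg.exe from a command prompt (the value written back here is simply the documented default, not a recommendation):

  reg query "HKLM\SYSTEM\CurrentControlSet\Control\PriorityControl" /v Win32PrioritySeparation
  reg add "HKLM\SYSTEM\CurrentControlSet\Control\PriorityControl" /v Win32PrioritySeparation /t REG_DWORD /d 2 /f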
Dear Kevin, there is no rationale behind this (my cheap GPU). The reason for such a weird configuration is simply that my Radeon burnt out recently; I’m looking for a pro graphics adapter and haven’t bought one yet, so in the meantime I’m relying on the PC’s on-board chip. I agree with you that the graphics adapter is a serious performance roadblock, especially when Windows tries to redraw the Photoshop screen and runs low on adapter memory. The fun part is, you don’t have this problem when using VNC, because all the performance problems are “shifted” to another PC, of course. And at the very least, while the other, usually idle, PC in the next room is busy solving the problem, you can switch to another task.
Best
Kasra
> Even the most powerful PCs become unresponsive during
> resource-intensive computations such as graphic design,
> media work, image rendering and manipulation. The traditional
> solution has been to upgrade to a faster computer and throw
> more computing power at the problem to lessen the wait time.
No. The even more traditional solution is about CPU interrupts and prioritized access to resources (such as the disk). The hardware has everything you need. The problem lies in those crap OSes not using the hardware correctly.