Linked by Kasra Yousefi on Fri 7th Jan 2005 17:44 UTC
Editorial Problem: Even the most powerful PCs become unresponsive during resource-intensive computations such as graphic design, media work, and image rendering and manipulation. The traditional solution has been to upgrade to a faster computer and throw more computing power at the problem to reduce the wait time. But there is a simple alternative that uses multiple machines without resorting to grid or clustering software. For now it involves a hack, but how hard would it be for an OS vendor to streamline the process?
Grid Computing
by logicnazi on Fri 7th Jan 2005 18:17 UTC

So why are we dismissing grid computing again? Sure, I can see the benefit of this hack now, while we don't have grid computing, but why would a vendor bother streamlining this process when they could offer the far more general and elegant solution of grid computing, where the task actually gets done faster as well?

nice...
by sergio on Fri 7th Jan 2005 18:21 UTC

>Even the most powerful PCs become unresponsive during resource-intensive computations such as graphic design, media work, and image rendering and manipulation

You should use "nice" then. ;-)
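For instance (a minimal sketch in Python on a Unix-like host; the render_big_image.py name is just a placeholder for whatever heavy job you actually run):

    import os
    import subprocess

    # Hypothetical long-running job; substitute your own command line.
    heavy_job = ["python", "render_big_image.py"]

    def run_niced(cmd, niceness=19):
        # os.nice() in the child lowers its CPU priority before exec,
        # so interactive programs keep getting scheduled ahead of it.
        return subprocess.Popen(cmd, preexec_fn=lambda: os.nice(niceness))

    run_niced(heavy_job).wait()

From a shell it's just a matter of prefixing the command with nice -n 19.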

Very nice solution!
by Malbojia on Fri 7th Jan 2005 18:22 UTC

I for one can see this as cheap and effective. I use an 8-port KVM myself and had to make my own cables, seeing as the price of 50-foot cables was way out of the ballpark. I've got 7 machines, and the biggest box is the main lead, so for task processing I'll use all 7, hooked up to my D-Link 10/100 Mbps switch.

That setup can be very expensive, though, and leaves wires running everywhere.

Now I for one would like to know why I didn't think of using VNC with CrossOver.

Great idea; I might just put it to the test.

RE:
by zack on Fri 7th Jan 2005 18:28 UTC

I use VNC every day to remotely administer many different servers (Windows/Linux), and it's a wonderful tool, but it's just too slow if you want to do some "real" work. And that's even on a 100 Mbit local network.

"The fact is, VNC uses very little bandwidth and is very easy to install and configure"
But my experience is that VNC consumes a lot of CPU power while you are connected to the remote host.

VNC is an excellent tool, but for real remote control you should use a dedicated client, such as the Terminal Services client if it's a Windows host.

Not really "new"
by Dave Poirier on Fri 7th Jan 2005 18:31 UTC

Remote terminal access is nothing new; IIRC it dates back to the early '60s or '70s.

Now, just imagine that instead of running VNC on 8 remote machines and having each compute a task individually, you had them all working on the same application in parallel over your network. You'd be able to finish your computation in probably close to a quarter of the time (some time is lost to sending/receiving/synchronizing, etc.).
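As a rough illustration of that split (a Python sketch, assuming SSH access to the other boxes and some hypothetical render_frame command installed on each; the host names are made up):

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    HOSTS = ["slave1", "slave2", "slave3", "slave4"]   # machines reachable over SSH
    FRAMES = range(1, 101)                             # work units to hand out

    def render_on(host, frame):
        # Run one work unit remotely; 'render_frame' stands in for the real job.
        return subprocess.run(["ssh", host, "render_frame", str(frame)],
                              capture_output=True, text=True, check=True)

    # Deal frames out round-robin; each host chews through its share in parallel.
    with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
        jobs = [pool.submit(render_on, HOSTS[i % len(HOSTS)], frame)
                for i, frame in enumerate(FRAMES)]
        results = [job.result() for job in jobs]

Real grid middleware adds scheduling, failure handling and data movement on top of this, which is exactly the part worth streamlining.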

I think the OS makers are better off investing their time in grid computing than in remote terminal services; after all, remote terminal services are already available and only a few clicks/commands away in most OSes.

RE:
by zack on Fri 7th Jan 2005 18:39 UTC

> ...you had them all working on the same application in parallel over your network. You'd be able to finish your computation in probably close to a quarter of the time (some time is lost to sending/receiving/synchronizing, etc.)
That's right! Then we could talk about evolution. This isn't available in the big mainstream OSes we use today (Windows/Linux/Mac), but QNX is a great example of how it could be done.

Quote from QNX:
Remote Process Creation:
Any resource or process can be accessed uniformly using standard messages, across the message-passing IPC, from any location on the network — without having to write code connectors to enable resources to communicate. Similarly, processes on one computer can be started or spawned on another network computer. The result is true distributed processing across multiple network computers with no code modification, and no incremental hardware costs.

Check this out for more information
http://www.qnx.com/tech_highlights/microkernel/

Try Using X instead of VNC
by Brandon Philips on Fri 7th Jan 2005 18:44 UTC

VNC is slow and a very messy way of getting remote GUI access to a UNIX workstation. Try this on your WinXP box: http://xlivecd.indiana.edu/
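The difference is that X ships drawing commands instead of compressed screenshots. A tiny sketch of the same idea driven from Python (assuming sshd with X11 forwarding enabled on the remote machine and that the app, here gimp, is installed there; adjust the user/host to taste):

    import subprocess

    # "ssh -X" sets up X11 forwarding: the program runs on the remote box,
    # but its window is drawn by the local X server.
    subprocess.run(["ssh", "-X", "user@workstation", "gimp"])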

UH, something like X windows maybe?
by Pinback on Fri 7th Jan 2005 18:46 UTC

Years ago we used X windows to run resource intensive programs on "compute servers". The funny part was that these compute servers were just headless workstations fully loaded with RAM.

(At times the software license fees were tied to the server type. Often a workstation-class system was equivalent in CPU speed to a server, but the software license was considerably cheaper.)

The real question these days is "what do you want to use for your 'rich media system' that you actually sit in front of?"

With cheap access to wireless and VNC, you can put your (home) server rack just about anywhere. But more and more often these days the urge is to just collapse all the systems down to one more powerful system, and be done with the noise and heat.

Sun Microsystems
by brando on Fri 7th Jan 2005 18:51 UTC

I thought they were working on something like this: monitor, keyboard, mouse, and a little pad. You take out a credit-card-sized thing and just throw it down on the pad and bam, you're in Windows. Throw down another card and bam, you're in Linux (pick your distro). Throw down another and bam, you're using Mac OS X. Each card has an OS assigned to it, and the pad detects which one it is and remotely signs you into it to do what you want. I saw a demo video of it: the guy just threw one card down on top of the other, the pad picked up the newest card, and the screen switched to match.

This guy is talking about nothing more than setting up a bunch of remote terminals and having them do the work, so that his PC isn't taxed but all the other computers are. So did he really solve anything? Nope. Each computer can still fail and take down whatever it was processing, and now you need to buy as many PCs as tasks you want to run.

There is another way...
by ASHLB on Fri 7th Jan 2005 19:01 UTC

If the Operating Systems we were using had decent schedulers then this would not be a problem. Let me explain.

If the OS detects that a process is heavily compute-bound and that interactive users/processes are suffering from lengthening I/O response times (e.g. keystrokes), it should dynamically reduce the priority of the CPU hog to let other users get a slot. This is not rocket science. I learnt this 30 years ago when studying real-time computer operating systems. One great example at the time was the RSTS/E operating system that ran on PDP-11s. Hawker Siddeley Aviation (home of the Harrier VTOL aircraft) was using an RSTS system in its aerodynamics lab. They had many users doing lots of CPU-intensive work, and they all got a response. The OS kept any one process from hogging the CPU: it would switch from a timeslice-based scheduler to a priority-based scheduler at times of high CPU usage. (That was how a top guy at DEC explained it to me at the time.)
Naturally, this sort of thing could never be included in any Microsoft OS because, IMHO, they don't know the meaning of the word "scheduler"; but that, as I say, is just my humble opinion.
Enough of the historical reminiscing. It's Friday night and I'm off down the pub.
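A toy model of that behaviour (pure illustration in Python, not how any real kernel is written): a task that burns its whole timeslice drifts down in priority, while tasks left waiting drift back up, so the interactive job keeps getting slots.

    # Toy scheduler: demote the CPU hog, age the waiting tasks upward.
    class Task:
        def __init__(self, name, cpu_bound):
            self.name = name
            self.cpu_bound = cpu_bound
            self.priority = 10              # higher number = runs sooner

    def tick(tasks):
        current = max(tasks, key=lambda t: t.priority)
        for t in tasks:
            if t is current:
                # Charge the running task for the CPU it just used.
                t.priority = max(1, t.priority - (3 if t.cpu_bound else 1))
            else:
                # Age waiting tasks so nothing starves.
                t.priority = min(20, t.priority + 1)
        return current.name

    tasks = [Task("filter-render", cpu_bound=True), Task("editor", cpu_bound=False)]
    print([tick(tasks) for _ in range(10)])
    # Both get CPU time, but the editor is never locked out by the render.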

complex
by seratne on Fri 7th Jan 2005 19:05 UTC

Yes, you might be able to multitask between different machines, but for an average "designer" it will never work. It's too complex, and your data ends up spread across different machines. Central file storage isn't a solution either, unless you go to Gig-E or 10Gig-E, which is still expensive on the network side, never mind the cost of the storage itself, or the cost of multiple licenses for a single app just to save yourself three minutes, and then there's the confusion of keeping files organized. It just looks like it would be a big mess.

For any task that takes less than five minutes, the solution is one workstation, beefed up as much as possible. For complex tasks that take more than 15 minutes, or hours, grid/distributed computing is the best solution.

RE: nice...
by seratne on Fri 7th Jan 2005 19:08 UTC

"You should use "nice" then. ;-)"

"nice" doesnt help much if a process is only taking up 10% cpu but is thrashing your hard drive about.

RE: Sun Microsystems
by devnet on Fri 7th Jan 2005 19:10 UTC

I'd use TightVNC, which is considerably faster than the one posted.

In a sense, what this guy is saying is that you'd have multiple users 'logged on' via VNC... and that instead of the 'thin server' (or main computer) being taxed, you'd bring the work to the VNC clients (terminals). It's clever but not original; it's been posted about before. But it is a cost-effective answer, and one that a home user or small business can implement for minimal cost.

flawed solution
by Anonymous on Fri 7th Jan 2005 19:25 UTC

Photoshop in CrossOver is fairly buggy if you try to do more than the basics, and the ImageReady half of it is broken. Also, what happens to your color calibration and display settings over VNC? As far as I know, VNC compresses the screen (e.g. as JPEG) and sends those compressed images to the client. I wouldn't exactly call it efficient if you had to make corrections on a native Photoshop client after editing jobs on a CrossOver/Photoshop Linux client. You're also looking at an additional license of CrossOver and Photoshop for each Linux client added. IMHO, Adobe should implement something like the Sony Vegas Video network renderer for Photoshop. On the other hand, SMP, dual-core CPUs, and SATA with command queueing will come down in price eventually, so maybe such a solution isn't needed.

RE: nice...
by nns6561 on Fri 7th Jan 2005 19:29 UTC

This is related to one of my biggest disappointments with Linux. BeOS has ~150 different priority levels, and each thread runs at an appropriate level: media processing is given a higher priority than background tasks. Why can't Linux standardize on appropriate nice values and run tasks at those values by default? This might even be doable without any software modifications, but I haven't run across any Linux distros that do it.
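In user space that standardization could look something like this (a Python sketch; the class-to-niceness table is invented for illustration, and the negative value needs root):

    import os
    import subprocess

    # Invented convention: fixed niceness bands per task class,
    # loosely echoing BeOS's idea of standard priority levels.
    NICE_BY_CLASS = {
        "media":       -5,   # audio/video playback (needs root to set)
        "interactive":  0,   # ordinary desktop apps
        "background":  10,   # indexing, mail fetching
        "batch":       19,   # renders, compiles, backups
    }

    def launch(task_class, cmd):
        niceness = NICE_BY_CLASS[task_class]
        return subprocess.Popen(cmd, preexec_fn=lambda: os.nice(niceness))

    # A batch render then never competes with a media player on equal terms.
    launch("batch", ["python", "render_big_image.py"])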

OS X is fine with compute bound tasks
by Jon on Fri 7th Jan 2005 19:40 UTC

The problems arise with disk-bound tasks, in particular when you run out of real memory and have to swap. When you try to multitask on a machine where one app is consuming a lot of memory, not only is the app you're trying to use starved of RAM, it also competes with the first app for disk I/O to swap.
Almost inevitably, poor performance in these scenarios can be improved by buying sufficient RAM for the primary task to run in memory. For Photoshop this could easily be a couple of gigabytes.

Other solutions.
by dpi on Fri 7th Jan 2005 20:05 UTC

VNC? Well, you might consider some better alternatives, although TightVNC isn't half bad. The problem is that when you do Real Work you don't want VNC to show the picture differently from how it really is, yet that is exactly what its compression does. I think the following two solutions are also applicable:

* Use Windows 2000 Server, Windows XP Pro, or Windows 2003, which come with an RDP server.
* Use NoMachine's X hacks, which use heavy compression but few resources.

As for Photoshop: you say 'Inexpensive Solution' and mention Photoshop. If you want an inexpensive Photoshop clone, try Paint.NET, GIMP, or PSP. Or buy an SGI O2 or Octane and put Photoshop 3 on it. Photoshop 3 is almost the same as the later versions, but this puppy runs natively on inexpensive hardware, and remote X works flawlessly. That route is much cheaper than starting with Apple. I'm not sure if NoMachine runs on IRIX; I still have to figure that out.

This is what X Windows already does
by Anonymous on Fri 7th Jan 2005 20:50 UTC

Just a note: the procedure described above is how all Unix-based systems (including Linux) already work. Any X Window System environment can route its display, keyboard, and mouse to a remote machine; I've been doing this for over two decades now. The only difference here is bringing Windows into the mix. :-)

MS Terminal services
by Marcelo on Fri 7th Jan 2005 20:57 UTC

Microsoft has the Terminal Services solution for running Win32 applications on Windows servers, but TSCALs (per-terminal licenses) are abusive. M$ sets the TSCAL price so that it never works out cheaper to use Windows dumb terminals...

I use very old machines (Pentium 100 with 32 MB RAM) as X terminals with the Thinstation (http://thinstation.sf.net) solution, which I recommend. I use it mainly to run Firefox and OpenOffice on Pentium 4 or Athlon XP servers running Linux. Thinstation also comes with an rdesktop client for M$ servers.

On spare Windows machines I install Cygwin/XFree86 (http://x.cygwin.com/) to access the Linux servers and run graphical applications.

so so
by Charles on Fri 7th Jan 2005 22:59 UTC

I don't think Photoshop uses a lot of CPU, but it uses a lot of RAM and HDD resources. I use Photoshop every day. I have 2 GB of RAM and two 10,000 RPM WD Raptor SATA drives in RAID 0; that's what it takes to run Photoshop and work quickly with high-resolution graphics.

I doubt the average Photoshop user has ever heard of "Linux", "Fedora", or even "WINE". Installing WINE is painful, and only "power users" are able to do so.

Unfortunately, "power users" don't tend to use CAD or programs such as Photoshop.

Inexpensive?
by Aaron on Fri 7th Jan 2005 23:18 UTC

I don't see how that is inexpensive. With multiple PCs over VNC you'd need multiple licenses of Photoshop, plus CrossOver if you're running it on Linux.

I would like to share resources across multiple PCs with something like openMosix, but unfortunately the apps I want to run don't run on Linux, and openMosix doesn't work on Windows.

Strange Idea
by peter on Fri 7th Jan 2005 23:23 UTC

Try offering this solution to your boss; what do you think will happen?

4 computers with Photoshop is 4 licences.

4 computers with Photoshop is installing it 4 times, repeated for every other heavy program you want to use.

4 computers is 4 users killing your app.

My experience with users is that they don't want to know and don't want to be bothered; it needs to be completely transparent to them.

Ciao

Peter

wha?
by aurex on Fri 7th Jan 2005 23:25 UTC

This is a very obvious and moronic idea. Why even go through all the trouble? Why not just hook all these machines up to a KVM switch, sit at a desk, and work, instead of complicating things? This is just a very poor idea.

Photoshop
by iongion on Fri 7th Jan 2005 23:26 UTC

Photoshop does use a lot of CPU: think of the filters you apply and play with.

Apart from the filters you are right... very little CPU usage.

But filters could become benchmarks...
They hammer the processor like any other intensive operation: rendering, compression... your favourite D3 game.

And so on...

@ Charles (IP: 200.138.135.---)
by helf on Fri 7th Jan 2005 23:51 UTC

I'm sorry, but installing WINE is not painful. I'm not a Linux advocate, and I'm fairly new to it, but I would not call installing WINE painful. Using apt, all I type is "apt-get install wine" and voila, it's installed. Easy.

backburner
by DarkTrancer on Sat 8th Jan 2005 00:02 UTC

There is an app for 3D Studio Max called Backburner that farms rendering tasks out to all networked computers with Backburner installed. I have used this several times for huge renders that even my AMD64 would take an age to deal with; loaded onto three spare computers of varying CPU and memory, it works a treat.
The computers all log into my main computer, collect a task, render it, then send back the result. For an animation each one will take a frame; for a single-frame render, a field.
It runs sweetly, and I'm guessing this is more what Kasra is looking for; pity it isn't available for applications in general.

Another solution
by Eddie303 on Sat 8th Jan 2005 01:14 UTC

... would be the LTSP, which works reliably. You can use diskless stations with it.

xGrid?
by Viridian on Sat 8th Jan 2005 01:19 UTC

Correct me if I'm wrong, but isn't this what Apple is trying to achieve with xGrid? Easily configurable distributed computing for persons without the administrative skills or technical know-how seems tailor-made for the situation the author describes. Can someone who knows more about this give me some more information? It seems to me that xGrid in conjunction with Automator could be a powerful combination, and I'm intrigued by the upcoming OS X Tiger.

Not even near
by somebody on Sat 8th Jan 2005 01:21 UTC

RealVNC?? You could at least try TightVNC or NX. Photoshop users usually need larger resolutions; 1600x1200 is common. RealVNC couldn't handle real-time brushing with a larger brush.

Colors? Even if you ignore the lossy compression, there's color downsampling to optimize throughput.

Colors again? There's still no way to calibrate Photoshop under X.

Let's say you did it anyway. Now name the Photoshop operations that actually take time. Most of the work is color correction and cleaning up scan errors, which require the user to be present.

Waste of money and waste of time.

about linux and nice
by hobgoblin on Sat 8th Jan 2005 01:52 UTC

Renicing a process may well just be hiding a problem. For a time, distros shipped with XFree86 running at an above-average nice value to help with responsiveness.

But now, with 2.6, the default scheduler monitors how often a process enters a wait state for more user input and adjusts its priority accordingly. A CPU-heavy task therefore ends up with a lower priority than tasks like word processors and others that are centred around user input. These dynamic levels are maintained so that the task the user is currently working in gets a slightly higher priority than a background task, to preserve a sense of responsiveness.
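Roughly, the estimator turns the ratio of sleep time to run time into a bonus, something like this simplified Python illustration (the constants mirror the general shape of the 2.6 heuristic, not the kernel's exact numbers; lower numbers mean better priority, as with nice):

    MAX_BONUS = 10    # how far the dynamic priority may move from the static one

    def dynamic_priority(static_prio, sleep_time, run_time):
        total = sleep_time + run_time
        sleep_ratio = sleep_time / total if total else 0.0
        bonus = int(sleep_ratio * MAX_BONUS)   # sleepy (interactive) tasks earn a bonus
        return static_prio - bonus + MAX_BONUS // 2

    print(dynamic_priority(0, sleep_time=90, run_time=10))   # word processor: boosted
    print(dynamic_priority(0, sleep_time=5,  run_time=95))   # CPU-heavy render: penalised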

But as someone else here stated, disk access and the amount of RAM available matter just as much. The moment the OS has to start moving stuff to or from swap is the moment the system grinds to a halt...

are u kidding
by anand on Sat 8th Jan 2005 02:04 UTC

Forgive me, but I thought this article had some substance; turns out otherwise. What is so innovative about using VNC? It's been around for ages. And even if VNC is used, a slower machine running Windows XP doesn't really help, since XP is a single-desktop environment. Using CrossOver on Linux is dead slow and flaky at best. And how does running multiple copies of Photoshop on Linux solve the problem of excess HDD activity?

KVM
by Mike on Sat 8th Jan 2005 03:25 UTC

Would someone please buy Kasra Yousefi a KVM?

I don't know what the author processes with Photoshop, but I often work with files of several hundred MB without any slowdown. If things start to become slow (heavy swapping), add more RAM. Filtering/batching images generally takes only a few seconds (Athlon T-Bird 1400), too short to switch to other work. Other than that, VNC is too slow for Photoshop to be practical, even on a LAN; I know this because I use VNC all the time. Remote Desktop Connection, on the other hand, has a much better response rate.

Forgive my cynicism, but I doubt the author really tried it.

Reply to Mike from Kasra
by kasra yousefi on Sat 8th Jan 2005 06:45 UTC

Dear Mike,
I have some KVM switches in our own office ;)
One little thing about KVM switches is that you can't get very far away from your datacenter. If KVM were the answer, why would people be using TCP/IP and the Internet?
Why would people write and use extensive remote management software?

I didn't mean to "invent" a new remote management system. It was a hack to show how we can distribute resources over a network today, given that grid solutions are hard to find.

Sorry if I sound impolite; English is not my native language. I enjoyed your post very much.

Regards- Kasra

First, a little explanation on the idea behind my post

I know what I proposed is nothing new and is based on 30-year-old standards. The reason I proposed it, and intentionally used the most diverse configuration (I knew I could have done it all on a Windows-only or Linux-only setup and avoided criticism over the licensing and poor-performance issues related to CrossOver Office), was to show the possibilities of distributing processing and disk bandwidth across a small network.

I also like clustering/grid technologies (I follow Plan 9 especially closely) and certainly prefer them over my own awkward hack.

A word about performance:

I'm surprised to see some replies saying that VNC is slow. (I'm not trying to defend VNC, just to clarify the issue.) I did test this in the real world, under heavy conditions, and my machines are not particularly powerful.

For your information, my master PC is an Athlon XP 2200+ with 1 GB RAM and a cheap (no-brand) 32 MB graphics adapter.
My Linux slave is an Athlon XP 2200+ with 512 MB RAM running Fedora Core 2, also used as a staging Apache/MySQL server hosting around 40 websites.
Both machines are connected via a 100 Mbps 3Com OfficeConnect switch.

Thanks for your thoughtful feedback! I learned a lot from it. Regards - Kasra

a useful program
by na on Sat 8th Jan 2005 08:32 UTC

http://synergy2.sourceforge.net/
This program might be useful here. It's like a software KVM, without the video.

``real'' work...
by Cosmo on Sat 8th Jan 2005 12:02 UTC

> I use VNC every day to remotely administer many different servers (Windows/Linux), and
> it's a wonderful tool, but it's just too slow if you want to do some "real" work. And
> that's even on a 100Mbit local network...

What kind of ``real'' work (a single pair of quotation marks hardly suffices, by the way) are you talking about? Right now I use VNC to access my workstation 400 km away over a DSL Internet connection, and, given that, I find it amazingly responsive.

I don't understand
by Mike on Sat 8th Jan 2005 14:45 UTC

I don't understand why Windows lets programs hog the CPU. I always find it so frustrating when I'm multitasking and one program suddenly gets 99% of the CPU; I can't even minimize the window! It seems so simple to think of this problem beforehand and program the OS in such a way as to keep it responsive. If it's going to use 75% and take a bit longer, fine with me, because then I can check my email in the meantime. Or is that too much to ask of an OS?

RE: I don't understand........
by DonQ on Sat 8th Jan 2005 16:04 UTC

> I don't understand why Windows lets programs hog the CPU. I always find it so frustrating when I'm multitasking and one program suddenly gets 99% of the CPU; I can't even minimize the window! It seems so simple to think of this problem beforehand and program the OS in such a way as to keep it responsive.

It isn't so simple.

First, Windows includes kernel time in its CPU figure, so your 99% may just be a process waiting on disk I/O.

Second, waiting for disk I/O doesn't by itself make Windows slower (unless the HDD is badly configured). But if another program needs, for example, to read something from the same disk (or just from the swap file), then the system becomes unresponsive.

Play with a program that uses only CPU (Prime95 is a good example): 99% CPU usage doesn't affect other programs at all.

What you need is more RAM and two or more hard disks that can be accessed independently.

RE:
by Mike on Sat 8th Jan 2005 17:00 UTC


> Why would people write and use extensive remote management software?

So that the IT department can troubleshoot and repair software issues on a network without leaving their office.

And so that people can access their office and home PCs while out on the road.

What you're describing just seems unnecessary.

A question to Kasra
by Kevin on Sat 8th Jan 2005 23:08 UTC

Why would a 3D graphics designer use "a cheap (no brand) 32 MB Graphic adapter" in a computer equipped with an Athlon XP 2200+ and 1 GB of RAM? Doesn't the cheap video card hold back the other components? Maybe that's one source of the problems too.

windows xp Win32PrioritySeparation
by attila on Sat 8th Jan 2005 23:34 UTC

"I don't understand why Windows let's programmes hog the cpu. I always find it so frustrating when I'm multitasking and one program suddenly get 99% of the cpu, I can't even minimize the window! It seems so simple to think of this problem beforehand and program the os in such a way as to keep it responsive. If it's gonna use 75% and take a bit longer, fine with me, because then I can check out my email in the mean time. Or is that too much too ask of an OS?"

just go to google and type:

windows xp Win32PrioritySeparation

youll find may opinions on the net for optimizing windows memory and cpu priorities.

default for Win32PrioritySeparation=2 but it can be lowered to 0. valid range is 0-38. you might not just have a cpu hog it might also be a memory hog as well.

go to windows task manager then click prosesses tab then right click on the desired process then got to set priority menu and select priority level.
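If you'd rather script it than click through regedit, something like this should work (a sketch using Python's winreg module on Windows; run it with administrative rights and double-check the value you want before writing):

    import winreg

    KEY_PATH = r"SYSTEM\CurrentControlSet\Control\PriorityControl"
    VALUE = "Win32PrioritySeparation"

    # Read the current setting.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        current, _ = winreg.QueryValueEx(key, VALUE)
        print("current value:", current)

    # Write a new value (0-38, per the range mentioned above); needs admin rights.
    new_value = 2    # the default mentioned above
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, VALUE, 0, winreg.REG_DWORD, new_value)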

Reply to Kevin from Kasra
by kasra yousefi on Sun 9th Jan 2005 05:47 UTC

Dear Kevin, there is no rationale behind my cheap GPU. The reason for such a weird configuration is simply that my Radeon burnt out recently; I'm looking for a pro graphics adapter, haven't bought one yet, and meanwhile I'm relying on the PC's on-board chip. I agree with you that the graphics adapter is a serious performance roadblock, especially when Windows tries to redraw the Photoshop screen and runs low on adapter memory. The fun part is, you don't have this problem when using VNC, because all the performance problems are "shifted" to another PC, of course. And at least, while the usually idle PC in the next room is busy solving the problem, you can switch to another task.

Best
Kasra

???
by Morin on Sun 9th Jan 2005 17:05 UTC

> Even the most powerful PCs become unresponsive during
> resource-intensive computations such as graphic design,
> media work, and image rendering and manipulation. The
> traditional solution has been to upgrade to a faster computer
> and throw more computing power at the problem to reduce the wait time.

No. The even more traditional solution is about CPU interrupts and prioritized access to resources (such as the disk). The hardware has everything you need. The problem lies in those crap OSes not using the hardware correctly.