Linked by Thom Holwerda on Tue 24th Jul 2007 15:16 UTC, submitted by danwarne
Linux "Con Kolivas is a prominent developer on the Linux kernel and strong proponent of Linux on the desktop. But recently, he left it all behind. Why? In this interview with APCMag.com, Con gives insightful answers exploring the nature of the hardware and software market, the problems the Linux kernel must overcome for the desktop, and why despite all this he's now left it all behind."
Thread beginning with comment 257996
Desktop Vs. Server
by kop316 on Wed 25th Jul 2007 00:12 UTC

Call me a newbie, but I was curious about what distinguishes development for the server from development for the desktop in kernel space. I would think that if the kernel gets faster with a better CPU scheduler, then both the server and the desktop would benefit.

Could someone please clarify this?

Reply Score: 2

RE: Desktop Vs. Server
by hobgoblin on Wed 25th Jul 2007 00:42 in reply to "Desktop Vs. Server"

It's a question of priority, if I understand the subject correctly.

With a server, one is mostly interested in the rapid movement of data to and from storage. For, say, a web server, it's critical to get the data off the storage media as fast as possible and out to whoever is accessing the page.

For a desktop system, it's critical to present some kind of reaction to the user. If I click a button, I want that button to show that it has been pressed as soon as possible (blinking, depressing or similar). If not, you get the classic scenario of someone hitting the same button multiple times, or trying other buttons to see if the system is having issues. Then you get a situation where the queued-up commands are all executed at once, often with unpredictable results.

The thing is, on a single-CPU system, which is what the desktop PC is (or was, at least), that one CPU is in charge of all these jobs. So the question becomes: what should get priority? And that is where the scheduler comes in.

In the past, with the Linux kernel being used as the basis for inexpensive server equipment and the like, the priority shifted towards I/O.

But as one starts to use the kernel on the desktop, giving priority to the user interface (the X server and the desktop environment it serves) becomes more important than pushing data around rapidly.

I recall some early desktop distros having the X server launch with a negative nice value; in other words, it would have a very high priority in the scheduler. This was done to improve the responsiveness of the GUI (a long-time sore point for Linux distros in general).

Basically, process scheduling is not "one size fits all". It needs to be tuned to the exact job one wants the machine to do.

It's the same deal in real life, really. If you have a boss who can't make up his mind about what he wants, either the job done as quickly as possible or the maximum amount of feedback on what's being done, things grind to a halt as the workers jump from one priority to the other.

This is one potential benefit of the multi-core CPU: one can toss the GUI job to one core and the file transfer to the other and have both get their work done. But then one may well risk problems further down the chain, in RAM and on the bus...

Reply Parent Score: 3