Linked by cloud on Sat 27th Oct 2012 01:05 UTC
Linux: A new version of the real-time Linux scheduler SCHED_DEADLINE has been posted to the Linux Kernel Mailing List. For those who missed previous submissions, it is a deadline-based CPU scheduler for the Linux kernel with bandwidth-isolation (resource-reservation) capabilities. It supports global/clustered multiprocessor scheduling through dynamic task migrations. This new version takes previous comments and suggestions into account and is rebased onto the latest mainline kernel. A video about SCHED_DEADLINE is also available on YouTube.
RE[6]: lie-nux at it again.
by Laurence on Sat 27th Oct 2012 21:44 UTC in reply to "RE[5]: lie-nux at it again."

Isn't the default behaviour under linux to call the out of memory killer? It takes over and heuristically decides which process to kill.

Well yeah, that's what I just said.

I'm opposed to the OOM killer on the grounds that it randomly kills well-behaved processes, even when they handle out-of-memory conditions in a well-defined way.

Yeah, I've often wondered whether there's a better way of handling such exceptions. The OOM killer doesn't sit nicely with me either.

Playing devil's advocate: the OOM killer lets the user assign per-process weight factors that hint to the kernel which processes to kill first (/proc/1000/oom_adj, /proc/1000/oom_score, etc.). This increases the likelihood that the kernel will kill the process responsible for consuming the most RAM. Without the OOM killer, a small process (e.g. ssh) can be forced to terminate when another process (dd bs=4G) is hoarding all the memory. Killing the large "guilty" process is better than killing small processes that merely happen to need more memory.
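To make the weighting mechanism above concrete, here is a minimal sketch. The /proc/1000/... paths in the comment use an example PID; $$ (the current shell) is substituted here so the commands are runnable. Note that modern kernels expose oom_score_adj (range -1000 to 1000) as the preferred interface, while oom_adj (-17 to 15) is the older, deprecated one:

```shell
# Bias this process toward being chosen by the OOM killer.
# Raising the value needs no special privileges (lowering it does).
echo 500 > /proc/$$/oom_score_adj

adj=$(cat /proc/$$/oom_score_adj)
echo "oom_score_adj: $adj"

# oom_score is the read-only heuristic "badness" score the killer ranks by;
# it incorporates memory usage plus the adjustment set above.
cat /proc/$$/oom_score
```

System daemons use the same knob in the other direction, e.g. sshd commonly lowers its own score so an OOM event doesn't cut off remote access.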

Interesting concept. A little tricky to implement, I think, but it has potential.

I don't think that addresses the root concern, which is that userspace processes can abuse system resources to the point of grinding the system to a halt. dd was a simple example, but there are infinitely more ways to do similar things. If our goal were to deny access to every command with the potential to overload system resources, we'd be left with a virtually empty set: obviously you'd have to deny access to php, perl, gcc, even shell scripts. The following does an excellent job of consuming both CPU and RAM on my system until I run out of memory and it aborts:

cat /dev/zero | gzip -c | gzip -d | gzip -c | gzip -d | gzip -c | gzip -d | sort > /dev/null

It's not likely to happen accidentally, but if a user is determined to abuse resources, he'll find a way!
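For what it's worth, the usual way to keep such abuse local to the offending process, rather than letting it pressure the whole system, is per-process resource limits (the shell's ulimit builtin, i.e. setrlimit(2)). A minimal sketch, reusing the dd example from above (bs=100M is GNU dd syntax; the 50 MiB cap is an arbitrary illustrative value):

```shell
# Cap the subshell's virtual address space at ~50 MiB (ulimit -v takes KiB),
# then ask dd for a 100 MiB buffer. The allocation fails inside dd with
# ENOMEM instead of driving the machine toward a system-wide OOM event.
(
  ulimit -v 51200
  dd if=/dev/zero of=/dev/null bs=100M count=1
) 2>/dev/null
status=$?
echo "dd exit status: $status"   # non-zero: the failure stayed in the process
```

The trade-off is that limits must be chosen per workload in advance, which is exactly the kind of administration the OOM killer's heuristics try to avoid.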

But that's true for any OS. If a user has access to a machine, then it would only take a determined halfwit to bring it to its knees.

The only 'safe' option would be to set everyone up with thin clients that only have a web browser installed and a bookmarked link to cloud services like Google Docs.
