Linked by cloud on Sat 27th Oct 2012 01:05 UTC
Linux: A new version of SCHED_DEADLINE, the real-time deadline scheduler for Linux, has been posted to the Linux Kernel Mailing List. For those who missed previous submissions, it is a deadline-based CPU scheduler for the Linux kernel with bandwidth isolation (resource reservation) capabilities. It supports global/clustered multiprocessor scheduling through dynamic task migrations. This version takes previous comments and suggestions into account and is rebased onto the latest mainline kernel. A video about SCHED_DEADLINE is also available on YouTube.
RE[5]: lie-nux at it again.
by Alfman on Sat 27th Oct 2012 21:33 UTC in reply to "RE[4]: lie-nux at it again."

Laurence,

"If Linux gets exhausted of RAM, then the requesting application is killed and an OOE (out of memory exception) raised in the event logs."


Isn't the default behaviour under Linux to invoke the out-of-memory (OOM) killer? It takes over and heuristically decides which process to kill. I'm opposed to the OOM killer on the grounds that it can kill well-behaved processes, even ones that handle out-of-memory conditions in a well-defined way.
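
For anyone who wants to see or sidestep that behaviour, here's a rough sketch (run as root; exact overcommit semantics vary between kernel versions, so treat the values as illustrative):

# By default Linux overcommits memory, so malloc() rarely fails and the
# OOM killer steps in later instead. Disabling overcommit makes
# allocations fail up front, so a well-behaved process can actually
# handle the error itself:
sysctl -w vm.overcommit_memory=2   # 2 = never overcommit
sysctl -w vm.overcommit_ratio=80   # commit limit = swap + 80% of RAM

# OOM killer activity ends up in the kernel log:
dmesg | grep -i 'killed process'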

Playing devil's advocate, the OOM killer does give the user a chance to specify weight factors for each process (/proc/<pid>/oom_adj, /proc/<pid>/oom_score, etc.) as a hint about which processes the kernel should kill first. This increases the likelihood that the kernel will kill the process responsible for consuming the most RAM. Without the OOM killer, a small process (e.g. ssh) can be forced to terminate when another process (say dd bs=4G, which tries to allocate a 4GB buffer) is responsible for hoarding all the memory. Killing the large "guilty" process is better than killing small processes that merely happen to need a bit more memory.
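
To illustrate (a sketch only; pid 12345 is hypothetical, and newer kernels also expose oom_score_adj with a -1000..1000 range):

# Exempt sshd from the OOM killer, and make pid 12345 (our hypothetical
# memory hog) the preferred victim. oom_adj ranges from -17 (never
# kill) to +15 (kill first):
echo -17 > /proc/$(pidof sshd)/oom_adj
echo 15 > /proc/12345/oom_adj

# Read back the kernel's current "badness" score for the process:
cat /proc/12345/oom_score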


I am interested in what others think about the Linux OOM killer.



"mv `which dd` /sbin/ problem solved."

I don't think that addresses the root concern, which is that userspace processes can abuse system resources to the point of grinding the system to a halt. dd was just a simple example; there are countless other ways to do the same thing. If our goal were to deny access to every command with the potential to overload system resources, we'd be left with a virtually empty set: you'd have to deny access to php, perl, gcc, even shell scripts. The following does an excellent job of consuming both CPU and RAM on my system until it runs out of memory and aborts:

cat /dev/zero | gzip -c | gzip -d | gzip -c | gzip -d | gzip -c | gzip -d | sort > /dev/null

It's not likely to happen accidentally, but if a user is determined to abuse resources, he'll find a way!
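
If you actually want to contain this sort of abuse, per-process resource limits seem like the saner tool. A sketch (assuming bash and pam_limits; the user "alice" and the values are arbitrary):

# Cap anything started from the current shell:
ulimit -v 1048576    # max virtual memory, in KiB (here 1 GiB)
ulimit -t 60         # max CPU time, in seconds

# Or per user, system-wide, in /etc/security/limits.conf:
#   alice  hard  as   1048576    (address space, in KiB)
#   alice  hard  cpu  1          (CPU time, in minutes)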
