Linked by cloud on Sat 27th Oct 2012 01:05 UTC
Linux: A new version of the real-time Linux scheduler called SCHED_DEADLINE has been released on the Linux Kernel Mailing List. For people who missed previous submissions, it consists of a new deadline-based CPU scheduler for the Linux kernel with bandwidth isolation (resource reservation) capabilities. It supports global/clustered multiprocessor scheduling through dynamic task migrations. This new version takes into account previous comments and suggestions and is aligned with the latest mainline kernel. A video about SCHED_DEADLINE is also available on YouTube.
RE[5]: lie-nux at it again.
by Alfman on Sat 27th Oct 2012 21:33 UTC in reply to "RE[4]: lie-nux at it again."

Laurence,

"If Linux gets exhausted of RAM, then the requesting application is killed and an OOE (out of memory exception) raised in the event logs."


Isn't the default behaviour under Linux to invoke the out-of-memory (OOM) killer? It takes over and heuristically decides which process to kill. I'm opposed to the OOM killer on the grounds that it kills well-behaved processes seemingly at random, even when they handle out-of-memory conditions in a well-defined way.

Playing devil's advocate, the OOM killer gives the user a chance to specify weight factors for each process, giving the kernel a hint about which processes to kill first (/proc/1000/oom_adj, /proc/1000/oom_score, etc.). This increases the likelihood that the kernel will kill the process responsible for consuming the most RAM. Without the OOM killer, a small process (e.g. ssh) can be forced to terminate when another process (dd bs=4G) is responsible for hoarding all the memory. Killing the large "guilty" process is better than killing small processes that happen to need more memory.
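To make that concrete, here is a sketch of how those knobs are poked from a root shell (PID 1000 is just the example from above; on kernels of this vintage the adjustment range is roughly -17 to 15, with -17 exempting the process entirely):

echo 15 > /proc/1000/oom_adj    # higher values make this process a more likely OOM victim
cat /proc/1000/oom_score        # the kernel's current "badness" score used to pick victims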


I am interested in what others think about the Linux OOM killer.



"mv `which dd` /sbin/ problem solved."

I don't think that addresses the root concern, which is that userspace processes can abuse system resources to the point of grinding the system to a halt. dd was a simple example, but there are infinitely more ways to do similar things. If our goal was to deny access to all the commands with potential to overload system resources, we'd be left with a virtually empty set. Obviously you'd have to deny access to php, perl, gcc, even shell scripts. The following does an excellent job of consuming both CPU and RAM on my system until I run out of memory and it aborts:

cat /dev/zero | gzip -c | gzip -d | gzip -c | gzip -d | gzip -c | gzip -d | sort > /dev/null
(The gzip chain burns CPU; sort, never seeing a newline, buffers the entire stream as one line until allocation fails.)

It's not likely to happen accidentally, but if a user is determined to abuse resources, he'll find a way!


RE[6]: lie-nux at it again.
by Laurence on Sat 27th Oct 2012 21:44 in reply to "RE[5]: lie-nux at it again."


Isn't the default behaviour under Linux to invoke the out-of-memory (OOM) killer? It takes over and heuristically decides which process to kill.

Well yeah, that's what I just said.


I'm opposed to the OOM killer on the grounds that it kills well-behaved processes seemingly at random, even when they handle out-of-memory conditions in a well-defined way.

Yeah, I've often wondered if there was a better way of handling such exceptions. The OOM killer doesn't sit nicely with me either.


Playing devil's advocate, the OOM killer gives the user a chance to specify weight factors for each process, giving the kernel a hint about which processes to kill first (/proc/1000/oom_adj, /proc/1000/oom_score, etc.). This increases the likelihood that the kernel will kill the process responsible for consuming the most RAM. Without the OOM killer, a small process (e.g. ssh) can be forced to terminate when another process (dd bs=4G) is responsible for hoarding all the memory. Killing the large "guilty" process is better than killing small processes that happen to need more memory.

Interesting concept. A little tricky to implement, I think, but it has potential.


I don't think that addresses the root concern, which is that userspace processes can abuse system resources to the point of grinding the system to a halt. dd was a simple example, but there are infinitely more ways to do similar things. If our goal was to deny access to all the commands with potential to overload system resources, we'd be left with a virtually empty set. Obviously you'd have to deny access to php, perl, gcc, even shell scripts. The following does an excellent job of consuming both CPU and RAM on my system until I run out of memory and it aborts:

cat /dev/zero | gzip -c | gzip -d | gzip -c | gzip -d | gzip -c | gzip -d | sort > /dev/null
(The gzip chain burns CPU; sort, never seeing a newline, buffers the entire stream as one line until allocation fails.)

It's not likely to happen accidentally, but if a user is determined to abuse resources, he'll find a way!

But that's true for any OS. If a user has access to a machine, then it would only take a determined halfwit to bring it to its knees.

The only 'safe' option would be to set everyone up with thin clients which only have a web browser installed and a bookmarked link to cloud services like Google Docs.


RE[7]: lie-nux at it again.
by Alfman on Sat 27th Oct 2012 22:11 in reply to "RE[6]: lie-nux at it again."

Laurence,

"Well yeah, that's what i just said."

"Interesting concept. A little tricky to impliment I think, but it has potential."

Maybe we're misunderstanding each other, but the OOM killer I described above *is* what Linux has implemented. When it's enabled (which I think is the default), it does not necessarily kill the requesting application; it heuristically selects a process to kill.


"The only 'safe' option would be to set everyone up with thin clients which only have a web browser installed and bookmarked link to cloud services like Google Docs."

Haha, I hear you there, but ironically I consider firefox to be one of the guilty apps. I often have to kill it as it reaches 500MB after a week of fairly routine use. I'm the only one on this computer, but if there were 4 or 5 of us it'd be a problem.


This is probably hopeless, but here is what top prints out now:

  PID USER PR NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
27407 lou  20  0 1106m 403m  24m R    4 10.0  50:27.51 firefox
21276 lou  20  0  441m 129m 5420 S    3  3.2 869:47.14 skype

I didn't realise skype was such a hog!
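For anyone wanting to check their own box, a plain procps one-liner does the same ranking (the head count is arbitrary):

ps -eo pid,user,rss,comm --sort=-rss | head -6    # header plus the five largest resident sets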


RE[7]: lie-nux at it again.
by Gullible Jones in reply to "RE[6]: lie-nux at it again."

But that's true for any OS. If a user has access to a machine, then it would only take a determined halfwit to bring it to its knees.

Have to disagree; IMO the entire goal and purpose of a multiuser OS is to prevent users from stepping on each other's toes. Obviously some of this is the sysadmin's responsibility; but I do think it's good to have default setups that are more fool-proof in multiuser environments, since that's probably where Linux sees the most use. (I think?)

That said, operating systems are imperfect, like the humans that create them.

Re handling of OOM conditions: IIRC the BSDs handle this by making malloc() return NULL when there's not enough memory for the request. From what I recall of C, most programs don't check for that, so the calling program will probably crash on the null pointer - which I think is what you want in most cases, unless the calling program is something like top or kill! But I doubt you'd easily get conditions where $bloatyapp would keep running while kill would get terminated.
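You can roughly approximate that BSD-style behaviour on Linux today with an rlimit; a sketch, using an arbitrary ~50 MB cap:

( ulimit -v 51200; sort /dev/zero > /dev/null )    # cap the subshell's address space at ~50 MB
# sort's allocations fail long before the OOM killer gets involved; GNU sort
# reports "memory exhausted" and exits instead of being killed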

(Linux has a couple of options like this. The OOM killer can be set to kill the task whose allocation triggered the shortage, rather than hunting for a victim; or you can switch the kernel to strict overcommit accounting, making allocations fail once a percentage of RAM plus total swap would be committed. Sadly, there is as of yet no "fail malloc() when physical RAM is exceeded and never mind the swap" setting.)
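For reference, those map to standard vm.* sysctls; a quick sketch of flipping them as root (values illustrative, not recommendations):

sysctl -w vm.oom_kill_allocating_task=1    # OOM-kill the task that triggered the shortage
sysctl -w vm.overcommit_memory=2           # strict accounting: refuse commits past the limit
sysctl -w vm.overcommit_ratio=50           # in mode 2, commit limit = swap + 50% of RAM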
