Linked by cloud on Sat 27th Oct 2012 01:05 UTC
Linux
A new version of the real-time Linux scheduler called SCHED_DEADLINE has been released on the Linux Kernel Mailing List. For people who missed previous submissions, it consists of a new deadline-based CPU scheduler for the Linux kernel with bandwidth isolation (resource reservation) capabilities. It supports global/clustered multiprocessor scheduling through dynamic task migrations. This new version takes into account previous comments/suggestions and is aligned to the latest mainline kernel. A video about SCHED_DEADLINE is also available on YouTube.
RE[6]: lie-nux at it again.
by Laurence on Sat 27th Oct 2012 21:44 UTC in reply to "RE[5]: lie-nux at it again."
Laurence Member since:
2007-03-26


"Isn't the default behaviour under linux to call the out of memory killer? It takes over and heuristically decides which process to kill."

Well yeah, that's what I just said.


"I'm opposed to the OOM killer on the grounds that it randomly kills well behaved processes, even when they handle out of memory conditions in a well defined way."

Yeah, I've often wondered if there was a better way of handling such exceptions. The OOM killer doesn't sit nicely with me either.


"Playing devil's advocate, OOM killer gives the user a chance to specify weight factors for each process to give the kernel a hint about which processes to kill first (/proc/1000/oom_adj /proc/1000/oom_score etc). This increases the likelihood that the kernel will kill a process that is responsible for consuming the most ram. Without the OOM killer, a small process (ie ssh) can be forced to terminate when another process (dd bs=4G) is responsible for hoarding all the memory. Killing the large "guilty" process is better than killing small processes that happen to need more memory."

Interesting concept. A little tricky to implement I think, but it has potential.
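For anyone curious, here is a rough C sketch (mine, not from the parent post) of that knob in action: it lowers the calling process's own OOM priority by writing to the older /proc/<pid>/oom_adj file mentioned above. Lowering the value below the default generally needs root, and the exact range depends on the kernel version, so treat it as an illustration only.

/* Illustration only: ask the kernel to prefer other victims over this
 * process by lowering its oom_adj value (older interface; newer kernels
 * also expose /proc/<pid>/oom_score_adj).  Needs root to lower the value. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/self/oom_adj", "w");
    if (!f) {
        perror("fopen /proc/self/oom_adj");
        return 1;
    }
    fprintf(f, "-10\n");   /* more negative = less likely to be killed */
    fclose(f);
    return 0;
}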


"I don't think that addresses the root concern, which is that userspace processes can abuse system resources to the point of grinding the system to a halt. dd was a simple example, but there are infinitely more ways to do similar things. If our goal was to deny access to all the commands with potential to overload system resources, we'd be left with a virtually empty set. Obviously you'd have to deny access to php, perl, gcc, even shell scripts. The following does an excellent job of consuming both CPU and RAM on my system until I run out of memory and it aborts:

cat /dev/zero | gzip -c | gzip -d | gzip -c | gzip -d | gzip -c | gzip -d | sort > /dev/null

It's not likely to happen accidentally, but if a user is determined to abuse resources, he'll find a way!"

But that's true for any OS. If a user has access to a machine then it would only take a determined halfwit to bring it to its knees.

The only 'safe' option would be to set everyone up with thin clients which only have a web browser installed and bookmarked link to cloud services like Google Docs.

Reply Parent Score: 2

RE[7]: lie-nux at it again.
by Alfman on Sat 27th Oct 2012 22:11 in reply to "RE[6]: lie-nux at it again."
Alfman Member since:
2011-01-28

Laurence,

"Well yeah, that's what i just said."

"Interesting concept. A little tricky to impliment I think, but it has potential."

Maybe we're misunderstanding each other, but the OOM killer I described above *is* what Linux has implemented. When it's enabled (I think it is by default), it does not necessarily kill the requesting application; it heuristically selects a process to kill.


"The only 'safe' option would be to set everyone up with thin clients which only have a web browser installed and bookmarked link to cloud services like Google Docs."

Haha, I hear you there, but ironically I consider firefox to be one of the guilty apps. I often have to kill it as it reaches 500MB after a week of fairly routine use. I'm the only one on this computer, but if there were 4 or 5 of us it'd be a problem.


This is probably hopeless, but here is what top prints out now:

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
27407 lou   20   0 1106m 403m  24m R    4 10.0  50:27.51 firefox
21276 lou   20   0  441m 129m 5420 S    3  3.2 869:47.14 skype

I didn't realise skype was such a hog!

Reply Parent Score: 2

RE[8]: lie-nux at it again.
by Laurence on Sun 28th Oct 2012 11:53 in reply to "RE[7]: lie-nux at it again."
Laurence Member since:
2007-03-26

Ahh yes sorry. Thanks for the correction ;)

Reply Parent Score: 2

Gullible Jones Member since:
2006-05-23

"But that's true for any OS. If a user has access to a machine then it would only take a determined halfwit to bring it to its knees."

Have to disagree; IMO the entire goal and purpose of a multiuser OS is to prevent users from stepping on each other's toes. Obviously some of this is the sysadmin's responsibility; but I do think it's good to have default setups that are more fool-proof in multiuser environments, since that's probably where Linux sees the most use. (I think?)

That said, operating systems are imperfect, like the humans that create them.

Re handling of OOM conditions. IIRC the BSDs handle this by making malloc() fail if there's not enough memory for it. From what I recall of C, this will probably cause the calling program to crash, which I think is what you want in most cases - unless the calling program is something like top or kill! But I doubt you'd easily get conditions where $bloatyapp would keep running while kill would get terminated.

(Linux has a couple of options like this. The OOM killer can be set to kill the first application that exceeds available memory; or you can set the kernel to make malloc() fail if more than a percentage of RAM + total swap would be filled. Sadly, there is as yet no "fail malloc() when physical RAM is exceeded and never mind the swap" setting.)
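(For concreteness, a minimal, hypothetical C sketch of the difference being described. Under strict accounting, BSD-style or Linux with over-commit disabled, the malloc() call itself can return NULL; under Linux's default lazy allocation the failure may only show up later, when the pages are actually touched.)

/* Sketch: a huge allocation that a strictly-accounting kernel can refuse
 * up front.  Assumes a 64-bit build; the size is arbitrary. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t want = (size_t)8 * 1024 * 1024 * 1024;   /* 8 GiB, deliberately huge */

    char *buf = malloc(want);
    if (buf == NULL) {
        /* Strict accounting: fail gracefully instead of crashing later. */
        fprintf(stderr, "allocation of %zu bytes refused\n", want);
        return 1;
    }
    /* With over-commit the promise is only tested here, when the pages are
     * touched -- and by then the OOM killer is the one handling it. */
    memset(buf, 0, want);
    free(buf);
    return 0;
}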

Reply Parent Score: 2

RE[8]: lie-nux at it again.
by Alfman on Sun 28th Oct 2012 04:42 in reply to "RE[7]: lie-nux at it again."
Alfman Member since:
2011-01-28

Gullible Jones,

"Have to disagree; IMO the entire goal and purpose of a multiuser OS is to prevent users from stepping on each other's toes. Obviously some of this is the sysadmin's responsibility; but I do think it's good to have default setups that are more fool-proof in multiuser environments, since that's probably where Linux sees the most use."

I agree with the goal, and that there may be better defaults to that end; however, sometimes there isn't anything an OS can do to solve the problem if resources are oversubscribed in the first place.

For instance, on a server with 10K hosting accounts, one cannot simply divide the resources by 10K, since obviously most accounts aren't busy at any given time and those that are may be seriously underpowered. As a compromise, the policy on shared hosts is to over-subscribe the available resources so that service is good enough for active users most of the time. Unfortunately an over-subscribed service will necessarily suffer if too many users try to make use of it at the same time. Every single shared hosting provider I've used has had performance problems at one time or another.

I confess, one time I was guilty of bringing down the websites of everyone on a shared server ;) Apparently apache/mod_php has an open file limit and I was running an application that consumed too many of them. It was neither memory nor CPU intensive, but the system was nevertheless unable to serve any further requests. Using any ulimit quotas at all on shared daemons like apache can result in this kind of denial of service. The issue gets even more complicated because it's impossible to determine which domain (and therefore which user account) an incoming request is for until the HTTP headers are read. It means there's nothing the OS can do about this case and the shared daemon itself has to be responsible; it's quite the dilemma!
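For illustration, a small, hedged C sketch of the per-process limit involved (RLIMIT_NOFILE, the same number ulimit -n shows). It only inspects and raises one process's own soft limit up to its hard cap; it doesn't solve the shared-daemon problem above, where one account's usage eats the limit for everyone.

/* Sketch: report the open-file limit and raise the soft limit to the hard
 * cap.  Raising the hard cap itself would need root. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("open files: soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    rl.rlim_cur = rl.rlim_max;            /* soft limit up to the hard cap */
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        perror("setrlimit");
    return 0;
}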


"Re handling of OOM conditions. IIRC the BSDs handle this by making malloc() fail if there's not enough memory for it. From what I recall of C, this will probably cause the calling program to crash..."

I agree that malloc should fail if there isn't enough RAM. To me the deferred allocation scheme under Linux is like a broken promise that memory is available when it isn't. It should always return NULL (which can be handled by the caller without crashing).

Edited 2012-10-28 04:47 UTC

Reply Parent Score: 3

RE[8]: lie-nux at it again.
by Soulbender on Sun 28th Oct 2012 09:22 in reply to "RE[7]: lie-nux at it again."
Soulbender Member since:
2005-08-18

"Re handling of OOM conditions. IIRC the BSDs handle this by making malloc() fail if there's not enough memory for it. From what I recall of C, this will probably cause the calling program to crash, which I think is what you want in most cases - unless the calling program is something like top or kill!"


The reason Linux has the OOM killer is that Linux allows you to over-commit memory. I'm a bit hazy on the exact reason for this, but it had something to do with efficient fork()ing of processes that used a lot of memory.
The BSDs do not allow over-commit and instead malloc() (or whatever) will fail.
Which approach is better can be discussed at great length, but hopefully not here.
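As a rough illustration of that fork() rationale (a sketch, not anyone's production code): a process with a large heap that forks only to exec() a tiny helper never really needs a second copy of that heap, because the pages are shared copy-on-write; strict accounting would still have to reserve the full amount for the child.

/* Sketch: large heap, then fork() + exec() of a small helper.  With
 * over-commit the child costs almost nothing; without it the kernel must
 * account for a second 512 MiB that will never be written. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    size_t big = 512UL * 1024 * 1024;       /* stand-in for a busy heap */
    char *heap = malloc(big);
    if (!heap)
        return 1;
    memset(heap, 1, big);                   /* actually touch the pages */

    pid_t pid = fork();                     /* child shares pages copy-on-write */
    if (pid == 0) {
        execlp("true", "true", (char *)NULL);  /* replace the child immediately */
        _exit(127);                            /* only reached if exec fails */
    }
    if (pid > 0)
        waitpid(pid, NULL, 0);

    free(heap);
    return 0;
}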

Reply Parent Score: 3