Linked by cloud on Sat 27th Oct 2012 01:05 UTC
Linux A new version of SCHED_DEADLINE, a deadline-based real-time CPU scheduler for the Linux kernel, has been posted to the Linux Kernel Mailing List. For those who missed previous submissions, it provides bandwidth isolation (resource reservation) capabilities and supports global/clustered multiprocessor scheduling through dynamic task migrations. This version takes previous comments and suggestions into account and is aligned with the latest mainline kernel. A video about SCHED_DEADLINE is also available on YouTube.
Thread beginning with comment 540269
RE[3]: lie-nux at it again.
by Gullible Jones on Sat 27th Oct 2012 18:43 UTC in reply to "RE[2]: lie-nux at it again."
Gullible Jones Member since:
2006-05-23

I realize the above is correct behavior... What bothers me is that (by default anyway) it can be used by a limited user to mount an effective denial-of-service attack. Stalling or crashing a multiuser system should IMO (ideally) be something that root, and only root, can do. ;)

OTOH, the presence of tools like dd is why I much prefer Linux to Windows. Experienced users shouldn't have to jump through hoops to do simple things.

Edit: re swap, I wish there were a way of hibernating without it. In my experience it is not very helpful, even on low-memory systems.

Edited 2012-10-27 18:44 UTC

Reply Parent Score: 3

RE[4]: lie-nux at it again.
by Alfman on Sat 27th Oct 2012 19:38 in reply to "RE[3]: lie-nux at it again."
Alfman Member since:
2011-01-28

Gullible Jones,

"What bothers me is that (by default anyway) is that it can be used by a limited user to create an an effective denial-of-service attack."

I see your point. You can put hard limits on a user's disk/CPU/RAM consumption, but that can easily interfere with what users legitimately want to do. I'm not sure any system can distinguish between legitimate resource usage and accidental or malicious usage.
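For example, bash's ulimit builtin can cap a single shell before launching anything risky - a rough sketch, with arbitrary values:

ulimit -t 600      # CPU time, in seconds
ulimit -v 1048576  # virtual memory, in KB (~1GB)
ulimit -u 128      # max processes for this user

But that only binds processes started from that shell, and picking values that don't get in the user's way is exactly the hard part.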


At university some ten years ago we were using networked Sun workstations; I'm sure they'd know something about distributing resources fairly to thousands of users. I don't remember the RAM capacity/quotas, but I do remember the disk quota because I ran into it all the time - soft limits were something like 15MB, uck!

Reply Parent Score: 3

RE[4]: lie-nux at it again.
by Laurence on Sat 27th Oct 2012 20:16 in reply to "RE[3]: lie-nux at it again."
Laurence Member since:
2007-03-26

"I realize the above is correct behavior... What bothers me is that (by default anyway) it can be used by a limited user to mount an effective denial-of-service attack. Stalling or crashing a multiuser system should IMO (ideally) be something that root, and only root, can do. ;)"

mv `which dd` /sbin/  # move dd out of ordinary users' default PATH

problem solved.

Edited 2012-10-27 20:18 UTC

Reply Parent Score: 3

RE[5]: lie-nux at it again.
by jessesmith on Sat 27th Oct 2012 21:08 in reply to "RE[4]: lie-nux at it again."
jessesmith Member since:
2010-03-11

That just takes care of one tool which can bring the system to its knees; limiting access to dd is a band-aid. The issue is that any application on Linux can put the system under a great deal of stress or bring it down. (I do this a couple of times a year by accident.)

There are ways to protect against this kind of attack (accidental or not), such as setting resource limits on user accounts. Most distributions do not appear to ship with these in place by default, but if your system requires long uninterrupted uptime, the sysadmin should consider locking down resource usage.
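For example (the group name and values here are illustrative, not a recommendation), pam_limits lets you set per-user caps in /etc/security/limits.conf:

# /etc/security/limits.conf - caps for an ordinary "users" group
@users  hard  nproc  256       # max number of processes
@users  hard  as     2097152   # address space, in KB (~2GB)
@users  hard  fsize  10485760  # max file size, in KB (~10GB)

Whether those numbers make sense depends entirely on the workload.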

Reply Parent Score: 3

RE[5]: lie-nux at it again.
by Alfman on Sat 27th Oct 2012 21:33 in reply to "RE[4]: lie-nux at it again."
Alfman Member since:
2011-01-28

Laurence,

"If Linux gets exhausted of RAM, then the requesting application is killed and an OOE (out of memory exception) raised in the event logs."


Isn't the default behaviour under Linux to call the out-of-memory killer? It takes over and heuristically decides which process to kill. I'm opposed to the OOM killer on the grounds that it randomly kills well-behaved processes, even when they handle out-of-memory conditions in a well-defined way.
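As an aside, that well-defined failure mode can be had by disabling overcommit, so allocations fail up front rather than the OOM killer firing later - a sketch, with trade-offs of its own:

sysctl -w vm.overcommit_memory=2  # strict accounting: refuse allocations beyond the commit limit
sysctl -w vm.overcommit_ratio=80  # commit limit = swap + 80% of physical RAM

With that, a well-behaved process at least sees its malloc fail instead of being shot later.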

Playing devil's advocate, the OOM killer does give the user a chance to specify weight factors for each process, hinting to the kernel which processes to kill first (/proc/1000/oom_adj, /proc/1000/oom_score, etc.). This increases the likelihood that the kernel will kill the process responsible for consuming the most RAM. Without the OOM killer, a small process (e.g. ssh) can be forced to terminate when another process (dd bs=4G) is responsible for hoarding all the memory. Killing the large "guilty" process is better than killing small processes that happen to need more memory.
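Roughly like so - the PIDs and process names here are made up, and on kernels of this era the knob is oom_adj (-17 disables, +15 is most killable):

echo -17 > /proc/$(pidof -s sshd)/oom_adj  # shield sshd from the OOM killer
echo 15 > /proc/12345/oom_adj              # make the suspected hog the preferred victim
cat /proc/12345/oom_score                  # inspect the resulting badness score

Lowering a process's score needs root, of course.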


I am interested in what others think about the Linux OOM killer.



"mv `which dd` /sbin/ problem solved."

I don't think that addresses the root concern, which is that userspace processes can abuse system resources to the point of grinding the system to a halt. dd was a simple example, but there are infinitely more ways to do similar things. If our goal were to deny access to all the commands with the potential to overload system resources, we'd be left with a virtually empty set. Obviously you'd have to deny access to php, perl, gcc, even shell scripts. The following does an excellent job of consuming both CPU and RAM on my system until I run out of memory and it aborts:

cat /dev/zero | gzip -c | gzip -d | gzip -c | gzip -d | gzip -c | gzip -d | sort > /dev/null  # the gzip stages burn CPU; sort must buffer the never-ending stream, eating memory until it aborts

It's not likely to happen accidentally, but if a user is determined to abuse resources, he'll find a way!

Reply Parent Score: 2