Linked by cloud on Sat 27th Oct 2012 01:05 UTC
Linux
A new version of the real-time Linux scheduler called SCHED_DEADLINE has been released on the Linux Kernel Mailing List. For people who missed previous submissions, it consists of a new deadline-based CPU scheduler for the Linux kernel with bandwidth isolation (resource reservation) capabilities. It supports global/clustered multiprocessor scheduling through dynamic task migrations. This new version takes into account previous comments/suggestions and is aligned with the latest mainline kernel. A video about SCHED_DEADLINE is also available on YouTube.
Thread beginning with comment 540248
RE: lie-nux at it again.
by Gullible Jones on Sat 27th Oct 2012 15:31 UTC in reply to "lie-nux at it again."
Gullible Jones Member since:
2006-05-23

There's a little truth to this. Try running

$ dd if=/dev/zero of=~/dumpfile bs=4G count=1

on a system with 2 GB of RAM and any amount of swap space; the OS will hang for a long, long time.

(If you don't have swap space, the command will fail because you don't have enough memory. But it's not safe to run without swap space... right?)

Mind you, Windows is just as bad about this - it just doesn't have tools like dd preinstalled that can easily crash your computer. ;)

Reply Parent Score: 2

RE[2]: lie-nux at it again.
by No it isnt on Sat 27th Oct 2012 16:37 in reply to "RE: lie-nux at it again."
No it isnt Member since:
2005-11-14

You sure? I get

dd: memory exhausted by input buffer of size 4294967296 bytes (4,0 GiB)

Lowering to bs=1G, dd will complete without much noticeable slowdown.

Reply Parent Score: 3

Gullible Jones Member since:
2006-05-23

You're right, my mistake. For the Bad Things to happen, bs has to be set to something between physical RAM and total (physical + virtual) memory.
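For example, on a hypothetical 64-bit box with 2 GB of RAM and 2 GB of swap, a block size between the two should trigger it (untested, numbers made up):

$ dd if=/dev/zero of=~/dumpfile bs=3G count=1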

That said, I have never seen large writes fail to produce a noticeable slowdown, at least not on an HDD; I'm not sure about SSDs. I suspect that slowdowns during big writes are unavoidable on normal-spec desktops.

Reply Parent Score: 2

RE[3]: lie-nux at it again.
by WereCatf on Sat 27th Oct 2012 21:49 in reply to "RE[2]: lie-nux at it again."
WereCatf Member since:
2006-02-15

"You sure? I get

dd: memory exhausted by input buffer of size 4294967296 bytes (4,0 GiB)

Lowering to bs=1G, dd will complete without much noticeable slowdown."


Well, that is actually the expected behaviour on an average desktop-oriented distro. Of course allocating 4 gigabytes of contiguous memory on a system that does not have that much is going to slow down or fail; you can perfectly well try that on Windows or OS X and get exactly the same thing.

Now, before you go ahead and try to say this is a fault in Linux, I have to point out that it's actually a perfectly solvable problem. Forced pre-emption enabled in the kernel, a proper I/O scheduler, and limiting either I/O or memory usage per process or per user will solve it in a nice, clean way, without breaking anything in userland, and provide a functional, responsive system even with such a dd running in the background. If you're interested, peruse the kernel documentation or Google around; there's plenty of documentation on exactly this topic.
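For example, something along these lines should do it with the cgroup v1 memory controller (an untested sketch from memory, assuming the controller is mounted at /sys/fs/cgroup/memory; the group name is made up):

# create a group with a 512 MB memory cap and move the current shell into it
$ sudo mkdir /sys/fs/cgroup/memory/ddtest
$ echo 512M | sudo tee /sys/fs/cgroup/memory/ddtest/memory.limit_in_bytes
$ echo $$ | sudo tee /sys/fs/cgroup/memory/ddtest/tasks

# dd (and anything else started from this shell) is now confined to the cap,
# so it swaps or gets OOM-killed inside the group instead of starving the whole box
$ dd if=/dev/zero of=~/dumpfile bs=1G count=4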

These are, however, not used on desktop systems: a desktop is usually utilized by one person at a time and has no need for such limits, so it's rather misguided to even complain about this -- these are features aimed at enterprise servers, and they require some tuning for your specific needs.

EDIT: Some reading for those who are interested:
http://en.wikipedia.org/wiki/Cgroups
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=...
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=...

Edited 2012-10-27 21:56 UTC

Reply Parent Score: 6

RE[2]: lie-nux at it again.
by Alfman on Sat 27th Oct 2012 18:30 in reply to "RE: lie-nux at it again."
Alfman Member since:
2011-01-28

Gullible Jones,

The OP's clearly trolling, but you post an interesting question.

"$ dd if=/dev/zero of=~/dumpfile bs=4G count=1"

I don't get your result; it says "invalid number" for any value over 2G, probably because dd is using a 32-bit signed int to represent the size (on a 32-bit system).

"Mind you, Windows is just as bad about this - it just doesn't have tools like dd preinstalled that can easily crash your computer."


My own opinion is that this is a case of garbage in, garbage out. dd is a powerful tool and was not designed to second-guess what the user wanted to do. You've asked it to allocate a huge 4 GB buffer, fill that buffer with data from one file, and then write it out to another. If it has enough RAM (including swap?) to do that, it *will* execute your request as commanded. If it does not have enough, it will fail, just as expected. It's not particularly efficient, but it is doing exactly what you asked it to do. Windows behaves the exact same way, which is the correct way.


You could use smaller buffers, or use the truncate command to create sparse files. Maybe we could argue that GNU tools are too complicated for normal people to use, but let's not forget that the Unix command line is the domain of power users; most of us don't really want our commands to be dumbed down.
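For instance, either of these gets you a 4 GB file of zeroes without ever holding a multi-gigabyte buffer in RAM (same file name as the earlier example):

$ dd if=/dev/zero of=~/dumpfile bs=1M count=4096    # 4 GB written in 1 MB chunks
$ truncate -s 4G ~/dumpfile                         # or a sparse file, allocated lazily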



"(If you don't have swap space, the command will fail because you don't have enough memory. But it's not safe to run without swap space... right?)"

I don't believe in swap ;)
Look at it this way: if a system with 2 GB RAM + 2 GB swap is good enough, then a system with 4 GB RAM + 0 swap should also be good enough. I get that swap space is so cheap that one might as well use it "just in case" or to extend the life of an older system, but personally I prefer to upgrade the RAM rather than rely on swap.

Edited 2012-10-27 18:38 UTC

Reply Parent Score: 3

Gullible Jones Member since:
2006-05-23

I realize the above is correct behavior... What bothers me is that (by default, anyway) it can be used by a limited user to mount an effective denial-of-service attack. Stalling or crashing a multiuser system should IMO (ideally) be something that root, and only root, can do. ;)
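Something as simple as a per-user address-space limit would blunt that kind of thing, e.g. (values made up, and the limits.conf syntax is from memory, so check it before relying on it):

# cap this shell and its children at ~1 GB of virtual memory
$ ulimit -v 1048576

# or enforce it for a whole group of users via /etc/security/limits.conf:
# @users  hard  as  1048576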

OTOH, the presence of tools like dd is why I much prefer Linux to Windows. Experienced users shouldn't have to jump through hoops to do simple things.

Edit: re swap, I wish there were a way of hibernating without it. In my experience it is not very helpful, even on low-memory systems.

Edited 2012-10-27 18:44 UTC

Reply Parent Score: 3

RE[2]: lie-nux at it again.
by Laurence on Sat 27th Oct 2012 20:12 in reply to "RE: lie-nux at it again."
Laurence Member since:
2007-03-26

"There's a little truth to this. Try running

$ dd if=/dev/zero of=~/dumpfile bs=4G count=1

on a system with 2 GB of RAM and any amount of swap space; the OS will hang for a long, long time.

(If you don't have swap space, the command will fail because you don't have enough memory. But it's not safe to run without swap space... right?)

Mind you, Windows is just as bad about this - it just doesn't have tools like dd preinstalled that can easily crash your computer. ;)"


If Linux runs out of RAM, the requesting application is killed and an OOM (out-of-memory) error is raised in the event logs.

Sadly, this is something I've had to deal with a few times, when one idiot web developer decided not to do any input sanitising, which effectively ended up with us getting DoS attacked while legitimate users were making innocent page requests. <_<

Reply Parent Score: 3

Gullible Jones Member since:
2006-05-23

"If Linux runs out of RAM, the requesting application is killed and an OOM (out-of-memory) error is raised in the event logs."

Not quite true; that's what Linux should do. ;) What Linux usually does (unless vm.oom_kill_allocating_task is set to 1) is attempt to kill programs that look like memory hogs, using some kind of heuristic.

In my experience, that heuristic knocks out the offending program about half the time... The other half the time, it knocks out X.
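For anyone who wants to experiment, these are the knobs involved (values only illustrative, and the X server binary may be named differently on your system):

# kill the task that triggered the allocation instead of using the badness heuristic
$ sudo sysctl -w vm.oom_kill_allocating_task=1

# or make a specific process, e.g. the X server, off-limits to the OOM killer
$ echo -1000 | sudo tee /proc/$(pidof Xorg)/oom_score_adj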

Reply Parent Score: 2