Linked by cloud on Sat 27th Oct 2012 01:05 UTC
A new version of the real-time Linux scheduler called SCHED_DEADLINE has been posted to the Linux Kernel Mailing List. For those who missed previous submissions: it is a deadline-based CPU scheduler for the Linux kernel with bandwidth isolation (resource reservation) capabilities, and it supports global/clustered multiprocessor scheduling through dynamic task migrations. This new version takes earlier comments and suggestions into account and is rebased against the latest mainline kernel. A video about SCHED_DEADLINE is also available on YouTube.
Thread beginning with comment 540252
RE[2]: lie-nux at it again.
by No it isnt on Sat 27th Oct 2012 16:37 UTC in reply to "RE: lie-nux at it again."
No it isnt
Member since:
2005-11-14

You sure? I get

dd: memory exhausted by input buffer of size 4294967296 bytes (4,0 GiB)

With bs lowered to 1G, dd completes without much noticeable slowdown.
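
For reference, the exact command isn't quoted in the thread, but an invocation of roughly this shape reproduces that error, since GNU dd allocates the whole block size as a single input buffer; the if/of arguments here are assumed, only bs=4G matters:

dd if=/dev/zero of=/dev/null bs=4G count=1

On a machine with less than 4 GiB available, that single 4294967296-byte allocation is what fails or thrashes.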

Reply Parent Score: 3

Gullible Jones Member since:
2006-05-23

You're right, my mistake. For the Bad Things to happen, bs has to be set to something between physical RAM and total (physical + virtual) memory.

That said, I have never seen large writes fail to produce a noticeable slowdown. Not on an HDD anyway; I'm not sure about SSDs. I suspect that slowdowns during big writes are unavoidable on normal-spec desktops.
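
Those two boundary values are easy to read off with free; a quick illustrative check (byte counts make the comparison against bs direct):

free -b
# The "Mem:" total is physical RAM; adding the "Swap:" total gives physical + virtual.
# A bs between those two numbers can still be allocated, but only by swapping heavily,
# which is exactly when the Bad Things start.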

Reply Parent Score: 2

RE[3]: lie-nux at it again.
by WereCatf on Sat 27th Oct 2012 21:49 UTC in reply to "RE[2]: lie-nux at it again."
WereCatf Member since:
2006-02-15

You sure? I get

dd: memory exhausted by input buffer of size 4294967296 bytes (4,0 GiB)

With bs lowered to 1G, dd completes without much noticeable slowdown.


Well, that is actually the expected behaviour on an average desktop-oriented distro. Of course allocating 4 gigabytes of contiguous memory on a system that does not have that much is going to slow down or fail; you can perfectly well try that on Windows or OS X and get exactly the same thing.

Now, before you go ahead and try to say this is a fault in Linux, I have to enlighten you that it's actually a perfectly solvable problem. Forced preemption enabled in the kernel, a proper I/O scheduler, and limiting either I/O or memory usage per process or per user will solve it in a nice, clean way, without breaking anything in userland, and will provide a functional, responsive system even with such a dd going in the background. If you're interested, peruse the kernel documentation or Google around; there's plenty of documentation on exactly this topic.
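
A minimal sketch of two of those knobs, assuming a 2012-era cgroups v1 hierarchy mounted under /sys/fs/cgroup and a disk at sda; the group name and the 512 MiB cap are arbitrary illustrations, not values from the thread:

# Pick an I/O scheduler on the disk the writes go to
echo deadline > /sys/block/sda/queue/scheduler

# Create a memory cgroup, cap it, and move the current shell into it
mkdir /sys/fs/cgroup/memory/ddcap
echo $((512*1024*1024)) > /sys/fs/cgroup/memory/ddcap/memory.limit_in_bytes
echo $$ > /sys/fs/cgroup/memory/ddcap/tasks

# A dd started from this shell now works within a 512 MiB budget:
dd if=/dev/zero of=bigfile bs=64M count=64

Because the v1 memory controller also accounts page-cache pages to the group, the cap limits how much dirty data the dd can pile up, which is what keeps the rest of the system responsive during the write.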

These are, however, not enabled by default on desktop systems, because a desktop is usually used by one person at a time and has no need for them; it's therefore rather misguided to even complain about this. These are features aimed at enterprise servers, and they require some tuning for your specific needs.

EDIT: Some reading for those who are interested:
http://en.wikipedia.org/wiki/Cgroups
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=...
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=...
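
For the I/O-limiting side mentioned above, cgroups v1 can also throttle disk bandwidth per group; the device numbers and rate below are illustrative (8:0 is typically /dev/sda):

# Throttle writes to /dev/sda to 10 MB/s for processes in this group
mkdir /sys/fs/cgroup/blkio/slowdd
echo "8:0 10485760" > /sys/fs/cgroup/blkio/slowdd/blkio.throttle.write_bps_device
echo $$ > /sys/fs/cgroup/blkio/slowdd/tasks
# Caveat: v1 blkio throttling is reliable for direct I/O (e.g. dd with oflag=direct);
# buffered writeback largely bypasses it.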

Edited 2012-10-27 21:56 UTC

Reply Parent Score: 6