Linked by cloud on Sat 27th Oct 2012 01:05 UTC
Linux A new version of the real-time Linux scheduler called SCHED_DEADLINE has been released on the Linux Kernel Mailing List. For people who missed previous submissions, it consists of a new deadline-based CPU scheduler for the Linux kernel with bandwidth isolation (resource reservation) capabilities. It supports global/clustered multiprocessor scheduling through dynamic task migrations. This new version takes into account previous comments/suggestions and is aligned to the latest mainline kernel. A video about SCHED_DEADLINE is also available on YouTube.
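For the curious, here is a minimal sketch of how a task might ask for a deadline reservation under this policy. It assumes the sched_setattr() interface and the x86_64 syscall number (314) that this work eventually settled on in mainline; the struct layout and numbers may differ between patch submissions, and deadline scheduling requires root.

#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE 6                      /* deadline policy number */
#endif

struct sched_attr {                           /* layout per sched_setattr() */
    uint32_t size;                            /* sizeof(struct sched_attr) */
    uint32_t sched_policy;
    uint64_t sched_flags;
    int32_t  sched_nice;                      /* unused for deadline tasks */
    uint32_t sched_priority;                  /* unused for deadline tasks */
    uint64_t sched_runtime;                   /* CPU time per period, in ns */
    uint64_t sched_deadline;                  /* relative deadline, in ns */
    uint64_t sched_period;                    /* reservation period, in ns */
};

int main(void)
{
    struct sched_attr attr = {
        .size           = sizeof(attr),
        .sched_policy   = SCHED_DEADLINE,
        .sched_runtime  =  10 * 1000 * 1000,  /* 10 ms of CPU time...      */
        .sched_deadline = 100 * 1000 * 1000,  /* ...guaranteed within each */
        .sched_period   = 100 * 1000 * 1000,  /* ...100 ms period          */
    };

    /* 314 is __NR_sched_setattr on x86_64 in mainline; pid 0 = this task */
    if (syscall(314, 0, &attr, 0) < 0) {
        perror("sched_setattr");
        return 1;
    }
    /* The task now runs under deadline scheduling: up to 10 ms of CPU in
     * every 100 ms window, isolated from other tasks' overruns. */
    for (;;)
        pause();
}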
Thread beginning with comment 540210
To read all comments associated with this story, please click here.
lie-nux at it again.
by sameer on Sat 27th Oct 2012 08:09 UTC
sameer Member since:
2012-10-27

Hello all,

It seems to me that this sooper-dooper "SCHED_DEADLINE" scheduler is simply partition scheduling, which was made mainstream by the best control program (OS) around at present, QNX 6.

Even though I use Linux Mint running off a USB drive... Linux is simply a badly written program with big claims. It is too complicated (libraries, many commands, slow, crashy...).

lie-nux...

Reply Score: -8

RE: lie-nux at it again.
by NuxRo on Sat 27th Oct 2012 10:01 in reply to "lie-nux at it again."
NuxRo Member since:
2010-09-25

And because it's such a useless, big lie, it took over most of the computing world. You're a funny guy.

Reply Parent Score: 6

RE: lie-nux at it again.
by MOS6510 on Sat 27th Oct 2012 14:07 in reply to "lie-nux at it again."
MOS6510 Member since:
2011-05-12

Perhaps Linux is badly written and complicated, and certainly its coders aren't the most pleasant people, but it's hard to associate Linux with slowness (except perhaps when it's run from a USB flash drive) or crashing.

Linux (the kernel) and GNU wonderland are, in my experience, just fine. The GUI stuff is often buggy and crashes, but it won't take down the system itself. A Linux server will go on and on for months and years.

If you experience crashes it's probably defective hardware or some rare buggy driver.

Reply Parent Score: 1

RE[2]: lie-nux at it again.
by ParadoxUncreated on Sat 27th Oct 2012 21:02 in reply to "RE: lie-nux at it again."
ParadoxUncreated Member since:
2009-12-05

You've been on OSNews too long. "Perhaps Linux is..." Actually, if you have had all three mainstream OSes installed, say Windows XP (which can be made to run quite smoothly), OS X (actually slow, sometimes even taking 5 s for keyboard response here) and, for instance, Ubuntu (many would call it a bloated Linux, but still), you would actually prefer Ubuntu. So how can Linux be badly written? Indeed it seems to be the best of them all.

If you want to talk about badly written, think about the product MS sells. That is all it is: no enthusiasm, just a dollar-monkey, a product sold, just as CP/M once was. Also junk. I think most enthusiasts agree that Windows is POORLY written. And OS X shows that even original Unix code can turn into a Windows-like annoyance.

Linux, though, and Ubuntu: lots of choice, a modular mindset, and the best of code. If you want to run a window manager from the time before over-obfuscated high-level concepts, try IceWM with a good theme. And you don't have to worry about the whole desktop soap opera either: "no, the desktop is dead", "no, the desktop is alive", "no, Linus killed the desktop with evil mental rays". And here I was, running IceWM and not noticing a thing. And Wayland is coming in a big way.

"Poorly written" - no. And it has a lot of innovation, and seems to be incorporating more and more of realtime aswell. Have you ever played an openGL game with ACCURATE fps? It is just so much more enjoyable. Not to speak of how lowlatency/lowjitter improves the responsiveness of the desktop, making activity already on the next frame, after input.

No "lie", no evil coders. But as many places linux has been associated with several things. And for instance something many people "know" is that Gentoo is for "performance". However it`s mostly a myth, and in their forums you will get some really bizarre answers from time to time.

What I suggests is really just trying out the most popular distributions like Ubuntu/Suse/etc.

If you`re into realtime, or low-jitter, you might want to build yourself a PC just for that purpose also.

I am building one, and it currently looks like this: http://paradoxuncreated.com/Blog/wordpress/?p=4176

It's gonna be great.

Peace Be With You.

Reply Parent Score: 0

RE: lie-nux at it again.
by Gullible Jones on Sat 27th Oct 2012 15:31 in reply to "lie-nux at it again."
Gullible Jones Member since:
2006-05-23

There's a little truth to this. Try running

$ dd if=/dev/zero of=~/dumpfile bs=4G count=1

on a system with 2 GB of RAM and any amount of swap space; the OS will hang for a long, long time.

(If you don't have swap space, the command will fail because you don't have enough memory. But it's not safe to run without swap space... right?)

Mind you, Windows is just as bad about this - it just doesn't have tools like dd preinstalled that can easily crash your computer. ;)

Reply Parent Score: 2

RE[2]: lie-nux at it again.
by No it isnt on Sat 27th Oct 2012 16:37 in reply to "RE: lie-nux at it again."
No it isnt Member since:
2005-11-14

You sure? I get

dd: memory exhausted by input buffer of size 4294967296 bytes (4,0 GiB)

Lowering it to bs=1G, dd completes without much noticeable slowdown.

Reply Parent Score: 3

RE[2]: lie-nux at it again.
by Alfman on Sat 27th Oct 2012 18:30 in reply to "RE: lie-nux at it again."
Alfman Member since:
2011-01-28

Gullible Jones,

The OP's clearly trolling, but you raise an interesting question.

"$ dd if=/dev/zero of=~/dumpfile bs=4G count=1"

I don't get your result; it says "invalid number" for any value over 2G, probably because dd is using a 32-bit signed int to represent the size (on a 32-bit system).
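Out of curiosity, a minimal sketch of the overflow being suspected here (a hypothetical 32-bit size field, not dd's actual source):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int64_t requested = 4LL << 30;           /* "4G" = 4294967296 bytes */
    int32_t stored    = (int32_t)requested;  /* squeezed into a 32-bit signed size */
    printf("requested = %lld\n", (long long)requested);
    printf("stored    = %d\n", stored);      /* prints 0: the value wrapped */
    return 0;
}

Anything above 2147483647 (2G - 1) no longer fits in a 32-bit signed int, which would match the cutoff you're seeing.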

"Mind you, Windows is just as bad about this - it just doesn't have tools like dd preinstalled that can easily crash your computer."


My own opinion is that this is a case of garbage in, garbage out. dd is a powerful tool and was not designed to second-guess what the user wanted to do. You've asked it to allocate a huge 4 GB buffer, fill that buffer with data from one file, and then write it out to another. If it has enough RAM (including swap?) to do that, it *will* execute your request as commanded. If it does not have enough RAM, it will fail, just as expected. It's not particularly efficient, but it is doing exactly what you asked it to do. Windows behaves the exact same way, which is the correct way.
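To illustrate the "exactly what you asked for" point, here's a rough sketch of dd's core behaviour for bs=4G count=1, assuming a 64-bit system (real dd also handles partial reads, error reporting, and so on):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    size_t bs = (size_t)4 << 30;             /* bs=4G: one four-gigabyte buffer */
    char *buf = malloc(bs);                  /* succeeds or fails as a whole */
    if (!buf) {
        fprintf(stderr, "dd: memory exhausted\n");
        return 1;
    }
    int in  = open("/dev/zero", O_RDONLY);
    int out = open("dumpfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    ssize_t n = read(in, buf, bs);           /* count=1: one read...        */
    if (n > 0)                               /* (Linux caps a single read() */
        write(out, buf, (size_t)n);          /*  near 2 GiB; real dd loops) */
    free(buf);
    close(in);
    close(out);
    return 0;
}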


You could use smaller buffers, or use a truncate command to create sparse files (see the sketch below). Maybe we could argue that GNU tools are too complicated for normal people to use, but let's not forget that the Unix command line is the domain of power users; most of us don't really want our commands dumbed down.
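For instance, two gentler ways to end up with the same 4 GiB file, sketched in C (the file names are just examples):

#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static char buf[1 << 20];                    /* a 1 MiB buffer instead of 4 GiB */

int main(void)
{
    /* Option 1: same zeros, written 1 MiB at a time (like bs=1M count=4096) */
    int fd = open("dumpfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    memset(buf, 0, sizeof buf);
    for (int i = 0; i < 4096; i++)
        write(fd, buf, sizeof buf);
    close(fd);

    /* Option 2: a sparse file, like "truncate -s 4G sparsefile" - set the
     * size, write no data at all; reads come back as zeros */
    fd = open("sparsefile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    ftruncate(fd, 4LL << 30);
    close(fd);
    return 0;
}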



"(If you don't have swap space, the command will fail because you don't have enough memory. But it's not safe to run without swap space... right?)"

I don't believe in swap ;)
Look at it this way: if a system with 2 GB RAM + 2 GB swap is good enough, then a system with 4 GB RAM + 0 swap should also be good enough. I get that swap space is so cheap that one might as well use it "just in case" or to extend the life of an older system, but personally I'd rather upgrade the RAM than rely on swap.

Edited 2012-10-27 18:38 UTC

Reply Parent Score: 3

RE[2]: lie-nux at it again.
by Laurence on Sat 27th Oct 2012 20:12 in reply to "RE: lie-nux at it again."
Laurence Member since:
2007-03-26

"There's a little truth to this. Try running

$ dd if=/dev/zero of=~/dumpfile bs=4G count=1

on a system with 2 GB of RAM and any amount of swap space; the OS will hang for a long, long time.

(If you don't have swap space, the command will fail because you don't have enough memory. But it's not safe to run without swap space... right?)

Mind you, Windows is just as bad about this - it just doesn't have tools like dd preinstalled that can easily crash your computer. ;)"


If Linux runs out of RAM, the kernel's OOM (out of memory) killer kills the offending process and logs the event in the kernel log.

Sadly this is something I've had to deal with a few times, like when one idiot web developer decided not to do any input sanitising, which effectively got us DoS'd while legitimate users were making innocent page requests. <_<
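For what it's worth, a minimal sketch of that failure mode, assuming the kernel's default overcommit settings (only run this in a throwaway VM):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t chunk = 64 << 20;                 /* grab 64 MiB per step */
    size_t total = 0;
    for (;;) {
        char *p = malloc(chunk);
        if (!p)                              /* strict overcommit: malloc fails */
            break;
        memset(p, 1, chunk);                 /* touch the pages so they're real */
        total += chunk;
        fprintf(stderr, "allocated %zu MiB\n", total >> 20);
    }
    /* With default overcommit the OOM killer usually SIGKILLs the process
     * before malloc() ever returns NULL, and logs the kill in dmesg. */
    return 0;
}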

Reply Parent Score: 3

RE: lie-nux at it again.
by KrustyVader on Sun 28th Oct 2012 17:48 in reply to "lie-nux at it again."
KrustyVader Member since:
2006-10-28

Linux has lots of problems. GNOME and KDE create, from my point of view, a madness of library dependencies; sometimes installing a simple program requires 30 libraries.

Thankfully there are lots of options in Linux, and for some reason I keep choosing Slackware as a base installation (but without KDE).

And as a real-time OS, QNX is (sadly) no longer a valid option. It belongs to RIM now, and you can guess what will happen.

Reply Parent Score: 1

RE[2]: lie-nux at it again.
by Gullible Jones on Sun 28th Oct 2012 18:45 in reply to "RE: lie-nux at it again."
Gullible Jones Member since:
2006-05-23

Linux desktops have (IMHO) taken a turn for the worse lately, but don't mistake that for what's happening under the hood. Newer kernels have some really cool features (and IMO perform better on ancient hardware than the old 2.6 series).

(And fortunately there are still Mate and Xfce on the desktop front. Also Trinity, though that doesn't seem to be as functional right now.)

Reply Parent Score: 2