Linked by cloud on Sat 27th Oct 2012 01:05 UTC
Linux A new version of the real-time Linux scheduler called SCHED_DEADLINE has been released on the Linux Kernel Mailing List. For people who missed previous submissions, it consists of a new deadline-based CPU scheduler for the Linux kernel with bandwidth isolation (resource reservation) capabilities. It supports global/clustered multiprocessor scheduling through dynamic task migrations. This new version takes into account previous comments/suggestions and is aligned to the latest mainline kernel. A video about SCHED_DEADLINE is also available on YouTube.
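For readers who want a feel for the API: below is a minimal sketch of how a task could request a deadline reservation of 10 ms of CPU every 100 ms. It assumes a kernel carrying the SCHED_DEADLINE patchset and its sched_setattr() syscall; the syscall number and struct layout shown follow the x86_64 interface and may differ on patched kernels of other versions or architectures.

/* Minimal SCHED_DEADLINE sketch: reserve 10 ms of CPU every 100 ms.
 * Assumes a SCHED_DEADLINE-patched kernel; run as root. */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE 6
#endif
#ifndef __NR_sched_setattr
#define __NR_sched_setattr 314          /* x86_64; architecture-specific */
#endif

struct sched_attr {
    uint32_t size;                      /* sizeof(struct sched_attr) */
    uint32_t sched_policy;
    uint64_t sched_flags;
    int32_t  sched_nice;                /* for SCHED_NORMAL/BATCH */
    uint32_t sched_priority;            /* for SCHED_FIFO/RR */
    uint64_t sched_runtime;             /* ns of CPU granted per period */
    uint64_t sched_deadline;            /* ns, relative deadline */
    uint64_t sched_period;              /* ns, reservation period */
};

int main(void)
{
    struct sched_attr attr = {
        .size           = sizeof(attr),
        .sched_policy   = SCHED_DEADLINE,
        .sched_runtime  =  10 * 1000 * 1000,   /*  10 ms */
        .sched_deadline = 100 * 1000 * 1000,   /* 100 ms */
        .sched_period   = 100 * 1000 * 1000,   /* 100 ms */
    };

    if (syscall(__NR_sched_setattr, 0, &attr, 0) != 0) {
        perror("sched_setattr");        /* unpatched kernel or no privilege */
        return 1;
    }

    /* From here on the scheduler enforces the reservation: the task gets
     * its 10 ms budget each period and is throttled beyond it, which is
     * the "bandwidth isolation" the announcement refers to. */
    for (;;)
        ;                               /* placeholder workload */
}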
Demo? Almost.
by ericxjo on Sat 27th Oct 2012 01:50 UTC
ericxjo
Member since:
2012-02-10

From the YouTube description: Unfortunately, however, the movie has been filmed in sequence, and then assembled to let people understand that all these activities were concurrent (we were not able to make three videos simultaneously).

Well, that undermines the demo...

Reply Score: 2

RE: Demo? Almost.
by Alfman on Sat 27th Oct 2012 03:56 UTC in reply to "Demo? Almost."
Alfman Member since:
2011-01-28

ericxjo,

Well, it probably boiled down to something as simple as them not having three cameras. I don't have any trouble believing it could do all three at the same time. Although they should have panned from one to the next.

Even a non-realtime kernel should have been able to handle those three tasks simultaneously without any trouble at all on an old 486. I'd be more impressed if the tasks demanded much harder real-time constraints. And then execute them while compiling Linux and browsing with Firefox!

Reply Score: 5

RE[2]: Demo? Almost.
by cloud on Mon 29th Oct 2012 16:10 UTC in reply to "RE: Demo? Almost."
cloud Member since:
2009-10-19

Hi all.

I've been the supervisor of the whole project since the first submission (which was called SCHED_EDF). If you check, I have always posted news on OSNews and Slashdot whenever we released a new version.

As I have written in the description of the project on YouTube, the project was realized as part of a 3M-euro project called ACTORS, financed by the European Commission. When you get funded by the EU, you have to pass annual reviews with commission members. In particular, the movie was filmed near the final review meeting, which 3 EU reviewers attended. These 3 members (selected by the EU) had the chance to see the full system working. If you can't trust the movie, trust at least the European Commission.

Unfortunately, we started the ball-and-beam first, and then we started the robotic arm. For this reason, in the ball-and-beam footage you can notice the arm stopped in the background: we simply had not started it yet...

The project started in 2008, and we have had several submissions. Over these years the code has been reviewed by the Linux kernel community several times, and it is getting ready for inclusion in mainline.

Reply Score: 2

RE[3]: Demo? Almost.
by capi_x on Wed 31st Oct 2012 15:37 UTC in reply to "RE[2]: Demo? Almost."
capi_x Member since:
2012-08-29
v lie-nux at it again.
by sameer on Sat 27th Oct 2012 08:09 UTC
RE: lie-nux at it again.
by NuxRo on Sat 27th Oct 2012 10:01 UTC in reply to "lie-nux at it again."
NuxRo Member since:
2010-09-25

And it's such a useless, big lie that it took over most of the computing world. You're a funny guy.

Reply Score: 6

v RE[2]: lie-nux at it again.
by sameer on Sat 27th Oct 2012 10:56 UTC in reply to "RE: lie-nux at it again."
RE[3]: lie-nux at it again.
by WereCatf on Sat 27th Oct 2012 12:09 UTC in reply to "RE[2]: lie-nux at it again."
WereCatf Member since:
2006-02-15

For some reason I do not believe a thing you're saying.

even though i use mint linux executing off a usb drive... linux is simply a badly written program with big claims. it is too complicated ( libraries, many commands, slow, crashy... ).


A badly written program with many commands and libraries? If you were an OS-developer you'd know the difference between a kernel and userland.

Reply Score: 7

v RE[4]: lie-nux at it again.
by sameer on Sat 27th Oct 2012 12:23 UTC in reply to "RE[3]: lie-nux at it again."
RE[5]: lie-nux at it again.
by WereCatf on Sat 27th Oct 2012 12:30 UTC in reply to "RE[4]: lie-nux at it again."
WereCatf Member since:
2006-02-15

you have answered your own doubts.


You're not making any sense here.

the good os is simple in architecture


No, a good OS is one that fits its intended purpose. There is no single definition of a "good os."

by architecture linux is not microkernel.


And? No one claimed it was.

so this talk about "user-land" is shouting out that in linux there is not natural separation between kernel and "what ever one might call it".


Oh, really? Why are there so many different operating systems which use the Linux kernel but an entirely different userland? Oh, that's right: you have no idea what you're talking about.

don't forget the aspect of these complicated "dependencies" when one has to "install" some program.

what happened to the unix method of copy some program to a directory and just use it ??


Ahahaha. Fail. Next time learn what you're talking about.

Edited 2012-10-27 12:32 UTC

Reply Score: 4

v RE[6]: lie-nux at it again.
by sameer on Sat 27th Oct 2012 12:38 UTC in reply to "RE[5]: lie-nux at it again."
RE[7]: lie-nux at it again.
by WereCatf on Sat 27th Oct 2012 12:44 UTC in reply to "RE[6]: lie-nux at it again."
WereCatf Member since:
2006-02-15

"Oh, really? Why are there so many different operating systems which use Linux-kernel"

firstly such programs cannot be anything other than being called "distros".


Android is a good example of the Linux kernel with a non-GNU userland. There are also plenty of different kinds of embedded systems that use the Linux kernel without a GNU userland, e.g. HDTVs and several Blu-ray players.

when you said fail... please elaborate.


Package management has nothing to do with the application itself. You CAN just copy an application and its dependencies to another directory and run it from there just fine. You've clearly never heard of shared libraries and the like; you just expect all applications to be statically compiled, and that says enough about your level of technical ability.

Reply Score: 3

RE[6]: lie-nux at it again.
by zima on Wed 31st Oct 2012 01:50 UTC in reply to "RE[5]: lie-nux at it again."
zima Member since:
2005-07-06

>what happened to the unix method of copy some program to a directory and just use it ??
Ahahaha. Fail. Next time learn what you're talking about.

My guess: he read somewhere about OS X's all-in-one, drag & drop "install" bundles ...and connected that with the official OS X UNIX certification.

Reply Score: 2

RE[5]: lie-nux at it again.
by Laurence on Sat 27th Oct 2012 19:43 UTC in reply to "RE[4]: lie-nux at it again."
Laurence Member since:
2007-03-26


what happened to the unix method of copy some program to a directory and just use it ??

That method never existed. Even in the old days programs had to be compiled for each Unix variant and architecture, and these days UNIX dependencies are almost as bad as Linux's.

Even on Windows, so-called "stand-alone" applications have prerequisites, be that a specific version of the .NET framework, the latest DirectX runtime or even just Win32 libraries.

i have 512 megabyte in my desktop ( mint linux ). i have to restart most days. it just hangs.

Have you never considered that your problem might be running one of the most resource-heavy distributions of Linux on a 10-year-old PC?

You'd be better off with Puppy than Mint.

i know about computing... i built the first control program ( os ) of south asia, in 2002. it was based on microkernel architecture but ran in x86 "real mode". just a demonstrator. had message passing and unix-like "signals".

Please don't insult our intelligence with such blatant lies. If you want to appear to understand this subject, you're much better off actually learning it (and keeping quiet until you do) rather than pretending to, then covering your tracks with fictitious boasts because everyone has voted you down for posting nonsense.

Reply Score: 6

RE[6]: lie-nux at it again.
by Soulbender on Sun 28th Oct 2012 02:28 UTC in reply to "RE[5]: lie-nux at it again."
Soulbender Member since:
2005-08-18

Please don't insult our intelligence with such blatant lies.


Scroll to the end:
http://www.openqnx.com/phpbbforum/viewtopic.php?t=8261

Still, it's the one and only reference to this OS I can find.

Reply Score: 4

RE[7]: lie-nux at it again.
by WereCatf on Sun 28th Oct 2012 02:54 UTC in reply to "RE[6]: lie-nux at it again."
WereCatf Member since:
2006-02-15

"Please don't insult our intelligence with such blatant lies.


Scroll to the end:
http://www.openqnx.com/phpbbforum/viewtopic.php?t=8261

Still, it's the one and only reference to this OS I can find.
"

That's the one I googled, too. He says he started working on it in 2002 and by 2006 he still didn't have any sort of GUI or task-switching, i.e. it was merely a single-process OS. And there is no actual indication that he did any of the coding himself -- with his clear misunderstanding of basic concepts I feel it's highly likely he just borrowed code from others.

An interesting fellow.

Reply Score: 3

RE[5]: lie-nux at it again.
by Soulbender on Sun 28th Oct 2012 02:25 UTC in reply to "RE[4]: lie-nux at it again."
Soulbender Member since:
2005-08-18

the good os is simple in architecture, which makes it reliable and easy to add to ( modules ) or remove.


You're confusing your own opinion with facts.

so this talk about "user-land" is shouting out that in linux there is not natural separation between kernel and "what ever one might call it"


Say what? The fact that one is kernel space and the other is user land says exactly that: that there is a separation.

i have 512 megabyte in my desktop ( mint linux ). i have to restart most days. it just hangs.


It's not 2002 anymore; use the right tool for the job. Mint is obviously not the right choice for such a resource-starved system.

i only use mint linux because the windows xp machine was not allowing me to access the win-xp boot partition.


How is this relevant? What does it even mean?

don't forget the aspect of these complicated "dependencies" when one has to "install" some program.


It's 2012; we have reliable tools to handle these things now.

what happened to the unix method of copy some program to a directory and just use it ??


I think you're confusing Unix and DOS.

Reply Score: 4

RE[6]: lie-nux at it again.
by Hiev on Sun 28th Oct 2012 04:13 UTC in reply to "RE[5]: lie-nux at it again."
Hiev Member since:
2005-09-27

I think you're confusing Unix and DOS.

Well, to be honest, that's the way Unix works: just copy the files to a directory and it works.

Reply Score: 2

RE[7]: lie-nux at it again.
by Soulbender on Sun 28th Oct 2012 05:20 UTC in reply to "RE[6]: lie-nux at it again."
Soulbender Member since:
2005-08-18

From the context I think he means you don't need to worry about libraries, dependencies and such.

Reply Score: 2

RE: lie-nux at it again.
by MOS6510 on Sat 27th Oct 2012 14:07 UTC in reply to "lie-nux at it again."
MOS6510 Member since:
2011-05-12

Perhaps Linux is badly written and complicated, and certainly its coders aren't the most pleasant people, but it's hard to associate Linux with slowness (except perhaps when run from a USB flash drive) or crashing.

Linux (the kernel) and the GNU userland are, in my experience, just fine. The GUI stuff is often buggy and crashes, but it won't take down the system itself. A Linux server will go on and on for months and years.

If you experience crashes it's probably defective hardware or some rare buggy driver.

Reply Score: 1

RE[2]: lie-nux at it again.
by ParadoxUncreated on Sat 27th Oct 2012 21:02 UTC in reply to "RE: lie-nux at it again."
ParadoxUncreated Member since:
2009-12-05

You've been on OSNews too long. "Perhaps linux is.." Actually, if you have had all three mainstream OSs installed, say Windows XP (which can be made to run quite smooth), OS X (actually slow, sometimes even taking 5 s for keyboard response here) and for instance Ubuntu (many would call it a bloated Linux, but still), you would actually prefer Ubuntu. So how can Linux be badly written? Indeed it seems to be the better of them all.

If you want to talk about badly written, think about the product MS sells. That is all, no enthusiasm, just a dollar-monkey, a product sold, just as CP/M once was. Also junk. I think most enthusiasts agree that Windows is POORLY written. And OS X shows that even original Unix code can turn into a Windows-like annoyance.

Linux though, and Ubuntu: lots of choice, a modular mindset, and the best of code. If you want to run a window manager from before the era of over-obfuscated high-level concepts, try IceWM with a good theme. And you don't have to worry about all the desktop soap opera either: "no, the desktop is dead", "no, the desktop is alive", "no, Linus killed the desktop with evil mental rays". And here I was, running IceWM and not noticing a thing. And Wayland is coming in a big way.

"Poorly written" - no. And it has a lot of innovation, and seems to be incorporating more and more of realtime aswell. Have you ever played an openGL game with ACCURATE fps? It is just so much more enjoyable. Not to speak of how lowlatency/lowjitter improves the responsiveness of the desktop, making activity already on the next frame, after input.

No "lie", no evil coders. But as many places linux has been associated with several things. And for instance something many people "know" is that Gentoo is for "performance". However it`s mostly a myth, and in their forums you will get some really bizarre answers from time to time.

What I suggest is really just trying out the most popular distributions like Ubuntu/SUSE/etc.

If you're into real-time, or low jitter, you might want to build yourself a PC just for that purpose.

I am doing one, and it currently looks like this: http://paradoxuncreated.com/Blog/wordpress/?p=4176

It's gonna be great.

Peace Be With You.

Reply Score: 0

RE[3]: lie-nux at it again.
by MOS6510 on Sat 27th Oct 2012 21:20 UTC in reply to "RE[2]: lie-nux at it again."
MOS6510 Member since:
2011-05-12

I'm quite an able Linux user, I have to fiddle around with Windows XP/7 on a regular basis, and I personally use OS X.

Sadly I have reached a stage in my life where I don't have the time or motivation to check out Linux distributions, testdrive different GUIs or build my own PC. There was a time when I did this and I'm happy it's over.

Linux desktops are fast, even heavy ones like KDE, but my gripe is with the application software. IMO it's just not good, certainly when compared to Windows and OS X counterparts. There is some good software, but not much of it.

When I use Linux I prefer CLI only. That's fun to use and all the CLI commands and programs are very useful, powerful and just work.

Reply Score: 5

RE[4]: lie-nux at it again.
by zima on Thu 1st Nov 2012 22:19 UTC in reply to "RE[3]: lie-nux at it again."
zima Member since:
2005-07-06

Sadly I have reached a stage in my life where I don't have the time or motivation to check out Linux distributions, testdrive different GUIs or build my own PC. There was a time when I did this and I'm happy it's over.

??... ;) (emphasis mine)

the application software. IMO it's just not good, certainly when compared to Windows and OS X counterparts. There is some good software, but in few numbers.

For the usual stuff that a typical user likely needs, the software is fine I'd say. Browsers, Open/LibreOffice, BitTorrent clients, image viewers/organisers are perfectly OK; plus throw in some media player and IM - those last two tend to be somewhat better actually: multi-protocol communicators and plays-everything players seem to be more the rule on Linux than on Windows & OS X.

(sorry for a late reply, again ...I think I went to sleep when I had this reply window opened ;) )

Reply Score: 2

RE: lie-nux at it again.
by Gullible Jones on Sat 27th Oct 2012 15:31 UTC in reply to "lie-nux at it again."
Gullible Jones Member since:
2006-05-23

There's a little truth to this. Try running

$ dd if=/dev/zero of=~/dumpfile bs=4G count=1

on a system with 2 GB of RAM and any amount of swap space; the OS will hang for a long, long time.

(If you don't have swap space, the command will fail because you don't have enough memory. But it's not safe to run without swap space... right?)

Mind you, Windows is just as bad about this - it just doesn't have tools like dd preinstalled that can easily crash your computer. ;)

Reply Score: 2

RE[2]: lie-nux at it again.
by No it isnt on Sat 27th Oct 2012 16:37 UTC in reply to "RE: lie-nux at it again."
No it isnt Member since:
2005-11-14

You sure? I get

dd: memory exhausted by input buffer of size 4294967296 bytes (4,0 GiB)

Lowering to bs=1G, dd will complete without much noticeable slowdown.

Reply Score: 3

RE[3]: lie-nux at it again.
by Gullible Jones on Sat 27th Oct 2012 17:44 UTC in reply to "RE[2]: lie-nux at it again."
Gullible Jones Member since:
2006-05-23

You're right, my mistake. For the Bad Things to happen, bs has to be set to something between physical RAM and total (physical + virtual) memory.

That said, I have never seen large writes fail to produce a noticeable slowdown. Not on an HDD anyway, I'm not sure about SSDs. I suspect that slowdowns during big writes are unavoidable on normal-spec desktops.

Reply Score: 2

RE[3]: lie-nux at it again.
by WereCatf on Sat 27th Oct 2012 21:49 UTC in reply to "RE[2]: lie-nux at it again."
WereCatf Member since:
2006-02-15

You sure? I get

dd: memory exhausted by input buffer of size 4294967296 bytes (4,0 GiB)

Lowering to bs=1G, dd will complete without much noticeable slowdown.


Well, that is actually the expected behaviour on an average desktop-oriented distro. Of course allocating 4 gigabytes of contiguous memory on a system that does not have that much is going to slow down or fail; you can perfectly well try that on Windows and OS X and get exactly the same thing.

Now, before you go ahead and try to say this is a fault in Linux, I have to enlighten you that it's actually a perfectly solvable problem. Forced pre-emption enabled in the kernel, a proper I/O scheduler, and limiting either I/O or memory usage per process or per user will solve this in a nice, clean way, without breaking anything in userland, and provide a functional, responsive system even with such a dd going on in the background. If you're interested, peruse the kernel documentation or Google around; there's plenty of documentation on exactly this topic.

These are, however, not used on desktop systems, because desktop systems are usually utilized by one person at a time and have no need for such limits, so it's rather misguided to even complain about it -- these are features aimed at enterprise servers, and they require some tuning for your specific needs.

EDIT: Some reading for those who are interested:
http://en.wikipedia.org/wiki/Cgroups
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=...
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=...
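To make the per-process limiting concrete, here is a rough sketch using the memory cgroup from the links above. The mount point /sys/fs/cgroup/memory, the group name and the 512 MB figure are assumptions (they vary by distro), and it needs root:

/* Hedged sketch: cap a workload at 512 MB with the cgroup memory
 * controller (v1 interface), then exec the dd from earlier under it. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

static void write_str(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); exit(1); }
    fputs(val, f);
    fclose(f);
}

int main(void)
{
    char pid[32];

    mkdir("/sys/fs/cgroup/memory/capped", 0755);
    write_str("/sys/fs/cgroup/memory/capped/memory.limit_in_bytes",
              "536870912");                     /* 512 MB */
    snprintf(pid, sizeof(pid), "%d", getpid());
    write_str("/sys/fs/cgroup/memory/capped/tasks", pid);

    /* The group's limit now applies to dd: it can thrash itself,
     * but not the rest of the system. */
    execlp("dd", "dd", "if=/dev/zero", "of=dumpfile", "bs=1G", "count=4",
           (char *)NULL);
    perror("execlp");
    return 1;
}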

Edited 2012-10-27 21:56 UTC

Reply Score: 6

RE[2]: lie-nux at it again.
by Alfman on Sat 27th Oct 2012 18:30 UTC in reply to "RE: lie-nux at it again."
Alfman Member since:
2011-01-28

Gullible Jones,

The OP's clearly trolling, but you pose an interesting question.

"$ dd if=/dev/zero of=~/dumpfile bs=4G count=1"

I don't get your result; it says "invalid number" for any value over 2G, probably because it's using a 32-bit signed int to represent the size (on a 32-bit system).

"Mind you, Windows is just as bad about this - it just doesn't have tools like dd preinstalled that can easily crash your computer."


My own opinion is that this is a case of garbage in, garbage out. dd is a powerful tool and was not designed to second-guess what the user wants. You've asked it to allocate a huge 4GB buffer, fill that buffer with data from one file, and then write it out to another. If it has enough RAM (including swap?) to do that, it *will* execute your request as commanded. If it does not have enough RAM, it will fail, just as expected. It's not particularly efficient, but it is doing exactly what you asked it to do. Windows behaves the exact same way, which is the correct way.


You could use smaller buffers, or use a truncate command to create sparse files. Maybe we could argue that GNU tools are too complicated for normal people to use, but let's not forget that the unix command line is in the domain of power users; most of us don't really want our commands to be dumbed down.
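For the sparse-file route, a minimal sketch (the file name and size are just examples): this creates a 4 GiB file without allocating any buffer at all, which is what truncate -s 4G dumpfile does for you.

/* Create a 4 GiB sparse file: no 4 GB buffer, no thrashing. */
#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("dumpfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Extend the file to 4 GiB; unwritten blocks read back as zeros
     * and occupy no disk space until actually written. */
    if (ftruncate(fd, 4LL << 30) != 0) { perror("ftruncate"); return 1; }

    close(fd);
    return 0;
}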



"(If you don't have swap space, the command will fail because you don't have enough memory. But it's not safe to run without swap space... right?)"

I don't believe in swap ;)
Look at it this way: if a system with 2GB RAM + 2GB swap is good enough, then a system with 4GB RAM + 0 swap should also be good enough. I get that swap space is so cheap that one might as well use it "just in case" or to extend the life of an older system, but personally I prefer to upgrade the RAM rather than rely on swap.

Edited 2012-10-27 18:38 UTC

Reply Score: 3

RE[3]: lie-nux at it again.
by Gullible Jones on Sat 27th Oct 2012 18:43 UTC in reply to "RE[2]: lie-nux at it again."
Gullible Jones Member since:
2006-05-23

I realize the above is correct behavior... What bothers me is that (by default anyway) it can be used by a limited user to mount an effective denial-of-service attack. Stalling or crashing a multiuser system should IMO (ideally) be something that root, and only root, can do. ;)

OTOH, the presence of tools like dd is why I much prefer Linux to Windows. Experienced users shouldn't have to jump through hoops to do simple things.

Edit: re swap, I wish there were a way of hibernating without it. In my experience it is not very helpful, even on low-memory systems.

Edited 2012-10-27 18:44 UTC

Reply Score: 3

RE[4]: lie-nux at it again.
by Alfman on Sat 27th Oct 2012 19:38 UTC in reply to "RE[3]: lie-nux at it again."
Alfman Member since:
2011-01-28

Gullible Jones,

"What bothers me is that (by default anyway) is that it can be used by a limited user to create an an effective denial-of-service attack."

I see your point. You can put hard limits on a user's disk/cpu/ram consumption, but that can easily interfere with what users want to do. I'm not sure any system can distinguish between legitimate resource usage and accidental or malicious usage?


At university some ten years ago, we were using networked Sun workstations; I'm sure they knew something about distributing resources fairly to thousands of users. I don't remember the RAM capacity/quotas, but I do remember the disk quota because I ran into it all the time - soft limits were like 15MB, uck!

Reply Score: 3

RE[4]: lie-nux at it again.
by Laurence on Sat 27th Oct 2012 20:16 UTC in reply to "RE[3]: lie-nux at it again."
Laurence Member since:
2007-03-26

I realize the above is correct behavior... What bothers me is that (by default anyway) it can be used by a limited user to mount an effective denial-of-service attack. Stalling or crashing a multiuser system should IMO (ideally) be something that root, and only root, can do. ;)

mv `which dd` /sbin/

problem solved.

Edited 2012-10-27 20:18 UTC

Reply Score: 3

RE[5]: lie-nux at it again.
by jessesmith on Sat 27th Oct 2012 21:08 UTC in reply to "RE[4]: lie-nux at it again."
jessesmith Member since:
2010-03-11

That just takes care of one tool which can bring the system to its knees; limiting access to dd is a bandage. The issue is that any application on Linux can cause the system a great deal of stress or bring it down. (I do this a couple of times a year by accident.)

There are ways to protect against this kind of attack (accidental or not) such as setting resource limits on user accounts. Most distributions do not appear to ship with these in place by default, but if your system requires lots of uninterrupted uptime, the sysadmin should consider locking down resource usage.

Reply Score: 3

RE[6]: lie-nux at it again.
by Laurence on Sat 27th Oct 2012 21:35 UTC in reply to "RE[5]: lie-nux at it again."
Laurence Member since:
2007-03-26

That just takes care of one tool which can bring the system to its knees; limiting access to dd is a bandage. The issue is that any application on Linux can cause the system a great deal of stress or bring it down. (I do this a couple of times a year by accident.)

There are ways to protect against this kind of attack (accidental or not) such as setting resource limits on user accounts. Most distributions do not appear to ship with these in place by default, but if your system requires lots of uninterrupted uptime, the sysadmin should consider locking down resource usage.

It's the same case for all OSs though. Trying to open a 200MB Excel spreadsheet that some office idiot decided to build a database in will easily bring Windows to its knees.

The moment you put an idiot in front of a computer, that machine is as good as dead, regardless of the OS. There's a saying that goes something like "the more you dumb something down, the bigger the idiots that come along".

Reply Score: 5

RE[6]: lie-nux at it again.
by Alfman on Sat 27th Oct 2012 21:37 UTC in reply to "RE[5]: lie-nux at it again."
Alfman Member since:
2011-01-28

jessesmith,

"The issue is that any application on Linux can cause the system a great deal of stress or bring it down. "

Agree with your post; however, let's expand that to ANY multiuser OS, be it UNIX (FreeBSD, Linux, OS X, etc.), Windows Terminal Server, Citrix, etc.

Reply Score: 3

RE[6]: lie-nux at it again.
by foregam on Sun 28th Oct 2012 18:53 UTC in reply to "RE[5]: lie-nux at it again."
foregam Member since:
2010-11-17

man sh
Scroll down to ulimit and read about all the things you can put limits on. dd is not a problem by itself.
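ulimit is the shell front-end to setrlimit(2); here is a minimal sketch (the 1 GiB cap and 4 GiB request are arbitrary, and a 64-bit build is assumed) of how capping the address space turns the dd scenario into a clean failure instead of a system-wide stall:

/* Cap our own address space at 1 GiB, then try a dd-sized allocation. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl = { .rlim_cur = 1UL << 30, .rlim_max = 1UL << 30 };

    if (setrlimit(RLIMIT_AS, &rl) != 0) { perror("setrlimit"); return 1; }

    void *p = malloc(4UL << 30);   /* 4 GiB request, like bs=4G
                                      (64-bit build assumed) */
    if (p == NULL) {
        puts("malloc failed cleanly: the cap caught it");
        return 0;
    }
    memset(p, 0, 4UL << 30);       /* never reached under the cap */
    return 0;
}

The shell equivalent is ulimit -v 1048576 (the unit is KiB) before starting the offending command.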

Reply Score: 4

RE[5]: lie-nux at it again.
by Alfman on Sat 27th Oct 2012 21:33 UTC in reply to "RE[4]: lie-nux at it again."
Alfman Member since:
2011-01-28

Laurence,

"If Linux gets exhausted of RAM, then the requesting application is killed and an OOE (out of memory exception) raised in the event logs."


Isn't the default behaviour under Linux to invoke the OOM killer? It takes over and heuristically decides which process to kill. I'm opposed to the OOM killer on the grounds that it can kill well-behaved processes, even ones that handle out-of-memory conditions in a well-defined way.

Playing devil's advocate: the OOM killer gives the user a chance to specify weight factors for each process, hinting to the kernel which processes to kill first (/proc/1000/oom_adj, /proc/1000/oom_score, etc.). This increases the likelihood that the kernel will kill the process responsible for consuming the most RAM. Without the OOM killer, a small process (e.g. ssh) can be forced to terminate when another process (dd bs=4G) is responsible for hoarding all the memory. Killing the large "guilty" process is better than killing small processes that merely happen to need a little more memory.
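As a small illustration of those knobs (using /proc/self rather than a hard-coded pid): a process that must survive, say an ssh daemon, can opt out of OOM selection entirely. It needs root, and newer kernels replace oom_adj with oom_score_adj and a -1000..1000 range.

/* Tell the OOM killer to never pick this process (-17 = disable). */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/self/oom_adj", "w");
    if (!f) { perror("oom_adj"); return 1; }
    fputs("-17", f);
    fclose(f);

    /* ... continue as a process the OOM killer will skip ... */
    return 0;
}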


I am interested in what others think about the Linux OOM killer.



"mv `which dd` /sbin/ problem solved."

I don't think that addresses the root concern, which is that userspace processes can abuse system resources to the point of grinding the system to a halt. dd was a simple example, but there are infinitely more ways to do similar things. If our goal were to deny access to all the commands with the potential to overload system resources, we'd be left with a virtually empty set. Obviously you'd have to deny access to php, perl, gcc, even shell scripts. The following does an excellent job of consuming both CPU and RAM on my system until I run out of memory and it aborts:

cat /dev/zero | gzip -c | gzip -d | gzip -c | gzip -d | gzip -c | gzip -d | sort > /dev/null

It's not likely to happen accidentally, but if a user is determined to abuse resources, he'll find a way!

Reply Score: 2

RE[6]: lie-nux at it again.
by Laurence on Sat 27th Oct 2012 21:44 UTC in reply to "RE[5]: lie-nux at it again."
Laurence Member since:
2007-03-26


Isn't the default behaviour under Linux to invoke the OOM killer? It takes over and heuristically decides which process to kill.

Well yeah, that's what I just said.


I'm opposed to the OOM killer on the grounds that it can kill well-behaved processes, even ones that handle out-of-memory conditions in a well-defined way.

Yeah, I've often wondered if there was a better way of handling such exceptions. OOM doesn't sit nicely with me either.


Playing devil's advocate: the OOM killer gives the user a chance to specify weight factors for each process, hinting to the kernel which processes to kill first (/proc/1000/oom_adj, /proc/1000/oom_score, etc.). This increases the likelihood that the kernel will kill the process responsible for consuming the most RAM. Without the OOM killer, a small process (e.g. ssh) can be forced to terminate when another process (dd bs=4G) is responsible for hoarding all the memory. Killing the large "guilty" process is better than killing small processes that merely happen to need a little more memory.

Interesting concept. A little tricky to implement I think, but it has potential.


I don't think that addresses the root concern, which is that userspace processes can abuse system resources to the point of grinding the system to a halt. dd was a simple example, but there are infinitely more ways to do similar things. If our goal were to deny access to all the commands with the potential to overload system resources, we'd be left with a virtually empty set. Obviously you'd have to deny access to php, perl, gcc, even shell scripts. The following does an excellent job of consuming both CPU and RAM on my system until I run out of memory and it aborts:

cat /dev/zero | gzip -c | gzip -d | gzip -c | gzip -d | gzip -c | gzip -d | sort > /dev/null

It's not likely to happen accidentally, but if a user is determined to abuse resources, he'll find a way!

But that's true for any OS. If a user has access to a machine then it would only take a determined halfwit to bring it to its knees.

The only 'safe' option would be to set everyone up with thin clients which only have a web browser installed and a bookmarked link to cloud services like Google Docs.

Reply Score: 2

RE[7]: lie-nux at it again.
by Alfman on Sat 27th Oct 2012 22:11 UTC in reply to "RE[6]: lie-nux at it again."
Alfman Member since:
2011-01-28

Laurence,

"Well yeah, that's what i just said."

"Interesting concept. A little tricky to impliment I think, but it has potential."

Maybe we're misunderstanding each other, but the OOM killer I described above *is* what Linux has implemented. When it's enabled (I think by default), it does not necessarily kill the requesting application; it heuristically selects a process to kill.


"The only 'safe' option would be to set everyone up with thin clients which only have a web browser installed and bookmarked link to cloud services like Google Docs."

Haha, I hear you there, but ironically I consider Firefox to be one of the guilty apps. I often have to kill it as it reaches 500MB after a week of fairly routine use. I'm the only one on this computer, but if there were 4 or 5 of us it'd be a problem.


This is probably hopeless, but here is what top prints out now:

  PID USER PR NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
27407 lou  20  0 1106m 403m  24m R    4 10.0  50:27.51 firefox
21276 lou  20  0  441m 129m 5420 S    3  3.2 869:47.14 skype

I didn't realise Skype was such a hog!

Reply Score: 2

RE[7]: lie-nux at it again.
by Gullible Jones on Sun 28th Oct 2012 00:39 UTC in reply to "RE[6]: lie-nux at it again."
Gullible Jones Member since:
2006-05-23

But that's true for any OS. If a user has access to a machine then it would only take a determined halfwit to bring it to its knees.

Have to disagree; IMO the entire goal and purpose of a multiuser OS is to prevent users from stepping on each other's toes. Obviously some of this is the sysadmin's responsibility; but I do think it's good to have default setups that are more fool-proof in multiuser environments, since that's probably where Linux sees the most use. (I think?)

That said, operating systems are imperfect, like the humans that create them.

Re handling of OOM conditions: IIRC the BSDs handle this by making malloc() fail if there's not enough memory for it. From what I recall of C, this will probably cause the calling program to crash, which I think is what you want in most cases - unless the calling program is something like top or kill! But I doubt you'd easily get conditions where $bloatyapp would keep running while kill got terminated.

(Linux has a couple of options like this. The OOM killer can be set to kill the first application that exceeds available memory, or you can set the kernel to make malloc() fail if more than a percentage of RAM + total swap would be filled. Sadly, there is as yet no "fail malloc() when physical RAM is exceeded and never mind the swap" setting.)
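A toy probe of that difference (treat it as a sketch, and run it in a throwaway VM: under the default vm.overcommit_memory=0 heuristic the OOM killer will eventually shoot it or an innocent bystander, while with vm.overcommit_memory=2 the loop instead ends with a clean NULL from malloc(), the BSD-style behaviour described above):

/* Allocate and touch memory until malloc() fails or the OOM killer acts. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const size_t chunk = 64UL << 20;    /* 64 MiB per step */
    size_t total = 0;

    for (;;) {
        void *p = malloc(chunk);
        if (p == NULL) {                /* strict overcommit: clean failure */
            printf("malloc failed after %zu MiB\n", total >> 20);
            return 0;
        }
        memset(p, 1, chunk);            /* touch pages so they really exist */
        total += chunk;
    }
}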

Reply Score: 2

RE[2]: lie-nux at it again.
by Laurence on Sat 27th Oct 2012 20:12 UTC in reply to "RE: lie-nux at it again."
Laurence Member since:
2007-03-26

There's a little truth to this. Try running

$ dd if=/dev/zero of=~/dumpfile bs=4G count=1

on a system with 2 GB of RAM and any amount of swap space; the OS will hang for a long, long time.

(If you don't have swap space, the command will fail because you don't have enough memory. But it's not safe to run without swap space... right?)

Mind you, Windows is just as bad about this - it just doesn't have tools like dd preinstalled that can easily crash your computer. ;)


If Linux exhausts its RAM, then the requesting application is killed and an OOM (out-of-memory) error is raised in the event logs.

Sadly this is something I've had to deal with a few times, when one idiot web developer decided not to do any input sanitising, which effectively ended up with us getting DoS attacked when legitimate users were making innocent page requests. <_<

Reply Score: 3

RE[3]: lie-nux at it again.
by Gullible Jones on Sun 28th Oct 2012 00:43 UTC in reply to "RE[2]: lie-nux at it again."
Gullible Jones Member since:
2006-05-23

If Linux exhausts its RAM, then the requesting application is killed and an OOM (out-of-memory) error is raised in the event logs.

Not quite true; that's what Linux should do. ;) What Linux usually does (unless vm.oom_kill_allocating_task is set to 1) is attempt to kill programs that look like memory hogs, using some kind of heuristic.

In my experience, that heuristic knocks out the offending program about half the time... The other half the time, it knocks out X.

Reply Score: 2

RE[4]: lie-nux at it again.
by Laurence on Sun 28th Oct 2012 11:48 UTC in reply to "RE[3]: lie-nux at it again."
Laurence Member since:
2007-03-26


Not quite true, that's what Linux should do. ;) What Linux usually does (unless vm.oom_kill_allocating_task is set to 1) is attempt to kill programs that look like memory hogs, using some kind of heuristic.

In my experience, that heuristic knocks out the offending program about half the time... The other half the time, it knocks out X.


I stand corrected. Thank you ;)

Reply Score: 2

RE: lie-nux at it again.
by KrustyVader on Sun 28th Oct 2012 17:48 UTC in reply to "lie-nux at it again."
KrustyVader Member since:
2006-10-28

Linux has lots of problems; GNOME and KDE, from my point of view, create a madness of library dependencies. Sometimes to install a simple program I need 30 libraries.

Thankfully there are lots of options in Linux, and for some reason I keep choosing Slackware as a base installation (but without KDE).

And for real-time OSs, QNX is (sadly) no longer a valid option. It belongs to RIM and you can guess what will happen.

Reply Score: 1

RE[2]: lie-nux at it again.
by Gullible Jones on Sun 28th Oct 2012 18:45 UTC in reply to "RE: lie-nux at it again."
Gullible Jones Member since:
2006-05-23

Linux desktops have (IMHO) taken a turn for the worse lately, but don't mistake that for what's happening under the hood. Newer kernels have some really cool features (and IMO perform better on ancient hardware than the old 2.6 series did).

(And fortunately there are still MATE and Xfce on the desktop front. Also Trinity, though that doesn't seem to be as functional right now.)

Reply Score: 2

phoenix
Member since:
2005-07-11

Was really looking forward to an in-depth discussion on the merits of different schedulers ... and all that's posted is the same tired trolling. ;)

Is this really what the Internet has come down to?

Reply Score: 3

WereCatf Member since:
2006-02-15

I would have liked such a discussion, too, but I simply do not know enough about schedulers to say much or have an informed opinion on the various approaches. Sorry.

Reply Score: 2