“It’s a sad day when an ancient fork bomb attack can still take down most of the latest Linux distributions.” Read through this article at Security Focus for more.
From the mail archive:
https://www.redhat.com/archives/fedora-devel-list/2005-March/msg0115…
"The only news here is that securityfocus really will print any crap that's submitted.
I look forward to the followup article.
“I was stunned that I could just pull the power cable out of the wall and Linux would do nothing to prevent this denial of service”.
man ulimit [emphasis added]
If we set strict ulimits by default we'd have people writing articles like 'Fedora is teh suck, I can't malloc more than x MB in a single process.' What's fit for one configuration may not be for another. One size most definitely does not fit all."
This is a non-issue, completely preventable, 100% traceable.
How does that refute the claims made by Security Focus? The fact remains that any process can bring the system to its knees. By default, there should be a hard limit imposed on the amount of memory a process can allocate. If you want more, you need to do some fancy stuff. Something like the current JVMs, where you need to specify the maximum heap size for apps that require more memory than the predefined limit.
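For instance (just a sketch; the jar name and the size are placeholders):
java -Xmx512m -jar someapp.jar   # the JVM refuses to grow the heap beyond 512 MB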
The point he was making is that any predefined limit is arbitrary, and will be unwanted by many users. Users that want to set such limits are free to do so. The method is already builtin; putting it in place is trivial.
The pertinent system philosophy being to provide a foundation on which to build, not a house to be remodeled.
I don't think it's even about limits. The invalidity of this "security concern" is that a VALID, AUTHORIZED user of your system can take it down by hogging resources…
So, I give a friend unrestricted access to my computer and he hogs all the resources. How is that a security concern? However, it would be valid to point out that it is difficult to recover from this compared to, say, FreeBSD. But it is still not a security flaw.
Isn't Windows under threat from the same attack? I could write a small C program to test, but I'm sure someone already knows the answer.
…or compromise the machine by gaining access to a user account, which in most real-world settings is easier than getting root. Then, once you've got a user account, you take the machine down. It's most definitely a security issue. Anything that allows users to do things they shouldn't be able to do is a matter of security. The separation between user and administrator rights and processes is one of the fundamental parts of system security.
I might be a heretic, but actually I think there's a clear limit to what systems can automagically do.
I can confirm I had a problem like that on a Windows 2003 system with a web application suddenly using huge amounts of memory. That didn't crash Windows 2003 itself but made the whole system practically unusable. The solution was to set up a limit (via IIS application pools) on the memory such a web application could eat. That way, the system had no problem, but of course that web application was (and still is) buggy and dangerous.
To me, the point is you cannot forget that machines get administered by humans and that humans are (by far) more stupid than machines. If the user/administrator doesn't know what he or she is doing, no default settings can save you. Ever!
If we set strict ulimits by default we'd have people writing articles like 'Fedora is teh suck, I can't malloc more than x MB in a single process.' What's fit for one configuration may not be for another. One size most definitely does not fit all."
No doubt this is true. While systems can help you with security, the human factor is by far the most important one. You can have ANY kind of A/V, for example, but you cannot protect yourself from an user who decides to run an infected program…
"While the fork bomb example clearly isn't a kernel-specific problem, it is a Linux problem — and it's something that the kernel could certainly have prevented. For the record, I hope that anyone out there running Linux is just as surprised "
I'm not surprised, only confused…
Taking into account the easy way to fix this problem (if it actually happens to exist on your particular flavour of GNU/Linux), it might still be a good idea for the operating system to ensure that resources are only granted to users to such a degree that the system can still respond in a timely manner to administrative intervention. Or is there already such a capability in GNU/Linux that I do not know about?
They do exist, although for some reason the guy who wrote that forgot them and blamed the kernel.
/etc/security/limits.conf:
* hard rss 409600
* hard nproc 1000
There. Two fucking lines in a well-documented file, and they blame the kernel. Amazing.
It's totally irrelevant if you could "detonate" a fork bomb on other OSes besides Linux. The fact is that running
:(){ :&:;};:
as a non-root user brought every mainstream Linux to its knees.
Only restricting both soft and hard limits in "/etc/security/limits.conf" could stop the infinite process spawning. The "funny" thing is, just selecting copy and paste from within e.g. Mozilla Thunderbird was enough. I'm glad I had a restrictive setting 🙂
limits.conf should have healthy default settings in order to help protect the system and annihilate those primitive attacks.
“brought every mainstream Linux”
Nice to see you didn't even RTFA.
Excerpt:
I’ll quickly mention here that Debian did not suffer the same fate as the others; congrats to the Debian development team.
Debian is the largest and fastest-growing Linux distro. If that isn't mainstream, I don't know what is.
A few years ago, I heard the reason to use Linux was that it was secure by default, and that you could also make Windows secure, but that would take a lot of work.
And now the fact that any process can take down a Linux machine with the default settings is a feature!?
Hello? What part of the first post did you not read?
– This problem can be prevented by setting a memory limit for processes.
– However, if a limit is set by default, other people will complain about it!
– Conclusion: you can never please everybody with the default settings. So configure it to whatever you like! There is no excuse, the option is there.
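For example, per session (a minimal sketch; the numbers are arbitrary illustrations, not recommendations):
ulimit -u 200       # cap the number of processes this user may spawn
ulimit -v 1048576   # cap virtual memory per process, in kilobytes
Or set it system-wide with the /etc/security/limits.conf lines quoted earlier in the thread.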
With your logic, Windows is not bad either. They just come with bad defaults… I configured my XP properly and never had even one problem. Yoohoo, Windows rocks… most usable, best applications… and yes, it is as secure as Linux by default…
Free Linux doesn't interest me because my time is more important than the $400-500 spent every 2-3 years…
This and everything else I read lately confirms my belief:
Linux is no more secure than any other operating system on the market.
So, as soon as I start hearing arguments about “this is more secure than that”, I really just switch off.
I'd prefer to have those limits set by default in all the Linux distros; there are so many people who don't know about this and are exposed to the issue.
I agree with OpenBSD's philosophy: it must be secure by default. Let's bring this to every corner of Linux.
First, I’ll mention that I am a big fan of Linux. I’ve used all of the mainstream distros and today use Gentoo 99% of the time I’m on a computer (work and home).
I also really enjoy the BSD’s – especially OpenBSD – and find it somewhat funny that so many Linux bigots have claimed that placing some sort of limit by default somehow hampers use of the system. How many web servers run FreeBSD? How many BSD’s do serious heavy lifting? They don’t fall to the fork bomb, yet they are still able to function on large scales.
If placing a default limit to prevent fork bombing hampers Linux so much, perhaps developers should spend some time studying how the BSD’s managed to prevent this attack while still being very useful.
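For what it's worth, on FreeBSD these limits live in /etc/login.conf; a rough sketch (the class fragment and values are illustrative assumptions, not the shipped defaults):
default:\
	:maxproc=256:\
	:memoryuse=512M:
cap_mkdb /etc/login.conf   # rebuild the login capability database after editing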
One thing most people in here forget is that in order to start a fork bomb, you still need a valid user account on a Linux machine. I'd never give a user account to people I don't trust. Windows is blamed for being insecure because it has a crappy default browser, a crappy email program, and because there have been a lot of attacks (a long time ago, though) that could take down a Windows machine from a remote host.
If you're an administrator who needs to give people accounts on a Linux machine, you need to know how to secure the machine; the defaults on Linux are _not_ meant to keep a box with, e.g., 50 users on it from going down under high load. Your Linux machine will be pretty secure if you use it as a router, a webserver, a Samba server, whatever, but if you begin giving accounts to users, you need to know the dangers – simple as that.
Apart from the fact that the author doesn't seem to have much knowledge of unices (blaming the kernel for fork bombs working is ridiculous), I have to admit that default ulimit settings would be a good thing for all distributions – a default of 1,000 processes per user would cause problems for no one except people who actually run more than 1,000 processes, and those, in turn, are exactly the kind of people smart enough to configure their system properly (i.e. they know what ulimits are).
Then again, shell accounts on Linux boxes are a permanent danger; just remember the mass of kernel exploits that came up in the last few years (look up how many Paul Starzetz from iSEC found). DoS attacks are a common problem anyway; nearly everyone with access to a real broadband connection (universities) could take down your webserver, for example.
People seem to rediscover fork bombs and how to prevent them every few years or so.
The oldest example I can find for Linux is with kernel 1.2.13, from 1996.
And yes, the answer is still the same, use ulimit to stop it.
http://groups.google.co.uk/groups?q=prevent+%22fork+bomb%22…
As to why it's not a default setting, well, there are a thousand resource starvation attacks that can be made if someone has a user account on your machine. Either you try to stop them all, which can lead to unpredictable killing of processes by over-paranoid settings, or you adjust ulimits to get the balance right for the particular task that machine is used for.
As most desktop distros don’t have potentially malicious unknown users with user accounts logged in, they are set up on the trusting side.
"I'd prefer to have those limits set by default in all the Linux distros; there are so many people who don't know about this and are exposed to the issue."
Excuse me? *How* are they exposed to this issue? Do they have multiuser systems with untrusted users? Like hell they do.
Come on, most desktop distros are configured *out of the box* so that all users can execute “shutdown -h now”. Why is no-one complaining about that?
This is entirely a question for the distributions. If a normal end-user runs into a hard limit set via ulimit then they’re finished. They have no idea what to do, and as far as they’re concerned their computer, or “Linux,” is broken. I’m sure we’d see an editorial on OSNews about how Fedora is a “bad distribution” because it so confounds inexperienced users.
The question is, as usual, one between convenience and security. Desktop systems should probably err on the side of convenience so that the user doesn’t feel stifled by the environment, and server systems, of course, should err on the side of security because a good administration will know how to configure the machine properly if something actually needs to be changed. There will always be some resource that is unguarded or that can’t be guarded without loss of some convenience, and so it’s a question of what resource the malicious program is gobbling up. Note that the article doesn’t even mention this — what resources were being used up that caused the machine to crash? I wouldn’t be surprised if a slight variant designed to abuse different resources could bring a BSD to its knees.
In summary: nothing to see here, move along.
This and everything else I read lately confirms my belief:
Linux is no more secure than any other operating system on the market.
You are right, security isn’t really about OSes. It’s a process. To make that process work you need skilled sysadmins.
However, Linux provides a lot of tools for such a skilled admin which, handled right, can make it an extremely safe system. I'm thinking of things like SELinux, good firewalls, etc.
But without somebody who knows what to do with them, you will not be safer than if you run Windows or anything else, for that matter.
All of my boxen crashed from fork bombing except my macintoshes. Apple si TEH WINNAR!!!!!
Seriously though, I have a hard time seeing this as a major issue. Yes, the default limits set are stupid. It is however very easy to make them sane. Maybe the best approach would be to prompt the admin at the time of install “Hey – do you really want to leave this limit st00p1d h1g|-|?” This could be done around the same time you are prompted for a grub password and firewall settings.
It's not often you see people who actually believe that the user has any control over the security of their system. Most of the time I see things like:
“Run linux distribution Z and you’ll be secure.”
“If you’d installed SP2 it would have been patched.”
“My system is running -O9 Gentoo, and I’ve never had any problems!”
“Malware doesn’t exist on linux.”
“BSD or Busted”
“…VMS is unhackable…”
Sometimes I get the feeling that the casual computer geek out there has the odd idea that their operating system of choice could protect even a lobotomized monkey from itself. It's very refreshing to see someone who actually takes into account that the amount of experience a user has with an OS is more relevant than the software they use 90% of the time.
Gimp users would complain. I’ve personally used over a GB of RAM in Gimp, and I’m a light user.
Developers would be angry. Valgrind needs to reserve 1.5GB of RAM; with limits set it gets told no.
This shouldn’t be limited by default on desktops; it should be on servers probably; or a good tutorial on doing it should be presented. This guy isn’t helping anyone; he’s complaining without bothering to mention how to easily solve the problem he claims is easily solvable.
I am interested to know how the BSD’s prevent it without angering end users.
This is a trusted-user issue. If you run a large server that thousands of people use, or thick clients, then this is a security matter. This is not desktop security; however, some people believe headlines and don't bother to read the article.
Actually, I take it back. It’s a non-issue for thick clients too. Who cares if people can take down a kiosk; you just reboot it.
What happens to the system when it's fork-bombed?
Does it crash, or does it only get damn slow over a short period of time?
If it crashes, it IS a bug in the kernel.
If it only gets slow, that's normal for every task that puts heavy load on the system (not just fork bombs).
I saw the second effect a few times with AutoCAD when you tried to fill an open area. AutoCAD used up all the virtual memory and crashed, Windows cleaned up the rest, and that's it. No system crash, just about 4 minutes gone to waste.
This is a security issue, but it's not a kernel security issue. I admit I had never heard of ulimit before; I had a look at its man page, and let's just say it's not exactly clear about what to fix or how. If the man page isn't clear, how are you going to encourage users to fix the problem? That said, since I'm running a Debian derivative, I'm most probably OK.
Now we come to the question: many Linux machines are used at home by non-sysadmins, as desktop systems rather than servers. Realistically, how many of those are going to have multiple, particularly hostile user accounts (which would of course bring the system to its knees)? A competent sysadmin running a server will lock this sort of thing down.
Now, if this guy was serious, why didn't he include his script for download so that Linux users could test their systems? Rather than open his mouth and be counterproductive, he could have been proactive. The average user isn't going to have any idea how to write a script to do this. Sure, I could google it, but it would have been nice for him to be more proactive, rather than just 'say-so' damning.
From the article:
"While the fork bomb example clearly isn't a kernel-specific problem, it is a Linux problem — and it's something that the kernel could certainly have prevented."
So – his title "Linux Kernel Security, Again" is vastly misleading. It's NOT a kernel issue. He wants it fixed by the kernel team, but there are userland means of fixing this problem, and apparently have been for a long, long while. Linus doesn't like having a lot of userland stuff in the kernel; he wants it out of the kernel for a variety of 'security' and 'performance' reasons.
“I personally don’t understand how usability can supersede security when the consequences are so grave. ”
Home systems, Jason. Comprende?
“Not being as intimately familiar with the various Linux distributions as I am with the three BSDs,”
He’s a BSD guy. BSD guys generally love to badmouth Linux. It’s like KDE vs Gnome users. Most probably cos they’re jealous that Linux develops (rather than the uber slow development that bsd calls ‘development’).
“Again I must ask, why are products like GRSecurity and PaX, or at a minimum their non-intrusive features, not ending up in the base Linux kernel? ”
Jason – if you think you can do it better, stop bitching, fork the kernel, pull your finger out of your ass and do it yourself. No one's stopping you. Stuff like this really fucking shits me – armchair critics.
Reading some of the comments gave me a few nice things to try and check out. I cannot fathom why Jason chose to criticise this but not tell users how to check (i.e. ulimit -a). The ulimit command on a Debian box is deprecated, btw.
You could just grant users rights without a shell account. There, problem fixed. With no shell account, it's pretty hard to do anything malicious (well, that's my limited understanding of this).
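A rough sketch of that (the nologin path varies by distro, and 'someuser' is a placeholder):
usermod -s /usr/sbin/nologin someuser      # take away an existing account's interactive shell
useradd -s /usr/sbin/nologin -m someuser   # or create an account that never gets one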
Dave
Chris: Gimp users would complain. I’ve personally used over a GB of RAM in Gimp, and I’m a light user.
Developers would be angry. Valgrind needs to reserve 1.5GB of RAM; with limits set it gets told no.
This shouldn’t be limited by default on desktops; it should be on servers probably; or a good tutorial on doing it should be presented. This guy isn’t helping anyone; he’s complaining without bothering to mention how to easily solve the problem he claims is easily solvable.
I am interested to know how the BSD’s prevent it without angering end users.
I agree that this isn't a big issue. However, limits should be set. I don't see why it can't be set up to somehow "pause" a program if it exceeds its limits and then send a notification to the user asking what to do: kill the program, temporarily extend its limits, or permanently extend them.
Of course… To my knowledge no OS does this.
But part of convenience is having awesome security by default while being able to adjust it EASILY if it needs adjusting.
“He’s a BSD guy. BSD guys generally love to badmouth Linux. It’s like KDE vs Gnome users. Most probably cos they’re jealous that Linux develops (rather than the uber slow development that bsd calls ‘development’).”
This is coming from the same David Pastern who spontaneously decided to trash both the BSD community at large and their license only two days ago? Cute, Dave.
Furthermore, isn't it a bit hypocritical that while you are calling the BSD development process "uber slow", you're also running a derivative of Debian — a distribution that has not had an official release in 31 months?
Oh… I forgot to add…
I’m not sure about the average desktop user in the world, but most of the home users I know (as an example) wouldn’t be using programs that take up a large amount of memory very often, if ever. So why have no limit for them? It doesn’t exactly make sense to me.
In my opinion, limits should be set by default, but they should be insanely easy to adjust for when the time comes.
An extremely simple executable called fork in your own home directory like this:
#!/bin/bash
bash &
~/fork
would kill the server eventually. Not elegant, not complex, but kills a box in a minute or so.
Quote: "This is coming from the same David Pastern who spontaneously decided to trash both the BSD community at large and their license only two days ago? Cute, Dave."
Yup. Did you bother to read my reply to you on that post? Most probably not – I apologised to you and the BSD community for tarring them with the same brush as the corporate entities, which aren't a *part* of the BSD community, imho. That was the whole point of my original argument.
The BSD license I do not like, and never will like, for reasons that I've previously stated and I'm sure you're well and truly aware of. I will stand my ground on this. I will not be intimidated by the likes of yourself or other BSD users over my dislike of the BSD license (and my reasoning). Many other GPL users feel the same way, I might add.
As to Debian, yes, it's rather slow, and it doesn't make me (or many others) happy that Debian is so slow. The beauty of Debian, though, is that I can grab from testing/unstable if I want, and thus get much later packages. You can, of course, grab the BSD experimental tree releases, but from what I've seen you can certainly expect issues when running them. That's fine; they are marked as testing releases, so you install at your own peril, with the bonus of getting new features but the risk of possible stability issues. No big deal. How long has the 5.x tree of FreeBSD been in development now? I'd say much longer than the release cycle of Debian. Memory tells me nearly 4 years.
Dave
How many BSD’s do serious heavy lifting? They don’t fall to the fork bomb, yet they are still able to function on large scales.
Alan Cox claims to be able to take down BSD boxes easily: https://www.redhat.com/archives/fedora-devel-list/2005-March/msg0120…
"How long has the 5.x tree of FreeBSD been in development now? I'd say much longer than the release cycle of Debian. Memory tells me nearly 4 years."
Touché! Yes, it's been quite a while for FreeBSD to consider the "Fine-Grained SMP branch" stable. ^^
As for standing your ground, I'm not trying to intimidate you into changing. Variety is the spice of life, and while the Linux community isn't particularly my thing either — they certainly make life more interesting.
I am writing this on a Debian system (sid). With the default limits (max proc = 6144, max mem = unlimited) for a non-superuser, a fork bomb cripples the system. I use a vanilla kernel; are there Debian-specific patches that prevent this?
This is true! BSD is generally more secure than Linux – stability, I think, is pretty much on a par. The big difference in my eyes is the license. If only the BSD license had a clause in it that MADE people who use BSD code return improvements to the community, it would be good. Whilst many corporates donate/contribute back to BSD developers, they pick and choose what they want to return. In reality, they don't have to return anything to the community at all (try Microsoft, for instance). The BSD license then, to add insult to injury, allows them to relicense it [the code] under a new license, as long as the original BSD copyright notice is kept intact. Now of course, this new license can 'conflict' (and does) with the [I believe] original intent and spirit of the BSD license. The code can (and does) potentially become 'non-free' to the recipient of this bastardised BSD code. This is what I dislike. I will never like the BSD license until this is changed by the regents of Berkeley, California. Based on these ideals, I will not recommend the BSD license to friends, family or business.
The GPL, whilst unofficially not 'anti-business', I think is anti-business. Well, not anti-business per se, but anti-business in the current modus operandi that businesses pursue. Businesses can, and do, run on GPL products. It's just a change of mindset. Of course, you can argue: what right do RMS and the GPL have to instruct businesses how to conduct their business? That's fine – simply do NOT use GPL code if you don't like the license. Linux and GPL'd code were around before they became fashionable in business, and will be around long after that. It's code written by the people, for the people.
Dave
No patch required.
At a console…
ulimit -u 100
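Note that the ulimit builtin only affects the current shell and processes started from it; for a persistent, system-wide default you would add something like the following (the value is only an illustration) to /etc/security/limits.conf:
* hard nproc 100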
The author of the article states that the default Debian configuration is not susceptible to fork bombs. My newly installed Debian system with a vanilla kernel was crippled by a fork bomb. I am wondering why my system is susceptible, but not the author's.
With your logic, Windows is not bad either. They just come with bad defaults.
Contrary to a Windows install with bad defaults, this “security issue” won’t allow your system to be compromised in less than 10 minutes simply by connecting it to the internet.
In fact, in nearly 4 years of using Linux, this has never caused any problems. Just to be sure, I've set the limit for processes at 1,000, which is far more than I ever use.
I checked out rexFBD @
http://rexgrep.tripod.com/rexfbd.htm
It hasn't been in development since 2000, and the modules are the old-style .o, not the newer post-2.6 kernel .ko. Why Jason is referring users to these [old] tools is a mystery. And again, as I stated earlier, why he didn't reference ulimit I have no idea.
Dave
You forget about games. These love RAM like nothing else.
The general problem is that a user-level process can crash the system by using up all available resources of a given type. The problem is very important, but it is mostly a stability issue rather than a security one. Software can malfunction because of a bug or be taken over because of a security vulnerability, and in many cases it is essential that the malfunction not impair the system beyond the functionality of that module. For example, one would not want an airplane to crash every time something goes wrong with some part or some software module of the plane.
It is possible to design a (fully functional) operating system immune to these problems. For basic architecture, see for example,
http://web.mit.edu/dmytro/www/OS_Architecture.htm
It should not be too difficult for developers to modify Linux to be resistant to system crashes brought on by limited-privilege processes.
A couple of years ago I tried to fork bomb my Linux machine, running 2.4, and the kernel simply killed off the offending process(es) when they sucked up a certain amount of memory. I believe this was the Out Of Memory Killer. It worked very well for me. My 2.6-based Gentoo setup tanked in about a second with a bash-script fork bomb. After setting up /etc/limits all was good, but what happened to the OOMK? I thought it was pretty cool.
It’s a “sad day” when someone presents this ancient crack as current news.
Here is one solution I’ve read somewhere:
ulimit -u <max_number_of_processes>
BTW, this fork bomb can also be performed with two simple batch files on a modern and innovative system like Win XP SP2.
Not trolling. Just stating the facts.
The OOMK is a vile creature. I don't like it in concept or in implementation. The fact that it serendipitously killed the right process for you is, in fact, a miracle. More often than not the OOMK has a fixation with destroying any and every useful process executing on the machine in an attempt to preserve the present borked state that caused it to be needed. Personally I think it's some twisted, Darwinistically programmed attempt at self-preservation.
mmm:
ulimit -u max. # of processes
Doesn't work for me… I tried it both as a normal user and as root. It does nothing when checking via ulimit -a. I'm running a Debian-based OS, and the ulimit command is listed as deprecated in its man page. I wonder if that has something to do with it?
Dave
"Taking into account the easy way to fix this problem (if it actually happens to exist on your particular flavour of GNU/Linux), it might still be a good idea for the operating system to ensure that resources are only granted to users to such a degree that the system can still respond in a timely manner to administrative intervention. Or is there already such a capability in GNU/Linux that I do not know about?"
They do exist, although for some reason the guy who wrote that forgot them and blamed the kernel.
/etc/security/limits.conf:
* hard rss 409600
* hard nproc 1000
Thanks for your answer. However, I was thinking of some "smart" resource management, which would not need fixed limits on the resources allocated to individual users, but would always reserve enough resources for a privileged user (root, administrator, …) to intervene.
Dave, you seem to have the common misconception that derivative works would somehow have a negative effect on the original code. Just because somebody else works with the code doesn't mean the original developers will just be sitting on their asses.
Developers who use the BSD license don’t mind that their code can be used for somebody else’s profit; they’re putting it out there so people can use it and benefit from it. Whatever they use it for is up to them.
From the same discussion on fedora-devel list (by Alan Cox)
https://www.redhat.com/archives/fedora-devel-list/2005-March/msg0120…
> The BSDs didn’t seem vulnerable to this issue, and I don’t see people
> going around in circles screaming about it. So, they seem to have chosen
> some “one size fits almost all” limits.
I disagree. I can crash the BSD’s effortlessly with a slight variant of it.
It's a matter of what the attack was tuned to zap.
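For illustration only (a rough sketch, not necessarily the variant Cox had in mind): a one-liner tuned to exhaust memory inside a single process, which an nproc limit alone will not catch; only an address-space or memory ulimit stops it:
a=x
while true; do a="$a$a"; done   # the string doubles every pass, so memory use grows geometrically until a limit (or the OOM killer) kicks in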
I limit nproc to 30 and write a program
while(1) {
if(fork()==-1)
perror(“fork failed”);
}
When I run this, my P4 3.0 GHz, 512 MB DDR machine becomes unresponsive and I cannot kill all the "a.out" processes. Isn't that a security problem?
Yes, something should always be reserved for the kernel to use. Older versions of Unix reserved file space for system use, for example. An ancient real-time system I used, ca. 1970, marked each process with something called status, and its resource limits depended on this status. There were seven levels of status. Hence monitoring the engineering values in the plant was not held up by the programs which updated the VDU displays.
What's so hard about it? Use some sensible limits for resource consumption (memory, number of subprocesses, …) on each process, and if the limits are hit, present the user with a dialog asking whether to give the process more resources or kill it. The admin would have to give users the right to decide; unprivileged users could only kill the process.
The install should have been automatically protected against this. I know there are ways to correct it.
Should one have to correct every security hole or expect that the basic install is secure?
Headline: “Windows security, again”
Entry of text: “Researchers found MSIE’s SSL broken for 4+ years. Bla, bla, bla”
The last step is to profit from banner ads. Geez, stop writing such sensational news about nothing! At the very least, the title is *totally off* and trollish. The problem described is at worst an unfortunate misconfiguration in _some_ Linux distributions, although, world-wide, daemons and programs should indeed be limited by default.
>while(1) {
>if(fork()==-1)
>perror(“fork failed”);
>}
Even though you are limited to 30 processes, the tight while loop consumes the processors.
When you use ulimit here you are not altering the amount of memory available to a process; you are altering the number of processes that a user can own, because that is what a fork bomb exploits.
$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 256
virtual memory (kbytes, -v) unlimited
Here we see that at most 256 user processes can be run. Note that this stops all of the above-mentioned fork bombs dead.
I had this running in the background while I wrote this post.
while(1) {
if(fork()==-1)
perror(“fork failed”);
}
No slow down at all.
Note that ulimit is called limit in tcsh and csh.
Debian is protected from this kind of vulnerability, right?
So I compile an ATI driver,
all fine.
I click on a 3D accelerated screensaver preview:
and the computer locks up,
only a hard reboot gets you out.
Meaning
there is always a way to bring the system down.
So what would the solution be now?
Disable sh by default? No exec rights by default?
No compiler? How far do you want to go?
While I, another Linux fanboy, accept any fair criticism aimed at Linux, the article is noddy at best.
I limit nproc to 30 and write a program
while(1) {
if(fork()==-1)
perror(“fork failed”);
}
When I run this, my P4 3.0 GHz, 512 MB DDR machine becomes unresponsive and I cannot kill all the "a.out" processes. Isn't that a security problem?
Don’t try to kill the processes, as you can’t get them all at once and as soon as you kill one it allows another process to fork again.
Instead, ‘killall -STOP a.out’ to freeze them. Do this a few times. When they’re all frozen, then kill them.
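Something like this (a sketch; repeat the STOP until no unfrozen a.out is left):
killall -STOP a.out   # freeze them so they can no longer fork
killall -STOP a.out   # run it a few more times to catch children forked in between
killall -KILL a.out   # now reap the frozen processes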
As when playing with JavaScript window.open scripts:
let's see if we can kill them all by keeping ALT-F4 pressed?