Linked by David Adams on Tue 28th Jun 2011 15:35 UTC, submitted by HAL2001
In an unexpected move for a security company, SecurEnvoy today said that cyber break-ins and advanced malware incidents, such as the recent DDoS attack by LulzSec, should actually be welcomed and their initiators applauded. The company's CTO Andy Kemshall said: "I firmly believe that the media attention LulzSec’s DDoS attack has recently received is deserving. It’s thanks to these guys, who’re exposing the blasé attitudes of government and businesses without any personal financial gain, that will make a difference in the long term to the security being put in place to protect our own personal data!"
Thread beginning with comment 479272
Alfman Member since:
2011-01-28

jabbotts,

"I'd suggest that DDoS vulnerability is indeed a security issue. Security is not just concerned with protecting the information in that one box. It is also concerned with protecting the system resources for legitimate use. A denial of service removes resources from legitimate users."

This is all true; however, you've overlooked a crucial element: in a well-designed, large-scale DDoS attack, the victim can't tell the attackers from legitimate customers.


"If your network gets flooded out by packets, you have a security mechanism failing to filter packets properly."

Two problems:
1. A filter is useless when the attacker's botnet has more bandwidth than you. Even an OC3 (which was considered large enough for my whole university) is easily saturated by a few hundred broadband users.

2. What kind of filter do you use? If you detect excessive bandwidth from an IP you can block it, but that traffic may or may not be legitimate. Consider a group of mobile users behind a proxy/NAT router: your filter could inadvertently block all of them (see the sketch below).
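
To make the problem concrete, here's a rough sketch of such a naive per-IP filter (Python, with made-up window and threshold values):

    import time
    from collections import defaultdict

    WINDOW_S = 10        # sliding window length in seconds (hypothetical)
    MAX_REQUESTS = 100   # allowed requests per source IP per window (hypothetical)

    hits = defaultdict(list)   # source IP -> timestamps of recent requests

    def allow(ip):
        now = time.time()
        # Keep only the timestamps still inside the window.
        hits[ip] = [t for t in hits[ip] if now - t < WINDOW_S]
        hits[ip].append(now)
        # Everyone behind a shared NAT/proxy address counts against the
        # same bucket, so one noisy network blocks all of its users.
        return len(hits[ip]) <= MAX_REQUESTS

Every user behind the same NAT address shares one counter, so the filter punishes them collectively.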

"If your software gets crashed into a denial of service condition, you have an exploitable vulnerability in the code that needs to be addressed."

Granted, the software should never crash. In the worst case, a busy server should start returning something like an error 500 in HTTP-speak.
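
To illustrate what I mean, here's a minimal sketch of that kind of graceful refusal (a hypothetical Python handler; 503 is the conventional HTTP status for an overloaded server):

    import threading
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    capacity = threading.Semaphore(50)   # hypothetical concurrency limit

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            if not capacity.acquire(blocking=False):
                # Over capacity: refuse cleanly with an error instead of
                # letting requests pile up until the process falls over.
                self.send_error(503, "Service overloaded")
                return
            try:
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b"ok")
            finally:
                capacity.release()

    ThreadingHTTPServer(("", 8080), Handler).serve_forever()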


"If your website takes down your webserver due to resource exhaustion through a designed website function, you have site code that needs to be addressed."

You're oversimplifying the issue by implying that the code is at fault. Assuming you actually have enough bandwidth in the first place (which isn't likely for most small/medium businesses), there are other local bottlenecks that require infrastructure upgrades to eliminate. Databases quickly become saturated. Even ordinary web servers can start thrashing if the attackers deliberately request material that is unlikely to be cached, causing random disk seeks well in excess of normal load. A typical disk seek takes about 5 ms, so if the attacker manages to request an uncached file each time, normal users and attackers together are capped at roughly 200 requests per second.
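
To spell out the arithmetic:

    # Back-of-the-envelope: if every request misses the cache and costs one
    # full disk seek, the disk itself caps throughput for everyone.
    seek_time_s = 0.005            # ~5 ms average seek on a spinning disk
    ceiling = 1 / seek_time_s
    print(ceiling)                 # 200.0 requests/sec, shared by attackers
                                   # and legitimate users alike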


"The information systems are a business resource that need to be protected in addition to the information those systems house. Denial of service demonstrates an exploitable flaw in the security of those systems."

Hopefully I've gotten my point across that being vulnerable to DDoS doesn't imply a security vulnerability. As Soulbender already stated: "Availability != security."

I'd gladly discuss any usable ideas you have, but DDoS isn't as easy to solve as you make it out to be.

Reply Parent Score: 2

jabbotts Member since:
2007-09-06

If you get blown off the network by a flood your technology cannot deal with at all, then fair enough. The issue is failing to mitigate the risk of denial of service and getting blown off because you decided to ignore it outright: "DDoS isn't our responsibility, and even if it was, we'd just get hit with volumes that our physical network medium can't even handle."

"Availability != Security" is what I really keep tripping over. Encase I'm reading it wrong:

I would agree that availability does not mean one is secure. I would not agree that availability is not a security concern.

If your systems are getting hammered by malicious intent, maybe you need an IPS on the line to defend your systems.

If your webform is chewing up your server resources, maybe you need some throttling in place.

If the denial of service is used as misdirection or cover for a break-in, that is most definitely a security issue.

If we refer to IBM's ten principles of secure software design, which work equally well as ten strong principles on which to base your wider system security, denial of service seems to fall under:

Provide defense in depth - provide redundant security solutions should one layer fail, and redundant systems should one system fail.

Secure failure - i.e., have your website degrade gracefully instead of allowing it to simply consume the system's resources.

Compartmentalization - try to keep a denial of service against one system from taking out other systems.

If we have a horde of random addresses cooperating to keep the server busy, throttle them so the server hardware can at least keep up rather than becoming completely unusable. Block all but known good addresses if you're in a situation where the public service is secondary to the specific clients/partners who use it. Drop an IPS in front of the box and let it help manage the hit.
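
Roughly, I'm picturing something like this sketch (Python, with hypothetical partner addresses and a made-up request budget): known partners always get through, and everyone else shares a global budget sized to what the hardware can sustain.

    import time

    KNOWN_GOOD = {"198.51.100.7", "203.0.113.20"}   # hypothetical partner IPs
    GLOBAL_BUDGET = 200   # requests/second the hardware can sustain (made up)

    window_start = time.time()
    used = 0

    def admit(ip):
        global window_start, used
        if ip in KNOWN_GOOD:
            return True          # partners always get through
        now = time.time()
        if now - window_start >= 1:
            window_start, used = now, 0   # start a new one-second window
        if used < GLOBAL_BUDGET:
            used += 1
            return True
        return False             # shed excess load so the box stays usable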

I mean, if you've done what you can to mitigate denial of service attacks and your upstream provider is literally overrun, then fair enough. If you simply discount denial of service as "not a security concern", then: security fail.

Reply Parent Score: 2

Alfman Member since:
2011-01-28

jabbotts,

"I would agree that avaiability does not mean one is secure. I would not agree that availability is not a security concern."

Let me put forward the notion that if availability is a security concern, then the internet is not really a suitable medium.

Hypothetically, a country may have a grid of warhead-detecting radars. These radars are considered of paramount importance and demand near-absolute availability. The country brings in a team of security experts to eliminate all possible vulnerabilities. They factor in all possibilities, including spies leaking details of the project (no security by obscurity). Now that they've addressed the security issues, can they rely on the internet to provide the necessary availability?

I expect the answer is "no".

I realize this skews the discussion a little bit, and that you're talking about exploiting code scalability issues, but I think the point still stands: one cannot secure availability on the internet.

"If your systems are getting hammered by malicious intent, maybe you need an IPS on the line to defend your systems."

The problem with DDoS is that no one has intruded onto the system in the typical sense. The attacker is flooding servers with otherwise innocuous traffic.


"If your webform is chewing up your server resources, maybe you need some throttling in place."

How do you keep the throttle from affecting normal users?

"If the denial of service is caused as a misdirection or cover for a breakin that is most definately a security issue."

Yes, but in general the DDoS *is* the damage, not a cover for some other nefarious activity.

"If we refer to IBM's ten principles of secure software design..."

You have me at a disadvantage here; I've never heard of them.


"If we have a hode of random addresses cooperating to keep the server busy; throttle them so the server hardware can at least keep up rather than become completely unusable."

Well, Apache tends to respond with error messages when it gets overloaded. Is that what you mean by degrading gracefully? If not, then what do you mean?

"Block all but known good addresses if your in such a situation the public service is secondary to specific clients/partners who use it."

This will block legitimate users too, but my bigger question is how to put it into practice. Would you envision a process that heuristically scans web server logs for IP addresses and then loads them into iptables? Something more sophisticated? The list could become overwhelmingly large. What if bad IPs make it through the whitelist, or the addresses are spoofed?
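
For concreteness, I imagine something like this crude sketch (Python; the log path and threshold are made up), which inherits every problem above - a huge rule list, spoofed addresses, and NAT collateral damage:

    from collections import Counter

    LOG = "/var/log/apache2/access.log"   # hypothetical log location
    THRESHOLD = 1000    # hits before an address is blocked (made up)

    counts = Counter()
    with open(LOG) as f:
        for line in f:
            fields = line.split()
            if fields:
                counts[fields[0]] += 1   # common log format: client IP is field 1

    for ip, n in counts.items():
        if n > THRESHOLD:
            # Emit a standard iptables drop rule for review before applying.
            print(f"iptables -A INPUT -s {ip} -j DROP")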

"Drop an IPS in front of the box and let it help manage the hit."

To the extent that it can determine which requests are legitimate, that's great, but in practice it can be impossible to tell; an IPS has even less information to go by than the application server.


"I mean, if you've done what you can to mitigate denial of service attacks and your upstream provider is litterally over-run then fair enough. If you simply discount denial of service as 'not a security concern' then; security fail."

That's the opposite of what I'm claiming. It's an "availability fail", but the security is still intact.

I guess we're just arguing semantics here, but I'd rather the media distinguish between actual security failings and denial-of-service-related downtime. Otherwise we'd start hearing about "security flaws" every time a company's servers were overloaded.

Reply Parent Score: 2