Linked by Thom Holwerda on Fri 17th Jun 2011 18:49 UTC
Privacy, Security, Encryption Oh boy, what do we make of this? We haven't paid that much attention to the whole thing as of yet, but with a recent public statement on why they do what they do, I think it's about time to address it. Yes, Lulz Security, the hacking group (or whatever they are) that's been running amok on the web lately.
RE: Proactive testing - depends on the test
by jabbotts on Sun 19th Jun 2011 12:16 UTC in reply to "Proactive testing"
jabbotts
Member since:
2007-09-06


Proactive testing is just proactive testing, it doesn't say anything about the security of a system.


You think it's better to wait for a malicious third party to test your systems for you? Proactive testing can, at minimum, give you an indication of your system's effective security posture. Properly done, it includes addressing discovered issues and retesting to discover new ones. That would be the "proactive" part of it. If proactive testing is not saying anything about your system's security, you need to fix your testing methodology.

Automated testing is also very much a part of proactive testing. I'd say it's like the relationship between signature- and heuristics-based AV: the signatures catch the recognizable stuff, and the heuristics catch what isn't recognizable yet. Run the automated vuln-assessment tools for the signatures they recognize, then follow up with a manual vulnerability assessment that brings the creativity and flexibility of a skilled human.
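To make that AV analogy concrete, here's a toy sketch in Python. The "signatures" and scoring thresholds are entirely invented for illustration; real scanners are far more sophisticated, but the split is the same: exact matching catches the known, trait scoring catches the novel.

```python
# Toy sketch of the signature-vs-heuristic split. The payload strings
# and scoring rules below are illustrative, not real AV/scanner data.

KNOWN_BAD = {"<script>alert(", "' OR '1'='1"}  # "signatures": exact known patterns

def signature_scan(payload: str) -> bool:
    """Catch only inputs containing a pattern we already recognize."""
    return any(sig in payload for sig in KNOWN_BAD)

def heuristic_scan(payload: str) -> bool:
    """Score suspicious traits; can flag variants no signature matches."""
    score = 0
    if "--" in payload or ";" in payload:               # comment/statement tricks
        score += 1
    if any(kw in payload.upper() for kw in ("UNION", "SELECT", "DROP")):
        score += 2
    return score >= 2

novel = "1' UNION SELECT password FROM users--"
print(signature_scan(novel))  # False: no exact signature matches this variant
print(heuristic_scan(novel))  # True: flagged on suspicious traits instead
```

The point of the sketch: the novel payload sails past the signature list, and only the trait-based check catches it, which is why you still need the manual, creative follow-up on top of the automated pass.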


So even if the tool found a problem like a SQL-injection, the tool or user of the tool might not even have noticed it.


Bingo. "Might not even have noticed it". If your admin or auditor is a hacker, though, they will indeed notice it. They will be looking for it. They are self-directed learners who think "hmm... what can I do with this beyond its intended purpose?" by default.
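For readers who haven't seen the SQL-injection class mentioned above, here is a minimal sketch using an in-memory SQLite table. The table, column names, and data are made up for illustration; the contrast between string concatenation and a parameterized query is the real point.

```python
# Minimal SQL-injection demonstration against a throwaway in-memory
# SQLite table. Table/column names and data are invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name: str):
    # String concatenation: attacker input becomes part of the SQL itself.
    return db.execute(
        "SELECT secret FROM users WHERE name = '" + name + "'"
    ).fetchall()

def lookup_safe(name: str):
    # Parameterized query: input is bound as data, never parsed as SQL.
    return db.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "nobody' OR '1'='1"
print(lookup_unsafe(payload))  # [('s3cret',)] -- the OR clause leaks every row
print(lookup_safe(payload))    # [] -- the payload matches no actual user
```

An automated scanner can flag the concatenation pattern; a hacker-minded tester is the one who asks "what else can I put in that field?" and finds the variants the tool misses.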


No, pentesting and so on is to find the most obvious problems.


Vulnerability assessment says "someone could possibly open that door if left unlocked." Pentesting says "That door is indeed unlocked, here is what one is able to do in the room behind it if you don't lock the door." A vulnerability assessment is a list of potential problems one should address. A pentest provides that list along with confirmation that they are exploitable and evidence as to why you should fix them.
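The door metaphor above can be sketched as a toy model; everything here (the service, version string, and default credential) is invented, but it shows the structural difference: an assessment reports that the door might be unlocked, while a pentest actually turns the handle.

```python
# Toy model: assessment *reports* potential weaknesses, pentest *confirms*
# them. The service, version, and credential below are entirely fictional.

SYSTEM = {"ssh": {"version": "OpenSSH_4.3", "password": "admin"}}

VULNERABLE_VERSIONS = {"OpenSSH_4.3"}   # "someone could possibly open that door"

def assess(system):
    """Vulnerability assessment: flag potential problems, touch nothing."""
    return [svc for svc, info in system.items()
            if info["version"] in VULNERABLE_VERSIONS]

def pentest(system):
    """Pentest: confirm each finding by actually getting in."""
    findings = {}
    for svc in assess(system):
        # Simulate trying a default credential against the flagged service.
        if system[svc]["password"] == "admin":
            findings[svc] = "confirmed: logged in with default credentials"
    return findings

print(assess(SYSTEM))   # ['ssh'] -- a list of potential problems
print(pentest(SYSTEM))  # the same list, plus proof it is exploitable
```

Note the pentest calls the assessment first: the confirmed-exploit report contains the potential-problem list, which is exactly the relationship described above.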

If all you tasked your internal team with, or contracted a third party for, is finding a single way into the system, then sure. You put that limitation on them in the first place, though; you're designing your test to fail. Limiting the scope of testing, ordering a pentest when what you wanted was a vulnerability assessment, or ordering a vulnerability assessment when what you wanted was a pentest are all great ways to ensure failure.

You could alternatively contract the third party to find all the ways in they can, what they can do once in and ways they are able to maintain access during time permitted.

With an internal pentest team, you can run a proper testing cycle: pentest, harden, verify; pentest, harden, verify. Now you're not just finding a single vulnerability and calling it a day.

If your test only finds the most obvious problems and you're not repeating the test cycle to find the next most obvious problems, you're doing it wrong.


I'm very certain banks do those previously mentioned security checks.


And that's exactly the problem. You are very certain your bank is doing the proactive testing; do you know for sure that they actually are, though?

Everyone was certain Sony, a huge tech company, knew how to manage its servers and networks. How did that work out? Lack of network filtering, servers left without the latest updates (or even remotely recent updates), customer data stored unencrypted. These are things any competent pentest would have identified. Any responsible company, having had them identified, would have addressed them promptly.

Everyone was certain that having over a hundred million PSN and SOE customers' private information exposed would convince them to address the discovered issues and check for similar issues across all other company systems. Everyone was certain that Sony's PR claims of having addressed security issues meant they had actually implemented changes. How did all that work out for Sony when, the next week, the same weaknesses were exposed in other systems?

Everyone was certain Facebook knew how to implement its software securely. Facebook must be testing its systems continually, right? So what of passing authentication tokens in URLs, which had left every Facebook user open to exploit since 2007? (That one was discovered around May of this year, 2011.)

And financial companies, banks and such, must be doing the previously mentioned security checks, right? Heartland Payment Systems, 2009: 40 million accounts exposed.

Banks are in the business of making money. They are notorious for "minimizing expenses" any way they can get away with it. "We'll spend the money to fix that if it proves to be a problem" is the mainstay. If it's cheaper to live with the losses than to fix the problem, they're going to continue living with the losses.

I wish the market success of a company were an indication of its responsible management of secure systems; it's not. More often, it's the opposite.

Let's toss out another example for fun. RSA, *the* security company. When governments, militaries and billion-dollar companies need security, they go to RSA. RSA's SecurID database has been compromised, and everyone who uses SecurID for authentication is screwed. RSA has actually said, in effect, "make sure you are using strong passwords for the second of your two authentication factors, because the SecurID part of it isn't stopping anyone."

But how could this happen? We were all certain that RSA would be doing testing. It was a spear-phishing email. How are automated vulnerability-assessment tools and peer code review going to identify the need for staff training against social-engineering attacks?

The string of successful company breaches resulting from the SecureID breach is ongoing and affecting such sensitive information as new weapon designs copied from government contractors.


If you want real security, there is only one solution to have a 3rd party look at the code. All the code.


That, like automated testing, is very much a part of it. Peer review can do a lot to remove bugs from software. It's not the one magic-cure solution on its own, though.

Consider some of the vulnerabilities in Windows which exist because the code is correct: intentional functionality like DLL relative search paths. Peer review and automated code audits were never going to find that problem, because the code was implemented exactly as intended. Discovering and demonstrating that vulnerability took human creativity thinking beyond the software design document. It took someone testing the system after the source code was compiled to a running binary.
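The search-order problem can be sketched abstractly. This toy resolver (paths and filenames invented) behaves like a loader resolving a bare library name against an ordered search path: every line of it is "correct", yet an attacker-writable directory earlier in the path quietly shadows the legitimate library.

```python
# Sketch of the search-order issue behind DLL hijacking. The loader logic
# is "correct"; the vulnerability is a property of the search order plus
# an attacker-writable directory. All paths here are invented.

def resolve(name, search_path, filesystem):
    """Return the first match for `name` along the search path --
    exactly how loading a library by bare (relative) name behaves."""
    for directory in search_path:
        if name in filesystem.get(directory, set()):
            return directory + "/" + name
    return None

filesystem = {
    "C:/Windows/System32": {"helper.dll"},        # the legitimate copy
    "C:/Users/victim/Downloads": {"helper.dll"},  # attacker-planted copy
}

# The directory of the opened document is searched before the system dir.
search_path = ["C:/Users/victim/Downloads", "C:/Windows/System32"]

print(resolve("helper.dll", search_path, filesystem))
# -> C:/Users/victim/Downloads/helper.dll : the planted copy wins, even
#    though every line of the resolver does exactly what it was designed to.
```

No static audit of `resolve` flags a bug, because there isn't one; only someone probing the running system with attacker-shaped inputs sees the consequence.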

Automated code auditing to find recognizable bugs in your source code.

Peer review to find bugs the automated audit tool missed.

Automated vulnerability assessment to find recognized weak points in your system's security.

Manual vulnerability assessment to find weaknesses missed by the automated tools.
