Linked by Thom Holwerda on Fri 17th Jun 2011 18:49 UTC
Privacy, Security, Encryption Oh boy, what do we make of this? We haven't paid much attention to the whole thing as of yet, but with a recent public statement on why they do what they do, I think it's about time to address this thing. Yes, Lulz Security, the hacking group (or whatever they are) that's been causing quite a bit of havoc on the web lately.
Thread beginning with comment 477754
RE: Proactive testing - depends on the test
by jabbotts on Sun 19th Jun 2011 12:16 UTC in reply to "Proactive testing"

"Proactive testing is just proactive testing, it doesn't say anything about the security of a system."

You think it's better to wait for a malicious third party to test your systems for you? Proactive testing can, at minimum, give you an indication of your system's effective security posture. Properly done, it includes addressing discovered issues and retesting to discover new ones. That would be the "proactive" part of it. If proactive testing is not saying anything about your system's security, you need to fix your testing methodology.

Automated testing is also very much a part of proactive testing. I'd say it's like the relationship between signature- and heuristics-based AV: the signatures catch the recognizable stuff and the heuristics catch what is not recognizable. The automated vuln assessment tools cover the signatures they recognize, followed up by a skilled manual vuln assessment with the creativity and flexibility of a skilled human.
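To make the analogy concrete, here's a toy sketch in Python of the two layers. The signature list and the meta-character scoring rule are invented purely for illustration, not taken from any real scanner:

```python
# Toy illustration of signature vs. heuristic detection layering.
# SIGNATURES and the scoring rule below are made up for this example.
SIGNATURES = {"' OR '1'='1", "<script>", "../../etc/passwd"}

def signature_match(payload):
    """Signature layer: catch exact, already-recognized bad patterns."""
    return any(sig in payload for sig in SIGNATURES)

def heuristic_score(payload):
    """Heuristic layer: score by density of suspicious meta-characters."""
    suspicious = sum(payload.count(c) for c in "'\";<>|&")
    return suspicious / max(len(payload), 1)

def flag(payload, threshold=0.15):
    """Flag a payload if either layer trips."""
    return signature_match(payload) or heuristic_score(payload) > threshold
```

The point of the layering is the same as with AV: the signature layer is cheap and precise on known attacks, while the heuristic layer trades false positives for coverage of things nobody has catalogued yet.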

"So even if the tool found a problem like a SQL injection, the tool or user of the tool might not even have noticed it."

Bingo. "Might not even have noticed it." If your admin or auditor is a hacker, they will indeed notice it, though. They will be looking for it. They are self-directed learners who think in terms of "hm... what can I do with this beyond its intended purpose?" by default.
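For the SQL injection case above, the thing a hacker-minded tester notices is roughly this. A minimal self-contained sketch using Python's sqlite3; the table and payload are made up for the example:

```python
import sqlite3

# Hypothetical app database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # Vulnerable: user input is concatenated straight into the SQL text.
    return conn.execute(
        "SELECT secret FROM users WHERE name = '%s'" % name).fetchall()

def lookup_safe(name):
    # Fixed: parameterized query; the input stays data, never becomes SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

# The classic probe a tester tries by hand: turns the WHERE clause
# into "name = '' OR '1'='1'", which matches every row.
payload = "' OR '1'='1"
```

An automated scanner may or may not trip over this; the human tester who wonders "what happens if I put a quote in the name field?" will.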

"No, pentesting and so on is to find the most obvious problems."

Vulnerability assessment says "someone could possibly open that door if left unlocked." Pentesting says "That door is indeed unlocked, here is what one is able to do in the room behind it if you don't lock the door." A vulnerability assessment is a list of potential problems one should address. A pentest provides that list along with confirmation that they are exploitable and evidence as to why you should fix them.

If all you tasked your internal team with, or contracted a third party for, is a single way into the system, then sure. You put that limitation on them in the first place, though. You're designing your test to fail. Limiting the scope of testing, ordering a pentest when what you wanted was a vulnerability assessment, or ordering a vulnerability assessment when what you wanted was a pentest are all great ways to ensure failure.

You could alternatively contract the third party to find all the ways in that they can, what they can do once in, and the ways they are able to maintain access during the time permitted.

With an internal pentest team, you can run a proper testing cycle: pentest, harden, verify; pentest, harden, verify. Now you're not just finding a single vulnerability and calling it a day.

If your test is only to find the most obvious problems and you're not repeating the test cycle to find your next most obvious problems, you're doing it wrong.

"I'm very certain banks do those previously mentioned security checks."

And that's exactly the problem. You are very certain your bank is doing the proactive testing; do you know for sure that they actually are, though?

Everyone was certain Sony, a huge tech company, knew how to manage its servers and networks. How did that work out? Lack of network filtering, servers left without the latest updates (or even remotely recent updates), customer data stored unencrypted. These are things any competent pentest would have identified. Any responsible company, having had those identified, would have addressed them promptly.

Everyone was certain that having over a hundred million PSN and SOE customers' private information exposed would convince them to address discovered issues and check for similar issues across all other company systems. Everyone was certain that Sony's PR claims that they had addressed security issues meant they had actually implemented changes. How did all that work out for Sony when, the next week, the same weaknesses were exposed in other systems?

Everyone was certain Facebook knew how to implement its software securely. Facebook must be testing its systems continually, right? So what of passing authentication tokens in URLs, which has left every Facebook user open to exploit since 2007? (That one was discovered around May of this year, 2011.)

And financial companies, banks and such? They must be doing the previously mentioned security checks. Heartland Payment Systems, 2009: 40 million accounts exposed.

Banks are in the business of making money. They are notorious for "minimizing expenses" any way they can get away with it. "We'll spend the money to fix that if it proves to be a problem" is the mainstay. If it's cheaper to live with the losses than to fix the problem, they're going to continue living with the losses.

I wish the market success of a company were an indication of its responsible management of secure systems; it's not. More often, it's the opposite.

Let's toss out another example for fun. RSA, the security company. When governments, militaries and billion-dollar companies need security, they go to RSA. RSA's SecurID database has been compromised. Everyone who uses SecurID for authentication is screwed. RSA has actually said "uh... make sure you are using strong passwords for the second of the two parts of authentication, because the SecurID part of it isn't stopping anyone."

But how could this happen? We were all certain that RSA would be doing testing. It was a spear-phishing email. How are automated vulnerability assessment tools and peer code review going to identify the need for staff training against social engineering attacks?

The string of successful company breaches resulting from the SecurID breach is ongoing, affecting such sensitive information as new weapon designs copied from government contractors.

"If you want real security, there is only one solution: have a 3rd party look at the code. All the code."

That, like automated testing, is very much a part of it. Peer review can do a lot to remove bugs from software. It's not the one magic-cure solution on its own, though.

Consider some of the vulnerabilities in Windows which exist because the code is correct: intentional functions like relative-path DLL loading. Peer review and automated code audits were not going to find that problem, because the code was implemented as intended. Discovering and demonstrating that vulnerability took human creativity, thinking beyond the software design document. It took someone testing the system after the source code was compiled into a running binary.
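You can model that search-order behaviour without touching Windows at all. A simplified sketch: the first-match-wins order and the directory names below are assumptions for illustration; the real Windows search order is configurable (SafeDllSearchMode, SetDllDirectory and so on):

```python
def resolve_library(name, search_order):
    """First-match-wins lookup, mimicking a relative-path library load.
    `search_order` is a list of (directory, set-of-filenames) pairs."""
    for directory, contents in search_order:
        if name in contents:
            return directory
    return None

# Legacy-style order: the document's directory is searched before
# System32, so an attacker who drops a same-named DLL next to a data
# file wins -- even though the loading code itself is "correct".
order = [
    (r"C:\Users\victim\Downloads", {"mylib.dll"}),  # attacker-controlled
    (r"C:\Windows\System32", {"mylib.dll"}),        # legitimate copy
]
```

Every line of the loading code passes review, because the bug isn't in any line; it's in what the documented search order lets an attacker do with a correct load.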

Automated code auditing to find recognizable bugs in your source code.

Peer review to find bugs the automated audit tool missed.

Automated vulnerability assessment to find recognized weak points in your system's security.

Manual vulnerability assessment to find weaknesses missed by the automated tools.
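As a toy example of what the automated layer in that stack might flag, here's a check for missing HTTP security headers on a captured server response. The header list is illustrative, not exhaustive, and a real scanner checks far more than this:

```python
# Security headers a scanner might expect on a web response
# (illustrative subset, not a complete policy).
EXPECTED_HEADERS = {
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
}

def missing_security_headers(response_headers):
    """Return the expected security headers absent from a response.
    `response_headers` is a mapping of header name to value."""
    present = set(response_headers)
    return sorted(EXPECTED_HEADERS - present)
```

This is exactly the "recognized weak points" layer: cheap, repeatable, and blind to anything that isn't already on its list; which is why the manual pass afterwards still matters.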


Lennie:

OK, I agree on many points, but your post was so long it's probably best to leave most of it as it is.

I was just trying to point out that 3rd-party testing just won't cut it. These too are businesses, and they only have a limited amount of time to spend per site. So I don't think they'll actually find nearly all the problems, which is the intention of such a test.

On the issue of banks:
There is a law in my country which says I cannot do my own pentests on a website; they probably call it something else. :-)

I'm sure as hell not gonna try that on the site of my bank, as that might get me into more trouble than any other site.

I actually did see problems and reported them to the bank before the law was in place. But I got no replies from the bank and nothing changed.

This shows you how good their policies and systems really are.

So I don't trust them either; I just use pen and paper.

At least with pen-and-paper banking it isn't as easy to do one thing and affect tens of thousands of customers at the same time.


jabbotts:

Fair enough. It did get silly long in the end. What can I say, infosec and hacking are topics I could talk all day about. I hear normal people are into sports teams or some such thing. ;)

I'd agree that third-party testing on its own is not a cure-all. It really is something one should do oneself if running a public-facing network is a primary business function, though. If you can hire a sysadmin who can also do a periodic vulnerability assessment and pentest, then do so. If you can afford to staff a full pentest team, even better. If you can afford third-party contractors, then fantastic; they'll have the specialized skills, experience and ten-thousand-dollar software tools (literally, Nessus is around ten grand). Doing no pentesting at all? That's like skipping QA testing in any other product category.

In your country, is that law referring to you being required to have a third party pentest your own website, or does it block you from pentesting websites you do not have authority over? Interesting; I'm actually re-reading relevant laws at the moment, including 18 USC 1029/1030.

Pentesting your bank's website without permission; yeah, don't do that. Some folks can simply spot vulnerabilities without active testing, based on the type of passwords allowed, whether accounts get locked after so many tries, whether the site uses HTTPS and so on. Can they mess with the site using JavaScript (without sending anything back to the servers, of course)?
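Those passive observations turn into a checklist easily enough. A hypothetical sketch; the field names and the 12-character threshold are invented for the example:

```python
def passive_red_flags(login_page):
    """Flag weaknesses visible without any active testing.
    `login_page` is a dict of observations; the keys are hypothetical."""
    flags = []
    if not login_page.get("uses_https", False):
        flags.append("login form not served over HTTPS")
    if login_page.get("max_password_length", 128) < 12:
        flags.append("suspiciously short maximum password length")
    if not login_page.get("locks_after_failed_tries", False):
        flags.append("no lockout after repeated failed logins")
    return flags
```

None of this sends a single byte of attack traffic to the site; it's the kind of thing an observant user notices just by signing up for an account.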

Reporting problems and hearing nothing back; sadly, not surprising. They may have addressed the issues, decided that fixing the problem was more expensive than paying out losses, or ignored the report altogether. Hopefully they did actually do something, even if they didn't respond. Full transparency would have been better though: "here's the problem we had reported; it's been fixed, so now we are making the problem public so others can find and fix it in their systems."
