There’s a Bounty on Your Applications

In the last year there have been a number of organisations offering rewards, or 'bounty' programs, for discovering and reporting bugs in applications. Mozilla currently offers up to $3,000 for critical or high-severity bug identification, Google pays out $1,337 for flaws in its software, and Deutsche Post is currently sifting through applications from 'ethical' hackers to approve teams who will go head to head and compete for its Security Cup in October. The winning team can hold aloft the trophy if they find vulnerabilities in its new online secure messaging service – that's comforting to current users.

So, are these incentives the best way to make sure your applications are secure? At my company, Idappcom, we'd argue that these sorts of schemes are nothing short of a publicity stunt and, in fact, can be potentially dangerous to an end user's security.

One concern is that inviting hackers to trawl all over a new application prior to launch just grants them more time to interrogate it and identify weaknesses which they may decide are more valuable if kept to themselves. Once the first big announcement is made detailing who has purchased the application, and where and when the product is to go live, the hacker can use this insight to breach the system and steal the corporate jewels.

A further worry is that, while on the surface it may seem that these companies are being open and honest, if a serious security flaw were identified, would they raise the alarm and warn people? It's my belief that they'd fix it quietly, release a patch and hope no-one hears about it. The hacker, meanwhile, would happily claim the reward, promise a vow of silence and then 'sell' the details on the black market – leaving any user with a great big security void in their defences, just waiting to be exploited, while the patch is being developed or if they fail to install the update.

Sometimes it's not even a flaw in the software that can cause problems. If an attack is launched against the application, causing it to fail and reboot, then this denial of service (DoS) attack can be just as costly to your organisation as if the application were breached and data stolen.

A final word of warning: even if the application isn't hacked today, it doesn't mean attackers won't be able to breach it tomorrow. Windows Vista is one such example. Microsoft originally hailed it as the most secure operating system it had ever made, and we all know what happened next.

A proactive approach to security

IT is never infallible, and for this reason penetration testing is often heralded as the hero of the hour. That said, technology has moved on and, while still valid in certain circumstances, historical penetration testing techniques are often limited in their effectiveness. Let me explain – a traditional test is executed from outside the network perimeter, with the tester seeking applications to attack. However, as these assaults all come from a single IP address, intelligent security software will recognise the behaviour because the IP doesn't change. Within the first two or three attempts the source address is blacklisted or firewalled, and all subsequent traffic is immaterial, as every activity is seen and treated as malicious.
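To illustrate why a single-source test runs out of road so quickly, here is a minimal Python sketch, with purely hypothetical thresholds and addresses, of a perimeter device that counts suspicious probes per source address and blacklists the sender after a handful of hits – after which everything else the tester sends is simply discarded.

# Minimal sketch (hypothetical threshold): why a single-IP scan is quickly blacklisted.
from collections import defaultdict

BLOCK_THRESHOLD = 3                       # assumed: block after three suspicious probes
blocked = set()
suspicious_hits = defaultdict(int)

def inspect(source_ip: str, looks_malicious: bool) -> str:
    """Return 'blocked', 'flagged' or 'allowed' for a single incoming probe."""
    if source_ip in blocked:
        return "blocked"                  # all later traffic from this source is dropped
    if looks_malicious:
        suspicious_hits[source_ip] += 1
        if suspicious_hits[source_ip] >= BLOCK_THRESHOLD:
            blocked.add(source_ip)        # source address is blacklisted outright
            return "blocked"
        return "flagged"
    return "allowed"

# A traditional pen test from one address: the fourth and later probes never land.
for attempt in range(6):
    print(attempt + 1, inspect("203.0.113.10", looks_malicious=True))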

An intelligent proactive approach to security

There isn't one single piece of advice that is the answer to all your prayers. Instead you need two measures, and both need to be conducted simultaneously if your network is to perform in perfect harmony:

• Application testing combined with intrusion detection

The reason I advocate application testing is that, if you have a public-facing application and it were compromised, the financial impact on the organisation could potentially be fatal. There are technologies available that can test your device or application with a barrage of millions upon millions of iterations, using different broken or mutated protocols and techniques, in an effort to crash the system. If a hacker were to do this and caused it to fall over or reboot, this denial of service could be at best embarrassing but at worst detrimental to your organisation.
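As a rough illustration of that kind of barrage testing – emphatically not a production fuzzer – the Python sketch below fires randomly mutated requests at an assumed test host and periodically checks whether the service is still answering. The target address, base payload and iteration count are placeholders; real tools run millions of iterations against protocols they model in depth.

# A minimal sketch of mutation testing against an assumed, non-production test host.
import random
import socket

TARGET = ("192.0.2.50", 8080)             # hypothetical test host; never fuzz live systems
BASE_REQUEST = b"GET / HTTP/1.1\r\nHost: example.test\r\n\r\n"

def mutate(data: bytes) -> bytes:
    """Flip a few random bytes to produce a broken or mutated protocol message."""
    buf = bytearray(data)
    for _ in range(random.randint(1, 8)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def still_alive() -> bool:
    """Crude liveness probe: can we still open a TCP connection to the target?"""
    try:
        with socket.create_connection(TARGET, timeout=2):
            return True
    except OSError:
        return False

for i in range(10_000):                   # real tools run millions of iterations
    try:
        with socket.create_connection(TARGET, timeout=2) as s:
            s.sendall(mutate(BASE_REQUEST))
    except OSError:
        pass                              # a single dropped connection alone is not a crash
    if i % 500 == 0 and not still_alive():
        print(f"service stopped responding after ~{i} iterations")
        break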

Intrusion detection, capable of spotting zero-day exploits, must be deployed to audit and test the recognition and response capabilities of your corporate security defences. It will substantiate not only that the network security is deployed and configured correctly, but that it's capable of protecting the application you're about to make live, or have already launched, irrespective of the service it supports – be it email, a web service, anything. The device looks for characteristics in behaviour to determine whether an incoming request to the product or service is likely to be good and valid, or indicative of malicious behaviour. This provides not only reassurance but all-important proof that the network security is capable of identifying and mitigating the latest threats and security evasion techniques.
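The sketch below gives a deliberately simplified flavour of that behavioural inspection: it scores a raw request against a few traits commonly associated with attack traffic. The patterns, size limit and threshold are illustrative assumptions only, not a real detection ruleset.

# Simplified behaviour-based classification of an incoming request (illustrative rules only).
import re

SUSPICIOUS_PATTERNS = [
    re.compile(rb"\.\./"),                 # path traversal
    re.compile(rb"(?i)union\s+select"),    # SQL injection probe
    re.compile(rb"(?i)<script"),           # reflected XSS attempt
    re.compile(rb"%00"),                   # null-byte evasion trick
]

def classify(request: bytes) -> str:
    """Label a raw request 'likely valid' or 'likely malicious'."""
    score = sum(1 for p in SUSPICIOUS_PATTERNS if p.search(request))
    if len(request) > 8192:                # oversized requests add to the score
        score += 1
    return "likely malicious" if score >= 1 else "likely valid"

print(classify(b"GET /index.html HTTP/1.1\r\nHost: shop.example\r\n\r\n"))
print(classify(b"GET /../../etc/passwd HTTP/1.1\r\nHost: shop.example\r\n\r\n"))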

While we wait with bated breath to see who will lift Deutsche Post's Security Cup, we mustn't lose sight of our own challenges. My best advice would be that, instead of waiting for the outcome and relying on others to keep you informed of vulnerabilities in your applications, you must regularly inspect your defences to make sure they're standing strong with no chinks. If you don't, the bounty may as well be on your head.


About Author
Haywood's computing history began writing programs for the Sinclair ZX80 and the Texas Instruments TI99/4A. During the early 1990s, Haywood worked for Microsoft in a cluster four team supporting its emerging products, then at internet technologies firm NetManage and Internet Security Systems (ISS).

Leaving ISS in 2002, Haywood founded his first network security company, Blade Software, pioneering the development of the ground-breaking "stack-less" network security assessment and auditing technology. It was this technology that became the foundation for the company's IDS and Firewall Informer products, winning the coveted Secure Computing "Pick of Product" award for two years running, with a full five-star rating in every category.

In 2004, Haywood founded his second network security company, "Karalon". It was during this time that he developed a new network-based security auditing and assessment technology with the aim of providing a system and methodology for auditing the capabilities of network-based security devices, with the ability to apply "security rules" to fine-tune intrusion detection and prevention systems.

2009 saw Haywood join forces with Idappcom Ltd. Haywood is currently the Chief Technology Officer for the company and is guiding its future development of advanced network-based security auditing and testing technologies, as well as assisting organisations to achieve the highest levels of network threat detection and mitigation.
