Penetration testing is one of the bulwarks of an application security program: get an expert tester to simulate an attack on your system, and see if they can hack their way in. But how effective is application penetration testing, and what should you expect from it?
Gary McGraw in Software Security: Building Security In says that
"Passing a software penetration test provides very little assurance that an application is immune to attack?"
This is because
"It's easy to test whether a feature works or not, but it is very difficult to show whether or not a system is secure enough under malicious attack. How many tests do you do before you give up and decide "secure enough"?"
Just as you can't "test quality in" to a system, you can't "test security in" either. It's not possible to exhaustively pen test a large system — it would take too long and it wouldn't be cost effective to even try. The bigger the system, the larger the attack surface, the more endpoints and paths through the code, the less efficient pen testing is — there's too much ground to cover, even for a talented and experienced tester with a good understanding of the app and the help of automated tools (spiders and scanners and fuzzers and attack proxies). Because most pen testers are hired in and given only a few weeks to get the job done, they may not even have enough time to come up to speed on the app and really understand how it works — as a result they are bound to miss testing important paths and functions, never mind complete a comprehensive test.
This is not to say that pen testing doesn't find real problems — especially if a system has never been tested before. As Dr. McGraw points out, pen testing is especially good for finding problems with configuration (like failing to restrict URL access in a web app) and with the deployment stack. Pen tests are also good for finding weaknesses in error handling, information leaks, and naïve mistakes in authentication and session management, using SSL wrong, that kind of thing. Pen testers have a good chance of finding SQL injection vulnerabilities and other kinds of injection problems, and authorization bypass and privilege escalation problems if they test enough of the app.
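To make the injection case concrete, here is a minimal sketch (my own illustration, not something from a real pen test report) of the kind of SQL injection bug a tester probes for, and the parameterized-query fix, using Python's built-in sqlite3 module:

    import sqlite3

    # Tiny in-memory database standing in for a real application backend.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

    def find_user_vulnerable(name):
        # Vulnerable: attacker-controlled input is concatenated into the SQL string,
        # so a payload like ' OR '1'='1 returns every row in the table.
        query = "SELECT name, role FROM users WHERE name = '" + name + "'"
        return conn.execute(query).fetchall()

    def find_user_safe(name):
        # Fixed: a parameterized query treats the input as data, not as SQL.
        return conn.execute(
            "SELECT name, role FROM users WHERE name = ?", (name,)
        ).fetchall()

    payload = "' OR '1'='1"
    print(find_user_vulnerable(payload))  # leaks both rows
    print(find_user_safe(payload))        # returns nothing

A pen tester usually finds this from the outside with a payload like the one above; a code reviewer finds it by spotting the string concatenation.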
One clear advantage of pen testing is that whatever problems the pen tester finds are real. You don't have to fight the "exploitability" argument with developers for bugs found in pen testing — a pen tester found them, ergo they are exploitable, so now the argument will turn to "how hard it was to exploit". But there are many kinds of problems that are much harder to test for at the system level like this — flaws in secure design, back-end problems with encryption or privacy, concurrency bugs. The best way (maybe the only way) to find these problems is through a design review, code review, or static analysis.
In the end, no matter how good the test was, there's no way to know what problems might still be there and weren't found — you can only hope that the pen testers found at least some of the important ones. So if pen testing by itself can't prove that the system is safe and secure, what's the point?
Is it more important to pass the test?
Pen testing is often done as a late-stage release gate — an operational acceptance test required before launching a new system. Everyone is under pressure to get the system live. Teams work hard to pass the test and get the checkmark in order to make everyone happy and get the job done. But, just like in school, focusing on just passing the test doesn't always encourage the right behavior or results.
In this situation, it's natural for a development team to offer as little cooperation and information as possible to the pen testers — it's clearly not in the development team's interest to make it easy for a pen tester to break the system. While this may more closely mimic the "real world" — you wouldn't give real attackers information on how to attack your system, at least not knowingly — you lose most of the value of pen testing the app. Passing a time-boxed black box pen test like this a few weeks before launch doesn't mean much. It's what Gary McGraw calls "pretend security" — going through the motions, making it look like you've done the responsible thing.
Or to really learn?
The real point of pen testing is to learn: to get expert help in understanding more about the security strengths and weaknesses in your system of course, but more importantly, to learn more about the security posture of your development team and the effectiveness of the security controls and practices in the SDLC that you are following.
You've done everything that you can think of to design and build a secure system. Now you want to see what will happen when the system is under attack. You want to learn more about the bad guys, the black hats: how they think, what tools they use, what information and problems they look for, the soft spots. What they see when they attack your system. And what you see when the system is under attack — can you detect that an attack is in progress, do your defenses hold up? You also want to learn more about changes in the threat landscape — new vulnerabilities, new tools, and new attacks that you need to protect against.
To do this properly, you need to get the best pen testing help that you can find and afford, whether from inside your company or from a consultancy. Then be completely transparent. Don't waste their time or your money. Give the pen testers whatever information and help they need: information on the architecture and platform, the deployment configuration, how the app works. Train them so that they understand the core application functions and navigation. Show them the ways in and where to go when they get in. Share information on past pen test results if you have them, on problems found in your own testing and reviews, and on what parts of the app you are worried about. If they get stopped by upfront protection (a WAF or an IPS, for example), good for you. Then, if you can, disable this protection for them to see what weaknesses they can find behind it.
Some pen testers want the source code too, so that they can review the code and look for other problems. This can be a problem for many organizations (it's your, or your customers', intellectual property and you need to protect it), but if you can't trust the pen testers with the source code, what are you doing letting them try to break into your system?
Look carefully at the problems that they find
Whatever problems they find are real, exploitable vulnerabilities — bugs that need to be taken seriously. Make sure that you understand the findings — what they mean, how and where they found the problems, how to test for them or check for them yourself. How easy was it — did you fail basic checks in an automated scanner, or did they have to do some super-ninja karate to break the code? What findings surprised you? Scared you?
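One practical way to "check for them yourself" is to turn each finding into a repeatable test that runs with the rest of your suite. A rough sketch, assuming a hypothetical lookup_user function standing in for whatever data access code the finding hit:

    import sqlite3

    def lookup_user(conn, name):
        # The fixed, parameterized version of the query the pen testers exploited.
        return conn.execute(
            "SELECT name FROM users WHERE name = ?", (name,)
        ).fetchall()

    def test_injection_payload_returns_nothing():
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice')")
        # The exact payload from the pen test report should come back empty,
        # not leak rows, now and every time the suite runs.
        assert lookup_user(conn, "' OR '1'='1") == []

    if __name__ == "__main__":
        test_injection_payload_returns_nothing()
        print("injection regression check passed")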
Then get the pen testers and development team to agree on risks. If you think of risks in DREAD terms (Damage, Reproducibility, Exploitability, Affected Users, Discoverability) the pen tester can explain Exploitability, Reproducibility and Discoverability — after all, they discovered and exploited the bugs, and they should know how easy the problems are to reproduce. Then you need to understand and assess Damage and the impact on Affected Users to decide how important each bug is and what needs to be fixed.
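If it helps to make that hand-off explicit, here is a rough sketch (my own convention, not something the DREAD model mandates) of recording who supplies which ratings and rolling them up, using the common score-each-category-from-1-to-10-and-average scheme:

    from dataclasses import dataclass

    @dataclass
    class DreadScore:
        damage: int            # business/team assessment, 1 (low) to 10 (high)
        reproducibility: int   # from the pen testers
        exploitability: int    # from the pen testers
        affected_users: int    # business/team assessment
        discoverability: int   # from the pen testers

        def overall(self) -> float:
            # One common convention: average the five ratings. The scale and the
            # thresholds for "fix now" vs "fix later" are whatever your team agrees on.
            return (self.damage + self.reproducibility + self.exploitability
                    + self.affected_users + self.discoverability) / 5

    sqli = DreadScore(damage=9, reproducibility=8, exploitability=7,
                      affected_users=9, discoverability=6)
    print(f"Overall DREAD score for the SQL injection finding: {sqli.overall():.1f}")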
Track everything that the pen testers found — preferably in the same bug tracking system that you use to track the other bugs that the team needs to fix. Fix serious vulnerabilities of course as soon as you can, as you would with any other serious bug. But take some time to think about why the problem was there to be found in the first place — what did you miss in selecting and deploying the platform, in design, coding, reviews and testing. And don't just look at each specific finding and stop there — after all, you know that the pen tester didn't and couldn't find every bug in the system. If they found one or two or more cases of a vulnerability, there are bound to be more in the code: it's your job to find them.
Feed all of this back into how you design and build and deploy software. Not just this system, but all the systems that you are responsible for. You've been given a chance to get better, so make the best of it. Before an attack happens for real.