Who remembers this song, written by Malvina Reynolds, and made famous by Pete Seeger (among others)?
Little boxes on the hillside,
Little boxes made of ticky‐tacky,
Little boxes on the hillside,
Little boxes all the same.
There’s a green one and a pink one
And a blue one and a yellow one,
And they’re all made out of ticky‐tacky
And they all look just the same.
Vulnerability assessments are common in the application security world, but unlike the little boxes from the song, our boxes aren’t quite as colorful. Still, there are choices. There’s a black one, and a white one, and a grey one, and an orange one. Ok, there’s no orange one. I lied. But the others are real, and in general they’re made of sterner stuff than ticky‐tacky (whatever the hell that is).
I say in general, because there’s one exception: The checkbox. If you’re testing your application’s security for the sole purpose of checking off an item in some list, it is indeed very likely that you’re practicing ticky‐tacky (whatever the hell that is) security.
That’s because the bad guys don’t really care whether you’ve completed your checklist. They care about whether your application is actually secure, and they have a lot of time on their hands to find out. That could be because they’re part of a well‐funded foreign government or an organized crime syndicate (if you’re a lucrative or sensitive enough target), or it could be that they have no job or anything better to do, and live in the basement of their mom’s ticky‐tacky (whatever the hell that is) house. In any case, they’ll eventually find the cracks in your you know what (rhymes with wiki‐wacky).
So let’s, um, lift the lid on the other box types.
In a black‐box assessment, the tester will hammer away at your application from the outside, with no prior knowledge of the innards other than what can be gleaned from public sources or reconnaissance. These black‐box assessments are often called application penetration tests (pentests), but a true “penetration test” is not, in most cases, what the customer really wants. A true pentest would focus on compromising the application by any means available (including social engineering attacks against staff or users), and then using that compromise to escalate privilege or “pivot” to attacks on other systems. Such a test goes for depth, not breadth.
A penetration test of this type would most closely resemble what an actual attacker would do. Does that make it a good measure of your security? Not really. It’s limited by the ingenuity, resourcefulness, and available time of the particular tester. Even if they aren’t able to find a way in within the ground rules and time limit of the test, that doesn’t really tell you what will happen when you’re attacked in the wild, where there are no rules or time limits and the nerd‐wolves can run wild (or at least until mama wolf calls them up for supper). And if the tester does find a hole during this type of test, they’ll likely spend the rest of their time seeing how far they can go with it. Which means you may never find out about all of your other holes.
Most customers, even when they ask for a “pentest”, are actually looking for another kind of black‐box
assessment. They really want a “vulnerability assessment”. Here, the tester is going for breadth rather than depth. They are trying to map out as many potential problems with your application as possible within the time available. But once again, they are doing it “in the blind”. Does it really make sense to work this way? Usually not, unless you have no other choice. See my previous blog post about “The Birds and the Frogs” for further discussion on this point!
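To make the breadth‐first idea concrete, here’s a deliberately toy sketch in Python. Nothing here is a real scanner: the `fake_app` function is a hypothetical stand‐in for a live HTTP endpoint (one field escapes its input, one doesn’t), and the “scan” just throws a canary payload at every input and flags whichever ones reflect it back unescaped. A real black‐box tool does far more, but the shape is the same: many shallow probes across the whole attack surface.

```python
import html

# Canary payload: if this comes back unescaped, the field is likely injectable.
CANARY = '"><svg onload=alert(1)>'

def fake_app(field: str, value: str) -> str:
    """Hypothetical stand-in for a live endpoint we can only test from outside.
    'comment' echoes input raw (a bug); 'name' escapes it properly."""
    if field == "comment":
        return f"<p>{value}</p>"
    return f"<p>{html.escape(value)}</p>"

def black_box_scan(fields):
    """Breadth-first probe: hit every input once, collect the reflective ones."""
    findings = []
    for field in fields:
        body = fake_app(field, CANARY)
        if CANARY in body:  # payload reflected unescaped
            findings.append(field)
    return findings

print(black_box_scan(["name", "comment"]))  # -> ['comment']
```

Note what the scan can and can’t tell you: it found the leaky field, but only by guessing at inputs from the outside, with no idea why the code behaves that way.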
The opposite of a black‐box test is a white‐box test. Yeah, I know, who would have thought? Here the examiner has perfect knowledge of what’s going on inside the box (i.e. source code), but can usually only theorize about its actual behavior because, in most cases, the application isn’t available for live testing. The tester must rely on their own code‐reading skills, and their ability to assess the accuracy of results spit out by static analysis tools.
Enter the grey‐box, where the tester has both source and the ability to test against a running version of the same application. You can think of this as either a source‐guided vulnerability assessment, or a test‐assisted static analysis. Really, it is both. The tester will find and confirm security vulnerabilities across the entire breadth of your application’s attack surface, using whatever method, or combination of methods, makes the most sense. It is not a comprehensive source review, and it doesn’t accurately simulate what an actual attacker would do. What it does do, however, is give you the most comprehensive picture possible within the shortest amount of time (and therefore cost). Nothing ticky‐tacky (whatever the hell that is) about that!
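The grey‐box synergy can be sketched the same toy way. Again, everything here is hypothetical: `SOURCE` plays the role of the code the reviewer can read, and `render` plays the role of the running application. The static pass narrows the search (flag endpoints whose code never escapes its input), and the dynamic pass confirms each suspect against the live app, so you get neither the black‐box’s blind guessing nor the white‐box’s unconfirmed theorizing.

```python
import html

CANARY = '"><svg onload=alert(1)>'

# Hypothetical source snippets for two endpoints, as the reviewer would read them.
SOURCE = {
    "name":    'return "<p>" + html.escape(value) + "</p>"',
    "comment": 'return "<p>" + value + "</p>"',
}

def render(field: str, value: str) -> str:
    """Stand-in for the running application that matches SOURCE above."""
    if field == "comment":
        return "<p>" + value + "</p>"
    return "<p>" + html.escape(value) + "</p>"

def grey_box_assess():
    # Static pass: source-guided triage, flag code that never escapes input...
    suspects = [f for f, code in SOURCE.items() if "escape" not in code]
    # ...dynamic pass: confirm each suspect against the live application.
    return [f for f in suspects if CANARY in render(f, CANARY)]

print(grey_box_assess())  # -> ['comment']
```

The payoff is in the first list comprehension: the source tells you where to aim before you fire a single test, which is exactly the time (and cost) savings the combined approach buys you.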
Note that the advantages gleaned from the grey‐box require that the source code and the running application be made available to the same testers at the same time. If you try to schedule these things sequentially, you’ll end up with two separate tests, costing at least twice as much, and without actually realizing the synergy inherent in the combined approach. That is, quite frankly, a ticky‐tacky (whatever the hell that is) way of doing things, although it happens all the time, usually because it’s two separate little boxes on the checklist.
So when it’s time to have your application assessed, it really does pay to think outside of the box. The checkbox, that is. Because, despite appearances, these tests aren’t “all just the same”.
Disclaimer: If you’ve managed to read to the end, you should know that this ticky‐tacky (whatever the hell that is) post was made possible by a few dry martinis and the wonders of a university education!