
The Value of Manual Testing

Jul 14, 2017 2:30:00 PM | Paul Mendelson

Vendors want companies to think their automated vulnerability scanner is a turn-key solution to all their security needs — and I feel like some security teams have fallen for this scam as well. Unfortunately (and fortunately for people like me), that's not entirely true.

Based on their descriptions, some of these tools sound like once you install them, you just paste in a URL, and they run and tell you all the problems. Sounds great! But in reality, it's not quite that simple. These tools are useful and sometimes do prove their worth; however, they often miss a lot of things too. From what I've seen, companies, internal audit teams, and even external security testers end up relying on these tools too much, sometimes missing critical vulnerabilities.

If all you're doing is running scans and doing no manual testing at all, I feel you are doing your company or your customer a disservice, giving them a false sense of security. If you have many applications to test in a very short timeframe, then you may have no choice but to use automated tools, but the rest of the time, you should probably be spending at least half the hours doing manual testing as well. The rest of this article highlights some key advantages of manual testing over automated testing, and provides guidance on how to add value through manual testing.

Think Like an Attacker. Probably the biggest difference between manual testing and automated testing is the fact that a human can think and a tool cannot. For the most part, all automated tools do is automate simple tasks — they zoom very quickly through a bunch of test cases that other people came up with, people who don't know your specific application. This is useful, but because they are generic tests, they are likely to miss vulnerabilities that are unique to your application.

Being a human (I assume), you should do at least a quick threat analysis to identify likely threats and come up with a list of abuse cases and attack scenarios. Approach the application as a real adversary that wants to do harm — not a brainless bot running through a script. This is where you will shine compared to any automated scanner. Don't be a tool — remember to think!

Add Value. Make sure you are adding value beyond just running some tools. Realize that customers may already be running the same tools internally, whether through their development, security, or audit teams, or they may have already had one or more external assessments done. If you simply run a scan and hand it to them, validated or not, they might feel like they didn't get their money's worth (and they didn't), unless their only goal was to get an external assessment, of course. Don't assume your customer is an idiot who knows nothing about security.

Dig Deeper. Automated tools sometimes do a good job, and sometimes they don't. For certain types of issues, SQL injection for example, these tools might cause an application to cough up a database error message and report it as a Low risk "Verbose Error Message" or something similar. Simply confirming the error message is there, without attempting to dig deeper, is not how you validate this issue. Even if you're doing a vulnerability assessment and not a penetration test, it is usually possible to inject safe payloads (e.g. SELECT) without causing any harm. Go the extra mile with manual testing to make sure there isn't something bigger there.
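
As a minimal sketch of what "digging deeper" might look like, here's how you could manually probe a suspect parameter with harmless boolean payloads (the URL and parameter name are hypothetical, and the payloads here are read-only):

```python
# A minimal sketch of probing for boolean-based SQL injection with
# harmless payloads. The endpoint and parameter name are hypothetical.
import requests

base = "https://example.com/products"  # hypothetical endpoint

# Benign boolean pair: if the TRUE and FALSE variants return different
# pages, input is likely reaching the SQL query unsanitized.
payloads = {
    "true":  "1' AND '1'='1",
    "false": "1' AND '1'='2",
}

responses = {
    name: requests.get(base, params={"id": value}, timeout=10)
    for name, value in payloads.items()
}

if responses["true"].text != responses["false"].text:
    print("Responses differ; parameter may be injectable, so dig deeper manually.")
else:
    print("No obvious boolean-based difference; try other safe techniques.")
```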

Chain Exploits. Unlike automated tools, manual testing can allow multiple vulnerabilities to be chained together, potentially yielding more devastating attacks. On their own, they might represent minimal risk, but in concert, it might be possible to reset user passwords to mine sensitive data.

Reverse Engineer. Automated tools generally do not do reverse engineering or binary analysis. Through manual testing, application files can be decompiled or de-obfuscated and analyzed, checking for hard-coded credentials, encryption keys, and such. This is a serious risk, typically with mobile applications and thick clients, that is often overlooked by automated tools.
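
Once you have decompiled output (from jadx, a .NET decompiler, or whatever fits the target), even a crude sweep for secrets goes a long way. Here's a rough sketch; the patterns and directory name are just examples, not an exhaustive list:

```python
# A rough sketch: sweep decompiled sources for obvious hard-coded secrets.
# Patterns and the root path are illustrative assumptions.
import re
from pathlib import Path

SECRET_PATTERNS = [
    re.compile(r"""password\s*[:=]\s*["'][^"']+["']""", re.IGNORECASE),
    re.compile(r"""(api[_-]?key|secret)\s*[:=]\s*["'][^"']+["']""", re.IGNORECASE),
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
]

def scan(root: str) -> None:
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for pattern in SECRET_PATTERNS:
            for match in pattern.finditer(text):
                # Truncate matches so secrets don't fill the terminal.
                print(f"{path}: {match.group(0)[:80]}")

scan("decompiled_app/")  # e.g. the output directory of your decompiler
```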

Look For Information Leakages. Automated tools understand it's bad to leak passwords and credit card numbers, but there is little else they reliably understand, including partially masked information. A while back, I tested an application that was masking the last 4 digits of subscriber SSNs in one part of the application and masking the first few digits elsewhere — kind of amusing. I'm guessing different developers had a hand in that application. I didn't bother, but a script could have been written to combine the two halves and build a nice database of SSNs. This is another flaw that would have been missed by automated tools but probably caught by a human.
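
To illustrate just how trivial that script would have been (the values below are made up, not the customer's data):

```python
# Hypothetical illustration of the flaw described above: one page masks
# the last four digits, another masks the first five, so the two views
# together reveal the full number. Sample values are made up.
view_a = "123-45-XXXX"   # from the page that masks the last 4 digits
view_b = "XXX-XX-6789"   # from the page that masks the first 5 digits

def combine(a: str, b: str) -> str:
    # Take the real character wherever either view exposes one.
    return "".join(ca if ca != "X" else cb for ca, cb in zip(a, b))

print(combine(view_a, view_b))  # -> 123-45-6789
```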

Manually Validate Findings. One of the biggest problems with automated tools is they tend to generate a lot of false positives. If false positives make it to the customer, they may lose confidence in your assessments or skim past other vulnerabilities in your report that are actually a risk. Do your best to manually test each issue to ensure it is valid, exploitable, and a legitimate risk. In some cases, issues can technically be valid without posing any risk — for example, clickjacking or cross-site request forgery on a public website with no login/session.
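
For example, a scanner-reported clickjacking finding can be sanity-checked in a few lines (the target URL is a placeholder) before you decide whether it poses any real risk:

```python
# A quick sketch for validating a scanner-reported clickjacking finding:
# confirm framing protection is actually absent, then remember the impact
# is negligible if the site has no login or session state to hijack.
import requests

resp = requests.get("https://example.com/", timeout=10)  # hypothetical target
xfo = resp.headers.get("X-Frame-Options")
csp = resp.headers.get("Content-Security-Policy", "")

if xfo or "frame-ancestors" in csp:
    print("Framing protection present; likely a false positive.")
else:
    print("No framing protection; now assess whether there is any state to hijack.")
```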

Assess Business Impact. Automated tools are only capable of assessing technical impact, and in my opinion, they tend to do a poor job — often the risk ratings are inflated. Based on your knowledge of the application, environment, and industry, as well as mitigating and compensating controls, you should always re-assess risk ratings based on the potential impact to the business.

Provide Quality Evidence. Automated tools can find vulnerabilities with varying degrees of confidence and often they provide poor evidence, or evidence that is not very useful. Rather than including such nebulous evidence in your report, manually test each issue yourself, and provide adequate screenshots, HTTP requests/responses, code snippets, and other evidence to prove the issue is valid and demonstrate why it is a risk.
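
One way to do this (a sketch, not a polished tool) is to replay the request yourself and capture a clean request/response pair for the report instead of pasting raw scanner output:

```python
# Capture a tidy HTTP request/response pair as report evidence.
# The URL is a placeholder for whatever issue you confirmed.
import requests

resp = requests.get("https://example.com/account?id=1042", timeout=10)
req = resp.request  # the PreparedRequest that was actually sent

print(f"> {req.method} {req.url}")
for name, value in req.headers.items():
    print(f"> {name}: {value}")
print()
print(f"< HTTP {resp.status_code}")
for name, value in resp.headers.items():
    print(f"< {name}: {value}")
print()
print(resp.text[:500])  # truncate the body for readability in the report
```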

Provide Proof-of-Concept. Automated tools rarely provide PoCs, and when they do, the PoCs are often not useful, relying on special plugins or tools that even security testers generally do not use. Once you confirm an issue is valid, create the simplest possible PoC using standard tools and include it in your report. The customer's developers will thank you for making issues easy to reproduce, and your coworkers will also appreciate it if they have to do a retest on your application in the future.
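
Here's a sketch of what "simplest possible" might mean in practice: one plain request a developer can run as-is, with the endpoint and payload as placeholders for whatever issue you confirmed:

```python
# A minimal, self-contained PoC sketch: one plain request, no proxy or
# plugins needed. Endpoint and payload are placeholders.
import requests

url = "https://example.com/search"                     # hypothetical endpoint
payload = {"q": "<script>alert(1)</script>"}           # e.g. a reflected-XSS probe

resp = requests.get(url, params=payload, timeout=10)
reflected = payload["q"] in resp.text

print("Status:", resp.status_code)
print("Payload reflected unencoded:", reflected)
```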

Create Custom Vulnerability Write-Ups. Automated tools use generic vulnerability descriptions and recommendations. When time permits, you should create your own write-ups, or at least customize the canned ones as much as possible. Tweak the description and business impact, and make sure the recommendation is relevant and practical for the specific language, technology, and environment.

Create Readable Reports. Automated tools often allow reports to be generated, but they tend to be long and difficult to read. Except for the shortest engagements, you should use your own reporting format, including only the necessary information. Format the findings so they are readable by anyone, and don't include 10,000 instances of a vulnerability when it applies to the application as a whole.

We have already covered several advantages of manual testing over automated testing, but there are more.

Less Potential for Disaster. Automated tools aren't always safe. Some customers go nuts if 10,000 junk emails show up in their inbox or if their database is suddenly flooded with thousands of test transactions. Resource starvation can also be a problem. For production applications, this can be a serious risk, especially if it becomes difficult for them to quickly differentiate between real inquiries and orders vs. test data. Other times, customers just like to complain, but it's still in your best interest to keep them happy.

Automated tools are generally the cause of these problems since they click buttons and submit forms, with little or no intelligence about what may be happening behind the scenes. Specific pages and directories can often be excluded from scans, but for whatever reason, sometimes things don't work as expected. Manual testing is sometimes a safer approach.

Less Mess to Clean Up. Automated tools can easily turn a production database into a mess, which can be a real problem. Removing that much test data is often impossible without a database administrator, which is an inconvenience to the customer. Careful manual testing can often get the job done without leaving behind so much test data.

Less Noise. Often it doesn't matter, but for some penetration tests, it's important to be "quiet" so as to avoid detection by your customer's security, network, or incident response teams. During these unannounced penetration tests, you will generally want to avoid "banging on doors" by doing careful manual testing rather than running automated scans — or at least save the scans for the end.

Focus on Issues that Cannot Be Detected by Automated Tools. There are a number of vulnerabilities that automated tools simply cannot find, because they do not understand the business logic. Certain other vulnerabilities they may find, but not reliably, or they may underestimate the risk. Automated tools generally cannot understand user roles nor can they identify many types of security violations — unless you spend a lot of time "teaching" them — and at that point, you might as well just do manual testing. Run some scans, but don't waste your time testing for issues that automated tools can reliably find for you.

Some things automated tools can test pretty reliably include: input validation, output encoding, cookie handling, transport security, error reporting, page caching, autocomplete, and open redirects.

However, the list of issues that automated tools cannot easily detect is longer and includes: flaws in business logic, access controls, CSRF, parameter tampering, file upload controls, session timeout, logoff/session termination, account lockout mechanism, multi-factor authentication, username/password policy, password management, anti-automation controls, system isolation, security alerts/notifications, and data masking.
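
Access controls are a good example from that second list. A simple sketch like the one below (the session tokens and URL are placeholders) replays an admin-only request under different roles, which is exactly the kind of check a scanner can't do without significant hand-holding:

```python
# A sketch of a manual access-control check that scanners routinely miss:
# replay an admin-only request with lower-privilege sessions and see
# whether the server actually enforces the role. Cookie values and the
# URL are hypothetical placeholders.
import requests

ADMIN_ONLY = "https://example.com/admin/users"  # discovered while testing as admin

sessions = {
    "admin":   "ADMIN_SESSION_TOKEN",
    "basic":   "BASIC_USER_SESSION_TOKEN",
    "no-auth": "",
}

for role, token in sessions.items():
    cookies = {"session": token} if token else {}
    r = requests.get(ADMIN_ONLY, cookies=cookies, timeout=10,
                     allow_redirects=False)
    print(f"{role:8s} -> {r.status_code}")

# A 200 for "basic" or "no-auth" indicates broken access control.
```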

Conclusion

Use automated tools, but don't forget about manual testing. Understand the limits of automated scanners and supplement your assessments with manual testing whenever possible. This is your chance to prove your worth. Don't let companies think you can be replaced by a piece of software!

If nothing else, realize that you are probably slowly putting yourself out of business by relying on scans and not performing manual testing. Customers may eventually realize they can perform these scans themselves and get similar results. If you are not adding much value beyond that, your customers may decide they don't need you anymore!
