Or are they really oranges under the peel?
Writing to you again about Fortify, since there was so much interest in my previous posts. This time I’d like to discuss some of the reasons why your view of a set of results may differ from your neighbor’s view of the same results.
We’ll start by assuming that you have a central or common machine where you consistently perform scans to produce a Fortify results file (FPR). It doesn’t matter whether you scan through automation or manually; what matters is that there is one set of results. You may upload those results to Software Security Center or simply share the FPR among the developers. We can revisit the pros and cons of these procedures in another topic.
The scenario would go something like this:
- The scan is completed successfully, and the results are made available to developers
- A Fortify PDF report is generated and distributed to the development manager and technical leads on the project
- Developers then gain access to the results by:
  - Reading the PDF report
  - Reviewing results from Software Security Center’s web interface
  - Downloading the FPR from Software Security Center and opening it with Audit Workbench or an Integrated Development Environment (IDE) Fortify plug-in
Now that you have the results and begin remediating, you notice discrepancies between the Fortify PDF report, what your fellow developers are seeing through Audit Workbench or the IDE plug-in, and what you are seeing on your own desktop, even though everyone is using the same tools.
Mismatch due to rule pack versions
The most common reason for this mismatch is that the Fortify rule pack versions are different from machine to machine. A common misunderstanding is that the information about each category of finding is contained in the FPR file. Well, it isn’t; in fact, the information is split between the FPR file and the rule pack. Descriptions, recommendations, and even the characteristics used to determine the Fortify priority live in the rule pack, while the location of each finding is in the FPR. Each finding carries a rule ID that links it to the rule that produced it, so the two can be joined for display in the tools used to review the results.
Therefore, if the scanning machine identifies findings using rules that are not defined in older releases of the rule pack, a non-upgraded machine may be missing some information about those findings, or may be missing the findings entirely if its rule pack lacks the rule ID. This accounts for the differences in what each person sees.
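To make the join concrete, here is a minimal sketch of what the viewing tools effectively do: match each finding’s rule ID against the rule pack and pull in the metadata. All of the data and field names below are invented for illustration; a real FPR and rule pack are far richer.

```python
# What the FPR carries: finding locations, each tagged with a rule ID.
fpr_findings = [
    {"rule_id": "A1B2", "file": "Login.java", "line": 42},
    {"rule_id": "C3D4", "file": "Query.java", "line": 17},
]

# What the rule pack carries: per-rule category, description, and so on.
# This "older" rule pack is missing rule C3D4 entirely.
old_rule_pack = {
    "A1B2": {"category": "Password Management"},
}

def join_findings(findings, rule_pack):
    """Attach rule metadata to each finding; flag rules the pack lacks."""
    joined = []
    for f in findings:
        rule = rule_pack.get(f["rule_id"])
        joined.append({
            **f,
            "category": rule["category"] if rule else "<unknown rule>",
        })
    return joined

for finding in join_findings(fpr_findings, old_rule_pack):
    print(finding["file"], finding["category"])
```

With the outdated rule pack, the second finding joins to nothing, which is exactly the “missing information” symptom described above.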
The resolution to this problem is, of course, to update the security content, which includes the rule packs, on all machines to the same version. One way to do this is to point all machines to the same location for security content and have the content update each time a Fortify product is launched; you can also set Audit Workbench to perform the update on a schedule. If you are running Software Security Center, it can be the central point for security content: the content can be updated on the central server from HP Fortify on a schedule that aligns with the quarterly Fortify releases. Or you can point all of your machines at the Fortify update site and continuously update to the latest release.
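When you suspect a version mismatch, it can help to check which rule pack versions actually produced a scan. An FPR is a ZIP archive whose `audit.fvdl` entry records the rule packs used. The sketch below builds a stand-in FPR in memory so it runs end to end; the FVDL element names are assumptions based on one FVDL layout (real files also declare an XML namespace), so verify against an FPR from your own Fortify version.

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# Simplified stand-in for the audit.fvdl inside an FPR.
SAMPLE_FVDL = """<FVDL>
  <EngineData>
    <RulePacks>
      <RulePack><Name>Core Java</Name><Version>2013.4</Version></RulePack>
      <RulePack><Name>Core JSP</Name><Version>2013.4</Version></RulePack>
    </RulePacks>
  </EngineData>
</FVDL>"""

def rulepack_versions(fpr_bytes):
    """Return {rule pack name: version} parsed from the FPR's audit.fvdl."""
    with zipfile.ZipFile(io.BytesIO(fpr_bytes)) as fpr:
        root = ET.fromstring(fpr.read("audit.fvdl"))
    return {
        pack.findtext("Name"): pack.findtext("Version")
        for pack in root.iter("RulePack")
    }

# Build the stand-in FPR (a ZIP containing audit.fvdl) in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as fpr:
    fpr.writestr("audit.fvdl", SAMPLE_FVDL)

print(rulepack_versions(buf.getvalue()))
```

Comparing this output across machines (or against the scanning machine) quickly shows whether everyone is on the same security content release.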
Mismatch due to using the wrong filters
Another reason for this mismatch might be that machines have different default templates. This isn’t as big a problem, since the template and filters are included in the FPR. When you first load the FPR in Audit Workbench, you are asked whether you want to keep the template. Always keep the template included in the FPR; never overwrite it with the one on your machine, as the local copy may be out of date.
The template contains filter sets. Filters sort the findings into folders, and they can also hide some findings. Be sure to use the filter set your security team instructs you to use, as they may have a custom filter they want you to apply to prioritize the remediation effort.
To solve this problem, you again need to be sure everyone is on the same version of the security content, which includes the default template. If the template is being customized by an internal security team, you will need a procedure for distributing it to all machines, or you can update it on the Software Security Center server so that it is delivered with the security content. It is also important that the default template is configured to load the filter set you want developers to see for prioritization and remediation. This avoids leaving each developer to decide which filter set to use.
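The effect of a filter set can be sketched in a few lines: each filter either assigns a finding to a folder or hides it, so two people only see the same folder counts if they apply the same filters. The rules below are invented for illustration and are not Fortify’s actual filter syntax.

```python
findings = [
    {"category": "SQL Injection", "priority": "Critical"},
    {"category": "Dead Code", "priority": "Low"},
    {"category": "Path Manipulation", "priority": "High"},
]

# A toy filter set: ordered (predicate, action) pairs; first match wins.
filter_set = [
    (lambda f: f["priority"] in ("Critical", "High"), "folder:Fix Now"),
    (lambda f: f["category"] == "Dead Code", "hide"),
]

def apply_filters(findings, filter_set, default_folder="Everything Else"):
    """Sort findings into folders, dropping any a filter hides."""
    folders = {}
    for f in findings:
        action = next(
            (act for pred, act in filter_set if pred(f)),
            "folder:" + default_folder,
        )
        if action == "hide":
            continue  # hidden findings never reach the reviewer's view
        folders.setdefault(action.split(":", 1)[1], []).append(f)
    return folders

for folder, items in apply_filters(findings, filter_set).items():
    print(folder, [f["category"] for f in items])
```

Swap in a different filter set and the “Dead Code” finding reappears while the folder counts shift, which is exactly the confusion a shared template avoids.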
I hope this helps explain why developers may see different views of the results from the same scan. This is a common problem and can cause confusion when it comes to deciding what to remediate. Look for follow-on Fortify topics in future posts.