IBM Security AppScan Source Quick Process Guide

A journey from source code to actionable and defensible security findings


This tutorial is intended for current users of IBM Security AppScan Source who are familiar with static analysis and the IBM Security AppScan Source for Analysis client. For the sake of brevity, I will refer to the product as "AppScan Source" or "AppScan" for the remainder of this guide.

Although AppScan Source has been a market leader in static analysis security testing (SAST) for years, it still can't produce a perfect set of results out of the box. In fact, no SAST tool has that capability. The reason is simple: Every organization is unique. Every organization writes its own code and has its own technology stack, which usually consists of dozens, hundreds, or even thousands of libraries and frameworks that may or may not be publicly available, and for which there may or may not be source code. Each organization has its own application security policies and secure coding best practices, which affect the types of security issues they investigate and often vary from one application to the next, based on risk assessments, programming languages, and other factors. This makes it impossible for a SAST tool to know out of the box what every API used in an application does, or whether data that comes into the application from an outside source without being properly sanitized is a low-priority issue or a five-alarm fire. For example, using unsanitized data read from a file on the file system may not pose a concern if the file system can be accessed only by administrators. The situation changes dramatically, however, if other users can upload files to that server either within or outside of that particular application.

Now, one can argue that AppScan Source should still be able to provide meaningful result-sets out of the box, even if it doesn't recognize every API or every little detail that's important to the user. And that's precisely what AppScan Source usually does. AppScan Source has hundreds of thousands of rules telling it what various APIs do. It also supports the latest frameworks, such as ASP.NET MVC, Spring, Struts, and JSF, to name a few. AppScan Source also provides a set of filters that permit users to zero in on issues commonly considered to be high priority, in just a click or two. Scan results with out-of-the-box filters applied are usually quite good and many users don't feel the need to review findings past that point. However, there are also many folks looking to take their findings to the next level. They're looking to really understand how much of the application was covered, to improve coverage, and to fine-tune scan results to their application security policies or secure coding best practices. AppScan Source makes this analysis relatively easy to do, by using built-in tools such as the Sources and Sinks view, Custom Rules wizard, and Filter Editor.

The process described in this tutorial guides you through using these tools to help achieve the custom fit you require. As always, this solution is not a cure for all problems. Also, it's not the only way to get the results you want, and there are other tools available as part of AppScan Source to assist you (for example, the Framework for Frameworks API), which are outside of the scope of this tutorial. That said, this tutorial should help you produce a comprehensive set of actionable results that you can defend in case of an audit. This process begins after you have successfully run a scan and obtained an initial set of results. However, because having a clean scan with few compilation errors is critical, I think it is important enough to include in the "Scan the application" section below. Remember that you need permissions to use the AppScan Source for Analysis client and to create custom rules in your environment to follow along with this guide.

The practices described in this guide are divided into the following phases of activity:

  • Phase 1: Scan the application
  • Phase 2: Assess and expand coverage
  • Phase 3: Filter findings
  • Phase 4: Analyze/Sort/Bundle findings

Phase 1: Scan the application

Before you can follow the process described in this tutorial, ensure that:

  • Most, if not all, of the application's codebase is included in the scan
  • The application has been compiled/scanned, without any major errors
  • The initial scan completed successfully

Note: If the scan has too many compilation errors, code coverage may suffer significantly and lead to poor results. This can also lead to more manual effort required on your part to analyze such a poor set of results because AppScan will not be able to automatically analyze classes that fail to compile. You should resolve the majority of compilation or scan errors before proceeding to the next step.

Phase 2: Assess and expand coverage

The goal of this phase is to understand how much of the application was covered during the scan and, if necessary, to improve coverage to an acceptable level, by creating custom rules.

The time spent on this phase can vary from the few seconds required to perform a basic assessment of coverage, to several hours if you need to significantly improve coverage of a highly customized application. Just how long this step takes depends on the goals of the scanning engagement, the application being analyzed, and other factors.

Note: In this phase, do not consider the whole trace (data flow). Focus on the method you're examining, because the function of that method will not change from one data flow to the next. For example, the SqlQuery.execute(query) method executes the query the same way, regardless of whether the data in a query came from a property file or from a user's input on a web page. So, if you're examining the SqlQuery.execute() method in this step, you should consider only what the method does and whether it represents a concern, rather than where the data may have come from (I will address that concern when I discuss filters).
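The note above can be illustrated with a small sketch. The `QueryRunner` class and its methods below are hypothetical stand-ins for the `SqlQuery.execute()` example; the point is only that the sink method behaves identically regardless of which data flow reaches it:

```java
public class QueryRunner {

    // The sink behaves identically regardless of where "query" came
    // from; only the surrounding data flow differs. This mirrors the
    // SqlQuery.execute() example from the text.
    static String execute(String query) {
        return "executing: " + query;
    }

    // Data flow 1: query text comes from configuration.
    static String fromConfig() {
        return execute("SELECT * FROM users");
    }

    // Data flow 2: query text incorporates user input from the web.
    static String fromUser(String userInput) {
        return execute("SELECT * FROM users WHERE name='" + userInput + "'");
    }
}
```

When classifying `execute()` in this phase, you would consider only what it does (run a query), not whether its callers feed it configuration data or user input.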

There are several steps to follow in this phase:

  • Step 1: Review discovered sources
  • Step 2: Define "known" but missing sources
  • Step 3: Identify lost sinks
  • Step 4: Define Sinks and Not Susceptible to Taint methods
  • Step 5: Define taint propagators
  • Step 6: Identify other lost sources
  • Step 7: Re-run the scan

Step 1: Review discovered sources

Open the Sources and Sinks view and review all sources to get a high-level idea of where the data is coming from. Check the types of sources being shown against the expected sources for the application. For example, if the application is a web application using a database, you should see web and database sources (see Figure 1). It is possible that the application is only writing to the database and not reading from it, but you should double-check that to be sure.

Figure 1. Sources and Sinks view with web and database sources expanded
Screen shot showing Sources and Sinks view with web and database sources expanded

You can also see similar information in the Findings view, by clicking Select Tree Hierarchy on its toolbar and selecting Source. Now, the tree structure on the left side of the view should be organized by Sources.

Step 2: Define "known" but missing sources

Now that you see what sources are present, ask the developers of the application if there are any web service methods or other custom technologies that bring data into the application that you can't see under Sources and ask them how they work. AppScan Source supports many of the most popular web service definition frameworks, such as JAX-RS and JAX-WS, but even if the application is using these, there may be other technologies present. For that reason, it's always good to double-check. Define such methods as sources or tainted callbacks in the Custom Rules wizard (click the icon with a plus sign on it in the toolbar of the Custom Rules view).

A source is a method that returns tainted data, while a tainted callback is a method that accepts tainted data through its parameters (typically, from an external entity). Consider the following methods:

  • Example 1: String user = request.getParameter("username");
  • Example 2: public boolean isValidUser(String username, String password){ ... }

In the first example, request.getParameter retrieves the value of the HTTP parameter username as entered by the user on the web. It returns the value entered by the user, which is potentially dangerous (and thus tainted), which means it is a source of tainted data. The parameter this method accepts is not dangerous.

In the second example, isValidUser(...) is a web service method exposed to various clients of the application. If the data provided to this method comes from outside of the application, it cannot be trusted until proven otherwise! If users or clients invoke this method, they provide the user name and password they'd like to validate. And because isValidUser accepts tainted data through its parameters, it is a tainted callback. The return value here is either true or false, and it usually does not represent a threat.
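The two patterns can be sketched side by side. The class and method bodies below are hypothetical; only the source/tainted-callback distinction matters:

```java
import java.util.Map;

public class TaintEntryPoints {

    // Source: the RETURN VALUE carries tainted data. Here, "request"
    // stands in for an HTTP request object; whatever the user typed
    // comes back from this call.
    static String readUsername(Map<String, String> request) {
        return request.get("username"); // tainted return value
    }

    // Tainted callback: the PARAMETERS carry tainted data. External
    // clients invoke this method and supply the values, so username
    // and password cannot be trusted on entry.
    static boolean isValidUser(String username, String password) {
        // The boolean return value itself is not tainted.
        return username != null && !username.isEmpty()
            && password != null && password.length() >= 8;
    }
}
```

When you create custom rules in Step 2, a method shaped like `readUsername` would get a source rule, and one shaped like `isValidUser` would get a tainted callback rule.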

Important: If you can't get information on missing sources from colleagues, or if their advice doesn't prove to be helpful, then you can use Scan Coverage – No trace findings as described in "Identify other lost sources." However, asking someone who knows the application is usually a much faster approach.

Step 3: Identify lost sinks

Lost sinks are APIs that AppScan Source doesn't understand. The trace stops because AppScan doesn't know what the code it has encountered does and, therefore, cannot proceed with the trace. There are no rules and no source code available to help AppScan Source analyze the API, and this has a negative impact on scan coverage. This is a challenge for most SAST products on the market today that perform data flow analysis. AppScan Source classifies lost sinks as "Scan Coverage Findings" to give you a chance to review them and improve your scan coverage. Lost sink findings appear as a finding with a trace that ends with the lost sink method. Resolving lost sinks often offers a big return for your efforts, because creating just a handful of custom rules or locating "missing" source code for key lost sink APIs can dramatically improve scanning coverage. Key lost sink APIs are those with a high number of traces (findings) going to them. You can see lost sink information under Lost Sinks in the Sources and Sinks view (see Figure 2).

Figure 2. Sources and Sinks view - Lost Sinks
Screen shot showing Sources and Sinks view - Lost Sinks

The first question to ask when resolving a lost sink is whether the API in question is really a third-party API. If it is a third-party API (open source or not), then you probably won't have the code. If it's an API created by your own organization, then check to be sure that you don't already have its source code on the file system. If you do, there may be a problem with your scan configuration. If you don't have the source code, you may need to get access to it. Even if you decide not to include it, that needs to be a conscious decision, as not including it may impact coverage of relevant code as described in "Scan the application." This thought process usually takes only seconds, but it can make a big difference to the final outcome. Finally, if you are confident that the source code is included in the scan but still shows up as a lost sink (this is very unlikely but still possible), then proceed with lost sink resolution as described below.

You can "resolve" a lost sink by creating a custom rule for it. To do so, right-click the lost sink in either the Sources and Sinks view or in the Trace diagram. You can also resolve lost sinks using the Custom Rules wizard. Below, I discuss different types of lost sinks and the process of resolving them.

Step 4: Define Sinks and Not Susceptible to Taint methods

Use the Sources and Sinks view to look at all lost sinks by their namespace. Review the list and look for Sinks and Not Susceptible to Taint methods. Create rules for these methods only.

Identifying Sinks: For a particular lost sink, ask yourself a question: "Should I have checked/validated/cleaned the data before it went into this method?" If the answer is yes, then it's a sink. For example, if the lost sink in question passes the data to an external system, third-party library, database, or the user, then it is a sink and you should probably check the data before it leaves your "span of control." Sink methods look like this: dbQuery.execute(...), netManager.send(...), httpResponse.write(...), thirdPartyLibrary.doSomething(...), backEndService.run(...), and so on.

Figure 3 shows an example of a lost sink that is actually a sink – a logTransaction() method that logs transaction information (including sensitive credit card data) to a plain-text log file.

Figure 3. Example of a lost sink that is actually a sink
Screen shot showing Example of a lost sink that is actually a sink
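A method like Figure 3's logTransaction() might look roughly like this; the class, method signature, and masking helper are all hypothetical reconstructions for illustration:

```java
public class TransactionLogger {

    // A sink: data leaves the application's span of control here.
    // If cardNumber arrives unmasked, sensitive data ends up in a
    // plain-text log file, so callers should sanitize it first.
    static String logTransaction(String cardNumber, double amount) {
        String line = "txn card=" + cardNumber + " amount=" + amount;
        // In the real application this line would be appended to a
        // log file; returning it keeps the sketch self-contained.
        return line;
    }

    // One possible sanitizer: keep only the last four digits.
    static String maskCard(String cardNumber) {
        int keep = Math.min(4, cardNumber.length());
        return "****" + cardNumber.substring(cardNumber.length() - keep);
    }
}
```

Because data should be checked or masked before it reaches logTransaction(), a sink rule (not a Not Susceptible to Taint rule) is the right classification for it.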

Identifying Not Susceptible to Taint methods: For a particular lost sink, ask yourself a question: "Are there any scenarios for any application where the data going to this Lost Sink unchecked may be of concern to me?" If the answer is no, then the lost sink method is Not Susceptible to Taint. After being marked as such, all traces going to this method are removed, and, therefore, the Not Susceptible to Taint rule should be used with caution.

Remember: Consider only the lost sink method by itself and not the whole data flow or other methods this lost sink may lead to. Original taint will continue past a lost sink.

Step 5: Define taint propagators

There are two approaches to defining taint propagators, and it's extremely important for you to choose the right one. Both approaches are very effective when they are used properly and when their pros and cons are well understood and can be accounted for.

A taint propagator method does not "generate" tainted data, and no vulnerability occurs through the code inside one. However, if tainted data is provided to such methods (usually through parameters), then it will pass the data along (usually through the return value). Examples of taint propagators are string.subString(...), string.append(...), and base64.encode().

Figure 4 shows an example of a lost sink that is actually a taint propagator. In this example, the decodeBase64() method converts a base64-encoded list of accounts stored in a cookie into a plain-text list that can be used by the application.

Figure 4. Example of a lost sink that is actually a taint propagator
Screen shot showing Example of a lost sink that is actually a taint propagator
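A method like Figure 4's decodeBase64() could be sketched as follows; the class and method names are hypothetical:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class AccountCookieCodec {

    // A taint propagator: it neither creates nor neutralizes taint.
    // If the base64 input came from an untrusted cookie, the decoded
    // return value is just as untrusted. Taint flows in through the
    // parameter and out through the return value.
    static String decodeAccounts(String base64Accounts) {
        byte[] raw = Base64.getDecoder().decode(base64Accounts);
        return new String(raw, StandardCharsets.UTF_8);
    }
}
```

Marking such a method with a taint propagator rule lets AppScan continue the trace from the cookie, through the decode, and onward to whatever sink eventually consumes the account list.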

When you mark a method as a taint propagator, AppScan Source considers all of its input parameters to be tainted or dangerous—as well as its return value and the pointer. Stated differently, AppScan will now follow data coming in through any parameter and going out through the return value of this method; AppScan will also follow any future reference to that object as tainted. Each approach described below uses the concepts and functions of the taint propagator rule in a different way.

Quick and noisy approach

This approach is most effective in one-off review situations (for example, proofs of concept) when time is of the essence (and application coverage is insufficient) or when performing a tool-assisted code review. This approach should be used only when the taint propagator rules (if not all of the rules) will be thrown away at the end of the engagement.

In the "quick and noisy" approach, all remaining lost sinks are marked as taint propagators, regardless of whether they actually propagate taint. Doing so permits AppScan to quickly capture a whole new set of data flows and behaviors that it didn't observe before. However, at the same time, this practice also results in trace explosion. Remember that every single parameter and return value of every lost sink method is now being followed, resulting in at least one new trace for each. And because not every method that was marked as a taint propagator actually propagates taint, using this approach can introduce a lot of false data flows (that don't exist in real life) and, therefore, a lot of noise.

That said, when handled properly, noise isn't necessarily a bad thing. While such findings are not valid security concerns, they can still provide great insight into the application being analyzed. They are usually fairly easy to remove using filters (using the Trace section in the Filter Editor). After all, it is much easier to tell if you have a chest full of gold or a chest full of coal if you have the chest in front of you, rather than if it's buried in a field. This approach is very effective at finding potential vulnerabilities based on taint propagation. This approach will yield findings only when the taint propagation reaches a dangerous method (sink). Before reporting a finding that is the result of taint propagation rules, verify that the node marked as the taint propagator is actually propagating tainted data and isn't the result of taint explosion. You will need to do this only for a limited set of findings.

Noise does start to become a problem over the long term. While it is usually relatively easy to remove in the context of a single application, it is much more difficult to control when looking at many different applications in an enterprise. It also pollutes the custom rules database with a large number of bogus taint propagators. In this case, more care needs to be taken and the clean, long-term approach described below should be used.

To mark all remaining lost sinks as taint propagators, open the Sources and Sinks view, right-click on the Lost Sinks node and select Mark all lost sinks as taint propagators.

Clean, long-term approach

This approach is most effective when AppScan Source is part of an ongoing security effort in an enterprise. Customized rules are created and maintained over multiple scans and are used to analyze multiple applications.

Because any rules that are created are then used on an ongoing basis to analyze a variety of applications, when using this approach you need to exercise greater care when creating rules. This is especially true for taint propagators, given their propensity to create noise.

In the clean, long-term approach, you need to actually review the remaining lost sinks and ask for each one: "Does it propagate taint?" Mark a lost sink as a taint propagator only if you are absolutely certain the taint going into the method is transferred to the return value of the call, or it is transferred to the pointer of the object. Good examples of taint propagators include collections, hashmaps, and operations such as doc.parse(taint).
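The "does it propagate taint?" question can be illustrated with two small hypothetical methods, one of which deserves a propagator rule and one of which does not:

```java
public class PropagatorExamples {

    // Propagates taint: the tainted input is embedded in the return
    // value, so a taint propagator rule is appropriate here.
    static String wrapInQuotes(String value) {
        return "'" + value + "'";
    }

    // Does NOT propagate taint: the return value reveals only the
    // outcome of a length check, not the tainted data itself, so
    // marking this method as a propagator would only create false
    // data flows and noise.
    static boolean isTooLong(String value) {
        return value.length() > 255;
    }
}
```

In the clean, long-term approach, only methods shaped like the first one should receive taint propagator rules.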

This approach takes more time, but it avoids a lot of headaches if rules are accumulated over multiple scans.

Step 6: Identify other lost sources

While AppScan Source cannot automatically identify lost sources because every method AppScan doesn't recognize looks more or less the same, it can provide "bread crumbs" or pointers to help you identify them. These pointers are shown in the form of scan coverage findings that have no trace information available (Scan Coverage – No Trace). Looking through these findings may be time-consuming and may not happen in every engagement, but it's an important way of identifying lost or missing sources, especially when there is no one to ask. In general, however, as I've said before, asking someone who knows the application is much faster.

A Scan Coverage – No Trace finding may mean several things:

  • It's dead code or a web-service-like call where nothing calls the method identified by the finding. Creating a tainted callback rule for such a method is the best option.
  • Data is retrieved from an internal collection or storage object.
  • It's a lost source. You have to follow the code back to the entry point (or ask a developer).
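A minimal sketch of the first case, with hypothetical names: a web-service-like method that nothing inside the scanned codebase calls, which an external framework is expected to invoke with untrusted input.

```java
public class AccountService {

    // Nothing inside the scanned codebase calls this method, so the
    // scanner reports it as Scan Coverage - No Trace. If an external
    // client (for example, a web service framework) invokes it, the
    // right fix is a tainted callback rule on its parameters.
    static String lookupAccount(String accountId) {
        // Once the rule exists, accountId is treated as tainted and
        // AppScan follows it to whatever sinks it reaches.
        return "account:" + accountId;
    }
}
```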

Figure 5 shows an example of a Scan Coverage – No Trace finding where data is coming from an internal storage object called "account." Note that this finding has no trace.

Figure 5. Example of a Scan Coverage―No Trace finding
Screen shot showing Example of a Scan Coverage―No Trace finding

Tip: If you created custom rules for sources and tainted callbacks but they have no effect after a re-scan, you can troubleshoot your scan by enabling the Automatic Tainted Callback mark-up option. To enable it, set it to True in the Advanced Settings section of the Scan Configuration view for a scan configuration you use for your scans. This causes AppScan Source to taint every parameter of every public method in the application you're analyzing. This, in turn, causes AppScan to show a wide variety of data flows in the application, providing a lot of insight into potential sources of data and resulting in a lot of noise. The large amount of noise is the main reason why this option is usually used only when custom source and tainted callback rules fail to produce the desired effect.

After you see data flows in the application, you can analyze them along with the source code to find the source and tainted callback methods that AppScan can see. You can then disable the Automatic Tainted Callback option for your next scan. You can also simply use filters to single out vulnerabilities in the scan results, but that approach is not as robust as using custom rules.

Step 7: Re-run the scan

Re-running the scan is necessary for your rule changes to be applied. It may be useful to check the Enable Vulnerability Analysis cache option on the Overview tab of project properties. This prevents AppScan from having to recompile the code every time, but remember to clear the cache if the code changes!

Rinse and repeat and...

Repeat the seven steps until satisfactory coverage has been achieved. You can judge this by comparing the number of "Scan Coverage" findings to the number of "Definitive + Suspect" findings. A lower number of "Scan Coverage" findings is better, and the remaining lost sink findings should come from just a few lost sink methods.

Phase 3: Filter findings

The goal of this step is to significantly reduce the number of findings and focus on those that are considered to be the most important and, thus, of the most interest to you. Stated differently, you're removing "noise" and "false positives"—issues that the organization doesn't care about. By the way, most of the findings that you're filtering out probably aren't actually "false positives." They usually are just deemed difficult enough to exploit.

One of the most important purposes of a filter is to enforce an organization's "Secure Coding Best Practices" policies. A lot of organizations start out with either a single filter for all of their applications or just a handful of them aimed at different programming languages or risk levels. However, for a detailed review, there is rarely a "one size fits all" filter. Out-of-the-box filters provide a great starting point and may even be sufficient to get desired results depending on your goals. "! - AppScan Vital Few" and "! - High Risk Sources" are great filters to start with.

Severity and classification

In the Filter Editor view, focus only on High severity findings with the Definitive and Suspect classifications. This is a great starting point for most filters. While Suspect findings are lower confidence than Definitive findings, you will usually find some very interesting and important vulnerabilities there (for example, SQL Injection). Figure 6 shows a filter with these settings.

Figure 6. Filter Editor – Severity and classification
Screen shot showing Filter Editor – Severity and classification

Focusing on high-risk sources

The first approach to quickly obtaining results that concern you the most is to use the Trace section of the Filter Editor to restrict findings to include only those that come from easy-to-exploit sources or that go to high-risk sinks. For example, you can focus on data coming from the web by adding a Technology.Communications.HTTP property in the Source section of a new trace entry. You can focus your sources even further by defining specific methods from which the data comes in. For example, you can define javax.servlet.ServletRequest.getParameter() in one trace entry and javax.servlet.http.HttpServletRequest.getQueryString() in another to review data coming only from those two extremely easy-to-exploit methods. Figure 7 shows these sources defined in the Trace section of the Filter Editor.

Figure 7. Filter Editor―Focusing on high-risk sources
Screen shot showing Filter Editor―Focusing on high-risk sources

This approach allows you to quickly evaluate the most serious findings in the application, but eliminates other findings that you may be interested in investigating further. A more thorough approach is to eliminate safe sources and sinks instead. See "Eliminating safe sources and sinks" for details.

After the first entry is added, each new entry in the Restrict part of the Trace section expands the result set by also showing findings that match it, even if they did not match the previous Restrict entries.

Eliminating safe sources and sinks

The second and more thorough approach is to use the Trace section of the Filter Editor to remove findings that come from sources or go to sinks that pose a low enough risk to be considered "safe." This approach usually takes longer than focusing on high-risk sources but often leads to a much more comprehensive set of results. That is because you review findings and decide what's "safe" instead of just assuming what's dangerous. Safe operations may include data coming from property files and environment variables and going to methods such as logging APIs or "copy-like" operations where you get a value from one storage attribute and then store it in another storage attribute.

Tip: What's considered safe may vary from application to application, so be careful! For example, methods that read data files on the file system may be considered safe, but if users are permitted to upload files to the server, you can no longer trust files on the file system and you cannot consider them to be safe. Property files are usually okay, unless the application reads "secrets" from them that have not gone through decryption, which is indicative of clear-text password storage. If this is a concern, the Sources and Sinks view offers a quick way to understand where the data ends up after coming in through a source and to distinguish the source-to-sink flows that may be of concern to you from those that can be considered safe. Figure 8 shows some "safe" sources and sinks removed using the Trace section of the Filter Editor.

Figure 8. Filter Editor – Removing "safe" sources and sinks
Screen shot showing Filter Editor – Removing safe sources and sinks

Safe sinks tip: When looking for safe sinks, you can obtain context information in the Findings view for a particular source-to-sink combination or for a particular sink. You can then sort by context information so all findings with similar contexts are grouped together. You can quickly scroll through several thousand findings by scanning the context for interesting words. To do so, click Select and Order Columns on the Findings view toolbar and add the Context column. For example: Logging APIs' debug/warn/info/error methods are often "noisy" sinks. But you still may want to look at all the context information to see if "credit card" or "SSN" or "passwords" is included. This is usually indicative of an information leak and may be a very important finding to highlight. And it doesn't take long to quickly rule out irrelevant findings by looking at the Context column in the Findings view.

Each new entry in the Remove part of the Trace section shrinks the result set by also hiding findings that match it, even if they did not match the previous Remove entries.

Filter-based validation

Using filters is the preferred approach to removing validated findings instead of using custom rules to perform the same task. The difference is that you can understand which findings are being removed not only today, but also a month and a year into the future. This is because filters can be easily "inversed" and rules can't. Filter-based validation also allows for a more fine-tuned control of validation for various data flows and applications in the organization, because you can utilize different filters (or combinations of filters), even for single applications.

To specify a filter-based validator, go to the Filter Editor view. In the Remove area of the Trace section, add a new entry; then specify a validation method (including its namespace) in the Required Calls section and the sink (or vulnerability type in Sink Properties) that this Validator/Sanitizer applies to. Figure 9 shows defining a filter-based validation entry.

Figure 9. Defining a Trace Rule Entry to perform rule-based validation
Screen shot showing Defining a Trace Rule Entry to perform rule-based validation

Important: Always check your filter by "inversing" it to ensure that no important findings accidentally get lost.

To inverse a filter, select it in the Filter Editor and click Show findings which do not match the filter on the Findings view toolbar. You can also automatically apply the inverse of a filter after a scan (see "Share filters and save filtered results").

Vulnerability type

Use the Vulnerability Type section of the Filter Editor to either remove low-priority finding types or restrict the types to just a few of the highest-priority issue types. This is best performed last to avoid accidental removal of issue types with interesting findings, because these findings can go unnoticed with all the noise still in view.

Share filters and save filtered results

After you've created a filter, you can share it with others by selecting Share filter on the Filter Editor toolbar.

You can also have filters be applied automatically when scans complete (only filtered results will be shown and saved). To set this up:

  1. Go to the project or application properties and select the Filters tab.
  2. Add one or more filters you created to the Filters list.
  3. If you'd like to make sure that your filter doesn't remove any important findings, you can use the Invert option at the bottom of the window to invert the filter so that it only shows you findings that are typically removed.

Tip: If assessment results will be published to IBM Security AppScan Enterprise, create a pre-filtered assessment prior to publication. This avoids noise in your IBM Security AppScan Enterprise reports.

To save a pre-filtered (partial) assessment without re-running the scan:

  1. Apply your filter in the Filter Editor to see issues you'd like to keep.
  2. Select all findings (click on a finding, then press Ctrl+A or on a Mac ⌘+A) in the Findings view. You can also select just those findings you'd like to save if you don't want to save everything that passed through the filter.
  3. Click on the Save Selected Findings icon on the Findings view toolbar and save the assessment. That icon resembles a floppy disk with an arrow over it.
  4. Open the assessment file you just saved to see only filtered results.

Phase 4: Analyze/Sort/Bundle Findings

The goal of this step is to review filtered findings, further improve filters, bundle the findings in a way that makes sense (for example, by issue type, developer, and so on), and distribute them to developers for fixes. Again, the time required for this step depends on your application, your goals, and the quality of your filters. For example, if the scan is run by a build system and a proper filter is set up, scan results can even be provided directly to developers and this step can be skipped altogether.

That said, it is usually best to review findings before distributing them. When reviewing findings, verify that:

  • Each source is relevant for this application
  • Each sink is relevant according to the business risk of the application
  • There are no obvious "validation" methods between the source and sink

If these three conditions are not easily checked off, then a little more digging may be required on your part. Use this information to further improve filters you created earlier.

Tip: You can hide bundled findings (findings that were already reviewed) from the Findings view by pressing Hide bundled findings on the Findings view toolbar.

Summary

The process described in this tutorial is very iterative in nature. As you proceed from one step to the next, you may discover things that were important for one of the previous steps. Typically, you would then go back to provide AppScan with this additional information. The goal is to start at a high level and let AppScan do the work for you, improving coverage through custom rules and focusing on issues of concern through filters. In this way, you do not just dive into the sea of findings trying to make sense of it all. As you focus your findings through the filters, you will be able to get more and more fine-grained in what you want and what you do not want to see. At the end, you should have relatively few findings left that are of concern to you and yet cover more of the application than on the initial scan. And, best of all, you will be able to reuse the fruits of your labor on future scans of this application and even on scans of other applications because both rules and filters can be easily shared, saving you time and effort on your future assessments.


