With increasingly complex and large infrastructures, finding vulnerabilities and other problems has become more challenging. In the past, finding problems would take a considerable amount of time and effort, because you had to both hunt down problems and find a solution for each. In today's infrastructure environments, automated testing has become a popular solution, as it allows for rigorous testing and remediation with smaller staff and shorter cycle times.
Continuing the Infrastructure architecture essentials series, this sixth article focuses on the hosting side of things. However, the concepts here could easily be adapted to any environment.
Before looking at the tools and techniques that you can use to examine and secure a Web server, let's look at some of the common issues facing Web servers. When examining the following security threats, keep in mind that there are security risks inherent in Web servers, the LANs that host these servers, and even typical users of Web browsers. First, let's take a look at threats from the standpoint of Web masters, network administrators, and users.
From the Web master's viewpoint, the biggest security concerns are those introduced to the enterprise's network simply as a result of being connected to the Internet. All sorts of threats can be introduced in this way, including different types of malicious code such as viruses, Trojans, and worms, and even defects in Web applications themselves. In fact, in situations where an organization builds its own Web application for a specific function, the necessary testing may never have been performed.
From a network administrator's viewpoint, a misconfigured Web server can introduce a myriad of vulnerabilities in a LAN's security. Configuration of a Web server traditionally poses something of a problem, as administrators and Web masters walk a fine line between configuring a Web server for maximum security and for maximum usability. Additionally, it's not uncommon for a network administrator or Web master to overlook or ignore different configuration options because of their need to just "make things work" in the quickest manner possible.
Finally, from the viewpoint of a client, the biggest threats tend to be those that can do the most harm behind the scenes. Content such as Microsoft® ActiveX controls, Java™ applets, and the like can introduce threats such as viruses, worms, and even spyware to a user's system. Although these threats may not directly affect the enterprise, because they reside on a client's system, the impact on a company's reputation can be huge. And if the client using the Web server or application happens to be on the local network, the danger is even greater, because malicious code can open holes through which attackers can enter.
You can typically categorize the risks facing Web servers and applications into three areas, each representing a discrete type of risk:
- Bugs/Web server misconfigurations permit unauthorized remote users to:
- Steal confidential or sensitive information.
- Execute commands or code on the server designed to alter or control the system in some way—for example, planting rootkits and other back doors.
- Enumerate services on the host server that allow the user to compromise the system.
- Initiate denial-of-service (DoS) attacks designed to prevent the server from fulfilling its typical role.
- Browser-side risks:
- Active content that crashes the browser, damages the user's system, breaches the user's privacy, or merely creates a disturbance
- The misuse of personal information that the user provides
- Capturing network data transmitted between a browser and the server through network eavesdropping, such as sniffing. Eavesdroppers can operate from any point on the pathway between the browser and server, including:
- The browser-side network connection.
- The server-side network connection, including intranets.
- The user's ISP.
- The server's ISP or regional access provider.
What can attackers do with all these tools and options at their disposal? A lot, actually, including performing attacks such as:
- Structured Query Language (SQL) injection: An attacker can use SQL injection to manipulate a hosted database with the goal of changing or extracting data from it in ways not intended by the original design (see the sketch after this list).
- Defacement: An attacker can change the content of a Web site to something else, such as to push a political agenda.
- Creating a back door: This attack can be used to "drop off" a utility or some other piece of software with the goal of using it later to perform such actions as launching an attack or spying.
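To make the SQL injection risk concrete, here is a minimal Python sketch contrasting a query built by string concatenation with a parameterized one. It uses an in-memory SQLite database; the table, column, and payload are invented for illustration.

```python
import sqlite3  # stand-in for any SQL database driver

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # a classic injection payload

# VULNERABLE: the input is concatenated directly into the SQL text, so
# the payload rewrites the WHERE clause and matches every row.
rows = conn.execute(
    "SELECT name FROM users WHERE password = '" + user_input + "'"
).fetchall()
print("concatenated query returned:", rows)  # [('alice',)]

# SAFER: a parameterized query treats the input strictly as data, so
# the payload matches nothing.
rows = conn.execute(
    "SELECT name FROM users WHERE password = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)  # []
```

Automated scanners probe for exactly this kind of flaw by submitting payloads like the one above and watching how the application responds.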
Now, I can't list every possible attack, but it's important to realize that an attacker can do any number of things to compromise or alter a Web site or server using methods both simple and complex. Your job as a Web master is to counter this threat using the tools at your disposal. In this case, you can use automated testing to help secure your Web server.
In today's marketplace, plenty of automated testing options are available, each offering its own benefits and features. Before I get into some of these options, let's explore some of the reasons you would even want to use automated testing.
Automated testing offers many benefits to those looking to analyze their hosting infrastructure—for example:
- Structured, consistent testing of hosts and sites.
- Rapid testing of multiple hosts, sites, and applications.
- Simple configuration of testing suites.
- Most suites are designed to catch the more common problems.
- Some suites are easily upgradeable to include threats and vulnerabilities that have emerged since the package was created.
In addition to the options—both proprietary and open source—that you can use to perform automated scanning, you have to understand the general process involved. You can modify the process here to suit just about any enterprise environment.
When designing and building an enterprise scanning program, it's important to realize that you can't apply an off-the-shelf solution. Generally, the process requires a substantial amount of planning to be successful and produce useful results for the organization. The initial planning process must address the technical, political, cultural, and regulatory issues that apply to the enterprise. Keep in mind that the information gathered during the initial process will vary widely depending on the organization that is the target of the scan.
Although each organization has its own objectives and its own environment that is the target of the scan, some general guidelines apply to the process:
- Do your homework: Don't assume you know every detail about your environment: You probably don't know every proverbial nut and bolt inside the organization (especially in an enterprise environment). Far too often, I've seen the security officials or individuals in charge let their ego get in the way and assume that they know it all, which generally is not true. Don't be afraid to ask questions of the IT department and any other department involved so that you can learn more about their situation, needs, and other specifics of their corner of the universe.
- Confidentiality: Because you're in essence performing a vulnerability scan that by design will uncover "holes" in the existing hosting and other parts of the infrastructure, the temptation is definitely there to keep things secret. In some situations, it's certainly worthwhile to keep the scanning process under wraps—but generally, this isn't the case. Keeping the vulnerability-scanning process secret can lead to several problems, such as setting off alarms when someone detects the scan occurring. Indeed, to get the best results, scanning should involve not only those directly responsible for security but also IT administrators, developers, and others who will have their environment scanned. Remember that the result of the scan is to generate an outcome that you can use to improve security and assist everyone in addressing issues. Remind everyone involved that the goal of the scan is to improve awareness, not to police how people are doing their jobs.
- Scanning is a team effort: This point is nothing if not an extension of the prior point. For a scan to be as successful as possible and produce the best results, everyone in the organization who has even a minor role in the systems affected should be involved in the scan. In the previous point, you gathered information from everyone who may be involved with the systems being scanned; in this point, you're making sure they are informed about the exact nature of the scan and when it will occur. It's important to realize that without this step, the effect of the scan could be dramatic, causing everything from performance slowdowns to, in some cases, system crashes. Notifying personnel who manage and control the systems being scanned first makes sure that they are not caught unawares and second makes sure that they are on hand in case a system does crash or does something unexpected. Remember: The goal of your scan is to increase awareness and the security of your hosting and other infrastructures. Bringing those infrastructures down or slowing them down goes counter to that goal.
- Risk versus reward: Because of the way scanning works, it's entirely possible that the scan may produce results that are not only unexpected but completely unpredictable. When in progress, scans have been known to produce their own mini-DoS events or crash a system. This is where you have to make a decision: Does the benefit of knowing what can happen if a system crash or DoS attack occurs outweigh the risk of crashing a system or producing a DoS event during testing? Furthermore, will generating these results allow you to put countermeasures in place to reduce or eliminate the risk down the road? At this point, it's also worth noting that a lot of the scanning software you may choose to use is not only available to you but also to attackers. Additionally, a lot of the scanning products provide the option of running more or less aggressive scans depending on your needs, with the more aggressive scans having the potential to take down a system.
- Scan in phases: Sometimes, it's more advantageous to perform scans in phases rather than all at once. In addition, it may be beneficial to allow each administrator or developer to run the scans on his or her own schedule to avoid scanning at an inopportune time (such as when an important project is underway). Generally, performing a scan in phases produces results as good as, or similar to, those an organization-wide scan performed all at once would have generated. Consider temporarily excluding a system from a scan and scanning it in a later phase when the risk of taking it down would otherwise outweigh the benefits of the scan. Also, by allowing each department to perform its own scans on its own schedule, those departments can perform mini-scans that catch problems more quickly and address interim issues.
Now, how does the process of planning for a scan take place? As with a lot of things, you have to have the right tools to get the job done. In this case, I recommend breaking out your favorite spreadsheet program to assist you in the data-gathering process.
In this first phase, you're trying to get a complete (or as complete as possible) picture of the systems that will be part of the scan. These systems should ideally include all the systems responsible for hosting your application as well as any supporting systems.
How you collect this information really depends on what you currently have in place. For example, some organizations have enterprise-level systems-management software that already collects information on all the hardware and software present in the organization. If your organization falls into this category, it may be fairly easy for you to generate reports in CSV or another spreadsheet-compatible format. If you aren't fortunate enough to have such tools at your disposal, you may have to do it the old-fashioned way: locate and inventory each system manually.
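As a minimal sketch of this data-gathering step, assuming your systems-management tool can export a CSV file (the file name and column names here are hypothetical), loading the inventory for later processing might look like this:

```python
import csv

# Hypothetical export from a systems-management tool; the file name and
# the hostname/ip/os/criticality columns are assumptions for illustration.
with open("inventory.csv", newline="") as f:
    inventory = list(csv.DictReader(f))

print(f"Loaded {len(inventory)} systems")
for system in inventory[:5]:  # spot-check the first few records
    print(system["hostname"], system["ip"], system["os"], system["criticality"])
```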
Note: Make sure that you have as complete a picture as possible to perform the best and most successful scan. After you have the information and have identified the hosting infrastructure systems, you can move on to the next phase: categorizing systems.
You can now use the information you gathered in phase 1 to break it down into more useful categories. Consider that as an enterprise-level organization, you probably have hosting environments in many geographic locations as well as across many different types of systems. The goal of this phase is to break the target environments down into groups that are as discrete as possible to make the intended scan as efficient as possible.
There are several ways to categorize a system: Which one you use is ultimately up to your specific needs. Generally, you should break systems into categories using factors such as these:
- Operating system and version: Useful in targeting scans, interpreting results, and tracking problems with specific operating system versions
- Test versus production: Identifies which servers are running which applications and whether they carry test or production workloads
- Importance or criticality: Helps target which systems need to be handled more carefully (in other words, more or less aggressive scans) and which should be monitored more closely just in case the scan causes an unexpected outcome (read that as crashes the operating system).
You may also choose to categorize systems based on region, who manages the system, or any one of a number of factors.
After you've completed an accounting of all target systems, you can move on to the scanning phase. When preparing to scan systems, you should gather the specific IP addresses and host names of the systems to be scanned (whichever your scanning tool requires). This information should be available from the systems listed in your report or spreadsheet.
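If your spreadsheet records host names but your scanning tool wants IP addresses, a quick resolution pass can fill the gap. A minimal sketch, with hypothetical host names:

```python
import socket

hostnames = ["web01.example.com", "db01.example.com"]  # hypothetical

for hostname in hostnames:
    try:
        print(hostname, "->", socket.gethostbyname(hostname))
    except socket.gaierror:
        # Flag systems that don't resolve so the inventory can be fixed
        # before the scan rather than discovering the gap mid-scan.
        print(hostname, "-> unresolved; check the inventory record")
```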
When creating a list of systems to be scanned, consider breaking them down into similar groups to be scanned, such as all the Linux® or UNIX® systems, to make sure that the correct scan is performed. Additionally, grouping systems by type enhances performance, as you can load only those plug-ins required to scan the operating system and hosting environment in question.
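Continuing the inventory sketch above, a few lines of Python can split the systems into per-operating-system groups and write each group to its own target file, one address per line, which is a list format that scanners such as nmap accept (the file and column names remain assumptions):

```python
import csv
from collections import defaultdict

# Reload the hypothetical inventory from the earlier sketch.
with open("inventory.csv", newline="") as f:
    inventory = list(csv.DictReader(f))

# Group systems by operating system so that each group can be scanned
# with only the plug-ins and settings appropriate to it.
groups = defaultdict(list)
for system in inventory:
    groups[system["os"]].append(system["ip"])

# Write one target file per group, one address per line.
for os_name, addresses in groups.items():
    filename = f"targets-{os_name.lower().replace(' ', '-')}.txt"
    with open(filename, "w") as f:
        f.writelines(address + "\n" for address in addresses)
    print(f"{filename}: {len(addresses)} hosts")
```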
The next step in scanning is to create scan jobs that include each group of systems to be scanned. Consider giving your scan jobs meaningful names, such as Test Servers in Las Vegas, Production Servers in Napa, or Development Servers in Placerville, to help identify the results and type of each scan later on. Ideally, the scan job names correspond to the different groups of servers you have collected previously; but in any case, make sure they are named something that you can understand and identify later on.
At this point, you can commence with the actual scanning process. Depending on your scanning software, the process for initiating the scan will vary to such an extent that this article cannot detail each one here.
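As one illustration only, here is a hedged sketch that launches the open source nmap scanner against one of the target files written earlier. Commercial scanning suites have their own job-submission interfaces; the job and file names below are assumptions.

```python
import subprocess

# Service and version detection scan (-sV) against the hosts listed in
# the target file (-iL), saving XML output (-oX) for the analysis phase.
# Requires nmap to be installed and on the PATH.
job_name = "production-servers-linux"  # hypothetical scan-job name
subprocess.run(
    [
        "nmap", "-sV",
        "-iL", "targets-linux.txt",
        "-oX", f"scan-{job_name}.xml",
    ],
    check=True,
)
print(f"Scan job '{job_name}' complete; results are in scan-{job_name}.xml")
```

Only ever point a scan at systems you are authorized to test, and remember the earlier caution that more aggressive scan settings can take a system down.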
In this phase, you take the reports generated from the previous phase and look for problems to be addressed. When analyzing data, look for the problems that have been uncovered, and categorize them by how critical or dangerous they are—for example:
- High: Problems that have a serious risk level associated with them and the potential to seriously compromise the enterprise
- Medium: Problems that are potentially dangerous but have a risk level that makes them unlikely (but still possible) to shut down production if exploited
- Low: Issues that have very low risk and no potential of halting production or removing systems from use
Because the problems you encounter will be unique to your environment, you'll need to evaluate each according to your own needs and environment. To properly prioritize the problems identified, consider involving experts from each department or organization involved with the hosting solution, and seek the input of each regarding how serious the problems are. If you seek pure, undistilled feedback, consider using the Delphi method, in which each group submits its feedback to the team anonymously.
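As a minimal sketch of this triage step, assuming each finding has already been reduced to a numeric risk score (for example, a CVSS-style value from 0 to 10) and noting that the findings below are invented for illustration, the High/Medium/Low categories might be applied like this:

```python
# Invented findings; in practice, these come from your scanner's report.
findings = [
    {"host": "web01", "issue": "SQL injection in login form", "score": 9.1},
    {"host": "web02", "issue": "Directory listing enabled", "score": 5.0},
    {"host": "db01", "issue": "Verbose server banner", "score": 2.3},
]

def severity(score):
    """Map a 0-10 risk score onto the High/Medium/Low categories above."""
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    return "Low"

# Review the most dangerous problems first.
for finding in sorted(findings, key=lambda f: f["score"], reverse=True):
    print(severity(finding["score"]), "-", finding["host"], "-", finding["issue"])
```

The exact thresholds are a policy decision; the point is that a consistent, repeatable mapping keeps every department working from the same definitions.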
The skills and competencies involved in this process vary—sometimes wildly. However, there are a few constants:
- Management: Proper management of links and applications can greatly reduce the risk of threats facing a hosting infrastructure.
- Server knowledge: You need to understand the hardware and software environment responsible for hosting the applications and services.
- Patch management: You need the ability to quickly locate and apply new patches, service packs, and other updates to ensure that your hosting infrastructure is not vulnerable to new attacks. (A sketch for locating pending updates follows this list.)
- Backup: Employ a backup technology of any type to prevent total loss of data and environmental information in the event of a disaster or attack.
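As a hedged sketch of the "locate" half of patch management, referenced above: on a Debian or Ubuntu host, you could list packages with pending updates like this (other platforms have their own equivalents, such as yum check-update):

```python
import subprocess

# List packages with pending updates on a Debian/Ubuntu system.
result = subprocess.run(
    ["apt", "list", "--upgradable"],
    capture_output=True,
    text=True,
)
pending = [line for line in result.stdout.splitlines() if "upgradable" in line]
print(f"{len(pending)} packages have pending updates")
for line in pending[:10]:  # show the first few
    print(line)
```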
The tools involved in this process are legion. Here are a few of the more popular choices:
- Vulnerability scanners: This software is designed to look for common or obscure vulnerabilities in a system and, in some cases, recommend remedial action.
- Testing: Application testing is an invaluable technique that you can employ to protect your organization. Ideally, this testing will be done extensively in an isolated lab environment with the intention of simulating as closely as possible real-world conditions to catch issues early.
- Configuration planning: Proper planning of your intended configuration can introduce a powerful measure of protection against misconfiguration later on. Although an exact percentage would be hard to nail down, it's safe to state that a large—too large, in fact—number of problems result from misconfiguration or sloppy configuration of systems.
- High availability: Planning for high availability can be a great defense in the enterprise when safeguarding against failures. Technologies such as clustering, RAID, and other types of fault tolerance and redundancy can provide a tremendously powerful hedge against potential problems and ensure that valuable data and applications are not offline. Keep in mind that high-availability solutions can be hardware, software, or a hybrid solution consisting of both.
This article looked at some of the common threats facing Web servers and the techniques that you can use to identify and mitigate them. Careful planning of the scanning process will yield the best and most useful results and allow for proper remediation of issues.
Sean-Philip Oriyano has been actively working in the IT field since 1990. Throughout his career, he has held positions ranging from support specialist to consultant and senior instructor. Currently, he is an IT instructor who specializes in infrastructure and security topics for various public and private entities. Sean has instructed for the U.S. Air Force, U.S. Navy, and U.S. Army at locations both in North America and internationally. He is certified as a CISSP, CHFI, CEH, CEI, CNDA, SCNP, SCPI, MCT, MCSE, and MCITP, and he is a member of the EC-Council, ISSA, eLearning Guild, and InfraGard. You can reach Sean at email@example.com.