Tivoli Service Automation Manager (TSAM) 7.2.2 introduces the concept of an extension, a set of TSAM software components that adds capabilities to the TSAM platform. An extension typically does one or both of the following (but is not limited to them):
- Implements a new IT service automation solution, which in TSAM is called a service definition; for example, a storage-as-a-service solution to offer home directories to the students of a university.
- Adds capabilities to existing service definitions; for example, extends the TSAM out-of-the-box as-a-service solution, enabling it to attach extra disks to virtual machines in addition to the boot disk.
Extensions are developed and released outside the development cycle of TSAM by IBM or by Customer Services Teams. Extensions developed by IBM are released on the Integrated Service Management Library free of charge for TSAM customers; they come with installation and configuration documentation and a user's guide that follows IBM standards. The IBM Tivoli Service Automation Manager Information Center is the entry point for online documentation of the extensions released by IBM. (See Resources.)
Among others, IBM has released two extensions that manage the configuration of network devices, adding security, scalability, and redundancy to the projects of virtual servers created with TSAM:
- IBM Tivoli Service Automation Manager 7.2.2 extension for Juniper SRX Firewall
- IBM Tivoli Service Automation Manager 7.2.2 extension for F5 BIG-IP Load Balancer
The Juniper SRX Firewall extension makes it possible to automatically confine the virtual servers provisioned with a TSAM project in a VLAN/subnet protected by the Juniper SRX enterprise firewall using a set of default rules. Firewall rules can be further refined by the cloud administrators during the life cycle of the project using the Modify Firewall Rules service offering provided by the extension. The default rules can be customized during the initial setup of the extension, based on the customer's needs.
The extension for F5 BIG-IP Load Balancer makes it possible to put a "virtual load balancer" in front of the virtual servers provisioned with a TSAM project to augment the scalability and redundancy of the applications installed on the servers: An application can be advertised at a public Virtual IP address (VIP) in the TSAM project's VLAN/subnet by creating a Load Balancer Policy. A Load Balancer Policy is identified by the VIP:port used to reach the application and by the cluster of virtual servers of the TSAM project that run the application.
These features enable an enterprise customer to provision tiered business applications (like a J2EE application) to its branch offices, business partners, and clients in a fast, reproducible, secure, scalable, and redundant way.
The first article on this topic defines a scenario in which the desired result is to securely deploy a three-tier J2EE application to the cloud, and it demonstrates how to set up and provision extensions in TSAM to accomplish the deployment; we recommend you read that article to more fully understand the methods described here.
This article explains how to tune the load balancer policy to your system's needs; how to add and remove application servers as the workload of the business application changes; and how to modify the firewall rules and why you might need to do that.
A quick revisit to the scenario
The customer ABC is an enterprise that operates a private (on-premises) cloud solution based on TSAM 7.2.2 and the extensions for Juniper SRX Firewall and BIG-IP F5 Load Balancer. ABC provides services to branch offices, business partners, and clients through the TSAM platform. In particular, ABC relies heavily on the out-of-the-box capabilities of the platform to provision web applications that clients access over the standard HTTP/HTTPS protocols.
A typical web application ABC uses is a three-tier J2EE application with an HTTP server, an application server, and a database server. In a traditional deployment, these servers are logically isolated from each other by routers and firewalls that limit the network connectivity and access to the servers. The database server usually accesses its data from a secure storage zone.
Without downloading the extensions for Firewall and Load Balancer, ABC would have to set up its own processes for isolating the servers on different network segments and for balancing the requests to the application servers that are usually deployed to form a cluster. But ABC is a wise customer so, since it already has BIG-IP F5 and Juniper SRX network devices, it decides to set up the TSAM extensions in order to standardize the layout of its web applications.
For a more detailed look at this scenario, read the first article on this topic.
Now let's explore load balancing and adapting network firewall rules.
Introducing load balancing
The TSAM Extension for BIG-IP F5 Load Balancer provides service offerings for balancing the workload among the application servers of the business application. The companion article describes the steps needed to do that during the initial deployment (in the Provisioning section, step 4); however, it does not give much information on the attributes of a load balancer policy.
While you might leave default values selected when deploying the business application in your test lab, it is important to have a better understanding of the meaning of these attributes when deploying it to your clients because choosing the right load balancer policy can make the difference in terms of performance and resource consumption.
So let's start by learning how to customize load balancer policy.
Customizing load balancer policy
After the initial deployment, you can modify the way the load balancer manages the workload at any time using the Modify Load Balancer Policy service offering, which in essence requires the same parameters as the Create Load Balancer Policy offering (except for the name and the VIP:port attributes of the policy, which cannot be changed).
The load balancer policy is an internal artifact defined by the TSAM Extension for BIG-IP F5 Load Balancer that comprises the information needed to best configure the BIG-IP device. Its job is to simplify the task of the requestor of the business application as much as possible, hiding the complexity of the BIG-IP device behind a few intuitive parameters that allow the extension code to automate the BIG-IP configuration steps.
To get an idea of the kind of considerations the requestor of your business application would have to handle without the load balancer policy abstraction, think of a configuration profile as a container of settings that defines the behavior of network traffic (for example, HTTP) in the BIG-IP device, each setting enabling a particular feature. Examples of these features are:
- The insertion of headers into HTTP requests
- The compression of HTTP server responses
- Application authentication
- Connection pooling, and so on.
The load balancer policy abstraction automatically identifies the best configuration profiles to apply for configuring the BIG-IP device based on:
- The type of traffic (Protocol in Figure 1) among HTTP, HTTPS, TCP, and UDP.
- The type of the virtual IP address (Virtual Server Type in Figure 1), either Standard or Performance. Performance specifies a VIP for which you want to increase the processing speed of HTTP or layer 4 requests.
- Whether you want to reuse the connections between the BIG-IP and the balanced virtual servers (Connection Pool in Figure 1), which reduces the virtual server load by minimizing connection setup and tear-down. (Enabling this option enables the F5 Networks OneConnect feature, which optimizes the use of network connections by keeping server-side connections open and pooling them for reuse.)
- Whether you want client requests to be directed to the same virtual server in the project throughout the life of a session or during subsequent sessions (Session Persistence in Figure 1).
Figure 1. Load balancer policy attributes
For the standardized business application defined in this article and the companion, set a load balancer policy with:
- Protocol: HTTP.
- Virtual Server Type: Use Standard while you are testing the application; use Performance when deploying it to your clients.
- Connection Pool: This option is automatically selected if you select a Performance VIP; with a Standard VIP, you can decide whether to use it.
- Session Persistence: This parameter depends on the characteristics of the business application. If you need to set it, remember that the extension code configures the cookie persistence profile on the BIG-IP device, which uses an HTTP cookie stored on the client's computer to allow the client to reconnect to the same balanced virtual server it previously visited.
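To picture what cookie-based session persistence does, here is a minimal sketch (a toy model, not the BIG-IP implementation; the server and cookie names are illustrative): a request carrying a known cookie returns to the server that issued it, while a new client is assigned a server and receives a cookie.

```python
import itertools

class CookiePersistence:
    """Toy model of cookie-based session persistence (illustrative only)."""

    def __init__(self, servers):
        self._servers = itertools.cycle(servers)  # simple rotation for new clients
        self._sessions = {}                       # cookie value -> pinned server

    def route(self, cookie=None):
        """Return (server, cookie): reuse the pinned server if the cookie is known."""
        if cookie in self._sessions:
            return self._sessions[cookie], cookie
        server = next(self._servers)
        cookie = f"session-{len(self._sessions)}"  # stand-in for the real cookie value
        self._sessions[cookie] = server
        return server, cookie

lb = CookiePersistence(["vm-app-01", "vm-app-02"])
server, cookie = lb.route()            # new client: gets pinned and receives a cookie
assert lb.route(cookie)[0] == server   # same cookie always returns to the same server
```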
As you can see in Figure 1, additional information is carried by the load balancer policy:
- The routing algorithm used by the BIG-IP device to balance the workload among the virtual servers (Algorithm in Figure 1) can be round robin, least connections, or predictive. For any client request, BIG-IP runs the algorithm to select the appropriate virtual server to which to route the request. To do that it needs some monitoring and availability information about the balanced virtual servers, information that is periodically collected by a so-called monitor.
- The health check parameters (Probe Protocol, Check Interval, and Timeout in Figure 1) are used by the extension code to configure the BIG-IP monitor. The check interval specifies the frequency at which BIG-IP probes the virtual servers, and the timeout specifies the number of seconds a virtual server has to respond to the monitor request; after that, the server is considered down and is not used as a target for new connections.
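The interplay of the check interval and the timeout can be sketched as a simple polling pass (a toy model, not the BIG-IP monitor itself; the `probe` function and host names are illustrative): each pool member is probed, and a member that fails to answer within the timeout is excluded when routing new connections.

```python
def update_pool_status(members, probe, timeout=5.0):
    """Run one monitor pass: probe each member and record whether it is up.

    `probe(host, timeout)` is a stand-in for the real health check (for
    example, an HTTP GET); it returns True when the member answers in time.
    """
    return {host: probe(host, timeout) for host in members}

def eligible_targets(status):
    """Only members that passed the last health check receive new connections."""
    return [host for host, up in status.items() if up]

# Simulated probe: vm-app-02 is down and never answers within the timeout.
probe = lambda host, timeout: host != "vm-app-02"
status = update_pool_status(["vm-app-01", "vm-app-02", "vm-app-03"], probe)
print(eligible_targets(status))  # vm-app-02 is excluded from new connections
```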
What is the most suitable algorithm to use for the business application defined in this article, you might ask? Well, that depends on the characteristics of the application.
- The simplest algorithm is round robin; that's where BIG-IP selects the next virtual server in a circular list of those responding to the monitor.
- A more efficient algorithm is least connections; that's where BIG-IP maintains a statistic of the connections handled by each balanced virtual server and passes a new connection to the member of the pool that had the fewest connections over time.
- The best algorithm you can use is predictive; BIG-IP uses the same ranking method as least connections, but it also analyzes the trend to understand whether the performance of each node is improving or declining.
The better the algorithm, the more resources it requires from BIG-IP to perform its calculations. So you might want to select round robin when testing the business application, then change the load balancer policy to predictive when advertising it to your clients.
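The difference between round robin and least connections can be made concrete with a small sketch (illustrative only; the real selection runs inside the BIG-IP device, and the server names and connection counts are invented):

```python
import itertools

def make_round_robin(servers):
    """Round robin: hand out servers in circular order, ignoring their load."""
    cycle = itertools.cycle(servers)
    return lambda connections: next(cycle)  # same signature as least_connections

def least_connections(connections):
    """Least connections: pick the member currently holding the fewest connections."""
    return min(connections, key=connections.get)

connections = {"vm-app-01": 12, "vm-app-02": 3, "vm-app-03": 7}
pick = least_connections(connections)            # 'vm-app-02': fewest open connections
rr = make_round_robin(list(connections))
first_two = [rr(connections), rr(connections)]   # circular order, load ignored
```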
The load balancer policy belongs to a single TSAM Project and can span across virtual servers from a single project. More policies can be deployed in a single project but each one requires a dedicated VIP:port pair for external access.
Now let's examine how to add application servers as the workload grows and remove them as workload lessens.
Adding and removing application servers
As the number of hits to the business application increases, your clients can perceive a degradation in response time. You can anticipate this effect by running periodic trend analysis reports and adding application servers in a timely manner.
The administrator responsible for your business application does that using these service offerings:
- TSAM Add Server to Project service offering: Use this service to request another application server for the business application project.
- Modify Load Balancer Policy service offering: After TSAM provisions the new application server, use this service to add the server to the load balancer. On the Create Load Balancer Policy window, add a check mark to the hostname of the new server (see Figure 2).
Figure 2. Load balancer policy pool of virtual servers
When the opposite occurs (your trend analysis reports tell you about a decrease in the utilization of your business applications), you will probably want to release resources for other uses within your IT environment or to reduce the power consumption in your data center. You can decide whether to power off some of the application servers or to release them, which also frees the storage space in the hypervisors' data stores.
If the administrator of the business application decides to power off the application server, the TSAM Stop Server service offering is used, and that's all: the load balancer soon detects (through the monitor) that the server does not respond and stops routing connections to it. It starts routing connections again as soon as the server is restarted.
If the administrator decides to permanently remove the application server from the business application project, these service offerings are used:
- Modify Load Balancer Policy service offering: Use this service to remove the application server from the load balancer. Go directly to the second screen (Figure 2 again) and remove the check mark for the hostname of the server.
- TSAM Remove Server from Project service offering: Use this service to deprovision the server.
One of the best features TSAM offers to make cloud computing easier is the ability to automate management tasks including provisioning/deprovisioning servers as the workload changes. We'll look at that next.
Automate app server provisioning and deprovisioning
This is the fun part: it is like the previous section, except that you don't need the administrator to intervene each time a workload change occurs. The amount of computational resources required by a production web application can vary greatly, both in the frequency of change and in the amount of resources needed, depending on the actual type of service provided. Here are some example scenarios.
- An online electronics store application has a significant increase in transactions during the Christmas period, while during the remainder of the year the workload is stable, even if it increases from year to year.
- An enterprise application that tracks employees' presence peaks in the first days of each month and shows an almost flat line during the remaining days of the month.
- An online library usually faces a significant reduction of loans in the summer.
- An international online newspaper website faces unpredictable peaks when important events happen in the world.
A prompt and accurate reaction of the system, adapting the available computational resources to the actual requirements of the application, is critical to maintain the service level agreement (SLA) with the customers and to optimize the usage of the resources.
TSAM and the TSAM Extension for BIG-IP F5 Load Balancer provide the service offerings needed to respond to workload changes as described in the previous sections; however, these offerings do not provide an autonomic, self-tuning solution that can reconfigure the business application without the manual intervention of the administrator. That automation must be built by the customer.
The goal in this section is to explain how such automation can be built leveraging the public TPAE (Tivoli Process Automation Engine) and TSAM APIs, as well as other tools. A couple of approaches are illustrated for adding self-tuning capabilities to the TSAM solution. It is not in the scope of this article to go deep into the details of the solutions; rather, pointers are given to the different technologies available and to a common architecture used to accomplish the task.
First we'll describe the approach to the solution and the components required to implement it. Then we'll translate this generic approach into two different concrete implementations that rely on different technologies.
Figure 3 summarizes the components of the solution:
Figure 3. The components of a self-tuning solution for workload balancing
- The Controller drives the self-tuning operations of the system by comparing the desired SLA of the business application with the currently observed behavior and deciding the actions to take: It decides when to provision or deprovision application servers, when to switch on dormant application servers, and when to switch off idle ones.
- The Resources monitor is the component collecting the status of the web application and of the underlying virtual servers. This is the key input data to start the Controller processing. The pattern to feed the controller can be based on a periodic check of the resources or can be driven by events; the choice between the two depends on the actual technologies adopted for the monitor.
- The Resources manager is the actuator, the component in charge of provisioning/deprovisioning application servers and of updating the load balancer policy. In the scenario discussed in this article, it is TSAM with the Extension for BIG-IP F5 Load Balancer.
Two solutions are proposed here, both requiring TPAE customization and integration skills (Maximo Enterprise Adapter) and TSAM REST-based API skills:
- The first solution leverages the data collected by the BIG-IP device monitors to implement the Controller. This solution also requires development skills for getting BIG-IP monitors data.
- The second solution is an example of an event-driven Controller based on IBM Tivoli Monitoring (ITM) product. It also requires ITM skills to process events (TEC or Omnibus).
Let's look closer at both.
Leveraging BIG-IP monitors data
Figure 4 describes how the generic solution is implemented in this case.
Figure 4. Self-tuning solution for workload balancing: Leverage BIG-IP monitors data
The Controller is implemented on the TPAE platform using a scheduled task (TPAE Cron Task in the picture) that periodically checks the status of the resources and, when needed, takes the appropriate actions. The check on the resources is sent to the BIG-IP load balancer; an enablement library is included in the Controller to access the iControl interface of the device.
The iControl interface supports different programming environments (such as Java®, .NET®, Python, and Perl), which allows some flexibility in how to implement the solution. In the case of the Java programming language, the enablement library is a JAR file.
When the Controller detects an alert value in the statistics of a monitored application, it decides the consequent action needed to maintain the system's efficiency and availability. The action is executed by invoking the TSAM REST API and submitting the service request that accomplishes the provisioning task.
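A natural way to structure the Controller is to keep the decision logic separate from the TSAM call itself. The sketch below is purely illustrative: the thresholds, the statistics format, and the `submit_service_request` URL are assumptions, not the documented TSAM REST API.

```python
def decide_action(avg_connections_per_server, scale_up_at=80, scale_down_at=20):
    """Map a pool statistic (read from BIG-IP via iControl) to a TSAM action."""
    if avg_connections_per_server > scale_up_at:
        return "ADD_SERVER"        # provision one more application server
    if avg_connections_per_server < scale_down_at:
        return "REMOVE_SERVER"     # deprovision an underused server
    return None                    # pool is sized correctly: no request submitted

def submit_service_request(action, project_id):
    """Stand-in for the call to the TSAM REST API (URL and payload are hypothetical)."""
    import json, urllib.request
    body = json.dumps({"project": project_id, "action": action}).encode()
    req = urllib.request.Request(
        "https://tsam.example.com/maximo/rest/serviceRequest",  # assumed endpoint
        data=body, headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)

action = decide_action(avg_connections_per_server=95)
if action:  # only touch TSAM when the Controller decides a change is needed
    pass    # submit_service_request(action, project_id="PRJ-1001")
```

The pure `decide_action` function is the part a scheduled TPAE task would run on every cycle; keeping it free of I/O makes the scaling policy easy to test in isolation.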
Leveraging IBM Tivoli Monitoring
As depicted in Figure 5, the solution relies on the ITM agents to monitor the status of the application servers.
Figure 5. Self-tuning solution for workload balancing: Leverage ITM events
For that purpose, you have to install the ITM agents on the application servers and configure them appropriately, which can be done when requesting the TSAM Project of the business application or by embedding the ITM agent in the virtual image used to instantiate the application servers. The statistics data is then uploaded to the ITM server and forwarded to OMNIbus for event processing.
Once the event reaches OMNIbus, it is possible to invoke the TSAM REST API using a custom exit procedure; it is even possible to leverage the OMNIbus-Service Request manager integration to submit the provisioning requests.
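The event-driven variant can be pictured as a small handler that an OMNIbus exit procedure would call (a sketch under stated assumptions: the field names, severities, and summary strings below are invented, not the real OMNIbus event schema):

```python
def handle_event(event):
    """Toy handler for an ITM alert forwarded by OMNIbus.

    Maps an incoming event to a provisioning decision; the real exit
    procedure would then submit the decision through the TSAM REST API.
    """
    severity = event.get("Severity", 0)
    summary = event.get("Summary", "")
    if severity >= 4 and "CPU" in summary:
        return ("ADD_SERVER", event["Node"])     # scale up the stressed project
    if severity <= 1 and "idle" in summary:
        return ("REMOVE_SERVER", event["Node"])  # reclaim an idle server
    return None                                  # no provisioning request needed

decision = handle_event({"Severity": 5, "Summary": "CPU busy above threshold",
                         "Node": "vm-app-02"})
```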
Now let's look at the other major topic: How to adapt firewall rules when you need to modify security settings for a given enterprise application.
Adapting network firewall rules
The TSAM Extension for Juniper SRX Firewall automatically sets the firewall rules when you deploy a business application, based on the default rules that were defined when you set up the network templates during the configuration of the TSAM Extensions. You should not need to deal with firewall rules again.
Of course, how realistic is that scenario? There are always situations in which you need to modify the security settings for a given business application project. So you want to be able to see the current settings, right?
The TSAM Extension for Juniper SRX Firewall provides a service offering, Modify Firewall Policy, that is similar to the load balancer policy offering. The firewall policy is an internal artifact defined by the extension that comprises the firewall rules that apply to a TSAM project of virtual servers or, more precisely, to the subnet/VLAN of the project. Each project has its own firewall policy that you can change by adding, modifying, or removing individual firewall rules, as shown in Figure 6.
Figure 6. The firewall policies configured for the business application project
A firewall rule allows specific protocol traffic initiated from a source subnet to a destination subnet. Source and destination (Figure 6) usually are entire subnets specified using the CIDR notation (subnet address/number of bits of the IP address identifying the subnet). However, single IP addresses can be specified as both source and destination of a firewall rule in cases where you want to allow traffic for a specific host.
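The CIDR notation used for the source and destination fields can be explored with Python's standard `ipaddress` module; for example, to check that a host falls inside a rule's subnet, or that a single-host endpoint is simply a /32 network:

```python
import ipaddress

# A /16 destination covers the 65,536 addresses of the project subnet.
rule_destination = ipaddress.ip_network("192.168.0.0/16")

# A single-host rule endpoint is just a /32 network.
single_host = ipaddress.ip_network("192.168.10.5/32")

print(ipaddress.ip_address("192.168.10.5") in rule_destination)  # True: inside the /16
print(ipaddress.ip_address("10.0.0.1") in rule_destination)      # False: different subnet
print(single_host.subnet_of(rule_destination))                   # True: /32 within /16
```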
You might notice that you cannot specify a rule for denying traffic. Well, two of the extension's main goals are to:
- Simplify the task of the requestor of a business application and
- Reduce the security risks as much as possible.
As a consequence of that, the firewall policy always contains a hidden rule that cannot be changed: Deny all traffic. That way, your administrators can only allow traffic as exceptions to the deny-all rule. This is a simple, less risky implementation of this feature.
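The deny-all-plus-exceptions model is easy to sketch: traffic is allowed only if some explicit rule matches; otherwise the hidden default applies. This is a toy model of the policy semantics, not the SRX implementation, and the rule fields and addresses are illustrative:

```python
import ipaddress

def is_allowed(src, dst, protocol, rules):
    """Allow traffic only when an explicit rule matches; deny everything else."""
    for rule in rules:
        if (protocol == rule["protocol"]
                and ipaddress.ip_address(src) in ipaddress.ip_network(rule["source"])
                and ipaddress.ip_address(dst) in ipaddress.ip_network(rule["destination"])):
            return True
    return False  # the hidden, unchangeable deny-all rule

# One From-Internet-style exception: any source may reach the project subnet over TCP.
rules = [{"protocol": "tcp", "source": "0.0.0.0/0", "destination": "192.168.0.0/16"}]
print(is_allowed("203.0.113.9", "192.168.0.10", "tcp", rules))  # True: matches the rule
print(is_allowed("203.0.113.9", "192.168.0.10", "udp", rules))  # False: deny-all applies
```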
To simplify the job of the administrator even further, there are three types of firewall rules:
- From Internet rule: This allows specific protocol traffic initiated from the DMZ to the project subnet/VLAN. You can use the notation 0.0.0.0/0 to indicate any address; the From-Internet rule from 0.0.0.0/0 to 192.168.0.0/16 then means that any IP address in the DMZ can initiate a connection into the subnet 192.168.0.0/16.
- To Internet rule: This allows specific protocol traffic initiated from one of the applications servers of the project to the DMZ.
- From Other Project(s) rule: This enables you to open a project to communicate with other projects, such as a project that hosts shared services. Suppose the shared services project's subnet is 10.10.0.0/16 (the actual value depends on your network templates). In the shared services project, you set the From-Other-Project(s) rule from 0.0.0.0/0 to 10.10.0.0/16, meaning that any other project can initiate a connection and therefore can use the shared services. You complete the configuration by setting, on a project that needs to access the shared services (whose subnet is 192.168.0.0/16), the From-Other-Project(s) rule from 192.168.0.0/16 to 10.10.0.0/16.
In this article, we've explained how to handle some life cycle management aspects of a J2EE business application deployed in the cloud with the help of the TSAM Extensions for Juniper SRX Firewall and BIG-IP F5 Load Balancer. In particular, we explained how to properly tune the load balancer to handle the workload of your business application, how to respond to variations in the utilization of your business application by adding and removing application servers, and finally, how to effectively use the firewall to lock down access to your servers.
The initial provisioning of the business application is described in the companion article. There you can learn how to create a TSAM Project of virtual servers that sits behind a BIG-IP F5 load balancer and is confined in a VLAN/subnet managed by the Juniper SRX firewall.
- The companion article, Deploy a J2EE app with TSAM extensions, shows you how to set up and provision extensions in TSAM as the first step to securely deploy a three-tier enterprise application to the cloud.
For developer resources mentioned in this article:
- TSAM extensions developed by IBM are released from the Integrated Service Management Library
- IBM Tivoli Service Automation Manager Information Center is the entry point for online documentation of the extensions released by IBM
- You'll see many of the same concepts employed in developerWorks policy creation resources; many are listed in the Cloud computing: Build an effective cloud policy knowledge path which also shows you how to construct your own IT policy.
For information on how to perform tasks in the IBM Cloud, visit these resources:
- Upload and download files from a Windows instance.
- Install IIS web server on Windows 2008 R2.
- Create an IBM Cloud instance with the Linux command line.
- Create an IBM Cloud instance with the Windows command line.
- Extend your corporate network with the IBM Cloud.
- High availability apps in the IBM Cloud.
- Parameterize cloud images for custom instances on the fly.
- Windows-targeted approaches to IBM Cloud provisioning.
- Deploy products using rapid deployment service.
- Integrate your authentication policy using a proxy.
- Configure the Linux Logical Volume Manager.
- Deploy a complex topology using a deployment utility tool.
- Provision and configure an instance that spans a public and private VLAN.
- Secure IBM Cloud access for Android devices.
- Security considerations for virtual machine instances in the IBM Cloud.
- In the developerWorks cloud developer resources, discover and share knowledge and experience of application and services developers building their projects for cloud deployment.
- Find out how to access IBM SmartCloud Enterprise.
Get products and technologies
- Visit the IBM SmartCloud Enterprise site for current cloud offers.
- See the product images available for IBM SmartCloud Enterprise.
- Join a cloud computing group on developerWorks.
- Read all the great cloud blogs on developerWorks.
- Join the developerWorks community, a professional network and unified set of community tools for connecting, sharing, and collaborating.