Virtualization and Provisioning relationship
Virtualization is the concept of separating a physical asset from the logical function it performs. Virtualization allows you to pool and share resources so that they can be dynamically and automatically provisioned where the business need requires them, and de-provisioned (put back into the shared pool) when the need no longer exists. After a resource is de-provisioned, it is available to satisfy another business need.
Provisioning managers can take advantage of this virtualization to dynamically add or remove computing capacity, because the physical asset is separated from the business function, for example, processing incoming Web requests.
This section provides an overview of the products involved in this solution. We will highlight Enterprise Workload Manager (EWLM) and IBM Tivoli Intelligent Orchestrator (TIO) functions. The next section describes the components of the EWLM provisioning solution.
Enterprise Workload Manager (EWLM)
EWLM allows you to define business-oriented performance goals for work requests running on your servers. EWLM continually monitors that work as it traverses multiple servers (and server-types) and applications, and reports performance statistics based on the work requests running on the applications and servers in an EWLM management domain. The performance goals specify how the work should run while the performance statistics show how the work is actually performing.
An EWLM management domain consists of a domain manager, which monitors the work, and managed servers, where the work runs. The domain is displayed through a Web user interface called the EWLM Control Center.
Figure 1. EWLM environment
Let's take a closer look at the EWLM components.
EWLM Control Center
The EWLM Control Center is a Web-based user interface that provides performance goals and statistics views. It is in the EWLM Control Center that the service policies, including the goals, are defined and deployed to the managed servers. The EWLM Control Center has various views that show server, transaction class, and service class performance information.
EWLM domain manager
The EWLM domain manager is a software stack that consists of two operating system processes: one supports the Control Center, and one coordinates policy across the managed servers and aggregates performance statistics. The domain manager has an end-to-end view of the performance characteristics in the EWLM management domain. Therefore, it can surface the end-to-end performance data to TIO through the objective analyzer, which receives and processes the data and optimizes resources based on it.
EWLM managed server
The managed server provides the EWLM platform support to collect performance data and share it with the domain manager. It is a common component installed on each operating system instance that you want to manage. When the managed server is started, it immediately contacts the domain manager to indicate that it is part of the management domain and begins sharing the system's performance data.
Tivoli Intelligent Orchestrator
IBM Tivoli Intelligent Orchestrator (TIO) is an automated resource management solution. Through orchestrated provisioning, it provides the ability to manage the IT environment in real time, according to defined business policies, to achieve the desired business goals. For the purpose of this document, the defined business policies are obtained from EWLM.
TIO configures resources in a multi-application environment to balance end-user traffic demands, excess capacity, and service level targets. Using an adaptive control technology, the system predicts capacity fluctuations and facilitates dynamic infrastructure reallocation.
TIO automatically triggers the provisioning, configuration, and deployment performed by IBM Tivoli Provisioning Manager (TPM), which is part of the TIO product as well as being a standalone product. For more information on IBM Tivoli Intelligent Orchestrator (TIO), see:
Tivoli Provisioning Manager
IBM Tivoli Provisioning Manager (TPM) automates manual tasks of provisioning and configuring servers and virtual servers, operating systems, middleware, applications, storage, and network devices acting as routers, switches, firewalls, and load balancers. TPM allows you to create, customize, and quickly utilize best-practice automation packages. For more information on TPM, see
EWLM provisioning solution components
Now that you have an understanding of the EWLM and TIO functions, let us describe the EWLM provisioning solution in more detail. Functionally, the provisioning solution has three components:
- The provisioning manager
- The provisioning manager provides additional resources when those available to performance managers are not enough. It consists of three areas:
- Resource optimizer (TIO)
- A decision function that calculates "the best" allocation of resources (servers) to the present workload. If multiple inputs are used in this process, they may conflict. The resource optimizer resolves conflicts between various contributors to conclude which of the various reallocation choices is "best" through arbitration.
- Deployment engine (TPM)
- An implementation function that manages the workflow(s) associated with a decision made by the resource optimizer.
- Database of available hardware and software (DCM)
- A relationship registry function containing available resources and the relationships between those resources (a configuration database).
- The objective analyzer
- The objective analyzer (OA) contains the interfaces used by the provisioning manager. It periodically pulls resource information using the configuration information from the DCM. It processes performance data provided by the performance manager into a "Probability of Breach" which is used by the Resource optimizer. The objective analyzer function is logically part of the performance manager. For the EWLM and TIO Integration solution, we have an EWLM objective analyzer.
- In our environment, the EWLM objective analyzer completes the EWLM provisioning using TIO.
- The performance manager
- A performance manager allows you to define business goals and provides an end-to-end view of the actual performance relative to those goals. Some performance managers can manage resource allocations, and some can surface this performance information to an OA, which in turn manages the resource allocation.
In our environment, the performance manager is EWLM.
In summary, the components of the EWLM provisioning solution are depicted in Figure 2 and are described in more detail below, starting with the provisioning manager.
Figure 2. Components of the EWLM and TIO provisioning solution
1. Provisioning manager details
A system administrator can use provisioning software (such as TIO and TPM) to provision resources dynamically, improving the return on IT assets and increasing server utilization. TPM provides workflows for many of the manual, repetitive tasks performed by system, network, and storage administrators. Administrators who have implemented an objective analyzer and TIO can automatically analyze the EWLM performance data, provide this information to TIO, and have TIO orchestrate the automated provisioning of servers, if necessary. TIO senses why to take action, anticipates when to start provisioning, and prioritizes where to put those resources.
The provisioning manager consists of three areas:
- TIO - Resource optimization
- TPM - Deployment engine
- DCM - TPM database
The resource optimizer (TIO)
The role of the resource optimizer is to make capacity allocation decisions based on data in the form of a "Probability of Breach surface" (a "P(B) surface").
At each interval, the resource optimizer requests Probability of Breach surfaces for all clusters associated with the application. Because the membership of each cluster may have changed since the last interval, the database must be probed for each cluster to determine which servers are currently members.
Administrators can set the scope (minimum and maximum bounds) of the resources that are available to a cluster. The resource optimizer is then free to provision and de-provision those resources to the cluster while remaining within the specified range. The resource optimizer assumes that all resources in a pool share the same computational capacity. Finally, the optimizer sends its provisioning decisions to TPM for execution.
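The optimizer's decision can be pictured with a small sketch. The following Python is purely illustrative (it is not TIO's actual optimization algorithm): it greedily assigns spare servers to whichever cluster gains the largest reduction in probability of breach, while respecting each cluster's configured minimum and maximum bounds.

```python
# Illustrative sketch of a resource-optimizer decision (not TIO's actual
# algorithm): assign spare servers greedily to whichever cluster gains the
# largest P(B) reduction, within each cluster's min/max bounds.

def optimize(pb, bounds, spares):
    """pb: {cluster: {server_count: P(B)}}; bounds: {cluster: (min, max)}."""
    alloc = {c: bounds[c][0] for c in pb}          # start at each minimum
    for _ in range(spares):
        best, best_gain = None, 0.0
        for c, curve in pb.items():
            n = alloc[c]
            if n + 1 <= bounds[c][1]:
                gain = curve[n] - curve[n + 1]     # P(B) drop for one more server
                if gain > best_gain:
                    best, best_gain = c, gain
        if best is None:                           # no cluster benefits further
            break
        alloc[best] += 1
    return alloc

# Hypothetical P(B) curves for two clusters (probability of missing the goal
# as a function of the number of allocated servers).
pb = {
    "was_cluster":  {1: 0.90, 2: 0.40, 3: 0.10, 4: 0.05},
    "http_cluster": {1: 0.30, 2: 0.10, 3: 0.05, 4: 0.04},
}
bounds = {"was_cluster": (1, 4), "http_cluster": (1, 4)}
print(optimize(pb, bounds, spares=3))
```

With three spares, the sketch gives two extra servers to the WAS cluster (whose P(B) falls fastest) before the HTTP cluster receives one.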
Deployment engine (TPM)
To carry out a decision made by a resource optimizer, TPM looks up the application cluster for each application, takes a server from the resource pool associated with that application cluster, and provisions the server into it. To accomplish this, the administrator implements a workflow called Cluster.AddServer. As part of the workflow, TPM could install and start the EWLM managed server and any of the middleware products.
TPM database (DCM)
All of the servers and resources (the physical and logical) that make up an application in the data center are defined in a database called a data center model, or DCM. The DCM keeps track of the data center hardware and applications, as well as changes to the configuration. When a Tivoli workflow successfully completes a requested change to the data center, the updated data center model will reflect the current data center infrastructure.
In Figure 3, the Data Center Model Structure shows the objects and the relationships between the objects defined in the DCM, both the network infrastructure and the customer relationship model. Let's describe this further.
Figure 3. Data center model structure
The left side of Figure 3 shows the network infrastructure with the Switch Fabric definition at the top. The switch fabric is a group of related network infrastructure components, such as switches and routers. The switch fabric delimits the scope of a network where resources can be shared. For example, a resource pool cannot share its servers outside of its switch fabric. A data center model can have more than one switch fabric. A switch fabric can contain one or more subnets and associated virtual LANs (VLAN).
VLANs are a logical association of switch ports based on a set of rules or criteria, such as medium access control (MAC) address, protocol, network address, or multicast address. This concept permits devices in a network to be logically grouped into a single domain without requiring physical rearrangement.
A VLAN is associated with a Subnet definition; therefore, you need to create the Subnet definition first. To create and configure a resource pool or an application cluster, you must have a VLAN. Therefore, the logical order that we used to create and configure our assets and resources was:
Switch Fabric > Subnet > VLAN > Switch > Resource Pools > Servers
Figure 4 shows the definition of the switch fabric, subnet, and VLAN in the DCM; it begins with the switch-fabric, subnetwork, and vlan tags.
Figure 4. Switch fabric
Customer relationship model
The right side of the Figure 3 data center model structure shows the customer relationship infrastructure, with the cluster definition at the top. Here are the customer relationship model components:
Customer: A customer owns applications. Customers can be unique corporations or departments within a single corporation.
Application: A group of one or more clusters. A service level priority (silver, gold, or platinum) is assigned at this level. The objective analyzer is associated at this level also.
Application cluster: A grouping or container for like resources or servers that support an application. Automated resource allocation and deallocation occurs at the cluster level.
Resource pool: A grouping or container of available (de-allocated) servers that support one or more application clusters. This is also referred to as a spare pool.
Servers: The physical server definitions. Servers belong to, or are assigned to, pools and clusters.
Shown below is the logical order that we used to define our customer definition:
Customers > Application > Application Clusters / Resource Pool > Servers > Software Stacks > Software products
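The hierarchy above can be sketched as plain data structures. The class and field names in the following Python are illustrative only, not the actual DCM schema:

```python
# Minimal sketch of the DCM customer relationship hierarchy using plain
# Python dataclasses. Names and fields are hypothetical, not the DCM schema.
from dataclasses import dataclass, field

@dataclass
class Server:
    hostname: str

@dataclass
class ResourcePool:
    name: str
    servers: list = field(default_factory=list)   # de-allocated spares

@dataclass
class ApplicationCluster:
    name: str
    pool: ResourcePool                            # pool it can draw from
    servers: list = field(default_factory=list)
    min_servers: int = 1
    max_servers: int = 4

@dataclass
class Application:
    name: str
    priority: str                                 # silver, gold, or platinum
    clusters: list = field(default_factory=list)

@dataclass
class Customer:
    name: str
    applications: list = field(default_factory=list)

# Build a tiny model mirroring the order shown above.
spare = ResourcePool("Spare Pool", [Server("blade1"), Server("blade2")])
was = ApplicationCluster("EWLM WAS Cluster", pool=spare)
app = Application("EWLM Trade App", priority="gold", clusters=[was])
customer = Customer("ITSO", applications=[app])
```

Note that the service level priority sits on the application, while min/max server bounds sit on the cluster, matching the component descriptions above.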
We have provided a simple example of the customer relationship model using the configuration shown in Figure 5.
Figure 5. Single multitier application and resource pools
In this example, we defined three clusters in the DCM that are associated with a customer application: one cluster for the HTTP servers, one for the WebSphere® Application Servers, and one for the database server.
All servers that are part of the TIO defined customer application are part of the same EWLM management domain.
We also defined two resource pools. Resource pools consist of a list of available servers that TIO can use to provision into the clusters, as needed. In this example, "Spare Pool" is available for provisioning into "EWLM HTTP Cluster" or "EWLM WAS Cluster". "Resource Pool B" is available for provisioning into "EWLM DB2 Cluster".
Let's continue with our example and say that the application is running at a higher rate than normal. Based on EWLM performance information, the EWLM objective analyzer determines that "Cluster B" is contributing to the performance goal not being met. It determines that there is a CPU delay on the servers that belong to Cluster B and suggests that TIO provision a new server from "Resource Pool A" into "Cluster B" to absorb some of the work (thus relieving some of the CPU consumption on Cluster B). Depending on what is included in the Cluster.AddServer logical device operation, the provisioning can include anything from building the operating system, installing middleware, and setting up IP addresses, to starting the necessary software to include the spare server in the cluster.
Let's conclude our example by saying that the application is now running at a normal rate. When the objective analyzer decides that a server can be de-provisioned, the logical device operation called Cluster.RemoveServer is called. The de-provisioning can include anything from stopping the software and removing IP address information to powering off the box.
2. Objective analyzer details
Tivoli Intelligent Orchestrator gets EWLM performance information using a technology called an objective analyzer (OA). The EWLM objective analyzer enables TIO to make better provisioning decisions based on the performance information that it receives from EWLM for the resources in the clusters. The OA determines how the applications and servers in this environment are affecting the business goals for a particular workload. The OA is associated with an application. Periodically, the performance manager sends performance information to the objective analyzer. The objective analyzer tells Tivoli Intelligent Orchestrator how likely the cluster is to fail to meet its performance goals using a given number of machines; this is called the probability of breach. By repeatedly calling all of the currently assigned objective analyzers with different numbers of machines, Tivoli Intelligent Orchestrator can determine the best course of action for the current cycle. Although Tivoli Intelligent Orchestrator comes with a default OA, called Capacity on Demand, that OA is based on a Web workload environment. The objective analyzer that we use is based on performance metrics obtained from EWLM, which we discuss later.
Using the OA technology, users can model their workloads to determine what resources a particular cluster needs. Tivoli Intelligent Orchestrator, in turn, determines the priority of the cluster to determine if resources are available to service the cluster's requirements. Resources not in use can be provisioned into the cluster. If no resources are available, Tivoli Intelligent Orchestrator might determine that clusters with a lower priority must give up resources so they can be provisioned into higher-priority clusters.
For more information on objective analyzer and probability of breach, see:
Point of contact
The objective analyzer is the point of contact between the capacity analyzer and the resource optimizer. This component exposes the required external interfaces to TIO/EWLM.
To accomplish true independence, the OA should abstract all TIO-specific data and methods, such as those used to access the data in the DCM, as well as exceptions that may be exposed by the capacity analyzer.
The capacity analyzer
The capacity analyzer (CA) processes performance data and turns it into recommendations representing the likelihood that, over time and for a range of system allocations, the performance goals defined in the EWLM service policy will be breached.
The capacity analyzer is the "hub" of the provisioning model, the destination of the broadcast performance reporting data, and creator of P(B) surfaces for the resource optimizer.
The CA is responsible for working through the EWLM reporting data and generating Probability of Breach surfaces for each service class in a cluster. The P(B) surface represents the probability that the service level objective for a particular service class on a cluster will miss its performance goal.
Instead of using the EWLM performance indices (PIs) to communicate the performance impact of capacity changes, a probability value is used. This value represents the probability that a goal will be missed (breached), hence the name probability of breach. To project the probability value, the goal (service class period), and the number of allocated servers must be known. In order to allow the resource optimizer to evaluate a range of possible values using data from a single interaction, a set of probabilities are calculated. From the CA perspective, the relevant variables are:
- The amount of resource available to the cluster (number of servers)
- The identity of the servers in the cluster by fully qualified hostname
- The length of time to consider in the P(B) surface (time)
- The service class containing the goal
The CA is interested in the servers comprising a cluster, but not in the specific details about the cluster as viewed from the perspective of the provisioning manager.
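As a rough illustration of how a capacity analyzer could derive a P(B) value per candidate server count, the toy sketch below scales recent performance-index samples by an assumed inverse relationship between PI and server count. That scaling model is our simplification for illustration, not EWLM's actual calculation.

```python
# Toy sketch of turning EWLM performance-index samples into a P(B) surface.
# The inverse-scaling model (PI proportional to the per-server share of the
# work) is an assumption for illustration, not EWLM's actual math.

def pb_surface(pi_samples, current_servers, candidate_servers):
    """Return {n: estimated probability that PI > 1 with n servers}."""
    surface = {}
    for n in candidate_servers:
        # Assume PI scales with the per-server share of the work.
        scaled = [pi * current_servers / n for pi in pi_samples]
        breaches = sum(1 for pi in scaled if pi > 1.0)
        surface[n] = breaches / len(scaled)
    return surface

samples = [0.8, 1.1, 1.4, 0.9, 1.2]   # recent PI observations with 2 servers
print(pb_surface(samples, current_servers=2, candidate_servers=[1, 2, 3, 4]))
```

The resulting mapping from server count to breach probability is the one-goal, one-time-horizon slice of the P(B) surface that the resource optimizer consumes.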
3. Performance manager (EWLM) details
In a management environment, a performance manager needs to have performance goals defined for the workload that is running in the environment. The performance manager needs to monitor and analyze the workload performance information to determine if the goals are being met. If the goals are not being met, it needs to surface that the goal will be missed, or to automatically determine what to do so that the goal will not be missed and take action to dynamically resolve the performance problems.
A performance goal specifies how fast work should run in your environment. Performance goals are defined for each service class in the EWLM service policy. EWLM provides a convenient measurement to indicate if the work is meeting its goal, called the performance index (PI). From an external point of view, the performance index is viewed as:
- PI less than 1 - exceeding goal
- PI = 1 - meeting goal
- PI greater than 1 - missing goal
If the work is not meeting its goal (PI>1), an action needs to be taken to help achieve that goal, which could mean that new resources need to be provisioned to accommodate the additional processing of the work.
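For a response-time goal, the PI can be read as the achieved time divided by the goal time. The helper below is a simplified sketch of that reading, not EWLM's full PI definition, which also covers other goal types:

```python
# Simplified view of the performance index for a response-time goal:
# PI = achieved response time / goal response time.

def performance_index(achieved_ms, goal_ms):
    return achieved_ms / goal_ms

assert performance_index(250, 500) == 0.5   # PI < 1: exceeding the goal
assert performance_index(500, 500) == 1.0   # PI = 1: meeting the goal
assert performance_index(750, 500) == 1.5   # PI > 1: missing the goal
```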
Analyzing the EWLM performance data
Before the integration of EWLM and TIO, system administrators could manually provision new resources (or de-provision resources) based on the performance information provided by EWLM. The EWLM Control Center shows performance statistics for the defined service classes. If a service class shows a PI greater than 1, system administrators can analyze the performance data provided in the EWLM Control Center to determine which server is causing the performance goal not to be met. See the IBM Redpaper "Enterprise Workload Manager - Interpreting Control Center Performance Report," REDP-3963-00, for a detailed description of how the system administrator can use the EWLM Control Center views to determine the root cause of a performance problem. Appropriate steps can then be performed, which may include provisioning a new server to absorb the additional workload.
Integrating EWLM and TIO
With the integration of EWLM and TIO, the EWLM performance information is passed to TIO through the OA, so the provisioning manager can automatically provision resources (or de-provision resources) using the EWLM performance data.
Our Tivoli database (DCM) definition
The TIO database has the complete customer application definition and the network infrastructure containing all of the potential resources that the application can use (as shown in Figure 6). We defined an application that our customer will use. For the application, we defined a cluster. In the cluster, we defined the tier (hop), the resource pool, and the minimum and maximum number of servers it can use. TIO uses this information when provisioning to determine configuration limitations.
Figure 6. TIO application definition - EWLM trade application
Setting up the Tivoli workflows
Based on manual determination from the EWLM performance data (if we did not have the EWLM objective analyzer), we could use the TIO administrative console to manually provision servers into an application cluster, as shown in the bottom two entries of Figure 7:
Figure 7. DCM application association with the OA
If a server needs to be provisioned into an application cluster, the logical device operation Cluster.AddServer is called. Therefore, we needed to add workflows to the logical device operation Cluster.AddServer in each of the application clusters, which allows us to provision a server, start EWLM, and start the needed middleware product. Although there are many actions that can be performed, we wanted our workflows to include:
- Getting the EWLM UUID (EWLM managed server identity properties) and associating the UUID with the DCM server ID to allow us to take the information that EWLM provides us and associate it with the TIO customer relationship definitions.
- Start the EWLM managed server code on the provisioned server.
- Start the IBM HTTP Server or the WebSphere Application Server, based on which application cluster that it will be provisioned into.
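The steps above can be sketched as a plain Python sequence. Real TPM workflows are written in the TPM workflow language; the classes and helper names here are hypothetical stand-ins for the actual workflow operations:

```python
# Sketch of the Cluster.AddServer workflow steps in plain Python.
# Classes and helpers are hypothetical stand-ins; real TPM workflows are
# written in the TPM workflow language.
import uuid

class Server:
    def __init__(self, sid, hostname):
        self.id, self.hostname, self.running = sid, hostname, []

class Cluster:
    def __init__(self, name, tier):
        self.name, self.tier, self.servers = name, tier, []

class DCM:
    def __init__(self):
        self.uuid_map = {}
    def associate(self, server_id, ewlm_uuid):
        self.uuid_map[server_id] = ewlm_uuid   # map EWLM UUID to DCM server ID

def get_ewlm_uuid(server):
    return str(uuid.uuid4())   # stand-in for reading the managed server identity

def start(server, product):
    server.running.append(product)             # stand-in for starting software

def add_server_workflow(server, cluster, dcm):
    # 1. Get the EWLM UUID and associate it with the DCM server ID.
    dcm.associate(server.id, get_ewlm_uuid(server))
    # 2. Start the EWLM managed server code on the provisioned server.
    start(server, "EWLM managed server")
    # 3. Start the middleware appropriate to the target cluster's tier.
    middleware = ("IBM HTTP Server" if cluster.tier == "http"
                  else "WebSphere Application Server")
    start(server, middleware)
    cluster.servers.append(server)

pool_server = Server("server-42", "blade42")
was_cluster = Cluster("EWLM WAS Cluster", tier="was")
dcm = DCM()
add_server_workflow(pool_server, was_cluster, dcm)
```

A matching Cluster.RemoveServer sketch would simply reverse these steps: stop the middleware, stop the EWLM managed server, and remove the server from the cluster.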
If the objective analyzer decides that a server needs to be de-provisioned, the logical device operation called Cluster.RemoveServer is called. Therefore, we needed to add a workflow to the logical device operation Cluster.RemoveServer in each of the application clusters, which allows us to de-provision a server, stop EWLM, and stop the previously started middleware product. For our environment, we wanted our workflows to include:
- Removing the server from the application cluster
- Stopping the EWLM managed server code
- Stopping the IBM HTTP Server or WebSphere Application Server, based on which application cluster the server was provisioned into
Our final EWLM and TIO integration test
After we associated the EWLM objective analyzer with the Tivoli customer application called "EWLM Trade App" and defined the EWLM objective analyzer driver in the DCM (as shown in Figure 8), the EWLM objective analyzer makes the provisioning decision automatically. Based on the information that the EWLM objective analyzer receives from EWLM, as explained in Objective analyzer details, a new server can be provisioned into a particular cluster or de-provisioned from a particular cluster using the same Cluster.AddServer and Cluster.RemoveServer logical device operations described in Setting up the Tivoli workflows. This is also shown in the top three entries of Figure 7.
Figure 8. DCM application association with the OA
We ran our workload at various transaction rates, starting with a normal workload. Using the EWLM Control Center, we could see that the EWLM performance index was less than one. As we increased the workload, the performance index started to approach one. At that point, the EWLM OA determined that the WAS cluster needed another server and made this recommendation to TIO. TIO determined that a new server could be provisioned, and TPM was called to perform the provisioning. After the new server was provisioned, it automatically appeared in the EWLM Control Center, and shortly afterward the EWLM performance index began to improve.
As we throttled back the workload, the EWLM OA determined that there were enough servers to handle the incoming requests while maintaining the EWLM performance goals and called TIO to de-provision a server. TIO determined that a server could be de-provisioned and called TPM to perform the de-provisioning. We monitored this autonomic activity through the EWLM Control Center and the TIO administrative console. We were able to monitor the EWLM PI as the workload increased and decreased, and to monitor the servers that were part of our EWLM management domain. We were also able to see the actions that TIO and TPM were taking from the TIO administrative console (as shown in Figure 7).
- Tivoli software information center
- IBM eServer Software Information Center
- IBM Tivoli Intelligent Orchestrator Support
- IBM Enterprise Workload Manager Release 1 Redbook