IBM Workload Deployer is an appliance that can provision virtual images and patterns onto a virtualized environment. It provides a cloud management application as a Web 2.0 interface, pattern modeling technology, and an encrypted image catalog that comes preloaded with virtual images, patterns, and script packages. Workload Deployer does not include the virtualized environment itself — that is, the servers, the software, the hypervisors, and the networking resources. These resources are external to the appliance and must be defined as part of the Workload Deployer configuration.
Workload Deployer supports three types of hypervisors: PowerVM®, VMware ESX, and z/VM®. Workload Deployer also enables you to manage multiple hypervisors, grouped into cloud groups (isolated pools of hypervisors of the same type).
IBM PureApplication System embeds the capabilities of IBM Workload Deployer and offers the same Web 2.0 interface and pattern modeling technology, but it also integrates the hardware, the hypervisors, the software, and the networking resources needed to support the cloud environment.
IBM PureApplication System is called an Expert Integrated System (EIS) because it includes everything needed for the cloud in a single box. As Figure 1 illustrates, with Workload Deployer you bring your own cloud into the picture, whereas with IBM PureApplication System you get a cloud-in-a-box, which also incorporates Workload Deployer technology. Both Workload Deployer and IBM PureApplication System enable the rapid adoption and deployment of Infrastructure as a Service and Platform as a Service offerings.
Figure 1. Workload Deployer vs. IBM PureApplication System
Workload Deployer code
Workload Deployer can be leveraged as a physical appliance, a virtual appliance, or as an embedded component of the IBM PureApplication System. These different versions all have the same Web 2.0 interface and enable you to easily port patterns from one environment to another. Using a simple example (a single hypervisor), Figure 2 illustrates how Workload Deployer works as a physical appliance. The appliance communicates with and manages the hypervisor and provisions new VMs onto the cloud based on pre-existing or newly created patterns. This is the same functionality you get with IBM PureApplication System.
Figure 2. Workload Deployer as a physical appliance
If you don’t have access to a Workload Deployer physical appliance or to an IBM PureApplication System, you can still develop and test virtual patterns. IBM makes the Virtual Pattern Kit for Developers (VPKD) available for free, which you can use to:
- Develop and test virtual application patterns on your local computer.
- Promote your virtual application patterns to the IBM PureSystems™ Centre, if you are an IBM Business Partner.
The VPKD includes:
- Web Application Pattern 2.0
- IBM Transactional Database Pattern 1.1
- IBM Data Mart Pattern 1.1
- Plug-in Development Kit (PDK)
- IBM Image Construction and Composition (ICON) Tool
- Base OS (RHEL) image
The VPKD is delivered as a VMware image, and effectively acts as a virtual appliance version of the Workload Deployer physical appliance. Figure 3 illustrates how.
Figure 3. Workload Deployer as a virtual appliance in the Virtual Pattern Kit for Developers
The VPKD is, for all intents and purposes, a fully working software version of a Workload Deployer physical appliance. The only difference is that it includes only what you need to create virtual application patterns. The VPKD does not include the hypervisor images delivered with Workload Deployer or IBM PureApplication System that are used to create virtual system patterns. However, you can create your own virtual images using the ICON tool and add them to the virtual appliance so you can create virtual systems.
Functionally, the Web 2.0 interface in both the physical and the virtual appliance is exactly the same; the only minor difference is that the text “IBM Workload Deployer” gets replaced with “Virtual Pattern Kit” throughout the GUI to avoid confusion.
It’s all about virtual patterns
IBM has been steadily moving in the direction of virtual patterns as a way of abstracting and automating otherwise difficult and time-consuming infrastructure provisioning tasks. Patterns offer a way of easily standardizing the provisioning process and the reusability of parts and topologies. Just like patterns and component-based software engineering help you deliver better-quality software more rapidly and consistently, parts and patterns in a cloud environment help you deliver environments more quickly and in a more consistent and reliable fashion.
Workload Deployer and IBM PureApplication System support three types of deployment models:
- Virtual appliances
A virtual appliance or virtual image provides a pre-configured VM that you can use or customize. Virtual appliances are hypervisor editions of software and represent the basic parts you use in Workload Deployer and PureApplication to build more complex topologies. Adding new virtual images to the Workload Deployer and PureApplication catalog enables you to deploy multiple instances of that appliance from a single virtual appliance template.
- Virtual system patterns
Virtual system patterns enable you to graphically describe a middleware topology to be built and deployed onto the cloud. Using virtual images or parts from the catalog, as well as optional script packages and add-ons, you can create, extend, and reuse middleware-based topologies. Virtual system patterns give you control over the installation, configuration, and integration of all the components necessary for your pattern to work.
- Virtual application patterns
A virtual application pattern, also called a workload pattern, is an application-centric (as opposed to middleware-centric) approach to deploying applications onto the cloud. With virtual application patterns, you do not create the topology directly. Instead, you specify an application (for example, an .ear file) and a set of policies that correspond to the service level agreement (SLA) you wish to achieve. Workload Deployer and PureApplication then transform that input into an installed, configured, and integrated middleware application environment. The system also automatically monitors application workload demand and adjusts resource allocation or prioritization to meet your defined policies. Virtual application patterns address specific solutions, incorporating years of expertise and best practices.
The remainder of this article focuses on explaining how virtual system patterns work.
Virtual system pattern walk-through
Consider a simple distributed server environment, consisting of a deployment manager, two custom profiles, two HTTP servers, and an external database. The manual steps to provision the base topology of such a system would be:
- Install WebSphere Application Server on the primary node.
- Create a deployment manager profile. This creates a deployment manager cell on the deployment manager node.
- Create a custom profile. This creates a second cell, a node, and a node agent.
- Federate (add) the custom profile node to the deployment manager cell. Federating the node allows the deployment manager to administer the node. The node agent that got installed on the custom profile node is what enables the communication between that node and the deployment manager.
- Repeat the previous two steps for the other custom profile as well as for the HTTP servers.
- Install the database for an optional data tier.
A few things to keep in mind:
- A logical group of managed servers configured on the same physical or virtual machine is called a node, while a logical group of nodes on the same network is called a cell.
- A deployment manager manages a single cell.
- The example here uses one machine per node.
- A custom profile is initially an empty node. Once created, you can customize that node to include application servers, clusters, web servers, or other Java processes. You can do this from the admin console of the deployment manager or you can use the wsadmin utility.
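The manual steps above can be sketched as WebSphere Application Server commands. The install location, host names, and credentials below are hypothetical; adjust them to your environment:

```shell
#!/bin/sh
# Hypothetical install path and host name -- adjust to your installation.
WAS_HOME=/opt/IBM/WebSphere/AppServer
DMGR_HOST=dmgr.example.com

# Create the deployment manager profile (on the deployment manager node).
"$WAS_HOME/bin/manageprofiles.sh" -create \
  -profileName Dmgr01 \
  -templatePath "$WAS_HOME/profileTemplates/management" \
  -serverType DEPLOYMENT_MANAGER

# Create a custom (managed) profile on a secondary node,
# deferring federation to a separate step.
"$WAS_HOME/bin/manageprofiles.sh" -create \
  -profileName Custom01 \
  -templatePath "$WAS_HOME/profileTemplates/managed" \
  -federateLater true

# Federate the custom node into the deployment manager cell
# (8879 is the default deployment manager SOAP connector port).
"$WAS_HOME/bin/addNode.sh" "$DMGR_HOST" 8879 \
  -username wasadmin -password secret
```

These are the same steps that Workload Deployer automates when it deploys a virtual system pattern.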
As standard as this topology might be, these steps require someone with the right experience to build it. With the Workload Deployer or IBM PureApplication System Web 2.0 interface, doing so is much simpler, and someone with less experience can create and deploy the basic skeleton of the environment. More importantly, the topology can be reused as necessary, and its initial configuration can be further enhanced via scripting to automate, for example, the creation of clusters.
Creating a simple virtual system pattern
From the Patterns menu, click Virtual Systems to open the virtual system patterns catalog, as shown in Figure 4. Doing so opens a dialog asking you to enter a unique name and a description for your pattern. This example uses the name, “Managed Nodes Example,” and the description, “A distributed server environment example.”
Figure 4. Virtual system patterns
Entering a name and a description for a pattern and pressing OK opens the pattern window (Figure 5). The pattern window displays the available patterns on the left, and information about the selected pattern on the right, including its topology, if it has been created.
Figure 5. Pattern window
Clicking Edit opens the Pattern Editor, where you can start building your topology by dragging and dropping parts, script packages, and add-ons onto the canvas. Parts are virtual images that you use to build your topology. Script packages are bundles of files that execute one or more commands on an image part; they contain scripts (usually shell or Jython scripts) that you can use to further configure the virtual image. Add-ons are special types of scripts that let you customize the virtual hardware in your deployed virtual machine (for example, to initialize a network interface or create a new virtual disk).
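As a minimal sketch of a script package payload, the shell script below creates a cluster on the deployed cell by calling wsadmin on the deployment manager. The install path and cluster name are assumptions, not values from the product:

```shell
#!/bin/sh
# Illustrative script package payload: create a cluster after deployment.
# WAS_HOME and the default cluster name are hypothetical.
WAS_HOME=/opt/IBM/WebSphere/AppServer
CLUSTER_NAME=${1:-AppCluster}

# Run a one-line Jython command against the deployment manager.
"$WAS_HOME/bin/wsadmin.sh" -lang jython -c "
AdminTask.createCluster('[-clusterConfig [-clusterName $CLUSTER_NAME -preferLocal true]]')
AdminConfig.save()
"
```

Attaching a script like this to the deployment manager part lets the pattern perform post-deployment configuration that the base images do not cover.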
Figure 6 shows the Pattern Editor with 93 virtual image parts, 67 scripts, and 4 add-ons. For this example, drag and drop the parts labeled as follows onto the canvas:
- Deployment manager
(WebSphere Application Server, ESX, Red Hat Enterprise Linux 5, 64-bit)
- IBM HTTP servers
(WebSphere Application Server, ESX, Red Hat Enterprise Linux 5, 64-bit)
- Custom nodes
(WebSphere Application Server, ESX, Red Hat Enterprise Linux 5, 64-bit)
- DB2 Enterprise
(DB2 Enterprise Large, ESX, Red Hat Enterprise Linux, 64-bit)
Figure 6. Drag and drop parts, scripts, and add-ons onto the canvas to build your topology
You can place parts anywhere on the canvas. The Pattern Editor will automatically rearrange them and cross-configure them wherever it finds a unique relationship (for example, a custom node with a deployment manager). The system will also draw an arrow between them to indicate that a well-known, IBM pre-defined relationship exists between those parts. If a unique relationship does not exist, the editor will not be able to integrate the nodes. So if you just had a deployment manager and a DB2 part on the canvas, the Pattern Editor would not be able to do any federation and would warn you that there are no custom nodes federated to the deployment manager. Parts without predefined integration points will not appear connected with arrows to other parts in the editor. However, you can still integrate them via scripting where it makes sense. For this to work, of course, a command line interface (CLI) must exist that would enable the script to perform the cross-configuration.
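For instance, to integrate the DB2 part with the WebSphere cell, a script package could call wsadmin to define a JDBC provider pointing at the deployed database. This is a sketch only; the cell scope, provider name, and install path are assumptions:

```shell
#!/bin/sh
# Sketch of script-driven cross-configuration between parts that have no
# predefined relationship: register a DB2 JDBC provider in the cell.
# WAS_HOME and the cell name below are hypothetical.
WAS_HOME=/opt/IBM/WebSphere/AppServer

"$WAS_HOME/bin/wsadmin.sh" -lang jython -c "
AdminTask.createJDBCProvider(['-scope', 'Cell=MyCell',
  '-databaseType', 'DB2',
  '-providerType', 'DB2 Universal JDBC Driver Provider',
  '-implementationType', 'Connection pool data source',
  '-name', 'DB2JDBCProvider'])
AdminConfig.save()
"
```

The script relies on the WebSphere command line interface being present on the deployment manager VM, which is what makes script-based integration of unconnected parts possible.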
While you are still dragging and dropping parts onto the canvas, you might see additional warning messages about the topology. You can safely ignore them until you are done editing. Once you add a custom node, for example, the warning about federation will go away, and the system will know to automatically federate the node to the deployment manager.
After adding the four parts mentioned above, your canvas should look similar to Figure 7 (except for the red text and dotted lines).
Figure 7. Simple WebSphere Application Server DB2 pattern topology
The generated layout includes four nodes (four different virtual machines) already configured to work with each other, with the arrows indicating the relationships between the parts. The red text and dotted lines were added to the figure to help explain how the editor lays out the parts in the topology:
- Parts that appear on the left side of the canvas are managers of other parts. In this example, the deployment manager node manages the custom nodes, so the editor places it on the left side of the topology.
- Parts in the center of the canvas are managed nodes. They automatically get federated into or registered with the part managers that appear on the left side.
- The parts on the right side are connection parts, used mainly for routing traffic to the different nodes. Examples of these include HTTP servers and on-demand routers (if you’re using a WebSphere Application Server virtual image that includes the Intelligent Management Pack).
As Figure 8 shows, the icons and controls that appear with each part enable further configuration. Hovering over the part name of a node displays a window that describes the part and provides a link to it in the virtual image catalog.
Figure 8. Functions of controls and icons in a part
If you want to increase the number of custom nodes, you can click the up arrow in the Custom node part until the desired number of nodes appears next to the arrows.
Notice a few other underlying things:
- After deployment, each node (and instance) will exist in its own virtual machine.
- When you change the number of instances for a specific part, the pattern will automatically know how to configure and federate those additional instances.
- You can choose to change the number of instances while editing the pattern or at deployment time (more on that later).
- You can perform additional tuning as required at deployment.
- In this example, each part has a script attached to it labeled iwd_VMCompliance. This is not a standard script; IBM uses it to test, secure, and patch the servers for compliance purposes. You can add a script or an add-on to a part by simply dragging and dropping it onto the part. If a script was added to your part by default, try removing it and adding it back to get a feel for how this process works.
- Some script packages require parameters. In that case, a properties icon appears on the script package; clicking it lets you edit the script’s parameters. Using a special syntax, you can also specify variables in script packages whose values are only known at deployment time.
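As a sketch of how a script package declares its parameters, the metadata file (cbscript.json) names the command to run and the keys that surface as editable properties in the GUI; `${...}` references are resolved when the pattern is deployed. The field names follow the script package metadata format, but all values below are hypothetical:

```json
[{
  "name": "Create Cluster",
  "version": "1.0.0",
  "description": "Creates a cluster on the deployed cell",
  "command": "/bin/sh /tmp/create_cluster/create_cluster.sh",
  "log": "/tmp/create_cluster/logs",
  "location": "/tmp/create_cluster",
  "commandargs": "${CLUSTER_NAME}",
  "keys": [
    {
      "scriptkey": "CLUSTER_NAME",
      "scriptvalue": "AppCluster"
    }
  ]
}]
```

The `scriptkey` entries are what appear as editable parameters when you click the properties icon on the script package.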
When you deploy a pattern, the system automatically deploys, starts, and configures all of its related virtual machines. The sequence in which this occurs is determined by the constraints and ordering of the parts and scripts. Within the Pattern Editor, the blue links directly below the toolbar on the right let you configure advanced options, as well as toggle between the Topology and the Ordering view (Figure 9).
Figure 9. Ordering view
From the Ordering view, you can drag parts and scripts to place them in the necessary execution sequence. By default, the Pattern Editor places these in a correct order, based on how the parts work and their default constraints. In Figure 9, for example, the deployment manager is set to start before the HTTP servers and custom nodes. The left side of the Ordering view shows the existing constraints and highlights any additional constraints or conflicts that may come up as you rearrange the nodes.
Configuring advanced options
Next to the Ordering/Topology toggle is the Advanced Options link. This opens a dialog for configuring common choices associated with the type of topology being created. The values that appear by default for a new virtual system are the recommended values for that topology type, so for this example you can keep them as is.
Setting the node properties
Now, let’s look at the changes you need to make to the properties of each of the nodes in your topology. For each part on the canvas, click the Properties icon and ensure the settings match those documented in the corresponding tables, shown in Figures 10 through 13, respectively. An asterisk is displayed next to required fields, and values that need to change are highlighted.
Figure 10. Properties for the deployment manager part used in this example
Figure 11. Properties for the custom nodes part used in this example
Figure 12. Properties for the IBM HTTP servers part used in this example
You might have noticed that the deployment manager, custom nodes, and IBM HTTP server parts have many properties in common, while other properties are unique to a particular part. Table 1 lists brief descriptions of these properties.
Table 1. Property descriptions
| Property | Description |
| --- | --- |
| Name | Unique name assigned to the part. You cannot change this value. |
| Virtual CPUs | Number of virtual CPUs the ESX server should allocate for the VM. |
| Memory size (MB) | Amount of RAM the ESX server should assign to the VM. |
| Reserve physical CPUs | Tells the ESX server to reserve the specified physical CPU capacity for the VM. A physical CPU (pCPU) denotes a CPU on the ESX server, while a virtual CPU (vCPU) denotes a CPU as seen by the virtual machine; the hypervisor controls how a vCPU executes on a pCPU. When a VM starts, ESX activates its pCPU reservation, but only as an entitlement: it tells the hypervisor to guarantee the specified amount of pCPU capacity for that VM if needed. CPU cycles are not wasted, however. While a VM is not using its entitlement, the hypervisor allocates that reserved capacity to other activated VMs. |
| Reserve physical memory | Tells the ESX server to reserve physical memory for the VM. ESX activates this memory reservation when the VM starts. Once used by the VM, even just once, the reserved physical memory (pRAM) is not available to other VMs. A VM will fail to start if the hypervisor cannot meet its reservation. |
| Cell name | The name of the cell (a logical group of nodes on the same network). Workload Deployer automatically creates the cell and federates the custom nodes with the deployment manager's node. |
| Node name | The name of the node (a logical group of application servers configured on the same machine). |
| Feature packs | Feature packs deliver new capabilities before the next major release of a product. The options currently offered are none (default) and xc10. Selecting the xc10 feature pack permits your applications to connect to and use the shared caching service (in-memory cache technology for session persistence). This is the same technology found in IBM WebSphere eXtreme Scale and the IBM WebSphere DataPower XC10 Caching Appliance. |
| WAS IM Repository Location | IBM Installation Manager uses a remote or local software repository to install, change, or update certain IBM products. A software package that can be installed with Installation Manager is called a package and is associated with an installation location; packages are stored in flat files called repositories. This field specifies the location of the WebSphere Application Server Installation Manager repository. |
| WAS IM Repository User | Username for the WebSphere Application Server Installation Manager repository. |
| WAS IM Repository Password | Password for the WebSphere Application Server Installation Manager repository. |
| Verify password | Verification entry for a previously specified password. |
| IMP IM Repository Location | The location of the Intelligent Management Pack Installation Manager repository. The Intelligent Management Pack augments the deployment manager, WebSphere Application Server, or IBM HTTP Server profile with a feature set that, among other things, offers improved application performance and delivery response times, as well as the ability to perform interruption-free maintenance upgrades. |
| IMP IM Repository User | Username for the Intelligent Management Pack Installation Manager repository. |
| IMP IM Repository Password | Password for the Intelligent Management Pack Installation Manager repository. |
| Password (root) | The root password of the VM. |
| WebSphere administrative user name | Username for the WebSphere Application Server administrative console. |
| WebSphere administrative password | Password for the WebSphere Application Server administrative console. |
| Enable VNC | Enables the operating system to accept Virtual Network Computing (VNC) connections for remote desktop access to the VM. The VNC option is only available for virtual machines deployed on VMware (running on the ESX Server hypervisor). Besides VNC, the administrative console also provides links to remote logs, to SSH, and to the WebSphere Integrated Solutions Console for the deployed virtual systems. |
Because you need to configure two custom profiles and two web servers, make sure you set the number of instances of the Custom nodes and IBM HTTP Servers parts to two each.
Figure 13. Properties for the DB2 Enterprise part used in this example
Deploying the pattern
When you are finished editing the properties of the different parts, click Done editing in the upper right corner to return to the Pattern window. The topology you just created should display in the Pattern window.
Click Deploy to bring up the virtual system deployment window, shown in Figure 14.
Figure 14. Virtual system deployment window
- The first option, Virtual system name, lets you specify a unique name for the deployed instance of your virtual system. Type Virtual System Pattern Example in this field.
- The second option, Choose environment, lets you choose to deploy your virtual system to an existing cloud group or to a previously defined environment profile. The appliance filters them based on the type of Internet protocol (IPv4 or IPv6). Cloud groups provide a way of creating a pool of hypervisors of the same type (for example, ESX or PowerVM). They are usually defined and created by the administrator. Environment profiles provide further flexibility. They enable an administrator to create a layer above cloud groups that can further limit what users can do with the system, such as what naming convention they must use for virtual machines, what CPU, memory, storage, and license limits they have, and what cloud groups they can use. This is especially helpful when different teams need to use the same environment. The available environment profiles in your system can also be found via the Cloud | Environment Profiles menu option.
- With the Schedule deployment option, you can specify when the virtual system pattern should be deployed after you press OK.
- The Configure virtual parts option lets you open the Properties window for any of the parts in the virtual system pattern. If you have been following along, you have already set these properties from the Pattern Editor. Green check marks next to the items indicate their completion. If an item is missing a check mark, it means you still need to enter required values in the properties window for that part.
Press OK to begin the deployment. Shortly thereafter, you should see a panel similar to Figure 15. Depending on how your system is configured, you might also receive an email with a message informing you that the deployment of your virtual system has started.
Figure 15. Virtual System Instances
Verifying the deployment
If all goes well, after about an hour you should see an updated panel similar to Figure 16. This tells you that the system has provisioned six VMs and configured them with the software components specified in your topology. Figure 16 shows the Virtual machines node expanded. You can expand each of the VM nodes to see extensive information about the virtual machine, such as the hypervisor and cloud group it is running on; its hardware, software, and network configuration; its script packages; and environment metrics. At the very bottom, under Consoles, there is a link to the VNC viewer and to the WebSphere Integrated Solutions Console (available only on the deployment manager).
Figure 16. A successful deployment
Open the VNC console for the deployment manager VM, and authenticate with the virtuser password. A new browser window should open with a graphical view of your deployment manager desktop. You can also remotely log in through the WebSphere Integrated Solutions Console to start managing the different nodes and creating application servers and clusters accordingly. The custom nodes also provide SSH access. Use the Integrated Solutions Console in the deployment manager to verify that your deployment looks as intended. For example, your list of federated nodes should look similar to Figure 17.
Figure 17. Deployment manager console
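Because the custom nodes provide SSH access, you can also verify from the command line that each node agent is running. The host name and profile path below are hypothetical; adjust them to your deployment:

```shell
# Check the node agent status on a custom node VM.
# Host name and profile path are placeholders for your environment.
ssh virtuser@customnode1.example.com \
  "/opt/IBM/WebSphere/AppServer/profiles/Custom01/bin/serverStatus.sh nodeagent"
```

A running node agent on each custom node confirms that federation succeeded and that the deployment manager can administer those nodes.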
This concludes our walk-through and introduction to virtual system patterns. Virtual system patterns reduce the amount of work you need to do to create middleware topologies that fit your requirements. In about an hour, you were able to provision the basic skeleton of an entire distributed server environment consisting of a deployment manager, two custom profiles, two HTTP servers, and a database. Through the console options provided, you can now work with these machines as if they physically existed in your lab or VMware farm. Because the topology exists in the image catalog, you can reuse it later to quickly deploy new environments based on a common pattern template. You can also extend this basic configuration via scripting to perform additional tasks upon deployment, which will be the topic of Part 3.
- IBM Workload Deployer product information
- IBM PureApplication System product information
- IBM developerWorks Cloud zone
- New to cloud computing
- Cloud computing: Fundamentals
- Connecting to the cloud
- Virtual appliances and the Open Virtualization Format
- IBM developerWorks WebSphere