IBM PureApplication System W1500 v1.0 and W1700 v1.0 (hereafter called PureApplication System) provides an environment for running workloads in a cloud. To do so, it provides an extensive set of computer hardware with the capacity to run thousands of computer programs simultaneously. To manage and utilize this hardware efficiently, the system employs several industry best practices to virtualize its hardware resources, giving its workloads great flexibility to run anywhere in the cloud. By knowing these techniques and how the product makes use of them, you will gain a greater understanding and appreciation of the cloud environment provided by PureApplication System.
This article is the second of three articles that explain the hardware and software foundation that PureApplication System provides for hosting application runtime environments:
- Hardware: A tour of the hardware in IBM PureApplication System explains the hardware that comprises PureApplication System.
- Virtualized hardware: The article that you are reading now explains how PureApplication System virtualizes its hardware to implement infrastructure as a service (IaaS).
- Runtime environments: Managing application runtime environments in IBM PureApplication System explains how the virtualized hardware in PureApplication System is used to implement application runtime environments that workloads are deployed into.
Each article builds on its predecessor to explain this foundation fully.
To understand resource virtualization in PureApplication System, we will explore the delivery models incorporated into a cloud computing system, the approaches used to virtualize different types of resources, and how the product makes use of these approaches.
Cloud delivery models
Virtualizing hardware resources is often described as "infrastructure as a service". It is one of three cloud delivery models (also known as cloud service models):
- Infrastructure as a service (IaaS) provides an environment of virtualized hardware resources for computation, storage, and networking.
- Platform as a Service (PaaS) provides a virtualized application runtime environment that includes operating systems; middleware such as application servers, databases, and messaging; and shared services such as caching, monitoring, and security.
- Software as a Service (SaaS) provides network accessible business applications in a centrally hosted environment that is highly reliable and scalable.
The models are cumulative, each one building on its predecessors. Figure 1 illustrates the cloud delivery models.
Figure 1. Cloud delivery models
The article Navigating the IBM cloud, Part 1: A primer on cloud technologies explains the cloud delivery models in greater detail. The delivery models are not specific technologies, but rather they are architectural goals of a cloud computing system. This article focuses on the first layer, infrastructure as a service, and how that is implemented in PureApplication System.
Infrastructure as a service is a goal achieved via resource virtualization. Resource virtualization decouples workloads (that is, running programs) from the underlying hardware they run in. For a program to run, it, of course, needs hardware resources. However, for workloads to run "in the cloud," they must not be bound to any particular set of hardware. This means that a workload must be able to run in any of multiple redundant sets of hardware. To achieve this goal, the cloud virtualizes the hardware resources to separate the workloads from the hardware and to enable the cloud to manage the workloads' usage of those resources.
A cloud organizes its hardware resources for running workloads into the three main types that are fundamental to all computing:
- Computational resources: This is the CPU and memory any program needs to run.
- Network resources: This is the connectivity between programs that enables them to communicate.
- Storage resources: This enables programs to persist their state.
PureApplication System employs techniques that virtualize each of these resources:
- It virtualizes computational resources using virtual machines (VMs).
- It virtualizes network resources using virtual local area networks (VLANs).
- It virtualizes storage resources using a storage area network (SAN).
None of these techniques is a new invention of PureApplication System, and that is the point. The system creates its virtualized cloud environment by leveraging industry best practices that are well known and proven. Understanding what these techniques are and how the product makes use of them is important to appreciate how the system's cloud provides a fully virtualized environment for its workloads.
Computational resource virtualization
Computational resources, namely CPU and memory, are what a computer program needs to run; together they are generally thought of as a computer. Typically, a program is installed on a computer and can then run only on that computer's resources. If those resources become unavailable, either because they are not working or because they are busy running other programs, the program cannot run. To make sure the program can always run, a best practice is to decouple it from any particular set of computational resources, an approach referred to as "platform virtualization".
Platform virtualization decouples computer programs from the computer hardware they run in. This enables a program to run in any hardware that is available. To do so, the program needs to be packaged as a virtualized application that runs in a virtualized platform. A virtualized application is one that is installed independently of any particular set of hardware so that it can run anywhere on a virtualized platform. A virtualized platform is a set of hardware configured to behave like multiple sets of hardware by sharing the single underlying set of hardware.
Platform virtualization is typically implemented using one or more hypervisors to run a set of virtual machines. A virtual machine (VM) is software that simulates a physical computer: it runs an operating system just like a computer does, but the computer is a virtual one because the VM's hypervisor decouples the operating system from the underlying hardware. A hypervisor - also known as a virtualization manager, virtual machine monitor (VMM), or platform virtualizer - is a specialized operating system that only runs virtual machines. A hypervisor running multiple virtual machines enables what seem like multiple computers to run in a single physical computer, enabling the virtual computers to share the physical computer's hardware resources. Examples of hypervisor products include VMware vSphere™ Hypervisor (ESXi) and IBM PowerVM™.
The article Navigating the IBM cloud, Part 1: A primer on cloud technologies explains platform virtualization using hypervisors. Figure 2 shows the virtualization stack with virtual machines running in a hypervisor that runs either directly on the physical hardware (Type 1), or in a host operating system on that hardware (Type 2). Each VM typically has its own unique IP address so that it can participate in the network as a standalone computer. The hypervisor itself requires another IP address so that it can be managed remotely.
Figure 2. Types of hypervisors
VMs in PureApplication System
A compute node is the most fundamental computational resource in PureApplication System. The main difference between the different product sizes is the number of compute nodes, expressed as the number of CPU cores contained in those compute nodes (ranging from 32 to 608). More compute nodes mean that the system has greater computational capacity.
A compute node is a set of computer hardware containing CPU and memory that has access to storage and networking. It is essentially a very compact computer, rather like a blade server. As a computer, it is possible for a compute node to run an operating system. However, in PureApplication System, instead of a traditional operating system, each compute node runs a hypervisor.
Earlier, Figure 2 showed how a hypervisor enables multiple virtual machines to share an underlying set of hardware. In PureApplication System, as shown in Figure 3, the physical hardware is a compute node, the Type 1 hypervisor runs directly on the compute node, and each virtual machine runs a middleware server in an operating system.
Technically, a VM can run any program that can be installed in the operating system. To deploy that VM on PureApplication System, it must be packaged as a virtual appliance, as discussed in the article Navigating the IBM cloud, Part 1: A primer on cloud technologies. Nevertheless, PureApplication System uses its VMs primarily to run middleware servers.
Figure 3. Hypervisor stack in a compute node
The brand of hypervisor that runs on a compute node depends on the system model:
- W1500: VMware vSphere Hypervisor (ESXi)
- W1700: IBM PowerVM
The virtualized application runs in virtual machines, which run in the hypervisor that runs on the compute node. However, in PureApplication System, virtualized applications are not deployed directly to compute nodes, but instead are deployed to cloud groups. A cloud group is a collection of one or more compute nodes that acts like a logical computer and runs workloads.
A workload is a virtualized application running in a cloud group. A workload's VMs run in the cloud group's compute nodes, and not necessarily all in the same compute node. The cloud group may move the VMs between compute nodes for load balancing or failover purposes. Because the virtual machines in a virtualized application need IP addresses for network access, a cloud group also contains one or more IP groups. An IP group is a list or range of IP addresses, the ID of the VLAN they should use for communication, and settings for how to connect to the external network the VLAN is part of. VLANs are explained in the Network resource virtualization section.
To summarize, an application is deployed to PureApplication System as a virtualized application that runs in a virtualized platform, meaning that the application runs in virtual machines and that the platform runs those in a hypervisor. The virtualized platform is a cloud group, which is a logical computer composed of compute nodes, each of which is a physical computer and is itself a virtualized platform that runs a hypervisor.
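The relationship between a cloud group, its compute nodes, and a workload's VMs can be illustrated with a toy model. This is only a sketch of the placement and failover behavior described above, not the product's implementation; the class and method names are hypothetical, and simple round-robin placement stands in for the system's real scheduling logic.

```python
from itertools import cycle

class CloudGroup:
    """Hypothetical model: a cloud group spreads a workload's VMs across
    its compute nodes and can move them off a failed node."""

    def __init__(self, compute_nodes):
        self.nodes = list(compute_nodes)
        self.placement = {}                  # VM name -> compute node

    def deploy(self, workload_vms):
        # Round-robin placement: VMs need not all land on the same node.
        for vm, node in zip(workload_vms, cycle(self.nodes)):
            self.placement[vm] = node

    def evacuate(self, failed_node):
        # Re-place VMs from the failed compute node onto the survivors,
        # illustrating why workloads must not be bound to one node.
        survivors = [n for n in self.nodes if n != failed_node]
        targets = cycle(survivors)
        for vm, node in self.placement.items():
            if node == failed_node:
                self.placement[vm] = next(targets)
        self.nodes = survivors
```

A real cloud group also weighs capacity and load when placing or moving VMs; the sketch only captures the decoupling of workloads from any particular compute node.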
Network resource virtualization
Network resources enable multiple computing processes to communicate, typically programs running on separate computers. The network protocol is typically an Internet Protocol (IP), which requires that each program runs on a computer with a unique IP address. To help isolate unrelated sets of data on the network and protect their privacy, each set of intercommunicating programs should have access only to its own network traffic, and unrelated programs should not have access to that traffic. A best practice to isolate sets of network traffic is to divide a network into VLANs.
Virtual local area networks
A virtual local area network (VLAN) logically isolates its network traffic from that of other VLANs on the same network as a separate broadcast domain. This means that the network traffic of applications connected via two different VLANs transmits on seemingly independent networks. This is helpful, for example, to isolate the network traffic of unrelated applications (such as development vs. production, or the Finance department vs. the Human Resources department).
As explained earlier in Computational resource virtualization, multiple virtual machines slice a single physical computer into multiple simulated computers. In a similar fashion, VLANs slice a single physical network into multiple simulated networks. When the network traffic for multiple VLANs flows through the same network equipment, the equipment keeps each set of traffic separated so that each computer on the network only sees the traffic on its own VLANs. A VLAN that requires maximum isolation should be implemented as a separate local area network (LAN).
VLANs are defined on the network by the network administrator. Computers on the network and applications on those computers can be configured to use the desired VLANs to separate their network traffic, but ultimately it is the network configuration that achieves the separation.
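At the packet level, this separation is achieved by tagging each Ethernet frame with its VLAN ID, as standardized by IEEE 802.1Q. The sketch below illustrates the frame format only; it is not code the product exposes. It inserts the 4-byte 802.1Q tag (the 0x8100 tag protocol identifier plus a 16-bit field carrying the priority and the 12-bit VLAN ID) after the destination and source MAC addresses.

```python
import struct

TPID_8021Q = 0x8100  # EtherType value that marks a frame as VLAN-tagged

def tag_frame(frame, vlan_id, priority=0):
    """Insert an IEEE 802.1Q tag after the source MAC of an Ethernet frame."""
    if not 0 < vlan_id < 4095:
        raise ValueError("VLAN ID must be 1..4094")
    tci = (priority << 13) | vlan_id        # priority (3 bits), DEI (1), VLAN ID (12)
    tag = struct.pack("!HH", TPID_8021Q, tci)
    return frame[:12] + tag + frame[12:]    # dst MAC (6) + src MAC (6), then the tag

def vlan_of(frame):
    """Return the VLAN ID of a tagged frame, or None if the frame is untagged."""
    tpid, tci = struct.unpack_from("!HH", frame, 12)
    return tci & 0x0FFF if tpid == TPID_8021Q else None
```

Switches use this tag to keep each VLAN's traffic separated even when multiple VLANs flow through the same physical equipment.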
VLANs in PureApplication System
PureApplication System contains a significant amount of networking hardware, with a pair of top-of-rack switches as the hub, but it does not host its own network. Just like any other computer, PureApplication System connects to the existing enterprise network and participates as part of it.
PureApplication System does not define VLANs - those are defined on the network by the network administrator. But it does make extensive use of them. PureApplication System uses VLANs in two distinct ways: as management VLANs and as application VLANs.
A management VLAN is used internally by the system:
- Communication: These VLANs enable the system's internal processes to communicate with each other.
- IP addresses: The system uses its own internal IP addresses, so the network administrator does not need to provide any IP addresses. The network administrator still needs to reserve each VLAN ID on the network so that the network does not create another VLAN with the same ID.
- Scope: Traffic on these VLANs only flows internally on the system. It is not (or at least should not be) accessible from outside the system.
An application VLAN is used by business applications deployed onto the system as user workloads:
- Communication: The VLAN enables the applications to communicate within themselves, with each other, and with resources on the network.
- IP addresses: The network administrator must provide not only the VLAN ID for each of these VLANs, but also a pool of IP addresses that the applications use to connect to the network. The network administrator should reserve the VLAN ID and IP addresses so that the network does not use these values elsewhere.
- Scope: Traffic on each of these VLANs flows internally on the system and externally on any parts of the enterprise network configured to also use the VLAN.
PureApplication System itself requires three management VLANs, plus each cloud group requires another management VLAN. Each IP group requires an application VLAN. While all of the IP groups can share the same VLAN, typically different groups of applications use separate VLANs. The appropriate combination of IP groups and application VLANs depends on how your enterprise architects and network administrators want to isolate your applications' network traffic.
The system associates applications with VLANs using IP groups. A system's workload administrator defines each IP group using settings provided by the network administrator, including which VLAN to use. When deploying a workload to a cloud group, either the cloud group has only one IP group, or the workload deployer specifies which IP group or groups the deployment should use. The deployment process assigns IP addresses from the IP group to the virtual machines in the workload. The process is responsible for making sure that the virtual machines connect to the network not only using the proper IP addresses (as specified in the IP group), but also using the proper VLAN. As long as the applications are connected using the proper IP addresses and VLAN, the system's work is done. It is the network administrator's job to ensure that the VLANs are defined properly within the enterprise network.
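The bookkeeping that deployment performs can be illustrated with a small sketch. The `IPGroup` class and `deploy` function are hypothetical names invented for this example; they simply model an IP group as a VLAN ID plus a pool of addresses that the deployment process draws from when assigning addresses to a workload's VMs.

```python
from ipaddress import ip_address

class IPGroup:
    """Hypothetical model of an IP group: a VLAN ID plus a pool of
    IP addresses reserved by the network administrator."""

    def __init__(self, name, vlan_id, addresses):
        self.name = name
        self.vlan_id = vlan_id
        self._free = [ip_address(a) for a in addresses]

    def allocate(self):
        if not self._free:
            raise RuntimeError(f"IP group {self.name} is exhausted")
        return self._free.pop(0)

def deploy(vm_names, ip_group):
    """Assign each VM an address from the pool, along with the group's
    VLAN ID, the way the deployment process does."""
    return {vm: (ip_group.allocate(), ip_group.vlan_id) for vm in vm_names}
```

The real deployment process additionally configures each VM's network interface so that its traffic actually flows on the assigned VLAN; the sketch only shows the allocation step.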
Storage resource virtualization
Storage resources enable a computer program to persist its state so that the state can be unloaded from the program, passed between multiple executions of the program, and shared with other programs. Data is typically bundled as files that are organized by the file management system, but some programs such as databases can be configured to bypass it and write data blocks directly to raw disk partitions. A best practice to virtualize storage so programs are decoupled from their storage is to encapsulate the storage as a storage area network.
Storage area networks
Storage can be virtualized using a combination of approaches:
- The storage can be encapsulated into a storage area network.
- Storage tiers can provide varying quality of service levels.
- Hierarchical storage management can optimize the use of tiers.
- The underlying media can be organized as a redundant array of independent disks (RAID) to increase its reliability and performance.
Let us explore each of these approaches in greater detail.
A storage area network (SAN) is a network that interconnects storage devices into an encapsulated pool of shared storage. A SAN decouples applications that need to persist data from details about how the data is actually stored. A SAN is like a cloud for persisting data and files. You know your stuff is in there, but you do not know precisely where it is or how it is stored, and it could get moved to somewhere else.
A related goal is storage as a service. Storage as a service (SaaS, or to distinguish it from software as a service, STaaS) often refers to making shared storage space available for others to use on an as-needed basis, such as a storage provider company whose business is renting capacity to its customers. It can also refer to a more general concept of a pool of storage for multiple applications to share, some using more when others need less. Shared storage is not required to be encapsulated as a SAN, but a SAN interface makes the storage easier to manage and share.
Storage can be organized into tiers. Tiered storage provides different sets of storage, each with a different quality of service (QoS). The different qualities typically result from different types of storage media, such as disks versus tape, with tradeoffs in speed, cost, capacity, and capabilities. Storage media has an inverse relationship between cost and speed: the faster the access to the data on the media, the more expensive the cost per byte of data stored. Some types of storage have greater capacity or special capabilities, such as burning a CD. Tiered storage can also differentiate between the QoS requirements of the data to be stored, creating a policy to store data deemed more important on media that is faster or more reliable.
A particular set of data does not have to be permanently confined to a single tier. Hierarchical storage management (HSM) is a policy-based approach to storing data that optimizes for highest performance and lowest cost. A simple example is to move historical data that is infrequently used to backup tapes, a type of storage that is slower but less expensive than disk drives. HSM makes little sense when the storage media is homogeneous. Rather, it takes advantage of a storage solution that contains multiple types of media with differing price and performance tradeoffs, such as those in a tiered storage solution. With an existing tiered storage solution, the cost is already fixed, so HSM optimizes for performance, typically by determining the most frequently used data and storing it on the fastest media.
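A minimal sketch of such a placement policy, assuming that access frequency is the only criterion and that the fast tier's capacity is measured in blocks (both simplifications; the function name and tier labels are invented for this example):

```python
def place_by_frequency(access_counts, ssd_capacity):
    """Toy HSM policy: keep the most frequently accessed blocks on the
    fast (SSD) tier, up to its capacity, and the rest on the HDD tier."""
    # Rank blocks from most to least frequently accessed.
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    hot = set(ranked[:ssd_capacity])
    return {blk: ("SSD" if blk in hot else "HDD") for blk in access_counts}
```

A production HSM implementation re-evaluates placement continuously and migrates data between tiers as access patterns change; the sketch shows a single placement decision.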
Even the most reliable storage media can fail, causing data to become temporarily inaccessible or permanently lost, which may be an unacceptable risk for important data. The best strategy to mitigate this risk is to store data redundantly. A simple example is to create backups. A more continuous and automated approach is a storage system that automatically stores each block of data twice, each on a different piece of media or device, such that even if one fails, the other preserves the data. Redundant storage can also be used to improve performance, such as reading from the second copy while the media for the first copy is busy.
A popular approach to implementing storage with built-in redundancy and improved performance is RAID. Redundant array of independent disks (RAID) storage combines multiple disk drives to act as one. What makes RAID storage superior to a simple collection of disks is how the data is stored. An array typically incorporates striping, a technique for improving performance, whereby the array breaks a single set of data into a series of segments and distributes the segments across the disks as a stripe so they can be accessed concurrently. The array can improve reliability by storing data redundantly. The simplest approach for redundancy is mirroring, or storing each segment twice on two different disks, which can also be used to improve performance.
A more sophisticated approach is parity, which requires greater computation, but lowers storage overhead, whereby the array stores a parity segment for a stripe such that any one segment that is lost can be recreated. Because of the lower storage overhead, parity does not create a copy that can be used to improve performance the way mirroring does.
RAID has multiple approaches called RAID levels for arranging a set of data as blocks on the disks. The levels represent standard combinations and implementations of striping, mirroring, and parity. The most basic RAID level, RAID 0, stripes the data but does not store it redundantly, which improves performance but decreases reliability because of the greater number of disks that could fail. RAID 1 simply mirrors the data without striping it, essentially using one disk to copy the other. The most popular RAID implementation is RAID 5, which stripes the data across all but one disk and creates a parity segment on the remaining disk.
Standard RAID levels on Wikipedia® illustrates what a RAID 5 array looks like, as shown in Figure 4. In this example, the array contains four disks. The data is stored in four stripes, labeled A to D. Each stripe is stored in three segments, labeled 1 to 3, with a fourth parity segment for redundancy. Notice that storage for the parity segment is rotated among the disks so that each disk stores data segments for some stripes and parity segments for other stripes.
Figure 4. RAID 5 striping and parity segments
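The parity computation behind RAID 5 is a bytewise XOR across a stripe's data segments. Because XOR is its own inverse, XORing all surviving segments of a stripe (data and parity alike) recreates the one segment that was lost. A minimal sketch of that recovery property:

```python
def parity(segments):
    """XOR the segments of a stripe (all assumed equal length) to
    produce its parity segment."""
    out = bytearray(len(segments[0]))
    for seg in segments:
        for i, byte in enumerate(seg):
            out[i] ^= byte
    return bytes(out)

def rebuild(surviving_segments):
    """Recreate the one lost segment of a stripe: the XOR of every
    surviving segment (data and parity alike) equals the missing one."""
    return parity(surviving_segments)
```

For example, with a stripe of three data segments and one parity segment, losing any single disk's segment leaves three survivors whose XOR reproduces the missing data, which is exactly how a RAID 5 array recovers from a single disk failure.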
SAN in PureApplication System
Storage in PureApplication System is managed by a cluster of four IBM Storwize® V7000 nodes: two storage units, each containing a controller node and an expansion node. One of the controllers manages the cluster, with the other acting as an expansion node and backup controller. The controller nodes include IBM System Storage Easy Tier storage management software. Each node contains four 400 GB solid-state drives (SSDs) and twenty 600 GB hard disk drives (HDDs), giving the cluster a total storage capacity of 6.4 TB SSD and 48 TB HDD (4.8 TB and 38.4 TB usable, respectively).
The controller manages the storage as a SAN that the compute nodes access through SAN adapters. Therefore, the compute nodes and their applications are not aware of how many nodes or drives compose the storage cluster. The storage is divided into two tiers:
- Generic SSD tier: This is composed of the solid-state drives, which are more expensive with lower capacity, but they deliver higher throughput.
- Generic HDD tier: This is composed of the hard disk drives. Compared to the SSDs, the HDDs have greater capacity and lower cost, but slower access times.
Easy Tier manages these tiers hierarchically with a feature called automatic data placement that dynamically determines the most frequently used data and stores it in the SSDs where it can be accessed more quickly than in the HDDs. PureApplication System's Storwize V7000 cluster organizes its storage as a RAID 5 array. The RAID supports hot-spare drives, which means that when one of the drives in the RAID array fails, the system automatically replaces the failed drive with the hot-spare drive and recovers the data on the lost drive.
The storage solution in PureApplication System is quite sophisticated and yet arrives out-of-the-box completely configured. The storage is configured as a SAN and the rack has all of the necessary cabling and adapters in place for the storage network. The storage is tiered and managed as a hierarchy to optimize throughput. The data is striped for additional performance improvements and stored redundantly for increased reliability. All of these storage features and optimizations are managed by the system such that the customer does not need a staff of storage administrators to configure and maintain the storage.
Conclusion
This article explained the best practices that PureApplication System uses to virtualize its hardware resources to achieve infrastructure as a service. It also showed how PureApplication System makes use of the following virtualization techniques:
- Virtual machines (VMs) to virtualize computational resources
- Virtual local area networks (VLANs) to virtualize network resources
- A storage area network (SAN) to virtualize storage resources
These capabilities are built into the product and are available from the start, saving time and labor that would otherwise be needed to set up the hardware and configure its virtualization. Built-in virtualized resources are one of the great advantages of PureApplication System.
Acknowledgements
The author would like to thank the following IBMers for their help with this article: José De Jesús, Hendrik van Run, and Rohith Ashok.