Navigating the IBM cloud, Part 1: A primer on cloud technologies

This article series explains the basics of cloud computing, beginning with some of the underlying technologies that support a cloud environment. Along the way, you will also see where some of IBM®’s key cloud offerings fit into the picture and how they can help you be successful in the cloud. This content is part of the IBM WebSphere Developer Technical Journal.


José De Jesús (jdejesus@us.ibm.com), Senior Certified Architect, IBM China

José De Jesús is a Senior Certified Architect on the ISSW for IBM (I4I) team. He has a B.S. degree in Computer Science from Fordham University and an M.S. degree in Management of Technology from the University of Miami. José is the Non-SAP Infrastructure Team Lead for the Blue Harmony internal IBM project, as well as the ISSW Certification Lead. He also leads the I4I Cloud Initiative and the I4I Technical Vitality Initiative, and is a member of the Distributed Management Task Force (DMTF).



20 June 2012

Also available in: Chinese, Portuguese

Introduction

Cloud computing is a model that provides web-based software, middleware, and computing resources on demand. By deploying technology as a service, users have access only to the resources they need for a particular task, which ultimately enables them to realize savings in investment cost, development and deployment time, and resource overhead. Enabling users to access the latest software and technologies also fosters business innovation.

This article series will help you understand what cloud computing is and how it works, and how IBM products can help you succeed with a cloud strategy.

This first article begins by examining some of the technologies that make cloud computing possible, and then explains the basics of cloud computing.


Inside the cloud

Platform virtualization

Platform virtualization refers to logically dividing a physical machine into multiple virtual machines (VMs) or guests. The goal is to consolidate resources, reduce space and energy costs, and decouple the OS from the physical hardware for added flexibility. Platform virtualization is generally done by a hypervisor, which is a software or firmware layer that enables other software (usually operating systems) to run concurrently, as if they had full access to the real machine. The hypervisor, also called a virtual machine monitor (VMM), controls and presents a machine’s physical resources as virtual resources to the VMs.

There are two types of hypervisors: Type 1 hypervisors run directly on the physical hardware, and Type 2 hypervisors require a host operating system to run (Figure 1).

Figure 1. Types of hypervisors

Examples of Type 1 hypervisors include IBM z/VM®, IBM PowerVM®, and VMware ESX/ESXi Server. Others include Citrix Xen and Microsoft® Hyper-V®. Because they run on top of the hardware itself, Type 1 hypervisors are also called native or bare-metal hypervisors.

Examples of Type 2 hypervisors include VMware Workstation, VMware Server, Kernel-based Virtual Machine (KVM), and Oracle® VM VirtualBox. Type 2 hypervisors are also known as hosted hypervisors. There are many heated debates as to whether some hypervisors (such as KVM) should be classified as Type 1 or Type 2, or whether categorizing them this way is meaningful at all, given how they work; that discussion is beyond the scope of this article.

Figure 2 provides a brief description of the hypervisors currently supported by IBM hardware and IBM cloud solutions.

Figure 2. Hypervisors currently supported on IBM hardware

Depending on the platform, VMs have different names. A virtual machine in an AIX® environment is called a logical partition, or LPAR. On x86-based systems the term used is virtual machine. z/VM systems use both terms, LPARs and VMs. LPARs on a System z® computer are essentially allocated chunks of the machine’s hardware resources, and each LPAR can support an independent operating system, one of which can be z/VM itself. So, a z/VM LPAR can host a z/VM guest, which in turn can host different VMs. A hardware facility in System z computers creates and manages LPARs in the same way that z/VM creates and manages VMs. A Linux® guest OS running on IBM z/VM is known as a zLinux guest or a zLinux VM.

Virtualization is not new. IBM first developed the technology in the 1960s to multiplex its expensive mainframe computers. After much research and a reimplementation of the CP-40/CMS and CP-67/CMS operating systems, the first fully virtualized machine, the IBM S/360-67, appeared in 1967. By 1972, virtual machines had become a standard feature of all S/370 mainframes. The S/370 ran IBM’s VM/370 operating system, a time-sharing system control program that eventually evolved into z/VM. Today, virtualization is pervasive across data centers and cloud environments, and is one of the key technologies that make cloud computing work.

Virtual appliances

A virtual appliance, or virtual application, is a pre-configured VM, or a collection of pre-configured, interdependent VMs, each bundled with a fully functional operating system (known as a guest OS) and one or more applications. Figure 3 shows some examples.

Figure 3. Examples of virtual appliances

Virtual appliances are portable, self-contained configurations of a software stack. They are also called virtual images and are usually built to host a single business application. The industry standard for the format of virtual appliances is the Open Virtualization Format (OVF), published by the Distributed Management Task Force (DMTF). Member companies such as IBM, VMware, Citrix, Microsoft, and Oracle all support OVF in their products.

OVF describes both the software to be deployed, called the OVF package, and the environment in which it executes (the OVF environment). An OVF package consists of an OVF directory containing several files, including an OVF descriptor, which is an XML file with the extension .ovf that describes the content of the OVF package. OVF packages normally include virtual disks, and optionally include supplemental files such as a manifest file, ISO files, and certificate files.

The OVF package can be made available as a set of files, or packaged using the TAR archive format and distributed as a single file with the extension .ova (for open virtual appliance or application).
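
To make the package format concrete, the following Python sketch opens an .ova file as a TAR archive, lists its members, and reads the .ovf descriptor it contains. This is a minimal, illustrative sketch: the file name appliance.ova is a hypothetical example, and a real tool would also validate the manifest and certificate files.

Listing 1. Inspecting an .ova package

import tarfile

# An .ova file is simply a TAR archive; its members normally include the
# .ovf descriptor, one or more virtual disks, and optional supplemental files.
# "appliance.ova" is a hypothetical file name used only for illustration.
with tarfile.open("appliance.ova", mode="r") as ova:
    for member in ova.getmembers():
        print(member.name, member.size)

    # Locate the OVF descriptor by its .ovf extension and read its XML text.
    descriptor_name = next(m.name for m in ova.getmembers()
                           if m.name.endswith(".ovf"))
    descriptor_xml = ova.extractfile(descriptor_name).read().decode("utf-8")
    print(descriptor_xml[:200])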

The OVF environment is an XML document that contains information pertaining to the deployment of the virtual appliance. For example, information such as host names, IP addresses, OS level configuration, and so on, can all be included as part of the OVF environment information.

The OVF descriptor specifies the properties that need to be configured during deployment of the virtual appliance.
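
As a rough illustration of what deployment tooling does with the descriptor, the following Python sketch pulls the file references and the deployment-time properties out of the descriptor XML read in Listing 1. It assumes the descriptor uses the OVF 1.x envelope namespace; it is not a complete OVF parser.

Listing 2. Reading file references and properties from an OVF descriptor

import xml.etree.ElementTree as ET

# descriptor_xml is assumed to hold the text of an .ovf descriptor,
# for example as read in Listing 1. OVF_NS is the OVF 1.x envelope namespace.
OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"

root = ET.fromstring(descriptor_xml)

# Files (such as virtual disks) referenced by the package.
for file_ref in root.iter("{%s}File" % OVF_NS):
    print("file:", file_ref.get("{%s}href" % OVF_NS))

# Properties that can be configured at deployment time.
for prop in root.iter("{%s}Property" % OVF_NS):
    print("property:", prop.get("{%s}key" % OVF_NS),
          "=", prop.get("{%s}value" % OVF_NS))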

Virtual appliances are important because they offer a new way of creating, distributing, and deploying software. Having an abstraction layer above the hypervisor, and being able to package software and distribute it as a pre-configured, “ready to run” unit, provides a few important benefits:

  • A reduction in the provisioning and deployment time of applications, which means faster time to value.
  • An improvement in the quality of the final deliverable: a completely configured application that does not require installation and configuration and is therefore less error prone.
  • The ability to exchange virtual appliances between cloud providers.
  • OVF also enables workloads to be transferred to other physical servers during maintenance or unplanned equipment or application failures.

Hypervisor editions

A hypervisor edition of an IBM product or solution is a virtual image that contains an operating system and selected middleware components around that particular IBM product or solution. It is basically a virtual appliance for a specific IBM product, pre-configured to run optimally in a virtualized environment. Hypervisor editions of IBM software come bundled as .ova files and some are preloaded on IBM cloud hardware such as IBM Workload Deployer and IBM PureSystems™. There are hypervisor editions already available for products such as IBM HTTP Server, IBM WebSphere® Application Server, IBM WebSphere MQ, IBM WebSphere Message Broker, IBM WebSphere Process Server, IBM WebSphere Portal, IBM WebSphere Business Monitor, IBM DB2®, and IBM Lotus® Web Content Management.

Workloads

The term workload is used in two subtly different ways. In general, it refers to the processing demand placed on a computer resource at a given time. But it can also mean the deployed form of a virtual application (application-centric), virtual system (middleware-centric), or virtual appliance (machine-centric). Within the context of cloud, phrases like “provisioning a workload” or “deploying a workload” refer to deploying that virtualized application along with everything else needed to run it, including the VM, the OS, and supplemental files. Asking how well a given server can handle a workload means asking how well it can meet the compute, memory, disk, and networking demands of that deployed virtual system, virtual application, or virtual appliance.

Not all workloads are the same. The resources needed for an I/O intensive workload, for example, will be different from those needed for a compute or memory intensive workload. Not all workloads require the same Quality of Service (QoS) levels either. Figuring out which workloads will run best in different environments can be a very effective way of reducing costs. As discussed further below, hybrid clouds can play an important role in these situations.

Elasticity

Capacity planning for particular workloads can be challenging. As Figure 4 illustrates, the usual pain points are well known: not planning for enough capacity to meet workload demands leads to downtime. Conversely, overestimating capacity requirements results in one or more servers being underutilized or sitting idle. Even correctly allocating capacity for peak usage is not enough, because workload demands fluctuate and capacity is wasted whenever the system is not running at peak level. The end of a test cycle, for example, might be followed by significantly lower utilization of the hardware. And even with the best capacity planning in place, there will be cases where workload demand is simply unpredictable. The ideal situation is to allocate only the capacity required at any given time. This is called elasticity, and it is an important characteristic of any cloud environment.

Figure 4. Elasticity
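
The elasticity decision itself can be expressed very simply. The following Python sketch, which is purely illustrative and not tied to any IBM product, grows or shrinks the number of VMs based on an average CPU utilization sample; the thresholds, capacities, and utilization samples are assumptions made for the example.

Listing 3. A simple elastic scaling decision

def plan_capacity(current_vms, cpu_utilization, low=0.30, high=0.75,
                  min_vms=1, max_vms=10):
    """Return how many VMs to run next, given average CPU utilization."""
    if cpu_utilization > high and current_vms < max_vms:
        return current_vms + 1   # scale out to absorb the peak
    if cpu_utilization < low and current_vms > min_vms:
        return current_vms - 1   # scale in to stop paying for idle capacity
    return current_vms           # demand is within the planned band

# A workload that ramps up, peaks, and then drops after a test cycle.
vms = 2
for sample in [0.40, 0.82, 0.90, 0.78, 0.35, 0.20]:
    vms = plan_capacity(vms, sample)
    print("utilization %.2f -> run %d VM(s)" % (sample, vms))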

Resource pooling

Maximizing the use of hardware resources is also particularly important when multiple VMs share them. Resource pooling treats a collection of hardware resources (compute, memory, storage, and network bandwidth) as a single “pool” of resources available on demand. This enables hypervisors and higher-level programs to dynamically assign and reassign resources based on demand and priority levels. Resource pooling is what enables multiple organizations, or tenants, to effectively share resources in a cloud environment.
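
The following Python sketch shows the idea of resource pooling in its simplest form: several hosts are treated as one pool of CPU and memory, and provisioning requests are granted against the pool as a whole rather than against individual machines. The host sizes and tenant requests are illustrative assumptions.

Listing 4. Treating several hosts as one resource pool

# Aggregate the capacity of individual hosts into a single shared pool.
hosts = [{"cpus": 16, "memory_gb": 64},
         {"cpus": 32, "memory_gb": 128},
         {"cpus": 16, "memory_gb": 64}]
pool = {"cpus": sum(h["cpus"] for h in hosts),
        "memory_gb": sum(h["memory_gb"] for h in hosts)}

def provision(request):
    """Grant a VM request if the shared pool still has room, else reject it."""
    if (request["cpus"] <= pool["cpus"]
            and request["memory_gb"] <= pool["memory_gb"]):
        pool["cpus"] -= request["cpus"]
        pool["memory_gb"] -= request["memory_gb"]
        return True
    return False

for tenant, req in [("tenant-a", {"cpus": 8, "memory_gb": 32}),
                    ("tenant-b", {"cpus": 40, "memory_gb": 160}),
                    ("tenant-c", {"cpus": 12, "memory_gb": 48})]:
    granted = provision(req)
    print(tenant, "granted" if granted else "rejected", "- remaining:", pool)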

Moving VMs and workloads

Whether through a built-in feature or with additional software, many operating systems and hypervisors today can dynamically move VMs or workloads to different physical servers. AIX 6.1, for example, has the Live Partition Mobility feature for moving LPARs and the Live Application Mobility feature for moving workloads. VMware uses VMotion to move VMs from one physical server to another, and z/VM’s Live Guest Relocation (LGR) allows moving a virtual machine non-disruptively to another LPAR. z/VM also enables the system to move workloads to available system resources without interruption. This ability to move a VM or workload “on the fly” increases robustness against component failures and makes it easier to respond to changes in workload demand.

Entering the cloud

No doubt, these technologies have brought data centers closer to a cloud services model. As they continue to evolve, they will further enhance the infrastructure flexibility of data centers and simplify the user’s IT experience. They will also enable more advanced cloud possibilities for both consumers and providers. Alone, they do not constitute cloud computing, but together they have laid some of the groundwork that enables the cloud.


Using cloud computing

Cloud computing is about delivering a set of IT capabilities and business functions as services on demand over the Internet or a private network. It is an entirely new way of creating, delivering, and consuming IT services. The “cloud” in this case is the infrastructure that the services run on (generally composed of hypervisors, storage devices, and networking devices), as well as the VM and workload management technologies that make it possible to effectively deliver those services.

The National Institute of Standards and Technology (NIST) identifies five essential characteristics of any cloud environment:

  • On demand self-service

    The environment should support a “do-it-yourself” model where consumers can provision resources in an automated fashion through a web browser or an application programming interface (API), without requiring human interaction with the service provider. (A minimal sketch of such a self-service API call appears after this list.)

  • Broad network access

    Capabilities should be available over a broad network and accessible via standard devices such as workstations, laptops, tablets, and mobile phones.

  • Resource pooling

    The system should pool resources so that they can be easily shared between multiple consumers. Shared resource pools allow the system to assign or reassign resources as needed based on demand. This is especially useful to support multi-tenancy models where multiple organizations or tenants share the same resources in a cloud environment.

  • Rapid elasticity

    (Also known as rapid scalability) The environment should be able to automatically (or at least quickly) add or remove compute resources based on workload demand without interrupting the running system.

  • Measured service

    A cloud environment must be able to meter and rate the resources being used, and by whom, not only to better manage workloads and optimize their execution, but also to give the consumer a transparent view of resource utilization.
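
To illustrate the on-demand self-service characteristic, the following Python sketch provisions a new instance by calling a provisioning API over HTTP. The endpoint, payload fields, and token are hypothetical; they do not describe any real IBM SmartCloud interface, and a real provider’s API will differ.

Listing 5. Self-service provisioning through an API

import json
import urllib.request

# Hypothetical request body: which catalog image to provision and how big.
request_body = json.dumps({
    "image": "websphere-application-server",
    "cpus": 2,
    "memory_gb": 4,
}).encode("utf-8")

req = urllib.request.Request(
    url="https://cloud.example.com/api/v1/instances",   # hypothetical endpoint
    data=request_body,
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <api-token>",           # placeholder credential
    },
    method="POST",
)

# No human interaction with the provider is involved: the consumer's own
# tooling issues the request and reads back the new instance's identifier.
with urllib.request.urlopen(req) as response:
    instance = json.loads(response.read())
    print("provisioned instance:", instance.get("id"))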

Cloud delivery models

Cloud offers three delivery models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). These models determine the levels of sharing or multi-tenancy for consumers. Figure 5 illustrates this. At each level in the stack, the blocks highlighted in yellow are the components that tenants share as part of that delivery model.

Figure 5. Cloud delivery models
  • Infrastructure as a Service

    At the lowest layer is Infrastructure as a Service (IaaS), where tenants share infrastructure resources such as processors, network, and storage, as well as the operating system. Tenants still have to install their own middleware components and applications. This gives them much more flexibility as to what they can install, but also makes the environment more difficult to configure and maintain. In other words, IaaS provides users with shared computing capacity, network-accessible storage, and an OS; everything else they install themselves and do not share. As subsequent articles will explain, IBM PureFlex™ System is a key solution in this area. PureFlex is a converged architecture environment that comes pre-configured with server, network, storage, and software systems for IaaS. IBM Workload Deployer, a hardware appliance based on IBM DataPower technology, also plays an important role here for private clouds.

  • Platform as a Service

    Platform as a Service (PaaS) is a layer above IaaS that provides middleware components, databases, storage, connectivity, reliability, caching, monitoring, and routing. PaaS builds upon IaaS to deliver progressively more business value. Tenants continue to use their individual applications, but use shared middleware services such as monitoring, security, database, and portal. A key player here is IBM PureApplication™, a pre-configured platform for PaaS solutions. PureApplication builds on IaaS by also providing optimized virtual applications for transaction-oriented web and database applications. Another important player here is IBM Workload Deployer, which offers IaaS and PaaS for a private cloud. PureApplication is a converged architecture, whereas IBM Workload Deployer is an appliance that is external to your existing virtualized infrastructure. The model for the first is a “cloud in a box,” whereas the model for IBM Workload Deployer is “bring your own cloud infrastructure.” Subsequent articles will cover this in more detail.

  • Software as a Service

    With Software as a Service (SaaS), tenants share everything that they would in an IaaS and PaaS solution, plus a single version of an application. In this case, all tenants share the same application, but their application data remains isolated. The cloud provider might install the application on multiple machines to support horizontal scaling. With SaaS, it is easier to add new tenants because the client simply selects and customizes a cloud application, and does not have to worry about building the middleware or installing the application. There is little left for the client to do. IBM SmartCloud Solutions, discussed later, are great examples of SaaS in practice.

Cloud deployment models

The cloud services model comprises four deployment models, listed below and shown in Figure 6.

Figure 6. Cloud deployment models
  • Public cloud

    A public cloud is open to the general public. The cloud infrastructure exists on the premises of the cloud provider, and may be owned, managed, and operated by one or more entities. One of the main reasons companies move to a public cloud is to replace their capital expenses (CAPEX) with operating expenses (OPEX). A public cloud has a pay-as-you-go pricing model. The consumer does not need to buy the necessary hardware up front to cover peak usage, and does not have to worry about correctly “forecasting” capacity requirements. This pay-as-you-go pricing model, often referred to as utility computing, enables consumers to use compute resources as they would a utility. They pay only for what they use, and get the impression of unlimited capacity, available on demand. Consumers normally do not have to care where or on what hardware the processing is done. They trust that the cloud provider will maintain the necessary infrastructure to run their applications and provide the requested service at their required Service Level Agreement (SLA). IBM SmartCloud Enterprise+ offers unparalleled support in this area.

  • Private cloud

    A private cloud is deployed for the exclusive use of an organization. The organization or a third party can own, manage, and host it, and its infrastructure can exist on or off premises. When a third party manages the cloud, it is called a managed private cloud. When the private cloud is hosted and operated off premises, it is called a hosted private cloud. There are many reasons why companies adopt a private cloud solution. Here are a few:

    • Leverage and optimize existing hardware investments. Consolidated IT resources, automation, self-service, and better integrated management tools also reduce total costs and operating expenses.
    • Concern over data security and issues of trust with multiple client organizations sharing the same resources. Clients often start their cloud venture behind an enterprise firewall.
    • Resource contention. Since a public cloud has different organizations with applications vying for shared resources, a company might prefer exclusive use of hardware such as servers and load balancers to handle specific workloads or to obtain higher availability of systems and applications during specific times.

    People sometimes have a hard time distinguishing between an on-premises private cloud and a local virtualized environment. A fair question to ask is what an on-premises private cloud has to offer an organization that already has a highly virtualized environment, with scripts written to provision new applications. The answer is: plenty more. A private cloud does not just make provisioning easier; it provides a way to offer cloud-based services to your internal organization.

    With IBM technologies for private clouds, you get dynamic resource scaling, self-service, a highly standardized infrastructure, a workload catalog with ready-to-run workloads, approvals, metering, and integrated management through a single console. An IBM private cloud also gives you the ability to leverage a standard library of virtual machines or virtual appliances that can be provisioned at any time and expanded on demand, to respond more quickly to changing business needs and improve the overall utilization of the hardware. Virtualization alone does not give you these things. As mentioned earlier, IBM Workload Deployer is one of the main products in IBM’s private cloud strategy.

  • Community cloud

    A community cloud is for the exclusive use of a community, which is a group of people from different organizations that share a common interest or mission. This type of cloud can be owned, managed, and hosted by one or more members of the community, a third party, or a combination of both, and can exist on the premises of one of the parties involved or off premises for everyone. Vertical markets and academic institutions in particular can benefit from community clouds to address common concerns. For example, technology companies working together on a new specification can use a community cloud to share resources and proofs-of-concept.

  • Hybrid cloud

    A hybrid cloud consists of two or more different cloud infrastructures that remain distinct but share technologies that enable porting of data and applications from one to the other. Hybrid cloud solutions provide interoperability of workloads that can be managed across multiple cloud environments. This includes access to third party resources and to a client partner network. The idea is to seamlessly link on-premises applications — whether home-grown, packaged, or running on a private cloud — with off-premises clouds.

    Here are some examples where a hybrid cloud might be used:

    • An organization might use a public cloud to host an application, and place the underlying database in a private cloud.
    • A company might use a private cloud to host some of its work, and a public cloud for specific uses (for example, backup and archiving).
    • An organization might host its normal workload in a private cloud, and use a public cloud to handle its heavier traffic. There is a whole science behind knowing when and what to offload to a public cloud. For example, if two different workloads have the same computational requirements, the one with the lower migration cost is the better candidate to be moved (see the sketch after this list). Some people formulate Markov Decision Process (MDP) models and complex algorithms around this problem.
    • A team might decide to split the location of an application based on its life cycle stage. For example, it might choose to do development in house and then go-live in a cloud environment.
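
    The following Python sketch illustrates the offload decision mentioned above in its simplest form: when demand exceeds private capacity, the workloads with the lowest migration cost are burst to the public cloud first. The workload sizes and costs are illustrative assumptions; real schedulers weigh many more factors, such as data transfer volume, SLAs, and compliance.

Listing 6. Choosing which workloads to offload to a public cloud

workloads = [
    {"name": "reporting", "cpus": 8,  "migration_cost": 2.0},
    {"name": "web-tier",  "cpus": 8,  "migration_cost": 5.0},
    {"name": "analytics", "cpus": 16, "migration_cost": 1.5},
]

private_capacity_cpus = 24
overflow = max(0, sum(w["cpus"] for w in workloads) - private_capacity_cpus)

# Offload the cheapest-to-move workloads first until the overflow is covered.
offloaded = []
for w in sorted(workloads, key=lambda w: w["migration_cost"]):
    if overflow <= 0:
        break
    offloaded.append(w["name"])
    overflow -= w["cpus"]

print("burst to public cloud:", offloaded)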

    IBM WebSphere Cast Iron® technologies excel in hybrid cloud management and integration solutions. With a tool called WebSphere Cast Iron Studio, users without programming knowledge can graphically create orchestrations that integrate public clouds, private clouds, and on-premises applications using built-in connectors to different kinds of endpoints, including web services, databases, text files, and SAP.

    WebSphere Cast Iron comes packaged in three different ways:

    • As a self-contained physical appliance, the WebSphere DataPower® Cast Iron Appliance (XH40).
    • As a hypervisor edition or virtual appliance that offers the same functionality as the physical appliance.
    • As a cloud service itself that provides web API and cloud integration services for others to use and enhance.
    Subsequent articles will cover this in more detail.

IBM SmartCloud

With a wide range of products and services spanning different domains and all cloud delivery and deployment models, IBM has a lot to offer both cloud consumers and providers. The IBM SmartCloud family of offerings is currently the industry’s broadest portfolio of cloud services. As Figure 7 shows, SmartCloud consists of three families of offerings: IBM SmartCloud Foundation, IBM SmartCloud Services, and IBM SmartCloud Solutions.

Figure 7. IBM SmartCloud
  • IBM SmartCloud Foundation

    IBM SmartCloud Foundation includes IBM hardware and software products that enable companies to create and configure private or hybrid clouds. The cloud infrastructure products include servers, storage, and virtualization components.

  • IBM SmartCloud Services

    IBM SmartCloud Services include managed cloud services for IaaS and PaaS. These services include IBM SmartCloud Enterprise+, IBM SmartCloud Application Services, and IBM SmartCloud Managed Backup Services.

    • SmartCloud Enterprise+ is an IBM hosted and managed private cloud service. It offers shared or dedicated resources, and a broader range of options for hypervisors and the underlying hardware platform.
    • SmartCloud Application Services add integrated platform functionality (PaaS) to the IaaS offering. This functionality includes tools for developing, deploying, integrating and managing applications in the cloud as well as special support for certain commercial business applications, such as SAP.
    • SmartCloud Managed Backup Services include capabilities for backing up and protecting critical data either on or off premises. These services enable companies to decrease their risk of data loss and improve availability in cases of outages or disaster, and also better manage their regulatory compliance requirements.
  • IBM SmartCloud Solutions

    IBM SmartCloud Solutions combine SaaS with IBM’s deep industry and process skills to create cloud-based applications that businesses can begin using immediately as services to meet specific needs. Figure 7 shows the different categories of SaaS applications available today.


Conclusion

This wraps up the first part of this series, giving you a fundamental overview of what cloud computing means and the technologies that make it work.
