Virtualization is a modern approach to sharing system resources so that provisioning can readily meet business requirements. It can make a single physical resource operate as many separate, smaller resources, or aggregate many smaller resources to act as a single larger one. In traditional software development, complex systems are built in a sequential, phase-wise manner in which all requirements are gathered at the beginning, which makes it difficult to accommodate stakeholder requirements that change with shifting market strategies and/or technology conditions. Agile development addresses this shortcoming. This article discusses the virtualization technologies that can be used in agile product development to adapt to changing market requirements.
Types of virtualization
Virtualization is an approach to logically dividing system resources such as hardware, software, and time-sharing into separate virtual machines that are independent of the operating system. Put another way, virtualization allows the user to run multiple applications on an existing operating system. IBM first introduced the concept of virtualization in the 1960s to fully utilize mainframe hardware by logically partitioning it into virtual machines that could perform multiple tasks and run multiple applications at the same time, reducing expenses.
In the distributed world, virtualization helps make a single physical resource operate as many separate, smaller resources, or makes many smaller resources aggregate together to act as a single bigger resource, based on application requirements. From the end user's point of view, the resource looks like a single physical resource irrespective of the type of virtualization technology used at the back end. Virtualization can be adopted at different phases of the software development life cycle, such as servers, storage, networks, input/output, and application development, for effective utilization of resources.
Server virtualization is the process of hiding server resources such as individual physical servers, processors, and operating systems from end users. It divides one physical server into multiple isolated virtual environments called virtual private servers, guests, instances, containers, or emulations. Popular server virtualization technologies include the following:
Hardware virtualization
Hardware virtualization uses virtualization software (called a hypervisor) that creates a virtual machine by emulating an entire hardware environment, allowing multiple operating systems to run in parallel on a host computer. The hypervisor monitors the execution of the guest operating systems, as shown below in Figure 1; it catches system calls and redirects them to manipulate data structures provided by the hypervisor.
Figure 1. Hardware virtualization example
In this way, multiple operating systems (including multiple instances of the same operating system) can share the same hardware resources. IBM's PR/SM hypervisor (a type 1 hypervisor) runs directly on the hardware to allocate and monitor system resources such as CPUs, direct access storage devices, and memory across guest operating systems. In this model, the guest operating systems run in a layer above the hypervisor.
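As a rough illustration of this trap-and-redirect flow, the toy Python sketch below models a hypervisor that intercepts privileged guest operations and services them against its own per-guest data structures. All class and method names here are invented for illustration; real hypervisors operate at the instruction and hardware level.

```python
# Toy trap-and-emulate sketch (illustrative only). Guest "privileged
# operations" are trapped and serviced against hypervisor-owned state.

class Hypervisor:
    def __init__(self):
        # Hypervisor-owned state: one page table and I/O map per guest.
        self.guest_state = {}

    def create_guest(self, name):
        self.guest_state[name] = {"page_table": {}, "io_ports": {}}
        return Guest(name, self)

    def trap(self, guest, op, *args):
        """Intercept a privileged operation and emulate it."""
        state = self.guest_state[guest]
        if op == "map_page":
            virt, phys = args
            state["page_table"][virt] = phys      # update shadow page table
            return phys
        if op == "io_write":
            port, value = args
            state["io_ports"][port] = value       # emulate a device register
            return value
        raise ValueError(f"unhandled privileged op: {op}")

class Guest:
    def __init__(self, name, hypervisor):
        self.name, self.hv = name, hypervisor

    def syscall(self, op, *args):
        # On real hardware the CPU traps; here we call the hypervisor directly.
        return self.hv.trap(self.name, op, *args)

hv = Hypervisor()
g1, g2 = hv.create_guest("linux"), hv.create_guest("bsd")
g1.syscall("map_page", 0x1000, 0x8000)
g2.syscall("map_page", 0x1000, 0x9000)   # same guest-virtual page, different frame
```

Note how both guests can use the same guest-virtual page number while the hypervisor keeps their mappings isolated in separate shadow tables.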
In hardware virtualization, the hypervisor provides the virtualization abstraction of the underlying computer system to the guest operating systems, so a guest operating system runs unmodified on the hypervisor.
Para virtualization
In para virtualization, the hypervisor likewise resides on the hardware, as shown below in Figure 2, but there is a communication mechanism between the hypervisor and the guest operating systems that improves performance and efficiency through mutual cooperation.
The guest operating system uses APIs provided by the virtual machine monitor (VMM) to communicate with the hypervisor and directly access certain resources on the underlying hardware. VMware introduced a para virtualization interface called the Virtual Machine Interface (VMI) in 2005 to provide a communication mechanism between the hypervisor and the guest operating systems, whereas the IBM VM operating system on zSeries has used a similar concept since 1972.
Figure 2. Para virtualization example
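To contrast with the trap-based model, the hypothetical Python sketch below shows the shape of a para-virtualized interface: the guest kernel is modified to call an explicit hypercall API instead of issuing privileged instructions that must be trapped. The hypercall names and registration table are illustrative only, not the actual VMI or any real hypervisor API.

```python
# Toy para-virtualization sketch: the guest calls hypervisor entry points
# directly, which is where the efficiency win over trapping comes from.

HYPERCALL_TABLE = {}

def hypercall(name):
    """Register a function as a hypervisor entry point."""
    def register(fn):
        HYPERCALL_TABLE[name] = fn
        return fn
    return register

@hypercall("set_timer")
def set_timer(guest, ticks):
    guest["timer"] = ticks
    return ticks

@hypercall("flush_tlb")
def flush_tlb(guest):
    guest["tlb"] = {}
    return True

def guest_kernel_tick(guest):
    # A para-virtualized guest kernel invokes the hypervisor API explicitly;
    # no privileged instruction has to be caught and emulated.
    return HYPERCALL_TABLE["set_timer"](guest, 100)

guest = {"timer": 0, "tlb": {"virt": "phys"}}
guest_kernel_tick(guest)
HYPERCALL_TABLE["flush_tlb"](guest)
```

The key structural difference from the previous sketch is that the guest code knows it is virtualized and cooperates with the hypervisor.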
Virtualization players
Virtualization players run on host operating systems and provide complete hardware virtualization to the guest operating systems. These players virtualize the hardware attached to the host operating system, such as network adapters, hard disks, USB devices, video adapters, and serial/parallel devices, and make it available to the guest operating systems that run on them (as shown in Figure 3).
Figure 3. Virtualization player example
In this model, the virtualization players run within a conventional operating system environment, so the guest operating systems running on top of these players are at the third level above the hardware. Some virtualization players provide a facility to save the context or suspend the running tasks on guest operating systems; a suspended guest can even be moved to another physical system and resume execution exactly at the point of suspension. VMware and Citrix products are examples of virtualization players.
Operating system level virtualization
In this model, the host runs a single OS kernel as its core and exports operating system functionality to each of the guest operating systems. Guests must use the same operating system as the host (as shown below in Figure 4), although some host operating systems allow different distributions of the same operating system as guests. In this method, the kernel of an operating system allows multiple isolated user-space instances, each of which may look and feel like a real server from the point of view of its end user.
Figure 4. Operating system level virtualization example
Each of these instances is isolated, and the kernel often provides resource management features to limit the resources available to every instance. The binaries and libraries on the host operating system can be shared across the guests, so this model gives the user the flexibility to create many guests at the same time on a host system. Several operating system virtualization technologies are available, such as AIX V6.1 workload partitions (WPARs), Solaris 10 containers, and HP-UX 11i vPars.
AIX 6.1 provides the WPAR facility, which offers an isolated environment and resource control for applications. The host operating system can monitor the processes running in a particular WPAR. With a shared-system WPAR configuration, there is no need to install software on each WPAR: software installed on the host operating system is shared by all the WPARs created. In a private WPAR configuration, the user can install the required software in a particular WPAR. An application WPAR can be created to run applications with dedicated resources.
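The isolation and resource-limiting behavior described above can be sketched with a toy model. The hypothetical Python code below shows one shared kernel exporting isolated instances, each with its own process table and a resource cap; nothing here corresponds to a real WPAR or container API.

```python
# Toy sketch of OS-level virtualization: one shared kernel, many isolated
# user-space instances, each with a resource cap enforced by the kernel.
# All names are illustrative.

class Kernel:
    def __init__(self):
        self.instances = {}

    def create_instance(self, name, max_procs):
        # Each instance gets an isolated process table and a process limit.
        self.instances[name] = {"procs": [], "max_procs": max_procs}

    def spawn(self, instance, cmd):
        inst = self.instances[instance]
        if len(inst["procs"]) >= inst["max_procs"]:
            raise RuntimeError(f"{instance}: process limit reached")
        inst["procs"].append(cmd)

kernel = Kernel()
kernel.create_instance("test-env", max_procs=2)
kernel.create_instance("dev-env", max_procs=4)
kernel.spawn("test-env", "httpd")
kernel.spawn("dev-env", "make")
# "test-env" cannot see or exhaust "dev-env" resources, and vice versa.
```

Because all instances share one kernel, creating a new one is cheap, which is exactly why this model suits the rapid environment setup that agile iterations demand.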
Network virtualization is the process of combining hardware and software network resources and network functionality into a single, software-based administrative entity: a virtual network. Network virtualization allows the network to be reconfigured on the fly without any need to touch a single cable or device; instead, virtualization-capable network devices are managed remotely and reconfigured logically. Forms of network virtualization include network interface cards (NICs) at the hardware level and virtual LANs (VLANs) at the network level.
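As a concrete example of network-level virtualization, the sketch below builds an IEEE 802.1Q VLAN tag, the mechanism that lets one physical network carry several logical networks. The helper functions are illustrative, though the 4-byte tag layout (TPID 0x8100 followed by a 12-bit VLAN ID) follows the 802.1Q standard.

```python
import struct

# A VLAN-capable device inserts an 802.1Q tag after the source MAC address,
# so frames from different logical networks can share one physical link.

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    if not 0 < vlan_id < 4095:
        raise ValueError("VLAN ID must be 1-4094")
    tci = (priority << 13) | vlan_id        # PCP (3 bits) | DEI | VID (12 bits)
    tag = struct.pack("!HH", 0x8100, tci)   # TPID 0x8100, then the TCI
    # dst MAC (6 bytes) + src MAC (6 bytes) come first;
    # the tag is inserted before the EtherType.
    return frame[:12] + tag + frame[12:]

def vlan_of(frame: bytes) -> int:
    tpid, tci = struct.unpack("!HH", frame[12:16])
    assert tpid == 0x8100, "not an 802.1Q frame"
    return tci & 0x0FFF                     # low 12 bits are the VLAN ID

raw = bytes(6) + bytes(6) + b"\x08\x00" + b"payload"  # minimal Ethernet frame
tagged = tag_frame(raw, vlan_id=100)
```

A switch that understands these tags can keep VLAN 100 traffic logically separate from any other VLAN while both traverse the same cable.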
Storage virtualization separates the logical and physical locations of information, which helps make information more readily available for business applications regardless of changes to the physical infrastructure.
Figure 5. Storage virtualization example
Virtualization provides the pooling of multiple physical storage devices to appear as a single logical storage device and also handles the process of mapping requests to the actual physical location by itself, as shown above in Figure 5. The mapping depends on the following types of implementations.
- Host-based virtualization
In host-based virtualization, a software layer above the physical device driver layer processes I/O requests and redirects them to the specific physical device. It also provides a facility to combine disk subsystems from multiple vendors into a single reservoir of capacity that can be managed from a single point. Access to a specific device is handled by the traditional physical device driver associated with it. Although this approach is simple and supports any type of physical storage device, it cannot migrate or replicate data across systems or synchronize host instances across an organization. IBM SAN Volume Controller and the FalconStor Software NSS Virtual Appliance are examples of this type of virtualization.
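The mapping process described above can be sketched as a small logical-to-physical translation layer. The following toy Python code (with invented names) pools two devices into one logical block address space and resolves each logical block to a device and offset, roughly what a host-based virtualization layer does beneath the file system.

```python
# Toy sketch of the mapping at the heart of storage virtualization: several
# physical devices appear as one logical device, and the layer translates
# each logical block number to (device, physical offset).

class StoragePool:
    def __init__(self, devices):
        # devices: list of (name, size_in_blocks), concatenated in order.
        self.extents = []
        start = 0
        for name, size in devices:
            self.extents.append((start, start + size, name))
            start += size
        self.total_blocks = start

    def resolve(self, logical_block):
        """Map a logical block number to (device, physical offset)."""
        for start, end, name in self.extents:
            if start <= logical_block < end:
                return name, logical_block - start
        raise IndexError("logical block out of range")

pool = StoragePool([("disk0", 1000), ("disk1", 500)])
# The host sees a single 1500-block device; the pool hides where blocks live.
```

Because the mapping is held in the virtualization layer, the physical layout can change (a disk added or replaced) without the host's logical view changing.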
- Storage device-based virtualization
Storage device-based virtualization provides the facility to join multiple physical disks into a single array and divide it into multiple volumes of different sizes based on requirements. In this technique, a primary storage controller provides the virtualization services for other storage controllers from the same or different vendors. Although it delivers most of the benefits of storage virtualization, such as data migration and replication, storage utilization can be optimized only across the connected controllers, so replication and data migration are possible only within them.
- Network-based virtualization
Network-based virtualization is file-level computer data storage served over a computer network, providing data access to multiple clients. These are dedicated storage units on a network, without keyboard or display, that are controlled and configured over the network. Each unit contains one or more hard disks, often arranged into logical, redundant storage containers or RAID arrays (redundant arrays of inexpensive/independent disks). This is the most commonly used form of storage virtualization. It provides a single management interface for all virtualized storage connected to it, and data replication across heterogeneous devices is also possible.
Input/output (I/O) virtualization
This is a methodology to simplify enterprise I/O environments by abstracting the upper-layer protocols from the physical connections. It enables one physical adapter card to appear as multiple virtual network interface cards (vNICs) and virtual host bus adapters (vHBAs) to networking resources (LANs and SANs). In the physical view, virtual I/O replaces a server's multiple I/O cables with a single cable that provides a shared transport for all network and storage connections. Server I/O is a critical component of successful and effective server deployments, particularly for virtualized servers that accommodate multiple applications: virtualized servers demand more network bandwidth and connections to more networks and storage, and in a virtualized environment I/O performance problems are often caused by running numerous virtual machines (VMs) on one server.
By abstracting upper-layer protocols from physical connections, I/O virtualization provides greater flexibility, greater utilization, and faster provisioning than traditional NIC and HBA card architectures. Because configuration changes are implemented in software rather than hardware, virtual I/O technologies can be dynamically expanded and contracted, and they can usually replace multiple network and storage connections to each server with a single cable that carries multiple traffic types. Virtual I/O lowers costs and simplifies server management by using fewer cards, cables, and switch ports, while still achieving full network I/O performance.
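The single-cable, multiple-traffic-types idea can be sketched as a simple multiplexer. In the hypothetical Python model below, several vNICs share one physical link by tagging each frame with its virtual adapter ID, and the link demultiplexes frames back to their owners; all names are illustrative.

```python
# Toy sketch of I/O virtualization: multiple virtual adapters share one
# physical transport, and traffic is routed by virtual-adapter ID.

class PhysicalLink:
    def __init__(self):
        self.wire = []                     # frames in flight on the shared cable

    def transmit(self, vnic_id, payload):
        self.wire.append((vnic_id, payload))

    def deliver(self, vnics):
        # Demultiplex: route each frame back to the vNIC that owns it.
        for vnic_id, payload in self.wire:
            vnics[vnic_id].rx_queue.append(payload)
        self.wire.clear()

class VirtualNIC:
    def __init__(self, vnic_id, link):
        self.vnic_id, self.link = vnic_id, link
        self.rx_queue = []

    def send(self, payload):
        self.link.transmit(self.vnic_id, payload)

link = PhysicalLink()
vnics = {1: VirtualNIC(1, link), 2: VirtualNIC(2, link)}
vnics[1].send(b"lan traffic")              # e.g., network traffic
vnics[2].send(b"san traffic")              # e.g., storage traffic
link.deliver(vnics)
```

Adding another virtual adapter is a software change to the table of vNICs, not a new cable, which is the provisioning advantage the text describes.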
With client virtualization, a user's entire PC environment executes on a shared server or dedicated blade client in the data center, with the graphical display output sent to a remote access device. Client virtualization provides higher data security and easier manageability, decreases end-user downtime, and ensures business continuity.
Application virtualization refers to a separation of program execution from program display; in other words, a program executes on a server, but the graphical output is sent to a remote client device. The end-user sees the full graphical display of the program and is able to interact with it with input devices.
In desktop virtualization, a user's entire PC executes on a central server, with the graphical display output sent to a client device. Desktop virtualization often uses an inexpensive client device for the end-user display and interaction. These thin clients, as they are known, can be low-cost devices with little computing power and no local disk storage, which can significantly reduce the cost of each employee's end-user device.
Advantages of virtualization
Though there are many, the important advantages of virtualization are:
- Server hardware can be used at its maximum processing capacity.
- Data centers can consolidate servers, because a single physical server can host multiple guest systems.
- Energy costs fall because virtualization reduces the total number of physical servers, which can significantly reduce a company's overall energy bill.
- IT operating costs decrease with a flexible infrastructure and less labor.
- Quality of service improves because resources can be allocated to the business rapidly.
- Disaster recovery improves because virtual machines can be transferred to another server within seconds or minutes.
- Client virtualization enables client software images to be maintained on centralized servers, which eases updates and regular patching.
- Virtualization allows companies to reduce energy use, cooling requirements, and the total number of machines in use by up to 90 percent, allowing software development to go green.
Agile software development process
In this section, we discuss the traditional waterfall model and the agile software development model, emphasizing the benefits of the agile process.
Waterfall or traditional model
The fundamental idea behind waterfall, or traditional, software development is that complex software systems can be built in a sequential, phase-wise manner in which all requirements are gathered at once at the beginning.
The entire design is completed in the next phase, and eventually the overall design is implemented as production-quality software. The greatest hindrance with waterfall is its underlying assumption that all project requirements can be accurately and appropriately gleaned at the beginning of the project, which often leads to a misplaced or misunderstood work product.
Limitations of a traditional approach
This approach assumes that complex systems can be built in a single pass, without going back to revisit stakeholder requirements, which might change due to changing market strategies and/or technology conditions.
The steps of the agile development process are shown below in Figure 6. The process starts with project approval and continues with pre-iteration planning. Then comes an iteration plan, execution, and wrap-up, which may be repeated over and over depending on the requirements of the project, followed by a post-iteration plan and finally a release.
Figure 6. Agile development process
The agile way
The agile software development method vouches for productivity and values over process, documentation overhead, and artifacts. Agile methods stress quick sprints; small, frequent releases; continuous integration; testing; and evolving requirements facilitated by direct user involvement in the development process (as shown below in Figure 7). The common belief that agile departs from an established, tried-and-tested, prehistoric waterfall method is incorrect: although waterfall is often referred to as "traditional and old", it first appeared in 1970, in an article by Winston W. Royce (1929-1995), when software engineering was in its infancy compared to other engineering disciplines.
The philosophical basis of agile software development is formed by the four Agile Alliance values:
- Individuals and interactions over processes and tools.
The most important aspect of any organization is its people; an organization is only as smart and productive as its employees. The best tools and processes do not help if the people operating them are not proficient. Belbin's team model is a good example of how understanding individual needs can result in a very productive organization. It was devised by Dr. Meredith Belbin to measure preference for the nine team roles he discovered while studying numerous teams at Henley Management College.
- Working software over comprehensive documentation.
The ultimate intention of the software development life cycle is to develop software, or work products, not documents; otherwise, it would be called the document development life cycle. Preparing documents is important and has its place, but it should be streamlined. The duality of the paired values above should be clearly understood.
- Customer collaboration over contract negotiation.
Constant stakeholder feedback is a fundamental part of an agile development cycle. Customers often no longer want what they asked for at the beginning because of changing business needs, market strategy, and so on, so it is crucial to keep them involved and to get continuous feedback as the product develops. As noted earlier, the terms on the left-hand side are not replacements for the expressions on the right; the duality of these terms should be respected and followed.
- Responding to change over following a plan.
Responding to change is the need of the hour. The market changes, business needs change, technology changes, and even the way stakeholders understand the system being developed changes. So our software should also reflect this change. A plan strongly affects the course of events and nature of things, but it should be tangible, flexible, and malleable to incorporate the change.
Each of the four values above has two parts: the first term represents the agile manifesto, and the second represents the traditional waterfall value. We should understand clearly that although the traditional values still matter, we should value the agile ones a little more. We should not eliminate the traditional values, but we should not overdo them and carry them too far either.
Figure 7. Agile development process at lower level
The most commonly used agile software development methodologies for software development are as follows:
Extreme Programming (XP)
This is an agile software development methodology intended to improve software quality and agility in the face of changing customer, business, and market requirements. A few XP rules and concepts worth mentioning are "pair programming" (from which I have coined the new term "pair testing"), "germinating effective user stories", "measuring project velocity", and "continuous integration".
Scrum
Scrum is an iterative, incremental model for project management and development. A few Scrum concepts are the product backlog, sprint backlog, product burn down chart, and Scrum Master.
Virtualization techniques for agile development
Virtualization supports the twin goals of responding more quickly and effectively to business changes and improving IT asset utilization. Effective virtualization bridges and integrates local and remote resources, ultimately tying a diverse implementation together behind a common interface and resource repository.
Integrating virtualization and agile techniques is a first step toward utility computing, in which data and computing power can be moved automatically through an IT infrastructure when needed. The virtualization of IT resources, such as server and desktop capacity, satisfies the market's demand for flexibility. Effective virtualization improves asset utilization by making servers, storage, networks, and other capabilities more modular and hence more reusable. It also improves system throughput and reliability by deploying computational power where necessary, and it maximizes the utilization of otherwise underutilized resources.
Figure 8. Example of operating system virtualization technique for agile product development
Virtualization can be adopted at different stages of the product development life cycle, and each virtualization technique has different advantages based on usage. Hardware, network, storage, I/O, and desktop virtualization techniques provide the advantages described in the earlier sections and can be adopted in any software development model, whereas operating system virtualization offers special benefits for agile development because it can provide on-demand resources without affecting existing operations. Because agile development embraces the lean principle of eliminating waste at every level of the product development life cycle, operating system virtualization lets a physical system be shared by different teams, such as test and development, and be set up for specific requirements.
Agile development consists of short iterations, and each iteration ends with test certification. The time needed to set up test machines can be saved by using shared operating system instances, since software installed on the global system becomes available on all shared operating system instances with minimal effort. This also reduces the number of physical servers required and the time needed to set them up, makes it easy to set up a fresh environment for testing, and helps in detecting one-time issues during installation and functional verification.
Adopting virtualization technologies in agile product development provides better ways to eliminate wasted resources at different stages, thereby improving energy efficiency, disaster recovery, machine usage, cooling, development time, and product quality.
Resources
- The AIX 6.1 Information Center provides technical details about the AIX operating system.
- See Introduction to Workload Partition Management in IBM AIX Version 6.1 to understand AIX 6.1 WPAR operating system virtualization capability.
- Workload Partition Management in IBM AIX Version 6.1 provides information of enhancements on WPAR with AIX 6.1 TL2.
- The AIX and UNIX developerWorks zone provides a wealth of information relating to all aspects of AIX systems administration.