Transition to AIX from Solaris

Partitioning and virtualization

You've been working with Solaris for ten years now and, like many other companies, yours has just started a large server consolidation and migration project from Solaris 10 to AIX® 6.1. Many of the commands are similar, but you need to know how to work with partitioning and virtualization. What are the partitioning differences between an IBM® server and a Sun server? What do you need to know about workload partitions (WPARs) to make a successful transition from containers? What are the similarities and differences between WPARs and zones, and how does the process of creating WPARs differ from creating zones? Finally, what can you do on IBM System p™ that you can't do on Sun servers? These are some of the questions addressed in this article, with the objective of making your transition easier.

Ken Milberg (ken@unix-linux.org), President and Managing Consultant, Technology Writer, and Site Expert, techtarget.com

Ken Milberg is a technology writer and site expert for Techtarget.com and provides Linux technical information and support at Searchopensource.com. He is also a writer and technical editor for IBM Systems Magazine, Power Systems edition, and a frequent contributor of content for IBM developerWorks. He holds a bachelor's degree in computer and information science, as well as a master's degree in technology management from the University of Maryland University College. He is the founder and group leader of the N.Y. Metro POWER-AIX/Linux Users Group. Through the years, he has worked for both large and small organizations and has held diverse positions from CIO to senior AIX engineer. He is currently president and managing consultant for UNIX-Linux Solutions, is a PMI-certified Project Management Professional (PMP), an IBM Certified Advanced Technical Expert (CATE), and is also IBM SCon certified.



12 February 2008


Introduction

The first thing you need to do is understand some of the basic concepts as they relate to System p™ and AIX®. The differences in the commands themselves are easy enough to research. A popular site is Rosetta Stone (see Resources), which provides a side-by-side comparison of common system administration tasks across many variants of UNIX® and Linux®. IBM® also offers several IBM Redbooks® specifically tailored to the Solaris administrator to assist with file system management, user administration, and kernel tuning (see Resources). But what about the non-traditional tasks such as partitioning and virtualization? This is where you have to dig further to really understand the concepts. Workload partitions (WPARs) are a relatively new technology for IBM. Simply put, they virtualize your operating system (OS), allowing for fewer operating system images on a partitioned physical server (where a partition corresponds to an IBM logical partition (LPAR), a Solaris Logical Domain (LDom), or a Sun Dynamic System Domain) and making your life easier. They simplify the deployment and administration of your systems and help you consolidate applications. Partitioning the server itself, using either an IBM LPAR, a Solaris LDom, or a Sun Dynamic System Domain (DSD), is done prior to the actual OS virtualization.

Partitioning on the IBM UNIX platform was introduced with the POWER4 architecture in 2001, along with AIX 5L™, through LPAR technology (it's worth noting that IBM started virtualization with its mainframe hypervisor in 1967 and mainframe partitioning with LPARs in 1987). IBM LPAR technology allows you to partition physical servers into separate LPARs, each with its own instance of AIX or Linux. The technology itself is hypervisor based, so it is really a virtual machine technology comparable to VMware and Xen. AIX 5.2 added the ability to dynamically move CPUs, I/O adapters, and memory without rebooting partitions. This was achieved through Dynamic Logical Partitioning (DLPAR), the process of dynamically allocating additional CPU or memory resources to a running LPAR. AIX 5.3 and the POWER5 architecture brought Advanced POWER Virtualization (APV) to IBM System p, which includes support for Virtual I/O Servers (shared Ethernet and virtual SCSI), micro-partitioning, uncapped partitions (using shared processor pools), and the Partition Load Manager (PLM). On IBM System p, AIX or Linux partitions run as LPARs within the same frame, each with its own OS image and resources (CPUs, RAM, and I/O). To partition IBM servers, you can use either a Hardware Management Console (HMC) or a relatively new product called the Integrated Virtualization Manager (IVM). The HMC is a PC running a locked-down, custom-configured version of Linux that allows you to create, configure, and maintain the partitioned servers. IVM does this without the need for a separate workstation; it is a browser-based interface that allows you to configure logical resources on a server.
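As a quick illustration of DLPAR in practice, the following is a minimal sketch of adding resources to a running LPAR from the HMC command line; the managed-system name (my-p570) and partition name (lpar13) are hypothetical, and the exact flags can vary by HMC release:

# Add 512 MB of memory to a running partition
chhwres -r mem -m my-p570 -o a -p lpar13 -q 512

# Add one processor to the same partition
chhwres -r proc -m my-p570 -o a -p lpar13 --procs 1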

This article is not a comparison of IBM LPARs to Solaris containers and zones, as that is not really an apples-to-apples comparison, though it is fair to say that both IBM and Sun have played up that comparison in their marketing over the past several years. Each LPAR in a partitioned IBM System p has its own OS image, while containers and zones share the same base operating system. In this sense, LPARs are more similar to Sun's DSDs (hardware-based partitioning with no real virtualization capabilities) and LDoms, though from a technology standpoint they are closer to Sun's upcoming xVM, a paravirtualization strategy based on Xen. This article discusses Sun's container virtual server technology, how it compares to IBM's WPAR technology, and what you need to know as a Solaris administrator to succeed in the IBM space. To a lesser extent, it should also appeal to AIX administrators trying to get a better understanding of OS virtualization in general (WPARs and zones).

Why WPARs?

Why do you even need WPARs or zones if you have LPARs, DSDs, or LDoms? It is fascinating that IBM and Sun have both been gravitating toward the center and toward one another. Rather than continue the debate over the pros and cons of LPARs versus container virtual server technology, both IBM and Sun have recognized the importance of what the other's technology offered, which explains IBM's push into WPARs and Sun's push into LDoms (which came after LPARs in an attempt to make Solaris more partition friendly) and, soon, a paravirtualization product in xVM. Any AIX administrator will tell you that while LPARs are wonderful, they have the disadvantage of requiring you to maintain multiple OS images and possibly overcommit expensive hardware resources, such as RAM. Simply put, partitioning (whether hardware based or hypervisor/firmware based) helps you consolidate and virtualize hardware within a single physical server and provides the important ability to configure separate host systems within a larger physical box. OS virtualization, such as WPARs or zones, provides an even more granular method of resource management. It does this by sharing OS images and is clearly the most efficient use of processor, memory, and I/O resources.

In the same way that WPARs complement LPARs, zones complement DSDs or LDoms. Both provide real value to businesses. OS virtualization, in the form of zones or WPARs, allows you to further virtualize application workloads by running multiple workload images on a single instance of an OS. New applications can be deployed quickly in these virtualized OS partitions and, in the case of IBM, you don't always have to concern yourself with creating new partitions and installing a new OS. Furthermore, fewer images need to be managed and patched, and fewer hardware resources need to be allocated. On the other hand, the hosting partition is a single point of failure: in the event of an LPAR problem, all of the WPARs running on it are also affected. They are likewise affected by planned outages, because every WPAR running on an LPAR shares that LPAR's OS level, so installing a fix pack on the LPAR affects them all.

Containers and zones

Solaris' container virtual server technology includes two main components: the zones partitioning technology and the resource management facility. Though the terms are often used interchangeably, a zone is the actual virtualized environment, while a container is a zone that also uses the OS resource management facility. This technology was first introduced in Solaris 10. Zones provide an isolated and secure environment for running applications, and they are created from within one single instance of Solaris. In theory, you can create up to 8192 zones; in practice, the number is determined by other constraints on the system.

A Solaris global zone is what you have before you create any other zones; when a system is first installed and deployed, all processes run in the global zone. For all practical purposes, the non-global zones are the zones you actually work with. There are two types of non-global zones: the sparse root zone and the whole root zone. The sparse root zone (which uses the inherit-pkg-dir resource) optimizes object sharing and consists of a root file system that only partially consists of data copied to it from packages and files. This type of zone usually requires only about 100MB of space. Four directories of the root file system are accessible in this model:

  • /lib
  • /platform
  • /sbin
  • /usr

Further, only the most important files are copied, those where SUNW_PKGTYPE is set to root. The rest of the packages are not installed into the zone but are accessible through a loopback file system (lofs) in read-only mode. A whole root zone requires a full Solaris 10 installation. While it takes up more space, it offers more flexibility: you can remove any file or package you don't want, something you cannot always do with the sparse model.

The sparse model is the default. The global zone (the bootable system) is the only zone from which the non-global zones can be managed. It contains a complete installation of Solaris and is aware of all devices and file systems in the box. Resource management allows for the management of system resources, including CPU and memory. Sun offers dynamic resource pools through the Solaris Resource Manager (a component of the container virtual server technology), which allows resources to be distributed with more control, including resource pools and resource capping. As a general rule, most applications work in zones out of the box unless they require access to certain physical devices. In some cases, it is just a configuration issue; in others, applications must be modified to support zones. This includes applications that require access to /dev/kmem and network devices. Sun provides a list of fully supported applications, as evaluated by independent software vendors (ISVs). Do not assume that if your application is not on the list, it won't work: some ISVs treat zones just like any other platform and don't feel they need to certify for them specifically.
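To illustrate the resource management side mentioned above, here is a minimal zonecfg sketch that assigns CPU shares to a zone; the share value of 10 is arbitrary, and the shares only take effect when the Fair Share Scheduler is in use:

zonecfg:testzone> add rctl
zonecfg:testzone:rctl> set name=zone.cpu-shares
zonecfg:testzone:rctl> add value (priv=privileged,limit=10,action=none)
zonecfg:testzone:rctl> end
zonecfg:testzone> commit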

Creating and administering zones

In this section, you'll create and configure Solaris zones. The environment you're working in is a two-way Sun Fire V210 with UltraSPARC IIIi processors running at 1336 MHz.

First you need to initialize the zone:

root[ksh]@ezqspc18# zonecfg -z testzone

Then print out the current default configuration information, which can also be saved to a file (see Listing 1).

Listing 1. Printing out current configuration default information
zonecfg:testzone>
zonecfg:testzone> export
create -b
set zonepath=/home/zones/myzone
set autoboot=false
set ip-type=shared
add inherit-pkg-dir
set dir=/lib
end
add inherit-pkg-dir
set dir=/platform
end
add inherit-pkg-dir
set dir=/sbin
end
add inherit-pkg-dir
set dir=/usr
end
add net
set address=192.168.0.22
set physical=e1000g0
end

From here, you'll make some configuration changes to some of the variables, setting up your installation zone path and configuration file (see Listing 2).

Listing 2. Making configuration changes to some variables
zonecfg:testzone>
zonecfg:testzone> set zonepath=/zones/testzone
zonecfg:testzone> commit
zonecfg:testzone> export -f /testzone.cfg
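As a side note, the file exported in Listing 2 can be replayed to build another zone with the same settings; a minimal sketch, where newzone is a hypothetical name and you would normally edit the zonepath and network address in the file first:

zonecfg -z newzone -f /testzone.cfg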

You're now ready for the installation (see Listing 3).

Listing 3. Installing
root[ksh]@ezqspc18# zoneadm -z testzone install
WARNING: skipping network interface 'e1000g0' which may not be present/plumbed \
    in the global zone.
Preparing to install zone <testzone>.
Creating list of files to copy from the global zone.
Copying <7231> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1580> packages on the zone.
Initializing package <367> of <1580>: percent complete: 23%

After approximately 17 minutes, the installation completes (see Listing 4).

Listing 4. Installation completed
Initialized <1580> packages on zone.
Zone <testzone> is initialized.
The file </zones/testzone/root/var/sadm/system/logs/install_log> contains a log \
    of the zone installation.
root[ksh]@ezqspc18#

Next, you need to boot the zone (see Listing 5).

Listing 5. Booting the zone
root[ksh]@ezqspc18# zoneadm -z testzone boot
WARNING: skipping network interface 'e1000g0' which may not be present/plumbed \
    in the global zone.
root[ksh]@ezqspc18#

At that point, you can log in from the global environment using zlogin:

zlogin -C testzone

After logging in, you'll see the connected screen, but nothing else:

[Connected to zone 'testzone' console]
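(If you need to leave the zone console and return to the global zone at any point, the console escape sequence is ~. typed at the start of a line, assuming the default escape character.)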

This process takes about five minutes before you receive the next message. You will not get a login prompt right away, but don't be alarmed. The first time you do this, you'll need to further configure your environment, which requires inputting your terminal type plus hostname and network information (see Listing 6).

Listing 6. Further configuring your environment
Select a Language

  0. English

Please make a choice (0 - 0), or press h or ? for help: 0
What type of terminal are you using?
 1) ANSI Standard CRT
 2) DEC VT52
 3) DEC VT100
 4) Heathkit 19
 5) Lear Siegler ADM31
 6) PC Console
 7) Sun Command Tool
 8) Sun Workstation
 9) Televideo 910
 10) Televideo 925
 11) Wyse Model 50
 12) X Terminal Emulator (xterms)
 13) CDE Terminal Emulator (dtterm)
 14) Other
Type the number of your choice and press Return: 3
Creating new rsa public/private host key pair
Creating new dsa public/private host key pair
Configuring network interface addresses:.
- Host Name ------------------------------------------------------------------

  Enter the host name which identifies this system on the network.  The name
  must be unique within your domain; creating a duplicate host name will cause
  problems on the network after you install Solaris.

  A host name must have at least one character; it can contain letters,
  digits, and minus signs (-).

    Host name

System identification is completed.
rebooting system due to change(s) in /etc/default/init

[NOTICE: Zone rebooting]

SunOS Release 5.10 Version Generic_120011-14 64-bit
Copyright 1983-2007 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hostname: testzone

testzone console login: Jan 13 15:57:10 testzone sendmail[25411]: 
My unqualified host name (localhost) unknown; sleeping for retry

At this point, you can verify that the zone is up (see Listing 7).

Listing 7. Verifying that the zone is up
#hostname
testzone

You can further look at zoning information from the global environment (see Listing 8).

Listing 8. Zoning information from the global environment
root[ksh]@ezqspc18# zoneadm list -v
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   1 testzone         running    /zones/testzone                native   shared
root[ksh]@ezqspc18#

The df command (using the -Z flag) shows the file system output, including the zones (see Listing 9).

Listing 9. df command with -Z flag
root[ksh]@ezqspc18# df -kZ
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c0t0d0s0    60502476 11620301 48277151    20%    /
/devices                   0       0       0     0%    /devices
ctfs                       0       0       0     0%    /system/contract
proc                       0       0       0     0%    /proc
mnttab                     0       0       0     0%    /etc/mnttab
swap                 16904432    1376 16903056     1%    /etc/svc/volatile
objfs                      0       0       0     0%    /system/object
fd                         0       0       0     0%    /dev/fd
swap                 16903568     512 16903056     1%    /tmp
swap                 16903104      48 16903056     1%    /var/run
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1
        60502476 11620301 48277151    20%    /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
     60502476 11620301 48277151    20%    /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
/zones/testzone/dev  60502476 11620301 48277151    20%    /zones/testzone/root/dev
/lib                 60502476 11620301 48277151    20%    /zones/testzone/root/lib
/platform            60502476 11620301 48277151    20%    /zones/testzone/root/platform
/sbin                60502476 11620301 48277151    20%    /zones/testzone/root/sbin
/usr                 60502476 11620301 48277151    20%    /zones/testzone/root/usr
proc                       0       0       0     0%    /zones/testzone/root/proc
ctfs                       0       0       0     0%  /zones/testzone/root/system/contract
mnttab                     0       0       0     0%    /zones/testzone/root/etc/mnttab
objfs                      0       0       0     0%    /zones/testzone/root/system/object
swap            16903248     192 16903056     1%    /zones/testzone/root/etc/svc/volatile
/zones/testzone/root/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1
                     60502476 11620301 48277151    20% \
       /zones/testzone/root/platform/sun4u-us3/lib/libc_psr.so.1
/zones/testzone/root/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
                     60502476 11620301 48277151    20% \
       /zones/testzone/root/platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd                         0       0       0     0%    /zones/testzone/root/dev/fd
swap                 16903056       0 16903056     0%    /zones/testzone/root/tmp
swap                 16903056       0 16903056     0%    /zones/testzone/root/var/run
root[ksh]@ezqspc18#

WPARs

WPARs are virtualized OS environments created within a single AIX image (WPARs are supported only on AIX 6.1). Each partition is a secure and isolated environment in which processes execute within their own image. WPARs are usually created within LPARs, though they can be created on full physical servers that are not partitioned. The part of the AIX OS that hosts the partitions is referred to as the global environment. Similar to Solaris, you can have applications running in the global environment (in an LPAR that has WPARs) that are not hosted by WPARs. The global environment owns all the physical resources of the LPAR (CPUs, RAM, network, and disk I/O) and allocates CPU and memory resources to each WPAR. From the global environment, you can see and control all of the processes running within the WPARs, including their file systems. WPARs are completely self-contained (having private execution environments), isolated from any processes running outside the WPARs, and can also have dedicated network addresses. There are two kinds of WPARs: application workload partitions and system workload partitions. A system WPAR is closer to a complete version of AIX and is analogous to either a sparse or whole root zone. Each system WPAR has its own dedicated, writable file systems. When a WPAR is started, an init process is created for the WPAR, which spawns the other processes the WPAR requires (such as inetd and cron). System WPARs by default share read-only /usr and /opt file systems, so in this sense they resemble both sparse and whole root zones.

An application WPAR is a very lightweight version of a virtualized OS environment, geared more toward executing individual processes than entire applications. This type of partition shares the file systems of the global environment and does not own any dedicated storage. While it can run application processes, it does not run OS service daemons such as inetd or cron, and it does not allow remote access into this specialized environment. Application WPARs are temporary objects: they are created when the process is started and destroyed when the last process within that application partition ends. They share everything with the global environment and, in a way, can be considered a wrapper around running processes for isolation.

For AIX administrators, the advantage of WPARs is the flexibility of creating new environments without having to create and manage new AIX partitions. Further, they allow you to consolidate applications and scale out working environments, for example, development and test. Each application can now execute within its own WPAR rather than requiring a separate LPAR. WPARs have no dependency on hardware, and you can even use them on POWER4 systems that do not support the IBM PowerVM Editions (formerly IBM Advanced POWER Virtualization). Theoretically, the number of WPARs that can run in one LPAR is 8192 (the same as Solaris zones); as a practical matter, however, other AIX constraints will keep you well below that limit. Using a new method of software installation called relocatable software packages, you can actually install multiple versions of the same application in one AIX image and then start each version of the application in a separate WPAR.
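As a rough sketch of how a relocatable installation might look, assuming your AIX 6.1 level includes the User Specified Installation Location (USIL) commands and that the fileset itself is packaged as relocatable (the path and fileset names here are hypothetical):

# Register a user-specified installation location (USIL)
mkusil -R /usr/local/app_v2

# Install the relocatable fileset into that location
installp -R /usr/local/app_v2 -acgXd /tmp/images my.app.fileset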

When upgrading or patching your LPAR, it's very important to understand how the change can affect each WPAR defined on that LPAR, so the need for multiple test environments outside of production becomes more critical. Further, the system administrator now has to manage more file systems, because WPARs create additional file systems in the global environment; each WPAR has at least four dedicated file systems (/, /home, /tmp, and /var). From an application standpoint (similar to Solaris zones), most applications should work out of the box; however, applications that require the use of system devices might be problematic. Physical devices are not supported within a WPAR. While there is a way to export devices, applications that require non-exportable devices are restricted to running only in the global environment. WPARs also support an important AIX 6.1 innovation called Live Application Mobility, which enables WPAR-hosted applications executing on one LPAR to be moved to another without any downtime.

Creating and administering WPARs

In this section, you'll create and configure IBM WPARs (system and application). The environment you're working in is an LPAR on a partitioned p570 with one POWER5 CPU running at 1654 MHz.

System WPAR

First, you'll run the mkwpar command, which creates the WPAR and its file systems (see Listing 10).

Listing 10. Running the mkwpar command
lpar13ml16fd_pub[/] > mkwpar -n testsystemWPAR
mkwpar: Creating file systems...
 /
 /home
 /opt
 /proc
 /tmp
 /usr
 /var

Once the file systems are created, mkwpar finishes the job by installing all the required filesets (see Listing 11).

Listing 11. Installing the required filesets
bos.rte.serv_aid            6.1.0.1         ROOT        COMMIT      SUCCESS
vac.C                       9.0.0.2         ROOT        COMMIT      SUCCESS
Workload partition testsystemWPAR created successfully.
mkwpar: 0960-390 To start the workload partition, execute the following as root: 
startwpar [-v] testsystemWPAR
lpar13ml16fd_pub[/] >

This process takes less than three minutes, including the installation of 217 filesets.
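The single command above used defaults for everything. As a hedged illustration (the names, interface, and addresses below are hypothetical), mkwpar also lets you supply the WPAR's host name and network settings at creation time:

mkwpar -n appwpar01 -h appwpar01 \
   -N interface=en0 address=192.168.0.51 netmask=255.255.255.0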

Let's check the status of WPARs (see Listing 12).

Listing 12. Checking the status of the WPARs
lpar13ml16fd_pub[/] > lswpar
Name            State  Type  Hostname        Directory
-------------------------------------------------------------------
MyTestWpar1     A      S     MyTestWpar1     /wpars/MyTestWpar1
MyTestWpar2     A      S     MyTestWpar2     /wpars/MyTestWpar2
testsystemWPAR  D      S     testsystemWPAR  /wpars/testsystemWPAR
lpar13ml16fd_pub[/] >

The WPAR you installed is now in a defined state. To activate it, use the startwpar command (see Listing 13).

Listing 13. Using the startwpar command
lpar13ml16fd_pub[/] > startwpar -v testsystemWPAR
Starting workload partition testsystemWPAR.
Mounting all workload partition file systems.
Mounting /wpars/testsystemWPAR
Mounting /wpars/testsystemWPAR/home
Mounting /wpars/testsystemWPAR/opt
Mounting /wpars/testsystemWPAR/proc
Mounting /wpars/testsystemWPAR/tmp
Mounting /wpars/testsystemWPAR/usr
Mounting /wpars/testsystemWPAR/var
Loading workload partition.
$corral_t = {
              'name' => 'testsystemWPAR',
              'wlm_cpu' => [
                             undef,
                             undef,
                             undef,
                             undef
                           ],
              'path' => '/wpars/testsystemWPAR',
              'hostname' => 'testsystemWPAR',
              'wlm_procVirtMem' => [
                                     -1,
                                     undef
                                   ],
              'wlm_mem' => [
                             undef,
                             undef,
                             undef,
                             undef
                           ],
              'key' => 4,
              'vips' => [],
              'wlm_rset' => undef,
              'opts' => 4,
              'id' => 0
            };
Exporting workload partition devices.
Starting workload partition subsystem cor_testsystemWPAR.
0513-059 The cor_testsystemWPAR Subsystem has been started. Subsystem PID is 237720.
Verifying workload partition startup.
Return Status = SUCCESS.
lpar13ml16fd_pub[/] >

You can also display the WPAR's file systems using the standard df command without any special flags (see Listing 14).

Listing 14. Displaying the output information using the df command
df -k 
/dev/fslv13        131072    128660    2%        5     1% /wpars/testsystemWPAR/home
/opt               262144    119808   55%     3048    11% /wpars/testsystemWPAR/opt
/proc                   -         -    -         -     -  /wpars/testsystemWPAR/proc
/dev/fslv14        131072    128424    3%        9     1% /wpars/testsystemWPAR/tmp
/usr              3538944    158348   96%    91414    69% /wpars/testsystemWPAR/usr
/dev/fslv15        131072    117088   11%      370     2% /wpars/testsystemWPAR/var

It should be noted that some AIX commands have been enhanced to support WPARs. An example is vmstat: run from the global environment with the -@ flag, it reports data on all running WPARs (see Listing 15).

Listing 15. Output data for all running WPARs
lpar13ml16fd_pub[/] > vmstat -@ ALL 1 5

System configuration: lcpu=2 mem=2048MB drives=0 ent=0.25 wpar=3

wpar  kthr    memory              page              faults              cpu
----- ----- ----------- ------------------------ ------------ --------------------------
       r  b   avm   fre  re  pi  po  fr   sr  cy  in   sy  cs us sy id wa    pc    rc
System  0  0 264810 112840   0   0   0   0    0   0 849  104 250  0 32 67  0  0.10  39.4
Global  0  0     -     -     0   0   0   0    0   0   -    - 249  1 99  -  -  0.08  32.8
MyTestWpar1  0  0     -      0   0   0   0    0   0   -    -   1 54 46  -  -  0.00   0.1
MyTestWpar2  0  0     -      0   0   0   0    0   0   -    -   0  0  0  -  -  0.00   0.0
testsystemWPAR  0  0     -   0   0   0   0    0   0   -    -   0  0  0  -  -  0.00   0.0
----------------------------------------------------------------------------------------

ifconfig, hostname, netstat, and ps have been optimized to run in the global environment and within WPARs. There are other commands, such as mpstat and sar, that simply will not work.
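For example, ps is one of the WPAR-aware commands; a minimal sketch from the global environment, assuming the -@ flag is available at your AIX 6.1 level:

# Show all processes along with the WPAR each one belongs to
ps -ef -@

# Show only the processes running inside testsystemWPAR
ps -ef -@ testsystemWPAR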

Application WPARs

As you recall, an application WPAR is a WPAR that allows a process or application to run inside of it, almost like a wrapper. It is temporary and ends when the application ends. To create one, you use the wparexec command. Listing 16 shows the application WPAR being created and run, and Listing 17 shows the output of lswpar while that WPAR is running. In this case, it takes about six seconds to create the WPAR and about another second to run the command.

Listing 16. Creating the application WPAR using wparexec
lpar13ml16fd_pub[/usr/bin] > wparexec /usr/bin/w applWPAR
Starting workload partition w.
Mounting all workload partition file systems.
Loading workload partition.
  01:18PM   up 2 days,   1:02,  2 users,  load average: 0.01, 0.09, 0.16
User     tty          login@       idle      JCPU      PCPU what
Shutting down all workload partition processes.

Listing 17 shows the output of the lswpar command.

Listing 17. Output of lswpar
lpar13ml16fd_pub[/] > lswpar
Name            State  Type  Hostname        Directory
-------------------------------------------------------------------
MyTestWpar1     A      S     MyTestWpar1     /wpars/MyTestWpar1
MyTestWpar2     A      S     MyTestWpar2     /wpars/MyTestWpar2
testsystemWPAR  A      S     testsystemWPAR  /wpars/testsystemWPAR
tstsystemWPAR   D      S     tstsystemWPAR   /wpars/tstsystemWPAR
w               T      A     w               /

A split second after the process completes, the WPAR is gone (see Listing 18).

Listing 18. The WPAR is gone
lpar13ml16fd_pub[/] > lswpar
Name            State  Type  Hostname        Directory
-------------------------------------------------------------------
MyTestWpar1     A      S     MyTestWpar1     /wpars/MyTestWpar1
MyTestWpar2     A      S     MyTestWpar2     /wpars/MyTestWpar2
testsystemWPAR  A      S     testsystemWPAR  /wpars/testsystemWPAR
tstsystemWPAR   D      S     tstsystemWPAR   /wpars/tstsystemWPAR
lpar13ml16fd_pub[/] >

While application WPARs are certainly limited, they do have a purpose, and they can provide system administrators with even more flexibility in testing processes and small applications.
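As another hedged illustration (the name and command below are arbitrary), wparexec can also name the temporary partition explicitly, which makes it easier to spot in lswpar output while it runs:

wparexec -n shorttask /usr/bin/sleep 30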

Summary comparison

This section covers important differences between Sun zones and IBM WPARs. Table 1, at the end of the section, also compares OS virtualization concepts and commands.

Live Application Mobility

Live Application Mobility, a feature of AIX 6.1, allows you to move running WPARs without any user disruption, and it is the single most important functional difference between zones and WPARs. This capability is simply not available in Solaris. To perform a similar action in Solaris, you must halt a zone and then detach and attach it, because a Solaris system administrator cannot move running workloads between servers. Live Application Mobility also allows for multi-system workload balancing and, in doing so, can help reduce data-center costs by letting operators consolidate workloads onto fewer systems.
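For reference, the manual Solaris equivalent works only on a halted zone; a minimal sketch, assuming the zone's zonepath can be made available to the target system and ignoring patch-level checks:

# On the source system
zoneadm -z testzone halt
zoneadm -z testzone detach

# On the target system, after making /zones/testzone available there
zonecfg -z testzone create -a /zones/testzone
zoneadm -z testzone attach
zoneadm -z testzone boot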

Ease of use

With zones, you need several commands to create a zone, and you must also go through further configuration iterations as part of the boot process. With WPARs, installation is as simple as running one command. The configuration is also more straightforward.

Speed

Creating the system WPAR took less than three minutes; creating a zone took nearly half an hour and involved executing many more commands.

Application WPARs

While Solaris has two types of zones, both are equivalent to system WPARs in AIX. Solaris has no equivalent of application WPARs, which can be created in under 10 seconds.

Isolation

Zones provide only memory and processor isolation, while WPARs provide process and paging isolation.

Workload Partition Manager

The Workload Partition Manager is part of the IBM Systems Director family and is the enabler for Live Application Mobility. It is a separately installable program and a much stronger manageability tool than anything Sun offers. Among other features, it includes cross-system management of WPARs and automated, policy-based application mobility.

Double whammy

Because partitioning and virtualization complement one another, IBM clearly has the better partitioning product with its LPAR-based technology, driven today by the IBM PowerVM Editions. The combination of LPARs and WPARs is simply a better team, and more tightly integrated, than either containers and DSDs or containers and LDoms. It's worth noting that LDoms run only on single-chip systems (Sun's UltraSPARC T1 or UltraSPARC T2 processors) running Solaris 10, which simply do not scale like System p servers; no multi-chip SMPs are available. Dynamic System Domains are based on hardware partitioning (the granularity of resources that can float between DSDs is limited to the system board) and do not really offer any virtualization capabilities.

Maturity

It's only fair to say that because container-based technology has been around longer than WPARs, it is a more mature product and should be more stable. Though certification is not strictly necessary, more applications are "certified" by ISVs to run in zones than in WPARs. It should also be noted that Oracle is certified to run on both, though Oracle RAC is not yet certified to run on either implementation.

Table 1 provides a comparison of OS virtualization commands and concepts between Solaris and AIX.

Table 1. Comparison of WPAR and zone commands and concepts
Type (description)              Solaris (zone)                                AIX (WPAR)
Master OS image                 Global zone                                   Global environment
Create command(s)               zonecfg and zoneadm (both required)           mkwpar (system) and wparexec (application)
Types                           Sparse root zone, whole root zone             System WPAR, application WPAR
Viewing information             zoneadm list -v                               lswpar
Logging in                      zlogin                                        clogin
File system information         df -kZ                                        df -k

Summary

This article introduced partitioning and virtualization on both the Sun and IBM platforms. The elements of zones and WPARs were described, created, and compared. Solaris administrators who move to AIX should notice the simplicity and speed of creating WPARs. Recent AIX 6.1 innovations that simply do not exist with Solaris zones, including Live Application Mobility, allow you to move running WPARs to another LPAR without disruption. You should also appreciate the improved manageability of partitions available through the Workload Partition Manager. Furthermore, a little research up front to determine where application WPARs can help you run and test applications will pay off when you start to use this feature. You can also run many WPAR commands through the System Management Interface Tool (SMIT), the IBM menu-driven administration interface that virtually all Solaris administrators who make the transition to AIX come to really enjoy.

Resources

Learn

Get products and technologies

  • IBM trial software: Build your next development project with software for download directly from developerWorks.
