Use the Integrated Virtualization Manager with Linux on POWER
With IBM POWER5 and POWER5+™ processor-based systems and Advanced POWER Virtualization (APV), there are many opportunities to consolidate and simplify the IT environment. Many solutions can take advantage of virtualization, for example:
- Server consolidation
- Rapid deployment
- Application development and testing
- Support for multiple operating system environments
The Advanced POWER Virtualization hardware feature includes software, firmware, and hardware enablement, which provide support for logical partitioning (LPAR), virtual LAN, and virtual I/O. In addition, servers featuring the POWER5 processor can utilize Micro-Partitioning™, which provides the capability to configure up to 10 logical partitions per processor.
A key component of APV is the Virtual I/O Server (VIOS). The Virtual I/O Server provides sharing of physical resources between partitions, including virtual SCSI and virtual networking. This allows more efficient utilization of physical resources through sharing between partitions and helps facilitate server consolidation.
To exploit the capabilities of APV, a system management interface is required. This function is often provided by a Hardware Management Console (HMC), a dedicated workstation that runs integrated system management software. In some installations, however, an HMC may not be required, or may not be a cost-effective solution.
The Integrated Virtualization Manager (IVM) is a browser-based management interface that is used to manage a single IBM System p5™, IBM eServer® p5, or IBM OpenPower™ server. It can be used to create logical partitions, manage the virtual storage and virtual Ethernet, and view service information related to the server. IVM provides the required system management capabilities to small and mid-sized businesses, as well as larger businesses with distributed environments.
IVM is provided as part of VIOS, starting with Version 1.2. The functionality of IVM is supported in system firmware level SF235 or later. When you install VIOS on a supported system that does not have a HMC present or previously installed, IVM is automatically enabled on that server. Therefore, IVM provides a simplified, cost-effective solution for partitioning and virtualization management.
This article illustrates how to set up and use IVM and how to create and work with Linux partitions.
POWER5 system configurations
IBM POWER5™ processor-based systems are manufactured in the factory default, or unmanaged configuration. In this configuration, the system has a single predefined partition. This configuration allows the system to be used as a standalone server with all of the resources allocated to the single partition.
After activating the virtualization feature, an HMC can be attached to the system's service processor to convert the unmanaged system into an HMC-managed system. As an HMC-managed system, the system can exploit virtualization, and the system's resources can be divided across multiple logical partitions.
When the system does not have an HMC available, an IVM-managed system can be created that can still utilize the virtualization and LPAR capabilities of the system. To convert the unmanaged configuration into an IVM-managed system, the VIOS is installed in the first partition on the unmanaged system. This VIOS partition owns all of the physical I/O resources of the system. Client partitions can then be created using the IVM interface. All of the client partition I/O is virtualized through the VIOS.
System administrators can work with LPAR configurations through IVM's Web browser-based or command line interface. These interfaces are used to manage and configure the client partitions and the virtual I/O resources. The browser-based interface provides an intuitive, easy-to-use method of connecting to the VIOS partition of the managed system from any workstation with network access. The command line interface uses an interactive console, or a Telnet session, with the VIOS partition. Because IVM is not connected to the service processor, it uses a Virtual Management Channel (VMC) to communicate with the POWER5 Hypervisor.
Figure 1 depicts the VIOS and IVM components, along with their administration interfaces.
Figure 1. VIOS and IVM components
Partitions created in an IVM managed system can utilize the following virtualization support:
- Shared processor partitions or dedicated processor partitions
- Micro-partitioning support in shared processor partitions, providing shared processor pool usage for capped and uncapped partitions
- Uncapped partitions can utilize shared processor pool idle processing units, based on an uncapped weight
- Virtual Ethernet support allowing logical partitions to share a physical Ethernet adapter
- Virtual networks with bridges between the virtual networks and the physical Ethernet adapters
- Virtual SCSI support allowing logical partitions to share a physical SCSI adapter
- Assignment of physical disks, partial disks, or external logical unit numbers (LUNs) to client partitions
- Virtual optical support allowing logical partitions to share a CD or DVD drive
- Virtual console support, with virtual terminal console access from the VIOS partition
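For uncapped partitions, idle shared-pool capacity is divided among the competing partitions in proportion to their uncapped weights. The arithmetic can be sketched as follows; the weights and pool numbers are hypothetical, and this is an illustration of the rule, not the output of any IVM command:

```shell
#!/bin/sh
# Hypothetical scenario: 1.5 idle processing units in the shared pool,
# contested by two uncapped partitions with weights 128 and 64.
idle=1.5
w1=128   # uncapped weight of partition 1
w2=64    # uncapped weight of partition 2

# Each partition's share of the idle capacity is weight / sum-of-weights.
share1=$(awk -v i="$idle" -v a="$w1" -v b="$w2" 'BEGIN { printf "%.2f", i * a / (a + b) }')
share2=$(awk -v i="$idle" -v a="$w1" -v b="$w2" 'BEGIN { printf "%.2f", i * b / (a + b) }')

echo "partition 1 receives $share1 extra processing units"   # 1.00
echo "partition 2 receives $share2 extra processing units"   # 0.50
```

Doubling a partition's weight doubles its claim on idle cycles relative to the others; a weight of 0 makes the partition effectively capped.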
The Integrated Virtualization Manager provides a subset of the HMC functionality. One should carefully consider business needs when deciding whether to deploy IVM or an HMC. Some of the limitations when using IVM include the following:
- Full dynamic LPAR support is only available for the VIOS partition
- All physical I/O is owned by the VIOS partition
- There is no support for redundant or multiple VIOS partitions
- Client partitions can have a maximum of 2 virtual Ethernet adapters
- Client partitions can have only one virtual SCSI adapter
- No call-home service support is provided
As these limitations suggest, there are instances where an HMC-managed system may be required or more desirable. Examples include systems that require complex logical partitioning, client partitions with dynamic LPAR support or dedicated physical I/O adapters, redundant VIOS support, or complete HMC-based service support.
The Appendix provides a table that compares IVM with HMC.
Before you begin
This article assumes that the reader has a working knowledge of Linux, POWER5 processor-based server hardware, partitioning concepts, and virtualization concepts. To set up and use IVM to create a Linux-based solution requires the following system and software components:
- One of the following IBM POWER5 processor-based servers:
- System p5 505, 520, and 550
- eServer p5 510, 520, and 550
- OpenPower 710 and 720
- System microcode, Version SF235_160 or later.
- The virtualization feature for the System p5, eServer p5, or OpenPower server:
- The Advanced POWER Virtualization feature
- The Advanced OpenPower Virtualization feature
|Server||MTM||Virtualization feature number|
|OpenPower 710||9123-710||1965|
|OpenPower 720||9123-720||1965|
|System p5 505 Express||9115-505||7432|
|eServer p5 510 and 510 Express||9110-510||7432|
|eServer p5 520 and 520 Express||9111-520||7940|
|System p5 520 Express||9131-52A||7940|
|eServer p5 550 and 550 Express||9113-550||7941|
|System p5 550 Express and System p5 550Q Express|| || |
- Virtual I/O Server Version 1.2 or greater installation CD
- A supported Linux distribution:
- SUSE Linux Enterprise Server 9 for POWER (SLES9)
- Red Hat Enterprise Linux AS 3 for POWER (RHEL3), Update 2 or later
Note: RHEL3 is not supported on System p5 Express servers.
- Red Hat Enterprise Linux AS 4 for POWER (RHEL4)
- A PC with a serial terminal application (for example, Linux Minicom or Windows HyperTerminal) or a serial terminal.
- A 9-pin serial crossover connection (null modem) cable
- A network connected Web browser:
- Netscape 7.1, or higher
- Microsoft Internet Explorer 6.0, or higher
- Mozilla 1.7.X
- Firefox 1.0, or higher
Setup and configuration of an IVM managed system
Setting up and configuring an IVM managed system requires a serial terminal application (Minicom on Linux or HyperTerminal on Windows, for example) running on a PC that is plugged into the system's serial port 1 by a 9-pin serial crossover, or null modem, cable. In addition, you will need to connect the flexible service processor's Link HMC1 Ethernet connection to your network. With the following steps, the flexible service processor (FSP) can be initialized and the VIOS installed.
Note: The following setup steps assume that the system is in the manufacturing default configuration, or unmanaged system state. If necessary, the system can be reset to the manufacturing default configuration by using the service processor's System Service Aids menu, Factory Configuration option.
Initialize the service processor
- Set the terminal application's connection to 19200 bits per second, 8 data bits, no parity, 1 stop bit.
- Power on the system and press a key on the terminal to receive the service processor prompt.
- Log in with the User ID admin and the default password admin. When prompted to change users' passwords, change the admin password.
- From Network Services > Network Configuration > Configure interface Eth0, set the static mode, the IP address, and the subnet mask. Other interface settings can optionally be set.
Listing 1. Interface settings
MAC address: 00:02:55:2F:FC:04
Type of IP address: Static

1. Host name (Currently: OP710)
2. Domain name (Currently: company.com)
3. IP address (Currently: 10.10.10.109)
4. Subnet mask (Currently: 255.255.255.0)
5. Default gateway (Currently: 10.10.10.1)
6. IP address of first DNS server
7. IP address of second DNS server
8. IP address of third DNS server
9. Save settings
98. Return to previous menu
99. Log out
S1>
- Select Save settings and confirm the changes to reset the service processor.
- Open a Web browser and connect to the IP address set on the FSP using the HTTPS protocol (for example, https://10.10.10.109). The Advanced Systems Management interface (ASMI) will be displayed.
The Advanced Systems Management interface (ASMI) can now be accessed from either the serial console or the Web interface. The following steps illustrate how to set date and time and enable virtualization of the machine through the Web ASMI. You can perform the same tasks using the FSP menu through the serial console. I recommend the use of the Web interface to access the ASMI to perform FSP-related tasks for the following reasons:
- In order to access the FSP menus through the serial console, the machine must be in power-off state while the FSP is powered on.
- To use the FSP menus, you must be physically near the machine, since a connection to a PC through a serial connection (null modem) cable is required.
- It is more convenient to use the Web interface to access the ASMI, and use the serial console to bring up the System Management Services (SMS) menus to install the VIO server.
- Log in to the ASMI with User ID admin and the changed password, as shown in Figure 2.
Figure 2. ASMI login page
- Select System Configuration > Time Of Day in the navigation area. Enter the date and time based on the UTC time. Click Save Settings.
Note: You can find the current UTC time at http://tycho.usno.navy.mil/cgi-bin/timer.pl.
Figure 3. Setting system configuration -- Time of day
- Select Power/Restart Control > Power On/Off System. In Boot to system server firmware, select Standby, and click Save settings and power on.
Figure 4. Powering on to boot system server firmware to standby mode
- Wait several minutes for the system to power on. If you re-display the Power On/Off System page, the current system server firmware state should be at "standby". Select On Demand Utilities > CoD Activation. Enter the CoD Activation code for the Advanced Power Virtualization feature, and click Continue.
Figure 5. Entering the CoD activation code
Install Virtual I/O Server
Now that the FSP is set up, VIOS can be installed and configured. This will again require the use of the serial terminal application on the PC connected to the system serial port.
- Insert the VIOS Version 1.2 disk into the system CD/DVD drive.
- From the ASMI Web interface, select Power/Restart Control > Power On/Off System. Select Running for Boot to system server firmware, and click Save settings and power on.
- Wait for the prompt on the serial terminal and then press 0 to select the active console.
- Wait for the boot screen to appear on the serial console. Immediately press 1 after the word Keyboard is displayed to go to the SMS (System Management Services) Menu.
Listing 2. Boot screen on serial terminal
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
(the screen fills with rows of "IBM" while the system initializes)
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM

1 = SMS Menu                          5 = Default Boot List
8 = Open Firmware Prompt              6 = Stored Boot List

Memory      Keyboard      Network      SCSI      Speaker
- Enter the admin user account's password when prompted for it.
- After the SMS Main Menu is displayed, select the following SMS options: Select Boot Options > Select Install/Boot Device > List all Devices > IDE CD-ROM (use SCSI CD-ROM, if the drive is a SCSI CD or DVD drive).
- Select Normal Mode Boot, followed by Yes to exit System Management Services.
- The system begins to boot the VIOS disk image. After several minutes (possibly a long time on some systems with many I/O devices), the "Welcome to the Virtual I/O Server" boot image information will be displayed. When asked to define the system console, enter the number that is displayed as directed -- the number 2 in this example.
Listing 3. Define system console
******* Please define the System Console. *******
Type a 2 and press Enter to use this terminal as the system console.
Pour definir ce terminal comme console systeme, appuyez sur 2 puis sur Entree.
Taste 2 und anschliessend die Eingabetaste druecken, um diese Datenstation als Systemkonsole zu verwenden.
Premere il tasto 2 ed Invio per usare questo terminal come console.
Escriba 2 y pulse Intro para utilizar esta terminal como consola del sistema.
Escriviu 1 2 i premeu Intro per utilitzar aquest terminal com a consola del sistema.
Digite um 2 e pressione Enter para utilizar este terminal como console do sistema.
- Enter 1 to choose English during the install.
- When asked to choose the installation preferences, enter 1 to choose Start Install Now with Default Settings.
- On the System Installation Summary, make sure hdisk0 is the only disk selected and enter 1 to Continue with install.
- The installation progress will be displayed.
Listing 4. Installation progress
Installing Base Operating System
Please wait...

Approximate          Elapsed time
% tasks complete     (in minutes)
     57                   18         67% of mksysb data restored.
- When the installation completes, the system will reboot. Log in as user padmin with the default password padmin. When prompted, change the password.
- To view the VIOS license agreement, enter:
license -view
- To accept the license, enter:
license -accept
- To create the VIOS virtual Ethernet interfaces, enter:
mkgencfg -o init
- To find the Ethernet interface(s) that will be used for the server's external connection(s) to the network, enter:
lsdev | grep ent
The two marked with "2-Port 10/100/1000 Base-TX PCI-X Adapter" (ent0 and ent1) are the onboard Ethernet adapters.
Listing 5. Onboard Ethernet adapters
$ lsdev |grep ent
ent0            Available   2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent1            Available   2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent2            Available   Gigabit Ethernet-SX PCI-X Adapter (14106802)
ent3            Available   Gigabit Ethernet-SX PCI-X Adapter (14106802)
ent4            Available   Virtual I/O Ethernet Adapter (l-lan)
ent5            Available   Virtual I/O Ethernet Adapter (l-lan)
ent6            Available   Virtual I/O Ethernet Adapter (l-lan)
ent7            Available   Virtual I/O Ethernet Adapter (l-lan)
ibmvmc0         Available   Virtual Management Channel
$
- Enter the mktcpip command to configure the network interface(s) for the Ethernet adapter(s) that the VIOS will use. In this example, the network cable is plugged into the ent0 adapter and is connected to the 10.10.10.0 network. The interface en0, which is the corresponding VIOS network interface for the physical Ethernet device ent0, will be configured with the IP address 10.10.10.110.
$ mktcpip -hostname IBMOP_VIO -inetaddr 10.10.10.110 -interface en0 \
-netmask 255.255.255.0 -gateway 10.10.10.1 -start
- Optionally, additional customization of the VIOS can be done at this point. The system is now ready for use as an IVM-managed system through the IVM Web interface.
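The physical and virtual adapters in Listing 5 can be separated mechanically by the (l-lan) tag. The following sketch replays an abridged copy of that output from a shell variable so that it runs anywhere; on a real VIOS the input would come from lsdev itself:

```shell
#!/bin/sh
# Abridged adapter inventory, replayed from Listing 5; on a live VIOS this
# text would come from: lsdev | grep ent
lsdev_out='ent0 Available 2-Port 10/100/1000 Base-TX PCI-X Adapter
ent1 Available 2-Port 10/100/1000 Base-TX PCI-X Adapter
ent2 Available Gigabit Ethernet-SX PCI-X Adapter
ent3 Available Gigabit Ethernet-SX PCI-X Adapter
ent4 Available Virtual I/O Ethernet Adapter (l-lan)
ent5 Available Virtual I/O Ethernet Adapter (l-lan)'

# Virtual adapters carry the (l-lan) tag; everything else is physical.
physical=$(printf '%s\n' "$lsdev_out" | awk '!/l-lan/ { print $1 }' | xargs)
virtual=$(printf '%s\n' "$lsdev_out" | awk '/l-lan/ { print $1 }' | xargs)

echo "physical adapters: $physical"   # ent0 ent1 ent2 ent3
echo "virtual adapters:  $virtual"    # ent4 ent5
```

Only the physical adapters are candidates for the external network connection configured with mktcpip; the l-lan devices are the virtual Ethernet adapters created by mkgencfg.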
Using IVM to create a Linux partition
Now that VIOS is installed and configured, IVM can be used to manage the system and its resources. When creating a partition, storage should first be allocated for the partition. This storage will be used as a virtual disk for a Linux partition. A partition can then be created using a wizard. Finally, a Linux distribution can be installed into that partition. The Linux installation can be performed over the network or from CD; this section demonstrates a CD-based install.
Create default storage pool and space for a virtual disk
- Open a Web browser on a system with network access to the VIOS and connect to the IP address that was configured for the VIOS -- http://10.10.10.110. Log in to IVM using the User ID padmin and the password that was created for padmin.
Figure 6. Log in to IVM
- The IVM View/Modify Partitions page will display. Examine the System Overview information and the Partition Details. Notice that the only partition that is currently displayed is the VIOS partition, with a default partition name that is based on the system serial number.
Figure 7. System overview and partition details
- When creating a partition, storage must be allocated for the partition. This storage comes from a shared storage pool, which is managed by VIOS. The default storage pool used is set as rootvg. When VIOS is installed, the rootvg is set to hdisk0, which is the same disk that contains VIOS. It is considered a good practice on systems with several disks to create a new storage pool and define it as the default storage pool. See Related topics for more information on the use of the storage pool and advanced storage configurations.
To create a storage pool with other drives on the system, from the Storage Management menu in the navigation area, click Create Devices, and click on the Advanced Create Devices tab. Then click on Create Storage Pool.
Figure 8. Create storage Pool
- From the Create Storage Pool window, enter the storage pool name LinuxPoolvg. Select hdisk1 and hdisk2. Click OK to create the pool.
Figure 9. Create storage pool
- From the Storage Management menu, click Advanced View/Modify Devices. Then on the Storage Pools tab, select LinuxPoolvg. Click on the Assign as default storage pool task on the bottom of the page.
Figure 10. Assign as default storage pool
- On the Assign as default storage pool page, click OK.
- From the Storage Management menu, click Advanced View/Modify Devices. Then click the Advanced Create Devices tab. Click on Create Logical Volume.
Figure 11. Create logical volume
- On the Create Logical Volume window, enter the Logical Volume Name Linux01LV, select the Storage Pool Name LinuxPoolvg, enter the Logical Volume Size 20, and select GB. Then click OK.
Figure 12. Enter logical volume name and size
- From the Storage Management menu Advanced View/Modify Devices page, the newly created Linux01LV volume can now be seen.
Create the Linux partition
- Create the Linux partition with the Create Partition Wizard. From the Partition Management menu, click Create Partitions. Click Start Wizard.
Figure 13. Create the Linux partition
- The Create Partition Wizard window will open:
- On the Create Partition: Name window, enter Partition ID 2 and Partition name Linux01. Click Next.
- On the Create Partition: Memory window, enter Assigned memory 768 and select MB. Click Next.
- On the Create Partition: Processors window, select Assigned processors 2 and select Shared. Click Next.
- On the Create Partition: Virtual Ethernet window, select Virtual Ethernet 1 for Adapter 1. Click Next.
- On the Create Partition: Storage Type window, you can either create a virtual disk from the default storage pool or assign an existing virtual disk or a physical volume. To utilize the 20GB Linux01LV logical volume that was already created, select Assign existing virtual disks and physical volumes. Click Next.
- On the Create Partition: Storage window, select the available virtual disk Linux01LV. Click Next.
- On the Create Partition: Optical window, to assign the DVD drive to this partition, select cd0. Click Next.
- Review the Create Partition: Summary. Then click Finish.
Figure 14. Review the create partition summary
- The IVM View/Modify Partitions page will now contain the Linux01 partition.
Figure 15. IVM view/modify partitions
- A virtual Ethernet bridge is required to provide access from the partition's virtual Ethernet to the external network. From the Virtual Ethernet Management menu, click View/Modify Virtual Ethernet. Click on the Virtual Ethernet Bridge tab. For the Virtual Ethernet ID 1, select the Physical Adapter ent0. Click Apply.
Figure 16. Virtual Ethernet bridge
Install Linux into the partition
- To install Linux, insert the Linux distribution installation CD in the CD/DVD drive.
- Start a virtual console running in the VIO server. Telnet to 10.10.10.110, log in as padmin, and enter the command mkvt -id 2 (since 2 is the partition ID for the Linux01 partition). The console will then wait until the partition is activated.
Listing 6. Virtual console
# telnet 10.10.10.110
Trying 10.10.10.110...
Connected to 10.10.10.110.
Escape character is '^]'.
telnet (IBMOP_VIO)

IBM Virtual I/O Server

login: padmin
padmin's Password:
Last login: Wed Sep 28 15:34:44 CDT 2005 on /dev/pts/0 from 10.10.10.20
$ mkvt -id 2
- Linux can now be installed by activating the partition and using System Management Services (SMS) from the virtual console. From IVM's Partition Management menu View/Modify Partitions, select the partition Linux01, and then select the Activate task on the bottom of the page.
Figure 17. Activate the Linux partition
- On the Activate Partitions page, click OK.
- Switch back to the telnet session with the virtual console. As the partition boots up, press 0 to select this session as the console.
- Wait for the boot screen to appear on the virtual console. Immediately press 1 after the word Keyboard is displayed, to go to the SMS Menu.
- After the SMS Main Menu is displayed, select the following SMS options: Select Boot Options > Select Install/Boot Device > List all Devices > SCSI CD-ROM
Note: Even if the physical drive is an IDE DVD drive, the virtual optical driver reports it back to SMS and Linux as a SCSI CD/DVD drive.
- Select Normal Mode Boot, followed by Yes to exit System Management Services.
- The system begins to boot the Linux installation CD. Proceed at this point with the standard Linux installation process.
- When the installation is complete, the virtual console can be closed by entering the ~. key sequence.
Note: You can force a virtual console closed from the VIO Server with the command rmvt -id <partition id> (for example, rmvt -id 2).
- Now that Linux is running in the partition, the IVM View/Modify Partitions page will display a status of "Running" and a "Linux ppc64" indicator as a reference code for the Linux01 partition.
Note: The Linux ppc64 indicator is dependent on the Linux distribution installed.
Figure 18. Status: Running
Modifying partition resources
IVM can be used to modify the system resources that are available to the partitions. The memory and the processing resources for the VIOS (partition ID 1) can be modified dynamically. This is accomplished by selecting the Partition Management menu View/Modify Partitions page, and then selecting the Properties task. Figure 19 shows an example of modifying the Processing Units assigned setting for the VIOS partition with the Properties task.
Figure 19. Modify the processing resources for the VIOS partition
Resources for the Linux partitions can also be modified from the Partition Management menu View/Modify Partitions > Properties task. However, changes to the memory, processing, and virtual Ethernet resources can only be made when the Linux partition is not active. In addition, after making the resource change on the Properties task, Linux must be rebooted. Figure 20 shows an example of changing the processing units assigned to a Linux partition.
Figure 20. Modifying the processing resources for a Linux partition
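When changing processing resources, the POWER5 shared-processor rules still apply: a shared-processor partition needs at least 0.1 processing unit per virtual processor and can use at most 1.0 processing unit per virtual processor. The check can be sketched in shell; this is an illustration of the firmware rule, not an IVM command:

```shell
#!/bin/sh
# Validate a proposed (virtual processors, processing units) pair against the
# POWER5 shared-processor rules: 0.1 <= units per virtual processor <= 1.0.
valid_config() {
    vcpus=$1
    units=$2               # in hundredths of a unit, e.g. 150 means 1.50 units
    min=$((vcpus * 10))    # 0.10 units per virtual processor
    max=$((vcpus * 100))   # 1.00 units per virtual processor
    [ "$units" -ge "$min" ] && [ "$units" -le "$max" ]
}

valid_config 2 150 && echo "2 vcpus / 1.50 units: OK"        # within 0.2 .. 2.0
valid_config 2 250 || echo "2 vcpus / 2.50 units: rejected"  # exceeds 2.0
```

IVM rejects a Properties change that violates these bounds, so checking a planned configuration ahead of time avoids a failed apply.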
To add additional storage to a Linux partition, first create another virtual disk from the Storage Management menu Create Devices page. Click Create Virtual Disk, and enter the new virtual disk information. Then click OK. Figure 21 shows an example of creating a new virtual disk.
Figure 21. Create a virtual disk
The View/Modify Partitions > Properties task can be used to assign storage to a Linux partition. Changes can be made to the storage and optical device properties while the partition is active. However, the storage device must not be in use by another partition. Figure 22 shows the assignment of the new virtual disk Linux01disk, created above, to the Linux01 partition.
Figure 22. Assign a virtual disk to a partition's storage
With SLES9 SP2, the SCSI bus can be rescanned with the bash shell script /bin/rescan-scsi-bus.sh to make the virtual disk available. However, with RHEL3 and RHEL4, the partition must be rebooted for the operating system to pick up the new SCSI device.
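After the rescan, the new virtual disk appears as an additional sd device, and the before-and-after comparison can be scripted. This sketch simulates the two inventories; on a live partition they would be captured with ls /sys/block before and after running rescan-scsi-bus.sh:

```shell
#!/bin/sh
# Simulated device inventories; on a live partition capture them with:
#   before=$(ls /sys/block)  ...run /bin/rescan-scsi-bus.sh...  after=$(ls /sys/block)
before="sda sdb"
after="sda sdb sdc"

# The new virtual disk is whatever name appears only in the "after" list.
new_disk=""
for d in $after; do
    case " $before " in
        *" $d "*) ;;             # device existed before the rescan
        *) new_disk="$d" ;;      # newly discovered device
    esac
done
echo "new virtual disk: /dev/$new_disk"   # /dev/sdc
```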
In SLES9 SP2, the lsscsi command can be used to list all SCSI devices and find the new SCSI device. In SLES or RHEL, the fdisk -l command can be used to display information about the virtual disks. The new virtual disk can then be partitioned with the fdisk command. The mkfs command can be used to build a Linux file system, and then the disk partition can be mounted with the mount command.
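The fdisk -l survey also lends itself to scripting. This sketch parses a simulated fragment of classic fdisk -l banner output (the exact format varies between distributions) to summarize disk names and sizes:

```shell
#!/bin/sh
# Simulated `fdisk -l` banner lines; on a live partition the input
# would come from: fdisk -l
fdisk_out='Disk /dev/sda: 21.4 GB, 21474836480 bytes
Disk /dev/sdb: 10.7 GB, 10737418240 bytes'

# Extract the device name and reported size from each "Disk" line.
summary=$(printf '%s\n' "$fdisk_out" \
    | awk '/^Disk/ { gsub(":", "", $2); gsub(",", "", $4); print $2, $3 $4 }' \
    | xargs)
echo "$summary"   # /dev/sda 21.4GB /dev/sdb 10.7GB
```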
An alternative method of adding more storage to a Linux partition is to extend a current virtual disk. This can be done by using the Storage Management menu View/Modify Devices, and then selecting the Extend task. Figure 23 shows an example of using the Extend Virtual Disk task to increase the Linux01LV storage space by 10GB. After the disk is extended, the Linux partition must be shut down. IVM should then be used to detach the virtual disk and re-attach it. When the Linux partition is rebooted, the drive will be larger, due to the extended storage. Disk partitions can then be allocated with fdisk, as described above.
Figure 23. Extend an existing virtual disk
Logical partitioning can be an integral component of a successful server consolidation strategy. The Integrated Virtualization Manager, coupled with the performance of POWER5 systems, gives businesses an easy-to-use, intuitive, and cost-effective solution for creating partitions and working with virtualization resources. IVM provides a system management solution that is especially suited to small and mid-sized businesses, as well as larger businesses with distributed environments. This article discussed some of the capabilities and limitations of IVM, as well as how it can be used to work with Linux partitions.
For more information about using IVM for system management, refer to Related topics below.
Comparison of IVM and HMC
|Feature||Integrated Virtualization Manager||Hardware Management Console|
|Physical footprint||Integrated into the server||A desktop or rack-mounted appliance|
|Installation||Installed with the VIOS (optical or network). Preinstall option available on some systems.||Appliance is preinstalled. Reinstall via optical media or network is supported.|
|Managed Operating Systems supported||AIX and Linux||i5/OS, Linux, and AIX|
|Virtual console support||AIX and Linux virtual console support||i5/OS, Linux, and AIX virtual console support|
|User security||Password authentication with support for either full or read-only authorities||Password authentication with granular control of task-based authorities and object-based authorities|
|Supported Hardware||p5-505, p5-510, p5-520, p5-550, p5-550Q, OpenPower 710, and OpenPower 720||All POWER5 iSeries, pSeries, and OpenPower servers|
|Multiple system support||One IVM per server||One HMC can manage multiple servers|
|Redundancy||One IVM per server||Multiple HMCs can manage the same system for HMC redundancy|
|Maximum number of partitions supported||Firmware maximum||Firmware maximum|
|Uncapped Partition Support||Yes||Yes|
|Dynamic Resource Movement (DLPAR)||No - dynamic support is only available for the management partition||Yes - full support|
|I/O Support for AIX and Linux||Virtual optical, disk, Ethernet, and console.||Virtual and Direct|
|I/O Support for i5/OS||None||Virtual and Direct|
|Maximum # of virtual LANs||4||4096|
|Fix/Update process for Manager||VIOS fixes and updates||HMC e-fixes and release updates|
|Adapter microcode updates||Inventory scout||Inventory scout|
|Firmware updates||VIOS firmware update tools||Service Focal Point with concurrent firmware updates|
|I/O Concurrent Maintenance||VIOS support for slot and device level concurrent maintenance via the diag hot plug support||Guided support in the "Repair and Verify" function on the HMC|
|Scripting and Automation||VIOS CLI and HMC compatible CLI||HMC command line interface|
|Capacity on Demand||None||Full Support|
|Workload Management (WLM) Groups Supported||1||254|
|LPAR Configuration Data Backup and Restore||Yes||Yes|
|Support for multiple profiles per partition||No||Yes|
|Serviceable event management||Service Focal Point Light - Consolidated management of firmware and management partition detected errors.||Service Focal Point support for consolidated management of OS and firmware detected errors|
|Hypervisor and service processor dump support||Dump collection with support to do manual dump downloads||Dump collection and call home support|
|Remote support||No remote support connectivity||Full remote support for the HMC and connectivity for firmware remote support|
- The "Virtual I/O Server: Integrated Virtualization Manager" Redpaper provides an introduction to IVM, describing its architecture and showing how to install and configure a partitioned server using its capabilities.
- "IBM Integrated Virtualization Manager: Lowering the cost of entry into POWER5 virtualization" provides an in depth overview of IVM with some deployment examples.
- IBM eServer Hardware Information Center: Partitioning with the Integrated Virtualization Manager provides information on how to create logical partitions on a single managed system, manage the virtual storage and virtual Ethernet on the managed system, and view service information related to the managed system.
- IBM eServer Hardware Information Center provides information to familiarize you with the hardware and software required for logical partitions and to prepare you to plan for and create logical partitions on your server.
- Additional resources:
- Linux on Power Architecture Developer's Corner: Learn more about Linux on Power. Find technical documentation, education, downloads, product information, and more.
- Linux on POWER ISV Resource Center: PartnerWorld offers a range of benefits for Business Partners who support Linux.
- Learn about Linux at IBM
- Search IBM Redbooks
- Microcode update files CD-ROM image from IBM.
- The Standalone Diagnostics CD-ROM image from IBM, if a diagnostic CD is needed to install the new firmware level.