- The big picture
- Hardware setup and connectivity
- Initial system setup
- Setting up SAN storage
- Setting up the image repository
- Create a virtual server and install the operating system
- Capture the virtual server as a virtual appliance
- Create a network system pool
- Create a server system pool
- Deploy the virtual appliance into the server system pool
- Verify that resources are properly created
- Downloadable resources
- Related topics
Manage an IBM PowerVM environment with IBM Flex System Manager
IBM PureFlex System is an expert integrated system that combines compute nodes, storage, and network resources with prebuilt intelligence and automation expertise and patterns. Although the system comes with many components preloaded and already integrated at the time of manufacture, you must perform a few steps manually to use the advanced virtualization functionality. This article takes you through that process by showing you the key steps for going from a newly configured system to deploying a virtual appliance into a resilient system pool. It also provides explanations and key concepts along the way.
Note: If you've never heard of IBM PureFlex System or the concept of a virtual appliance, read the developerWorks article, "Automate your virtual cloud appliance onto IBM PureFlex System".
Although the PureFlex System has built-in support for heterogeneous virtualization technologies such as IBM PowerVM and Kernel-based Virtual Machine (KVM), the configuration steps for each are not identical. This article addresses only PowerVM technology; a previous article by the same authors addresses KVM technology (see Related topics).
The big picture
One major valuable asset of the PureFlex System is having system management and virtualization management built into the design of the system instead of having to integrate them on your own. Another valuable feature is that the integrated offering includes heterogeneous compute platforms, shared storage, and advanced networking. On the integration side, it goes so far as to pre-cable chassis elements in the rack and preload software on the system.
Many of these capabilities are delivered by IBM Flex System Manager™ (FSM), an appliance running on a compute node inside the chassis. Of course, building all of these end-to-end capabilities into one package exposes an IT team to a significantly wider range of management problems to solve, which can make the system design and user interface (UI) seem complex compared to some pure software offerings. But if you were to build a rack of systems that included heterogeneous platforms, shared storage, and advanced networking, all using best-of-breed components from various vendors, you would face more integration challenges and multiple UIs at every stage of the life cycle—planning, acquisition, installation, configuration, management, maintenance, and support. That setup most likely requires multiple roles and areas of expertise to get the systems up and running. The PureFlex System (and other members of the IBM PureSystems™ family) is designed to minimize the expertise required to successfully implement, integrate, and deploy this sort of environment.
To start setting up and deploying resilient virtual servers with the PureFlex System, we'll look at connectivity and the hardware setup. Notice that the terms partition, logical partition, and virtual server are used interchangeably in this article.
Hardware setup and connectivity
The following list shows the hardware used when writing this article:
- One IBM Flex System Enterprise chassis
- One FSM instance
- One Flex System FC3171 8GB storage area network (SAN) switch
- One Flex System EN2092 1GB Ethernet switch
- One IBM Storwize® V7000 storage server
- Two Flex System p260 compute nodes with Emulex host bus adapters
Figure 1 shows how these components are wired together. The compute nodes are automatically wired to the Ethernet switch and Fibre Channel switch at the chassis mid-plane. The FSM's first Ethernet port is wired at the mid-plane to the Chassis Management Module's built-in layer 2 Ethernet switch, and likewise for the compute nodes' Flexible Service Processors. The V7000 server is cabled to the chassis Ethernet switch and chassis SAN switch at manufacture.
Figure 1. Ethernet and Fibre Channel connection topology
The topology diagram would be more complex if you had redundant Ethernet and Fibre Channel paths and also if there were more than one chassis in the management domain. For the purpose of this exercise, we kept it simple with a single connection path.
The PureFlex System normally ships pre-cabled within the rack. You only need to connect the respective EN2092 uplink ports to the top-of-rack switch.
Initial system setup
After you have the network and power connected, the system initial setup is fairly straightforward, so it's not covered here in detail. Essentially, you connect a monitor, keyboard, and mouse to the FSM using the console breakout cable, then complete the Initial Setup Wizard to provide basic information such as time, network addresses, administrator account, and password. The FSM restarts itself at the end of the initial setup to pick up the new configuration.
After the initial login, setup tasks guide you to update the system components and manage the chassis.
Check the hardware status in the FSM chassis map
The FSM's chassis map is a powerful interface that consolidates the launching point to most of the FSM's capabilities, especially for hardware management and status. Figure 2 shows the chassis map view of our system. Notice that the FSM is in bay 1 (bottom left), and IBM Power® servers are in bays 6 and 7. The Ethernet and Fibre Channel switches, respectively, are shown at the chassis rear. You can ignore the four x86 servers in bays 2, 3, 4, and 5 as well as the extra Fibre Channel and Ethernet switches in bays 1 and 4 of the chassis's rear.
Launch the FSM web console, then perform the following steps:
- Click the Chassis Manager tab to open a list of chassis that are under management.
- Click the chassis you want to manage (TTV_chassis is the name used in this exercise).
Figure 2. Check the hardware status in the chassis map
Under normal circumstances, you shouldn't see any red alert icons on the chassis components. If you have any components in critical status, try to determine what the problem is and resolve it, if possible. A critical status may potentially prevent some steps in this article from finishing successfully.
Also, note that the Storwize V7000 server isn't automatically discovered during the Manage Chassis task. Instead, you must run the manageV7000 command from the FSM command-line interface (CLI):
smcli manageV7000 -i V7000_IP_address -p superuser_password
Note also that the V7000 server won't show up in the chassis map. It appears only in the Resource Explorer table.
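If you script the initial setup, a thin wrapper around this command can catch argument mistakes before they reach the FSM. The sketch below takes only the smcli invocation from the article; the validation helper is illustrative.

```shell
# Sketch: validate the address argument before calling the FSM's
# smcli manageV7000 command shown above. The smcli invocation is from
# the article; the IPv4 check is an illustrative assumption.

valid_ip() {
    # Accept only dotted-quad IPv4 addresses with each octet in 0-255.
    echo "$1" | awk -F. 'NF == 4 {
        for (i = 1; i <= NF; i++)
            if ($i !~ /^[0-9]+$/ || $i > 255) exit 1
        exit 0
    }
    NF != 4 { exit 1 }'
}

manage_v7000() {
    ip="$1"; password="$2"
    if ! valid_ip "$ip"; then
        echo "error: '$ip' is not a valid IPv4 address" >&2
        return 1
    fi
    smcli manageV7000 -i "$ip" -p "$password"
}
```

A wrapper like this is useful mainly when the V7000 address comes from a configuration file rather than being typed by hand.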
Check the status of all resources
When you add a chassis to the FSM's management scope, the FSM requests access to the hardware components in the background using default built-in accounts, which can vary by component. For a given component, there might be more than one protocol to unlock. When access is obtained, the FSM retrieves the health status of the managed components to display in the Problems column.
To check the components' status:
- At the right side of Chassis Manager, click General Actions > Resource Explorer.
- Click All Systems to display all the resources that FSM manages (see Figure 3). Ensure that the Access state and Problems state of all the resources are OK.
Figure 3. Check the access state of resources
If the access state is Partial Access or No Access, you can click it to see which protocols failed to unlock and attempt corrective actions. Likewise, if problems are reported, you can click the Problems entry to see their descriptions.
Setting up SAN storage
A PureFlex System is sold with the Storwize V7000 server as the standard shared storage device. When you order a PureFlex System with Power compute nodes, they are zoned at the factory, in the chassis Fibre Channel switches, to have visibility to the V7000 controllers. In addition, host aliases, a storage pool, and volumes are predefined in the V7000 server for a better out-of-the-box experience.
You should have a predefined Virtual I/O Server (VIOS) in each Power node. In the first Power node of the chassis (called the Foundation node), there should be a 100GB logical unit number (LUN) for the partition with IBM AIX® version 7.1 preloaded, a 400GB LUN for a media library attached to the first VIOS, and a 50GB LUN for the IBM SmartCloud™ Entry partition.
The intention is that these predefined and preloaded partitions allow you to have AIX running within minutes after initial setup, ready to be captured and deployed across the other Power nodes. If you want your AIX instance installed differently than the factory default, you can install it yourself from the preloaded media library, which holds a set of ISOs in place of the set of DVDs that you would typically receive when buying a Power rack server.
Check the zoning configuration on the Fibre Channel switch
To ensure that the zoning configuration has been done properly during manufacturing, you can log in to the Fibre Channel switch web UI via its IP address. Click Zone > Edit Zoning to launch the zoning configuration page. Make sure the two Power node World Wide Port Names have been zoned to the V7000 controllers in the active zone set.
All the LUNs mentioned above have been mapped to the corresponding host aliases. You can verify this configuration by navigating to Pools > Volumes by Pool, clicking the appropriate pool to view the volumes, then right-clicking each volume and clicking View Mapped Hosts to see which hosts the volume is mapped to.
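You can also spot-check the same mappings from the Storwize CLI over SSH. The sketch below assumes that the colon-delimited output of lshostvdiskmap places the volume name in the fifth field; the host name in the usage note is hypothetical.

```shell
# Sketch: list the volume names mapped to a given host on the V7000.
# lshostvdiskmap is the Storwize CLI command; the field layout of its
# -delim output is an assumption made for this illustration.
mapped_volumes() {
    # Skip the header row; field 5 of `lshostvdiskmap -delim :`
    # is assumed to be the vdisk name.
    awk -F: 'NR > 1 { print $5 }'
}

# Usage over SSH to the V7000 (hypothetical host alias name):
#   ssh superuser@V7000 'lshostvdiskmap -delim : Foundation_VIOS' | mapped_volumes
```

Comparing this list with the View Mapped Hosts dialog is a quick way to confirm the GUI and CLI agree before moving on.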
After you have validated or completed the zoning and have the V7000 servers under management by the FSM, you must collect the inventory of the switches and the V7000 server to give the FSM a full picture of your storage infrastructure.
Setting up the image repository
The image repository is one of the necessary pieces of FSM virtualization image management. Virtual appliances are managed through the image repository during capture and deployment. As such, the image repository must run on a host with the same access to the SAN infrastructure as the hypervisor nodes. For a Power environment, the image repository must run in a VIOS. Perform the steps in the sections that follow to set up an image repository.
Configure the VIOS
The sections that follow explain how to configure the VIOS.
Check the SEA and configure the IP address for the VIOS
A Shared Ethernet Adapter (SEA) is a VIOS component that bridges a physical Ethernet adapter and one or more virtual Ethernet adapters to support network connectivity for the virtual servers. The VIOS preloaded at the factory already has an SEA predefined, so you don't need to create one yourself if the default is sufficient. Before configuring the IP address of the VIOS, determine which Ethernet adapter is preconfigured as the SEA.
- Power on the central electronics complex if you haven't done it already.
- Activate the partition that contains the VIOS:
- On the right side of Chassis Manager, click General Actions > Manage Power Systems Resources.
- Right-click the VIOS, and then click Operations > Activate > Profile.
- On the Activate Virtual server page, click Open a terminal window or console session and then click OK.
- Log in to the VIOS, and then run lsdev -type adapter to list all the adapters used in the VIOS. From Listing 1, you can see that ent6 is configured as the SEA.
Listing 1. Viewing the list of adapters
$ lsdev -type adapter
name    status     description
ent0    Available  1GbE 4-port Mezzanine Adapter
ent1    Available  1GbE 4-port Mezzanine Adapter
ent2    Available  1GbE 4-port Mezzanine Adapter
ent3    Available  1GbE 4-port Mezzanine Adapter
ent4    Available  Virtual I/O Ethernet Adapter (l-lan)
ent5    Available  Virtual I/O Ethernet Adapter (l-lan)
ent6    Available  Shared Ethernet Adapter
- Configure the IP address on the SEA (ent6 in this exercise).
If you have a build-to-order system on which you must create an SEA manually, see the IBM entry, "Creating a shared Ethernet adapter for a VIOS virtual server using the IBM Flex System Manager management software," to create an SEA for the VIOS through the FSM (see Related topics for a link).
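Rather than scanning the listing by eye, you can pick the SEA device name out of the lsdev output programmatically. This is a sketch; on the VIOS you would pipe the live command output through the filter.

```shell
# Extract the device name of the Shared Ethernet Adapter from
# `lsdev -type adapter` output, as shown in Listing 1.
find_sea() {
    awk '/Shared Ethernet Adapter/ { print $1 }'
}

# Usage on the VIOS:
#   lsdev -type adapter | find_sea    # prints: ent6 (per Listing 1)
```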
Discover, request access, and collect inventory on the VIOS
To discover, request access, and collect inventory on the VIOS:
- On the FSM home page, click the Plug-ins tab.
- Under the Discovery Manager section, click System Discovery, and then enter the IP address of the VIOS. Click Discover Now.
- Click the access state of the endpoint, and use the padmin account to request access.
- When the access state of the operating system is OK, collect inventory on both the VIOS partition and the operating system endpoint by right-clicking them, and then clicking Inventory > Collect Inventory.
- Check the protocols of the operating system:
- Right-click the operating system of the VIOS, and then click Security > Configure Access.
- Make sure the Common Information Model and Common Agent Services protocols are available.
A common reason for failing the protocol check is inadequate Domain Name System (DNS) settings. If there is no DNS server in your environment, add the required entries to the /etc/hosts file, then run the nslookup command to verify name resolution.
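As a sketch of that workaround, the helper below appends a static entry only if the name isn't already present. The address and host name shown are hypothetical, and HOSTS_FILE defaults to /etc/hosts but can be overridden.

```shell
# Sketch: add a static name mapping when no DNS server is available.
# The FSM address and host name below are hypothetical examples.
add_host_entry() {
    ip="$1"; name="$2"; hosts="${HOSTS_FILE:-/etc/hosts}"
    # Only append if the name is not already present as a whole word.
    if ! grep -qw "$name" "$hosts"; then
        echo "$ip $name" >> "$hosts"
    fi
}

# Usage (hypothetical FSM address), then verify resolution:
#   add_host_entry 9.3.1.10 fsm01
#   nslookup fsm01
```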
Collect inventory on the Fibre Channel switch
On the FSM Plug-ins page, click Resource Explorer to find the Fibre Channel switch resource. Right-click the switch resource, and then click Inventory > Collect Inventory.
Create the VMControl image repository
To create the VMControl image repository:
- On the VMControl Summary page, click the Virtual Appliances tab.
- Click Create image repository, and then complete the Create Image Repository Wizard:
- On the Name page, specify the name of the image repository (Power_ImageRepo is used in this exercise).
- On the Target System page, select the available VIOS.
- On the Storage page, select the storage pool created on the V7000 server, such as DefaultPool01.
- Click Finish.
Figure 4 shows the completed Summary page.
Figure 4. Summary page of the Create Image Repository Wizard
Create a virtual server and install the operating system
This section walks you through the steps of setting up a new virtual server on a Power node.
Creating a virtual server is fairly straightforward using the wizard, but installing an operating system has additional considerations. Standard operating system installation methods, such as Network Installation Management (NIM) or physical optical device, are still applicable in the PureFlex environment. If you don't have a NIM server or a supported optical device, the easiest way to install the operating system is to use the virtual optical media.
To install an operating system on the virtual server:
- Create a new volume on the V7000 server and map it to your VIOS. Use this volume to install and run the operating system.
- Prepare the virtual optical media.
- Complete the wizard to create a virtual server.
- Power on the virtual server.
- Install the operating system on the virtual server using the terminal console.
Create a new volume on the V7000 server and map it to your VIOS
You must prepare a new SAN volume and map it to your target VIOS so that it can be chosen as the boot disk when creating the virtual server using the FSM Wizard. To prepare a new SAN volume:
- In the V7000 server's web console, click Volumes by Host in the navigation area.
- On the Volumes by Host page, select your target VIOS in the Host Filter area.
- Click New Volume at the top of the table on the right.
- Select a preset for the volume.
- Specify the name and size for the volume.
- Click Create and Map to Host, and then click Continue when the Create Volumes task is complete.
- On the Modify Host Mappings page, click Map volumes or Apply to confirm the operation.
- In the VIOS's CLI, run cfgdev to refresh the device list.
Create a new virtual server and install the operating system from an ISO image
Several ISO images are preloaded in the virtual media library that you can use as file-backed virtual optical devices for mounting onto client partitions. You can also import your own ISO images if the preloaded AIX 7.1 images don't fit your needs.
In the Create Virtual Server Wizard, choose the ISO image that you want to use. The wizard assigns the image to a virtual optical device, which in turn is automatically assigned to the virtual server. You can then activate the virtual server and install the operating system as if you were using a normal DVD.
Create a new virtual server
To create a virtual server using the Create Virtual Server Wizard:
- On the Manage Power Systems Resources page, right-click the target Power server, and then click System Configuration > Create Virtual Server.
- Complete the Create Virtual Server Wizard:
- On the Name page, enter the name of the virtual server (VS_AIX71 in this exercise).
- Select AIX/Linux as the environment to associate with the virtual server.
- On the Ethernet page, configure the virtual network adapters for the virtual server. Choose the same port virtual LAN (VLAN) IDs as your VIOS, which are 1 and 99.
- On the Storage page, select Yes, Automatically manage the virtual storage adapters for this Virtual Server. Because you are using a physical volume as the storage device, select Physical Volumes.
- In the next Storage panel, select the disk you mapped in Create a new volume on the V7000 server and map it to your VIOS.
- On the Optical devices page, select the media file used to install the operating system.
Figure 5 shows the Summary page of the Create Virtual Server Wizard.
Figure 5. Summary page of the Create Virtual Server Wizard
Install the operating system using the terminal console
To install the operating system using the terminal console:
- On the Manage Power Systems Resources page, right-click the target virtual server, then click Operations > Activate > Profile to power on the virtual server.
- On the Activate Virtual server page, select Open a terminal window or console session, and then click OK.
- Install the operating system using the terminal console.
- When operating system installation is complete, configure the network of the virtual server.
Discover and collect inventory on the virtual server
To discover and collect inventory on the virtual server:
- Discover the operating system of the target virtual server.
- Request access on the discovered operating systems.
- When the access state of the operating system is OK, collect inventory on both the virtual server and the operating system.
Capture the virtual server as a virtual appliance
The Capture task allows you to capture from a variety of sources to create a virtual appliance. You can then deploy the virtual appliance to create a new virtual server that is complete with a fully functional operating system and software applications.
An activation engine enables a virtual server that is deployed from a virtual appliance to be automatically customized at the end of deployment, when the virtual server is first started. We use the Virtual Solutions Activation Engine (VSAE) for the Linux® and AIX operating systems, which is shipped with the FSM.
Although using an activation engine is optional, it is convenient for deployment of a virtual appliance because it ensures that the deployed virtual server has IP networking configured at first start, so you don't have to log in remotely to configure networking. This is particularly helpful in a fully automated self-service cloud environment in which Dynamic Host Configuration Protocol is not being used.
Install the VSAE
To install the VSAE on the virtual server:
- Log in to the virtual server and copy the VSAE package from the FSM. The vmc.vsae.tar package is located in the /opt/ibm/director/proddata/activation-engine directory on the FSM.
- Extract the contents of the compressed file.
- For AIX, ensure that the JAVA_HOME environment variable is set and points at a Java™ Runtime Environment.
- Run the ./aix-install.sh command in the directory where you extracted the VSAE to install the VSAE on the virtual server.
- At this point, you might want to install any application that you want included in the virtual appliance or stop all running applications to ensure that a clean image is captured.
- Run /opt/ibm/ae/AE.sh -reset to prepare the virtual server to be captured. This command shuts down the virtual server to allow the capture.
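The VSAE steps above can be sketched as a small script. The package path and the AE.sh location come from this article; the JAVA_HOME path and the FSM host alias are assumptions, and a DRY_RUN switch lets you preview the commands before running them for real.

```shell
# Sketch of the VSAE installation sequence on the AIX virtual server.
# Set DRY_RUN=1 to print the commands instead of executing them.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

install_vsae() {
    # Copy the package from the FSM (host alias 'FSM' is hypothetical).
    run scp FSM:/opt/ibm/director/proddata/activation-engine/vmc.vsae.tar .
    run tar -xf vmc.vsae.tar
    # AIX step: point JAVA_HOME at a Java Runtime Environment
    # (/usr/java6/jre is an assumed location; adjust to your system).
    JAVA_HOME=/usr/java6/jre; export JAVA_HOME
    run ./aix-install.sh
}

reset_for_capture() {
    # Prepares the image for capture and shuts the virtual server down.
    run /opt/ibm/ae/AE.sh -reset
}
```

Running with DRY_RUN=1 first is a cheap way to review the sequence before it shuts down the server.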
Capture the virtual server
To capture the virtual server:
- On the Manage Power Systems Resources page, right-click the target virtual server, and then click System Configuration > Capture.
- Complete the Capture Wizard:
- On the Name page, specify the name for the virtual appliance.
- On the Repository page, choose the Power_ImageRepo repository created in the section, Setting up the image repository.
Figure 6 shows the Summary page of the Capture Wizard.
Figure 6. Summary page of the Capture Wizard
Create a network system pool
The Flex System uses network system pools and logical networks to effectively manage your virtual and physical networks. A network system pool logically groups virtual and physical networks as a single network. Logical network profiles define the attributes that describe the logical networks that reside within a network system pool. Servers on the same logical network are guaranteed to be able to connect to each other.
To associate a network system pool with an existing host network on a server, you must first create a logical network profile with the same settings as the host network and associate the logical network profile with the network system pool. After the logical network profile is created and associated with the network system pool, the network system pool can be associated with the host network.
Before creating the network system pool, make sure all devices and resources for network systems have been discovered and inventory collected correctly.
Create the logical network profile
To create a logical network profile:
- On the Plug-ins tab, click Configuration Templates in the Configuration Manager section to display the Configuration Templates page.
- Click Create to launch the Create Template page.
- For Template Type, select System Pool.
- For Configuration to create a template, select Logical Network Configuration.
- Provide a configuration template name in the Configuration template name field.
- Click Continue to display the Logical Network Configuration page.
- Click Add profile, then complete the Logical Network Configuration Profiles Wizard:
- On the Profile Name page, provide a logical network profile name (VLAN2012 in this exercise).
- On the VLAN Configuration page, enter the VLAN ID (2012 in this exercise).
- Click Save Template to save the configuration.
The new template is listed in the Configuration Templates view.
Figure 7 shows the Summary page of the Logical Network Configuration Profiles Wizard.
Figure 7. Summary page of the Logical Network Configuration Profiles Wizard
Create the network system pool and associate it with the logical network profile
To create a network system pool:
- On the VMControl Summary page, click the System Pools tab.
- Choose Network system pools from the View list.
- Click Create, then complete the Create Network System Pool Wizard:
- On the Name page, specify the name of the network system pool (Power_NSP in this exercise).
- On the Initial Switch page, select the switch to be included in the network system pool (Ethernet switch in this exercise).
- On the Logical Network Profiles page, click Add to add all profiles that are allowed for deployment within this network system pool (VLAN2012 in this exercise).
Figure 8 shows the Summary page of the Create Network System Pool Wizard.
Figure 8. Summary page of Create Network System Pool Wizard
Create a server system pool
A server system pool logically groups similar hosts and facilitates the relocation of virtual servers from one host to another in the system pool. Before creating the server system pool, make sure you have met the following prerequisites:
- Power compute nodes and the VIOS have been collected in inventory.
- Sufficient resources exist on each of the hosts that you plan to add to the server system pool.
To create a server system pool:
- On the VMControl Summary page, click the System Pools tab.
- Select Server system pools from the View list.
- Click Create, then complete the Create Server System Pool Wizard:
- On the Name page, specify the name of the server system pool (Power_SSP is used in this exercise).
- On the Pooling Criteria page:
- Select Only add hosts capable of virtual server relocation.
- Select Only add hosts connected by a network system pool and capable of automated network deployment.
- Select the target network system pool in the Network System Pools (View Members) table (Power_NSP in this exercise).
- On the Initial Host page:
- Select All Targets from the Show list.
- Select one of the Power servers, and click Add to add it to the Selected field.
- Select target storage from the Available shared storage list (Storwize V7000 in this exercise).
- On the Additional Hosts page, select the other Power server, and then click Add to add it to the Selected box.
Figure 9 shows the Summary page of the Create Server System Pool Wizard.
Figure 9. Summary page of the Create Server System Pool Wizard
Deploy the virtual appliance into the server system pool
To deploy the virtual appliance:
- On the VMControl Summary page, click the Virtual Appliances tab.
- Select the virtual appliance to be deployed from the Virtual Appliances (View Members) table, and then click Deploy Virtual Appliance.
- Complete the Deploy Virtual Appliance Wizard:
- On the Target page, select Power_SSP, which is the target server system pool.
- On the Disks page, select vSCSI as the storage connection method.
- On the Workload Name page, specify the name of the workload (VS_AIX71_deploy is used in this exercise).
- On the Network Mapping page, select VLAN2012, which is the target logical network profile from the Virtual Networks on Host list.
- On the Product page, enter the network attributes (see Figure 10).
Figure 10. Entering network attributes on the Product page
Verify that resources are properly created
Make sure the virtual disk is created on the V7000 storage, and make sure the VLAN is created on the virtual adapters.
A virtual disk has been created on the V7000 storage
In the V7000 server web interface, view Volumes > Volumes by Host. The new volume is created on Power_server_1 with the UID 600507680280832838000000000002BC (see Figure 11).
Figure 11. Viewing the new virtual disk on the V7000 server
On the Manage Power Systems Resources page, right-click the newly created virtual server, and then click System Configuration > Manage Virtual Server > Storage Devices. In this exercise, hdisk2 is assigned to the virtual server (see Figure 12).
Figure 12. Viewing the physical volumes attached to the virtual server
In the VIOS CLI, run the command lsdev -dev hdisk2 -attr. The unique ID shows that hdisk2 is the volume created on the V7000 server (see Listing 2).
Listing 2. Viewing the new virtual disk attached to the VIOS
$ lsdev -dev hdisk2 -attr
attribute   value
unique_id   33213600507680280832838000000000002BC04214503IBMfcp
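As a quick cross-check, the 32-digit volume UID reported by the V7000 (Figure 11) is embedded in the unique_id string shown in Listing 2, so you can extract and compare it. This is a sketch; it relies only on Storwize volume UIDs beginning with the 600507 vendor prefix.

```shell
# Extract the embedded 32-hex-digit V7000 volume UID from an AIX
# unique_id string. The 600507 prefix is the Storwize vendor prefix;
# the surrounding digits are framing added by AIX.
extract_v7000_uid() {
    grep -o '600507[0-9A-F]\{26\}'
}

# Usage on the VIOS:
#   lsdev -dev hdisk2 -attr unique_id | extract_v7000_uid
```

If the extracted value matches the UID shown in the V7000 web interface, you have confirmed that hdisk2 is the deployed volume.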
A VLAN has been created on virtual adapters and the Ethernet switch
On the Manage Power Systems Resources page, right-click the VIOS, and then click System Configuration > Manage Virtual Server > Network. A new adapter is created with VLAN 2012 added (see Figure 13).
Figure 13. Viewing the VLAN added in the new adapter of the VIOS
If the virtual appliance is deployed to a different Power server in the server system pool, VLAN 2012 is added to the switch port automatically for communication between virtual servers in different physical servers (see Figure 14).
Figure 14. Viewing the new VLAN created on the Ethernet switch
This article showed you what comes with the PureFlex System with Power nodes and how to fully exploit the predefined resources and preloaded images for a smooth out-of-the-box experience. Great effort was put into minimizing the time to value of the PureFlex System initial setup while giving you the freedom to configure the system your own way.
The journey to a fully integrated system—compute, storage, network, application—is just beginning. There are technologies to transform, UIs to simplify, product packaging to unify, and service processes to integrate. Most important of all, the way your clients' IT service teams design their IT architecture, procure technologies, and consume and manage these new systems will have to transform to truly benefit from these new, integrated systems.
- If you are unfamiliar with the PureFlex System, read the developerWorks article, "Automate your virtual cloud appliance onto IBM PureFlex System" (Jarek Miszczyk, April 2012).
- For more information about configuring this environment on KVM, see "Ensure a resilient virtual server" (CheKim Chhuor, Hai Hang Wang, Wen Qian, and Yong Han, developerWorks, September 2012).
- See the IBM entry, Creating a shared Ethernet adapter for a VIOS virtual server using the IBM Flex System Manager management software, for more information about creating SEAs for a VIOS.
- Learn more about PureFlex System and find resources on developerWorks.
- In the developerWorks cloud developer resources, discover and share knowledge and experience of application and services developers building their projects for cloud deployment.
- Evaluate IBM products in the way that suits you best: Download a product trial, try a product online, use a product in a cloud environment, or spend a few hours in the SOA Sandbox learning how to implement service-oriented architecture efficiently.