Create IBM SmartCloud Orchestrator-compatible images for Power Systems

Creating images you can deploy through IBM SmartCloud Orchestrator requires some knowledge of the operating system, of the hypervisor on which the images are going to be deployed, and of the corresponding hypervisor manager. Learn how to easily and smoothly prepare a SmartCloud Orchestrator-compatible image for Power Systems™.

Rossella De Gaetano, IBM SmartCloud Orchestrator Field Quality Management Team Leader, IBM

Rossella De Gaetano is the technical leader for the IBM SmartCloud Orchestrator Field Quality team. She is responsible for supporting proofs of concept, managing customer situations, and improving the overall quality of the product.



Andrea Naglieri (andrea.naglieri@it.ibm.com), Advisory Software Engineer, IBM

Andrea Naglieri is the technical leader for the Worldwide C&SI IT Cloud team. He is responsible for internal IBM implementations of SmartCloud Orchestrator on Power Systems and for supporting cloud engagements on Power Systems.



Marco Barboni (marco.barboni@it.ibm.com), IT Specialist, IBM

Marco Barboni is the Worldwide IT C&SI administrator for Power Systems. He is responsible for maintaining and administering the IBM internal cloud on Power infrastructure.



10 June 2014

Introduction

Shift from enterprise virtualization to dynamic cloud

Christopher Rosen, IBM Worldwide Cloud and Smarter Infrastructure Cloud team lead, notes that the "deployment of IBM SmartCloud Orchestrator... represented the end of enterprise virtualization and beginning of dynamic cloud." He said this shift will dramatically increase time savings and efficiency gains in five specific areas. Read more in Rosen's Thoughts on cloud blog entry.

Cloud computing can help dramatically increase the speed of delivery of new business services. IBM SmartCloud Orchestrator is an integrated cloud automation platform designed to help orchestrate the development, deployment, and management of robust enterprise cloud services. With SmartCloud Orchestrator, you can:

  • Accelerate the delivery of cloud services using an orchestration engine.
  • Automate the deployment of whole multi-node application topologies.
  • Monitor the health, performance, and planning capabilities of the cloud environment.
  • Track and analyze the cost of your various cloud resources.

An important task in cloud computing is the creation of the image. This requires knowledge of the underlying operating system, the hypervisor you'll use to deploy the images, and the manager controlling that hypervisor. In this article, we'll take you through this task using IBM Power Systems as our environment.

Frequently used terms

  • HMC: Hardware Management Console
  • ICCT: Image Construction and Composition Tool component of SmartCloud Orchestrator
  • LPAR: Logical partition
  • NIM: Network Installation Manager
  • VIL: Virtual Image Library component of SmartCloud Orchestrator
  • VIOS: Virtual I/O Server

To begin, let's establish some common terminology and summarize the general process for creating a SmartCloud Orchestrator-compatible image. You can easily adapt the process to an existing image you want to deploy through SmartCloud Orchestrator. We'll also explain common issues found while creating an image and how to debug and fix them.

We use "image" or "virtual image" to refer to the virtual appliance, and "instance" or "virtual instance" to refer to an actual logical partition deployed through SmartCloud Orchestrator.

The steps to create a SmartCloud Orchestrator-compatible image are fairly simple:

  1. Create an LPAR in VMControl
  2. Install the base operating system
  3. Install the activation engine
  4. Capture the image in VMControl
  5. Wait for VIL to discover the image
  6. Import the image in ICCT
  7. Extend the image in ICCT
  8. Synchronize the image in ICCT
  9. Capture the image in ICCT

We'll go into more detail about what each of these steps means in the following sections.

Prerequisites

For simplicity, we assume the same user will complete the steps, but because different steps in the procedures could be done by different people with different roles in your organization, we will specify the needed privileges and roles for each step.

To create a SmartCloud Orchestrator-compatible image, you must have access to the HMC with an admin role (for example, hscroot), to the IBM Systems Director server with the smadmin role (for example, root), and to the SmartCloud Orchestrator UI as a user with the admin role. You must also have access to the ICCT UI and the VIL UI. ICCT is single user, so no role is specified. Finally, you need the ISO file for whichever operating system you choose to install: AIX®, Red Hat Enterprise Linux® (RHEL), or SUSE Linux Enterprise Server (SLES).


Create an LPAR in VMControl

There are different ways to create an LPAR. Here, we describe how to create it using VMControl:

  1. Log in to the VMControl UI as the root user.
  2. Expand Inventory > Views and click Virtual Servers and Hosts. A new tab opens showing all the managed servers with their respective virtual servers (LPARs).
  3. Select the server where you plan to create the LPAR, click Actions and select System Configuration > Create Virtual Server, as shown below.
    Figure 1. Creating virtual server
    Screenshot shows steps to create a virtual server
  4. Insert the information related to the new LPAR:
    • Enter the LPAR name.
    • Choose AIX or Linux as planned operating system.
    • Select the number of processors.
    • Specify the disks to assign to the virtual server.

    Note: You can add an existing disk (if you use SAN storage, you should have previously created a disk and linked it to the VIOSes) or create a new one, based on your storage configuration. Then select the VLANs you plan to use. Pay attention here: every NIC you define at LPAR creation time will have to be configured at deployment time through SmartCloud Orchestrator. Proceed through the remaining device selections, then click Finish to start the creation.


Install the base OS

After you've created the LPAR, your next step is to install the OS on it. See the list of the officially supported guest operating systems.

You have your choice of three ways to install the OS on an LPAR:

  • Server's CD-ROM (using a DVD) — Insert the operating system DVD into the server's CD-ROM drive and add the physical CD-ROM resource to the LPAR through the HMC.
  • VIOS's virtual CD-ROM (using ISO image) — Copy the ISO image into VIOS, create a virtual CD-ROM, and assign it to the LPAR. See details.
  • NIM server (using MKSYSB or LPPSOURCE image) — Get more information.

Regardless of the method you use, your first step is to log in to HMC as hscroot. Expand System Management > Servers and select the server where the LPAR has been created, then select the LPAR you need to install the OS on, and start it, as shown below.

Figure 2. Log in to the HMC
Screenshot shows the hardware management console showing path to take
Figure 3. Select the LPAR
Screenshot shows window displaying choice of LPAR

Note: Because you want access to the console in the next steps, make sure you select Open a terminal window or console session to open the console.

After the LPAR powers on, select SMS Menu in the console to enter the boot menu.

If you are using the Server's CD-ROM or the VIOS virtual CD-ROM method, select Install/Boot Device, then List All Devices and choose to boot from SCSI CD-ROM. Select Normal Boot Mode. At this point, the installation from CD-ROM starts.

If you are using the NIM server method, see AIX NIM Client Installation.

If you are installing a Linux image, you will be prompted to supply the boot option. Press Tab and select the only available option: linux. Customize the installation according to your needs. Be sure to note the administrative (root) password you specify at installation because you'll use it for ICCT synchronization.

It is important to set up the network so that the LPAR can be reached using SSH and SFTP to avoid problems during ICCT synchronization and image deployment through SmartCloud Orchestrator. For example, if you install a minimal version of RHEL 6, openssh-clients and wget are not installed by default, so you must add them after the OS installation is complete.

Note: It is good practice to test to see if SSH and SFTP are working before proceeding.
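A sketch like the following can serve as that sanity check. The function-free script, the placeholder address 192.0.2.10, and the commented connection tests are our assumptions; adapt them to your environment.

```shell
#!/bin/sh
# Hypothetical pre-deployment sanity check; 192.0.2.10 is a placeholder address.
LPAR_IP=${1:-192.0.2.10}

# A minimal RHEL 6 install lacks openssh-clients and wget; flag anything missing.
for tool in ssh sftp wget; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING (on RHEL: yum install openssh-clients wget)"
  fi
done

# Uncomment to test a real connection; BatchMode fails fast instead of prompting.
# ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$LPAR_IP" true && echo "SSH OK"
# printf 'ls\n' | sftp -o BatchMode=yes "root@$LPAR_IP" && echo "SFTP OK"
```

Run it on the LPAR itself right after the OS install, before you move on to the activation engine.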

Troubleshooting

  1. If you are installing from a RHEL 6 ISO image, during the first boot you may get a ramdisk error that moves you to the OF prompt:
    boot: linux
    Please wait, loading kernel...
    Elf64 kernel loaded...
    Loading ramdisk...
    Claim failed for initrd memory
    ramdisk load failed !
    ENTER called ok
    0 >

    This error is caused by malloc failing to get enough space for the initrd image. It is typically seen on Power Systems partitions where AIX was once installed or a diagnostic disk has been booted: the amount of free space available to the boot program has been reduced and can no longer hold a RHEL boot image.

    A possible workaround is to type the following at the OF prompt:

    dev nvram
    wipe-nvram
  2. To install an SLES 11 ISO image, during the DVD installation process, modify the file system devices to be identified by Device Name instead of the default Device ID. Without this configuration, the provisioned LPAR fails to start during ICCT synchronization.

Install the activation engine

The activation engine is a set of binaries responsible for the personalization and de-personalization of the image. At deployment time, it assigns the correct network configuration to the instance (hostname, IP address, DNS, and gateway, for example) and the new password to the root user. It also triggers the setup of any specific software customization scripts you may have added to the image through ICCT software bundles. Moreover, the activation engine is responsible for making the image capturable in VMControl.

In this article, we assume the image is getting created from scratch; but, depending on the origin of the virtual image you want to capture, it might or might not already have a suitable activation engine version. If the activation engine version is not current, you need to uninstall it and install the proper one.

The activation engine package is available on the IBM Systems Director system, under the path where IBM Systems Director is installed.

To install it:

  1. Copy the vmc.vsae.tar file to a directory of your choice on the virtual image you created in Install the base operating system.
  2. Extract the content of the TAR file into a temporary directory.
  3. If installing on AIX, ensure JAVA_HOME is set, then run aix-install.sh.
  4. If you are installing on Linux, run linux-install.sh.

Find additional details at Installing the VMControl activation engine on AIX and Linux.
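The install steps above can be sketched as a small helper. The function name, the /tmp default location for vmc.vsae.tar, and the early JAVA_HOME check are our assumptions, not values from the product documentation.

```shell
#!/bin/sh
# Hedged sketch: extract and run the activation engine installer.
# Assumes vmc.vsae.tar was already copied onto the image (path is an example).
install_activation_engine() {
  tarball=${1:-/tmp/vmc.vsae.tar}
  workdir=$(mktemp -d)
  tar -xf "$tarball" -C "$workdir" || { echo "cannot extract $tarball"; return 1; }
  cd "$workdir" || return 1
  case "$(uname -s)" in
    AIX)
      # On AIX the installer requires JAVA_HOME; fail early if it is unset.
      : "${JAVA_HOME:?set JAVA_HOME before installing on AIX}"
      ./aix-install.sh
      ;;
    *)
      # Anything else is treated as Linux, per the steps above.
      ./linux-install.sh
      ;;
  esac
}

# Example: install_activation_engine /tmp/vmc.vsae.tar
```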

Note: Although these additional details suggest that you prepare the image for capture right after installing the activation engine, we suggest doing that only after you have met all the image prerequisites dictated by SmartCloud Orchestrator, because the last task in the pre-capture step powers off the image.

Set SmartCloud Orchestrator prerequisites

To ensure that the image will successfully deploy through SmartCloud Orchestrator, specific configuration changes are needed. Typically, if one or more prerequisites are missing, the instance is powered on, but in the SmartCloud Orchestrator UI it hangs in "Checking to see if virtual system <system's name> is started" status.

Here are the prerequisites per operating system:

RHEL

  1. Ensure that /etc/hosts contains:
    127.0.0.1 localhost.localdomain localhost
  2. Edit /etc/sysconfig/network to set the value:
    #: vi /etc/sysconfig/network
        NETWORKING=yes

    Note: Comment out or delete all the other lines in the file.

  3. Edit /etc/sysconfig/network-scripts/ifcfg-eth0 to set the values:
        #: vi /etc/sysconfig/network-scripts/ifcfg-eth0
    
        	# Intel Corporation 82562GT 10/100 Network Connection 
        	DEVICE=eth0 
        	BOOTPROTO=dhcp 
        	ONBOOT=yes 
        	TYPE=Ethernet
        	PERSISTENT_DHCLIENT=1

    Note: Comment out or delete the line starting with HWADDR, if existing.

  4. Edit /etc/rc.local to add modprobe acpiphp:
        #: vi /etc/rc.local
    
        	#!/bin/sh
        	#
        	# This script will be executed *after* all the other init scripts.
        	# You can put your own initialization material here if you do not
        	# want to do the full Sys V style init.
    
        	modprobe acpiphp
        	touch /var/lock/subsys/local
  5. Edit /etc/udev/rules.d/70-persistent-net.rules and remove any lines associated with one of the NICs (eth0, eth1, etc.).
  6. Remove /lib/udev/write_net_rules.
  7. Be sure that these directories exist in the image:
            /etc/sysconfig/networking/profiles/default/
            /etc/sysconfig/networking/devices/
    Note: Missing steps 5 and 6 is a common problem at deployment. The usual symptom is that the instance is powered on, but in the SmartCloud Orchestrator UI it hangs in "Checking to see if virtual system <system's name> is started." The logs created by the IBM Workload Deployer component show that Workload Deployer is not able to reach the image through SSH, and if you log in to the deployed instance, ifconfig -a shows the network card is named eth1 or eth2, etc., instead of eth0.
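The file edits in the RHEL steps above lend themselves to a script. The sketch below is a helper under our own assumptions (the function name and the ROOT parameter, which lets you dry-run it against a copied tree before running it with ROOT=/ on the image); review steps 2-4 manually afterward, because your ifcfg-eth0 may carry site-specific settings.

```shell
#!/bin/sh
# Hedged sketch of the RHEL prerequisite edits; run with ROOT=/ on the image.
apply_rhel_prereqs() {
  ROOT=${1:-/}
  # Step 1: loopback entry in /etc/hosts
  grep -q '^127.0.0.1' "$ROOT/etc/hosts" 2>/dev/null ||
    echo '127.0.0.1 localhost.localdomain localhost' >> "$ROOT/etc/hosts"
  # Step 2: /etc/sysconfig/network must contain only NETWORKING=yes
  echo 'NETWORKING=yes' > "$ROOT/etc/sysconfig/network"
  # Step 3: drop the HWADDR line so the NIC is matched by name, not by MAC
  ifcfg="$ROOT/etc/sysconfig/network-scripts/ifcfg-eth0"
  [ -f "$ifcfg" ] && sed -i '/^HWADDR/d' "$ifcfg"
  # Step 4: load acpiphp at boot (appending simplifies the rc.local edit above)
  echo 'modprobe acpiphp' >> "$ROOT/etc/rc.local"
  # Steps 5-6: removing the generated rules file is a common shortcut for
  # stripping its per-NIC lines
  rm -f "$ROOT/etc/udev/rules.d/70-persistent-net.rules" \
        "$ROOT/lib/udev/write_net_rules"
  # Step 7: directories SmartCloud Orchestrator expects to exist
  mkdir -p "$ROOT/etc/sysconfig/networking/profiles/default" \
           "$ROOT/etc/sysconfig/networking/devices"
}
```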

SLES

  1. Ensure that devices are mounted by device name or UUID, not by ID. If you use SLES 10, see New default in SLES/SLED 10 SP1: mount "by Device ID." If you use SLES 11:
    • Search /etc/fstab for the presence of /dev/disk/by* references.
    • Save the mapping of /dev/disk/by-* symlinks to their targets in a scratch file (for example, ls -l /dev/disk/by-* > /tmp/scratchpad.txt).
    • Remove all entries that involve non-local storage (such as SAN or iSCSI volumes) from the scratch file.
    • Edit /etc/fstab, replacing the /dev/disk/by-* entries with the device names the symlinks in the scratch file point to.
    • Correct the boot and root lines in /etc/lilo.conf.
    • Run lilo.
    • Run mkinitrd.

    Figure 4 shows how the SUSE Linux partition table ought to appear.

    Figure 4. SUSE Linux partition table
    Screenshot shows the SUSE Linux partition table

    A PReP partition should be the first primary partition on one of the SCSI drives — preferably the first (naming the partition sda1). It must have type PReP boot (type 41) and must be large enough to hold a compressed Linux kernel image (zImage); 5-10 MB is enough.

  2. In the /etc directory, ensure that the HOSTNAME file is not empty.
  3. In the /lib/udev/rules.d directory, remove 75-persistent-net-generator.rules.
  4. Remove /lib/udev/write_net_rules.
  5. In the /etc/init.d directory, create a file called loadmodel with the content: modprobe acpiphp.
  6. In the /etc/init.d directory, run the following commands:
    #: chmod 755 loadmodel
    #: chkconfig --add loadmodel
    #: chkconfig loadmodel on

    After implementing these settings, reboot and verify the settings by running lsmod|grep acpiphp.

    Note: Missing configuration steps 3 and 4 causes a common problem at deployment time. The usual symptom is that the instance is powered on, but in the SmartCloud Orchestrator UI it hangs in "Checking to see if virtual system <system's name> is started." The logs created by the IBM Workload Deployer component show that Workload Deployer is not able to reach the image through SSH, and if you log in to the deployed instance, ifconfig -a shows the network card is named eth1 or eth2, etc., instead of eth0.

    For more information, see Prerequisites for KVM or VMware images, which applies to Power images as well.
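SLES steps 2 through 6 reduce to a similar helper. Again this is a sketch under our assumptions (the function name, the placeholder hostname, and a ROOT parameter for dry runs); the chkconfig registration only makes sense when run on the image itself.

```shell
#!/bin/sh
# Hedged sketch of the SLES prerequisite edits; run with ROOT=/ on the image.
apply_sles_prereqs() {
  ROOT=${1:-/}
  # Step 2: /etc/HOSTNAME must not be empty ("sles-image" is a placeholder)
  [ -s "$ROOT/etc/HOSTNAME" ] || echo 'sles-image' > "$ROOT/etc/HOSTNAME"
  # Steps 3-4: remove the persistent-net generator rule and helper
  rm -f "$ROOT/lib/udev/rules.d/75-persistent-net-generator.rules" \
        "$ROOT/lib/udev/write_net_rules"
  # Steps 5-6: init script that loads the acpiphp module at boot
  printf 'modprobe acpiphp\n' > "$ROOT/etc/init.d/loadmodel"
  chmod 755 "$ROOT/etc/init.d/loadmodel"
  # chkconfig is only meaningful on the image itself, not on a dry-run copy
  if [ "$ROOT" = "/" ] && command -v chkconfig >/dev/null 2>&1; then
    chkconfig --add loadmodel
    chkconfig loadmodel on
  fi
}
```

After running it on the image, reboot and verify with lsmod | grep acpiphp, as described above.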

For AIX images, no special prerequisites are needed.

After you meet all the prerequisites, run these steps to ensure that VMControl will be able to capture the image:

  1. Ensure that /opt/ibm/ae/AP and /opt/ibm/ae/AR are empty.
  2. Copy /opt/ibm/ae/AS/vmc-network-restore/resetenv into /opt/ibm/ae/AP/ovf-env.xml.
  3. Open the VMControl UI as the root user, go to Inventory > System Discovery, and discover the image's operating system. To request access, click the no access link, fill in the credentials, and click Request access. Note: This step is not needed if the server was deployed through VMControl and no smreset command was run.
    Figure 5. Discover the image's operating system
    Screenshot shows the path reflected in the VMControl windows
  4. When you are sure you no longer need to be logged in to the image, run /opt/ibm/ae/AE.sh --reset, which depersonalizes and powers off the image.
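Steps 1, 2, and 4 above reduce to a few commands. The sketch below parameterizes the /opt/ibm/ae path and assumes the AE.sh flag is spelled --reset; run it only when you are finished with the image, because the reset powers it off.

```shell
#!/bin/sh
# Hedged sketch of the pre-capture housekeeping; run as root on the image.
pre_capture() {
  AE_HOME=${1:-/opt/ibm/ae}
  # Step 1: empty the AP and AR directories
  rm -rf "$AE_HOME/AP"/* "$AE_HOME/AR"/*
  # Step 2: seed AP with the network-restore template
  cp "$AE_HOME/AS/vmc-network-restore/resetenv" "$AE_HOME/AP/ovf-env.xml"
  # Step 4: depersonalize and power off -- the very last thing you run
  "$AE_HOME/AE.sh" --reset
}

# Example: pre_capture /opt/ibm/ae
```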

Capture the image in VMControl

To capture the image, go to VMControl UI as root, select the Virtual Appliances tab, and click Capture.

Figure 6. Virtual Appliances tab
Screenshot of window showing the Virtual Appliances tab and the Capture button

We recommend that you give the captured image a meaningful name that recalls its content, purpose, and operating system, so you can identify it later in ICCT. For example, use a naming convention like OperatingSystemNameOslevel_Before_ICCT, where Before_ICCT reminds you that the image has yet to be extended, synchronized, and captured in ICCT. When you are prompted for the LPAR to be captured, specify the LPAR name you installed the operating system on, as shown in Figure 7.

The LPAR state must be stopped or capture will fail.

Figure 7. Check the LPAR's status
Screenshot shows the window where you select the virtual server to capture

Leave the disk and network mapping fields at their defaults, but in the operating system field specify the one that matches the LPAR OS (AIX or Linux). In the version control panel, select Create a new version tree with the new virtual appliance as the root, click Next, then click Run Now. This starts the capture process. You can follow the capture's progress in the Task Management section of IBM Systems Director, or click the Display button that pops up right after you start it. If the capture completes successfully, you can see your image in the VMControl image repository. To check, go to the VMControl Virtual Appliances tab and click Virtual Appliances.

Figure 8. Check to see if capturing successfully completed
Screenshot shows the VMControl image repository

Find your captured image in the list.

Figure 9. Find your captured image in the list
Screenshot shows enlarged section of the repository where image is listed

Wait for VIL to discover the image

After the capture completes successfully, you can see the image in the VIL UI. Because VIL discovers new images in VMControl every two minutes, there might be a short delay between when the capture completes and when the image appears in VIL. VIL has two operational repositories for the same Power system: one for OpenStack and one for VMControl. The latter is identified by the small chain icon.

The OpenStack repository is populated collecting information from Glance. The VMControl one is populated by talking directly to VMControl. At the end, their content should be exactly the same. If there are discrepancies (for example, the image is shown in one repository and not the other) you can force the synchronization from VIL UI by selecting an operational repository, then selecting Actions > Synchronize repositories. Wait for the synchronization task to complete before issuing the synchronization on the second repository.

Figure 10. Find the image in the VIL
Screenshot shows the Images tab in the VIL

Note: Unlike with KVM and VMware systems, VIL does not perform any indexing on Power images, nor does it support checking images in and out.


Import the image in ICCT

Because the activation engine we installed is not configured to support the configuration of software bundles and platform-as-a-service (PaaS) actions, LPARs captured through VMControl are not immediately consumable by SmartCloud Orchestrator: you must use ICCT to properly configure the image.

For simplicity, we assume no software bundles and no personalities are added to the image, but if you are interested in these topics, see Working with IBM Image Construction and Composition Tool for additional details.

The image must be listed in the VIL OpenStack operational repository before you can import the image in ICCT.

Despite its name, the import process does not copy or retrieve the disks of the image. Instead, it creates a pointer to that image and generates the proper image metadata. By metadata, we mean the collection of artifacts that describe the image at pattern design time and at deployment time: how many parts (personalities) are included in the image, which parameters are configurable by the end user at deployment time (the root password, for example), and which parameters are configured by the provisioning engine at deployment time (DNS, for example).

To import the image, log in to ICCT UI. We do not specify any role here since ICCT is single user. Click Build and manage images.

Figure 11. Click build and manage images
Screenshot shows ICCT welcome page with the Build and manage images link highlighted

If you have multiple regions, ensure that you are pointing to the right cloud provider (select it from the upper-right drop-down list) and click the Import from Cloud Provider icon.

Figure 12. Import from cloud provider
Screenshot shows ICCT Images page with the Import from Cloud Provider icon highlighted

Select the newly created image and click Add.

Figure 13. Select and add the newly created image
Screenshot shows Import from Cloud Provider window and the image to add is highlighted

Click Import.


Extend the image in ICCT

The extension process is the first step toward the creation of a new SmartCloud Orchestrator-compatible virtual image. This step does not involve OpenStack or the hypervisor. It is self-contained in ICCT. It creates another image object inside ICCT and copies the metadata from the base image.

To extend an image from ICCT, click the Extend icon in the image window.

Figure 14. Extend the image from ICCT
Screenshot shows the opened image details window in ICCT with the extend icon highlighted

The image's "Out of sync" status in ICCT reflects the fact that no image in the hypervisor corresponds to the extended one. Note also that the extended image has one additional software bundle compared to the base one.

Figure 15. Out-of-sync status
Screenshot shows image details are now extended and out of sync

At the extension stage, ICCT is promising to install that software bundle, but so far nothing has been added to the image. Remember, the image does not yet exist in the hypervisor.

The Enablement Bundle is responsible for installing the correct version of the activation engine, if it is not already installed, and for configuring it.


Synchronize the image in ICCT

Synchronizing the image in ICCT creates a virtual instance starting from the base image, then adds the Enablement Bundle, and any software bundles you added during the previous step, to that instance. While performing this action, ICCT interacts only with OpenStack (through iaasgateway). No other components are directly involved in this process. (Workload Deployer plays no part in synchronization.)

To start the synchronization, click the Synchronize button.

Figure 16. Start the synchronization
Screenshot shows the image details with the Synchronize button highlighted

You are asked to select a flavor, a network, and a cloud provider, and to specify the root password. This password is the one you set when you installed the OS in Install the base operating system.

Figure 17. Set the parameters
Screenshot shows the parameter choices and password fields to fill in before the image is synchronized

Be careful entering information in the Synchronize the image panel: the majority of synchronization failures happen because of incorrectly entered data. For example, using a password that is not the actual root password, or specifying the wrong network, prevents ICCT from connecting (through SSH) to the image and transferring the binaries for the enablement bundle and any other software bundles you added when extending the image. Specifying a too-small flavor also prevents OpenStack from creating the instance.

After synchronization completes, the status of the image is set to "Synchronized."

Figure 18. Image is set to synchronized
Screenshot shows the image details window and status is now shown as synchronized

Note: It is important to configure hostname resolution correctly; otherwise, the SSH connection between ICCT and the deployed image times out and the synchronization fails.

Troubleshooting

In case of synchronization failure, you can:

  • Look at ICCT traces (/drouter/ramdisk2/mnt/raid-volume/raid0/logs/trace/trace.log).
  • Check that the instance was actually created in OpenStack. Run nova list, and ensure that the instance exists (it is named ICCT <a number>) and is ACTIVE.
  • Check that you can SSH to the instance from the ICCT system. If there is nothing meaningful there, consider that what ICCT does is analogous to this command: nova boot --flavor <flavor id> --image <image id> --nic net-id=<network id> (you can check the network ID using nova-manage network list). If this command fails, investigate /var/log/nova/smartcloud.log and/or /root/.SCE31/logs/skc-0.log for the root cause.
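The checks above can be tied together in a small helper. The function name, arguments, and messages are ours, and it assumes the nova CLI is configured on the machine where you run it (for example, the region server).

```shell
#!/bin/sh
# Hedged sketch: first-pass triage of an ICCT synchronization failure.
check_sync() {
  name=$1   # instance name as created by ICCT, e.g. "ICCT 12345"
  ip=$2     # address ICCT is trying to reach over SSH
  # Is the instance ACTIVE in OpenStack?
  if ! nova list 2>/dev/null | grep "$name" | grep -q ACTIVE; then
    echo "instance '$name' is not ACTIVE in nova list"
    return 1
  fi
  # Can we reach it over SSH, as ICCT must?
  if ! ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$ip" true 2>/dev/null; then
    echo "cannot SSH to $ip: recheck the network and root password given at sync time"
    return 1
  fi
  echo "instance is up and reachable; look in the ICCT trace.log for the failure"
}

# Example: check_sync "ICCT 12345" 10.0.0.42
```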

Capture the image in ICCT

This is the final step of the process. It actually generates a new image you can later import in SmartCloud Orchestrator and deploy as part of a pattern.

Capturing an image through ICCT triggers actions to:

  • Depersonalize the instance (i.e., /opt/ibm/ae/AE.sh --reset is run in the instance).
  • Shut down the instance.
  • Run an image capture in VMControl.
  • Create an image into VMControl repository.
  • Create an image in OpenStack.
  • Associate its own metadata to the image in OpenStack.

All of these actions are performed automatically and require no human interaction. As during the synchronization step, ICCT talks only to OpenStack through iaasgateway; no other components are directly involved. OpenStack in turn talks to VMControl (through the SmartCloud driver and SmartCloud Entry). To capture, simply click Capture.

Figure 19. Capture the image
Screenshot shows image details window with the Capture button highlighted

Note: There is a known bug in SmartCloud Orchestrator 2.3 and 2.3 FP1 in which capture is not able to shut down the image. So before triggering the capture step, you must manually log in to the instance and shut it down.

After you complete these steps, the image is ready to be used in SmartCloud Orchestrator:

  • Wait for the newly created image to be discovered by VIL.
  • Register the image in Workload Deployer.
  • Create a virtual system pattern.
  • Add the newly registered image to the pattern.
  • Deploy the pattern.

Conclusion

Now you know how to create a SmartCloud Orchestrator-compatible image for a Power Systems environment. We've provided instructions for completing the process so that images satisfy the OS, hypervisor, and hypervisor-manager requirements for deployment in a Power Systems environment. We've also explained in detail:

  • How to troubleshoot common errors that might occur during the base OS install.
  • How to capture the image in VMControl (and how long it may take the Virtual Image Library to recognize that capture).
  • How to import, extend, synchronize, and capture the image in ICCT.

With this information, you should now be able to easily and smoothly prepare a SmartCloud Orchestrator-compatible image for Power Systems.
