Cloud computing can help dramatically increase the speed of delivery of new business services. IBM SmartCloud Orchestrator is an integrated cloud automation platform designed to help orchestrate the development, deployment, and management of robust enterprise cloud services. With SmartCloud Orchestrator, you can:
- Accelerate the delivery of cloud services using an orchestration engine.
- Automate the deployment of whole multi-node application topologies.
- Monitor the health, performance, and planning capabilities of the cloud environment.
- Track and analyze the cost of your various cloud resources.
An important task in cloud computing is the creation of the image. This requires knowledge of the underlying operating system, the hypervisor you'll use to deploy the images, and the manager controlling that hypervisor. In this article, we'll take you through this task using IBM Power Systems as our environment.
To begin, let's establish some common terminology and summarize the general process to create a SmartCloud Orchestrator-compatible image. You will be able to easily adapt the process to an already existing image you want to deploy through SmartCloud Orchestrator. Additionally, we'll explain the common issues found while creating an image, how to debug, and fix them.
We use "image" or "virtual image" to identify the image virtual appliance, while we use "instance" or "virtual instance" to identify an actual logical partition deployed through SmartCloud Orchestrator.
The steps to create a SmartCloud Orchestrator-compatible image are fairly simple:
- Create an LPAR in VMControl
- Install the base operating system
- Install the activation engine
- Capture the image in VMControl
- Wait for VIL to discover the image
- Import the image in ICCT
- Extend the image in ICCT
- Synchronize the image in ICCT
- Capture the image in ICCT
We'll go into more detail about what each of these mean in the following sections.
For simplicity, we assume the same user will complete the steps, but because different steps in the procedures could be done by different people with different roles in your organization, we will specify the needed privileges and roles for each step.
To create a SmartCloud Orchestrator-compatible image, you must have access to the HMC using an admin role (for example, hscroot), to the IBM Systems Director server using the smadmin role (for example, root), and to the SmartCloud Orchestrator UI as a user with the admin role. You must also have access to the ICCT UI and the VIL UI. ICCT is single user, so no role is specified. Finally, you must have access to the ISO file that corresponds to whichever operating system you choose to install: AIX®, Red Hat Enterprise Linux® (RHEL), or SUSE Linux Enterprise Server (SLES).
Create an LPAR in VMControl
There are different ways to create an LPAR. Here, we describe how to create it using VMControl:
- Log in to the VMControl UI as the root user.
- Expand Inventory > Views and click Virtual Servers and Hosts. A new tab opens showing all the managed servers with their respective virtual servers (LPARs).
- Select the server where you plan to create the LPAR, click Actions, and select System Configuration > Create Virtual Server, as shown below.
Figure 1. Creating virtual server
- Insert the information related to the new LPAR:
- Enter the LPAR name.
- Choose AIX or Linux as planned operating system.
- Select the number of processors.
- Specify the disks to assign to the virtual server.
Note: You can add an existing disk (if you use SAN storage, you should have previously created a disk and linked it to the VIOSes) or create a new one, based on your storage configuration. Then select the VLANs you plan to use. Pay attention here, because every NIC you define at LPAR creation time will have to be configured at deploy time through SmartCloud Orchestrator. Proceed with the selection of physical devices, then click Finish to start the creation.
Install the base OS
After you've created the LPAR, your next step is to install the OS on it. See the list of the officially supported guest operating systems.
You have your choice of three ways to install the OS on an LPAR:
- Server's CD-ROM (using DVD) — Insert the operating system DVD into the server's CD-ROM drive and add the physical CD-ROM resource to the LPAR through the HMC.
- VIOS's virtual CD-ROM (using ISO image) — Copy the ISO image into VIOS, create a virtual CD-ROM, and assign it to the LPAR. See details.
- NIM server (using MKSYSB or LPPSOURCE image) — Get more information.
Regardless of the method you use, your first step is to log in to HMC as hscroot. Expand System Management > Servers and select the server where the LPAR has been created, then select the LPAR you need to install the OS on, and start it, as shown below.
Figure 2. Log in to the HMC
Figure 3. Select the LPAR
Note: Because you want access to the console in the next steps, make sure you select Open a terminal window or console session to open the console.
After the LPAR powers on, select SMS Menu in the console to enter the boot menu.
If you are using the Server's CD-ROM or the VIOS virtual CD-ROM method, select Install/Boot Device, then List All Devices and choose to boot from SCSI CD-ROM. Select Normal Boot Mode. At this point, the installation from CD-ROM starts.
If you are using the NIM server method, see AIX NIM Client Installation
- If you are installing a Linux image, you will be prompted for the boot option. Press Tab and select the only available option: linux. Customize the installation according to your needs. Be sure to note the administrative (root) password you specify at installation, because you'll use it for ICCT synchronization.
It is important to set up the network so that the LPAR can be reached using SSH and SFTP to avoid problems during ICCT synchronization and image deployment through SmartCloud Orchestrator. For example, if you install a minimal version of RHEL 6, openssh-clients and wget are not installed by default, so you must add them after the OS installation is complete.
Note: It is good practice to test to see if SSH and SFTP are working before proceeding.
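The SSH reachability test above can be automated with a quick port probe. This is a minimal sketch (the check_ssh_port helper is our own name, not part of any IBM tooling), and it only confirms that port 22 answers, not that credentials work:

```shell
#!/bin/bash
# Minimal sketch: probe TCP port 22 on the new LPAR before proceeding.
# check_ssh_port is our own helper name; pass the address you assigned
# to the LPAR during the OS installation.
check_ssh_port() {
    host="$1"
    # Use bash's /dev/tcp so the probe works even from a machine where
    # the openssh-clients package is not installed yet.
    if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/22" 2>/dev/null; then
        echo "SSH port open on ${host}"
    else
        echo "SSH port NOT reachable on ${host}"
        return 1
    fi
}
# Example: check_ssh_port 192.168.1.42
```

A successful probe does not replace an actual `ssh` and `sftp` login test, but it quickly rules out the most common network misconfigurations.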
- If you are installing a RHEL 6 ISO image, during the first boot you may get a ramdisk error that drops you to the OF prompt:
boot: linux
Please wait, loading kernel...
Elf64 kernel loaded...
Loading ramdisk...
Claim failed for initrd memory
ramdisk load failed !
ENTER called ok
0 >
This error is caused by malloc failing to get enough space for the initrd image. This is typically seen on Power Systems partitions where AIX was once installed or a diagnostic disk has been booted. As a consequence, the amount of free space for the boot program has been reduced, and it cannot hold a RHEL boot image.
A possible workaround is to type the following at the OF prompt:
dev nvram wipe-nvram
- To install an SLES 11 ISO image, during the DVD installation process, modify the file system devices so they are identified by Device Name instead of the default Device ID. This configuration prevents the provisioned LPAR from failing to start during ICCT synchronization.
Install the activation engine
The activation engine is a set of binaries responsible for the personalization and depersonalization of the image. At deployment time, it assigns the correct network configuration to the instance (hostname, IP address, DNS, and gateway, for example) and the new password to the root user. It also triggers the setup of any software customization scripts you may have added to the image through ICCT software bundles. Moreover, the activation engine is responsible for making the image capturable in VMControl.
In this article, we assume the image is created from scratch; but depending on the origin of the virtual image you want to capture, it might or might not already have a suitable activation engine version. If the activation engine version is not current, you need to uninstall it and install the proper one.
The activation engine package is available on the IBM System Director system in the path where the IBM System Director has been installed.
To install it:
- Copy the vmc.vsae.tar file to a directory of your choice on the virtual image you created in Install the base OS.
- Extract the content of the TAR file into a temporary directory.
- If installing on AIX, ensure JAVA_HOME is set, then run the installation script.
- If you are installing on Linux, run the installation script.
Find additional details at Installing the VMControl activation engine on AIX and Linux.
Note: Although these additional details suggest that you prepare the image for capture right after installing the activation engine, we suggest doing it only after ensuring that all the image prerequisites dictated by SmartCloud Orchestrator are met, because the last task in the pre-capture step powers off the image.
Set SmartCloud Orchestrator prerequisites
To ensure that the image will successfully deploy through SmartCloud Orchestrator, specific configuration changes are needed. Typically, if one or more prerequisites are missing, the instance is powered on but, in the SmartCloud Orchestrator UI, it hangs in the "Checking to see if virtual system <system's name> is started" status.
Here are the prerequisites per operating system.
For RHEL images:
- Ensure that /etc/hosts contains:
127.0.0.1 localhost.localdomain localhost
- Edit /etc/sysconfig/network to set the value:
#: vi /etc/sysconfig/network
NETWORKING=yes
Note: Comment out or delete all the other lines in the file.
- Edit /etc/sysconfig/network-scripts/ifcfg-eth0 to set the values:
#: vi /etc/sysconfig/network-scripts/ifcfg-eth0
# Intel Corporation 82562GT 10/100 Network Connection
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
TYPE=Ethernet
PERSISTENT_DHCLIENT=1
Note: Comment out or delete the line starting with HWADDR, if it exists.
- Edit /etc/rc.local to add the modprobe acpiphp line:
#: vi /etc/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization material here if you do not
# want to do the full Sys V style init.
modprobe acpiphp
touch /var/lock/subsys/local
- Edit /etc/udev/rules.d/70-persistent-net.rules and remove any lines associated with the NICs.
- Remove /lib/udev/write_net_rules.
- Be sure that these directories exist in the image:
Note: Missing steps 5 and 6 is a common problem at deployment. The usual symptom is that the instance is powered on but, in the SmartCloud Orchestrator UI, it hangs in "Checking to see if virtual system <system's name> is started." If you look at the logs created by the IBM Workload Deployer component, you see that Workload Deployer is not able to reach the image through SSH. If you log in to the deployed instance, ifconfig -a shows that the network card does not have the expected name.
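Before capturing, you can double-check the RHEL edits above with a few greps. The following sketch is our own helper (check_rhel_prereqs is not IBM tooling); point it at / on the live image, or at the mount point of an offline copy:

```shell
#!/bin/bash
# Sketch: verify the RHEL prerequisite edits described above.
# check_rhel_prereqs is our own helper name, not IBM tooling.
check_rhel_prereqs() {
    root="$1"   # "/" on the live image, or a mount point
    rc=0
    grep -q '^127\.0\.0\.1[[:space:]].*localhost' "$root/etc/hosts" \
        || { echo "MISSING: localhost line in /etc/hosts"; rc=1; }
    grep -q '^NETWORKING=yes' "$root/etc/sysconfig/network" \
        || { echo "MISSING: NETWORKING=yes in /etc/sysconfig/network"; rc=1; }
    # The HWADDR line must be gone so the NIC is not pinned to the MAC
    # address of the captured LPAR.
    if grep -q '^HWADDR' "$root/etc/sysconfig/network-scripts/ifcfg-eth0"; then
        echo "FOUND: HWADDR line in ifcfg-eth0 (remove it)"; rc=1
    fi
    [ ! -e "$root/lib/udev/write_net_rules" ] \
        || { echo "FOUND: /lib/udev/write_net_rules (remove it)"; rc=1; }
    return $rc
}
# Example: check_rhel_prereqs /
```

A nonzero exit status means at least one prerequisite is still missing; the messages tell you which one.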
For SLES images:
- Ensure that devices are mounted by device name or UUID, not by ID. If you use SLES 10, see New default in SLES/SLED 10 SP1: mount "by Device ID." To switch to device names:
- Search /etc/fstab for the presence of /dev/disk/by* references.
- Save the mapping of /dev/disk/by-* symlinks to their targets in a scratch file (for example, ls -l /dev/disk/by-* > /tmp/scratchpad.txt).
- Remove all entries that involve non-local storage (such as SAN or iSCSI volumes) from the scratch file.
- Edit /etc/fstab, replacing the /dev/disk/by-* entries with the device names the symlinks in the scratch file point to.
- Ensure that the boot and root lines in /etc/lilo.conf are correct.
- Run lilo.
- Run mkinitrd.
Figure 4 shows how the SUSE Linux partition table should appear.
Figure 4. SUSE Linux partition table
A PReP partition should be the first primary partition on one of the SCSI drives, preferably the first drive (making the partition sda1). It must have the type PReP boot (type 41) and must be large enough to hold a compressed Linux kernel image (zImage); 5-10 MB is enough.
- In the /etc directory, ensure that the HOSTNAME file is not empty.
- In the /lib/udev/rules.d directory, remove 75-persistent-net-generator.rules.
- Remove /lib/udev/write_net_rules.
- In the /etc/init.d directory, create a file called loadmodel.
- In the /etc/init.d directory, run the following commands:
#: chmod 755 loadmodel
#: chkconfig --add loadmodel
#: chkconfig loadmodel on
After implementing these settings, reboot and verify that they took effect.
Note: Missing configuration steps 3 and 4 causes a common problem at deployment time. The usual symptom is that the instance is powered on but, in the SmartCloud Orchestrator UI, it hangs in "Checking to see if virtual system <system's name> is started." If you look at the logs created by the IBM Workload Deployer component, you see that Workload Deployer is not able to reach the image through SSH. If you log in to the deployed instance, ifconfig -a shows that the network card does not have the expected name.
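The by-* symlink rewrite in the fstab steps above can be sketched in shell. This is an illustrative helper (rewrite_fstab_by_links is our own name, not IBM or SUSE tooling); run it only against a copy of the file, and review the result before running lilo and mkinitrd:

```shell
#!/bin/bash
# Sketch of the /dev/disk/by-* -> device-name rewrite from the steps
# above. Operates on a copy of fstab; review the output before using it.
rewrite_fstab_by_links() {
    fstab="$1"      # path to the fstab copy to rewrite
    linkdir="$2"    # e.g. /dev/disk/by-id
    for link in "$linkdir"/*; do
        [ -L "$link" ] || continue
        target=$(readlink -f "$link")
        # Replace every reference to the symlink with its target device.
        sed -i "s|$link|$target|g" "$fstab"
    done
}
# Example:
#   cp /etc/fstab /tmp/fstab.new
#   rewrite_fstab_by_links /tmp/fstab.new /dev/disk/by-id
```

Remember that the article tells you to drop non-local storage (SAN or iSCSI volumes) from the mapping first; this sketch does not filter those out for you.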
For more information, see Prerequisites for KVM or VMware images, which applies to Power images as well.
For AIX images, no special prerequisites are needed.
After you meet all the prerequisites, run these steps to ensure that VMControl will be able to capture the image:
- Ensure that /opt/ibm/ae/AP and /opt/ibm/ae/AR are empty.
- Copy /opt/ibm/ae/AS/vmc-network-restore/resetenv into /opt/ibm/ae/AP/ovf-env.xml.
- Open the VMControl UI as the root user, go to Inventory > System Discovery, and discover the image's operating system. To request access, click the No access link, fill in the credentials, and click Request Access.
Note: This step is not needed if the server was deployed through VMControl and no smreset command was run.
Figure 5. Discover the image's operating system
- When you are sure you no longer need to be logged in to the image, run /opt/ibm/ae/AE.sh --reset, which depersonalizes and powers off the image.
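The file-system preparation in steps 1 and 2 above can be sketched as follows (prepare_for_capture is our own helper name; the /opt/ibm/ae paths are the ones given in the steps, and passing another root lets you rehearse safely):

```shell
#!/bin/bash
# Sketch of steps 1 and 2 of the pre-capture preparation above.
# prepare_for_capture is our own helper name, not IBM tooling.
prepare_for_capture() {
    aedir="${1:-/opt/ibm/ae}"
    # Step 1: ensure the AP and AR directories exist and are empty.
    rm -rf "$aedir/AP" "$aedir/AR"
    mkdir -p "$aedir/AP" "$aedir/AR"
    # Step 2: seed AP with the reset environment descriptor.
    cp "$aedir/AS/vmc-network-restore/resetenv" "$aedir/AP/ovf-env.xml"
}
# Example: prepare_for_capture /opt/ibm/ae
```

The VMControl discovery and the AE.sh --reset steps remain manual; only the file shuffling is scripted here.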
Capture the image in VMControl
To capture the image, go to VMControl UI as root, select the Virtual Appliances tab, and click Capture.
Figure 6. Virtual Appliances tab
We recommend that you give the captured image a meaningful name that recalls its content, purpose, and operating system, so you can identify it later in ICCT. For example, use a naming convention like OperatingSystemNameOslevel_Before_ICCT, where Before_ICCT reminds you that the image has yet to be extended, synchronized, and captured in ICCT. When you are prompted for the LPAR to be captured, specify the LPAR name you installed the operating system on, as shown in Figure 7.
The LPAR state must be stopped or capture will fail.
Figure 7. Check the LPAR's status
Leave the disk and network mapping fields at their defaults, but in the operating system field specify the one that matches the LPAR's OS (AIX or Linux). In the version control panel, select Create a new version tree with the new virtual appliance as the root, click Next, then click Run Now. This starts the capture process. You can monitor the capture's progress in the Task Management section of IBM Systems Director, or click the Display button that pops up right after. If the capture completes successfully, you can see your image in the VMControl image repository. To check, go to the VMControl Virtual Appliances tab and click Virtual Appliances.
Figure 8. Check to see if capturing successfully completed
Find your captured image in the list.
Figure 9. Find your captured image in the list
Wait for VIL to discover the image
After the capture completes successfully, you can see the image in the VIL UI. Because VIL discovers new images in VMControl every two minutes, there might be a short delay between when the capture completes and when the image appears in VIL. VIL has two operational repositories for the same Power Systems server: one for OpenStack and one for VMControl. The latter is identified by the small chain icon.
The OpenStack repository is populated by collecting information from Glance; the VMControl one is populated by talking directly to VMControl. In the end, their contents should be exactly the same. If there are discrepancies (for example, the image is shown in one repository but not the other), you can force a synchronization from the VIL UI by selecting an operational repository and then selecting Actions > Synchronize repositories. Wait for the synchronization task to complete before issuing the synchronization on the second repository.
Figure 10. Find the image in the VIL
Note: Unlike with KVM and VMware systems, VIL does not perform any indexing on Power images, nor does it support checking images in and out.
Import the image in ICCT
Because the activation engine we installed is not configured to support software bundles and platform-as-a-service (PaaS) actions, LPARs captured through VMControl are not immediately consumable by SmartCloud Orchestrator; you must use ICCT to properly configure the image.
For simplicity, we assume no software bundles and no personalities are added to the image, but if you are interested in these topics, see Working with IBM Image Construction and Composition Tool for additional details.
The image must be listed in the VIL OpenStack operational repository before you can import the image in ICCT.
Despite its name, the import process does not copy or retrieve the disks of the image. Instead, it creates a pointer to that image and generates the proper image metadata. By metadata, we mean the collection of artifacts that describe that image at pattern design time and at image deploy time: how many parts (personalities) are included in the image, which parameters are configurable by the end user at deployment time (the root password, for example), and which parameters are configured by the provisioning engine at deployment time (DNS, for example).
To import the image, log in to ICCT UI. We do not specify any role here since ICCT is single user. Click Build and manage images.
Figure 11. Click build and manage images
If you have multiple regions, ensure that you are pointing to the right cloud provider (select it from the upper-right drop-down list) and click the Import from Cloud Provider icon.
Figure 12. Import from cloud provider
Select the newly created image and click Add.
Figure 13. Select and add the newly created image
Extend the image in ICCT
The extension process is the first step toward the creation of a new SmartCloud Orchestrator-compatible virtual image. This step does not involve OpenStack or the hypervisor. It is self-contained in ICCT. It creates another image object inside ICCT and copies the metadata from the base image.
To extend an image from ICCT, click the Extend icon from the image window into the GUI.
Figure 14. Extend the image from ICCT
The "Out of sync" status of the image in ICCT reflects the fact that no image in the hypervisor corresponds to the extended one. Also, the extended image has an additional software bundle compared to the base one.
Figure 15. Out-of-sync status
At the extension stage, ICCT is promising to install that software bundle, but so far nothing has been added to the image; remember that the image does not yet exist in the hypervisor.
The Enablement Bundle is responsible for installing the correct version of the activation engine, if it is not already installed, and for configuring it.
Synchronize the image in ICCT
Synchronizing the image in ICCT creates a virtual instance starting from the base image, and adds the Enablement Bundle and any software bundles you added during the previous step to that instance. While performing this action, ICCT interacts only with OpenStack (through the iaasgateway). No other components are directly involved in this process; Workload Deployer plays no part in the synchronization.
To start the synchronization, click the Synchronize button.
Figure 16. Start the synchronization
You are asked to select a flavor, a network, and a cloud provider, and to specify the root password. This password is the one set in the image when you installed the OS in Install the base OS.
Figure 17. Set the parameters
Be careful entering information in the Synchronize the image panel; most synchronization failures are caused by incorrectly entered data. For example, using a password that is not the actual root password, or specifying the wrong network, prevents ICCT from connecting (through SSH) to the image and copying over the binaries for the Enablement Bundle and the other software bundles you added when extending the image. Specifying a flavor that is too small prevents OpenStack from creating the instance.
After synchronization completes, the status of the image is set to "Synchronized."
Figure 18. Image is set to synchronized
Note: It is important to correctly configure hostname resolution; otherwise, the SSH connection between ICCT and the deployed image times out and the synchronization fails.
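Before synchronizing, you can verify name resolution on both the ICCT system and the deployed instance with a one-line check (check_resolves is our own helper name; getent consults the same sources, /etc/hosts and DNS, that SSH uses):

```shell
#!/bin/bash
# Sketch: quick hostname-resolution check to run on both the ICCT
# system and the deployed instance before synchronizing.
# check_resolves is our own helper name.
check_resolves() {
    if getent hosts "$1" > /dev/null; then
        echo "resolves: $1"
    else
        echo "DOES NOT resolve: $1"
        return 1
    fi
}
# Example: check_resolves myimage.example.com
```

Run it against the names each side uses for the other; a failure here is exactly the timeout scenario described in the note above.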
In case of synchronization failure, you can:
- Look at ICCT traces (/drouter/ramdisk2/mnt/raid-volume/raid0/logs/trace/trace.log).
- Check that the instance is actually created in OpenStack. Run nova list, and ensure that the image exists (it is named ICCT <a number>) and is ACTIVE.
- Check that you can SSH to the instance from the ICCT system.
If there is nothing meaningful there, consider that what ICCT does is analogous to this command:
nova boot --flavor <flavor id> --image <image id> --nic net-id=<net id> (you can check the network ID using nova-manage network list). If this command fails, investigate /var/log/nova/smartcloud.log and/or /root/.SCE31/logs/skc-0.log for the root cause.
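To speed up the log triage described above, a sketch like the following scans the files mentioned in this section for common failure strings (scan_sync_logs and the grep patterns are our own guesses, not IBM-documented diagnostics; treat a quiet result as "nothing obvious", not "no error"):

```shell
#!/bin/bash
# Sketch: scan the logs named in this section for likely failure hints.
# scan_sync_logs and the error patterns are our own, not IBM tooling.
scan_sync_logs() {
    for log in "$@"; do
        [ -r "$log" ] || { echo "skip (unreadable): $log"; continue; }
        # Show the last few lines that look like errors or timeouts.
        grep -inE 'error|exception|timed? ?out|refused' "$log" | tail -n 5
    done
}
# Typical call on the SmartCloud Orchestrator node:
# scan_sync_logs /drouter/ramdisk2/mnt/raid-volume/raid0/logs/trace/trace.log \
#     /var/log/nova/smartcloud.log /root/.SCE31/logs/skc-0.log
```

The paths in the example call are the ones listed in this section; adjust them if your installation differs.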
Capture the image in ICCT
This is the final step of the process. It actually generates a new image you can later import in SmartCloud Orchestrator and deploy as part of a pattern.
Capturing an image through ICCT triggers actions to:
- Depersonalize the instance (that is, /opt/ibm/ae/AE.sh --reset is run in the instance).
- Shut down the instance.
- Run an image capture in VMControl.
- Create an image in the VMControl repository.
- Create an image in OpenStack.
- Associate its own metadata to the image in OpenStack.
All of these actions are done automatically and require no human interaction. As in the synchronization step, ICCT talks only to OpenStack through the iaasgateway; no other components are directly involved. OpenStack, in turn, talks to VMControl (through the SmartCloud driver and SmartCloud Entry). To capture, simply click Capture.
Figure 19. Capture the image
Note: There is a known bug in SmartCloud Orchestrator 2.3 and 2.3 FP1 in which the capture is not able to shut down the instance. So, before triggering the capture step, you must manually log in to the instance and shut it down.
After you complete these steps, the image is ready to be used in SmartCloud Orchestrator:
- Wait for the newly created image to be discovered by VIL.
- Register the image in Workload Deployer.
- Create a virtual system pattern.
- Add the newly registered image to the pattern.
- Deploy the pattern.
Now you know how to create a SmartCloud Orchestrator-compatible image for a Power Systems environment. We've provided instructions for completing the process to satisfy OS, hypervisor, and hypervisor manager requirements for the various images you may deploy in a Power Systems environment. We've also explained in detail:
- How to troubleshoot common errors that might occur during the base OS install.
- How to capture the image in VMControl (and how long it may take the Virtual Image Library to recognize that capture).
- How to import, extend, synchronize, and capture the image in ICCT.
With this information, you should now be able to easily and smoothly prepare a SmartCloud Orchestrator-compatible image for Power Systems.
- Learn more in the IBM SmartCloud Orchestrator Information Center.
- Explore the IBM SmartCloud Orchestrator.
- Application development and deployment on IBM Power Systems contains detailed information.
- Read "The choice is yours: The Power to choose: Linux, AIX, IBM i."
- Follow developerWorks on Twitter.
- Check out some developerWorks demos.
- Participate in the developerWorks community.