Configure multiple networks in CloudBurst 2.1

Implementing VLAN support

Learn how to define multiple network configurations with IBM® Tivoli® Service Automation Manager 7.2.1.1 in a CloudBurst 2.1 on System x environment, and how to segregate each to prevent packets within a network from reaching other networks. The multiple network configuration is based on VLANs, using VMware as the hypervisor and SUSE Linux as the guest operating system.

Antonio Di Cocco (antonio_dicocco@it.ibm.com), Technical Lead, IBM

Antonio Di Cocco is the technical leader for IBM CloudBurst 2.1. He is responsible for the integrity of the design and code of the Tivoli software stack included in both IBM CloudBurst and IBM Service Delivery Manager.



Rossella De Gaetano (rossella.degaetano@it.ibm.com), Test Lead, IBM

Rossella De Gaetano is the test leader for IBM CloudBurst 2.1 and IBM Service Delivery Manager 7.2.1. She is responsible for the quality assurance of the Tivoli software stack included in both IBM CloudBurst and IBM Service Delivery Manager. Previously she was the development leader for Tivoli Asset Management for distributed and the License Metric Tool.



11 May 2011


Introduction

Key concepts

In this article, we define multiple network configurations with Tivoli Service Automation Manager 7.2.1.1 in a CloudBurst 2.1 on System x environment and segregate each of them so that packets from one network cannot reach the others. Before beginning, let's cover some concepts used in this article.

  • With Tivoli Service Automation Manager 7.2.1, you can deploy virtual appliances using a customizable network configuration, which we define as a set of sub-network definitions, each specified as a starting IP address and netmask.
  • Each configuration must contain at least one network to be used as a "management network," which is the network used by Tivoli Service Automation Manager to work with the provisioned virtual appliances.
  • A network configuration has a one-to-one association with a virtual pool (VRPool). A virtual pool represents all or part of the resources managed by a hypervisor. So a hypervisor can be modeled as one or more virtual pools, but a virtual pool is associated with only one hypervisor.

Figure 1 describes the relationships among virtual pools, network configurations, VLANs, and hypervisor resources using Tivoli Service Automation Manager.

Figure 1. Tivoli Service Automation Manager objects relationship

Multiple network configurations

Based on the user-defined configuration, it is possible to create projects including servers from any defined VRPool. So if you define one pool for development, one for test, and one for production, you can have a project that includes virtual servers from the different environments used to develop a feature, test it, and put it into production.

In this way, you can use a different network configuration for each pool. To have multiple environments using different networks, you must have at least one management network for each of them, and the server on which Tivoli Service Automation Manager is installed must be able to reach all defined management networks.

This article explains how to define multiple network configurations with Tivoli Service Automation Manager 7.2.1.1 in a CloudBurst 2.1 on System x environment, and demonstrates how to segregate each to prevent packets within a network from reaching other networks.

Our solution is targeted for IBM CloudBurst 2.1 on System x, and it relies on VMware 4.1 as the hypervisor with the Tivoli software stack installed on SUSE SLES 10 Service Pack 3. However, this solution can easily be adapted to an environment that relies on IBM Service Delivery Manager 7.2.1. With a working knowledge of the PowerVM architecture, you can apply the concepts shown here to CloudBurst 2.1 on PowerVM.

VLAN support provided by the guest operating system

Before digging into the technical details, consider the approaches you can take to implement a multiple-network configuration:

  • Create a VLAN for each management network. However, this may not scale, because the hypervisor can limit the number of vNICs that can be defined for each virtual system. For example, VMware 4.1 limits a virtual system to 10 vNICs.
  • Use a single vNIC without a VLAN ID. While this bypasses the previous limitation, it introduces a security breach in case you want to build segregated networks. Different VLAN IDs are required to achieve network isolation.
  • Use VLAN support provided by the guest operating system.

The approach used in CloudBurst 2.1 for System x is to leverage the VLAN support provided by the guest operating system (in this case, SUSE SLES 10 Service Pack 3). The following sections describe the process for implementing that approach.


Configure CloudBurst 2.1 software stack to support multiple networks

The available addresses for the management networks are in the range 10.100.*.* up to 10.129.*.*. The 10.100.*.* network is defined by default, so in this article we show how to add 10.101.*.*.

Analogously, from the customer network point of view, 30 networks can be added (for the sake of simplicity we name them 10.130.*.* up to 10.159.*.*). Of course, you can use whatever set of IP addresses suits your environment.

Because different network configurations are associated with different virtual pools, and each virtual pool models a part of the resources of a hypervisor, in a CloudBurst environment you need to create a cluster in VMware Virtual Center for each virtual pool.

Step 1: Create a cluster

In CloudBurst, by default, all blades devoted to provisioning are grouped in a cluster called CloudBurst-cluster. In this step, create a second cluster called Second-cluster. For the sake of simplicity, leave half of the blades in CloudBurst-cluster and move the others to Second-cluster.

  1. Log in to the VMware Virtual Center and go to the Home > Inventory > Hosts and Clusters view. Right-click the CloudDC datacenter and choose New Cluster. In the Cluster Features window, type Second-cluster and select both VMware HA and VMware DRS. In the VMware DRS window and in the Power Management window, accept the defaults.
  2. In the VMware HA window, check Enable Host Monitoring and select Enable: Do not power on VMs that violate availability constraints. In the Virtual Machine Options window, select Leave powered on as the Host Isolation Response. In the VM Monitoring window, leave the defaults. In the VMware EVC window, select Enable EVC for Intel Host. In the Virtual Machine Swapfile Location window, accept the defaults.
  3. In the Ready to Complete window, click Finish. You should see Second-cluster under CloudDC. You can now drag half of the blades from CloudBurst-cluster into Second-cluster.

Step 2: Create an additional VLAN using VLAN tagging

To overcome the VMware limitation of at most 10 vNICs per virtual image, leverage the VLAN support provided by SUSE SLES 10 Service Pack 3.

The technology exploited is VLAN tagging: the guest operating system adds a VLAN ID tag to each TCP/IP packet. For this to work, the virtual image must be able to connect to a port group on the VMware ESXi servers with a VLAN ID different from zero. VMware defines a special port group ID, 4095. When this ID is used (and the switches are configured properly), a TCP/IP packet is allowed on the port group only if it carries a VLAN ID tag, regardless of its value. If you are dealing with CloudBurst 2.1 on System x, this configuration is already done for you.
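
To make the mechanism concrete, here is a minimal sketch of guest-level VLAN tagging on SLES, using the standard 8021q kernel module and the vconfig and ifconfig utilities. It only illustrates the technique; it is not the content of the addVLAN.sh script described later, which wraps steps of this kind for you.

# Load the 802.1Q VLAN module (normally already available on SLES)
modprobe 8021q
# Create a tagged interface for VLAN 101 on top of eth0
vconfig add eth0 101
# Assign an address on the 10.101.*.* management network and bring the interface up
ifconfig eth0.101 10.101.0.1 netmask 255.255.0.0 up

Every packet leaving eth0.101 now carries the VLAN ID 101 tag, which the port group with ID 4095 passes through to the physical switch.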

With this technique, a single port group is enough for all management virtual images (icb-tivsam, icb-itm, icb-nfs, icb-tuam). A separate port group is used for the provisioned virtual images.

Figure 2. CloudBurst 2.1 out of the box network configuration for VM deployment

When you're done, you have two disjoint sets of virtual images. The first set (already configured by default) uses 10.100.*.* as the management network and 10.130.*.* as the customer network. The second set uses 10.101.*.* as the management network and 10.131.*.* as the customer network.

First you need to define a vNIC on 10.101.*.* for icb-tivsam, icb-nfs and icb-itm.

It's easy to understand the need for a vNIC on icb-tivsam: 10.101.*.* is the management network used by Tivoli Service Automation Manager. Less obvious is why an additional vNIC is needed on icb-nfs and icb-itm.

On icb-nfs, the additional vNIC is needed because, when the IBM Tivoli Monitoring agent is installed on a provisioned virtual image, that image remotely mounts the /repository filesystem to access the IBM Tivoli Monitoring agent installation binaries.

On icb-itm the additional vNIC is used by the provisioned virtual images to send monitoring data to the IBM Tivoli Monitoring server, if the virtual image has been equipped with an IBM Tivoli Monitoring agent.

An additional vNIC on icb-tuam is not needed because icb-tuam interacts only with icb-tivsam to exchange usage and accounting data; icb-tuam does not interact directly with the provisioned VMs.

To create a VLAN with VLAN tagging, predefined commands are present on icb-tivsam, icb-nfs and icb-itm. This shields the user from the complexity of dealing directly with the operating system commands.

For example, if you log into icb-tivsam, you can find the command:

/opt/IBM/CB/bin/addVLAN.sh

The syntax is:

/opt/IBM/CB/bin/addVLAN.sh <VLAN ID> <IP address> <netmask>

Run the following command:

/opt/IBM/CB/bin/addVLAN.sh 101 10.101.0.1 255.255.0.0

Once the command completes, you can check that the network interface has been properly configured by running ifconfig eth0.101.

Repeat the same process for icb-itm and icb-nfs using 10.101.0.7 and 10.101.0.5 as IP addresses respectively.

If you are exploiting dual-node HA, repeat the same step for icb-nfs-ha using IP address 10.101.0.6 and for icb-tivsam-ha using IP address 10.101.0.3.
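
For reference, the complete set of invocations across the management images looks like this (assuming VLAN ID 101 and netmask 255.255.0.0 on every node):

# On icb-tivsam
/opt/IBM/CB/bin/addVLAN.sh 101 10.101.0.1 255.255.0.0
# On icb-nfs
/opt/IBM/CB/bin/addVLAN.sh 101 10.101.0.5 255.255.0.0
# On icb-itm
/opt/IBM/CB/bin/addVLAN.sh 101 10.101.0.7 255.255.0.0
# On icb-tivsam-ha (dual-node HA only)
/opt/IBM/CB/bin/addVLAN.sh 101 10.101.0.3 255.255.0.0
# On icb-nfs-ha (dual-node HA only)
/opt/IBM/CB/bin/addVLAN.sh 101 10.101.0.6 255.255.0.0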

Note: After adding the vNIC on icb-itm, you need to restart the IBM Tivoli Monitoring server (TEMS); otherwise it does not listen on the newly created IP address and the provisioned virtual images cannot send it any data. To do this, log in to icb-itm as virtuser and launch /opt/IBM/ITM/bin/itmcmd server stop TEMS. Once that operation completes, issue /opt/IBM/ITM/bin/itmcmd server start TEMS.
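
For example, the restart sequence on icb-itm is:

# As virtuser on icb-itm
/opt/IBM/ITM/bin/itmcmd server stop TEMS
# Wait for the stop to complete, then restart the monitoring server
/opt/IBM/ITM/bin/itmcmd server start TEMS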

Step 3: Add the service IP addresses

It is important to define service IP addresses for the VLAN created in the previous step, especially if you are exploiting dual-node high availability. The key reason is that you want Tivoli System Automation for Multiplatforms to shield you from knowing which virtual image (icb-tivsam or icb-tivsam-ha, icb-nfs or icb-nfs-ha) is actually providing the service. This task needs to be performed on icb-tivsam and icb-nfs.

To do that, stop the services managed by Tivoli System Automation on icb-nfs and icb-tivsam.

  1. Log in as root and launch rgreq -o stop top-rg on icb-nfs or rgreq -o stop tsam-rg on icb-tivsam.
  2. Check that the services are offline by looking at the output of the command lssam -V. It can take a few minutes for all services to go offline.
  3. Use the /opt/IBM/CB/bin/tsa_add_service_ip.sh command to create the service IP address. The syntax is:
    /opt/IBM/CB/bin/tsa_add_service_ip.sh <name of the NIC> <IP address> <netmask>
  4. In this example, for icb-nfs use:
    /opt/IBM/CB/bin/tsa_add_service_ip.sh eth0.101 10.101.0.4 255.255.0.0

    For icb-tivsam use:
    /opt/IBM/CB/bin/tsa_add_service_ip.sh eth0.101 10.101.0.2 255.255.0.0
  5. After the service IP address has been created, restart the services managed by Tivoli System Automation using the command rgreq -o cancel top-rg on icb-nfs and rgreq -o cancel tsam-rg on icb-tivsam.
  6. Check the output with lssam -V to be sure all resources are online. It may take a few minutes so keep checking.

Note: The procedure is exactly the same whether or not you are exploiting dual-node high availability. You do not need to run this step on icb-tivsam-ha and icb-nfs-ha; Tivoli System Automation for Multiplatforms detects whether you are in a dual-node configuration and properly propagates the needed information to the backup virtual images.
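
Put together, the sequence on each node looks like the following sketch (it simply strings together the commands above; the IP addresses are the ones chosen for this example):

# On icb-nfs, as root
rgreq -o stop top-rg
lssam -V        # repeat until the resources are offline
/opt/IBM/CB/bin/tsa_add_service_ip.sh eth0.101 10.101.0.4 255.255.0.0
rgreq -o cancel top-rg
lssam -V        # repeat until the resources are back online

# On icb-tivsam, as root
rgreq -o stop tsam-rg
lssam -V        # repeat until the resources are offline
/opt/IBM/CB/bin/tsa_add_service_ip.sh eth0.101 10.101.0.2 255.255.0.0
rgreq -o cancel tsam-rg
lssam -V        # repeat until the resources are back online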

Step 4: Create a new resource pool and new cloud pool

To exploit the Tivoli Service Automation Manager capability to create a set of virtual images using different networks and different resources, create different resource pools.

To reduce the complexity of working with Tivoli Service Automation Manager, we've provided templates on the icb-tivsam image that you can easily adapt and import.

The first file to look at is /opt/IBM/CB/TSAMDefinitions/New_Pool_DCM_objects.xml. This file contains the minimum set of entities that need to be defined when creating a new resource pool: The management network, customer network, cloud file repository (for UNIX® and Windows®), and software stack.

Customize

The minimal customization required is:

  • Change all the occurrences of the variable $MGMT_VLANID with the VLAN ID of the new management network (which in our scenario is 101).
  • Change all the occurrences of the variable $CUST_VLANID with the VLAN ID of your new customer network (for this example, we use the 10.131.*.* network).

You also may want to customize the customer network with your specific networking data (such as IP address, netmask, and DNS). Remember to modify the variable SANStgPoolName to store the proper list of VM disks to be used.

It is useful to create a backup copy of New_Pool_DCM_objects.xml for problem determination or if you want to repeat the procedure to add more resource pools.

For this article, we named the customized copy of New_Pool_DCM_objects.xml 101_Pool_DCM_objects.xml.
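
A minimal way to script the customization (assuming the placeholders appear literally as $MGMT_VLANID and $CUST_VLANID in the template, and that the customer VLAN ID is 131 to match the 10.131.*.* network; adjust the values for your environment):

cd /opt/IBM/CB/TSAMDefinitions
# Keep a backup copy of the original template
cp New_Pool_DCM_objects.xml New_Pool_DCM_objects.xml.orig
# Create the customized copy and replace the VLAN ID placeholders
cp New_Pool_DCM_objects.xml 101_Pool_DCM_objects.xml
sed -i 's/\$MGMT_VLANID/101/g' 101_Pool_DCM_objects.xml
sed -i 's/\$CUST_VLANID/131/g' 101_Pool_DCM_objects.xml
# Then edit 101_Pool_DCM_objects.xml manually to set the customer network
# IP addresses, netmask, DNS, and the SANStgPoolName variable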

Import

After the file has been customized, import it into Tivoli Provisioning Manager:

As user tioadmin, run the command:

/opt/IBM/tivoli/tpm/tools/xmlimport.sh file:///opt/IBM/CB/TSAMDefinitions/101_Pool_DCM_objects.xml
  1. Create a new cloud pool. At this point you need to create a new cloud pool that uses the newly created resource pool. A file like this one can be used:
    1.name=VMware System x 101
    1.tpmHPType=VMware
    1.order=1
    1.tpmPool=Esx Cloud Pool 101
    1.hypervisorHostName=vsphere
    1.fileRepositoryName=VMwareFileRepository101
    1.maxVCPU=4
    1.maxCPUUnits=40
    1.maxMemMB=8192
  2. Change the file for your environment. Give the file a meaningful name, like 101_vrpool.properties, so that you remember it was created for your 101 VLAN.
  3. Import the file. Once the file has been created, import it using TSAM administrative user interface:
    1. Log in to https://icb-nfs/maximo as maxadmin.
    2. Click Go To > Service Automation > Cloud Pool Administration.
    3. Click Import Virtual Resource Pools.
    4. Browse to the directory on the disk where you put 101_vrpool.properties and import it.
  4. Rerun Virtual Center Discovery. Click the newly imported cloud pool. Theoretically you should not need to rerun the Virtual Center Discovery, but in this case it is needed because you created Second-cluster in VMware after Virtual Center Discovery was run. The speed with which the Virtual Center Discovery completes depends on the number of blades.
  5. When Virtual Center Discovery has successfully completed, configure the Image Repository Host.
    1. Set it to cn2.private.cloud.com.
    2. Specify VMwareFileRepository101 as the File Repository Name (that is, the one used in /opt/IBM/CB/TSAMDefinitions/101_Pool_DCM_objects.xml).
    3. Specify Second-cluster as the Cluster Name. The specified cluster must not already be associated with another resource pool.
    4. Specify clone_backup_disk as the Saved Image Repository.
    5. Click the Validate and Enable Cloud Pool button to make the new pool operational.
  6. Add the second cluster. The second cluster needs to be manually added to the newly created resource pool. While you are still in the Tivoli Service Automation Manager administrative user interface:
    1. Click Go To > Administration > Provisioning > Resource Pools.
    2. Select ESX Cloud Pool 101 (it was created when importing 101_Pool_DCM_objects.xml).
    3. Click Select Action and select Add Computer.
    4. Select Second-cluster by clicking the check box next to it.

At this point, you are ready to start provisioning without the IBM Tivoli Monitoring agent. If you are not interested in deploying the monitoring agent, you can skip Step 5.

Step 5: Configure Tivoli Service Automation Manager for IBM Tivoli Monitoring agent deployment

For the cloud pool defined by default (VMware System x), you can install the IBM Tivoli Monitoring agent at provisioning time by simply selecting the Monitoring Agent to be Installed check box on the Create Project panel.

This technique does not work for the second cloud pool (VMware System x 101), because the software definition for the IBM Tivoli Monitoring agent associated with that check box is configured to use the 10.100.*.* network (see the configuration of CloudFileRepository and WindowsCloudFileRepository).

If, when creating a project, you select VMware System x 101 and select the check box to install the Monitoring Agent, provisioning starts but fails: the provisioned virtual image has eth0 configured on 10.101.*.* and tries to mount /repository using a 10.100.*.* IP address. Since you are trying to achieve network isolation between the two pools (the netmask used is 255.255.0.0), the mount operation fails, which in turn causes the whole provisioning workflow to fail.
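
To see what goes wrong, consider what the provisioned image effectively attempts (the 10.100.0.4 repository address is used here only for illustration, as the 10.100.*.* counterpart of the service IP defined in Step 3):

# From a VM provisioned in the VMware System x 101 pool (eth0 on 10.101.*.*)
mount -t nfs 10.100.0.4:/repository /repository
# The mount fails: 10.100.*.* sits on a different, isolated VLAN, so the
# NFS repository is unreachable and the provisioning workflow fails with it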

To have the IBM Tivoli Monitoring agent installed at provisioning time for the second cloud pool, you need to define it as additional software. Do not use the Monitoring Agent to be Installed check box; instead use the Available Software list in the Create Project panel.

To use this functionality, create a software module that represents this new configuration for IBM Tivoli Monitoring agent.

A template is available in icb-tivsam under /opt/IBM/SC/script. Copy it into /opt/IBM/CB/TSAMDefinitions/101_itm.xml and modify it:

  1. Change all occurrences of IBM Tivoli Monitoring Agent to something that lets you easily identify it as related to your 101 network, for example IBM Tivoli Monitoring Agent 101.
  2. Name the cloud file repository CloudFileRepository 101, specify the proper version (6.2.2), and specify the proper path in the file repository (itmAgent622). In the host parameter, specify the additional IP address of the TEMS (in our example, 10.101.0.7).

    The file looks similar to the one in this sample file.

  3. Import the 101_itm.xml file. As user tioadmin, launch:
    /opt/IBM/tivoli/tpm/tools/xmlimport.sh file:///opt/IBM/CB/TSAMDefinitions/101_itm.xml

    The installation template works for deploying IBM Tivoli Monitoring agent on Linux. For other platforms, refer to the TSAM information center.
  4. Finally, attach the IBM Tivoli Monitoring agent to the software stack:
    1. Log in to the Tivoli Service Automation Manager administrative user interface.
    2. Click Go To > IT infrastructure > Software Catalog > Software Stacks.
    3. Select EsxPoolStack101, then select Add Stack Entry from the Select Action drop down list.
    4. In the Software Definition edit field, type IBM Tivoli Monitoring Agent 101.
    5. Click Submit.
    6. Remember to save the changes.

Step 6: Verification

The best way to verify the configuration is correct is to create a project for the new resource pool.

Consider that each resource pool has its own set of registered image templates. If an image template has already been registered for a resource pool, you cannot register it for another resource pool. If you really want to use the same template, you need to clone it, rerun the Virtual Center Discovery, and then register the clone for your new resource pool.

For the sake of simplicity, assume you already have the image template in VMware and it is not associated with any other resource pool (in this example, it has not been registered for VMware System x). Because we added the template to the hypervisor after running the Image Template Discovery, we need to run the discovery again:

  1. From the Tivoli Service Automation Manager administrative user interface, click Go To > Service Automation > Cloud Pool Administration.
  2. Filter to find VMware System x 101 and click that entry.
  3. Launch the image template discovery by clicking the Image Discovery button.

Once the discovery has successfully completed, you can go to https://icb-nfs/SimpleSRM and register the newly discovered template. Remember to select VMware System x 101 as the resource pool.

After registering the template, you are ready to create a project. Remember to select VMware System x 101 as the resource pool, and do not check the Monitoring Agent to be Installed check box. Instead, select IBM Tivoli Monitoring Agent 101 from the Available Software list.

Upon successful completion of the service request, you can log in to the provisioned VM and check the network configuration. You can also see that an additional system has shown up in the IBM Tivoli Monitoring user interface (http://icb-itm:1920).

Note: While logged in to https://icb-nfs/SimpleSRM, if you select the provisioned server and look for monitoring information, it is not available. Monitoring information is available only for the virtual images provisioned on VMware System x.

At this point, the last check is to verify that the provisioned virtual image cannot reach any provisioned virtual image belonging to the VMware System x pool.
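
A quick way to run this check (the target addresses are examples; use the actual addresses of a virtual image provisioned in the VMware System x pool):

# From a VM provisioned in the VMware System x 101 pool
ping -c 3 10.100.0.50    # a management address in the VMware System x pool
ping -c 3 10.130.0.50    # a customer address in the VMware System x pool
# Both commands should report 100% packet loss if the networks are segregated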


Conclusion

Having completed this article, you should be familiar with CloudBurst 2.1 internals and with the techniques and templates available to exploit the multiple-network support.


Download

Sample code for this article: 101.tar (20 KB)

Resources

Get products and technologies

  • See the product images available on the IBM Smart Business Development and Test on the IBM Cloud.
