Fine-grain network quality of service on IBM AIX 7.1 and Linux in a multi-tenant cloud environment

Learn how to provide network quality of service to AIX 7.1 clients

In a multi-tenant environment, tenants typically share a single infrastructure on which resources are logically partitioned, and cloud administrators handle the provisioning required to share that infrastructure. Among the overall administrative tasks, maintaining network quality of service (QoS) per tenant is essential but tedious. This article demonstrates a powerful QoS mechanism available with the Linux® operating system that helps cloud administrators effectively divide the total network bandwidth available on a centralized Linux server hosting cloud applications among tenant client systems such as IBM® AIX®, Microsoft® Windows®, and Linux.


Sandeep Ramesh Patil (sandeep.patil@in.ibm.com), Senior Software Engineer, IBM

Sandeep Ramesh Patil is a senior software engineer at IBM India Software Labs. He works as a cloud architect for IBM scale-out storage systems. Previously, Sandeep worked on distributed technology, including DCE, SARPC, and security products such as IBM Network Authentication Services (IBM Kerberos). He is an IBM developerWorks master author and an IBM Master Inventor. Sandeep holds a bachelor's degree in Computer Science Engineering from the University of Pune, India. You can contact him at sandeep.patil@in.ibm.com.



Deepak Rambhau Ghuge (deeghuge@in.ibm.com), Software Developer, IBM

Deepak Rambhau Ghuge is a software developer at IBM India Software Labs. He currently works in NAS security development for IBM Scale Out Network Attached Storage (SONAS). He has worked on the development of lightweight multi-tenant virtualization and IBM System x® maintenance. Deepak holds a bachelor of technology degree in Computer Engineering from the College of Engineering, Pune (COEP). You can contact him at deeghuge@in.ibm.com.



Sasikanth Eda (sasikanth.eda@in.ibm.com), Associate System Engineer, IBM

Sasikanth Eda is an Associate System Engineer with the SONAS team at IBM India Software Labs. He holds a master's degree in Microelectronics and VLSI Design. You can contact him at sasikanth.eda@in.ibm.com.



21 March 2013


Introduction

With the growth of small- and medium-scale businesses, requirements such as scalability, lower initial investment, easy deployment, reduced floor space, pay-per-usage, and security have emerged as hot issues that are effectively addressed by the cloud model. Of all the cloud components, the multi-tenancy model has enabled sharing of both the software and hardware layers. In layman's terms, a tenant is someone who uses property owned by someone else and pays for that use. Similarly, multi-tenancy in cloud terminology refers to multiple clients using the shared resources provided by the cloud infrastructure owner.

The multi-tenancy concept forms the foundation for two mature models of cloud implementation: software as a service (SaaS), where multiple clients share the software as a resource, and infrastructure as a service (IaaS), where multiple clients share the application, platform, and network resources. The primary functions that multi-tenancy requires a cloud vendor to perform are maintaining isolation between the clients and managing the quality of the services provided to them.

This article helps the cloud administrator provide fine-grained QoS to tenants on the basis of an opted tariff plan by building different network QoS policies per IP address using Linux, instead of using dedicated hardware (such as a network splitter).

Network schema in a multi-tenant environment

Let's consider a scenario where a cloud vendor has multiple clients and wants to provide different levels of quality of service over the network, based on the plan each client has opted for. The following figure provides a pictorial overview of a sample QoS implementation, where a set of machines is categorized into different QoS pools and the cloud administrator needs to divide the network bandwidth accordingly.

Figure 1. Typical network configuration in an enterprise cloud

To set the context, let's consider that a cloud vendor has launched multiple tariff slabs and has come up with the following plan details (a sample rate calculation for these plans follows the list):

  • Clients under the Platinum plan will obtain 60% of total network speed.
  • Clients under the Gold plan will obtain 25% of total network speed.
  • Clients under the Silver plan will obtain 10% of total network speed.
  • Clients under the Bronze plan will obtain 5% of total network speed.
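Before touching any tc commands, it helps to translate these percentages into concrete rates. The following is a minimal sketch and not part of the original configuration steps; the total link speed of 1000 Mbit/s and the variable names are illustrative assumptions, and you would substitute the speed reported by ethtool for your interface:

# Minimal sketch: derive per-plan rates from an assumed total link speed.
TOTAL_KBIT=1000000                          # assumption: a 1000 Mbit/s link
PLATINUM_KBIT=$((TOTAL_KBIT * 60 / 100))    # 600000 kbit
GOLD_KBIT=$((TOTAL_KBIT * 25 / 100))        # 250000 kbit
SILVER_KBIT=$((TOTAL_KBIT * 10 / 100))      # 100000 kbit
BRONZE_KBIT=$((TOTAL_KBIT * 5 / 100))       #  50000 kbit
echo "Platinum=${PLATINUM_KBIT}kbit Gold=${GOLD_KBIT}kbit Silver=${SILVER_KBIT}kbit Bronze=${BRONZE_KBIT}kbit"

Rates derived this way can then be passed to the tc class commands shown later in this article (for example, rate ${PLATINUM_KBIT}kbit).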

Now, consider a scenario where two customers, A and B, use the same data server infrastructure, where customer A is on the Platinum plan and customer B is on the Bronze plan. If customer B's workload on the commonly shared, Linux-based cloud server is very network-intensive, and no network separation or limiting exists, customer B will consume network resources that should be reserved for customer A, impacting customer A's workload and violating the service level agreement (SLA) with the premium customer. This is clearly a problem in a multi-tenant environment. To deliver such quality of service and honor SLAs with multi-tenant clients, the cloud vendor should apply manageable, fine-grained network shaping on the network interface card that serves the clients' requests.

To show how this issue can be addressed, we demonstrate the configuration of network QoS on a simple setup that involves a Linux server (the central common cloud server hosting cloud applications) and an AIX 7.1 client (the tenant's system that accesses the cloud application), as shown in the following figure. In this hypothetical case, the network-related QoS and SLA associated with the AIX client require limiting the client to a 1 MBps data download speed. In this demonstration, we shape network bandwidth on the Linux server so that the AIX client is restricted to 1 MBps for any data that is fetched or downloaded by the client. This helps adhere to the network-related QoS associated with the AIX client.

Figure 2. Example setup between the Linux server and the AIX 7.1 client

Prerequisites to configure network QoS

The prerequisites for the setup include (a quick verification sketch follows the list):

  1. Linux server running
    • Linux kernel 2.6.x (RHEL 6 default kernel is sufficient)
    • RPM package list
      • iproute2/tc (iproute-2.6.35-9.fc14.x86_64.rpm, included with the Red Hat Enterprise Linux [RHEL] installation)
      • tcptrack (optional, to measure the client's network bandwidth)
  2. AIX 7.1 client
    • No special packages or configuration required
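You can quickly verify these prerequisites before configuring anything. The commands below are a minimal sketch for an RHEL-style system; the yum repository providing tcptrack and the sample Speed line are assumptions, and ethX0 is the placeholder interface name used throughout this article.

root@linuxserver:# uname -r                        # confirm a 2.6.x kernel
root@linuxserver:# rpm -q iproute                  # the tc command ships in the iproute package
root@linuxserver:# yum install tcptrack            # optional, if a configured repository provides it
root@linuxserver:# ethtool ethX0 | grep -i speed   # physical bandwidth of the serving interface
        Speed: 1000Mb/s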

The Linux kernel 2.6.x provides a flexible framework for implementing network shaping, and the entire process can be grouped into the following three steps:

Step 1: Identify the destination on which network shaping has to be applied. In this case, it has to be the tenant’s IP address on which network restriction has to be applied (refer to the Configure network QoS on the Linux server connected to the AIX client section).

Step 2: Create a queuing policy based on the modules enabled in kernel (refer to the Configure network QoS on the Linux server connected to the AIX client section below from Listing 5 to Listing 8).

Step 3: Attach a filter (default: u32) to the queuing policy (refer to the Configure network QoS on the Linux server connected to the AIX client section below Listing 9).

Figure 3 shows a conceptual view of network shaping. For detailed information and advanced explanation about network shaping in Linux, refer to the Resources section.

Figure 3. Conceptual overview of network shaping

In Figure 3:

  • X denotes the total network speed (physical bandwidth) of the server.
  • Ci = X/Ri denotes the network bandwidth of child class i, obtained by partitioning the total network bandwidth X by the ratio Ri.
  • Ci/Ti denotes the network bandwidth allocated to each tenant, obtained by partitioning the child class bandwidth Ci according to the QoS plan Ti opted by the client.
  • The terms root qdisc, child class, and filter and their importance are explained later in this article; a consolidated command sketch of this hierarchy follows the list.
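To make the hierarchy in Figure 3 concrete, here is a minimal command sketch of the same structure for two tenants. The interface name (ethX0), handles, rates, and the second tenant's IP address are illustrative assumptions, and each command is explained step by step in the next section.

# Root qdisc (X): attach an HTB queuing discipline to the serving interface
tc qdisc add dev ethX0 root handle 10: htb

# Child class (Ci): capped at the physical bandwidth of the interface
tc class add dev ethX0 parent 10:0 classid 10:10 htb rate 100mbps

# Leaf classes (Ci/Ti): one per tenant, sized according to the opted plan
tc class add dev ethX0 parent 10:10 classid 10:100 htb rate 1mbps   # tenant 1 (the AIX client)
tc class add dev ethX0 parent 10:10 classid 10:200 htb rate 5mbps   # tenant 2 (hypothetical)

# Filters: steer each tenant's traffic into its leaf class
tc filter add dev ethX0 protocol ip parent 10:0 prio 5 u32 match ip dst 172.18.10.40 flowid 10:100
tc filter add dev ethX0 protocol ip parent 10:0 prio 5 u32 match ip dst 172.18.10.41 flowid 10:200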

Configure network QoS on the Linux server connected to the AIX client

  1. Assigning static IP address

    Assign a static IP address to the Linux server and the client AIX system.

    Linux server IP address = 172.18.10.60

    AIX client IP address = 172.18.10.40

    Note: Make sure that both the IP addresses are reachable.

  2. Checking for the existence of root queuing discipline (qdisc)

    Network bandwidth shaping is a tree-based hierarchical approach in which the root qdisc forms the basic cell (the queuing discipline that communicates with the kernel) attached to the server network interface connected to the client. The tc command (provided in Linux) is the administrative interface that we use in this example for network bandwidth shaping. (For more information about the tc command, refer to the tc man page on the Linux server.) The following steps allow you to check for existing network bandwidth limiting configurations attached to the kernel. This article assumes no previous configuration and proceeds with the creation of a new one.

    Listing 1. Checking for existing qdisc
    root@linuxserver:# tc qdisc show dev ethX0
    root@linuxserver:# echo $?
    0

    Make sure there is no root qdisc attached to the network device. If one exists, delete it as shown below.

    Listing 2. Deletion of qdisc
    root@linuxserver:# tc qdisc del root dev ethX0
    root@linuxserver:# echo $?
    0
  3. Checking for classes attached to qdisc

    Classes form the programmable entities that are attached to the root queuing discipline.
    Listing 3. Checking for existing classes
    root@linuxserver:# tc class show dev ethX0
    root@linuxserver:# echo $?
    0
  4. Checking for the existence of filters

    Filters are the action entities that separate traffic for the specified IP address from traffic for all the other IP addresses connected to the Linux server.
    Listing 4. Checking for an existing filter
    root@linuxserver:# tc -s -d filter show dev ethX0
    root@linuxserver:# echo $?
    0

    More detailed output can be realized by including -s (statistics) and/or -d (details) to any tc command.

  5. Addition of queuing discipline

    Add a root qdisc to ethX0 (the Ethernet interface of the Linux server that serves the requests of the AIX client). Here, htb refers to the hierarchical token bucket, a classful queuing discipline. For further details, refer to the Resources section.
    Listing 5. Adding a root qdisc
    root@linuxserver:# tc qdisc add dev ethX0 root handle 10: htb
    
    root@linuxserver:# tc -s qdisc show dev eth0
    qdisc htb 10: r2q 10 default 0 direct_packets_stat 29
     Sent 3338 bytes 29 pkt (dropped 0, overlimits 0 requeues 0) 
     rate 0bit 0pps backlog 0b 0p requeues 0 
    
    root@linuxserver:# tc -d qdisc show dev eth0
    qdisc htb 10: r2q 10 default 0 direct_packets_stat 58 ver 3.17
  6. Creation of a child class discipline

    When a child class is created and attached to the root qdisc, it acts as the parent for all other classes. This class has a bandwidth parameter equal to the physical bandwidth of the interface (here, the assumed physical bandwidth of the interface is 100 Mbps). Note that tc interprets the mbps suffix as megabytes per second (use mbit for megabits per second), which is why rate 100mbps is reported as 800000Kbit in the output below.
    Listing 6. Adding a child class
    root@linuxserver:# tc class add dev ethX0 parent 10:0 classid 10:10 htb rate 100mbps
    
    root@linuxserver:# tc -s class show dev eth0
    class htb 10:10 root prio 0 rate 800000Kbit ceil 800000Kbit burst 101600b 
    cburst 101600b 
     Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
     rate 0bit 0pps backlog 0b 0p requeues 0 
     lended: 0 borrowed: 0 giants: 0
     tokens: 1016 ctokens: 1016
    
    root@linuxserver :# tc -d class show dev eth0
    class htb 10:10 root prio 0 quantum 200000 rate 800000Kbit ceil 800000Kbit 
    burst 101600b/8 mpu 0b overhead 0b cburst 101600b/8 mpu 0b overhead 0b level 0

    You can find the physical bandwidth of the Ethernet card by using the ethtool command, as shown below. For more information about the ethtool command, refer to the man pages provided by Red Hat Linux distributions.

    Listing 7. Using the ethtool command on the Linux server to identify the physical network bandwidth
    root@linuxserver:# ethtool ethX0
  7. Attaching a leaf class to the child class created

    Attach a leaf class to the child class you created, with the rate set to the speed provided in the plan opted by the AIX client. Here is an example that considers 1 MBps as the network speed option opted by an AIX client.
    Listing 8. Adding a leaf class
    root@linuxserver:# tc class add dev ethX0 parent 10:10 classid 10:100 htb rate 1mbps
    
    root@linuxserver:# tc -s class show dev eth0
    class htb 10:10 root rate 800000Kbit ceil 800000Kbit burst 101600b cburst 101600b 
     Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
     rate 0bit 0pps backlog 0b 0p requeues 0 
     lended: 0 borrowed: 0 giants: 0
     tokens: 1016 ctokens: 1016
    class htb 10:100 parent 10:10 prio 0 rate 8000Kbit ceil 8000Kbit burst 2600b 
    cburst 2600b 
     Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
     rate 0bit 0pps backlog 0b 0p requeues 0 
     lended: 0 borrowed: 0 giants: 0
     tokens: 2600 ctokens: 2600
     
    root@localhost :# tc -d class show dev eth0
    class htb 10:10 root rate 800000Kbit ceil 800000Kbit burst 101600b/8 mpu 0b 
    overhead 0b cburst 101600b/8 mpu 0b overhead 0b level 7 
    class htb 10:100 parent 10:10 prio 0 quantum 100000 rate 8000Kbit ceil 8000Kbit 
    burst 2600b/8 mpu 0b overhead 0b cburst 2600b/8 mpu 0b overhead 0b level 0
  8. Attaching a qdisc to the leaf class created

    Attach a stochastic fairness queueing (SFQ) qdisc, which belongs to the family of queuing disciplines based on the fair queuing algorithm, to the leaf class.
    Listing 9. Adding a qdisc to the leaf class
    root@linuxserver:# tc qdisc add dev ethX0 parent 10:100 sfq quantum 1514b perturb 15
    
    root@linuxserver:# tc -s qdisc show dev eth0
    qdisc htb 10: r2q 10 default 0 direct_packets_stat 256
     Sent 37768 bytes 256 pkt (dropped 0, overlimits 0 requeues 0) 
     rate 0bit 0pps backlog 0b 0p requeues 0 
    qdisc sfq 8002: parent 10:100 limit 128p quantum 1514b perturb 15sec 
     Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
     rate 0bit 0pps backlog 0b 0p requeues 0 
    
    root@linuxserver:# tc -d qdisc show dev eth0
    qdisc htb 10: r2q 10 default 0 direct_packets_stat 286 ver 3.17
    qdisc sfq 8002: parent 10:100 limit 128p quantum 1514b flows 128/1024 perturb 15sec

    Here, the quantum option used when adding the SFQ qdisc (in Listing 9) refers to the number of bytes a stream is allowed to dequeue before the next queue gets a turn.

  9. Attaching a filter to the leaf class created
    Listing 10. Adding a filter to the leaf class
    root@linuxserver:# tc filter add dev ethX0 protocol ip parent 10:0 prio 5 u32 match 
    ip dst 172.18.10.40 flowid 10:100
    
    root@linuxserver:# tc -s filter show dev eth0
    filter parent 10: protocol ip pref 5 u32 
    filter parent 10: protocol ip pref 5 u32 fh 800: ht divisor 1 
    filter parent 10: protocol ip pref 5 u32 fh 800::800 order 2048 key ht 800 bkt 0 
    flowid 10:100 (rule hit 30 success 0)
      match 097a7a04/ffffffff at 16 (success 0 ) 
    
    root@linuxserver:# tc -d filter show dev eth0
    filter parent 10: protocol ip pref 5 u32 
    filter parent 10: protocol ip pref 5 u32 fh 800: ht divisor 1 
    filter parent 10: protocol ip pref 5 u32 fh 800::800 order 2048 key ht 800 bkt 0 
    flowid 10:100 
      match 097a7a04/ffffffff at 16

    Listing 10 shows the addition of a filter that matches the particular IP address to which the network restriction needs to be applied.

    In an enterprise environment, however, a client can request that the network speed restriction be applied to an entire subnet; this kind of scenario can be addressed as shown in Listing 11 (a sketch for adjusting an existing tenant's plan follows the listing).

    Listing 11. Applying a filter for the entire subnet
    root@linuxserver:# tc filter add dev ethX0 protocol ip parent 10:0 prio 5 
    u32 match ip dst 172.18.10.0/24 flowid 10:100
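    Plan changes for an existing tenant do not require rebuilding the whole hierarchy. As a minimal sketch (the new rate of 5mbps is an illustrative value for a plan upgrade), the leaf class created earlier can be resized in place and the counters checked afterward:

    root@linuxserver:# tc class change dev ethX0 parent 10:10 classid 10:100 htb rate 5mbps
    root@linuxserver:# tc -s class show dev ethX0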

Sample network bandwidth test on AIX

Listing 12 shows the network bandwidth test on AIX 7.1 before applying restriction on available bandwidth.

Listing 12. Sample test data download on AIX 7.1 from the Linux server using scp (before shaping)
root@linuxserver:# ssh root@172.18.10.40
******************************************************************************
*                                                                            *
*   Welcome to AIX Version 7.1 !                                             * 
*                                                                            * 
*   Please see the README file in /usr/lpp/bos for information pertinent to  *
*   this release of the AIX Operating System.                                *
*                                                                            *
*                                                                            *
******************************************************************************
# cd tmp/traffic_shaping
# scp root@172.18.10.60:/root/Data .
root@172.18.10.60's password:
Data                                       38%   502MB   24MB/s   1:06  ETA

Listing 13 shows the network bandwidth test on AIX 7.1 after applying restriction on available bandwidth.

Listing 13. Sample test data download on AIX 7.1 from the Linux server using scp (after shaping)
root@linuxserver:# ssh root@172.18.10.40
******************************************************************************
*                                                                            *
*   Welcome to AIX Version 7.1 !                                             * 
*                                                                            * 
*   Please see the README file in /usr/lpp/bos for information pertinent to  *
*   this release of the AIX Operating System.                                *
*                                                                            *
*                                                                            *
******************************************************************************
# cd tmp/traffic_shaping
# scp root@172.18.10.60:/root/Data .
root@172.18.10.60's password:
Data                                      7%   125MB   962KB/s   17:01  ETA

In the output shown in Listing 12 and Listing 13, you can clearly see the effect of our configuration on network bandwidth. With our changes, the bandwidth available to copy a file drops from 24 MB/s to 962 KB/s (around 1 MBps).
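To observe the shaping while it is in effect, you can watch the per-class byte counters with tc or, optionally, the per-connection rates with tcptrack (listed in the prerequisites). The following is a minimal sketch using the interface and client IP address assumed in this setup:

root@linuxserver:# tc -s class show dev ethX0                 # per-class byte and packet counters
root@linuxserver:# tcptrack -i ethX0 host 172.18.10.40        # optional: live per-connection rates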

Note

  1. The above steps require root privileges to perform the configuration on the network device.
  2. The shaping configuration is lost upon reboot. (This can be solved by adding the rules to an initialization script; see the sketch after this list.)
  3. The network shaping explained above works only for network traffic that goes out of the Linux server; in other words, it applies only to data being fetched or downloaded from the Linux server. It does not work for data being uploaded to the Linux server. To shape the network traffic that goes into the Linux server (that is, data being uploaded to it), you need to place a Linux system acting as a router between the client and the server and configure the rules appropriately using the tc command. Alternatively, you can use the Intermediate Queueing device (IMQ), a patch available for the Linux kernel 2.6.x that is used together with iptables, or the Intermediate Functional Block (IFB) device, configured on the central hosting Linux server. For more information, refer to the Resources section.
  4. In the above example, when we say that 25% of the network is allocated to the systems falling under the Gold QoS plan, it means that each system in the Gold bucket has 25% allocated. It does not mean that the cumulative allocation of all the systems in the Gold bucket is 25%.
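As a sketch of the approach mentioned in note 2, the shaping commands built in this article can be appended to a boot-time script so that they are reapplied after a reboot. The path /etc/rc.d/rc.local follows the usual RHEL convention and is an assumption; the commands themselves are the ones configured earlier in this article.

# /etc/rc.d/rc.local (excerpt): re-create the shaping hierarchy at boot
tc qdisc add dev ethX0 root handle 10: htb
tc class add dev ethX0 parent 10:0 classid 10:10 htb rate 100mbps
tc class add dev ethX0 parent 10:10 classid 10:100 htb rate 1mbps
tc qdisc add dev ethX0 parent 10:100 sfq quantum 1514b perturb 15
tc filter add dev ethX0 protocol ip parent 10:0 prio 5 u32 match ip dst 172.18.10.40 flowid 10:100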

Conclusion

This article demonstrated how cloud administrators can effectively divide the available bandwidth on a central Linux server among multiple tenants by using the powerful network-shaping capabilities of Linux. Such capabilities are particularly helpful in private cloud deployments where the cloud administrator wants to shape network bandwidth across different departments that act as tenants with different network QoS needs.

Resources

