Deploy IBM Security Network Protection in an Open vSwitch

Network security on software-defined networks

This article outlines how to deploy IBM Security Network Protection (XGS5100) into an Open vSwitch–based software-defined network to protect your virtual assets. Open vSwitch is an OpenFlow–based virtual switch commonly used in cloud environments.


Paul Ashley, Security Architect: Infrastructure Protection, Security Systems, IBM

Paul Ashley is a senior technical staff member for the IBM Security Systems Division. He has 24 years of IT experience, with the past 20 years focusing on information security. His experience includes working in identity and access management, privacy management, SOA security, mobile security, and advanced threat protection. He has worked with IBM clients in Asia, the United States, Europe, and the Middle East. He holds a doctorate in information security from the Queensland University of Technology and is an IBM Master Inventor. Paul is a member of the IBM Academy of Technology.



Chenta Lee, Advisory Software Engineer, Infrastructure Protection, Security Systems, IBM

Chenta Lee is a senior software engineer with IBM Security Systems. His expertise includes emerging cloud technologies, with five years of experience in cloud security products, plus software-defined networking and virtualization. He currently focuses on network security in the IBM Software Defined Network, enabling network security in IaaS frameworks. View his related video on this topic.



18 February 2014

Software-defined networking (SDN) is the technology of choice for cloud deployments, providing a scalable and flexible environment suitable for the dynamic nature of cloud. In this article, you'll learn how to deploy IBM Security Network Protection (ISNP) within an OpenFlow–supported SDN switch — the Open vSwitch — to demonstrate how easily ISNP can be deployed into an SDN environment.

Software-defined networking

SDN is a new architectural approach that aims to provide a highly flexible network suitable for today's dynamic environment. Existing networking technology is inherently static and difficult to change because minor network alterations often require substantial reconfiguration across many switches, routers, and firewalls. This process is time-consuming for administrators and inherently error-prone.

One way to think of network management is that the control plane manages where traffic is sent on the network, and the data plane forwards the traffic. In traditional network appliances, the control and data planes are within the network device, and configuration of the control plane is proprietary to the vendor's product. The SDN approach, however, separates the control and data planes. In SDN, the control plane is centrally managed across the network equipment within the enterprise (independent of vendor equipment). This architecture provides for a simple and fast way to manage the flow of traffic.

Cloud environments require dynamic resource allocation. Public and private clouds have virtualized applications and storage, so the next logical step is to virtualize the network. SDN provides the platform to allow network virtualization, allowing a fully dynamic environment for applications, storage, and the network.

OpenFlow

SDN requires the centrally managed control plane to communicate with the data plane (implemented in the physical devices). The OpenFlow protocol, from the Open Networking Foundation (see Resources), is one SDN protocol that provides this communication. OpenFlow provides granular traffic control across the range of switches and routers in an enterprise environment, both physical and virtual, independent of software vendor. This capability removes the need to configure each vendor's device individually through its proprietary interface.

Open vSwitch

Open vSwitch is an open source virtual switch licensed under the Apache 2.0 license. It's typically installed on a server to control the traffic into a hypervisor, providing a dynamic networking environment. Open vSwitch supports a range of protocols and management interfaces, including NetFlow, sFlow, SPAN, RSPAN, CLI, and 802.1ag. It also supports OpenFlow as a method of managing traffic flow.

ISNP

NSS Labs network analysis of the GX7800

Intrusion prevention systems (IPSes) are a vital component of any IT organization's security strategy. Explore the security effectiveness and performance characteristics of the GX7800 intrusion prevention appliance, based on the findings of the NSS Labs product analysis. Read the report to learn:

  • How IBM achieved an exploit block rate of 95.7 percent.
  • GX7800 ratings for stability/reliability and evasion.
  • GX7800 performance ratings based on real-world scenarios.

Download "NSS Labs Network Intrusion Prevention System Product Analysis."

The ISNP product consolidates the traditional intrusion prevention system (IPS) function with sophisticated user-based and IP reputation–based application control. ISNP provides critical insight into and control over all user activities by analyzing each connection to identify the web or non-web application in use, the action being taken, and the reputation of the application. ISNP can then decide to allow or block the connection. Additionally, ISNP can record connection information, including user and application context, and use this information for local policy refinement, including bandwidth management. Integrating an IPS with other security functions in this way allows for faster deployment and simpler administration than deploying multiple products.

The ISNP intrusion prevention functionality is implemented in the IBM X-Force R&D team's Protocol Analysis Module (PAM). This module adds sophisticated protocol and heuristics-based analysis engines to signature detection. The efficacy of the PAM engine was recently tested by NSS Labs using the GX7800, as outlined in the sidebar. The overall efficacy result achieved was 95.7 percent.


Environment setup

In this video, Chenta Lee shows you how to set up your Ubuntu environment for the rest of the article.

Video: Setting up the Ubuntu environment (5:45)

The operating system we used was the 64-bit Ubuntu 12.04 long-term support (LTS) release, with support guaranteed until April 2017. We installed Ubuntu onto a bare metal server, rather than into a virtual machine (VM). Installing the KVM hypervisor on a bare metal server provides reasonable VM performance. However, if you want to run your KVM hypervisor inside a VM, search online for "nested KVM" for instructions.

The bare metal server you plan to use should have at least three network interface cards (NICs): one for remote access, and two for the connection to the ISNP device. Figure 1 shows the layout after finishing the environment setup. Two NICs on the server connect directly to the protection ports on the ISNP. Figure 1 shows the ISNP model XGS5100.

Figure 1. Final layout of the environment setup
Image shows environment layout with three NICs

KVM hypervisor setup

Kernel-based virtual machine (KVM) is a virtualization infrastructure for the Linux kernel that turns it into a hypervisor. Before you install Ubuntu, enable the hardware virtualization extensions (Intel VT-x or AMD-V) in the BIOS settings (see Resources). You can usually find them in the advanced settings section of the BIOS menu.

The recommended Linux distribution is Ubuntu 12.04 LTS. When installing Ubuntu 12.04, follow the standard steps. When you reach the software selection step, select Virtual Machine host and OpenSSH server (to enable remote access to your server). See Figure 2.

Figure 2. VM host and OpenSSH server
Image shows choosing VM host and OpenSSH server

If you forget to select Virtual Machine host during setup, you can install the necessary KVM components with this command: # sudo apt-get install qemu-kvm libvirt-bin.

After installation, follow the next steps to finish the KVM hypervisor setup. First, make sure that your Ubuntu installation is up to date: # sudo apt-get update && sudo apt-get dist-upgrade.

Make sure your CPU supports the instruction set that KVM needs: # sudo kvm-ok.

If the output from the kvm-ok command is not "KVM acceleration can be used," you either didn't enable the virtualization extensions in the BIOS or your CPU is too old to run the KVM hypervisor.
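If kvm-ok isn't available on your system, a quick check of the CPU flags gives the same basic information. This is a generic Linux check, not something specific to this setup: a nonzero count means the CPU advertises hardware virtualization (vmx for Intel, svm for AMD).

# egrep -c '(vmx|svm)' /proc/cpuinfo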

Add your user to the kvm group: # sudo gpasswd -a $USER kvm. Add your user to the libvirtd group: # sudo gpasswd -a $USER libvirtd. Then modify /etc/libvirt/qemu.conf to add the following settings:

# sudo vi /etc/libvirt/qemu.conf
  user = "root"
  group = "kvm"
  security_driver = "none"
  cgroup_device_acl = [
      "/dev/null", "/dev/full", "/dev/zero",
      "/dev/random", "/dev/urandom",
      "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
      "/dev/rtc", "/dev/hpet", "/dev/net/tun",
  ]
  clear_emulator_capabilities = 0

Make sure to uncomment the above settings by removing the # at the beginning of each line.

Restart libvirt-bin to load the new qemu setting: # sudo service libvirt-bin restart.

Optionally install virt-manager: # sudo apt-get install virt-manager. This step provides a user-friendly interface to configure your VMs.


Open vSwitch setup

To set up Open vSwitch, install all the necessary Open vSwitch components (we used Open vSwitch 1.4.0):

# sudo apt-get install openvswitch-controller openvswitch-switch 
openvswitch-datapath-source openvswitch-datapath-dkms

Note: You might encounter some issues when installing these packages on the 3.8 kernel; the Ubuntu team was working to resolve them at the time of this writing. The good news is that the 3.8 kernel already includes the native openvswitch module, so it's not necessary to build your own. As long as the openvswitch module can be loaded, your system is ready to use.

Make sure that the openvswitch module is loaded properly: # lsmod | grep openvswitch. Use this command to see if ovsdb-server and ovs-vswitchd are up and running: # sudo service openvswitch-switch status. You should see output like this:

ovsdb-server is running with pid xxxx
ovs-vswitchd is running with pid yyyy

Configure Open vSwitch

Warning: Do not run these steps remotely

Because eth0 is our uplink and also our management interface, these steps break the network connection on your server. Therefore, do not run the following steps remotely; use a local or serial console to run them. The alternative is to use another NIC as the uplink on the Open vSwitch. In that case, replace the network identifier (eth0) in each step with the other network identifier, such as eth1.

The next steps demonstrate how to configure one Open vSwitch and use eth0 as the uplink. After that, the VMs will connect to this switch to get access to the network.

Before you start, make sure all the Open vSwitch services are up and running. You can run the following commands at any time to confirm that your Open vSwitch is ready to use.

Follow along with Chenta Lee

In this video, Chenta Lee shows you how to configure the Open vSwitch.

Video: Configuring the Open vSwitch

Check that the Open vSwitch module is loaded: # lsmod | grep openvswitch. You should see output like this: openvswitch 47849 0. Check the status of all Open vSwitch services: # sudo service openvswitch-switch status. You should see output like this:

ovsdb-server is running with pid xxxx
ovs-vswitchd is running with pid yyyy

You can start configuring your Open vSwitch after you make sure all the Open vSwitch services are running properly.

Reset eth0:

# sudo ifconfig eth0 0

Create a new vSwitch:

# sudo ovs-vsctl add-br ovs-internal

Add eth0 to ovs-internal:

# sudo ovs-vsctl add-port ovs-internal eth0

Bring up ovs-internal:

# sudo ifconfig ovs-internal up

Set the IP address for ovs-internal:

Static IP:

# sudo ifconfig ovs-internal <ip> <netmask>
# sudo route add default gw <gw_ip>

DHCP:

# sudo dhclient ovs-internal

Add eth1 and eth2 to ovs-internal. Enter the next commands in this exact order:

# sudo ovs-vsctl add-port ovs-internal eth1
# sudo ovs-ofctl mod-port ovs-internal eth1 down
# sudo ovs-ofctl mod-port ovs-internal eth1 noflood
# sudo ovs-vsctl add-port ovs-internal eth2
# sudo ovs-ofctl mod-port ovs-internal eth2 down
# sudo ovs-ofctl mod-port ovs-internal eth2 noflood

These commands bring down eth1 and eth2 and set the noflood option on them. The command order is important: if you don't follow this exact order, you'll create a loop in the network. You will enable these two ports once the ISNP device is ready and the cables are connected to eth1 and eth2.
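To confirm that the noflood option took effect (ovs-vsctl show does not display it), you can check the port configuration with ovs-ofctl. NO_FLOOD should appear in the config field for eth1 and eth2, possibly alongside PORT_DOWN while the ports are still administratively down:

# sudo ovs-ofctl show ovs-internal | grep -A 1 -E "\((eth1|eth2)\)"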

After finishing the Open vSwitch setup, you can use ovs-vsctl to see the status of your switches:

# sudo ovs-vsctl show 
ed1f5e9a-8c2e-4a1e-9fe8-73740f57589c 
    Bridge ovs-internal 
        Port ovs-internal 
            Interface ovs-internal 
                type: internal 
        Port "eth2" 
            Interface "eth2" 
        Port "eth1" 
            Interface "eth1" 
        Port "eth0" 
            Interface "eth0" 
    ovs_version: "1.4.0+build0"

If you want to make your Open vSwitch setting persistent, you could edit /etc/network/interfaces to add the persistent settings.

For a static IP configuration, change the setting to something like this:

# The loopback network interface 
auto lo 
iface lo inet loopback 

# The uplink on ovs-internal
auto eth0
iface eth0 inet static 
        address 0.0.0.0 

# The interface for connecting to the protection ports on ISNP
auto eth1
iface eth1 inet static 
        address 0.0.0.0 

# The interface for connecting to the protection ports on ISNP
auto eth2
iface eth2 inet static 
        address 0.0.0.0 

# The Open vSwitch setting 
auto ovs-internal
iface ovs-internal inet static 
        address 10.40.28.1 
        netmask 255.255.128.0 
        network 10.40.0.0 
        broadcast 10.40.127.255 
        gateway 10.40.0.1 
        dns-nameservers 10.40.1.1

For DHCP configuration, change the setting to something like this:

# The loopback network interface 
auto lo 
iface lo inet loopback 

# The uplink on ovs-internal
auto eth0
iface eth0 inet static 
        address 0.0.0.0 

# The interface for connecting to the protection ports on ISNP
auto eth1
iface eth1 inet static 
        address 0.0.0.0 

# The interface for connecting to the protection ports on ISNP
auto eth2
iface eth2 inet static 
        address 0.0.0.0 

# The primary network interface 
auto ovs-internal
iface ovs-internal inet dhcp

Making the noflood options on eth1 and eth2 persistent is important. However, there's no good way to do this in /etc/network/interfaces, so you need to add the following commands to /etc/rc.local (a complete example follows the snippet):

# sudo vi /etc/rc.local
ovs-ofctl mod-port ovs-internal eth1 noflood
ovs-ofctl mod-port ovs-internal eth2 noflood
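For context, a complete /etc/rc.local might look like the following minimal sketch. The only requirement is that the two ovs-ofctl lines run before the final exit 0:

#!/bin/sh -e
#
# rc.local - executed at the end of each multiuser runlevel.
# Re-apply the noflood option on the ISNP protection ports after every boot.
ovs-ofctl mod-port ovs-internal eth1 noflood
ovs-ofctl mod-port ovs-internal eth2 noflood
exit 0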

ISNP setup

Follow along with Chenta Lee

In this video, Chenta Lee shows you how to set up the IBM Security Network Protection.

Video: Setting up the IBM Security Network Protection

Set up your ISNP appliance, then connect eth1 and eth2 to the protection ports on it (as shown in Figure 1). When setup is complete, you can log in to the Local Management Interface to see the ISNP dashboard.

Figure 3. The ISNP dashboard
Log in to see the ISNP dashboard

For this article, we used the ISNP as an IPS only. We didn't use the available user-based application control and IP reputation features in the Network Access Policy; we used the default X-Force IPS policy. Only the "any any" rule at the end is enabled. The other rules shown are the initial rule set supplied with the XGS5100 as of the 5.1 release.

Figure 4. Choosing only the default X-Force IPS policy
Screenshot showing the default X-Force IPS policy


After you set up ISNP and connect the cables, bring up eth1 port and eth2 port on ovs-internal:

# sudo ovs-ofctl mod-port ovs-internal eth1 up
# sudo ovs-ofctl mod-port ovs-internal eth2 up

Remember to double-check that you set the noflood option correctly on both eth1 and eth2. If you don't set this option correctly, you'll create a network loop as soon as you connect the cables.

Create the first virtual machine and connect it to ovs-internal

First, you need to prepare a script to automatically connect the virtual NIC on the VM to ovs-internal when it's powered on. Copy this code to /etc/ovs-ifup:

# sudo vi /etc/ovs-ifup
     #!/bin/sh 
     switch='ovs-internal' 
     /sbin/ifconfig $1 0.0.0.0 up 
     ovs-vsctl add-port ${switch} $1

Add execution permission to the script: # sudo chmod +x /etc/ovs-ifup.

Use virt-manager to create the new virtual machine

You can use virt-manager to create a new VM. Because we're using the Ubuntu 12.04 server edition, no X server is installed, so we can't run GUI programs directly on the server. If you chose the desktop edition, the X server is installed and you can run virt-manager directly. To launch virt-manager from your KVM hypervisor, you can use X forwarding (see Resources) to show the GUI program on your local X server.

If you're using a Linux® workstation to connect to your remote KVM hypervisor, run the following command to enable X forwarding when connecting to the server using Secure Shell (SSH): # ssh -X user@<server_ip>.

You need to obtain the IP address of your server from ovs-internal instead of eth0: # ifconfig ovs-internal.

When you log in to your remote KVM hypervisor, you can launch virt-manager: # virt-manager. You should see the GUI program on your screen.

Figure 5. Launch the virt-manager to see the GUI program
Screenshot showing the GUI program

You can now start creating the first VM.

Figure 6. Step 1 in creating a VM
Screenshot showing how to create a virtual machine
Figure 7. Step 2 in creating a VM
Screenshot showing how to create a virtual machine

The new VM should have at least one VNIC on it.

Figure 8. The new VM should have one VNIC
Screenshot showing one VNIC on the virtual machine

You don't need to worry about configuring the VNIC right now because you'll modify it manually later. Once the VM is created, shut it down right away so you can manually modify the XML description file of the new VM. Edit /etc/libvirt/qemu/<VM NAME>.xml. The original VNIC setting in the XML file should look something like this:

# sudo vi /etc/libvirt/qemu/<VM NAME>.xml
    <interface type='bridge'> 
      <mac address='xx:xx:xx:xx:xx:xx'/> 
      <source bridge='xxxx'/>
      <model type='virtio'/> 
      <address type='pci' domain='0x0000' bus='0x00'
            slot='0x03' function='0x0'/> 
    </interface>

Change the VNIC setting as follows:

<interface type='ethernet'> 
      <mac address='xx:xx:xx:xx:xx:xx'/>
      <script path='/etc/ovs-ifup'/>
      <model type='virtio'/> 
      <address type='pci' domain='0x0000' bus='0x00'
            slot='0x03' function='0x0'/> 
</interface>

Tell the KVM hypervisor to load this new VM description: # sudo virsh define /etc/libvirt/qemu/<VM NAME>.xml.

Now it's time to power on the new VM. After it powers on, run the following commands; you should see that a new port has been created on ovs-internal.

# sudo ovs-vsctl show 
# sudo ovs-ofctl show ovs-internal

You can now install any OS you want on the VM and try to access the network. Notice that the VM's network is in bridge mode when the VM is connected to Open vSwitch, so the DHCP server in your environment should work. If no DHCP server is available, you might need to manually assign an IP address to the new VM.
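For example, with the static addressing scheme shown earlier in /etc/network/interfaces, you could configure the guest manually from inside the VM (these addresses are only examples; substitute values from your own network):

# sudo ifconfig eth0 10.40.28.100 netmask 255.255.128.0 up
# sudo route add default gw 10.40.0.1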

Because of a known bug in libvirt, the ovs-ifdown script can't be specified in the description file. Therefore, you need to manually disconnect the tap device from the switch when the VM is powered off. If you don't, you'll encounter an error the next time you want to boot the VM.

Figure 9. Boot error
Screenshot showing boot error

Use this command to manually disconnect the tap device from ovs-internal: # sudo ovs-vsctl del-port ovs-internal tapN.

You need the name of the tap device that should be disconnected from the switch. You can get that device name from the error message, as shown by the red block in Figure 9.
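If you'd rather not type the del-port command by hand each time, you can keep a small cleanup script on the hypervisor. This is a hypothetical helper (it is not the libvirt ovs-ifdown hook, which the bug above prevents you from using); run it manually after powering off a VM, passing the stale tap device name as the argument:

# sudo vi /etc/ovs-cleanup
     #!/bin/sh
     # Remove a stale tap port from ovs-internal after a VM is powered off.
     switch='ovs-internal'
     ovs-vsctl del-port ${switch} $1

Add execution permission to the script (# sudo chmod +x /etc/ovs-cleanup), then run it as # sudo /etc/ovs-cleanup tapN.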

Protect the VM from external traffic

Warning: Do not do this remotely

Warning: You should not do this remotely, because you'll be manipulating the Open vSwitch that eth0 is attached to. If you have an extra NIC you can use as the uplink of ovs-internal, you can ignore this warning.

Now you'll teach ovs-internal how to forward traffic to the ISNP device for packet inspection. Use the ovs-ofctl command to install the SDN rules manually on ovs-internal; this is where OpenFlow on the virtual switch comes into play. The structure of the rules you push to the switch is defined in the OpenFlow standard. Follow the same steps to protect other VMs on your server.

If you push the wrong rules to the switch, your VMs won't be able to use the network. To restore network connectivity, reset your switch with these commands:

# sudo ovs-ofctl del-flows ovs-internal
# sudo ovs-ofctl add-flow ovs-internal action=normal

In a future article, you'll see how to use the OpenFlow SDN controller to automate this process. Then all the rules will be automatically generated and pushed to the SDN switches.

Follow along with Chenta Lee

In this video, Chenta Lee shows you how to configure protection for the virtual machine.

Video: Configuring protection for the virtual machine

You need to gather two attributes from the first VM: the MAC address of the first VM and the port number that the VM is connected to on ovs-internal. You can find the MAC address assigned to the virtual NIC in /etc/libvirt/qemu/<VM NAME>.xml:

#sudo vi /etc/libvirt/qemu/<VM NAME>.xml
   <interface type='ethernet'>
      ... 
      <mac address='xx:xx:xx:xx:xx:xx'/> 
      ... 
   </interface>

Alternatively, you can log in to your VM and get the MAC address assigned to your network device.

The second thing you need is the port number that the VM is connected to on ovs-internal. Run the next command to get the port status on ovs-internal: # sudo ovs-ofctl show ovs-internal.

The output from the command should look like this:

OFPT_FEATURES_REPLY (xid=0x1): ver:0x1, dpid:0000e41f136c4952 
n_tables:255, n_buffers:256 
features: capabilities:0xc7, actions:0xfff 
 1(eth0): addr:e4:1f:13:6c:49:52 
     config:     0 
     state:      0 
     current:    1GB-FD COPPER AUTO_NEG 
     advertised: 10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD AUTO_NEG 
     supported:  10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD COPPER AUTO_NEG 
 2(eth1): addr:e4:1f:13:6c:aa:56
     config:     NO_FLOOD
     state:      0 
     current:    1GB-FD COPPER AUTO_NEG 
     advertised: 10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD AUTO_NEG 
     supported:  10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD COPPER AUTO_NEG 
 3(eth2): addr:e4:1f:13:6c:ab:57
     config:     NO_FLOOD
     state:      0 
     current:    1GB-FD COPPER AUTO_NEG 
     advertised: 10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD AUTO_NEG 
     supported:  10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD COPPER AUTO_NEG 
 4(tap0): addr:be:cc:7b:8b:c0:04 
     config:     0 
     state:      0 
     current:    10MB-FD COPPER 
 LOCAL(ovs-internal): addr:e4:1f:13:6c:49:52 
     config:     0 
     state:      0

In this output, the number before each parenthesis is the port number, and the name in parentheses is the device mapped to that port. Every VNIC on the VM is mapped to a tap device on the hypervisor. Therefore, you can see that the first VM is connected to port 4 on ovs-internal. Likewise, eth1 is on port 2, and eth2 is on port 3. You will see these numbers in the rules you push to the switch.

Notice that the port numbers on the switch might change after you reboot the server. You must use the correct port numbers in the rules you push to ovs-internal. In addition, eth0 won't always be on port 1. If you use the wrong uplink port number in the rules, the network traffic won't go through the ISNP device.
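If you want to look up a single port number without scanning the full ovs-ofctl show output (for example, after a reboot), you can query the ofport column of the Interface table directly with ovs-vsctl. The device names here are the ones used in this article's example:

# sudo ovs-vsctl get Interface eth0 ofport
1
# sudo ovs-vsctl get Interface tap0 ofport
4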

Assume that the MAC address of the first VM is 52:54:00:aa:bb:cc. Push the following rules to ovs-internal:

Rule 01: # sudo ovs-ofctl del-flows ovs-internal
Rule 02: # sudo ovs-ofctl add-flow ovs-internal
priority=100,in_port=1,dl_dst=52:54:00:aa:bb:cc,idle_timeout=0,action=output:2
Rule 03: # sudo ovs-ofctl add-flow ovs-internal
priority=100,in_port=3,dl_dst=52:54:00:aa:bb:cc,idle_timeout=0,action=output:4
Rule 04: # sudo ovs-ofctl add-flow ovs-internal
priority=100,in_port=4,idle_timeout=0,action=output:3
Rule 05: # sudo ovs-ofctl add-flow ovs-internal
priority=0,action=normal

Here is the rule-by-rule explanation (a consolidated sketch of these commands follows the explanation):

Rule 01: Flush every rule on ovs-internal.
Rule 02: If the flow comes from the uplink (eth0 on port #1) and the destination MAC is
         52:54:00:aa:bb:cc, forward the flow to eth1 (port #2).
Rule 03: After ISNP inspects the flow, it is returned on eth2 (port #3). Therefore,
         if the flow comes from port #3 and the destination MAC is 52:54:00:aa:bb:cc,
         forward it to the port that the first VM is connected to (port #4).
Rule 04: Forward every packet sent from the first VM (port #4) to eth2 (port #3).
Rule 05: The default rule that handles broadcast/multicast traffic. This rule has
         the lowest priority.
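To reduce the chance of typing the wrong MAC address or port number, you could wrap Rules 02 through 04 in a small script. This is only a sketch under the port layout used in this article (eth0 on port 1, eth1 on port 2, eth2 on port 3); the script name and variable names are ours, and you still need to push Rule 01 and Rule 05 once by hand:

# sudo vi /usr/local/bin/protect-vm.sh
     #!/bin/sh
     # Usage: protect-vm.sh <vm_mac> <vm_port>
     # Pushes the per-VM inspection rules (Rules 02-04) to ovs-internal.
     VM_MAC=$1     # for example, 52:54:00:aa:bb:cc
     VM_PORT=$2    # for example, 4
     ovs-ofctl add-flow ovs-internal \
         "priority=100,in_port=1,dl_dst=${VM_MAC},idle_timeout=0,action=output:2"
     ovs-ofctl add-flow ovs-internal \
         "priority=100,in_port=3,dl_dst=${VM_MAC},idle_timeout=0,action=output:${VM_PORT}"
     ovs-ofctl add-flow ovs-internal \
         "priority=100,in_port=${VM_PORT},idle_timeout=0,action=output:3"

For the first VM, the invocation would be: # sudo sh /usr/local/bin/protect-vm.sh 52:54:00:aa:bb:cc 4.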

The blue lines in Figure 10 show how the switch forwards the traffic sent from the external machine to ISNP and then to the first VM.

Figure 10. Forwarding traffic
Screenshot showing how traffic is forwarded

You can dump all the current rules on ovs-internal, and see how many packets hit each rule, by running this command: # sudo ovs-ofctl dump-flows ovs-internal.

The output should look something like this:

NXST_FLOW reply (0x4): 
   cookie=0x0, duration=4345.083s, table=0, n_packets=4637, n_bytes=441598, 
priority=100,in_port=4 actions=output:3 
   cookie=0x0, duration=4399.466s, table=0, n_packets=4618, n_bytes=449256,
priority=100,in_port=1,dl_dst=52:54:00:aa:bb:cc actions=output:2 
   cookie=0x0, duration=4363.898s, table=0, n_packets=4618, n_bytes=449256,
priority=100,in_port=3,dl_dst=52:54:00:aa:bb:cc actions=output:4 
   cookie=0x0, duration=4324.14s, table=0, n_packets=24095, n_bytes=1916023,
priority=0 actions=NORMAL

You can now run penetration tests against the first VM to see how ISNP protects it. Refer to the section "Verify that the ISNP device is protecting your VMs" for some simple tests that verify that the ISNP is protecting your VMs.

Create the second VM and connect it to Open vSwitch

You can create your second VM by cloning your first VM using virt-manager or manually creating a new one. Remember to modify the XML file to let the VM automatically connect to ovs-internal.

Protect the VM from internal traffic

To get the port number used by the second VM, run the ovs-ofctl command again: # sudo ovs-ofctl show ovs-internal.

The output will look something like this:

OFPT_FEATURES_REPLY (xid=0x1): ver:0x1, dpid:0000e41f136c4952 
n_tables:255, n_buffers:256 
features: capabilities:0xc7, actions:0xfff 
 1(eth0): addr:e4:1f:13:6c:49:52 
     config:     0 
     state:      0 
     current:    1GB-FD COPPER AUTO_NEG 
     advertised: 10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD AUTO_NEG 
     supported:  10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD COPPER AUTO_NEG 
 2(eth1): addr:e4:1f:13:6c:aa:56
     config:     NO_FLOOD
     state:      0 
     current:    1GB-FD COPPER AUTO_NEG 
     advertised: 10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD AUTO_NEG 
     supported:  10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD COPPER AUTO_NEG 
 3(eth2): addr:e4:1f:13:6c:ab:57
     config:     NO_FLOOD
     state:      0 
     current:    1GB-FD COPPER AUTO_NEG 
     advertised: 10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD AUTO_NEG 
     supported:  10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD COPPER AUTO_NEG 
 4(tap0): addr:be:cc:7b:8b:c0:04 
     config:     0 
     state:      0 
     current:    10MB-FD COPPER 
 5(tap1): addr:be:cc:7b:8b:c0:05
     config:     0 
     state:      0 
     current:    10MB-FD COPPER 
 LOCAL(ovs-internal): addr:e4:1f:13:6c:49:52 
     config:     0 
     state:      0

You can see that a new tap device is connected to ovs-internal at port 5; this is where the second VM connects.

Assume that the MAC address of the second VM is 52:54:00:aa:cc:dd. You need to push the following rules to ovs-internal to protect it from both external traffic and inter-VM traffic.

Rule 06: # sudo ovs-ofctl add-flow ovs-internal
priority=100,in_port=1,dl_dst=52:54:00:aa:cc:dd,idle_timeout=0,action=output:2
Rule 07: # sudo ovs-ofctl add-flow ovs-internal
priority=100,in_port=3,dl_dst=52:54:00:aa:cc:dd,idle_timeout=0,action=output:5
Rule 08: # sudo ovs-ofctl add-flow ovs-internal
priority=100,in_port=5,idle_timeout=0,action=output:3
Rule 09: # sudo ovs-ofctl add-flow ovs-internal
priority=100,in_port=2,dl_dst=52:54:00:aa:bb:cc,idle_timeout=0,action=output:4
Rule 10: # sudo ovs-ofctl add-flow ovs-internal
priority=100,in_port=2,dl_dst=52:54:00:aa:cc:dd,idle_timeout=0,action=output:5

Here is the rule-by-rule explanation (a shortcut using the earlier script sketch follows it):

Rule 06–08: These rules protect the second VM from external traffic.
Rule 09: Because you must also take care of inter-VM traffic now, you need to tell the
         switch how to forward that traffic after ISNP inspects it. This rule says
         that after ISNP returns the traffic, if the destination is the first VM,
         the traffic should be forwarded to port #4.
Rule 10: After ISNP inspects the traffic, it is returned on port #2. This rule
         forwards the traffic to port #5 if the destination is the second VM.
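If you created the hypothetical protect-vm.sh sketch shown earlier, Rules 06 through 08 follow the same per-VM pattern and can be pushed with a single invocation; Rules 09 and 10 (the inter-VM return paths) still need to be added by hand as shown above:

# sudo sh /usr/local/bin/protect-vm.sh 52:54:00:aa:cc:dd 5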

The red lines in Figure 11 show the new rules that we pushed to the switch to protect the VMs from inter-VM traffic.

Figure 11. New rules protect the VMs from inter-VM traffic
Drawing shows how new rules protect the VMs

You can use ovs-ofctl to verify that all the rules are pushed correctly to ovs-internal: # sudo ovs-ofctl dump-flows ovs-internal.

Now you can run penetration tests from one VM to another to see how ISNP protects them. The next section describes some simple tests to verify that the ISNP is protecting your VMs.

Verify that the ISNP device is protecting your VMs

Follow along with Chenta Lee

In this video, Chenta Lee shows you how to verify that the virtual machine is being protected.

Video: Verifying that the virtual machines are protected

The easiest way to verify that the network traffic went through the ISNP device is to use Network Graphs in the Local Management Interface in ISNP. Remember to first send some traffic to your VMs before checking the network graphs. In Figure 12, you can see that you can choose from many types of views.

Figure 12. Using the "Traffic Detail by User" view
Screenshot showing the IP address of the VM

We recommend using the Traffic Detail by User view to see if the IP addresses of your VMs appear in the network statistics, as shown in Figure 13.

Figure 13. Network statistics showing the IP addresses of the VMs
Screenshot showing network statistics

You can send a harmless attack to your VMs to see if the ISNP can really protect them. The ISNP will block the attacks, and the Local Management Interface will show a security event.

Run a web server in your VM. If your VM runs Linux, you can use this command to start a web server on port 8000: # python -m SimpleHTTPServer.

When the web server is running, send a URL_MANY_SLASHES attack (see Resources) to it. You can send the attack from one VM to another VM, or send it from an external machine. The ISNP will block all the attacks in both scenarios.

Sending a URL_MANY_SLASHES attack is easy: just use more than 500 slashes in your HTTP request to trigger the event. Open your web browser and enter this URL:

http://<web server IP>:8000///////////////.....////////////////////////

Remember to enter at least 500 slashes in the URL.

You can also use the following command to send a URL_MANY_SLASHES attack:

# wget http://10.40.25.10:8000/`python -c "print '/'*500"`
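If wget isn't available on the machine you're testing from, an equivalent request with curl (assuming curl is installed) looks like this:

# curl -s -o /dev/null http://10.40.25.10:8000/`python -c "print '/'*500"`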

After sending the attack, check the security events in the Event Log view. Select IPS Events to see all the security events blocked by ISNP, as shown in Figure 14.

Figure 14. Security events blocked by ISNP
Screenshot shows security events the ISNP blocked

If you see the security events in your Local Management Interface, you've succeeded.


Conclusion

This article has described how to manually configure the ISNP appliance (XGS5100) into an Open vSwitch–based software-defined network. This type of virtual switch is commonly found in cloud environments. The steps to manually configure the flows in the virtual switch are complex; the next article in this series will address this by showing how to use an OpenFlow–based controller to manage the flows.

Acknowledgements

The authors would like to acknowledge the excellent testing work of Lun Pin Yuan, who helped capture some of the steps in this article.

Resources
