How to Use a Transit VPC with Firewall/Routers to Control Traffic in a Collection of Spoke VPCs

4 min read

Extend your existing firewall infrastructure to the IBM Cloud or create one from scratch.

Off-the-shelf firewall appliances can be a central point of control for many businesses. IBM Cloud supports appliances from a number of different vendors, including the following:

  • Fortinet FortiGate Next-Generation Firewall - A/P HA
  • Juniper vSRX Next-Generation Firewall
  • Check Point CloudGuard IaaS Security Cluster

The high-level architecture often follows this pattern:

This is a hub-and-spoke model where all data transfers through the firewall/routing appliances. The hub is called a transit VPC. The spoke VPCs are where teams provide isolated services. A spoke could be a department (e.g., marketing, development, sales), a layer of software (e.g., frontend, backend, data) or a DevOps environment (e.g., production, stage, test).

In this environment, data within a spoke does not pass through the firewalls, but data from a spoke to the Internet or from a spoke to a spoke does pass through a firewall. Firewalls are configured to limit access to meet corporate security standards. Traffic can also be monitored/audited from the firewalls.

Configuring a firewall appliance is beyond the scope of this post; instead, the firewall will be a stock Linux instance configured using iptables.

A zonal view of a multi-zone architecture looks like this:

The zonal structure of the architecture results in simple routing rules. Some important notes:

  • There are three zone CIDR blocks: 10.8.0.0/16, 10.9.0.0/16 and 10.10.0.0/16.
  • Each VPC carves out a CIDR block in each zone.
  • Keeping traffic within a zone ensures resiliency from a zone failure.

Prerequisites 

Before continuing, you must satisfy the following prerequisites:

  • Permissions to create resources, including VPCs, instances, Transit Gateway, etc.
  • VPC SSH key
  • Shell with ibmcloud cli and terraform
  • Follow the instructions to satisfy the Prerequisites for NLB with routing mode
  • IAM policy to create instances with network interfaces that allow spoofing. See about IP spoofing checks:
    • Allow spoofing:

      ibmcloud iam user-policy-create YOUR_USER_EMAIL_ADDRESS --roles "IP Spoofing Operator" --service-name is

      Note: Even if you are an account administrator, you must enable the creation of network interfaces that allow spoofing.
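
To confirm that the policy is in place, you can list the policies attached to your user (this is just a quick check, using the same email address as above):

ibmcloud iam user-policies YOUR_USER_EMAIL_ADDRESS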

Creating transit and spoke VPCs

We use a Terraform configuration to create the transit and spoke resources. The GitHub repository has detailed instructions, a more in-depth description of the iptables configuration and additional analysis.

Create VPCs, routes, network load balancer (NLB) and firewalls. The default configuration generates a small footprint:

  • Single zone
  • Network load balancer (NLB) with one firewall
  • Two spokes where virtual server instances will be added at a later stage
git clone https://github.com/IBM-Cloud/vpc-transit-firewall
cd vpc-transit-firewall
cp template.local.env local.env
edit local.env; # some values must be supplied
source local.env
terraform init
terraform apply
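
When the apply completes, the transit and spoke VPCs should appear in your account. A quick way to check from the shell (the names will carry the prefix you set in local.env):

ibmcloud is vpcs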

Configuring the iptables-based firewall

The Terraform configuration created an iptables firewall configured to forward most traffic. Check out an iptables tutorial to understand the configuration in more depth. If you SSH to the firewall instance, take a look at the files /etc/iptables.private and /etc/iptables.public. The public version is installed by default; it allows the spokes to connect to each other and to the Internet. The design of the firewall settings can be found in the README of the GitHub repository.
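
As a rough illustration only (a minimal sketch, not the actual contents of /etc/iptables.public, and the interface name is an assumption), a forwarding firewall of this shape passes TCP and drops everything else it is asked to forward:

# enable packet forwarding on the firewall instance
sysctl -w net.ipv4.ip_forward=1
# drop forwarded traffic unless a later rule accepts it
iptables -P FORWARD DROP
# forward TCP between the spokes and out to the Internet
iptables -A FORWARD -p tcp -j ACCEPT
# masquerade traffic leaving toward the Internet (interface name is an assumption)
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE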

Routing table in spoke VPCs

The subnet in the spoke VPC is configured with the following Terraform configuration (see the spoke configuration in the GitHub repository). Look at the following snippets:

Egress to firewall

resource "ibm_is_vpc_routing_table_route" "zone" {
  vpc           = var.vpc_id
  routing_table = var.vpc_routing_table_id
  zone          = var.zone
  name          = var.name
  destination   = "0.0.0.0/0" # allow spoke to spoke and external access through the firewall
  # destination   = var.cidr_zone # allow only spoke to spoke access through firewall
  action        = "deliver"
  next_hop      = var.next_hop
}

This route sends all traffic (destination = 0.0.0.0/0) directly to the firewall in the transit VPC; the firewall is passed as var.next_hop. If the spoke is to be isolated from the Internet and only allow spoke-to-spoke communication, you can switch the comments on the destination so that only the zone CIDR (var.cidr_zone) is delivered through the firewall. This is not required, however, since Internet access can also be blocked by configuring the firewall.

Attach egress route table to subnet

resource "ibm_is_subnet" "zone" {
  routing_table   = var.vpc_routing_table_id
  ...
}

It is interesting to visit the subnets and route tables in the IBM Cloud Console. Open Virtual private cloud; on the left, select Subnets and look at the subnets that were created with your prefix. Click on one of the spoke subnets, then click on the Routing table in the subnet's detail page. You will see the route discussed above.
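
The same information is available from the shell. A minimal sketch, assuming the VPC CLI plugin is installed and you substitute one of your spoke subnet names:

# list the subnets, then show a spoke subnet; its details include the attached routing table
ibmcloud is subnets
ibmcloud is subnet SPOKE_SUBNET_NAME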

For more detailed instructions check out the README.

Testing

Now that the infrastructure is in place, we can add some load to the spokes. The first step is to add an instance to each spoke and the transit bastion:

cd load_tf
terraform init
terraform apply

The result is a zonal architecture captured below. Included are example IP addresses (yours will be different):

Notice the debug output. It will look something like this:

spoke_instances = [
  {
    "jump_floating_ip" = "52.118.184.49"
    "name" = "vpcfw-load-0-us-south-1"
    "primary_ipv4_address" = "10.8.1.4"
    "spoke_key" = "0"
    "ssh" = "ssh -J root@52.118.184.49 root@10.8.1.4"
    "zone" = "us-south-1"
    "zone_key" = "0"
  },
  {
    "jump_floating_ip" = "52.118.184.49"
    "name" = "vpcfw-load-1-us-south-1"
    "primary_ipv4_address" = "10.8.2.4"
    "spoke_key" = "1"
    "ssh" = "ssh -J root@52.118.184.49 root@10.8.2.4"
    "zone" = "us-south-1"
    "zone_key" = "0"
  },
]
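
If you need these values again later, the output can be reprinted at any time from the load_tf directory:

terraform output spoke_instances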

Copy/paste the ssh command for one of the spokes, then curl the other spoke in the same zone. Using the above output, the resulting session is captured below; your IP addresses will be different:

$ ssh -J root@52.118.184.49 root@10.8.1.4
...
root@vpcfw-load-0-us-south-1-private:~# curl 10.8.2.4/instance
vpcfw-load-1-us-south-1-private

All the instances have been pre-configured to run an nginx web server with an /instance path that will return the name of the instance.
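
The repository's provisioning takes care of this for you; as a sketch of the idea only (an assumption, not the repository's actual user data), the effect is roughly:

# install nginx and publish the instance name at the /instance path (sketch only)
apt-get update && apt-get install -y nginx
hostname > /var/www/html/instance
# nginx serves /var/www/html by default, so GET /instance returns the instance name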

But how can you tell if the data is actually passing through the firewall? First, you can verify that ICMP packets are not delivered through the NLB:

ping -c 2 10.8.2.4
PING 10.8.2.4 (10.8.2.4) 56(84) bytes of data.

--- 10.8.2.4 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss

The firewall is configured to allow only TCP traffic, so ICMP will not be successful; notice the 100% packet loss.

Create and run this program:

cat > curl.sh <<EOF
#!/bin/bash
while true; do
  date
  curl 10.8.2.4/instance
  sleep 1
done
EOF
chmod 775 curl.sh
nohup ./curl.sh & # run in the background; output is captured in nohup.out
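
While the loop is running, you can watch the requests cross the firewall by capturing packets on the firewall instance. A minimal sketch (the interface name here is an assumption):

# run on the firewall instance in the same zone; adjust the interface name as needed
tcpdump -n -i eth0 'tcp port 80'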

The GitHub repository has instructions on how to use tcpdump to watch the traffic as it is routed through the firewall. It is also possible to stop the firewall: navigate to the Virtual server instances for VPC page and click on the firewall instance in the same zone. On the displayed details page, click Actions... > Stop.

This will terminate your terminal session. After a few minutes, refresh the browser page and use Actions... > Start to start the instance again. Then, SSH to the spoke instance again. You should see a failure in the nohup.out file. Here is an example snippet:

$ ssh -J root@52.118.184.49 root@10.8.1.4
...
root@vpcfw-load-0-us-south-1-private:~# grep Failed nohup.out
curl: (28) Failed to connect to 10.8.2.4 port 80: Connection timed out
# verify curl.sh is still running
root@vpcfw-load-0-us-south-1-private:~# tail -f nohup.out
100    32  100    32    0     0    380      0 --:--:-- --:--:-- --:--:--   380
vpcfw-load-1-us-south-1-private
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
...

Clean up

Ensure that the current working directory is the load_tf directory:

terraform destroy

Now clean up in the root directory:

cd ..
terraform destroy

Summary and next steps

Firewalls are required in a number of scenarios. You can use the resources described in this post to extend your existing firewall infrastructure to the IBM Cloud or create one from scratch.
