On-Premises Private Access to Workloads Across Zones Using a DNS GLB and VPC NLB

How cloud workloads can be privately accessed from your on-premises data center

It may be time to modernize or create new workloads in the cloud, and the IBM Cloud provides a secure, compliant and elastic environment to host your applications. You can create and automate workloads that can be run anywhere with consistency, while only paying for what you use and scaling up or down at any time.

This blog post describes how cloud workloads can be privately accessed from your on-premises data center. The application will be distributed over cloud compute nodes for scalability and across multiple zones in a region for high availability. An IBM Cloud Virtual Private Cloud (VPC) provides high-performance network connectivity to Virtual Server Instances (VSIs) and services. IBM Cloud Load Balancers are used to distribute requests to the instances. The detailed specification for this example is captured in a source code repository and then provisioned at the click of a button using IBM Cloud Schematics:

Architecture.

There is an associated GitHub repository that captures the architecture. This post will walk you through the solution and explain how to provision and test it.

TL;DR: You can skip down to the Provision section below to start creating the resources.

Network Load Balancer (NLB)

In a VPC, a load balancer distributes workloads over multiple Virtual Server Instances. There are two types of IBM Cloud Load Balancers for VPC: application load balancers and network load balancers. This post describes the use of an IBM Cloud Network Load Balancer for VPC (NLB):

Network Load Balancer.

The Zone in the diagram is a physically isolated Availability Zone (AZ) within a multi-zone region. Spreading load across all of the zones will provide resiliency, even in the unlikely event of a zone failure. 

Global Load Balancer (conceptual)

Conceptually, the architecture is captured in this diagram:

Conceptual private access to a multi-zone load-balanced architecture.

The on-premises network is connected to the IBM Cloud using Direct Link into a Transit Gateway for distribution. The IBM Cloud DNS Services instance provides Domain Name System (DNS) resolution. A DNS zone is a name like "widgets.cogs". The "DNS GLB backend.widgets.cogs" in the diagram is an IBM Cloud DNS Services Global Load Balancer with the name "backend" in the DNS zone "widgets.cogs".
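The fully qualified name that clients resolve is simply the GLB name prefixed to the DNS zone. A quick sketch, using the names from the example above:

```shell
# The GLB record name prefixed to the DNS zone yields the name
# that on-premises clients resolve.
name=backend
zone=widgets.cogs
fqdn="$name.$zone"
echo "$fqdn"    # → backend.widgets.cogs
```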

The conceptual diagram above shows traffic flowing through the DNS Global Load Balancer (GLB). A DNS load balancer provides distribution using DNS name resolution. This portion of the Architecture diagram captures the DNS GLB configuration:

DNS GLB configuration.

Three pools are created in the Terraform snippet below, one for each AZ. The monitor is the health check, performed through an interface in the supplied health check subnet. The health check targets each origins member (in this example, a single NLB per pool). You can find the full file here:

resource "ibm_dns_glb_pool" "cloud" {
  for_each                  = module.zone
  origins {
    address = each.value.lb.hostname
  }
  monitor             = ibm_dns_glb_monitor.cloud.monitor_id
  healthcheck_subnets = [each.value.subnet_dns.resource_crn]
}

The GLB "backend" in the DNS zone "widgets.cogs" is captured in the Terraform snippet below. The az_pools blocks capture the mapping from the requester's availability zone to the corresponding pool, documented as Location policy:

resource "ibm_dns_glb" "widgets" {
  name        = "backend"
  zone_id     = ibm_dns_zone.widgets_cogs.zone_id
  dynamic "az_pools" {
    for_each = module.zone
    content {
      availability_zone = az_pools.value.zone
      pools             = [ibm_dns_glb_pool.cloud[az_pools.key].pool_id]
    }
  }
}
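The az_pools mapping behaves like a lookup from the requester's availability zone to a pool. The sketch below illustrates the idea in plain shell; the zone and pool names are hypothetical, not taken from the repository:

```shell
# Illustrative only: mimic the Location policy lookup from the
# requester's availability zone to a pool (names are hypothetical).
pool_for_zone() {
  case "$1" in
    us-south-1) echo "pool-us-south-1" ;;
    us-south-2) echo "pool-us-south-2" ;;
    us-south-3) echo "pool-us-south-3" ;;
    *)          echo "pool-default" ;;  # no mapping: fall back to a default
  esac
}
pool_for_zone us-south-2    # → pool-us-south-2
```

In the real GLB, this lookup happens inside the DNS Services resolver: the answer is chosen from the pool mapped to the zone of the resolver location that received the query.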

DNS custom resolver locations

The Terraform configuration snippet below shows the custom resolver resources and locations. Notice there is a location in each zone:

resource "ibm_dns_custom_resolver" "cloud" {
  dynamic "locations" {
    for_each = module.zone
    content {
      subnet_crn = locations.value.subnet_dns.resource_crn
    }
  }
}

Each location will create a DNS resolver that has an IP address in the associated subnet. The on-premises DNS resolver will forward requests to the IBM DNS Service through these IP addresses.

GLB step-by-step

This diagram shows the step by step DNS resolution of the GLB, starting with the on-premises client:

DNS resolution steps.

  1. The client asks the on-premises DNS resolver for the IP address of backend.widgets.cogs.
  2. The on-premises DNS resolver chooses one of the custom resolver locations, round robin.
  3. The request is forwarded to the selected resolver location.
  4. The IP address of the NLB in the location's zone is returned and then provided to the client.
  5. The client establishes a TCP connection directly to the NLB, which distributes the request to one of the backend instances in that AZ.
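Step 2 above can be pictured as a simple round-robin selection over the resolver location IPs. This is an illustration only; the IPs mirror the example Corefile shown later, and the selection logic inside a real resolver may differ:

```shell
# Illustrative round-robin pick over custom resolver location IPs
# (IPs are examples; real resolvers implement their own policy).
locations="10.0.0.149 10.0.1.149"
i=0
pick_location() {
  set -- $locations           # word-split the list into $1..$n
  idx=$(( i % $# + 1 ))       # cycle through 1..n
  i=$(( i + 1 ))
  eval "echo \$$idx"
}
pick_location    # → 10.0.0.149
pick_location    # → 10.0.1.149
pick_location    # → 10.0.0.149
```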

Provision

Prerequisites 

Before continuing, you must satisfy the following prerequisites:

  • Permissions to create resources, including VPC and instances, Transit Gateway, DNS Service, etc.
  • VPC SSH key

Create the resources using IBM Cloud Schematics:

  1. Log in to the IBM Cloud.
  2. Click Schematics Workspaces.
  3. Click Create workspace to create a new workspace.
  4. Enter https://github.com/IBM-Cloud/vpc-dnsglb-nlb for the GitHub repository.
  5. Select Terraform version terraform_v1.1.
  6. Click Next.
  7. Optionally, change the Workspace details and click Next.
  8. Click Create.

In the new workspace Settings panel, initialize the variables by clicking the menu selection on the left. You must provide values for the variables that do not have defaults:

Click Apply plan to create the resources. Wait for completion.

Test

Note: A more complete description of the steps below and troubleshooting tips can be found in the GitHub repository.

Gather Schematics workspace output

Open the Cloud Shell or your own terminal with ibmcloud cli, schematics plugin and jq installed (see Getting started):

ibmcloud schematics workspace list
WORKSPACE=<workspace ID column value of the workspace you just created/applied>
ibmcloud schematics output --id $WORKSPACE --output json | jq -r '.[0].output_values[].onprem.value'
# to troubleshoot it may be useful to get some cloud data
# ibmcloud schematics output --id $WORKSPACE --output json | jq -r '.[0].output_values[].cloud.value'

Example on-prem output:

{
  "floating_ip": "52.116.142.140",
  "glb": "backend.widgets.cogs",
  "ssh": "ssh root@52.116.142.140"
}

In a laptop terminal session, ssh to the instance representing on-premises. Then turn off the default DNS resolver and install a simpler CoreDNS resolver configured to forward to the DNS locations provided by the cloud:

ssh root@52.116.142.140; # copy from your output
...
# download CoreDNS
version=1.9.3
file=coredns_${version}_linux_amd64.tgz
wget https://github.com/coredns/coredns/releases/download/v${version}/$file
tar zxvf $file

# turn off the default dns resolution
systemctl disable systemd-resolved
systemctl stop systemd-resolved


# chattr -i stops the resolv.conf file from being updated, configure resolution to be from localhost port 53
rm /etc/resolv.conf
cat > /etc/resolv.conf <<EOF
nameserver 127.0.0.1
EOF
chattr +i /etc/resolv.conf
cat /etc/resolv.conf
ls -l /etc/resolv.conf

# CoreDNS will resolve on localhost port 53. DNS_SERVER_IPS are the custom resolver locations
cat > Corefile <<EOF
.:53 {
    log
    forward .  $(cat DNS_SERVER_IPS)
    prometheus localhost:9253
}
EOF
cat Corefile
./coredns

Here is an example of the last few lines, including Corefile and start up for CoreDNS. Your output might be a little different:

...
root@glbnlb-onprem:~# cat Corefile
.:53 {
    log
    forward .  10.0.0.149 10.0.1.149
    prometheus localhost:9253
}
root@glbnlb-onprem:~# ./coredns
.:53
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11

Create a second SSH session to the on-premises Ubuntu instance that is running CoreDNS. Try some dig and curl commands. Finally, leave the curl running in a while loop:

ssh root@...
...
glb=backend.widgets.cogs
dig $glb
dig $glb; # try a few times
curl $glb/instance
while sleep 1; do curl --connect-timeout 2 $glb/instance; done

Open the VPC Instances list and stop the zonal instances to see the load get balanced as failures are experienced:

It will take a couple of minutes for the DNS GLB to detect unhealthy origins. If all of the zonal instances are stopped, a fallback dnstlb-test instance will be used.
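The fallback behavior can be pictured as: answer from a healthy zonal origin when one exists, otherwise answer with the fallback. A minimal sketch, with hypothetical names:

```shell
# Illustrative only: prefer a healthy zonal origin, otherwise fall back.
healthy_origins=""              # e.g. "nlb-zone1 nlb-zone2"; empty = all down
fallback="fallback-instance"    # hypothetical fallback origin
select_origin() {
  if [ -n "$healthy_origins" ]; then
    set -- $healthy_origins
    echo "$1"                   # first healthy origin
  else
    echo "$fallback"
  fi
}
select_origin                   # → fallback-instance (all zonal origins down)
healthy_origins="nlb-zone1 nlb-zone2"
select_origin                   # → nlb-zone1
```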

Clean up

Navigate to the Schematics Workspaces list and open your workspace:

  • Click Actions > Destroy resources.
  • Wait for resources to be destroyed.
  • Click Actions > Delete workspace.

Summary and next steps

DNS load balancers can be a useful resource for constructing a highly available architecture. The IBM Cloud DNS Services Global Load Balancer provides zonal and regional distribution of traffic. The IBM Cloud Network Load Balancer for VPC distributes traffic directly to instances using Direct Server Return.

Use the provided GitHub repository as a starting point to create your own application. Choose from the hundreds of IBM Cloud services to simplify your development. If you do not have IBM Direct Link consider using Virtual Private Networking (see VPNs for VPC).
