Cloud Foundry Enterprise Environment with Network Isolation


By: Gili Mendel and Lucinio Santos


Cloud Foundry Enterprise Environment (CFEE) v2.2.0 can now operate behind an isolated network that protects and secures the environment from external threats.

An isolated network is established by leveraging private VLANs and a set of control mechanisms that route, filter, and protect traffic flowing into and out of a CFEE cluster. You can set up an isolated network using technologies like Virtual Router Appliances (VRAs).

A CFEE instance creates an IBM Cloud Kubernetes Service cluster and deploys Cloud Foundry on top of it. The Kubernetes cluster master node manages the worker nodes, where workloads run. Access to workloads is controlled by Application Load Balancers (ALBs), which distribute workload requests among worker nodes. IBM Cloud manages the master node, while the worker nodes always reside in the same account as the CFEE environment.

Although earlier versions of CFEE enabled only the public Kubernetes ALBs, Cloud Foundry Enterprise Environment v2.2.0 enables both the public and private Kubernetes ALBs. This means that you can access a CFEE v2.2.0 instance from both the public and the private network. This is critical for an enterprise that wants to protect the platform hosting critical applications, since all interaction with the platform and its applications can take place through the private network.

Management control plane and the isolated network

In v2.2.0, the CFEE management control plane leverages the Virtual Private Network (VPN) connection used by the Kubernetes master that manages the cluster worker nodes. The CFEE management control plane manages CFEE instances via their Kubernetes master node, not via the ALBs or worker nodes. This means that the CFEE control plane can manage Cloud Foundry even when the isolated network blocks general inbound and outbound access.

After you create a CFEE v2.2.0, the CFEE control plane creates an Identity and Access Management (IAM) service ID. This service ID resides in the same account as the CFEE cluster and has permissions to the cluster's master node. Subsequently, the CFEE control plane uses this service ID (through its API key) to access the cluster by leveraging the master's VPN. Most importantly, through this master connection, the CFEE control plane runs the Kubernetes Helm charts that configure Cloud Foundry.

The diagram below depicts a CFEE installed behind a VRA network perimeter:

[Diagram: CFEE installed behind a VRA network perimeter within IBM Cloud Infrastructure]

CFEE isolated network access control

The CFEE control plane accesses CFEE instances (network-isolated or not) to perform management operations. Similarly, workloads running on a CFEE may need to reach outside the isolated network. For example, applications may need outbound access to public RESTful services or to packages from public registries (such as Docker images or npm packages).

You can control inbound/outbound connectivity to/from an isolated network by configuring rules for specific protocols, sources, and targets. You can create and configure those rules using a number of technologies (e.g., a VRA). However, the simplest way to control network access to a Kubernetes cluster is to create Calico network policies that define firewall rules specifically for the cluster.

Network access rules in a network-isolated CFEE control inbound or outbound traffic access:

  • Inbound access: When a Kubernetes cluster is installed, it automatically sets and configures Calico rules to allow management access from the control plane to the worker nodes on which CFEE cells are provisioned.

  • Outbound access: The target outbound endpoints that you need to open up are listed in the CFEE documentation. Some target endpoints may sit behind an edge network, such as IBM Cloud Internet Services or Akamai. In that case, you might need to open up (potentially) thousands of IP addresses, a list that can change often. You can avoid enumerating many of the required endpoints by using generic rules; for example, consider opening up outbound access to all HTTP and HTTPS traffic from worker nodes. Here is an example that allows all outgoing TCP connections to ports 80 and 443:

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: cfee-egress-allow
spec:
  egress:
  - action: Allow
    destination:
      ports:
      - 443
      - 80
    protocol: TCP
    source: {}
  order: 2600
  selector: ibm.role in { 'worker_public' }
  applyOnForward: true
  types:
  - Egress
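
Note that opening ports 80 and 443 alone does not cover name resolution when DNS lookups leave the cluster for an external DNS server. A companion rule in the same style could open DNS as well. The following is a sketch rather than a policy taken from the CFEE documentation; the policy name and order value are assumptions:

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: cfee-egress-allow-dns
spec:
  egress:
  # DNS over UDP (the common case)
  - action: Allow
    destination:
      ports:
      - 53
    protocol: UDP
    source: {}
  # DNS over TCP (large responses, zone transfers)
  - action: Allow
    destination:
      ports:
      - 53
    protocol: TCP
    source: {}
  order: 2600
  selector: ibm.role in { 'worker_public' }
  applyOnForward: true
  types:
  - Egress

Once calicoctl is configured against the cluster, policies like this can be applied with calicoctl apply -f <file>.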

For more information, see “Operating in an isolated network” in the CFEE documentation.
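
For illustration, inbound management access is expressed in the same Calico policy language. CFEE configures the real inbound rules automatically at install time; in the sketch below, the policy name and the 10.0.0.0/8 source CIDR are placeholders, not the actual control-plane values:

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: example-ingress-allow
spec:
  ingress:
  - action: Allow
    protocol: TCP
    source:
      nets:
      - 10.0.0.0/8   # placeholder CIDR standing in for the management network
  order: 2600
  selector: ibm.role in { 'worker_public' }
  applyOnForward: true
  types:
  - Ingress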

Private access endpoints

Independent of the isolated network support, with CFEE v2.2.0 you can bind applications to services through the IBM Cloud Service Endpoint (CSE). This means that you can consume services that support CSE through private endpoints, without traffic traversing the public internet.

Note that using private endpoints requires that the IBM Cloud account first be enabled for Virtual Routing and Forwarding (VRF). VRF also routes across VLANs in your account (VLAN spanning), a capability required by the Kubernetes cluster.

Learn more about CFEE 2.2.0 or check out CFEE in the IBM Cloud Catalog.
