Installing Cloud Foundry Enterprise Environment
The installation of Cloud Foundry Enterprise Environment is a multi-step process.
- Target namespace
- Install the IBM Certified Container
- Worker nodes
- Inbound ports
- Create persistent storage for Cloud Foundry deployment tool
- Deploy the Helm release
- Deploy Cloud Foundry Enterprise Environment by using the Cloud Foundry deployment tool
- Deploy Cloud Foundry Enterprise Environment by using the configuration manager CLI
- IBM Cloud Private Cloud Foundry management console
Note: Cloud Foundry Enterprise Environment currently only supports IBM Cloud Private that is installed on Ubuntu.
Target namespace
You can use the default namespace or a target namespace of your choice. You must target the same namespace when you install the IBM Certified Container and deploy the Helm chart because the load_cloudpak.sh script loads
images into the private registry, and all images that are loaded into the private registry include the target namespace in their name.
Unless stated otherwise, any resources that you create for the Helm chart that belong to a namespace, such as Kubernetes secrets, must be created in the target namespace.
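For example, if your target namespace is cfee, you would create a secret for the chart as follows (the secret name, key, and value are placeholders for illustration):

```
kubectl -n cfee create secret generic <secret-name> --from-literal=<key>=<value>
```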
The Helm chart requires a number of specific policies and privileges to deploy Cloud Foundry Enterprise Environment. If you are not using the default namespace, ensure that you set the following permissions.
- The target namespace must have a pod security policy that permits pods to run with any user, such as ibm-anyuid-psp. If you are using hostPath volumes for a demonstration installation, use a policy that permits hostPath, such as ibm-anyuid-hostpath-psp. You can bind a pod security policy to a namespace when you create it by using the management console. The following kubectl commands show how you can bind a policy to a namespace called cfee by using the CLI:

  ```
  kubectl create namespace cfee
  kubectl -n cfee create rolebinding ibm-anyuid-clusterrole-rolebinding --clusterrole=ibm-anyuid-clusterrole --group=system:serviceaccounts:cfee
  ```

- The default service account for the target namespace must have cluster administrator privileges. This permission is required for the Cloud Foundry deployment tool to determine the IP address of the proxy node and the port for the IBM Cloud Private management ingress service, and for the Open Service Broker to deploy Helm charts. You can run the following command to create a ClusterRoleBinding that grants these permissions to a non-default namespace. In the following example, the target namespace is called cfee:

  ```
  kubectl create clusterrolebinding cfee-serviceaccount-cluster-admin --clusterrole=cluster-admin --serviceaccount=cfee:default
  ```
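To confirm that the default service account received cluster administrator privileges, you can impersonate it with kubectl auth can-i. This is a quick sanity check, assuming the cfee namespace from the example above:

```
kubectl auth can-i '*' '*' --as=system:serviceaccount:cfee:default
```

The command prints yes when the ClusterRoleBinding is in effect.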
Install the IBM Certified Container
Complete the following steps to download and install the Cloud Foundry Enterprise Environment IBM Certified Container chart.
- Download the IBM Certified Container chart from IBM Passport Advantage®
- Prepare for Installing IBM software onto IBM Cloud Private, but do not perform the cloudctl catalog load-archive step. Follow the remaining steps on this page instead.
- Unpack the IBM Certified Container by using the following command:

  ```
  tar xvf <IBM Cloud Private binary download>.tgz
  ```
- Load the IBM Certified Container into IBM Cloud Private:

  ```
  scripts/load_cloudpak.sh -n <namespace> -c <ICP hostname> -u <ICP User> -a ./ibm-cfee-installer-archive-3.2.0-*.tgz
  ```

  The default values are -n default, -c mycluster.icp, and -u admin.
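For example, with those default values the command looks like this:

```
scripts/load_cloudpak.sh -n default -c mycluster.icp -u admin -a ./ibm-cfee-installer-archive-3.2.0-*.tgz
```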
Worker nodes
There must be a minimum of four worker nodes in your cluster. All worker nodes must contain at least four cores each. The worker nodes must have the role of worker, which means they must have a label with node-role.kubernetes.io/worker=true.
Each worker node can be used by either a control plane instance or a cell instance. Placement of both the control plane instances and the cell instances is determined by the user. The maximum number of cell instances and control plane instances is limited by the number of worker nodes.
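To confirm that your cluster meets the minimum of four worker nodes, you can list the nodes that carry the worker role label:

```
kubectl get nodes -l node-role.kubernetes.io/worker=true
```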
All worker nodes for control plane instances must be modified to ensure proper operation. Perform the following changes to each worker node that is a control plane instance:
- Label the worker nodes that are used to run control instance pods as bcf.type=control by running the following command:

  ```
  kubectl label node <node-name> bcf.type=control
  ```
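You can verify the label by listing the nodes that carry it (a quick check, not required by the installer):

```
kubectl get nodes -l bcf.type=control
```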
All worker nodes for cell instances must be modified to ensure proper operation. Perform the following changes to each worker node that is a cell instance:
- Label the worker nodes that are dedicated to run diego-cell pods as bcf.type=diego-cell by using the following command:

  ```
  kubectl label node <node-name> bcf.type=diego-cell
  ```

- Taint the worker nodes that are dedicated to run diego-cell pods as dedicated=diego-cell by running the following command:

  ```
  kubectl taint node <node-name> dedicated=diego-cell:NoSchedule
  ```
- Change grub settings on the worker nodes that run diego-cell instances to ensure there are no issues with the cgroup swap limit while Docker is running. Without this modification, you might see the following error messages:

  ```
  WARNING: Your kernel does not support cgroup swap limit.
  WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.
  ```

  or

  ```
  memory.memsw.limit_in_bytes: permission denied
  ```
For each worker node that is a cell instance in your environment, complete the following steps:
- SSH to the worker node. Note: You might need to SSH to the master node first, and then to the worker nodes from the master.
- Check /etc/default/grub to ensure that the following line exists:

  ```
  GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
  ```

- If you made changes in /etc/default/grub, update grub with the following command:

  ```
  sudo update-grub
  ```
- If grub was updated, reboot the worker node. Follow either the standard Kubernetes Maintenance on a Node procedure or the procedure that is used by your organization. For example:
  - Mark the worker node as unschedulable:

    ```
    kubectl cordon <worker node>
    ```

  - Drain the worker node:

    ```
    kubectl drain <worker node>
    ```

  - On the worker node, run:

    ```
    sudo reboot -f
    ```

  - Once the worker node is running again, enable it for scheduling:

    ```
    kubectl uncordon <worker node>
    ```
- Perform these actions on each worker node that is a cell instance. If you add a new worker node, perform the same actions. A consolidated sketch of these steps follows this list.
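The following shell sketch rolls the per-node cell preparation into one loop. It assumes that you can SSH from your workstation to each worker node as a user with sudo privileges; the node names and SSH user are placeholders, and you should adapt the drain and reboot handling to your organization's maintenance procedure.

```
#!/bin/bash
# Sketch only: prepare diego-cell worker nodes. Node names and the SSH
# user are placeholders; adapt to your environment before use.
for node in <worker-node-1> <worker-node-2>; do
  kubectl label node "$node" bcf.type=diego-cell
  kubectl taint node "$node" dedicated=diego-cell:NoSchedule
  kubectl cordon "$node"
  kubectl drain "$node" --ignore-daemonsets
  # Append the cgroup parameters to GRUB_CMDLINE_LINUX if missing.
  ssh "<user>@$node" 'grep -q "cgroup_enable=memory swapaccount=1" /etc/default/grub || \
    sudo sed -i "s/^GRUB_CMDLINE_LINUX=\"/GRUB_CMDLINE_LINUX=\"cgroup_enable=memory swapaccount=1 /" /etc/default/grub'
  ssh "<user>@$node" 'sudo update-grub && sudo reboot -f' || true
  # Wait until the node reports Ready, then re-enable scheduling.
  until kubectl get node "$node" --no-headers | grep -qw Ready; do sleep 10; done
  kubectl uncordon "$node"
done
```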
Inbound ports
Ensure that the following ports have inbound access into the Kubernetes environment:
- 2222
- 2793
- If you want to use the Cloud Foundry deployment tool to do the installation, you must also open ports in the range of 30000-32767 (the defaults are 30100, 30101, 30102, 30103, and 30104). Then connect the Cloud Foundry deployment tool from the user network to the IBM Cloud Private environment.
For example, on OpenStack where inbound traffic is restricted, perform the following tasks to create the required security group so that the ingress controller allows inbound traffic on the required ports. From the OpenStack management console, with the proper Domain and Project selected, complete the following procedure:
- Navigate to Security Groups:
- For older OpenStack versions, such as Liberty or Mitaka:
- Select Project > Compute > Access & Security > Security Groups.
- For newer OpenStack versions, such as Pike:
- Select Project > Network > Security Groups.
- Click Create Security Group.
- Name the security group icp-cfee and add the description ICP CFEE Security Group.
- Click Create Security Group.
- Select the ICP CFEE Security Group and click Edit Rules.
- Click Add Rule.
- Add the following rules to the ICP CFEE Security Group:
| Rule | Direction | Ether Type | IP Protocol | Port or Range | Remote | Purpose |
|---|---|---|---|---|---|---|
| Custom TCP Rule | Ingress | IPv4 | TCP | 2222 | 0.0.0.0/0 (CIDR) | CFEE UAA |
| Custom TCP Rule | Ingress | IPv4 | TCP | 2793 | 0.0.0.0/0 (CIDR) | CFEE diego-access |
| Custom TCP Rule | Egress | IPv4 | Any | - | 0.0.0.0/0 (CIDR) | |
| Custom TCP Rule | Egress | IPv6 | Any | - | ::/0 (CIDR) | |
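If you prefer to create the same group from the OpenStack CLI instead of the management console, the following is a sketch, assuming the openstack client is installed and authenticated against the same project:

```
openstack security group create icp-cfee --description "ICP CFEE Security Group"
openstack security group rule create --ingress --ethertype IPv4 --protocol tcp --dst-port 2222 --remote-ip 0.0.0.0/0 icp-cfee
openstack security group rule create --ingress --ethertype IPv4 --protocol tcp --dst-port 2793 --remote-ip 0.0.0.0/0 icp-cfee
```

Newly created security groups typically allow all egress traffic by default, so the two egress rows in the table usually need no extra commands.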
Create persistent storage for Cloud Foundry deployment tool
Persistent volume for the Helm release
- The administrator must create a persistent volume. The storage class of the persistent volume is used for the persistent volume claim of the Helm chart.
- The example in Install the chart uses hostPath, but it is recommended to use a persistent volume on a network file system (NFS), GlusterFS, or other shared infrastructure. The hostPath type can be used only for demonstration.
- The persistent volume must have at least 10 GB available for the deployment tool.
- The persistent volume reclaim policy must be set to Retain to keep the deployment data in case the application is removed temporarily.
Persistent volume for Cloud Foundry Enterprise Environment
You need separate persistent storage for Cloud Foundry Enterprise Environment. The storage class name is needed when you use the Cloud Foundry deployment tool, in the Kubernetes storage class name field. The storage class name must already exist unless you specify the value local, which is reserved and should be used only for non-production environments.
- From the IBM Cloud Private management console, open the Catalog.
- Locate and select the ibm-cfee-installer chart.
- Create a persistent volume (PV) that can be a network file system (NFS) or other PV type with a specific storage class. The storage capacity needs to be at least 10 GB. The following code is a sample persistent volume definition that can be used only for demonstration or proof-of-concept purposes:

  ```
  kubectl create -f - <<EOF
  kind: PersistentVolume
  apiVersion: v1
  metadata:
    name: ibm-cfee-installer-data
  spec:
    capacity:
      storage: 10Gi
    storageClassName: ibm-cfee-installer-storage
    accessModes:
    - "ReadWriteOnce"
    persistentVolumeReclaimPolicy: Retain
    hostPath:
      path: /tmp/icp/cfee/data
      type: DirectoryOrCreate
  ---
  kind: StorageClass
  apiVersion: storage.k8s.io/v1
  metadata:
    name: ibm-cfee-installer-storage
  provisioner: kubernetes.io/no-provisioner
  EOF
  ```
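After the command completes, you can confirm that the sample volume and storage class exist; the names here match the sample definition above:

```
kubectl get pv ibm-cfee-installer-data
kubectl get storageclass ibm-cfee-installer-storage
```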
Deploy the Helm release
- From the IBM Cloud Private management console, open the Catalog.
- Locate and select the ibm-cfee-installer chart.
- Ensure that you created a persistent volume as shown in the Helm chart readme file.
- Review the provided instructions and select Configure.
- Provide a release name and select a namespace. In the example in the Helm chart, the release name is cfee-inception and the namespace is default.
- Review and accept the license or licenses.
- Provide the storage class name. In the example in the Helm chart, the storage class is ibm-cfee-installer-storage.
- Select Install to complete the Helm installation.
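If you prefer the command line, the same release can be created with the Helm CLI that IBM Cloud Private configures through cloudctl login. This is a sketch only: the chart repository name and the storage class values key are assumptions, so check the chart's values.yaml for the actual key before running it.

```
# Sketch: CLI equivalent of the Catalog installation. The repository
# name (local-charts) and the values key (storageClassName) are
# assumptions - verify both against your cluster and the chart.
helm install local-charts/ibm-cfee-installer \
  --name cfee-inception \
  --namespace default \
  --set storageClassName=ibm-cfee-installer-storage \
  --tls
```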
Deploy Cloud Foundry Enterprise Environment by using the Cloud Foundry deployment tool
When the chart is installed, perform the following actions to access the Cloud Foundry deployment tool and begin the Cloud Foundry deployment.
- From the IBM Cloud Private management console, open Workloads > Helm Releases.
- Locate and select the ibm-cfee-installer chart that you installed.
- From Helm Release, select Launch > deployment-tool. A new tab opens with the Cloud Foundry deployment tool settings page. The two settings values that you need can be obtained by running kubectl commands. The commands to run are listed in the Notes section of the Helm release.
- Run the two commands that were generated when the Helm chart deployed. To see these commands, navigate to the deployed Helm chart and scroll down. These commands are required to get the API key and the API URL for the Cloud Foundry Enterprise Environment Installer. Copy the values to the Configuration manager API end-point field on the Cloud Foundry deployment tool.
- Run the command listed under 3. Get the token by running these commands:. Copy the value to the Token field on the Cloud Foundry deployment tool.
- On the Cloud Foundry deployment tool, select Submit.
- When the Configuration page opens, click Select a configuration type and choose Kubernetes from the menu. Select the pencil icon. Enter the required parameters. See Specifying common parameters for Cloud Foundry Enterprise Environment.
- Select Save and Exit.
- The configuration is verified. Select Start deployment. The States page shows the deployment status and log files.
Deploy Cloud Foundry Enterprise Environment by using the configuration manager CLI
Prerequisites
- The ibm-cfee-installer must be deployed with a host name that is different from the default value localhost so that the host name can be reached from other machines.
- The host name must be in your /etc/hosts file or registered in a DNS.
Accessing the configuration manager CLI
When the ibm-cfee-installer is installed, perform the following actions to access the configuration manager (CM) CLI and begin the Cloud Foundry deployment.
- From the directory where you unpacked the IBM Certified Container, run the following command:

  ```
  scripts/setup_client.sh -n <namespace> -hr <helm_release_name> -pn <ibm_cfee_inception_pod_name> -c <ICP hostname> -u <ICP User>
  ```

  This command downloads the configuration templates and the CM CLI. It also configures the CM CLI to access the configuration manager embedded in the ibm-cfee-inception container.
- Choose your language and copy the template to a new file with the name of your choice (extension: .yml).
- Launch the deployment:

  ```
  scripts/launch_deployment.sh -c <your_configuration_file>
  ```
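For example, with the release name and default values used earlier in this topic (the pod name is a placeholder; look it up with kubectl get pods in your target namespace):

```
# The pod name below is a placeholder - find the real one first:
kubectl -n default get pods | grep cfee-inception
scripts/setup_client.sh -n default -hr cfee-inception -pn <pod_name> -c mycluster.icp -u admin
cp <downloaded_template>.yml my-deployment.yml
scripts/launch_deployment.sh -c my-deployment.yml
```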
IBM Cloud Private Cloud Foundry management console
The Cloud Foundry deployment tool installs a Helm release that provides the IBM Cloud Private Cloud Foundry management console.
- From the IBM Cloud Private management console, open Workloads > Helm Releases.
- Locate and select the Helm release. The release name matches the name you chose for ibm-cfee-installer with -console appended. For example, if you used cfee, the release for the IBM Cloud Private Cloud Foundry management console is cfee-console. The name of the chart is ibm-cf-ui.
- In the Helm release, select Launch to open the IBM Cloud Private Cloud Foundry management console.