Specifying TLS ciphers for etcd and Kubernetes after IBM Cloud Private installation

After the installation of your IBM® Cloud Private cluster, you can configure etcd, the kubelet, and the Kubernetes control plane to specify cipher suites that provide strong protection.

Note: Enabling HTTP/2 can complicate the ordering of cipher suites. You must select your own ciphers and specify their order.

etcd

You can specify the supported TLS ciphers to use in communication between the master and etcd servers. Run the following commands on all the master nodes in your cluster:

  1. Copy and back up the etcd static pod manifest file:

    cp /etc/cfc/pods/etcd.json ~/icp-backup/
    cp ~/icp-backup/etcd.json ~/icp-backup/etcd.json.bak
    
  2. Open etcd.json, the etcd static pod manifest file, for editing:

    vim ~/icp-backup/etcd.json
    
  3. Add the following options to the etcd static pod manifest file:

    "--cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
    

    By default, etcd uses TLSv1.2. You do not need to update the TLS minimum version.
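For context, the option belongs in the etcd container's argument list inside the manifest. The following fragment is an illustrative sketch only; your `etcd.json` contains many more arguments and fields, which you must keep unchanged:

```json
{
  "spec": {
    "containers": [
      {
        "name": "etcd",
        "command": [
          "etcd",
          "--name=etcd0",
          "--cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
        ]
      }
    ]
  }
}
```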

  4. Copy back the updated etcd static pod manifest. Kubelet restarts the etcd service after you copy back the file:

    cp ~/icp-backup/etcd.json /etc/cfc/pods/
    
  5. Verify that the etcd service is started:

    docker ps | grep etcd
    

    Following is a sample output:

    416e7e7ed2a5    33bdcac177c2    "etcd --name=etcd0 -…"   2 minutes ago   Up 2 minutes   k8s_etcd_k8s-etcd-9.21.55.15_kube-system_ae53b0c24e347e2f786003f83ab595b7_0
    
  6. Check the status of the etcd cluster. You can install the etcdctl binary from the etcd image:

    ETCDCTL_API=3 etcdctl --endpoints=10.82.108.113:4001,10.82.108.81:4001,10.82.108.84:4001 --cacert=/etc/cfc/conf/etcd/ca.pem --cert=/etc/cfc/conf/etcd/client.pem --key=/etc/cfc/conf/etcd/client-key.pem endpoint status -w table
    

kubelet

You can specify the supported TLS ciphers to use in communication between the kubelet and applications, such as Prometheus. Run the following commands on all cluster nodes:

  1. Open the kubelet-service-config file for editing:

    vim /etc/cfc/kubelet/kubelet-service-config
    
  2. Add the following options to the kubelet service configuration:

    tlsCipherSuites: ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305","TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305","TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384","TLS_RSA_WITH_AES_256_GCM_SHA384","TLS_RSA_WITH_AES_128_GCM_SHA256"]
    

    By default, the kubelet uses TLSv1.2. You do not need to update the TLS minimum version.
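For context, the kubelet service configuration is a YAML file, and `tlsCipherSuites` is a top-level key in it. The surrounding keys in this fragment are an illustrative sketch, not the complete file; keep the existing contents of your `kubelet-service-config` and add only the cipher list:

```yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
tlsCipherSuites:
  - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
  - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
  - TLS_RSA_WITH_AES_256_GCM_SHA384
  - TLS_RSA_WITH_AES_128_GCM_SHA256
```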

  3. Restart the kubelet service:

    systemctl restart kubelet
    systemctl status kubelet
    
  4. Verify that the kubelet service is started:

    • Check the kubelet log.

      journalctl -u kubelet.service -f
      
    • Check the status of the node.

      kubectl get nodes
      

Kubernetes control plane

You can specify the supported TLS ciphers to use in communication between kube-apiserver and kubelet or other applications. Run the following commands on all the master nodes in your cluster:

  1. Copy and back up the Kubernetes static pod manifest file:

    cp /etc/cfc/pods/master.json ~/icp-backup/
    cp ~/icp-backup/master.json ~/icp-backup/master.json.bak
    
  2. Open master.json, the Kubernetes static pod manifest file, for editing:

    vim ~/icp-backup/master.json
    
  3. Add the following option to the Kubernetes static pod manifest file:

    There are three containers in the static pod: kube-controller-manager, kube-apiserver, and kube-scheduler. Add the following option to all three containers.

    "--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
    
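For context, the option goes into the `command` list of each of the three containers. This fragment is an illustrative sketch showing only the apiserver container with a shortened cipher list; use the full eight-suite list from this step, repeat the flag in the kube-controller-manager and kube-scheduler containers, and keep all other fields of your `master.json` unchanged:

```json
{
  "spec": {
    "containers": [
      {
        "name": "apiserver",
        "command": [
          "/hyperkube",
          "apiserver",
          "--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
        ]
      }
    ]
  }
}
```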
  4. Copy back the updated Kubernetes static pod manifest. Kubelet restarts the Kubernetes services after you copy back the file:

    cp ~/icp-backup/master.json /etc/cfc/pods/
    
  5. Verify that the Kubernetes service is started:

    docker ps | grep hyperkube
    

    Following is a sample output:

    b4844586cc1a        a28dcbcae557   "/hyperkube schedule…"   14 minutes ago      Up 14 minutes       k8s_scheduler_k8s-master-9.21.55.15_kube-system_40af6d00537d84138a6f8acab99c123a_3
    4472e7a9f4bd        a28dcbcae557   "/hyperkube controll…"   14 minutes ago      Up 14 minutes       k8s_controller-manager_k8s-master-9.21.55.15_kube-system_40af6d00537d84138a6f8acab99c123a_3
    bbc2b79cee2e        a28dcbcae557   "/hyperkube apiserve…"   20 hours ago        Up 14 minutes       k8s_apiserver_k8s-master-9.21.55.15_kube-system_40af6d00537d84138a6f8acab99c123a_0
    


  6. Verify that the Kubernetes pods are running:

    kubectl get pods --all-namespaces