Transferring roles from the master node

There is not enough resource capacity on the master node when the master and management roles are assigned to the same node in your hosts file.

Symptoms

The log message file becomes too large and consumes disk space.

Causes

The master and management roles are assigned to the same node in your hosts file.

Resolving the problem

Transfer the management, proxy, and Vulnerability Advisor node roles from the master node to new nodes.

Transferring management roles

Important: The PersistentVolumes must be transferred to a new node because some pods have local storage data on the management node.

  1. Remove the management label from your master node:

    1. Get the label for your master node by running the following command:

       kubectl get nodes <master.node.name> --show-labels
      

      Your output might resemble the following content:

        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=<master.node.name>,management=true,node-role.kubernetes.io/management=true
      
    2. Remove the management role labels from your master node by running the following command:

       kubectl label nodes <master.node.name> management- node-role.kubernetes.io/management-
      

      Your output might resemble the following message:

       <master.node.name> labeled
      
    3. Verify that the management role labels are removed from your master node by running the following command:

       kubectl get nodes <master.node.name> --show-labels
      

      Your output might resemble the following content:

       beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=<master.node.name>
      
  2. Add your new management node with the following steps:

    1. Prepare your new management node for installation. For more information, see Preparing the new node for installation.

    2. Remove the IP address of your master node from the hosts file on your management node. To access your hosts file, run the following command:

       vi /etc/hosts
      
    3. Add a new management node to your IBM Cloud Private cluster by running the following command:

       docker run -e LICENSE=accept --net=host \
       -v "$(pwd)":/installer/cluster \
       ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.1.1-ee management -l \
       ip_address_managementnode1,ip_address_managementnode2
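
      After the installer completes, you can confirm that the new node joined the cluster and carries the management role labels. This is a minimal check; it assumes that the installer applied the management=true label, the same label that you removed from the master node in step 1:

       kubectl get nodes -l management=true --show-labels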
      
  3. Create PersistentVolumes on your new management node with the following steps:
    1. On each node, create a local storage directory by running the following commands:
       mkdir -p /var/lib/icp/logging/elk-data && chmod 755 /var/lib/icp/logging/elk-data
      
    2. Create a pv.yaml file for each node. Your PersistentVolume file might resemble the following content:
       apiVersion: v1
       kind: PersistentVolume
       metadata:
           name: logging-datanode-<management_node_name> # replace <management_node_name> with the name of the ICP management node, can be found using kubectl get nodes
       spec:
           accessModes:
           - ReadWriteOnce
           capacity:
               storage: 20Gi # adjust to customer need
           local:
               path: /var/lib/icp/logging/elk-data # adjust to customer need
           nodeAffinity:
               required:
                   nodeSelectorTerms:
                   - matchExpressions:
                       - key: kubernetes.io/hostname
                         operator: In
                         values:
                         - <management_node_name> # set to name of the ICP management node, can be found using kubectl get nodes
           persistentVolumeReclaimPolicy: Retain
           storageClassName: logging-storage-datanode
      
    3. Create a PersistentVolume by running the following command:
      kubectl create -f pv.yaml
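
      The new PersistentVolume should be listed with a STATUS of Available until a logging PersistentVolumeClaim binds to it. You can check it with the same filter that is used later when you delete the old volumes:

       kubectl get pv | grep logging-datanode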
      
  4. Move the local storage data from the master node onto your new management node. Transfer the elasticsearch data with the following steps:

    1. Back up the elasticsearch data by creating a compressed archive file of the data from the storage directory. Run the following command:

       tar -czf ~/logging-backup.tar.gz /var/lib/icp/logging/elk-data/*
      
    2. Use secure copy (scp) to transfer the data file onto your new management node by running the following command:

       scp ~/logging-backup.tar.gz <node_user>@<node_ip>:~/logging-backup.tar.gz
      
    3. Restore the elasticsearch data by replacing the files in the local storage directory with the files that are extracted from the compressed archive file. Run the following commands:

       rm -r /var/lib/icp/logging/elk-data/*
       tar -C /var/lib/icp/logging/elk-data -xzf ~/logging-backup.tar.gz --strip-components 5
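
      You can confirm that the restored files sit directly under the storage directory, rather than in a nested path, by listing the directory:

       ls -l /var/lib/icp/logging/elk-data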
      
  5. Migrate the logging, metering, monitoring, and key-management services to your new management node.

    1. Disable the services on your master nodes. See Disable services for more information.

      • Disable the services in your config.yaml file by updating the management_services parameter. Your config.yaml file might resemble the following contents:

           management_services:
             logging: disabled
             metering: disabled
             monitoring: disabled
             key-management: disabled
        
      • Apply the change by running the following command:

           docker run --rm -t -e LICENSE=accept --net=host -v $(pwd):/installer/cluster ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.1.1-ee addon
        
    2. Delete the PersistentVolumeClaim and PersistentVolumes of the logging service with the following steps:

      • Get the PersistentVolumeClaim and PersistentVolumes of the logging service by running the following commands:

          kubectl get pvc -n kube-system | grep data-logging-elk-data
          kubectl get pv | grep logging-datanode
        
      • Delete the PersistentVolumeClaim and PersistentVolumes of the logging service by running the following commands:

          kubectl delete pvc <persistent-volume-claim-name> -n kube-system
          kubectl delete pv <persistent-volume-name>
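
        If several claims or volumes match, the following sketch removes every match in one pass. It assumes the same name patterns as the commands above and a Linux shell with awk and xargs available:

           kubectl get pvc -n kube-system --no-headers | awk '/data-logging-elk-data/ {print $1}' | xargs -r -n1 kubectl delete pvc -n kube-system
           kubectl get pv --no-headers | awk '/logging-datanode/ {print $1}' | xargs -r -n1 kubectl delete pv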
        
    3. Enable the services on your master nodes.

      • Enable the services in your config.yaml file by updating the management_services parameter. Your config.yaml file might resemble the following contents:

           management_services:
             logging: enabled
             metering: enabled
             monitoring: enabled
             key-management: enabled
        
      • Apply the change by running the following command:

           docker run --rm -t -e LICENSE=accept --net=host -v $(pwd):/installer/cluster ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.1.1-ee addon
        
  6. Verify that your pods are transferred onto your new management node by running the following command:

     kubectl get pods -n kube-system -o custom-columns=Name:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName
    

    Your output might resemble the following information:

     logging-elk-client-74d744cdc6-l7ds5                           Running                      <new-node-name> 
     logging-elk-data-0                                            Running                      <new-node-name>  
     metering-*                                                    Running                      <new-node-name>
     monitoring-*                                                  Running                      <new-node-name>
     key-management-*                                              Running                      <new-node-name>
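
    If the full pod list is long, you can narrow it to the migrated services. This variant assumes the pod name prefixes that are shown in the sample output and uses the NODE column of the wide output format:

     kubectl get pods -n kube-system -o wide | grep -E 'logging-elk|metering|monitoring|key-management'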
    

Your management node roles are transferred to a new node.

Transferring proxy roles

  1. Prepare your new proxy node for installation. For more information, see Preparing the new node for installation.

  2. Add your new proxy node to your IBM Cloud Private cluster by running the following command:

     docker run -e LICENSE=accept --net=host \
     -v "$(pwd)":/installer/cluster \
     ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.1.1-ee proxy -l \
     ip_address_proxynode1,ip_address_proxynode2
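
    You can confirm that the new proxy node joined the cluster and carries the proxy role labels. This check assumes that the installer applied the proxy=true label, the same label that you remove from the master node in the next step:

     kubectl get nodes -l proxy=true --show-labels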
    
  3. Remove the proxy label from your master node.

    1. Get the label for your master node by running the following command:

       kubectl get nodes <master.node.name> --show-labels
      
    2. Remove the proxy role labels from your master node by running the following command:

       kubectl label nodes <master.node.name> proxy- node-role.kubernetes.io/proxy-
      
    3. Verify that the proxy role labels are removed from your master node by running the following command:

       kubectl get nodes <master.node.name> --show-labels
      
  4. Delete the following pods so that they are transferred onto your new proxy node:

    • nginx-ingress-controller
    • default-http-backend
    • istio-egressgateway
    • istio-ingressgateway

    To delete the pods, run the following commands:

     kubectl get pods --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace | grep -E 'nginx-ingress-controller|default-http-backend|istio-ingressgateway|istio-egressgateway' | while read pods; do 
     pods_name=$(echo $pods | awk '{print $1}');
     pods_namespace=$(echo $pods | awk '{print $2}');
     echo "-----------------------------------------------------------------------"
     echo "|                      Pods: ${pods_name}"
     echo "|                  Namespace: ${pods_namespace}"
     echo "-----------------------------------------------------------------------"
     echo "Deleting proxy pod ${pods_name} ..."
     kubectl delete pods ${pods_name} -n ${pods_namespace} --grace-period=0 --force &>/dev/null
     done
    

    Note: If the k8s-proxy-vip pod exists on your master node, you must move the /etc/cfc/pods/k8s-proxy-vip.json file onto your new proxy node. Verify that the k8s-proxy-vip pod exists, then move the file by running the following commands:

     kubectl get pods -n kube-system | grep k8s-proxy-vip
     scp /etc/cfc/pods/k8s-proxy-vip.json <node_user>@<node_ip>:/etc/cfc/pods/k8s-proxy-vip.json
    

    The kube-scheduler reschedules the pods to your new proxy node.

  5. Verify that your pods are transferred to your new proxy node by running the following command:

     kubectl get pods -n kube-system -o custom-columns=Name:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName
    

    Your output might resemble the following information:

     nginx-ingress-controller                           Running                      <new-node-name>
     default-http-backend                               Running                      <new-node-name>
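
    As a functional check, you can send a request to the new proxy node; replace <new_proxy_node_ip> with the IP address of your new proxy node. This sketch assumes that the ingress controller listens on the default HTTPS port 443; a request that matches no Ingress rule is typically answered by the default backend with a 404 response:

     curl -ik https://<new_proxy_node_ip>/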
    

Your proxy node roles are transferred to a new node.

Transferring Vulnerability Advisor roles

Important: The PersistentVolumes must be transferred to a new node because some pods have local storage data on the management node.

  1. Remove the va label from your master node with the following steps:

    1. Get the label for your master node by running the following command:

       kubectl get nodes <master.node.name> --show-labels
      
    2. Remove the va role label by running the following command:

       kubectl label nodes <master.node.name> va- node-role.kubernetes.io/va-
      
    3. Verify that the va role label is removed from your master node by running the following command:

       kubectl get nodes <master.node.name> --show-labels
      
  2. Add your new VA node with the following steps:

    1. Prepare your new VA node for installation. For more information, see Preparing the new node for installation.

    2. Remove the IP address of your master node from your host file on your VA node. To access your host file, run the following command:

       vi /etc/hosts
      
    3. Add a new vulnerability advisor node to your IBM Cloud Private cluster by running the following command:

       docker run --rm -t -e LICENSE=accept --net=host -v \
       $(pwd):/installer/cluster ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.1.1-ee va -l \
       ip_address_vanode1,ip_address_vanode2
      
  3. Create PersistentVolumes on your new VA node with the following steps:
    1. On each node, create local storage directories for minio, zookeeper, and kafka by running the following commands:
       mkdir -p /var/lib/icp/va/minio/ && chmod 755 /var/lib/icp/va/minio/
       mkdir -p /var/lib/icp/va/zookeeper/ && chmod 755 /var/lib/icp/va/zookeeper/
       mkdir -p /var/lib/icp/va/kafka/ && chmod 755 /var/lib/icp/va/kafka/
      
    2. Create pv-minio.yaml, pv-zookeeper.yaml, and pv-kafka.yaml files for each node with the following contents:
       apiVersion: v1
       kind: PersistentVolume
       metadata:
           name: minio-<va_node_name> # replace <va_node_name> with the name of the ICP va node, can be found using kubectl get nodes
       spec:
           accessModes:
           - ReadWriteOnce
           persistentVolumeReclaimPolicy: Retain
           capacity:
               storage: 20Gi # adjust to customer need
           local:
               path: /var/lib/icp/va/minio/ # adjust to customer need
           nodeAffinity:
               required:
                   nodeSelectorTerms:
                   - matchExpressions:
                       - key: kubernetes.io/hostname
                         operator: In
                         values:
                         - <va_node_name> # set to name of the ICP va node, can be found using kubectl get nodes
           storageClassName: minio-storage

       apiVersion: v1
       kind: PersistentVolume
       metadata:
           name: kafka-<va_node_name> # replace <va_node_name> with the name of the ICP va node, can be found using kubectl get nodes
       spec:
           accessModes:
           - ReadWriteOnce
           persistentVolumeReclaimPolicy: Retain
           capacity:
               storage: 5Gi # adjust to customer need
           local:
               path: /var/lib/icp/va/kafka/ # adjust to customer need
           nodeAffinity:
               required:
                   nodeSelectorTerms:
                   - matchExpressions:
                       - key: kubernetes.io/hostname
                         operator: In
                         values:
                         - <va_node_name> # set to name of the ICP va node, can be found using kubectl get nodes
           storageClassName: kafka-storage

       apiVersion: v1
       kind: PersistentVolume
       metadata:
           name: zookeeper-<va_node_name> # replace <va_node_name> with the name of the ICP va node, can be found using kubectl get nodes
       spec:
           accessModes:
           - ReadWriteOnce
           persistentVolumeReclaimPolicy: Retain
           capacity:
               storage: 5Gi # adjust to customer need
           local:
               path: /var/lib/icp/va/zookeeper/ # adjust to customer need
           nodeAffinity:
               required:
                   nodeSelectorTerms:
                   - matchExpressions:
                       - key: kubernetes.io/hostname
                         operator: In
                         values:
                         - <va_node_name> # set to name of the ICP va node, can be found using kubectl get nodes
           storageClassName: zookeeper-storage
      
    3. Create the PersistentVolumes by running the following commands:
      kubectl create -f pv-minio.yaml
      kubectl create -f pv-kafka.yaml
      kubectl create -f pv-zookeeper.yaml
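
      The three new PersistentVolumes should be listed with a STATUS of Available until the Vulnerability Advisor claims bind to them. You can check them with the same filters that are used later when you delete the old volumes:

       kubectl get pv | grep -E 'minio|kafka|zookeeper'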
      
  4. Move the local storage data from the master node onto your new VA node. Transfer the minio, zookeeper, and kafka data with the following steps:

    1. Back up the minio, zookeeper, and kafka data by creating compressed archive files of the data from the storage directories. Run the following commands:

       tar -czf ~/minio-backup.tar.gz /var/lib/icp/va/minio/*
       tar -czf ~/zookeeper-backup.tar.gz /var/lib/icp/va/zookeeper/*
       tar -czf ~/kafka-backup.tar.gz /var/lib/icp/va/kafka/*
      
    2. Use secure copy (scp) to transfer the data files onto your new VA node by running the following commands:

       scp ~/minio-backup.tar.gz <node_user>@<node_ip>:~/minio-backup.tar.gz
       scp ~/zookeeper-backup.tar.gz <node_user>@<node_ip>:~/zookeeper-backup.tar.gz
       scp ~/kafka-backup.tar.gz <node_user>@<node_ip>:~/kafka-backup.tar.gz
      
    3. Restore the minio, zookeeper, and kafka data by replacing the files in the local storage directories with the files that are extracted from the compressed archive files. Run the following commands:

       rm -r /var/lib/icp/va/minio/*
       rm -r /var/lib/icp/va/zookeeper/*
       rm -r /var/lib/icp/va/kafka/*
       tar -C /var/lib/icp/va/minio -xzf ~/minio-backup.tar.gz --strip-components 5
       tar -C /var/lib/icp/va/zookeeper -xzf ~/zookeeper-backup.tar.gz --strip-components 5
       tar -C /var/lib/icp/va/kafka -xzf ~/kafka-backup.tar.gz --strip-components 5
      
  5. Migrate the Vulnerability Advisor services to your new VA node with the following steps:

    1. Disable the VA services on your master nodes. See Disable services.

      • Disable the Vulnerability Advisor service in your config.yaml file by updating the management_services parameter. Your config.yaml file might resemble the following contents:
           management_services:
             vulnerability-advisor: disabled
        
      • Apply the change by running the following command:
        docker run --rm -t -e LICENSE=accept --net=host -v $(pwd):/installer/cluster ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.1.1-ee addon
        
    2. Delete the PersistentVolumeClaims and PersistentVolumes of the minio, zookeeper, and kafka services with the following steps:

      • Get the PersistentVolumeClaims and PersistentVolumes of minio, zookeeper, and kafka by running the following commands:

           kubectl get pvc -n kube-system | grep minio
           kubectl get pv | grep minio
           kubectl get pvc -n kube-system | grep zookeeper
           kubectl get pv | grep zookeeper
           kubectl get pvc -n kube-system | grep kafka
           kubectl get pv | grep kafka
        
      • Delete the PersistentVolumeClaims and PersistentVolumes of the minio, zookeeper, and kafka services by running the following commands:

           kubectl delete pvc <persistent-volume-claim-name> -n kube-system
           kubectl delete pv <persistent-volume-name>
        
    3. Enable the services on your master nodes.

      • Enable the Vulnerability Advisor service in your config.yaml file by updating the management_services parameter. Your config.yaml file might resemble the following contents:

           management_services:
             vulnerability-advisor: enabled
        
      • Apply the change by running the following command:

           docker run --rm -t -e LICENSE=accept --net=host -v $(pwd):/installer/cluster ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.1.1-ee addon
        
  6. Verify that your pods are transferred to your new VA node by running the following command:

     kubectl get pods -n kube-system -o custom-columns=Name:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName
    

    Your output might resemble the following information:

     vulnerability-advisor-compliance-annotator                           Running                      <new-node-name>
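
    If the list is long, you can narrow it to the Vulnerability Advisor pods and display the node that each pod runs on. This variant assumes the vulnerability-advisor name prefix that is shown in the sample output:

     kubectl get pods -n kube-system -o wide | grep vulnerability-advisor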
    

Your Vulnerability Advisor node roles are transferred to a new node.