Transferring roles from the master node
There is not enough resource capacity on the master node when the master and management node roles are placed on the same host.
Symptoms
The log message file grows too large and uses up disk space.
Causes
The master and management node roles are located on the same host.
Resolving the problem
Transfer the management, proxy, and Vulnerability Advisor node roles from the master node to new hosts.
Transferring management roles
Important: Some pods have local storage data on the master node, so their PersistentVolumes must be transferred to the new node.
- Remove the management label from your master node:
  - Get the labels for your master node by running the following command:
    kubectl get nodes <master.node.name> --show-labels
    Your output might resemble the following content:
    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=<master.node.name>,management=true,node-role.kubernetes.io/management=true
  - Remove the management role labels from your master node by running the following command:
    kubectl label nodes <master.node.name> management- node-role.kubernetes.io/management-
    Your output might resemble the following message:
    <master.node.name> labeled
  - Verify that the management role labels are removed from your master node by running the following command:
    kubectl get nodes <master.node.name> --show-labels
    Your output might resemble the following content:
    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=<master.node.name>
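  Optionally, you can confirm that no management labels remain on the master node. This check is a minimal sketch that only filters the output of the previous command:
    kubectl get nodes <master.node.name> --show-labels | tr ',' '\n' | grep management
  If the labels were removed, the command returns no output.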
- Add your new management node with the following steps:
  - Prepare your new management node for installation. For more information, see Preparing the new node for installation.
  - Remove the IP address of your master node from the hosts file on your management node. To access your hosts file, run the following command:
    vi /etc/hosts
    A hypothetical example of the resulting file follows this step.
  - Add a new management node to your IBM Cloud Private cluster by running the following command:
    docker run -e LICENSE=accept --net=host \
    -v "$(pwd)":/installer/cluster \
    ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.2.1-ee management -l \
    ip_address_managementnode1,ip_address_managementnode2
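  For reference, after you remove the master entry, the hosts file on the management node might resemble the following content. The IP addresses and host names are placeholders, not values from your cluster:
    127.0.0.1                      localhost
    <ip_address_managementnode1>   <management.node.hostname>
    # ...entries for your other cluster nodes; the master node entry is no longer present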
- Move the local storage data from the master node onto your new management node. Transfer the elasticsearch data with the following steps:
  - Back up the elasticsearch data by creating a compressed archive of the data in the storage directory. Run the following command:
    tar -czf ~/logging-backup.tar.gz /var/lib/icp/logging/elk-data/*
  - Use secure copy (scp) to transfer the archive onto your new management node by running the following command:
    scp ~/logging-backup.tar.gz <node_user>@<node_ip>:~/logging-backup.tar.gz
  - Restore the elasticsearch data by replacing the files in the local storage directory with the files that are extracted from the archive. Run the following commands:
    rm -r /var/lib/icp/logging/elk-data/*
    tar -C /var/lib/icp/logging/elk-data -xzf ~/logging-backup.tar.gz --strip-components 2
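  Before you remove the original data on the master node, you can optionally confirm that the archive is complete. This is a minimal sketch; the paths assume the default storage directory that is used in the previous step:
    tar -tzf ~/logging-backup.tar.gz | head
    du -sh /var/lib/icp/logging/elk-data ~/logging-backup.tar.gz
  The first command lists the leading entries in the archive; the second compares the size of the source directory with the size of the compressed archive.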
- Migrate the logging, metering, monitoring, and key-management services to your new management node.
  - Disable the services on your master nodes. See Disable services for more information.
    - You can disable the services from your config.yaml file. Update the management_services parameter. Your config.yaml file might resemble the following contents:
      management_services:
        logging: disabled
        metering: disabled
        monitoring: disabled
        key-management: disabled
    - You can also disable the services by running the following command:
      docker run --rm -t -e LICENSE=accept --net=host -v $(pwd):/installer/cluster ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.2.1-ee addon
  - Delete the PersistentVolumeClaims and PersistentVolumes of the logging service with the following steps:
    - Get the PersistentVolumeClaims and PersistentVolumes of the logging service by running the following commands:
      kubectl get pvc -n kube-system | grep data-logging-elk-data
      kubectl get pv | grep logging-datanode
    - Delete the PersistentVolumeClaims and PersistentVolumes of the logging service by running the following commands:
      kubectl delete pvc <persistent-volume-claim-name> -n kube-system
      kubectl delete pv <persistent-volume-name>
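    If several objects match, a short loop can delete them all at once. This is a sketch only, and it assumes the same name patterns that the grep commands in the previous step use:
      kubectl get pvc -n kube-system --no-headers | awk '/data-logging-elk-data/ {print $1}' | xargs -r -n1 kubectl delete pvc -n kube-system
      kubectl get pv --no-headers | awk '/logging-datanode/ {print $1}' | xargs -r -n1 kubectl delete pv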
  - Enable the services on your master nodes.
    - You can enable the services from your config.yaml file. Update the management_services parameter. Your config.yaml file might resemble the following contents:
      management_services:
        logging: enabled
        metering: enabled
        monitoring: enabled
        key-management: enabled
    - You can also enable the services by running the following command:
      docker run --rm -t -e LICENSE=accept --net=host -v $(pwd):/installer/cluster ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.2.1-ee addon
- Verify that your pods are transferred onto your new management node by running the following command:
  kubectl get pods -n kube-system -o custom-columns=Name:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName
  Your output might resemble the following information:
  logging-elk-client-74d744cdc6-l7ds5   Running   <new-node-name>
  logging-elk-data-0                    Running   <new-node-name>
  metering-*                            Running   <new-node-name>
  monitoring-*                          Running   <new-node-name>
  key-management-*                      Running   <new-node-name>
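  As an alternative check, you can print the node for each pod and filter on the services that you moved. This sketch assumes the pod name prefixes shown in the previous output:
    kubectl get pods -n kube-system -o wide | grep -E 'logging-elk|metering|monitoring|key-management'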
Your management node roles are transferred to a new node.
Transferring proxy roles
- Prepare your new proxy node for installation. For more information, see Preparing the new node for installation.
- Add your new proxy node to your IBM Cloud Private cluster by running the following command:
  docker run -e LICENSE=accept --net=host \
  -v "$(pwd)":/installer/cluster \
  ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.2.1-ee proxy -l \
  ip_address_proxynode1,ip_address_proxynode2
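  Before you remove the label from the master node, you can optionally confirm that the new proxy node joined the cluster. This check is a sketch and assumes that proxy nodes carry a proxy=true label, in the same way that the master node carries management=true for the management role:
    kubectl get nodes -l proxy=true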
- Remove the proxy label from your master node.
  - Get the labels for your master node by running the following command:
    kubectl get nodes <master.node.name> --show-labels
  - Remove the proxy role labels from your master node by running the following command:
    kubectl label nodes <master.node.name> proxy- node-role.kubernetes.io/proxy-
  - Verify that the proxy role labels are removed from your master node by running the following command:
    kubectl get nodes <master.node.name> --show-labels
- Delete the following pods so that they are rescheduled onto your new proxy node:
  - nginx-ingress-controller
  - default-http-backend
  - istio-egressgateway
  - istio-ingressgateway
  To delete the pods, run the following commands:
  kubectl get pods --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace | grep -E 'nginx-ingress-controller|default-http-backend|istio-ingressgateway|istio-egressgateway' | while read pods; do
    pods_name=$(echo $pods | awk '{print $1}')
    pods_namespace=$(echo $pods | awk '{print $2}')
    echo "-----------------------------------------------------------------------"
    echo "| Pods: ${pods_name}"
    echo "| Namespace: ${pods_namespace}"
    echo "-----------------------------------------------------------------------"
    echo "Deleting proxy pod ${pods_name} ..."
    kubectl delete pods ${pods_name} -n ${pods_namespace} --grace-period=0 --force &>/dev/null
  done
  Note: If the k8s-proxy-vip pod exists on your master node, you must move its /etc/cfc/pods/k8s-proxy-vip.json file onto your new proxy node. Verify that the k8s-proxy-vip pod exists and move the file by running the following commands:
  kubectl get pods -n kube-system | grep k8s-proxy-vip
  scp /etc/cfc/pods/k8s-proxy-vip.json <node_user>@<node_ip>:/etc/cfc/pods/k8s-proxy-vip.json
  The kube-scheduler reschedules the pods to your new proxy node.
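  To watch the pods come back on the new proxy node across all namespaces, you can run a check similar to the following sketch, repeating it until each pod reports Running:
    kubectl get pods --all-namespaces -o wide | grep -E 'nginx-ingress-controller|default-http-backend|istio-ingressgateway|istio-egressgateway'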
- Verify that your pods are transferred to your new proxy node by running the following command:
  kubectl get pods -n kube-system -o custom-columns=Name:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName
  Your output might resemble the following information:
  nginx-ingress-controller   Running   <new-node-name>
  default-http-backend       Running   <new-node-name>
Your proxy node roles are transferred to a new node.
Transferring Vulnerability Advisor roles
Important: Some pods have local storage data on the master node, so their PersistentVolumes must be transferred to the new node.
- Remove the va label from your master node:
  - Get the labels for your master node by running the following command:
    kubectl get nodes <master.node.name> --show-labels
  - Remove the va role labels by running the following command:
    kubectl label nodes <master.node.name> va- node-role.kubernetes.io/va-
  - Verify that the va role labels are removed from your master node by running the following command:
    kubectl get nodes <master.node.name> --show-labels
- Add your new VA node with the following steps:
  - Prepare your new VA node for installation. For more information, see Preparing the new node for installation.
  - Remove the IP address of your master node from the hosts file on your VA node. To access your hosts file, run the following command:
    vi /etc/hosts
  - Add a new Vulnerability Advisor node to your IBM Cloud Private cluster by running the following command:
    docker run --rm -t -e LICENSE=accept --net=host -v \
    $(pwd):/installer/cluster ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.2.1-ee va -l \
    ip_address_vanode1,ip_address_vanode2
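  Before you move data onto the new VA node, you can optionally confirm that it joined the cluster. This check is a sketch and assumes that VA nodes carry a va=true label, in the same way that the master node carries management=true for the management role:
    kubectl get nodes -l va=true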
- Move the local storage data from the master node onto your new VA node. Transfer the minio, zookeeper, and kafka data with the following steps:
  - Back up the minio, zookeeper, and kafka data by creating compressed archives of the data in the storage directories. Run the following commands:
    tar -czf ~/minio-backup.tar.gz /var/lib/icp/va/minio/*
    tar -czf ~/zookeeper-backup.tar.gz /var/lib/icp/va/zookeeper/*
    tar -czf ~/kafka-backup.tar.gz /var/lib/icp/va/kafka/*
  - Use secure copy (scp) to transfer the archives onto your new VA node by running the following commands:
    scp ~/minio-backup.tar.gz <node_user>@<node_ip>:~/minio-backup.tar.gz
    scp ~/zookeeper-backup.tar.gz <node_user>@<node_ip>:~/zookeeper-backup.tar.gz
    scp ~/kafka-backup.tar.gz <node_user>@<node_ip>:~/kafka-backup.tar.gz
  - Restore the minio, zookeeper, and kafka data by replacing the files in the local storage directories with the files that are extracted from the archives. Run the following commands:
    rm -r /var/lib/icp/va/minio/*
    rm -r /var/lib/icp/va/zookeeper/*
    rm -r /var/lib/icp/va/kafka/*
    tar -C /var/lib/icp/va/minio -xzf ~/minio-backup.tar.gz --strip-components 2
    tar -C /var/lib/icp/va/zookeeper -xzf ~/zookeeper-backup.tar.gz --strip-components 2
    tar -C /var/lib/icp/va/kafka -xzf ~/kafka-backup.tar.gz --strip-components 2
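  Before you remove the original data on the master node, you can optionally confirm that the three archives were created and copied completely. This is a sketch only; the paths assume the default storage directories that are used in the previous step:
    tar -tzf ~/minio-backup.tar.gz | head
    tar -tzf ~/zookeeper-backup.tar.gz | head
    tar -tzf ~/kafka-backup.tar.gz | head
    du -sh ~/minio-backup.tar.gz ~/zookeeper-backup.tar.gz ~/kafka-backup.tar.gz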
- Migrate the VA services to your new VA node with the following steps:
  - Disable the VA services on your master nodes. See Disable services.
    - You can disable the VA services from your config.yaml file. Update the management_services parameter. Your config.yaml file might resemble the following contents:
      management_services:
        vulnerability-advisor: disabled
    - You can also disable the VA services by running the following command:
      docker run --rm -t -e LICENSE=accept --net=host -v $(pwd):/installer/cluster ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.2.1-ee addon
  - Delete the PersistentVolumeClaims and PersistentVolumes of the minio, zookeeper, and kafka services with the following steps:
    - Get the PersistentVolumeClaims and PersistentVolumes of minio, zookeeper, and kafka by running the following commands:
      kubectl get pvc -n kube-system | grep minio
      kubectl get pv | grep minio
      kubectl get pvc -n kube-system | grep zookeeper
      kubectl get pv | grep zookeeper
      kubectl get pvc -n kube-system | grep kafka
      kubectl get pv | grep kafka
    - Delete the PersistentVolumeClaims and PersistentVolumes of the minio, zookeeper, and kafka services by running the following commands:
      kubectl delete pvc <persistent-volume-claim-name> -n kube-system
      kubectl delete pv <persistent-volume-name>
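    If several objects match, a short loop can delete them all at once. This is a sketch only, and it assumes the same name patterns that the grep commands in the previous step use:
      for svc in minio zookeeper kafka; do
        kubectl get pvc -n kube-system --no-headers | awk -v s=$svc '$0 ~ s {print $1}' | xargs -r -n1 kubectl delete pvc -n kube-system
        kubectl get pv --no-headers | awk -v s=$svc '$0 ~ s {print $1}' | xargs -r -n1 kubectl delete pv
      done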
  - Enable the services on your master nodes.
    - You can enable the services from your config.yaml file. Update the management_services parameter. Your config.yaml file might resemble the following contents:
      management_services:
        vulnerability-advisor: enabled
    - You can also enable the services by running the following command:
      docker run --rm -t -e LICENSE=accept --net=host -v $(pwd):/installer/cluster ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.2.1-ee addon
- Verify that your pods are transferred to your new VA node by running the following command:
  kubectl get pods -n kube-system -o custom-columns=Name:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName
  Your output might resemble the following information:
  vulnerability-advisor-compliance-annotator   Running   <new-node-name>
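  As an alternative check, you can print the node for each Vulnerability Advisor pod and filter on the pod name prefix, as shown in this sketch:
    kubectl get pods -n kube-system -o wide | grep vulnerability-advisor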
Your Vulnerability Advisor node roles are transferred to a new node.