Troubleshooting
Problem
When running kubectl commands on the Turbonomic OVA command line, you get one of the following errors:
The connection to the server <IP>:<PORT> was refused - did you specify the right host or port?

Or:

error: You must be logged in to the server (Unauthorized)

Cause
The internal certificates or configuration files for Kubernetes or etcd have expired and need to be renewed.
Resolving The Problem
Take a snapshot of the VM before executing any scripts.
Note (please read carefully):
As of Turbonomic version 8.13.0, the scripts under /opt/local/bin used to renew certificates are valid only for Rocky Linux VMs, not for CentOS VMs. These scripts have also been renamed for the Rocky-only VMs. For deployments running Turbonomic 8.13.x that are still on CentOS, the /opt/local/ directory contains separate subdirectories for each earlier Turbonomic version (8.12.x and older). These directories are created with each update to back up all the scripts of earlier versions. The previous CentOS versions of the scripts used to update the Kubernetes certificates (kubeNodeCertUpdate.sh and kubeNodeIPChange.sh) are found under those directories.

If you are updating the Kubernetes certificates for Turbonomic 8.13.0 or newer AND running on CentOS, we also recommend taking a backup of the file /etc/kubernetes/kubeadm-config.yaml by running:

cp /etc/kubernetes/kubeadm-config.yaml /opt/turbonomic/kubeadm-config.yaml.bak

Renewing Kubernetes Certificates and Configuration Files
Start by running the following command to check the status of the Kubernetes certificates and configuration files:

sudo kubeadm certs check-expiration

If the previous command fails, run the following instead:

sudo kubeadm alpha certs check-expiration

If both commands fail, please contact support.
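The two checks above can be combined into a small helper that tries the modern subcommand first and falls back to the legacy form automatically. This is only an illustrative sketch, not a script shipped with Turbonomic; run it as root on the OVA.

```shell
#!/bin/sh
# Illustrative helper (not part of the product): try the modern
# "kubeadm certs" subcommand first, then the legacy "kubeadm alpha certs"
# form used by older kubeadm releases.
check_cert_expiration() {
    if kubeadm certs check-expiration 2>/dev/null; then
        return 0
    fi
    echo "Modern subcommand failed; trying the legacy form..." >&2
    if kubeadm alpha certs check-expiration 2>/dev/null; then
        return 0
    fi
    echo "Both commands failed; contact support." >&2
    return 1
}

check_cert_expiration || true   # "|| true" keeps the shell session alive on failure
```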
An example of a good output is:
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0104 16:25:14.389588 222931 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
CERTIFICATE                EXPIRES                 RESIDUAL TIME  CERTIFICATE AUTHORITY  EXTERNALLY MANAGED
admin.conf                 Jun 09, 2024 18:02 UTC  157d           ca                     no
apiserver                  Jul 18, 2024 17:58 UTC  196d           ca                     no
apiserver-kubelet-client   Jul 18, 2024 17:58 UTC  196d           ca                     no
controller-manager.conf    Jun 09, 2024 18:02 UTC  157d           ca                     no
front-proxy-client         Jul 18, 2024 17:58 UTC  196d           front-proxy-ca         no
scheduler.conf             Jun 09, 2024 18:02 UTC  157d           ca                     no

CERTIFICATE AUTHORITY  EXPIRES                 RESIDUAL TIME  EXTERNALLY MANAGED
ca                     Jun 07, 2033 18:01 UTC  9y             no
front-proxy-ca         Jun 07, 2033 18:02 UTC  9y             no

If there are any bad certificates or configuration files, the "EXPIRES" column value will be in the past and the "RESIDUAL TIME" column will say "<invalid>".

If the apiserver, apiserver-kubelet-client, or front-proxy-client certificates are bad, run:
For CentOS Linux VMs and Turbonomic on version 8.13.0 or newer:

sudo /opt/local/bin-8.N.N/kubeNodeCertUpdate.sh

Where 8.N.N is a Turbonomic version prior to 8.13.0, for example, 8.12.5.
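If you are unsure which backed-up directory holds the pre-8.13.0 scripts, a quick way to find the newest one is a version-sorted listing. This is a sketch only: the bin-8.N.N directory naming is assumed from the note above, and the function name is illustrative.

```shell
#!/bin/sh
# Sketch only: list the versioned script backups under /opt/local and
# pick the newest one. The bin-8.N.N layout is assumed from the note
# above; adjust the base path if your appliance differs.
find_legacy_script_dir() {
    base="${1:-/opt/local}"
    ls -d "$base"/bin-8.* 2>/dev/null | sort -V | tail -n 1
}

find_legacy_script_dir    # e.g. prints /opt/local/bin-8.12.5
```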
For Rocky Linux VMs:

sudo /opt/local/bin/k8s-certs-renew.sh

If the output shows issues with etcd certificates, such as:
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration
CERTIFICATE                       EXPIRES                 RESIDUAL TIME  CERTIFICATE AUTHORITY  EXTERNALLY MANAGED
admin.conf                        Jun 09, 2024 18:02 UTC  157d           ca                     no
apiserver                         Jan 03, 2025 16:37 UTC  364d           ca                     no
!MISSING! apiserver-etcd-client
apiserver-kubelet-client          Jan 03, 2025 16:37 UTC  364d           ca                     no
controller-manager.conf           Jun 09, 2024 18:02 UTC  157d           ca                     no
!MISSING! etcd-healthcheck-client
!MISSING! etcd-peer
!MISSING! etcd-server
front-proxy-client                Jan 03, 2025 16:37 UTC  364d           front-proxy-ca         no
scheduler.conf                    Jun 09, 2024 18:02 UTC  157d           ca                     no

CERTIFICATE AUTHORITY  EXPIRES                 RESIDUAL TIME  EXTERNALLY MANAGED
ca                     Jun 07, 2033 18:01 UTC  9y             no
etcd-ca                May 17, 2123 17:57 UTC  99y            no
front-proxy-ca         Jun 07, 2033 18:02 UTC  9y             no

For CentOS Linux VMs:
Run the command:

sudo /opt/local/bin-8.N.N/kubeNodeIPChange.sh

Where 8.N.N is a Turbonomic version prior to 8.13.0, for example, 8.12.5.
If after running the kubeNodeIPChange script it still reports the above certificates as !MISSING!, check /etc/kubernetes to see whether kubeadm-config.yaml and kube-proxy.yaml are empty files. If they are, run the following commands:

cd /etc/kubernetes
sudo mkdir old
sudo mv kubeadm-config.yaml kube-proxy.yaml kubelet.conf scheduler.conf controller-manager.conf admin.conf old/.

After that, run the kubeNodeIPChange script again.
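To confirm whether those files are actually empty before moving them aside, a small check like the following can help. The function name and directory parameter are illustrative, not part of the product.

```shell
#!/bin/sh
# Illustrative check: report any zero-byte Kubernetes config files in the
# given directory (defaults to /etc/kubernetes). "[ -s file ]" is true
# only for files that exist and are non-empty.
list_empty_configs() {
    dir="${1:-/etc/kubernetes}"
    for f in kubeadm-config.yaml kube-proxy.yaml; do
        if [ -e "$dir/$f" ] && [ ! -s "$dir/$f" ]; then
            echo "$dir/$f is empty"
        fi
    done
}

list_empty_configs
```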
For Rocky Linux VMs:
Run the command:
sudo /opt/local/bin/k8s-ip-change.sh

If there are issues with the configuration files (admin.conf, controller-manager.conf, or scheduler.conf), execute the following commands:

cd /etc/kubernetes
sudo mkdir old
sudo mv admin.conf controller-manager.conf scheduler.conf old/.

Then, run one of the scripts from the previous section, depending on which OS you are running: Rocky Linux (/opt/local/bin/k8s-ip-change.sh) or CentOS (/opt/local/bin-8.N.N/kubeNodeIPChange.sh).

After running the scripts to resolve the issue, run the kubeadm check-expiration command again to confirm that everything has been renewed. If successful, the kubectl commands should now be working. If not, please contact support.

Document Location
Worldwide
Document Information
Modified date:
01 October 2024
UID
ibm17105276