Troubleshooting Minio
Review frequently encountered Minio issues.
Gathering information
To troubleshoot Minio issues, gather the following information:
Note: You need to set up kubectl CLI to run these commands. For more information, see Accessing your cluster from the kubectl CLI.
- IBM Cloud Private version.
- Architecture type of the nodes in your cluster. For example, Linux® x86_64 or Linux® on Power® (ppc64le).
- Version of the Minio Helm chart that you installed. Use the following command:
helm list --tls | grep mini
Following is a sample output:
minio 1 Fri Sep 14 05:10:28 2018 DEPLOYED ibm-minio-objectstore-1.6.0 default
- Minio deployment or statefulset state. Use the following commands:
kubectl get statefulsets
Following is a sample output:
NAME                          DESIRED   CURRENT   AGE
minio-ibm-minio-objectstore   4         4         2m
kubectl describe statefulsets
Following is a sample output:
Name:               minio-ibm-minio-objectstore
Namespace:          default
CreationTimestamp:  Fri, 14 Sep 2018 05:10:37 -0700
Selector:           app=ibm-minio-objectstore,release=minio
Labels:             app=ibm-minio-objectstore
                    chart=ibm-minio-objectstore-1.6.0
                    heritage=Tiller
...
- Minio service status. Use the following commands:
- Get the service.
kubectl get svc
Following is a sample output:
NAME                          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes                    ClusterIP   10.0.0.1     <none>        443/TCP    10d
minio-ibm-minio-objectstore   ClusterIP   10.0.0.68    <none>        9000/TCP   4m
- Get the service description.
kubectl describe svc
Following is a sample output:
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.0.0.1
Port:              https 443/TCP
TargetPort:        8001/TCP
Endpoints:         10.41.1.182:8001
Session Affinity:  None
Events:            <none>
...
Name:              minio-ibm-minio-objectstore
Namespace:         default
Labels:            app=ibm-minio-objectstore
                   chart=ibm-minio-objectstore-1.6.0
                   heritage=Tiller
                   release=minio
Annotations:       prometheus.io/path=/minio/prometheus/metrics
                   prometheus.io/port=9000
                   prometheus.io/scrape=false
Selector:          app=ibm-minio-objectstore,release=minio
Type:              ClusterIP
IP:                10.0.0.68
Port:              service 9000/TCP
TargetPort:        9000/TCP
Endpoints:         10.1.137.211:9000,10.1.180.251:9000,10.1.236.84:9000 + 1 more...
Session Affinity:  None
Events:            <none>
- Minio server pods, logs, and description.
- Get all Minio pods.
kubectl get po | grep mini
Following is a sample output:
NAME                            READY   STATUS    RESTARTS   AGE
minio-ibm-minio-objectstore-0   1/1     Running   0          4m
minio-ibm-minio-objectstore-1   1/1     Running   0          4m
minio-ibm-minio-objectstore-2   1/1     Running   0          4m
minio-ibm-minio-objectstore-3   1/1     Running   0          4m
- Get logs and description of all pods. Following is an example command:
kubectl describe po minio-ibm-minio-objectstore-0
Following is a sample output:
Name:               minio-ibm-minio-objectstore-0
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               10.41.4.202/10.41.4.202
Start Time:         Fri, 14 Sep 2018 05:10:37 -0700
Labels:             app=ibm-minio-objectstore
                    chart=ibm-minio-objectstore-1.6.0
                    controller-revision-hash=minio-ibm-minio-objectstore-7b77fd5658
                    heritage=Tiller
                    release=minio
                    statefulset.kubernetes.io/pod-name=minio-ibm-minio-objectstore-0
Annotations:        kubernetes.io/psp=00-rook-ceph-operator
                    productID=Minio_RELEASE.2018-08-21T00-37-20Z_free_00000
                    productName=Minio
                    productVersion=RELEASE.2018-08-21T00-37-20Z
                    scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 10.1.236.84
Controlled By:      StatefulSet/minio-ibm-minio-objectstore
Containers:
  ibm-minio-objectstore:
    Container ID:  docker://5e71782564d1c956d6855006f06472773da59ad22743a52bb64f83f4ac0ccf02
    Image:         minio/minio:RELEASE.2018-08-21T00-37-20Z
    Image ID:      docker-pullable://minio/minio@sha256:3145ff901d491f46e59dd9fb79dc2771e75a524bbfdba8fa8cd35723960fe7d5
    Port:          9000/TCP
    Host Port:     0/TCP
    Command:
      /bin/sh
      -ce
      cp /tmp/config.json /root/.minio/ && /usr/bin/docker-entrypoint.sh minio -C /root/.minio/ server http://minio-ibm-minio-objectstore-0.minio-ibm-minio-objectstore.default.svc.cluster.local/export http://minio-ibm-minio-objectstore-1.minio-ibm-minio-objectstore.default.svc.cluster.local/export http://minio-ibm-minio-objectstore-2.minio-ibm-minio-objectstore.default.svc.cluster.local/export http://minio-ibm-minio-objectstore-3.minio-ibm-minio-objectstore.default.svc.cluster.local/export
    State:          Running
      Started:      Fri, 14 Sep 2018 05:10:39 -0700
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     250m
      memory:  256Mi
    Environment:
      MINIO_ACCESS_KEY:  <set to the key 'accesskey' in secret 'minio'>  Optional: false
      MINIO_SECRET_KEY:  <set to the key 'secretkey' in secret 'minio'>  Optional: false
    Mounts:
      /export from export (rw)
      /root/.minio/ from minio-config-dir (rw)
      /tmp/config.json from minio-server-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tzxvl (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  export:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  export-minio-ibm-minio-objectstore-0
    ReadOnly:   false
  minio-user:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  minio
    Optional:    false
  minio-server-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      minio-ibm-minio-objectstore
    Optional:  false
  minio-config-dir:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  default-token-tzxvl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-tzxvl
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 dedicated
                 node.kubernetes.io/memory-pressure:NoSchedule
Events:
  Type    Reason     Age   From                  Message
  ----    ------     ---   ----                  -------
  Normal  Scheduled  5m    default-scheduler     Successfully assigned default/minio-ibm-minio-objectstore-0 to 10.41.4.202
  Normal  Pulled     5m    kubelet, 10.41.4.202  Container image "minio/minio:RELEASE.2018-08-21T00-37-20Z" already present on machine
  Normal  Created    5m    kubelet, 10.41.4.202  Created container
  Normal  Started    5m    kubelet, 10.41.4.202  Started container
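The preceding commands inspect one pod at a time. To capture logs and descriptions for every Minio server pod in one pass, you can loop over the pods that carry the release label shown in the sample output. This is a sketch that assumes the example release name minio:

```shell
# Collect logs and descriptions from every Minio server pod.
# The "release=minio" selector matches the Labels in the sample output;
# change it if your Helm release has a different name.
for pod in $(kubectl get po -l release=minio -o name); do
  name=$(basename "$pod")            # strip the "pod/" prefix for the file name
  kubectl logs "$pod" > "${name}.log"
  kubectl describe "$pod" > "${name}.describe.txt"
done
```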
- Information about the persistent volume claim (PVC), if you used dynamic storage provisioning.
- Get all PVCs.
kubectl get pvc
Following is a sample output:
NAME                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
export-minio-ibm-minio-objectstore-0   Bound    pvc-a35afd44-b811-11e8-bc28-00000a2901b6   5Gi        RWO            rook-ceph-block   46m
export-minio-ibm-minio-objectstore-1   Bound    pvc-a71ea92b-b811-11e8-bc28-00000a2901b6   5Gi        RWO            rook-ceph-block   46m
export-minio-ibm-minio-objectstore-2   Bound    pvc-ab6f00af-b811-11e8-bc28-00000a2901b6   5Gi        RWO            rook-ceph-block   46m
export-minio-ibm-minio-objectstore-3   Bound    pvc-b27b35fc-b811-11e8-bc28-00000a2901b6   5Gi        RWO            rook-ceph-block   45m
- Get information about a PVC. Following is a sample command:
kubectl describe pvc export-minio-ibm-minio-objectstore-0
Following is a sample output:
Name:          export-minio-ibm-minio-objectstore-0
Namespace:     default
StorageClass:  rook-ceph-block
Status:        Bound
Volume:        pvc-a35afd44-b811-11e8-bc28-00000a2901b6
Labels:        app=ibm-minio-objectstore
               release=minio
Annotations:   control-plane.alpha.kubernetes.io/leader={"holderIdentity":"ee888338-b654-11e8-86f0-16fe371b5da0","leaseDurationSeconds":15,"acquireTime":"2018-09-14T11:30:52Z","renewTime":"2018-09-14T11:30:57Z","lea...
               pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               volume.beta.kubernetes.io/storage-provisioner=ceph.rook.io/block
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      5Gi
Access Modes:  RWO
Events:
  Type    Reason                 Age                From                                                                                        Message
  ----    ------                 ---                ----                                                                                        -------
  Normal  Provisioning           46m                ceph.rook.io/block rook-ceph-operator-5f84847c67-c6nzl ee888338-b654-11e8-86f0-16fe371b5da0 External provisioner is provisioning volume for claim "default/export-minio-ibm-minio-objectstore-0"
  Normal  ExternalProvisioning   46m (x2 over 46m)  persistentvolume-controller                                                                 waiting for a volume to be created, either by external provisioner "ceph.rook.io/block" or manually created by system administrator
  Normal  ProvisioningSucceeded  46m                ceph.rook.io/block rook-ceph-operator-5f84847c67-c6nzl ee888338-b654-11e8-86f0-16fe371b5da0 Successfully provisioned volume pvc-a35afd44-b811-11e8-bc28-00000a2901b6
Troubleshooting common issues
Minio pods get stuck with ContainerCreating status
When Minio is deployed in any mode, the first Minio pod might get stuck in the ContainerCreating status.
Gather information about the issue
- Get the list of pods.
kubectl get po
Following is a sample output:
NAME                                           READY   STATUS              RESTARTS   AGE
mc2                                            1/1     Running             53         2d
minio-ibm-minio-objectstore-848fbcb6f5-2wpq2   0/1     ContainerCreating   0          3m
- Check the logs. If the logs are empty, describe the pod.
kubectl logs minio-ibm-minio-objectstore-848fbcb6f5-2wpq2
Following is a sample output:
Error from server (BadRequest): container "ibm-minio-objectstore" in pod "minio-ibm-minio-objectstore-848fbcb6f5-2wpq2" is waiting to start: ContainerCreating
- Get the pod description.
kubectl describe po minio-ibm-minio-objectstore-848fbcb6f5-2wpq2
Following is a sample output:
Name:               minio-ibm-minio-objectstore-848fbcb6f5-2wpq2
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               10.41.4.202/10.41.4.202
Start Time:         Fri, 14 Sep 2018 05:52:01 -0700
...
Events:
  Type     Reason       Age               From                  Message
  ----     ------       ---               ----                  -------
  Normal   Scheduled    5m                default-scheduler     Successfully assigned default/minio-ibm-minio-objectstore-848fbcb6f5-2wpq2 to 10.41.4.202
  Warning  FailedMount  1m (x10 over 5m)  kubelet, 10.41.4.202  MountVolume.SetUp failed for volume "minio-user" : secrets "minio" not found
  Warning  FailedMount  1m (x2 over 3m)   kubelet, 10.41.4.202  Unable to mount volumes for pod "minio-ibm-minio-objectstore-848fbcb6f5-2wpq2_default(f9036aa0-b81c-11e8-bc28-00000a2901b6)": timeout expired waiting for volumes to attach or mount for pod "default"/"minio-ibm-minio-objectstore-848fbcb6f5-2wpq2". list of unmounted volumes=[minio-user]. list of unattached volumes=[export minio-server-config minio-user minio-config-dir default-token-tzxvl]
The pod description indicates that the pod cannot mount the volume because the Minio secret is unavailable.
Check whether the secret is available in the namespace where Minio is deployed.
kubectl get secret minio
Following is a sample output:
No resources found. Error from server (NotFound): secrets "minio" not found
The output indicates that the secret is not available in the namespace. Create the secret by following the instructions in the readme file.
Resolve the issue
To resolve the issue, complete the following steps:
- Delete the Helm release.
- Add the secret to the Helm chart configuration.
- Deploy the Helm chart.
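A sketch of these steps with shell commands, using the example release name minio. The chart location (ibm-charts/ibm-minio-objectstore), the credentials, and the existingSecret parameter name are assumptions; check the chart's readme for the exact values that your version expects:

```shell
# 1. Delete the failed Helm release ("minio" is the example release name).
helm delete minio --purge --tls

# 2. Create the secret that the chart expects.
#    The key names and credential values below are hypothetical examples.
kubectl create secret generic minio \
  --from-literal=accesskey=admin \
  --from-literal=secretkey=admin1234

# 3. Redeploy the chart, pointing it at the secret.
#    "existingSecret" is an assumed parameter name; verify it in the chart readme.
helm install ibm-charts/ibm-minio-objectstore --name minio \
  --set existingSecret=minio --tls
```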
Minio server pod gets stuck in Pending status
When Minio is deployed in distributed mode with dynamic storage allocation, the server pod might get stuck in the Pending status.
Gather information about the issue
- Get the list of pods.
kubectl get po
Following is a sample output:
NAME                            READY   STATUS    RESTARTS   AGE
mc2                             1/1     Running   54         2d
minio-ibm-minio-objectstore-0   0/1     Pending   0          7s
- Get the pod description.
kubectl describe po minio-ibm-minio-objectstore-0
Following is a sample output:
Name:               minio-ibm-minio-objectstore-0
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=ibm-minio-objectstore
                    chart=ibm-minio-objectstore-1.6.0
                    controller-revision-hash=minio-ibm-minio-objectstore-7b77fd5658
                    heritage=Tiller
                    release=minio
                    statefulset.kubernetes.io/pod-name=minio-ibm-minio-objectstore-0
...
Volumes:
  export:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  export-minio-ibm-minio-objectstore-0
    ReadOnly:   false
...
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ---                 ----               -------
  Warning  FailedScheduling  14s (x25 over 57s)  default-scheduler  pod has unbound PersistentVolumeClaims (repeated 5 times)
The output indicates that the PVCs are unbound.
- Describe the PVC.
kubectl describe pvc export-minio-ibm-minio-objectstore-0
Following is a sample output:
Name:          export-minio-ibm-minio-objectstore-0
Namespace:     default
StorageClass:  standard
Status:        Pending
Volume:
Labels:        app=ibm-minio-objectstore
               release=minio
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
  Type     Reason              Age               From                         Message
  ----     ------              ---               ----                         -------
  Warning  ProvisioningFailed  8s (x19 over 4m)  persistentvolume-controller  storageclass.storage.k8s.io "standard" not found
The output indicates that the persistent volume claim is trying to bind through the storage class named standard. Check whether the storage class exists in your cluster.
kubectl get sc standard
Following is a sample output:
No resources found. Error from server (NotFound): storageclasses.storage.k8s.io "standard" not found
The output indicates that the storage class does not exist.
Resolve the issue
To resolve the issue, complete the following steps:
- Install suitable block storage, such as GlusterFS or Ceph, in your cluster.
- Ensure that the block storage has a storage class.
- Add the storage class in the Helm chart configuration.
- Deploy the Helm chart.
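A sketch of these steps as shell commands. The persistence.storageClass parameter name is an assumption based on common Minio chart conventions, and rook-ceph-block is the storage class from the earlier sample output; substitute the values from your own cluster and the chart readme:

```shell
# Confirm that a storage class exists after you install block storage.
kubectl get sc

# Redeploy the chart with the storage class set.
# "persistence.storageClass" is an assumed parameter name; check the chart readme.
helm install ibm-charts/ibm-minio-objectstore --name minio \
  --set persistence.storageClass=rook-ceph-block --tls
```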
Minio in distributed mode is not accessible when you provide a TLS certificate
When you connect to the Minio server through the Minio Client, or any S3-compatible client, you see the error "Server not initialized".
Gather information about the issue
- Access the container.
kubectl exec -it mc2 sh
Following is a sample output:
/ # mc config host add myminio https://minio-ibm-minio-objectstore:9000 admin admin1234 S3v4 --insecure
mc: <ERROR> Unable to initialize new config from the provided credentials. Server not initialized, please try again.
/ #
- Check the pod status.
kubectl get po
Following is a sample output:
NAME                            READY   STATUS    RESTARTS   AGE
mc2                             1/1     Running   52         2d
minio-ibm-minio-objectstore-0   1/1     Running   0          4m
minio-ibm-minio-objectstore-1   1/1     Running   0          4m
minio-ibm-minio-objectstore-2   1/1     Running   0          4m
minio-ibm-minio-objectstore-3   1/1     Running   0          4m
The output indicates that all the pods are running.
- Check the pod logs.
kubectl logs minio-ibm-minio-objectstore-0
Following is a sample output:
You are running an older version of Minio released 3 weeks ago
Update: https://docs.minio.io/docs/deploy-minio-on-kubernetes
Waiting for a minimum of 2 disks to come online (elapsed 0s)
Waiting for a minimum of 2 disks to come online (elapsed 1s)
Waiting for a minimum of 2 disks to come online (elapsed 2s)
Waiting for a minimum of 2 disks to come online (elapsed 7s)
...
The output indicates that the server replicas are not able to communicate with each other. The issue might be with the TLS certificate.
Resolve the issue
Ensure that you generate the TLS certificate for the Minio servers with a common name (CN) in the following format:
"/CN=*.<chart deployment name>-ibm-minio-objectstore.<namespace>.svc.<cluster domain name>"
This step is a requirement for Minio servers that are configured with a TLS certificate.
The following example shows the steps for generating a certificate for a Minio deployment with these values:
- Helm release name: minio
- Namespace for deployment: default
- Cluster domain name: cluster.local
openssl genrsa -out private.key 2048
openssl req -new -x509 -days 3650 -key private.key -out public.crt -subj "/CN=*.minio-ibm-minio-objectstore.default.svc.cluster.local"
cp public.crt ca.crt
kubectl create secret generic tls-ssl-minio --from-file=./private.key --from-file=./public.crt --from-file=./ca.crt
Note: The certificate is generated for a specific combination of Helm release name, namespace, and cluster domain name. The certificate does not work if any of these values is different. You must create a new certificate for any other combination of values.
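To confirm that a generated certificate has the expected CN before you deploy, you can inspect it with openssl. This quick check assumes the public.crt file from the example above:

```shell
# Print the certificate subject; it should contain
# CN=*.minio-ibm-minio-objectstore.default.svc.cluster.local
# (the release name, namespace, and cluster domain from the example).
openssl x509 -in public.crt -noout -subject
```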