Troubleshooting Analytics start-up

If you receive an error starting the Analytics service after installing, upgrading, or recovering from a catastrophic failure of IBM® API Connect, you might be able to resolve it by completing this task.

In rare cases, the Kibana index can fail, and the Analytics user interface displays the following error message:

Uh oh! An error has occurred

This error typically indicates that the index was corrupted. Complete the following steps to verify the problem and delete the corrupted index.

Before you begin, determine the names of the pods that the steps use: one of the deployment's storage-coordinating pods (called analytics-storage-coordinating-pod in the steps that follow) and one of the client pods (called analytics-client-pod). To see the pod names, run the following command:

kubectl get pods

For example, in a sample deployment, a storage-coordinating pod is named r70eaa1a0f2-analytics-storage-coordinating-5d87d4c76-btpfq and a client pod is named r70eaa1a0f2-analytics-client-68c499d5f9-7g7vm.
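
If the deployment runs many pods, you can narrow the listing to just the analytics pods. This is a minimal sketch: the grep patterns assume the pod naming shown above, and you might need to add a -n <namespace> flag if the analytics pods do not run in your current namespace:

# List only the storage-coordinating pods
kubectl get pods | grep analytics-storage-coordinating
# List only the client pods
kubectl get pods | grep analytics-client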
  1. Verify that you have both the .kibana and .kibana-6 indexes by running the following command:
    kubectl exec -it analytics-storage-coordinating-pod -- curl_es /_cat/indices?v

In the index column of the response, look for both .kibana and .kibana-6.

If the response includes both indexes, there was a problem with index creation. Proceed to the next step.

    Note: If you received the Uh oh! An error has occurred page and the response to this command only includes the .kibana-6 index, then something else is wrong with the cluster and you should contact IBM Support for assistance.
Make the .kibana index writable (clear its write block) by running the following command, replacing analytics-storage-coordinating-pod with the name of the storage-coordinating pod that you determined at the beginning of this task:
    kubectl exec -it analytics-storage-coordinating-pod -- curl_es -XPUT .kibana/_settings -d '{"index.blocks.write":false}'

When you see the response {"acknowledged":true}, continue to the next step. To double-check the setting, see the settings check example after these steps.

    Note: If you see a different response, make sure the request was correct and try again. If the command still does not work, then something else is wrong with the cluster and you should contact IBM Support for assistance.
  3. Delete the .kibana-6 index by running the following command:
    kubectl exec -it analytics-storage-coordinating-pod -- curl_es -XDELETE .kibana-6

When you see the response {"acknowledged":true}, the deletion was successful; continue to the next step. To confirm that the index is gone, see the index check example after these steps.

    Note: If you see a different response, then the index was not deleted. Make sure the request was correct and try again. If the delete operation still fails, then something else is wrong with the cluster and you should contact IBM Support for assistance.
  4. Restart a single analytics client pod by running the following command and replacing analytics-client-pod with the name of the pod (which you determined at the beginning of this task):
    kubectl delete pod analytics-client-pod

    Sample response: pod "analytics-client-pod" deleted

Wait for the response to confirm that the pod was deleted, and then wait a few minutes for a replacement pod to start automatically. To check the status of the pods, run the following command (or see the pod status example after these steps):
    kubectl get pods
  5. Navigate to the Analytics page and refresh to verify that the dashboard now displays correctly.
  6. (Optional) If you have a backup of the ui or .kibana-6 index, you can restore it now as explained in Backing up and restoring the analytics database.
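
Settings check (for step 2): To double-check that the write block was removed from the .kibana index, you can read the settings back. This is a minimal sketch that assumes the curl_es helper accepts a path and query string the same way it does in the commands above:

# Show the current settings for the .kibana index;
# index.blocks.write should be false (or absent)
kubectl exec -it analytics-storage-coordinating-pod -- curl_es '.kibana/_settings?pretty'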
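
Index check (for step 3): To confirm that the deletion removed the .kibana-6 index, list the indexes again and filter the output. A minimal sketch; the grep filter is only for readability:

# Only the .kibana index should remain in the filtered listing
kubectl exec -it analytics-storage-coordinating-pod -- curl_es '/_cat/indices?v' | grep kibana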
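
Pod status (for step 4): Rather than re-running kubectl get pods, you can watch the pod list until the replacement client pod reports Running. This sketch uses only the standard kubectl watch flag:

# Watch status changes; press Ctrl+C when the new analytics-client pod is Running
kubectl get pods -w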