Troubleshooting audit logs
Cannot see audit logs in Kibana
The issue might be due to any of the following reasons:
- The `audit-logging-fluentd` pods are deployed but remain in a `ContainerCreating` state because they require a secret from `logging-elk-certs`.

  Audit logging has a dependency on the logging service. If the logging service is not deployed, the audit logging pods do not run and audit logging does not function. If the logging service is not deployed, deploy it from the Catalog in the management console. After the logging service is deployed, delete the `audit-logging` chart in one of the following ways:

  - Use the management console to delete the chart:
    1. Log in to the management console.
    2. Navigate to Workload > Helm Releases.
    3. Locate the `audit-logging` release.
    4. Click ... > Delete.

  - Use the Helm CLI to delete the chart:
    1. Install the Helm CLI. For more information, see Installing the Helm CLI (helm).
    2. Run the following command:

       ```
       helm delete --purge audit-logging --tls
       ```

  Then, redeploy the `audit-logging` chart by using the management console:
  1. Log in to the management console.
  2. Click Catalog.
  3. Search for the `audit-logging` chart.
  4. Install the chart.
- Audit logging is disabled by default. If you need to generate audit logs for a service, you must enable it for that service. For more information, see Audit logging in IBM Cloud Private.
- The `AUDIT` flag is set to `true` in the ConfigMap of the service, but you still cannot see audit logs in Kibana.

  After you set the `AUDIT` flag to `true` in the ConfigMap of the service, check whether the related service pods restarted. For more information, see Audit logging in IBM Cloud Private.

- The `AUDIT` flag is set to `true` in the ConfigMap of the service, and the related pods restarted, but you still cannot see audit logs in Kibana.
  - Check whether your role-based access control (RBAC) role has access to the logs. Only the `auditor` and `cluster administrator` roles have privileges to see the audit logs.

  - Check whether the `audit` index is created in Kibana. If it is not created, create it by completing the following steps:
    1. Open the Kibana dashboard.
    2. Navigate to Management > Index Patterns.
    3. Click Create index pattern.
    4. Add the index pattern as `audit-*`.
    5. Select `@timestamp` in the Time Filter field name drop-down menu.
    6. Click Create index pattern.

    You can see the audit logs in the Discover section of the dashboard. The `audit-*` index must display in the Selected Fields section.
  - If you still cannot see the audit logs, check the audit log flow and identify where it breaks. The audit log flow from the service pod to the Kibana dashboard is: pods generate audit logs > Journald > Fluentd > Elasticsearch > Kibana.

    1. Install `kubectl`. For more information, see Installing the Kubernetes CLI (kubectl).
    2. Find the IP address of the service pod that has audit logging enabled.

       ```
       kubectl -n kube-system get pods -o wide | grep <service name or pod name of the service>
       ```

    3. Use Secure Shell (SSH) to connect to that node and check whether audit logs are reaching `journald`.

       ```
       journalctl -t 'icp-audit'
       journalctl -t 'icp-audit' -o json-pretty
       ```

    4. If you do not find any logs, check whether `journald` is working by writing a test message. Then, repeat step 3.

       ```
       echo "Audit log testing message." | systemd-cat -t icp-audit
       ```

    5. Check whether the `fluentd` pods and logging pods are running.

       ```
       kubectl -n kube-system get pods
       ```

    6. Analyze the `fluentd` pod logs to check whether `fluentd` is connected to ELK.

       ```
       kubectl -n kube-system logs <fluentd pod name>
       ```

       Note: The `fluentd` pod name starts with `audit-logging-fluentd-ds-`.

       If there is no error in the log and you can see the following text in the first few lines of the log, then `fluentd` is successfully connected to ELK:

       ```
       Connection opened to Elasticsearch cluster => {:host=>"elasticsearch", :port=>9200, :scheme=>"https"}
       ```

       If you do not see this line in the log, check whether the logging service is installed and running.
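The connectivity check in the last step comes down to spotting the "Connection opened" line in the `fluentd` log. Here is a minimal sketch of that check run against a saved log excerpt; the log line below is illustrative sample text, not output captured from a live pod:

```shell
# Sample fluentd log excerpt (illustrative only; not from a real pod).
log='2019-06-01 00:00:00 +0000 [info]: Connection opened to Elasticsearch cluster => {:host=>"elasticsearch", :port=>9200, :scheme=>"https"}'

# A count of 1 or more means fluentd reported a successful ELK connection.
printf '%s\n' "$log" | grep -c 'Connection opened to Elasticsearch cluster'
```

Against a real cluster, you would pipe `kubectl -n kube-system logs <fluentd pod name>` into the same `grep`.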
Enable audit logging but do not send logs to ELK
If you want to enable audit logging, but do not want to send the logs to ELK, complete the following steps:
1. Edit the `audit-logging-fluentd-ds-config` ConfigMap.

   ```
   kubectl -n kube-system edit configmap audit-logging-fluentd-ds-config
   ```

2. Remove the following ELK configuration from the `fluentd` configuration:

   ```
   <match icp-audit kube-audit>
     @type elasticsearch
     @log_level info
     type_name fluentd
     hosts elasticsearch:9200
     type_name fluentd
     id_key _hash
     remove_keys _hash
     logstash_format true
     logstash_prefix audit
     scheme https
     ssl_version TLSv1_2
     ca_file /fluentd/etc/tls/ca.crt
     client_cert /fluentd/etc/tls/curator.crt
     client_key /fluentd/etc/tls/curator.key
     client_key_pass "#{ENV["APP_KEYSTORE_PASSWORD"]}"
     <buffer>
       flush_thread_count 8
       flush_interval 5s
       chunk_limit_size 2M
       queue_limit_length 32
       retry_max_interval 30
       retry_forever true
     </buffer>
   </match>
   ```

   Note: Edit the ConfigMap carefully. If you accidentally delete a space or add a line, the `fluentd` pods might show an error.
3. Restart the `fluentd` pods in one of the following ways:

   - Use the IBM® Cloud Private management console to restart the pods:
     a. Log on to the IBM® Cloud Private management console.
     b. From the navigation menu, click Workloads > DaemonSets > audit-logging-fluentd-ds.
     c. Remove all pods.

   - Use `kubectl` to restart the pods:

     ```
     kubectl -n kube-system get pod -o wide | grep audit-logging-fluentd-ds- | awk '{print $1}' | xargs kubectl delete pod -n kube-system
     ```

   Kubernetes restarts the `fluentd` pods with the updated configuration.
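The `kubectl` restart pipeline selects pods by name prefix before deleting them. Its name-extraction stage can be previewed safely on saved output; the pod names and columns below are illustrative sample data, not from a live cluster:

```shell
# Simulated `kubectl -n kube-system get pod -o wide` output (sample data only).
sample='audit-logging-fluentd-ds-7kx2p   1/1   Running   0   3d   10.1.2.3   worker-1
logging-elk-data-0               1/1   Running   0   9d   10.1.2.4   worker-2'

# Same grep/awk stages as the deletion pipeline, without the destructive xargs step.
printf '%s\n' "$sample" | grep audit-logging-fluentd-ds- | awk '{print $1}'
```

Only pod names that match the `audit-logging-fluentd-ds-` prefix survive the filter, which is what keeps the `xargs kubectl delete pod` stage from touching unrelated pods.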
Error when upgrading audit-logging
When you use the web console to upgrade `audit-logging`, an error that resembles the following message appears at the top of the Upgrade modal:

```
Invalid request : rpc error: code = Unknown desc = DaemonSet.apps "audit-logging-fluentd-ds" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"role":"fluentd", "app":"audit-logging-fluentd", "component":"fluentd", "heritage":"Tiller", "release":"audit-logging"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
```

If you see this error, you must use the command line interface (CLI). Complete the following steps to upgrade `audit-logging`.
1. Delete all `audit-logging-fluentd-ds-` ConfigMaps.

   ```
   kubectl get cm -n kube-system -o wide | grep audit-logging-fluentd-ds- | awk '{print $1}' | xargs kubectl delete cm -n kube-system
   ```

2. Upgrade the `audit-logging` Helm release.

   ```
   helm upgrade audit-logging mgmt-charts/audit-logging --force -f audit-value.yaml --version <newer version> --tls
   ```

   Alternatively, you can delete the `audit-logging` Helm release and install a newer version.
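Because step 1 deletes ConfigMaps in bulk, it can help to preview exactly what the pipeline would remove by swapping the delete stage for an `echo`. A sketch on simulated `kubectl get cm` output follows; the ConfigMap names are illustrative, not taken from a real cluster:

```shell
# Simulated `kubectl get cm -n kube-system -o wide` output (sample names only).
sample='audit-logging-fluentd-ds-config          1   12d
audit-logging-fluentd-ds-source-config   1   12d
logging-elk-elasticsearch-config         1   30d'

# Prefixing the kubectl command with echo prints it instead of running it.
printf '%s\n' "$sample" | grep audit-logging-fluentd-ds- | awk '{print $1}' \
  | xargs echo kubectl delete cm -n kube-system
```

Once the printed command lists only the ConfigMaps you expect, drop the `echo` to run the real deletion.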