Exporting Cloud Pak for Data audit records to your security information and event management solution
You can configure IBM® Cloud Pak for Data to forward audit records to a security information and event management (SIEM) solution, such as Splunk, LogDNA, or QRadar®.
Overview
The Audit Logging Service is automatically installed when you install an instance of Cloud Pak for Data. However, you must enable and configure the Audit Logging Service if you want Cloud Pak for Data to collect and forward Cloud Auditing Data Federation (CADF) compliant audit records from the services that are associated with your Cloud Pak for Data deployment. (For details on the type of information that is included in the audit records, see Sample Cloud Pak for Data CADF Audit Records.)
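While the exact fields are documented in the sample-records topic, a CADF activity record broadly pairs an initiator, an action, an outcome, and a target, observed by the auditing service. The following Python sketch assembles a minimal CADF-style record for illustration only; the field values and type URIs here are illustrative assumptions, not output captured from Cloud Pak for Data.

```python
import json
import uuid
from datetime import datetime, timezone

def make_cadf_record(action, outcome, initiator_name, target_name):
    """Build a minimal CADF-style audit record (illustrative fields only)."""
    return {
        "typeURI": "http://schemas.dmtf.org/cloud/audit/1.0/event",
        "id": str(uuid.uuid4()),
        "eventType": "activity",
        "eventTime": datetime.now(timezone.utc).isoformat(),
        "action": action,    # e.g. "authenticate", "create", "delete"
        "outcome": outcome,  # e.g. "success", "failure"
        "initiator": {"name": initiator_name, "typeURI": "service/security/account/user"},
        "target": {"name": target_name, "typeURI": "data/security"},
        "observer": {"name": "zen-audit", "typeURI": "service/security"},
    }

record = make_cadf_record("authenticate", "success", "admin", "cpd-login")
print(json.dumps(record, indent=2))
```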
The Audit Logging Service is scoped to the project where the Cloud Pak for Data control plane is installed. If you install multiple instances of Cloud Pak for Data on the same cluster, each instance of the Audit Logging Service functions independently. The CADF audit records for each instance are isolated from the other instances, and the records for each instance can be forwarded to different SIEM systems.
You can connect each instance of Cloud Pak for Data to one or more SIEM systems.
- Splunk
- LogDNA
- QRadar
You might be able to use another SIEM solution if it supports Fluentd output plugins. Two of the most commonly used plugins are the TCP/IP (@type forward) and RSYSLOG (@type remote_syslog) plugins.
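For example, a `<store>` section that uses the generic forward plugin might look like the following sketch. The host is a placeholder for your own aggregator, and the exact parameters depend on your plugin version, so consult the plugin's documentation before using it:

```
<store>
  @type forward
  <server>
    host <AGGREGATOR-HOST>   # Replace with the address of your Fluentd aggregator
    port 24224               # Default fluentd forward port
  </server>
  flush_interval 10s
</store>
```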
You can also optionally forward the records to the zen-audit pod stdout log. Although the stdout log is not recommended for long-term audit record management, this configuration helps you confirm that all of the records are forwarded to your SIEM system.
Support for audit logging in services
Audit logging is not supported by certain components and services of Cloud Pak for Data. For more information, see Services that support audit logging.
Connecting to supported SIEM solutions
To connect Cloud Pak for Data to your SIEM solution, you must complete two sets of tasks:
- Actions that you must complete in the external SIEM interface.
- Actions that you must complete on the cluster where Cloud Pak for Data is deployed. Note: To complete this portion of the procedure, you must be a cluster administrator.
If you need to enable TLS to connect to your SIEM system, see Using a TLS certificate to connect to an SIEM system.
Follow the appropriate steps to connect to your SIEM system:
Splunk
To integrate with Splunk, Cloud Pak for Data uses the Splunk HTTP Event Collector output plugin.
From the Splunk dashboard:
- Click Settings > Data Inputs.
- In the HTTP Event Collector section, click Add new.
- Give the Cloud Pak for Data instance a unique name.
- In the Source name override field, enter a name for a source to be assigned to events that this endpoint generates.
- In the Description field, enter a description for the input.
- If you want to enable indexer acknowledgment for this token, click the Enable indexer acknowledgment checkbox.
- Configure the source type by creating a specific Cloud Pak for Data source type, by using the automatic detection option, or by selecting the generic JSON source type.
- Configure App Context and Indexes for the specific use case.
- Click Review and then click Submit.
- Save the Generated Token Value so that it can be used in a later step.
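Before you wire the token into the configmap, you can sanity-check it against the HEC endpoint. The Python sketch below builds (but does not send) the request; the host, port, and token are placeholders, and /services/collector/event is the standard Splunk HEC endpoint path.

```python
import json
import urllib.request

def build_hec_request(host, port, token, event):
    """Build a Splunk HTTP Event Collector request for a single test event."""
    url = f"https://{host}:{port}/services/collector/event"
    body = json.dumps({"event": event, "sourcetype": "_json"}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Splunk {token}",  # HEC uses the "Splunk <token>" scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_hec_request("splunk.example.com", 8088, "YOUR-HEC-TOKEN", "cpd audit test")
# To actually send it (requires network access to your Splunk host):
# urllib.request.urlopen(req)
print(req.full_url)
```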
After you add Cloud Pak for Data to Splunk, connect to the cluster where Cloud Pak for Data is installed:
- Change to the project where Cloud Pak for Data is installed:

```
oc project cpd-instance
```

- Make a backup of the current zen-audit-config configmap.
- Edit the zen-audit-config configmap:

```
oc edit configmap zen-audit-config
```

- Add the <store> configuration to the configmap.

  Tip: The zen-audit-config configmap includes a sample Splunk configuration, which is commented out by default.

  The <store> configuration must be inside the <match export export.** records records.** syslog syslog.**> tag and after the @type copy tag. Ensure that the entry is indented correctly and that you replace the values in the example with the appropriate values for your environment.

  Important: If you want to use TLS to connect to Splunk, see the SSL parameters in the fluent-plugin-splunk-hec readme on GitHub to determine which parameters you need to specify in the <store> section of the configmap.

```
apiVersion: v1
metadata:
  name: zen-audit-config
data:
  ca.pem: |
    -----BEGIN CERTIFICATE-----
    <XXXXXXXXX>
    -----END CERTIFICATE-----
  fluent.conf: |-
    <match export export.** records records.** syslog syslog.**>
      @type copy
      <store>
        @type splunk_hec
        hec_host <SPLUNK-HOST>          # Replace <SPLUNK-HOST> with the address of the Splunk host
        hec_port <SPLUNK-PORT>          # Replace <SPLUNK-PORT>. The default port is 8088
        hec_token <SPLUNK-TOKEN>        # Replace <SPLUNK-TOKEN> with the token that you generated
        flush_interval 10s              # Recommended value
        # Add SSL parameters here
        ca_file /fluentd/config/ca.pem  # Required to use TLS; specify the cert in the ca.pem section
      </store>
    </match>
```

- Save the changes to the zen-audit-config configmap. For example, if you are using vi, press Esc and enter :wq.
- Delete all zen-audit pods to force a restart so that the changes are picked up. To avoid a forced restart, implement a manual rolling update instead.
LogDNA
In the LogDNA client:
- Open the Help menu by clicking the question mark icon in the lower left section of the screen.
- Select Logging Setup.
- Under Via Platform, select Fluentd.
- Copy or download the configuration snippet from the If you use Fluentd section. The snippet has the following format:

```
<match your_match>
  @type logdna
  api_key YOUR-APIKEY
  ingester_domain YOUR-HOST
  hostname "#{Socket.gethostname}"  # your hostname (required)
  app my_app                        # replace with your app name
  #mac C0:FF:EE:C0:FF:EE            # Optional: MAC address
  #ip 127.0.0.1                     # Optional: IP address
</match>
```

  Important: Some versions of LogDNA use ingestion_host instead of ingester_domain. Use the correct parameter for your LogDNA installation.

- Replace the opening and closing <match> tags with <store> tags, and change the type from @type logdna to @type logdna2, so that the configuration snippet resembles the following example:

```
<store>
  @type logdna2
  api_key YOUR-APIKEY
  ingester_domain YOUR-HOST         # LogDNA ingester domain address
  hostname "#{Socket.gethostname}"  # your hostname (required)
  app my_app                        # replace with your app name
  # mac C0:FF:EE:C0:FF:EE           # Optional: MAC address
  # ip 127.0.0.1                    # Optional: IP address
  buffer_chunk_limit 1m             # Optional: parameter to improve performance
  flush_at_shutdown true            # Optional: parameter to improve performance
</store>
```

- Save this snippet.
After you copy the required information from LogDNA, connect to the cluster where Cloud Pak for Data is installed:
- Change to the project where Cloud Pak for Data is installed:

```
oc project cpd-instance
```

- Make a backup of the current zen-audit-config configmap.
- Edit the zen-audit-config configmap:

```
oc edit configmap zen-audit-config
```

- Add the <store> configuration to the configmap.

  Tip: The zen-audit-config configmap includes a sample LogDNA configuration, which is commented out by default.

  The <store> configuration must be inside the <match export export.** records records.** syslog syslog.**> tag and after the @type copy tag. Ensure that the entry is indented correctly and that you replace the values in the example with the appropriate values for your environment. Change hostname to a static value, and configure app, mac, and ip.

```
apiVersion: v1
metadata:
  name: zen-audit-config
data:
  ca.pem: |
    -----BEGIN CERTIFICATE-----
    <XXXXXXXXX>
    -----END CERTIFICATE-----
  fluent.conf: |-
    <match export export.** records records.** syslog syslog.**>
      @type copy
      <store>
        @type logdna2
        api_key YOUR-APIKEY             # API key from the snippet
        ingester_domain HOST            # LogDNA ingester domain address
        hostname <CPD-HOSTNAME>
        app <CPD-ZEN-AUDIT>
        # mac C0:FF:EE:C0:FF:EE         # Optional: MAC address
        # ip 127.0.0.1                  # Optional: IP address
        buffer_chunk_limit 1m
        flush_at_shutdown true
        tls true                        # Required to use TLS
        ca_file /fluentd/config/ca.pem  # Required to use TLS; specify the cert in the ca.pem section
      </store>
    </match>
```

- Save the changes to the zen-audit-config configmap. For example, if you are using vi, press Esc and enter :wq.
- Delete all zen-audit pods to force a restart so that the changes are picked up. To avoid a forced restart, implement a manual rolling update instead.
After the changes are applied, new audit events from Cloud Pak for Data are sent to LogDNA.
QRadar
In the QRadar client:
- Go to Admin.
- Click Log Sources.
- Click Add.
- Configure a Name and Description for the new log source.
- Under Type, select the ICP CADF format if it is configured, or select a generic log type.
- Change Protocol Configuration to Syslog, or TLS Syslog if TLS is enabled.
- Provide a unique Identifier and ensure that the log source is enabled. Then specify an event collector and optionally select an Extension.
- Click Save.
- Go back to the Admin menu.
- Click Deploy Changes.
After you add Cloud Pak for Data to QRadar, connect to the cluster where Cloud Pak for Data is installed:
- Change to the project where Cloud Pak for Data is installed:

```
oc project cpd-instance
```

- Make a backup of the current zen-audit-config configmap.
- Edit the zen-audit-config configmap:

```
oc edit configmap zen-audit-config
```

- Add the <store> configuration to the configmap.

  Tip: The zen-audit-config configmap includes a sample QRadar configuration, which is commented out by default.

  The <store> configuration must be inside the <match export export.** records records.** syslog syslog.**> tag and after the @type copy tag. Ensure that the entry is indented correctly and that you replace the values in the example with the appropriate values for your environment.

```
apiVersion: v1
metadata:
  name: zen-audit-config
data:
  ca.pem: |
    -----BEGIN CERTIFICATE-----
    <XXXXXXXXX>
    -----END CERTIFICATE-----
  fluent.conf: |-
    <match export export.** records records.** syslog syslog.**>
      @type copy
      <store>
        @type remote_syslog
        host <QRADAR-HOST>              # Replace <QRADAR-HOST> with the address of the QRadar host
        port <QRADAR-PORT>              # Replace <QRADAR-PORT>. The default port is 514
        hostname <CPD-HOSTNAME>         # Replace <CPD-HOSTNAME> with the Cloud Pak for Data hostname
        protocol tcp
        tls true                        # Required to use TLS
        ca_file /fluentd/config/ca.pem  # Required to use TLS; specify the cert in the ca.pem section
        <format>
          @type json
        </format>
        <buffer>
          flush_thread_count 2
          flush_interval 10s
          chunk_limit_size 2M
          queue_limit_length 32
          retry_max_interval 30
          retry_forever true
        </buffer>
      </store>
    </match>
```

- Save the changes to the zen-audit-config configmap. For example, if you are using vi, press Esc and enter :wq.
- Delete all zen-audit pods to force a restart so that the changes are picked up. To avoid a forced restart, implement a manual rolling update instead.
After the changes are applied, new audit events from Cloud Pak for Data are sent to QRadar.
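Because the store above combines @type remote_syslog with a JSON format section, each audit record arrives at QRadar as a JSON payload carried in a syslog line. The following Python sketch illustrates roughly how such a line is framed; the priority value, tag, and layout are illustrative assumptions, and the actual output of the remote_syslog plugin may differ in detail.

```python
import json
from datetime import datetime, timezone

def build_syslog_line(hostname, tag, payload):
    """Assemble an RFC 3164-style syslog line carrying a JSON audit payload.

    Illustrative only; the remote_syslog Fluentd plugin controls the real framing.
    """
    pri = "<134>"  # facility local0 (16) * 8 + severity informational (6) = 134
    timestamp = datetime.now(timezone.utc).strftime("%b %d %H:%M:%S")
    return f"{pri}{timestamp} {hostname} {tag}: {json.dumps(payload)}"

line = build_syslog_line("cpd-host", "zen-audit",
                         {"action": "authenticate", "outcome": "success"})
print(line)
```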
Local zen-audit log
You can optionally publish the audit logs to the zen-audit pod stdout logs on the cluster where Cloud Pak for Data is installed.
This method is not recommended for long-term record management. Instead, this method is useful to validate that all of the records that are generated by the Audit Logging Service are sent to your SIEM system.
To send the records to the local zen-audit pods:
- Change to the project where Cloud Pak for Data is installed:

```
oc project cpd-instance
```

- Make a backup of the current zen-audit-config configmap.
- Edit the zen-audit-config configmap:

```
oc edit configmap zen-audit-config
```

- Add the <store> configuration to the configmap.

  Tip: The zen-audit-config configmap includes a sample stdout configuration, which is commented out by default.

  The <store> configuration must be inside the <match export export.** records records.** syslog syslog.**> tag and after the @type copy tag.

```
apiVersion: v1
metadata:
  name: zen-audit-config
data:
  fluent.conf: |-
    <match export export.** records records.** syslog syslog.**>
      @type copy
      <store>
        @type stdout
      </store>
    </match>
```

- Save the changes to the zen-audit-config configmap. For example, if you are using vi, press Esc and enter :wq.
- Delete all zen-audit pods to force a restart so that the changes are picked up. To avoid a forced restart, implement a manual rolling update instead.
After the changes are applied, new audit events from Cloud Pak for Data are sent to the zen-audit pod stdout logs.
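With the stdout store enabled, you can spot-check the forwarded records directly from the pods. The following is a command sketch: cpd-instance is the example project name used throughout this topic, and the pod names reported by oc get pods are the authoritative source.

```
# List the zen-audit pods in the Cloud Pak for Data project
oc get pods -n cpd-instance | grep zen-audit

# Tail the stdout log of one of the pods to watch audit records arrive
oc logs -f <zen-audit-pod-name> -n cpd-instance
```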
Using a TLS certificate to connect to an SIEM system
Security compliance typically requires TLS connections to external repositories and tools. You can add TLS certificates to the zen-audit-config configmap to enable Cloud Pak for Data to connect to your SIEM system.
To add TLS certificates to the zen-audit-config configmap:
- Change to the project where Cloud Pak for Data is installed:

```
oc project cpd-instance
```

- Make a backup of the current zen-audit-config configmap.
- Edit the zen-audit-config configmap:

```
oc edit configmap zen-audit-config
```

- Add the certificate to the ca.pem: section:

```
apiVersion: v1
metadata:
  name: zen-audit-config
data:
  ca.pem: |
    -----BEGIN CERTIFICATE-----
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    -----END CERTIFICATE-----
  fluent.conf: |-
    <match export export.** records records.** syslog syslog.**>
      @type copy
      <store>
        ...
        ca_file /fluentd/config/ca.pem
      </store>
    </match>
```

- Ensure that the <store> entry includes the following information:
  - The TLS parameter. Note: There is no standard parameter to enable TLS. Each type of export plug-in typically requires a unique configuration. The <store> entry for each supported SIEM system includes the parameter that you need to specify to enable TLS.
  - A reference to the certificate (ca_file /fluentd/config/ca.pem)

```
apiVersion: v1
metadata:
  name: zen-audit-config
data:
  ca.pem: |
    -----BEGIN CERTIFICATE-----
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    -----END CERTIFICATE-----
  fluent.conf: |-
    <match export export.** records records.** syslog syslog.**>
      @type copy
      <store>
        ...
        <TLS parameter>
        ca_file /fluentd/config/ca.pem
      </store>
    </match>
```

- Save the changes to the zen-audit-config configmap. For example, if you are using vi, press Esc and enter :wq.
- Delete all zen-audit pods to force a restart so that the changes are picked up. To avoid a forced restart, implement a manual rolling update instead.
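Before pasting a certificate into the ca.pem section, it can help to confirm that the PEM block is intact: markers present and a base64-decodable body. The following is a minimal Python sketch (a structural check only, not a validation of the certificate contents):

```python
import base64

def looks_like_valid_pem(pem_text):
    """Check that the text has PEM certificate markers and a base64-decodable body."""
    lines = [line.strip() for line in pem_text.strip().splitlines()]
    if len(lines) < 3:
        return False
    if lines[0] != "-----BEGIN CERTIFICATE-----" or lines[-1] != "-----END CERTIFICATE-----":
        return False
    body = "".join(lines[1:-1])
    if not body:
        return False
    try:
        # validate=True rejects non-base64 characters instead of ignoring them
        base64.b64decode(body, validate=True)
        return True
    except ValueError:
        return False

pem = """-----BEGIN CERTIFICATE-----
TUlJQ2VqQ0NBZUs=
-----END CERTIFICATE-----"""
print(looks_like_valid_pem(pem))  # True for a structurally well-formed block
```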