Configuring the Kafka logging parameter
Use Kafka logging to capture log data for monitoring, debugging, diagnostics, troubleshooting, security, auditing, performance optimization, visibility, and historical analysis.
About this task
You can configure Kafka logging for OMServer, Data Management, and Order Service only. Configuring Kafka logging for the dev instances of Cassandra and Elasticsearch is not supported.
Procedure
- Set the apps.oms.ibm.com/enable-kafka-logging annotation to yes to enable Kafka logging. After it is enabled, the Operator installs the following JAR packages for logging when the pods are bootstrapped. The versions that are listed for these packages are the defaults. The om-agent image already includes the jackson-* packages, so it installs only the kafka-log4j-appender JAR. The om-app image installs all four JARs.
- kafka-log4j-appender-3.6.0.jar
- jackson-databind-2.16.0.jar
- jackson-core-2.16.0.jar
- jackson-annotations-2.16.0.jar
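As an illustration only, the annotation can be added to the metadata of the custom resource that the Operator manages. The resource kind, name, and namespace in this sketch are hypothetical placeholders, not values from this documentation:

```yaml
# Sketch only: enabling Kafka logging through the annotation.
# The kind, name, and namespace are placeholders; add the annotation
# to the custom resource that your Operator actually manages.
apiVersion: apps.oms.ibm.com/v1beta1
kind: OMEnvironment
metadata:
  name: dev-instance
  namespace: oms
  annotations:
    apps.oms.ibm.com/enable-kafka-logging: "yes"
```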
Note: For production environments, when the apps.oms.ibm.com/enable-kafka-logging annotation is set, downloading and installing these external JAR files each time a pod starts might result in slow performance and lead to excessive resource utilization. Hence, it is recommended that you build these JARs into the base image and do not use the apps.oms.ibm.com/enable-kafka-logging annotation.
- Optional: Set the following environment variables to override the default versions of the JARs by providing the required value. For om-agent, the jackson-* JARs are already included in the image, so if the jackson-* environment variables are provided, they are ignored.
kafka_log4j_appender_version: <version_value>
jackson_databind_version: <version_value>
jackson_core_version: <version_value>
jackson_annotations_version: <version_value>
- Configure a configMap with the custom log4j2.xml that is to be used for logging. Mount the configMap to the pod by using additionalMounts. For more information about additionalMounts, see the additionalMounts parameter. The following example is only an illustration, and you must customize the log4j2.xml according to your requirements. Also, the Kafka host and topic in the example are variables, so if required, you can split the logs by using jvmArgs.
<Configuration status="info" packages="com.sterlingcommerce.woodstock.util.frame.logex">
  <Appenders>
    <Kafka name="kafkaLogAppender" topic="${env:KAFKA_TOPIC}">
      <PatternLayout pattern="%d [%p] ${env:OM_POD_NAME} - %message"/>
      <Property name="bootstrap.servers">${env:KAFKA_HOST}</Property>
    </Kafka>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d:%-7p:%t: %-60m [%X{sys:AppUserId:-}]: [%X{sys:TransactionId:-}]: [%X{sys:TenantId:-}]: %-25c{sys:1:-}%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="INFO">
      <AppenderRef ref="kafkaLogAppender"/>
    </Root>
    <Logger name="requestlogger" level="debug" additivity="false">
      <AppenderRef ref="kafkaLogAppender"/>
      <AppenderRef ref="Console"/>
    </Logger>
    <Logger name="com.yantra.tools.property" level="debug" additivity="false">
      <AppenderRef ref="kafkaLogAppender"/>
      <AppenderRef ref="Console"/>
    </Logger>
    <Logger name="com.yantra" level="debug" additivity="false">
      <AppenderRef ref="kafkaLogAppender"/>
      <AppenderRef ref="Console"/>
    </Logger>
    <Logger name="com.sterlingcommerce" level="debug" additivity="false">
      <AppenderRef ref="kafkaLogAppender"/>
      <AppenderRef ref="Console"/>
    </Logger>
    <Logger name="com.ibm" level="debug" additivity="false">
      <AppenderRef ref="kafkaLogAppender"/>
      <AppenderRef ref="Console"/>
    </Logger>
    <Logger name="org.apache.struts2" level="warn" additivity="false">
      <AppenderRef ref="kafkaLogAppender"/>
      <AppenderRef ref="Console"/>
    </Logger>
    <Logger name="org.jose4j" level="warn" additivity="false">
      <AppenderRef ref="kafkaLogAppender"/>
      <AppenderRef ref="Console"/>
    </Logger>
    <Logger name="freemarker.cache" level="warn" additivity="false">
      <AppenderRef ref="kafkaLogAppender"/>
      <AppenderRef ref="Console"/>
    </Logger>
    <Logger name="net.sf.ehcache" level="warn" additivity="false">
      <AppenderRef ref="kafkaLogAppender"/>
      <AppenderRef ref="Console"/>
    </Logger>
    <Logger name="com.opensymphony.xwork2" level="warn" additivity="false">
      <AppenderRef ref="kafkaLogAppender"/>
      <AppenderRef ref="Console"/>
    </Logger>
    <Logger name="com.stercomm.SecurityLogger" level="info" additivity="false">
      <AppenderRef ref="kafkaLogAppender"/>
      <AppenderRef ref="Console"/>
    </Logger>
    <Logger name="api.security" level="debug" additivity="false">
      <AppenderRef ref="kafkaLogAppender"/>
      <AppenderRef ref="Console"/>
    </Logger>
    <Logger name="com.yantra.integration.adapter" level="debug" additivity="false">
      <AppenderRef ref="kafkaLogAppender"/>
      <AppenderRef ref="Console"/>
    </Logger>
    <Logger name="com.yantra.yfs.ui.backend.YFSLoginIPLogger" level="info" additivity="false">
      <AppenderRef ref="kafkaLogAppender"/>
      <AppenderRef ref="Console"/>
    </Logger>
    <Logger name="DataValidationLogger" level="debug" additivity="false">
      <AppenderRef ref="kafkaLogAppender"/>
      <AppenderRef ref="Console"/>
    </Logger>
    <Logger name="com.yantra.yfc.log.transactiontracing.ConfigurationDataTracer$FFDCLogger" level="debug" additivity="false">
      <AppenderRef ref="kafkaLogAppender"/>
      <AppenderRef ref="Console"/>
    </Logger>
    <Logger name="org.apache.kafka" level="warn"/>
  </Loggers>
</Configuration>
- To enable Kafka logging for Order Service, configure the following extra parameters in
the Order Service specification:
configuration:
  additionalConfig:
    ORDERSERVICE_LOG_LEVEL: INFO # The logging level. The default value is INFO.
    ORDERSERVICE_LOG_CHANNEL: KAFKA # The log channel that is used for logging. For Kafka logging, it is KAFKA.
    ORDERSERVICE_LOG_TOPIC: os-logs # The name of the Kafka log topic to which data is logged.
    ORDERSERVICE_GRAYLOG_ADDR: 10.148.81.157:9092 # The Kafka server and port address.
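To confirm that log records are arriving, one quick sketch is to read a few messages with the standard Kafka console consumer. The broker address and topic name below are taken from the examples in this topic and must be replaced with the values for your environment:

```shell
# Sketch only: consume a few records from the Order Service log topic.
# 10.148.81.157:9092 and os-logs come from the example configuration above;
# substitute your actual broker address and topic name.
kafka-console-consumer.sh \
  --bootstrap-server 10.148.81.157:9092 \
  --topic os-logs \
  --from-beginning \
  --max-messages 5
```

If records are printed, the appender is writing to the topic; if the command hangs with no output, check the KAFKA_HOST and topic values and the network path from the pods to the broker.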