Adapter commands for consumers and producers

Kafka adapter commands for consumers and producers are valid for input data sources and output data targets. See the related Kafka configurations for both consumers and producers in the Apache Kafka documentation for additional details.

Server Hostname

Identifies one or more servers in a Kafka cluster that the adapter is to establish an initial connection with. After the initial connection, the adapter discovers and uses the full set of servers. If the connection to the first server fails, the adapter attempts to connect to each subsequent server in the list until it succeeds.

Related Kafka configurations: bootstrap.servers

The corresponding adapter command is -SRV hostname:port [,hostname:port[,hostname:port[, ...]]] (or -SERVER hostname:port [,hostname:port[,hostname:port[, ...]]]).
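
For example, the following command identifies two bootstrap servers (the host names are placeholders):

-SRV kafka1.example.com:9092,kafka2.example.com:9092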

Topic Name

The target topic to publish to, or one or more source topics to consume from. Separate multiple topic names with spaces. This command is required unless producers identify the topic for each message separately using message header version 2 or later. See "Headers for data payloads" and the -HDR command. To publish messages, specify the target topic in one of the following forms:

topicname

Publish the target message to the topic; the Kafka cluster selects the partition.

topicname:partition

Publish the target message to the specified partition of the topic.
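
For example, a producer might publish to the orders topic and let the cluster choose the partition, or publish directly to partition 2 of that topic (the topic name is illustrative):

orders
orders:2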

To consume messages, specify one or more source topics to consume from, using any combination of the following:

topicname

Consume all source messages from the topic.

topicname:partition

Consume all source messages from the specified partition of the topic.

topicname:*

Consume source messages from all partitions of the topic.

topicname:partition-offset

Consume the source message at the specified offset of the specified partition of the topic.
  • If the offset does not exist in the specified partition, the adapter consumes messages based on the policy specified by the -AOR command.
  • If the -AOR command is not specified, the adapter consumes the message from the most recent offset position.
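
For example, a consumer specification might combine several of these forms (the topic names are illustrative):

orders:* payments:0 audit:1-42

This specification consumes from all partitions of orders, from partition 0 of payments, and the message at offset 42 of partition 1 of audit.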

Client ID

A logical application name to identify the source of requests in Kafka server logs. The default value is a null string. The corresponding adapter command is -CID clientID (or -CLIENTID clientID).

Related Kafka configurations: client.id

Header

The version of the header that precedes the message payload size and message payload data. See "Headers for data payloads" for details about the header versions. This is an optional property. The corresponding adapter command is -HDR {1 | 2 | 3} (or -HEADER {1 | 2 | 3}).

Security Protocol

Specifies the security protocol. The default is PLAINTEXT. The corresponding adapter command is -SP {PLAINTEXT | SSL | SASL_PLAINTEXT | SASL_SSL} (or -SECURITYPROTOCOL {PLAINTEXT | SSL | SASL_PLAINTEXT | SASL_SSL}).
  • PLAINTEXT: Use a plain connection.
  • SSL: Use an SSL connection for host authentication and data encryption. With SSL connections, you must also provide the truststore path (-TSL command) and truststore password (-TSP command).
  • SASL_PLAINTEXT: Use a Simple Authentication Security Layer (SASL) mechanism for authentication over a plain connection.
  • SASL_SSL: Use a SASL mechanism for authentication over an SSL connection.
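
For example, the following command selects an SSL connection and supplies the truststore details (the path and password are placeholders):

-SP SSL -TSL /opt/certs/kafka.truststore.jks -TSP changeit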

See the security section in the Apache Kafka documentation for more details.

Login Configuration File Location

Specifies the location of the Java Authentication and Authorization Service (JAAS) login configuration file, which contains information about the security model and parameters to use for authentication. The corresponding adapter command is -LCFL file_path (or -LOGINCONFIGFILELOCATION file_path). This command is valid only when the -SP command is set to sasl_plaintext or sasl_ssl.

Use the -LCFL command only for testing and debugging. This command sets the java.security.auth.login.config system property each time the adapter connects to Kafka, and the property applies to the entire JVM process and all threads and adapters that run in it.

In a production environment, specify the location of the login configuration file with the Java Virtual Machine (JVM) -Djava.security.auth.login.config=file_path parameter for the Java process in which the adapter runs.
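
For example, a minimal JAAS login configuration file for the SASL PLAIN mechanism might look like the following (the user name and password are placeholders):

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="adapter-user"
  password="adapter-secret";
};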

Kerberos Configuration File Location

Specifies the location of the Kerberos configuration file, which contains information about the security model and parameters to use for authentication. The corresponding adapter command is -KCFL file_path (or -KERBEROSCONFIGFILELOCATION file_path).

This command is valid when the -SP command is set to sasl_plaintext or sasl_ssl and the -SM command is set to gssapi. The file is typically named krb5.conf.

Use the -KCFL command only for testing and debugging. This command sets the java.security.krb5.conf system property each time the adapter connects to Kafka, and the property applies to the entire JVM process and all threads and adapters that run in it.

In a production environment, specify the location of the Kerberos configuration file with the Java Virtual Machine (JVM) -Djava.security.krb5.conf=file_path parameter for the Java process in which the adapter runs.
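
For example, a minimal krb5.conf file might look like the following (the realm and KDC host are placeholders):

[libdefaults]
  default_realm = EXAMPLE.COM

[realms]
  EXAMPLE.COM = {
    kdc = kdc.example.com
  }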

Truststore Location File Path

Specifies the full path to the truststore. The corresponding adapter command is -TSL file_path (or -TRUSTSTORELOCATION file_path). This command is valid only when the -SP command is set to ssl or sasl_ssl.

Truststore Password

Specifies the truststore password.

The corresponding adapter command is -TSP password (or -TRUSTSTOREPASSWORD password). This command is valid only when the -SP command is set to ssl or sasl_ssl.

Keystore Location File Path

Specifies the full path to the keystore. The corresponding adapter command is -KSL file_path (or -KEYSTORELOCATION file_path). This command is valid only when the -SP command is set to ssl or sasl_ssl.

Keystore Password

Specifies the key password. The corresponding adapter command is -KP password (or -KEYPASSWORD password).
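
For example, the following command configures an SSL connection with both a truststore and a keystore (the paths and passwords are placeholders):

-SP SSL -TSL /opt/certs/kafka.truststore.jks -TSP trustpass -KSL /opt/certs/kafka.keystore.jks -KP keypass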

Logical Message Mode

Specifies logical message mode, in which the adapter processes multiple physical Kafka messages as a single record (a logical message). In this mode, the message payload of each physical message within the logical message is preceded by the 4-byte size of the payload, regardless of whether the -HDR command is specified. The corresponding adapter command is -LMM (or -LOGICALMESSAGEMODE).
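
For example, assuming a big-endian (network byte order) size prefix, a logical message that contains the two physical payloads AB (2 bytes) and CDE (3 bytes) corresponds to the following byte sequence:

00 00 00 02 41 42 00 00 00 03 43 44 45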

Add Properties File

Specifies the name of a file that contains additional Kafka producer or consumer configurations in Java™ properties format. The values specified in the file override those specified on the adapter command. The corresponding adapter command is -APF filename (or -ADDPROPFILE filename).
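
For example, a properties file might contain standard Kafka consumer configurations such as the following (the values are illustrative):

max.poll.records=500
session.timeout.ms=30000

The file could then be passed to the adapter with -APF /opt/adapter/consumer.properties (the path is a placeholder).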

Add Properties

Specifies one or more Kafka producer or consumer configurations as key=value pairs. Separate multiple pairs with spaces. The values specified on the -AP command override both the values specified by other adapter commands and the configurations in the file specified by the -APF command. The corresponding adapter command is -AP key=value [key=value [key=value]] (or -ADDPROP key=value [key=value [key=value]]).
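
For example, the following command sets two standard Kafka consumer configurations directly (the values are illustrative):

-AP max.poll.records=100 enable.auto.commit=false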

Logging

This property specifies the level of logging to use for the log (trace) file produced by the adapter.

The corresponding adapter command is:

-T [E|V] [+] [file_path]

  • -T (or -TRACE): Log adapter informational messages.
  • -TE (or -TRACEERROR): Log only adapter errors.
  • -TV (or -TRACEVERBOSE): Use verbose (debug) logging. The log file records all activity that occurs while the adapter is producing or consuming messages.
  • +: Append the trace information to the existing log file. Omit this argument to create a new log file.
  • file_path: The full path to the adapter trace log. If you omit this argument, the adapter creates the m4jdbc.mtr log file in the map directory.
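
For example, the following command enables verbose logging and appends the trace to an existing file (the path is a placeholder):

-TV + /var/log/adapter/kafka_trace.mtr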

Key Handling Mode

Specifies the mode for handling message keys. The corresponding adapter command is -KHM {bytes | string | object_string | avro_json} (or -KEYHANDLINGMODE {bytes | string | object_string | avro_json}).

The supported modes are:
  • bytes: The adapter assumes that message keys are represented as Java byte array instances and passes the key bytes through without modification. This applies to both consumer and producer scenarios. This is the default mode.
  • string: In a consumer, the adapter assumes that message keys are provided as Java String instances and converts them to bytes using the current system encoding. In a producer, the adapter converts bytes to Java String instances using the current system encoding.
  • object_string: In a consumer, the adapter assumes that message keys are provided as Java Object instances, calls the toString() method on them to obtain their string representation, and converts the result to bytes using the current system encoding. In a producer, this mode behaves the same as string mode.
  • avro_json: In a consumer, the adapter assumes that message keys are provided as Avro GenericRecord objects, calls the toString() method on them to obtain their JSON string representation, and converts the result to bytes using the current system encoding. In a producer, the adapter converts bytes to Java String instances, assumes that they are formatted as JSON documents, and converts them to Avro GenericRecord instances based on the Avro schema specified with the -KASF adapter command.

For all modes other than bytes, the adapter must be configured with an appropriate deserializer (consumer case) or serializer (producer case) that can perform the required key deserialization or serialization. Use the -AP and -APF adapter commands to specify the deserializer or serializer.

For example, for string mode in a consumer, include key.deserializer=org.apache.kafka.common.serialization.StringDeserializer in the -AP command to specify a deserializer that deserializes consumed message key bytes to String format before providing them to the adapter. Similarly, in a producer, include key.serializer=org.apache.kafka.common.serialization.StringSerializer in the -AP command to specify a serializer that receives message keys in String format from the adapter and serializes them as key bytes in the produced messages.
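
Put together, the following consumer command combines the key handling mode with the matching deserializer (illustrative):

-KHM string -AP key.deserializer=org.apache.kafka.common.serialization.StringDeserializer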

When the specified serializer or deserializer implementation resides in an external JAR that is not part of the standard Kafka client included with the adapter, the JAR and its dependencies must be on the adapter class path, for example, by adding them to the extra or libs/extra subdirectory of the product home directory.

See the Apache Kafka documentation for more details about serializer and deserializer configuration.

Value Handling Mode

Specifies the mode for handling message values (payloads). The corresponding adapter command is -VHM {bytes | string | object_string | avro_json} (or -VALUEHANDLINGMODE {bytes | string | object_string | avro_json}).

The supported modes are:
  • bytes: The adapter assumes that message values are represented as Java byte array instances and passes the value bytes through without modification. This applies to both consumer and producer scenarios. This is the default mode.
  • string: In a consumer, the adapter assumes that message values are provided as Java String instances and converts them to bytes using the current system encoding. In a producer, the adapter converts bytes to Java String instances using the current system encoding.
  • object_string: In a consumer, the adapter assumes that message values are provided as Java Object instances, calls the toString() method on them to obtain their string representation, and converts the result to bytes using the current system encoding. In a producer, this mode behaves the same as string mode.
  • avro_json: In a consumer, the adapter assumes that message values are provided as Avro GenericRecord objects, calls the toString() method on them to obtain their JSON string representation, and converts the result to bytes using the current system encoding. In a producer, the adapter converts bytes to Java String instances, assumes that they are formatted as JSON documents, and converts them to Avro GenericRecord instances based on the Avro schema specified with the -VASF adapter command.

For all modes other than bytes, the adapter must be configured with an appropriate deserializer (consumer case) or serializer (producer case) that can perform the required value deserialization or serialization. Use the -AP and -APF adapter commands to specify the deserializer or serializer.

For example, for string mode in a consumer, include value.deserializer=org.apache.kafka.common.serialization.StringDeserializer in the -AP command to specify a deserializer that deserializes consumed message value bytes to String format before providing them to the adapter. Similarly, in a producer, include value.serializer=org.apache.kafka.common.serialization.StringSerializer in the -AP command to specify a serializer that receives message values in String format from the adapter and serializes them as value bytes in the produced messages.
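
Put together, the following producer command combines the value handling mode with the matching serializer (illustrative):

-VHM string -AP value.serializer=org.apache.kafka.common.serialization.StringSerializer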

When the specified serializer or deserializer implementation resides in an external JAR that is not part of the standard Kafka client included with the adapter, the JAR and its dependencies must be on the adapter class path, for example, by adding them to the extra or libs/extra subdirectory of the product home directory.

See the Apache Kafka documentation for more details about serializer and deserializer configuration.