In this 22nd edition of the Kafka Monthly Digest, I will cover what happened in the Kafka Community in November.
For last month’s digest, see “Kafka Monthly Digest: October 2019.”
- 2.2.2: After two release candidates, Randall Hauch released this bugfix version on December 1, 2019. It contains 38 fixes, including 5 blockers. See the release notes for full details.
- 2.4.0: The release process for 2.4.0 is continuing. On November 14, 2019, Manikumar Reddy published the first release candidate. The community found a number of blocker issues that postponed the release. A second release candidate, RC1, was published on November 20, 2019, but another blocker was found. Another vote will start once a new RC is out.
November was relatively quiet in terms of KIPs: the community submitted only three (KIP-547 to KIP-549). So, let’s look at all of them:
- KIP-547: Extend ConsumerInterceptor to allow modification of Consumer Commits: When committing offsets, it’s possible to also include some metadata. Unfortunately, at the moment, metadata can only be provided when explicitly specifying offsets via commitSync() and commitAsync(); otherwise, it is left empty. This KIP proposes adding a method to the ConsumerInterceptor, preCommit(), in order to allow setting the commit metadata even when using auto or implicit commits.
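To illustrate the current limitation, here is a minimal sketch of the explicit-commit workaround, assuming a broker at localhost:9092; the topic, group id, offset, and metadata string are all illustrative:

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class CommitWithMetadata {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group"); // illustrative
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Today, commit metadata can only be attached by explicitly
            // specifying the offsets to commit, as below. With auto commit
            // enabled, there is no hook to set it; that is the gap KIP-547
            // proposes to close via ConsumerInterceptor.preCommit().
            TopicPartition partition = new TopicPartition("my-topic", 0);
            Map<TopicPartition, OffsetAndMetadata> offsets = Collections.singletonMap(
                partition, new OffsetAndMetadata(42L, "processed-by-host-1"));
            consumer.commitSync(offsets);
        }
    }
}
```

Running this requires a reachable Kafka cluster, so treat it as a sketch of the API shape rather than a standalone program.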
- KIP-548: Add Option to enforce rack-aware custom partition reassignment execution: Reassigning partitions using the kafka-reassign-partitions.sh tool is a relatively common administrative operation. However, this tool should be used with care, as it does little validation to ensure the submitted assignment makes sense. This is especially true for clusters using rack awareness. This KIP aims to make such operations safer by performing rack awareness verifications when executing a reassignment.
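For context, the tool takes a JSON file describing the target assignment. A minimal illustrative file (the topic name and broker IDs are assumptions) might look like:

```json
{
  "version": 1,
  "partitions": [
    {"topic": "my-topic", "partition": 0, "replicas": [1, 4, 7]}
  ]
}
```

This file is passed to kafka-reassign-partitions.sh via --reassignment-json-file together with --execute. Today, the tool does not check whether brokers 1, 4, and 7 actually sit on different racks before applying the assignment; that missing verification is the gap KIP-548 wants to close.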
- KIP-549: Surface Consumer's Metadata in KafkaAdminClient#describeConsumerGroups: When a consumer joins a group, it provides some metadata that is used to compute the partition assignment. Accessing this metadata can be useful when debugging consumer group issues, but at the moment, there is no easy way to retrieve it. The goal of this KIP is to expose each consumer's metadata via the existing describeConsumerGroups() method of the AdminClient.
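As a sketch of where this data would surface, the existing describeConsumerGroups() call looks roughly like the following (the broker address and group name are assumptions). Today it returns each member's assignment but not the subscription metadata the KIP wants to expose:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;

public class DescribeGroup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        try (AdminClient admin = AdminClient.create(props)) {
            ConsumerGroupDescription group = admin
                .describeConsumerGroups(Collections.singleton("my-group")) // illustrative group id
                .describedGroups().get("my-group").get();
            // Each member currently exposes its id, host, and partition
            // assignment; the metadata it sent when joining the group is
            // what KIP-549 proposes to add here.
            group.members().forEach(member ->
                System.out.println(member.consumerId() + " @ " + member.host()
                    + " -> " + member.assignment().topicPartitions()));
        }
    }
}
```

As with any AdminClient example, this needs a running cluster, so it is meant to show the API shape rather than produce output on its own.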
- How to Pipe Your Data with Kafka Connect
- Announcing Quarkus 1.0
- Introducing Azkarra Streams: The first microframework for Apache Kafka Streams
- Processing guarantees in Kafka
IBM Event Streams for Cloud is Apache Kafka-as-a-Service for IBM Cloud.