December 5, 2018 By Mickael Maison

What happened in the Kafka community in November 2018?

Kafka Releases:

  • 2.1.0: Dong Lin released this new version on November 21, 2018, and it contains a number of interesting new features:

    • Java 11 support: Apache Kafka is keeping up with the faster JVM release cycle. See the OpenJDK page for Java 11 for a list of new features.

    • Support for Zstandard compression: Zstd offers great compression ratios and is also really fast. Consequently, on most workloads it can provide significant throughput improvements over Snappy, GZIP, and LZ4. See KIP-110 for more details; a short producer sketch after this list shows how to enable it.

    • Updated replication protocol: The work to strengthen the inter-broker protocol continued in this release. TLA+, a formal specification language, was used to model and verify the replication protocol.

    • A ton of client improvements: This includes better timeouts for Producers, access to AdminClient metrics, a new ACL for listing consumer groups, and DNS improvements for Kerberos and cloud environments.

    • Improved Streams API: This includes a few new metrics, cleanups of StoreSuppliers APIs, and better timestamp synchronization.

  • 2.0.1: This release came out on November 9, 2018, and contains 51 fixes. Thanks to Manikumar Reddy for running this release.
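
To make the 2.1.0 client changes concrete, here is a minimal producer sketch that enables Zstandard compression and sets the new delivery.timeout.ms upper bound on sends, both of which shipped in 2.1.0. The bootstrap address and topic name are placeholders; adjust them for your cluster.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class ZstdProducerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder address
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());
            // New in 2.1.0 (KIP-110): Zstandard compression
            props.put("compression.type", "zstd");
            // New in 2.1.0: upper bound on the total time allowed to deliver
            // a record, including retries and time spent batching
            props.put("delivery.timeout.ms", "120000");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("my-topic", "key", "value")); // placeholder topic
            }
        }
    }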

Finally, you can access the release plans on the wiki for full details about each release: 2.0.1 and 2.1.0.

KIPs:

Last month, the community submitted 12 KIPs (KIP-387 to KIP-398). These are the ones that caught my eye:

KIP-388: Add observer interface to record request and response
This KIP proposes adding an interface to brokers for observing requests and responses. The goal is to expose information that existing metrics cannot provide, such as latency per topic or bytes transmitted per principal.
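
The interface is still under discussion on the KIP, so anything concrete is speculative, but the shape of the proposal is roughly the following sketch; all names here are illustrative, not the final API.

    // Illustrative sketch only: KIP-388 is still under discussion and
    // these names are hypothetical, not the final API.
    public interface RequestObserver {
        // Invoked by the broker for each request/response pair, giving
        // implementations access to details (principal, topic, sizes,
        // timings) that aggregate metrics cannot break down.
        void observe(RequestInfo request, ResponseInfo response);

        // Lifecycle hook so implementations can flush or release resources.
        void close();
    }

    // Placeholder carrier types, also hypothetical.
    interface RequestInfo {}
    interface ResponseInfo {}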

KIP-390: Allow fine-grained configuration for compression
Compression involves a tradeoff between speed and compressed size, so compression algorithms expose a compression level to tune that tradeoff. This KIP’s goal is to let users specify the compression level they want when configuring compression on Producers or Brokers, instead of always using the default.
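
As a hedged sketch of what this could look like, building on the producer example above: the KIP proposes a compression.level property, which does not exist in any released version yet, so the name and semantics may change.

    // Proposed by KIP-390, not yet available in a release:
    props.put("compression.type", "gzip");
    // gzip levels run from 1 (fastest) to 9 (smallest output)
    props.put("compression.level", "9");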

KIP-391: Allow producing with offsets for cluster replication
Replicating data between clusters is a very common operation. However, consumer offsets are tricky to handle because a given message is likely to have different offsets in each cluster. This KIP proposes supporting the replication of messages together with their offsets. That would make it easy to replicate committed offsets and let consumers rely on them even when switching clusters.
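
To see why this matters, consider a naive mirroring loop written with today's client APIs, sketched below with the client setup omitted: the target broker assigns fresh offsets on produce, so an offset committed against the source cluster is meaningless in the target.

    import java.time.Duration;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // Naive replicator: consumes from the source cluster and re-produces to
    // the target. Today the target broker assigns brand new offsets, so
    // record.offset() in the source generally differs from the offset the
    // copy receives. KIP-391 proposes letting the producer carry the source
    // offset along so the two clusters stay aligned.
    static void mirror(KafkaConsumer<byte[], byte[]> sourceConsumer,
                       KafkaProducer<byte[], byte[]> targetProducer) {
        while (true) {
            for (ConsumerRecord<byte[], byte[]> record :
                    sourceConsumer.poll(Duration.ofSeconds(1))) {
                targetProducer.send(new ProducerRecord<>(
                        record.topic(), record.key(), record.value()));
            }
        }
    }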

KIP-392: Allow consumers to fetch from closest replica
Currently, Producers and Consumers have to connect to partition leaders to send or receive messages. This KIP proposes giving Consumers the flexibility to fetch messages from other replicas, as long as those replicas are in sync. Besides potentially spreading client load across more brokers, this is especially valuable when a Kafka cluster spans multiple availability zones, because Consumers could fetch from the closest replica.
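
The design being discussed is rack-aware: the consumer advertises where it runs, and the broker can point it at an in-sync replica in the same zone. Here is a hedged sketch of the consumer side, with a client.rack property taken from the KIP discussion; the name could change before the proposal is adopted.

    import java.util.Properties;

    // Sketch based on the KIP-392 discussion; "client.rack" is not in a
    // released version and the final name may differ.
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092"); // placeholder
    props.put("group.id", "my-group");                // placeholder
    // Advertise the availability zone this consumer runs in so the broker
    // can direct fetches to an in-sync replica in the same zone.
    props.put("client.rack", "us-east-1a");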

Blogs:

IBM Event Streams for Cloud is Apache Kafka-as-a-service for IBM Cloud.

Get started with IBM Event Streams
