What is Apache Kafka?

30 April 2025

8 minutes

Authors

Ian Smalley

Senior Editorial Strategist

Apache Kafka is an open-source, distributed, event-streaming platform that processes real-time data. Kafka excels at supporting event-driven applications and building reliable data pipelines, offering low latency and high-throughput data delivery.

Today, billions of data sources continuously produce streams of information, often in the form of events: foundational data structures that record any occurrence in a system or environment.

Typically, an event is an action that drives another action as part of a process. A customer placing an order, choosing a seat on a flight, or submitting a registration form are all examples of events. An event doesn’t have to involve a person; a connected thermostat’s report of the temperature at a given time, for instance, is also an event.

Event streaming offers opportunities for applications to respond instantly to new information. Streaming data platforms like Apache Kafka allow developers to build systems that consume, process and act on data as it arrives while maintaining the order and reliability of each event.

Kafka has evolved into the most widely used event-streaming platform, capable of ingesting and processing trillions of records per day without perceptible performance lag as volumes scale. Over 80% of Fortune 500 organizations use Kafka, including Target, Microsoft, Airbnb and Netflix, to deliver real-time, data-driven customer experiences.

The origin of Apache Kafka

LinkedIn developed Apache Kafka to meet the company’s growing need for a high-throughput, low-latency system capable of handling massive volumes of real-time event data. Built in Java and Scala, Kafka was open-sourced in 2011 and donated to the Apache Software Foundation.

While organizations already used traditional message queue systems (e.g., Amazon SQS), Kafka introduced a fundamentally different messaging architecture.

Unlike conventional message queues that delete messages after consumption, Kafka retains messages for a configurable duration, enabling multiple consumers to read the same data independently. This capability makes Kafka ideal for messaging and event sourcing, stream processing, and building real-time data pipelines.

Today, Kafka has become the de facto standard for real-time event streaming. Industries that use Kafka include finance, ecommerce, telecommunications and transportation, where the ability to handle large volumes of data quickly and reliably is essential.

How Apache Kafka works

Kafka is a distributed platform; it runs as a fault-tolerant, highly available cluster that can span multiple servers and even multiple data centers.

Kafka has three primary capabilities:

  1. It enables applications to publish or subscribe to data or event streams.
  2. It stores records accurately in the order they occurred, with fault-tolerant and durable storage.
  3. It processes records in real time, as they occur.

Producers (client applications) write records to topics: named, append-only logs that store the records in the order they occurred relative to one another. Topics are then split into partitions and distributed across a cluster of Kafka brokers (servers).

Within each partition, Kafka maintains the order of the records and stores them durably on disk for a configurable retention period. While ordering is guaranteed within a partition, it is not across partitions. Based on the application’s needs, consumers can independently read from these partitions in real time or from a specific offset.
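
To make the offset mechanics concrete, here is a minimal sketch using Kafka’s Java consumer client; the broker address, topic name ("orders"), partition number and offset are illustrative assumptions. It attaches directly to one partition and replays records from a chosen offset:

    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    public class OffsetReadSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Attach directly to partition 0 of a hypothetical "orders" topic...
                TopicPartition partition = new TopicPartition("orders", 0);
                consumer.assign(List.of(partition));
                consumer.seek(partition, 42); // ...and replay from offset 42 onward
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                records.forEach(r ->
                        System.out.printf("offset=%d value=%s%n", r.offset(), r.value()));
            }
        }
    }

Because retained records stay on disk, a second consumer could run the same code independently and read the identical data without affecting the first.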

Kafka ensures reliability through partition replication. Each partition has a leader (on one broker) and one or more followers (replicas) on other brokers. This replication helps tolerate node failures without data loss.
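
Replication is set per topic at creation time. The following sketch uses Kafka’s Java Admin client to create a hypothetical three-partition topic whose partitions are each replicated across three brokers (the topic name and broker address are assumptions):

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.NewTopic;

    import java.util.List;
    import java.util.Properties;

    public class CreateTopicSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed local broker

            try (Admin admin = Admin.create(props)) {
                // Three partitions for parallelism; replication factor 3 means each
                // partition has one leader and two followers on other brokers.
                NewTopic orders = new NewTopic("orders", 3, (short) 3);
                admin.createTopics(List.of(orders)).all().get(); // block until created
            }
        }
    }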

Historically, Kafka relied on Apache ZooKeeper, a centralized coordination service for distributed brokers. ZooKeeper kept Kafka brokers synchronized, even if some brokers failed. In 2021, Kafka 2.8 introduced KRaft (Kafka Raft) mode, which eliminates the need for ZooKeeper by consolidating these coordination tasks into the Kafka brokers themselves. This shift reduces external dependencies, simplifies the architecture and makes Kafka clusters more fault-tolerant and easier to manage and scale.
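
As a rough illustration, a KRaft-mode broker is configured directly in its server.properties file, with no ZooKeeper connection; the values below are an assumed single-node local setup:

    # Illustrative single-node KRaft configuration (no ZooKeeper)
    process.roles=broker,controller        # this node acts as both broker and controller
    node.id=1
    controller.quorum.voters=1@localhost:9093
    listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093
    controller.listener.names=CONTROLLER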

Apache Kafka APIs

Developers can leverage Kafka’s capabilities through four primary application programming interfaces (APIs):

  1. Producer API
  2. Consumer API
  3. Streams API
  4. Connector API

Producer API

The Producer API enables an application to publish a stream of records to a Kafka topic. After a record is written to a topic, it can’t be altered or deleted. Instead, it remains in the topic for a preconfigured retention period (for example, two days) or until storage space runs out.
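
Here is a minimal producer sketch using Kafka’s Java client; the broker address, topic and key are illustrative assumptions. Records with the same key are routed to the same partition, which preserves per-key ordering:

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    import java.util.Properties;

    public class ProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Publish one event to a hypothetical "orders" topic.
                producer.send(new ProducerRecord<>("orders", "customer-42", "order-created"));
            }
        }
    }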

Consumer API

The Consumer API enables an application to subscribe to one or more topics and to ingest and process the stream stored in the topic. It can work with records in the topic in real time or ingest and process past records.
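
A minimal consumer sketch, again using Kafka’s Java client with an assumed broker address, group ID and topic; setting auto.offset.reset to earliest lets a new consumer group start from past records rather than only new ones:

    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    public class ConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
            props.put("group.id", "order-processors"); // consumers in a group share partitions
            props.put("auto.offset.reset", "earliest"); // replay past records if no saved offset
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("orders"));
                while (true) { // typical long-running poll loop
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    records.forEach(r -> System.out.println(r.key() + " -> " + r.value()));
                }
            }
        }
    }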

Streams API

This API builds on the Producer and Consumer APIs and adds complex processing capabilities that enable an application to perform continuous, front-to-back stream processing. Specifically, the Streams API involves consuming records from one or more topics, analyzing, aggregating or transforming them as required, and publishing the resulting streams to the same topics or to other topics.

While the Producer and Consumer APIs can be used for simple stream processing, the Streams API enables the development of more sophisticated data- and event-streaming applications.
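
For illustration, the sketch below uses the Streams API (Java) to consume a hypothetical "orders" topic, transform each value and publish the result to another topic; the application ID and topic names are assumptions:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;

    import java.util.Properties;

    public class StreamsSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-enricher"); // assumed app ID
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            // Consume from one topic, transform each record, publish to another.
            KStream<String, String> orders = builder.stream("orders");
            orders.mapValues(value -> value.toUpperCase())
                  .to("orders-normalized");

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }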

Connector API

This API lets developers build connectors, which are reusable producers or consumers that simplify and automate the integration of a data source into a Kafka cluster.
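
A connector is packaged as a class implementing the Connect plugin API. The skeleton below sketches a hypothetical source connector; the class names and the source.url setting are illustrative, and the task’s poll() is left empty where a real implementation would fetch external data:

    import org.apache.kafka.common.config.ConfigDef;
    import org.apache.kafka.connect.connector.Task;
    import org.apache.kafka.connect.source.SourceConnector;
    import org.apache.kafka.connect.source.SourceRecord;
    import org.apache.kafka.connect.source.SourceTask;

    import java.util.List;
    import java.util.Map;

    public class SimpleSourceConnector extends SourceConnector {
        private Map<String, String> settings;

        @Override public void start(Map<String, String> props) { this.settings = props; }
        @Override public Class<? extends Task> taskClass() { return SimpleSourceTask.class; }
        @Override public List<Map<String, String>> taskConfigs(int maxTasks) {
            return List.of(settings); // single task for simplicity
        }
        @Override public void stop() {}
        @Override public ConfigDef config() {
            return new ConfigDef().define("source.url", ConfigDef.Type.STRING,
                    ConfigDef.Importance.HIGH, "External system to read from");
        }
        @Override public String version() { return "0.1.0"; }

        public static class SimpleSourceTask extends SourceTask {
            @Override public void start(Map<String, String> props) {}
            @Override public List<SourceRecord> poll() { return List.of(); } // fetch would go here
            @Override public void stop() {}
            @Override public String version() { return "0.1.0"; }
        }
    }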

Apache Kafka use cases

Developers use Kafka primarily for creating two kinds of applications:

  • Real-time streaming data pipelines
  • Real-time streaming applications

Real-time streaming data pipelines

These are applications designed to move massive volumes of data or event records between enterprise systems, at scale and in real time. They must move the records reliably, without the corruption, duplication or other problems that typically occur when moving huge volumes of data at high speed.

For example, financial institutions use Kafka to stream thousands of transactions per second across payment gateways, fraud detection services and accounting systems, ensuring accurate, real-time data flow without duplication or loss.
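
As a hedged sketch of how such pipelines avoid duplication and loss, the Java producer below enables idempotence, waits for acknowledgment from all in-sync replicas, and wraps writes to two topics in a single transaction (the topic names and transactional ID are assumptions):

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    import java.util.Properties;

    public class ReliablePipelineSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed broker
            props.put("acks", "all");                 // wait for all in-sync replicas
            props.put("enable.idempotence", "true");  // broker de-duplicates producer retries
            props.put("transactional.id", "payments-pipeline-1"); // assumed ID; enables transactions
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.initTransactions();
                producer.beginTransaction();
                producer.send(new ProducerRecord<>("payments", "txn-1001", "debit:100.00"));
                producer.send(new ProducerRecord<>("fraud-checks", "txn-1001", "pending"));
                producer.commitTransaction(); // both records become visible atomically
            }
        }
    }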

Real-time streaming applications

Applications that are driven by record or event streams and that generate streams of their own. In the digitally driven world, we encounter these apps every day.

Examples include ecommerce sites that update product availability in real time, or platforms that deliver personalized content and ads based on live user activity. Kafka drives these experiences by streaming user interactions directly into analytics and recommendation engines.

Other Apache Kafka use cases

  • Microservices: Kafka facilitates communication between microservices by enabling asynchronous, event-driven messaging. This feature allows services to trigger actions across other services without being tightly coupled, supporting scalable and decoupled system architectures.
  • Containerized cloud-native environments: Kafka integrates seamlessly with cloud-native platforms using Docker for containerization and Kubernetes for container orchestration. This setup supports scalable, fault-tolerant, event-driven communication while minimizing the need for manual infrastructure management. Kafka can automatically scale and recover from failures within Kubernetes, making it ideal for dynamic cloud computing environments that run diverse application workloads.
  • Data lakes and data warehousing: Kafka acts as a real-time data pipeline between data sources and storage platforms, such as data lakes or data warehouses. This feature enables streaming large data volumes for timely ingestion and analysis, which is essential for modern analytics and business intelligence workflows.
  • IoT (Internet of Things) data handling: Kafka is well suited to processing continuous data streams from IoT devices, enabling high-throughput, low-latency routing of data to destinations like databases, analytics engines or monitoring tools. This capability supports time-sensitive applications in industries like manufacturing and healthcare.

The Apache Kafka ecosystem

Kafka integrates with several other technologies, many of which are part of the Apache Software Foundation (ASF). Organizations typically use these technologies in larger event-driven architectures, stream processing or big data analytics solutions.

Many of these technologies are open source. Confluent, a platform built around Kafka, adds enterprise-grade features and managed services for real-time data processing at scale. Companies like IBM and Amazon Web Services also offer Kafka-based solutions (e.g., IBM Event Streams, Amazon Managed Streaming for Apache Kafka) for scalable event streaming.

The Apache Kafka ecosystem includes:

  • Apache Spark
  • Apache NiFi
  • Apache Flink
  • Apache Hadoop
  • Apache Camel
  • Apache Cassandra

Apache Spark

Apache Spark is an analytics engine for large-scale data processing. You can use Spark to perform analytics on streams delivered by Apache Kafka and to produce real-time stream-processing applications, such as clickstream analysis.
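
For example, a minimal Spark Structured Streaming job (Java) can read a Kafka topic as an unbounded streaming DataFrame; the topic name and broker address here are assumptions:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class SparkKafkaSketch {
        public static void main(String[] args) throws Exception {
            SparkSession spark = SparkSession.builder()
                    .appName("clickstream-analysis") // assumed app name
                    .getOrCreate();

            // Read a Kafka topic as an unbounded streaming DataFrame.
            Dataset<Row> clicks = spark.readStream()
                    .format("kafka")
                    .option("kafka.bootstrap.servers", "localhost:9092")
                    .option("subscribe", "clickstream") // assumed topic
                    .load();

            // Print decoded key/value pairs to the console as they arrive.
            clicks.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
                  .writeStream()
                  .format("console")
                  .start()
                  .awaitTermination();
        }
    }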

Apache NiFi

Apache NiFi is a data-flow management system with a visual, drag-and-drop interface. Because NiFi can run as a Kafka producer and a Kafka consumer, it’s an ideal tool for managing data-flow challenges that Kafka can’t address.

Apache Flink

Apache Flink is an engine for performing large-scale computations on event streams with consistently high speed and low latency. Flink can ingest streams as a Kafka consumer, perform real-time operations on those streams, and publish the results to Kafka or to another application.
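
Here is a minimal Flink sketch (Java, using the flink-connector-kafka KafkaSource API) that consumes a hypothetical topic and applies a trivial real-time filter; the topic, group ID and broker address are assumptions:

    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.kafka.source.KafkaSource;
    import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class FlinkKafkaSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Consume a Kafka topic as a Flink source.
            KafkaSource<String> source = KafkaSource.<String>builder()
                    .setBootstrapServers("localhost:9092")
                    .setTopics("sensor-events")       // assumed topic
                    .setGroupId("flink-analytics")    // assumed consumer group
                    .setStartingOffsets(OffsetsInitializer.earliest())
                    .setValueOnlyDeserializer(new SimpleStringSchema())
                    .build();

            env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source")
               .filter(value -> value.contains("ALERT")) // trivial real-time operation
               .print();

            env.execute("flink-kafka-sketch");
        }
    }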

Apache Hadoop

Apache Hadoop is a distributed software framework that lets you store massive amounts of data in a cluster of computers for use in big data analytics, machine learning, data mining and other data-driven applications that process structured and unstructured data. Kafka is often used to create a real-time streaming data pipeline to a Hadoop cluster.

Apache Camel

Apache Camel is an integration framework with a rule-based routing and mediation engine. It supports Kafka as a component, enabling easy data integration with other systems (e.g., databases, messaging queues), thus allowing Kafka to become part of a larger event-driven architecture.

Apache Cassandra

Apache Cassandra is a highly scalable NoSQL database designed to handle large amounts of data across many commodity servers without any single point of failure.

Kafka is commonly used to stream data to Cassandra for real-time data ingestion and for building scalable, fault-tolerant applications.

Kafka vs. RabbitMQ

RabbitMQ is a popular open-source message broker that enables applications, systems and services to communicate by translating messaging protocols. Because Kafka began as a message broker (and can still be used as one) and RabbitMQ supports a publish/subscribe messaging model (among others), the two are often compared as alternatives. However, they serve different purposes and are designed for different types of use cases. For example, Kafka topics can have multiple subscribers, whereas each RabbitMQ message can have only one. Additionally, Kafka topics are durable, whereas RabbitMQ messages are deleted once consumed.

When choosing between them, it’s essential to consider the specific needs of your application, such as throughput, message durability and latency. Kafka is well-suited for large-scale event streaming, while RabbitMQ excels in scenarios requiring flexible message routing and low-latency processing.

Apache Kafka and open-source AI

Integrating Apache Kafka and open-source AI transforms how organizations handle real-time data and artificial intelligence. When combined with open-source AI tools, Kafka enables the application of pre-trained AI models to live data, supporting real-time decision-making and automation.

Open-source AI has made artificial intelligence more accessible, and Kafka provides the infrastructure needed to process data in real time. This combination reduces reliance on batch processing, allowing businesses to act on data as soon as it’s produced.

For example, an ecommerce company might use Kafka to stream customer interactions, such as clicks or product views, as they happen. Pre-trained AI models then process this data in real time, producing personalized recommendations or targeted offers. Kafka manages the data flow, while the AI models adapt based on incoming data, improving customer engagement.
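
A heavily simplified sketch of that loop, with hypothetical topic names and a stand-in recommendFor() function in place of real model inference: a consumer reads product-view events, a placeholder model scores each one, and a producer publishes the result to a downstream topic:

    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    public class RecommendationSketch {
        public static void main(String[] args) {
            Properties consumerProps = new Properties();
            consumerProps.put("bootstrap.servers", "localhost:9092"); // assumed broker
            consumerProps.put("group.id", "recommender");
            consumerProps.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            consumerProps.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            Properties producerProps = new Properties();
            producerProps.put("bootstrap.servers", "localhost:9092");
            producerProps.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            producerProps.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
                 KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
                consumer.subscribe(List.of("product-views")); // assumed clickstream topic
                while (true) {
                    ConsumerRecords<String, String> views = consumer.poll(Duration.ofSeconds(1));
                    views.forEach(view -> {
                        // recommendFor() is a hypothetical stand-in for model inference.
                        String recommendation = recommendFor(view.value());
                        producer.send(new ProducerRecord<>("recommendations",
                                view.key(), recommendation));
                    });
                }
            }
        }

        private static String recommendFor(String viewedProduct) {
            return "related-to:" + viewedProduct; // placeholder for a pre-trained model's output
        }
    }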

By combining real-time data processing with AI models, organizations can make quicker decisions in fraud detection, predictive maintenance or dynamic pricing, enabling more responsive and efficient systems.

Benefits of Apache Kafka

  • Real-time data processing: Kafka enables real-time data pipelines and streaming applications. It allows systems to publish, subscribe and process streams of records as they happen, supporting use cases like monitoring, alerting and analytics.
  • High performance and scalability: Kafka topics are partitioned and replicated in such a way that they can scale to serve high volumes of simultaneous consumers without impacting performance.
  • Security and compliance: Apache Kafka supports encryption for data both in transit and at rest, along with configurable access controls. These features help organizations protect sensitive data and align with industry compliance standards.
  • High availability: Kafka ensures high availability through data replication across multiple brokers. Messages remain available even if a node fails, and the system continues running without data loss or downtime.
  • Simplified management: Kafka includes monitoring, configuration and automation tools that help reduce the operational burden. It integrates well with orchestration and management platforms, making deployment and scaling more manageable. Kafka also exposes detailed performance and health metrics through JMX (Java Management Extensions), allowing teams to track throughput, latency, broker health and consumer lag, which are critical for effective monitoring and capacity planning.
  • Broad integration: With a wide range of connectors and client APIs, Kafka integrates easily with databases, file systems and cloud services. For instance, Kafka Connect facilitates seamless data movement between systems, while Kafka Streams enables real-time stream processing, further enhancing integration capabilities.