# Kafka Architecture

Published 2022-12-03

This tutorial explains the main concepts of the Apache Kafka server and its architecture.

Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. More than 80% of all Fortune 100 companies trust and use Kafka.

Here is the big picture of the Apache Kafka Architecture:

Here are some points to underline about Apache Kafka Architecture:

  • Apache Kafka is a publish-subscribe-based, durable messaging system

  • a Kafka broker is a node in the Kafka cluster; its role is to persist and replicate the data

  • a Kafka producer pushes messages into a Kafka topic (a minimal producer sketch follows this list)

  • a Kafka consumer pulls messages from a Kafka topic. Each consumer is part of a specific consumer group (a consumer sketch follows this list)

  • a consumer group is a set of consumers that cooperate to consume data from some topics. The partitions of all the topics are divided among the consumers in the group. As new group members arrive and old members leave, the partitions are reassigned so that each member receives a proportional share of the partitions.

  • ZooKeeper is used to manage service discovery for the Kafka brokers that form the cluster. ZooKeeper notifies Kafka of topology changes, so each node in the cluster knows when a new broker joins, a broker dies, a topic is removed or a topic is added, etc. Future Kafka releases plan to remove the ZooKeeper dependency (via KRaft mode), but as of now it is an integral part of the cluster.

  • a Kafka source connector reads data from external systems and writes it into Kafka topics

  • a Kafka sink connector reads messages from Kafka topics and persists/writes them into external systems

  • a stream processing application is integrated with the Kafka cluster: it reads/listens on particular topics and automatically processes that data (without modifying the original data). Stream processing allows applications to respond to new data events at the moment they occur (a Kafka Streams sketch follows this list)

  • a topic (a logical concept) is the place where messages are published. Physically, the messages are stored in partitions. A topic can have multiple partitions to increase throughput (a topic-creation sketch follows this list).
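
To make the topic/partition idea concrete, here is a minimal sketch of creating a topic with the Kafka `AdminClient`. The broker address `localhost:9092`, the topic name `orders`, and the partition/replication values are assumptions chosen for illustration:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.List;
import java.util.Properties;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed broker address; adjust to your cluster
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Illustrative topic "orders" with 3 partitions and replication factor 1
            NewTopic topic = new NewTopic("orders", 3, (short) 1);
            admin.createTopics(List.of(topic)).all().get();
            System.out.println("Topic created: " + topic.name());
        }
    }
}
```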
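
Next, a minimal producer sketch (again assuming a broker at `localhost:9092` and the hypothetical `orders` topic). The record key decides which partition the message lands on:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The key ("order-1") determines the partition; the value is the message payload
            producer.send(new ProducerRecord<>("orders", "order-1", "created"));
            producer.flush();
        }
    }
}
```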
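
And a matching consumer sketch that joins a hypothetical consumer group named `orders-consumers`. All consumers sharing this group id split the partitions of the `orders` topic among themselves:

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "orders-consumers");        // the consumer group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Partitions of "orders" are shared among all consumers in the "orders-consumers" group
            consumer.subscribe(List.of("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d key=%s value=%s%n",
                            record.partition(), record.key(), record.value());
                }
            }
        }
    }
}
```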
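
Finally, a minimal Kafka Streams sketch of a stream processing application. It listens on the `orders` topic, upper-cases each value, and writes the result to a hypothetical `orders-uppercase` topic; the original topic is not modified:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

import java.util.Properties;

public class SimpleStreamProcessor {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-uppercase-app"); // assumed application id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");    // assumed broker address

        StreamsBuilder builder = new StreamsBuilder();
        // Read from "orders", transform each value, write to "orders-uppercase";
        // the original "orders" topic stays untouched
        builder.stream("orders", Consumed.with(Serdes.String(), Serdes.String()))
               .mapValues(value -> value.toUpperCase())
               .to("orders-uppercase", Produced.with(Serdes.String(), Serdes.String()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```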

You could take a look at the article named Create a Kafka Producer with a Key (using Java).