Distributed streaming platform that can: 1. Publish and subscribe to streams of records, similar to a message queue. 2. Store streams of records in a fault-tolerant and durable way. 3. Process streams of records as they occur.
Used for: 1. Building real-time streaming pipelines that reliably get data between systems or applications. 2. Building real-time streaming applications that transform or react to streams of data.
Kafka runs as a cluster on one or more servers that can span multiple data centers. A Kafka cluster stores streams of records in categories called topics.
Each broker handles a share of the partitions. Each partition is replicated across a number of servers. Each partition has one leader and zero or more followers.
The leader handles all read and write requests for the partition, while followers replicate it. If the leader fails, one of the followers takes over as the new leader.
Topic is a category to which records are published. Topics in Kafka are always multi-subscriber: i.e. a topic can have zero, one, or many consumers.
Queueing: allows us to divide the processing over multiple consumer instances, which lets us scale processing.
Publish-subscribe: the record is broadcast to all consumers.
Kafka offers a generalized notion of stream processing that subsumes batch processing as well as message-driven applications.
Kafka supports message keys. Keys are useful if a strict order for a key is required, or if we require that messages with the same key go to the same partition.
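Key-based partition selection can be sketched as hashing the key modulo the partition count. (Kafka's default partitioner actually uses murmur2 hashing; the toy byte-sum hash below is purely for illustration.)

```python
def select_partition(key: bytes, num_partitions: int) -> int:
    """Simplified sketch of key-based partitioning.

    Kafka's default partitioner uses murmur2; sum(key) is a
    deterministic stand-in here, not the real algorithm.
    """
    if key is None:
        # Keyless records are spread by a round-robin/sticky strategy instead.
        raise ValueError("keyless records are not hashed to a partition")
    # Same key -> same hash -> same partition, which is what
    # preserves per-key ordering within a partition.
    return sum(key) % num_partitions

# Messages with the same key always land on the same partition:
p1 = select_partition(b"user-42", 6)
p2 = select_partition(b"user-42", 6)
assert p1 == p2
```

This determinism is also why changing the number of partitions later breaks the key-to-partition mapping.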
A cluster typically contains multiple brokers to maintain load balance. Brokers are stateless and use ZooKeeper to maintain cluster state. One broker can handle hundreds of thousands of reads and writes per second, with terabytes of messages. ZooKeeper coordinates the brokers.
Kafka maintains information about which messages were sent and consumed (for cases where a message is sent, the packet is lost, and the consumer times out or waits forever).
This creates either a problem of double consumption, or the broker must keep multiple states about each message.
Kafka handles this by ensuring that each partition is consumed by exactly one consumer within a consumer group at any moment in time.
The new consumer relies on a server-side coordinator to negotiate the set of consumer processes that form the group.
Kafka balances load at three different points: 1. The partition assignment strategy (see partition assignment strategies). 2. The producer level: the strategy for selecting which partition stores a message. 3. If a consumer takes too long to process a message, Kafka considers it dead and reassigns its partitions among the other consumers. Once the consumer finishes its job, partitions can be assigned to it again.
Kafka has a data replication algorithm to ensure that if the leader fails or hangs, a new leader is elected and the message is still written.
In N replicas, one replica is the leader. The leader replica handles all reads and writes to the partition; followers passively and regularly copy data from the leader. The leader is responsible for maintaining and tracking the status of follower lag via the ISR (In-Sync Replicas), the set of replicas that are caught up with the leader. When a producer sends a message to the broker, the leader writes the message and it is copied to all followers.
Message replication latency is limited by the slowest follower, so it is important to detect slow replicas quickly.
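The effect of a slow follower can be illustrated with a small sketch. (The lag threshold and follower names are made up for illustration; real brokers evict laggards from the ISR based on settings such as `replica.lag.time.max.ms`.)

```python
def replication_latency_ms(follower_latencies_ms):
    # A write is fully replicated only once the slowest
    # in-sync follower has copied it.
    return max(follower_latencies_ms)

def in_sync_replicas(follower_lag_ms, max_lag_ms=500):
    # Followers lagging beyond the threshold are dropped from the
    # ISR, so they stop holding back acknowledgement of writes.
    return {f for f, lag in follower_lag_ms.items() if lag <= max_lag_ms}

lags = {"follower-1": 20, "follower-2": 45, "follower-3": 3000}
assert replication_latency_ms(lags.values()) == 3000
assert in_sync_replicas(lags) == {"follower-1", "follower-2"}
```

Evicting follower-3 from the ISR means subsequent writes are only gated on the two healthy followers.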
Messages in partition are totally ordered but not between partitions.
request.required.acks: 1 (default): the producer considers a send successful once the leader in the ISR has received and confirmed the data; if the leader then goes down before followers replicate, data is lost. 0: the producer does not wait for any confirmation from the broker before sending the next batch of messages. -1: the producer waits for all in-sync followers to confirm the data, but even this does not guarantee data will never be lost (e.g. if the ISR shrinks to the leader alone).
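The trade-off between the acks settings can be summed up by how many replicas are guaranteed to hold a record at the moment the producer gets its acknowledgement. This is a toy model of that guarantee, not the client API:

```python
def replicas_at_ack(acks: str, isr_size: int) -> int:
    """Toy model: replicas guaranteed to hold a record when the
    producer's send is acknowledged, for a given acks setting."""
    if acks == "0":
        return 0          # fire-and-forget: possibly not even the leader
    if acks == "1":
        return 1          # leader only; lost if leader dies before followers copy
    if acks in ("-1", "all"):
        return isr_size   # all in-sync replicas; still risky if the ISR shrank to 1
    raise ValueError(f"unknown acks setting: {acks}")

assert replicas_at_ack("0", isr_size=3) == 0
assert replicas_at_ack("1", isr_size=3) == 1
assert replicas_at_ack("all", isr_size=3) == 3
```

The `acks=-1` caveat in the note above corresponds to the `isr_size == 1` case: "all in-sync replicas" may be just the leader.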
We don't increase the number of partitions too far, because on broker failure a new leader must be elected for every partition that broker led. Suppose a broker has a total of 2000 partitions, each with 2 replicas. Roughly, this broker will be the leader for about 1000 partitions. When this broker fails uncleanly, all 1000 of those partitions become unavailable at exactly the same time. If it takes 5 ms to elect a new leader for a single partition, it will take up to 5 seconds to elect new leaders for all 1000 partitions.
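The back-of-the-envelope arithmetic in that example:

```python
# Unavailability window after an unclean broker failure.
partitions = 2000
replicas = 2
leader_share = partitions // replicas      # ~1000 partitions led by this broker
election_ms_per_partition = 5

unavailable_ms = leader_share * election_ms_per_partition
assert leader_share == 1000
assert unavailable_ms == 5000              # up to 5 seconds of unavailability
```

More partitions per broker means a proportionally longer worst-case leader-election window.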
The group coordinator is one of the brokers, while the group leader is one of the consumers in a consumer group.
The group coordinator is nothing but one of the brokers which receives heartbeats (or polling for messages) from all consumers of a consumer group. Every consumer group has a group coordinator. If a consumer stops sending heartbeats, the coordinator will trigger a rebalance.
When a consumer wants to join a consumer group, it sends a JoinGroup request to the group coordinator. The first consumer to join the group becomes the group leader. The leader receives a list of all consumers in the group from the group coordinator (this includes all consumers that sent a heartbeat recently and are therefore considered alive) and is responsible for assigning a subset of partitions to each consumer. It uses an implementation of the PartitionAssignor interface to decide which partitions should be handled by which consumer. After deciding on the partition assignment, the group leader sends the list of assignments to the group coordinator, which forwards this information to all the consumers. Each consumer only sees its own assignment; the leader is the only client process that has the full list of consumers in the group and their assignments. This process repeats every time a rebalance happens.
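The assignment step the group leader performs can be sketched as a range-style assignor. (Kafka's real RangeAssignor works per topic and over member metadata; this simplified single-topic version only illustrates the idea of splitting partitions into contiguous chunks.)

```python
def range_assign(consumers, num_partitions):
    """Sketch of a range-style assignment the group leader might compute.

    Partitions are split into contiguous ranges; when they don't divide
    evenly, the first consumers (in sorted order) get one extra each.
    """
    consumers = sorted(consumers)
    base, extra = divmod(num_partitions, len(consumers))
    assignment, start = {}, 0
    for i, consumer in enumerate(consumers):
        count = base + (1 if i < extra else 0)
        assignment[consumer] = list(range(start, start + count))
        start += count
    return assignment

# Three consumers, seven partitions: the first gets the extra partition.
print(range_assign(["c1", "c2", "c3"], 7))
# {'c1': [0, 1, 2], 'c2': [3, 4], 'c3': [5, 6]}
```

Note that every partition appears in exactly one consumer's list, matching the rule that a partition is consumed by only one member of the group at a time.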
Kafka server vs. ZooKeeper in new Kafka versions: newer Kafka releases can run without ZooKeeper, using the built-in KRaft consensus mode (KIP-500) to manage cluster metadata within Kafka itself.