The Kafka documentation describes Apache Kafka as a
distributed streaming platform.
Kafka enables you to:
- Publish and subscribe to streams of data records
- Store the records in a fault-tolerant and scalable fashion
- Process streams of records in real-time
- Kafka is written in Java and Scala
- Originally created at LinkedIn
- Became open source in 2011
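The publish/subscribe and storage ideas above can be sketched with a toy in-memory log. This is only an illustration of the model, not the real Kafka API; the `Topic` class and its method names are invented for this sketch:

```python
class Topic:
    """A toy append-only log, illustrating how a Kafka topic stores records."""

    def __init__(self):
        self.records = []  # records are retained, not deleted when read

    def publish(self, record):
        self.records.append(record)  # producers only ever append

    def read_from(self, offset):
        # each consumer reads from its own offset; reading removes nothing
        return self.records[offset:]


orders = Topic()
orders.publish({"order": 1})
orders.publish({"order": 2})

# two independent subscribers, each tracking its own position
analytics_offset = 0
print(orders.read_from(analytics_offset))  # both records
shipping_offset = 1
print(orders.read_from(shipping_offset))   # only the second record
```

Because records are stored rather than consumed destructively, many applications can subscribe to the same stream without interfering with each other.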
Kafka is commonly used for building real-time streaming data pipelines that reliably get data between systems or applications.
If you have two different applications or two different systems that need to pass data, Kafka lets you do that reliably. For example:
- An online store application. When a customer makes a purchase, the front-end application sends the order data to a backend system that handles fulfillment/shipping. You can use a messaging platform like Kafka to ensure that the data is passed to the backend reliably and with no data loss, even if the backend application or one of your servers goes down. If the shipping system goes down for a while, Kafka holds the messages and reliably delivers them once the system is back up.
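That durability property can be sketched as follows, using a plain list as a hypothetical stand-in for a Kafka topic (the names here are invented for illustration). The key idea is that the shipping consumer tracks its own offset, so after an outage it resumes exactly where it left off:

```python
order_log = []       # stands in for a Kafka topic: durable, append-only
shipping_offset = 0  # the shipping consumer's committed position in the log

order_log.append("order-1")
order_log.append("order-2")

# the shipping service processes what is available, then commits its offset
processed = order_log[shipping_offset:]
shipping_offset += len(processed)

# the shipping service goes down; orders keep arriving and are retained
order_log.append("order-3")
order_log.append("order-4")

# on restart, the consumer resumes from its committed offset: nothing is lost
missed = order_log[shipping_offset:]
print(missed)  # ['order-3', 'order-4']
```

The producer never needs to know whether the consumer is up; it just appends to the log, which is what decouples the two systems.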
Kafka is also used for building real-time streaming applications that transform or react to streams of data. For example:
- Log aggregation. You have multiple servers producing log data. You can feed that data into Kafka and use streams to transform it into a standardized format, then combine the log data from all of the servers into a single feed for analysis.
- Streaming is similar to messaging, but where messaging is one application talking to another, a streaming application receives data from Kafka, does some processing on it (possibly changing the data), and feeds the results back into Kafka. Streaming is all about real-time data processing.
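The log-aggregation pipeline described above can be sketched in plain Python. This is a toy illustration: the lists stand in for Kafka topics, and the log formats and function names are invented for this sketch; a real application would use Kafka consumers/producers or the Kafka Streams API.

```python
# Toy stand-ins for Kafka topics, one per server plus a combined output feed.
server_a_topic = ["2024-01-01 ERROR disk full", "2024-01-01 INFO started"]
server_b_topic = ["[warn] high latency", "[error] timeout"]
combined_feed = []  # the output topic: a single standardized feed

def normalize_a(line):
    # server A's (invented) format: "<date> <LEVEL> <message>"
    _date, level, *msg = line.split()
    return {"level": level, "message": " ".join(msg)}

def normalize_b(line):
    # server B's (invented) format: "[level] <message>"
    level, _, msg = line.partition("] ")
    return {"level": level.strip("[]").upper(), "message": msg}

# the streaming step: read each record, transform it, feed the result back
for line in server_a_topic:
    combined_feed.append(normalize_a(line))
for line in server_b_topic:
    combined_feed.append(normalize_b(line))

print(combined_feed[0])  # {'level': 'ERROR', 'message': 'disk full'}
```

The transformation runs continuously on records as they arrive, which is what makes this stream processing rather than a batch job.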
Benefits of Kafka
- Strong reliability guarantees
- Fault tolerance
- Robust APIs