Log Compaction | Highlights in the Kafka and Stream Processing Community | January 2016

Gwen Shapira

Happy 2016! Wishing you a wonderful, highly scalable, and very reliable year. Log Compaction is a monthly digest of highlights in the Apache Kafka and stream processing community. Got a newsworthy item? Let us know.

Many things have happened since we last shared the state of Apache Kafka and the streams ecosystem. Let's take a look!

  • Kafka 0.9 was released, and so was Confluent Platform 2.0.
  • The call for proposals for Kafka Summit is closing soon – submit your abstract by Monday, January 11! Make it your new year’s resolution to participate more in the Kafka community and you can start by registering for the conference. Catch the Early Bird price (save $100) before it expires on January 15. 
  • Congratulations to Ewen Cheslack-Postava who joined Apache Kafka as a committer! We wish him much success and many patch reviews.
  • Kafka 0.9 added protocol support for managing groups, so that clients no longer need to interact with ZooKeeper directly. KIP-40 gives a design for adding protocol support for clients to list available groups and to show the group members and their lag. The protocol is implemented in the new ConsumerGroupCommand (kafka-consumer-groups.sh), and is also available for use by third-party clients.
  • Kafka currently only supports a “logical” notion of time – the message offset, which indicates a relative order of messages. Many users want to be able to know the physical time that a message was produced. KIP-32, which is in active discussions and voting, will add a timestamp field to each Kafka message, indicating the time the client created the message. This would eventually allow adding special indexes and also support consuming messages based on their timestamp.
  • Kafka Connect is a new feature introduced in 0.9 that makes it really easy to directly integrate Kafka with external data systems like RDBMS’s or Hadoop. This tutorial shows how to use Kafka Connect to get events from a relational database to Kafka, and from there to HDFS and Hive, including automated partitioning of the data and updates to the Hive schema.
  • Looking to use Kafka with Spring for stream processing? There’s a 5-part blog post on how to do just that.
  • Yahoo published a benchmark comparing popular stream processing frameworks – Storm, Spark Streaming, and Flink.
  • At the first Seattle Kafka Meetup this past November, Microsoft shared how they use Kafka in Bing. One trillion messages per day!
  • Kafka: The Definitive Guide is now available for pre-order from O’Reilly.
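To get a flavor of the new group-management tooling mentioned above, here is a quick command-line sketch. It assumes a broker reachable at localhost:9092 and a consumer group named my-group – both are placeholders, not anything prescribed by the release:

```shell
# List all consumer groups that use the new (0.9) consumer protocol.
bin/kafka-consumer-groups.sh --new-consumer \
    --bootstrap-server localhost:9092 --list

# Describe one group: shows its members, their partition
# assignments, and the lag per partition.
bin/kafka-consumer-groups.sh --new-consumer \
    --bootstrap-server localhost:9092 \
    --describe --group my-group
```

Because the group protocol is broker-side, this works without any ZooKeeper connection string – a notable change from the older tooling.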
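As a taste of what the Kafka Connect tutorial above walks through, a source connector is configured with a small properties file and run by a Connect worker. The sketch below uses Confluent's JDBC source connector; the connector name, connection URL, column, and topic prefix are illustrative placeholders, not values from the tutorial:

```
# jdbc-source.properties -- hypothetical example configuration
name=orders-jdbc-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:mysql://localhost:3306/shop?user=connect&password=secret
# Poll for new rows using a monotonically increasing id column.
mode=incrementing
incrementing.column.name=id
# Each table's rows land in a topic named <prefix><table>.
topic.prefix=mysql-
```

A standalone worker would then be started with something like bin/connect-standalone.sh worker.properties jdbc-source.properties, after which rows flow into Kafka and can be picked up by a sink connector (e.g. HDFS) on the other side.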
