
How Blizzard Used Kafka to Save Our Pipeline (and Azeroth)

Kafka Summit SF 2017 | Pipelines Track

When Blizzard started sending gameplay data to Hadoop in 2013, we went through several iterations before settling on Flume agents in many data centers around the world reading from RabbitMQ and writing to central Flume agents in our Los Angeles data center. While this worked at first, by 2015 we were hitting problems scaling to the number of events required. This is how we used Kafka to save our pipeline.

Jeff Field
Systems Engineer, Blizzard