
Analyzing Petabyte Scale Financial Data with Apache Pinot and Apache Kafka

At Stripe, we operate a general ledger modeled as double-entry bookkeeping for all financial transactions. Warehousing such data is challenging due to its high volume and the high cardinality of unique accounts. Furthermore, it is financially critical to get up-to-date, accurate analytics over all records. Because real-time transactions are constantly changing, it is impossible to pre-compute the analytics as a fixed time series. We have overcome this challenge by creating a real-time key-value store inside Pinot that can sustain half a million QPS across all financial transactions.

We will talk about the details of our solution and the interesting technical challenges we faced.
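To illustrate the idea behind the abstract, here is a toy sketch (not Stripe's actual implementation, and far from Pinot's scale) of a key-value view over a double-entry ledger: each transaction is applied incrementally as it arrives, so per-account balances are always current and can be served by cheap point lookups instead of a pre-computed time series. The `LedgerKV` class and its method names are invented for this example.

```python
from collections import defaultdict
from decimal import Decimal

class LedgerKV:
    """Toy real-time key-value view of a double-entry ledger."""

    def __init__(self):
        # account name -> running balance, updated as events arrive
        self.balances = defaultdict(Decimal)

    def post(self, debit_account, credit_account, amount):
        """Apply one double-entry transaction incrementally."""
        amount = Decimal(amount)
        self.balances[debit_account] -= amount
        self.balances[credit_account] += amount

    def balance(self, account):
        """Point lookup; in the talk's setting this is the high-QPS read path."""
        return self.balances[account]

ledger = LedgerKV()
ledger.post("cash", "revenue", "100.00")
ledger.post("cash", "revenue", "250.50")

# Double-entry invariant: all balances always sum to zero.
assert sum(ledger.balances.values()) == 0
print(ledger.balance("revenue"))  # Decimal('350.50')
```

In a production system the transaction stream would come from Kafka and the store would be Pinot, but the core property is the same: updates are folded in as they happen, so reads never wait on a batch recomputation.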

Presenters

Xiaoman Dong
Joey Pereira