Building Robust and Scalable Data Pipelines with Kafka

Event-sourcing data architectures are all the rage, but what does it mean to actually build one? This session will cover the basics of an event pipeline and best practices for ensuring your data systems are fault-tolerant, verifiably correct, and scalable for all your use cases. After covering the basics of Kafka, the distributed commit log at the heart of these pipelines, we'll go through the additional tools and patterns Twilio uses to get data from its sources to a variety of storage and analytics systems.
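As a taste of the building block the session starts from, below is a minimal sketch of publishing a single event to Kafka with the kafka-python client. The broker address, topic name, and event schema are illustrative assumptions, not Twilio's actual configuration; the point is simply that an event pipeline begins with producers writing durable, acknowledged records to a topic.

```python
# Minimal sketch (illustrative only): publish one event to a Kafka topic
# using the kafka-python client. Broker address, topic name, and event
# fields are placeholder assumptions.
import json

from kafka import KafkaProducer

# acks="all" waits for the full in-sync replica set to acknowledge the write,
# trading a little latency for the fault tolerance the pipeline relies on.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # assumed local broker for the example
    acks="all",
    retries=3,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# A hypothetical event; the schema here is made up for illustration.
event = {"event_type": "call.completed", "duration_seconds": 42}

# send() is asynchronous; flush() blocks until buffered records are delivered.
producer.send("events", value=event)
producer.flush()
```

From there, the interesting work is everything downstream of that topic: consumers, retries, schema management, and fan-out to the storage and analytics systems the session walks through.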