Distributed Data Show Episode 87
Description - Jeff and Tim talk about the most common questions developers have about Kafka and three great ways to combine Kafka with Cassandra in your applications.
0:30 - debating the great innovations of human history
1:38 - questions developers ask Tim about Kafka: lots of specific questions about tuning producer/consumer throughput, but also questions further up the stack about stream processing APIs and about bridging synchronous and asynchronous interactions
4:17 - Jeff's first fail with Kafka - publishing just a value to a topic configured as a key-value topic.
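Why does forgetting the key matter? Kafka routes a keyed record to a partition derived from the key's hash, so all events for one key land on the same partition in order; records published with a null key are spread across partitions instead. A toy sketch (the real default partitioner uses murmur2; this stand-in only illustrates the behavior):

```python
import hashlib
import itertools

NUM_PARTITIONS = 4
_round_robin = itertools.cycle(range(NUM_PARTITIONS))

def pick_partition(key):
    """Toy stand-in for Kafka's default partitioner."""
    if key is None:
        return next(_round_robin)  # no key: records rotate across partitions
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

# Keyed records are sticky: same key, same partition, every time.
assert pick_partition("user-42") == pick_partition("user-42")

# Unkeyed records wander, so per-key ordering is lost.
unkeyed = {pick_partition(None) for _ in range(NUM_PARTITIONS)}
print(sorted(unkeyed))  # → [0, 1, 2, 3]
```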
5:50 - Best practices for working with microservices - having services communicate through durable logs of immutable events is a great pattern. When these services also expose synchronous APIs, they often need to query for other data. Logs like Kafka don't do well with complex queries like full-text search, geospatial, etc., and that's where incorporating Cassandra and DataStax Enterprise makes sense.
8:31 - Pattern 1 for combining Kafka and Cassandra: a service consumes events from a stream, performs computation, and produces new events. The service may also provide an API and need to look up data in Cassandra.
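A minimal sketch of Pattern 1 (the topic, event, and catalog names here are illustrative, not from the episode): a service consumes order events, enriches them with reference data, and produces new events. The Kafka plumbing is shown as comments; the compute step is a pure function so it runs without a broker.

```python
import json

def enrich_order(event: dict, catalog: dict) -> dict:
    """Compute step: join an incoming event with reference data
    (in production this lookup might hit Cassandra instead)."""
    product = catalog.get(event["product_id"], {})
    return {
        **event,
        "product_name": product.get("name", "unknown"),
        "total": event["quantity"] * product.get("price", 0.0),
    }

# Wiring sketch, assuming a consumer/producer client such as confluent-kafka:
#
#   consumer.subscribe(["orders"])
#   while True:
#       msg = consumer.poll(1.0)
#       if msg is None or msg.error():
#           continue
#       enriched = enrich_order(json.loads(msg.value()), catalog)
#       producer.produce("orders-enriched", key=msg.key(),
#                        value=json.dumps(enriched))

catalog = {"sku-1": {"name": "widget", "price": 2.5}}
out = enrich_order({"product_id": "sku-1", "quantity": 4}, catalog)
print(out["total"])  # → 10.0
```

Keeping the transformation pure makes the service easy to unit test independently of the Kafka and Cassandra infrastructure.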
9:33 - Pattern 2: Cassandra-centric view - use Kafka as a pipe for data ingest into Cassandra. This is great when you want to leverage Cassandra's multi-DC replication.
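A rough sketch of Pattern 2 (keyspace, table, and column names are made up for illustration): a consumer drains a Kafka topic and writes each event into Cassandra. The event-to-row mapping is a pure function; the driver calls appear as comments since they need a running cluster.

```python
def event_to_insert(event: dict):
    """Map a sensor event to a parameterized CQL INSERT statement."""
    cql = ("INSERT INTO telemetry.readings "
           "(sensor_id, ts, value) VALUES (%s, %s, %s)")
    return cql, (event["sensor_id"], event["ts"], event["value"])

# With the DataStax Python driver (cassandra-driver), the ingest loop
# would look roughly like:
#
#   cluster = Cluster(["node1"])      # multi-DC replication is configured
#   session = cluster.connect()       # on the keyspace, not in this code
#   for msg in consumer:
#       cql, params = event_to_insert(json.loads(msg.value()))
#       session.execute(cql, params)

cql, params = event_to_insert(
    {"sensor_id": "s-7", "ts": 1700000000, "value": 21.5})
print(params)  # → ('s-7', 1700000000, 21.5)
```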
10:29 - Pattern 3: Cassandra into Kafka. Possible to do change data capture (CDC) from Cassandra and other databases via connectors plugged into Kafka Connect.
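In Kafka Connect, a source connector is registered as a named configuration; once registered, change events flow from the database into a Kafka topic with no custom consumer code. A sketch of that registration (the connector class name and host below are placeholders, not a real connector - consult your CDC connector's documentation for its actual settings):

```python
# Connector registration payload for the Kafka Connect REST API.
connector = {
    "name": "cassandra-cdc-source",
    "config": {
        # Hypothetical class name -- substitute your CDC connector's class.
        "connector.class": "com.example.CassandraCdcSourceConnector",
        "tasks.max": "1",
        "topic.prefix": "cdc.",
    },
}

# Registering it (Kafka Connect's REST API listens on port 8083 by default):
#
#   import requests
#   requests.post("http://connect-host:8083/connectors", json=connector)

print(connector["name"])  # → cassandra-cdc-source
```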
11:32 - Wrapping up - the challenges of finding time to code when leading DevRel teams, having outside interests and hobbies.
Developer Relations at DataStax