Four Reasons Why Apache Pulsar is Essential to the Modern Data Stack


Messaging has been on DataStax’s radar for several years. A significant motivator for this is the increasing popularity of microservice-based architectures. Briefly, microservice architectures use a message bus to decouple communication between services and to simplify replay, error handling, and load spikes.


With Cassandra and Astra, developers and architects have a database ecosystem that is

  1. Based on open source
  2. Well suited for hybrid- and multi-cloud deployments
  3. Available in a cloud-native, consumption-priced service

No current messaging solution satisfies all three of these requirements, so we’re building one.

We started by evaluating the most popular option, Apache Kafka. We found that it came up short in four areas:

  1. Geo-replication
  2. Scaling
  3. Multitenancy
  4. Queuing

Apache Pulsar solves all of these problems to our satisfaction. Let’s look at each of these in more detail.


Geo-replication

Cassandra supports synchronous and asynchronous replication within or across datacenters. (Most often, Cassandra is configured for synchronous replication within a region and asynchronous replication across regions.) This allows Cassandra users like Netflix to serve customers everywhere with local latency, to comply with data sovereignty regulations, and to survive infrastructure failures. (When AWS rebooted 218 Cassandra nodes to patch a security vulnerability, “Netflix experienced 0 downtime.”)

Kafka is designed to run in a single region and does not support cross-datacenter replication in the core broker. Clients outside the region where Kafka is deployed must simply tolerate the increased latency. Several projects attempt to add cross-datacenter replication to Kafka at the client level, but these are necessarily difficult to operate and prone to failure.

Like Cassandra, Pulsar builds geo-replication into the core server. (Also like Cassandra, you can choose to deploy this in a synchronous or asynchronous configuration, and you can configure replication by topic.) Producers can write to a shared topic from any region, and Pulsar takes care of ensuring those messages are visible to consumers everywhere.
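The shared-topic behavior can be sketched as a toy model. This is purely illustrative: real Pulsar replicates through internal replicator cursors configured with `pulsar-admin`, so the classes and names below are assumptions for the sketch, not the actual API.

```python
# Toy model of Pulsar-style async geo-replication (illustration only;
# real Pulsar forwards messages via internal replicator cursors).

class Cluster:
    def __init__(self, name):
        self.name = name
        self.topic = []   # this cluster's local copy of the shared topic
        self.peers = []   # clusters to replicate to

    def publish(self, msg):
        """A local producer writes; the message is forwarded to peers."""
        self.topic.append(msg)
        for peer in self.peers:
            peer.replicate(msg)

    def replicate(self, msg):
        """Receive a message replicated from another cluster (no re-forwarding)."""
        self.topic.append(msg)

us_west, eu_central = Cluster("us-west"), Cluster("eu-central")
us_west.peers.append(eu_central)
eu_central.peers.append(us_west)

us_west.publish("order-1")      # produced in the US
eu_central.publish("order-2")   # produced in Europe

# Consumers in either region see both messages.
print(sorted(us_west.topic))     # ['order-1', 'order-2']
print(sorted(eu_central.topic))  # ['order-1', 'order-2']
```

The point of the sketch: producers write locally in any region, and replication is the server's job, not the client's.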


Splunk wrote up a good overview of Pulsar geo-replication in two parts: one, two.


Scaling

In Kafka, the unit of storage is the segment file, but the unit of replication is the partition: all the segment files in a partition together. Each partition is owned by a single leader broker, which replicates to several followers. So when you need to add capacity to your Kafka cluster, some partitions must be copied to the new node before it can participate in reducing the load on the existing nodes.



This means that adding capacity to a Kafka cluster makes it slower before it makes it faster. If your capacity planning is on point, then this is fine, but if business needs change faster than you expected then it could be a serious problem.

Pulsar adds a layer of indirection. (Pulsar also splits apart compute and storage, which are managed by the broker and the bookie, respectively, but the important part here is how Pulsar, via BookKeeper, increases the granularity of replication.) In Pulsar, partitions are split up into ledgers, but unlike Kafka segments, ledgers can be replicated independently of one another. Pulsar keeps a map of which ledgers belong to a partition in ZooKeeper. So when we add a new storage node to the cluster, all we have to do is start a new ledger on that node. Existing data can stay where it is; the cluster does no extra work.
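The ledger indirection can be illustrated with a small sketch. None of this is the real BookKeeper API; it is a toy model of the idea that a partition is just a list of ledger IDs, each placed on some set of storage nodes.

```python
# Toy model of Pulsar's ledger indirection (not the real BookKeeper API).
# A partition is an ordered list of ledger IDs; each ledger lives on some
# set of storage nodes ("bookies"). Adding a bookie just means the next
# ledger can be placed on it -- existing ledgers never move.

partition_ledgers = ["ledger-1", "ledger-2"]          # ZooKeeper-style map
ledger_location = {"ledger-1": {"bookie-A", "bookie-B"},
                   "ledger-2": {"bookie-A", "bookie-B"}}

def add_bookie_and_roll_ledger(new_bookie):
    """Start a fresh ledger that includes the new node; move no data."""
    new_id = f"ledger-{len(partition_ledgers) + 1}"
    partition_ledgers.append(new_id)
    ledger_location[new_id] = {"bookie-B", new_bookie}
    return new_id

before = dict(ledger_location)
add_bookie_and_roll_ledger("bookie-C")

# Old ledgers are untouched; only the new ledger uses the new node.
assert all(ledger_location[l] == before[l] for l in before)
print(partition_ledgers)                      # ['ledger-1', 'ledger-2', 'ledger-3']
print(sorted(ledger_location["ledger-3"]))    # ['bookie-B', 'bookie-C']
```

Contrast with Kafka, where adding "bookie-C" would require copying whole partitions onto it before it could share any load.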

See Jack Vanlightly’s blog for an in-depth explanation of Pulsar’s architecture and storage model.


Multitenancy

Multi-tenant infrastructure can be shared across multiple users and organizations while isolating them from each other. The activities of one tenant should not be able to affect the security or the SLAs of other tenants.

Fundamentally, multitenancy reduces costs in two ways. First, simply by sharing infrastructure that isn’t maxed out by a single tenant -- the cost of that component can be amortized across all users. Second, by simplifying administration -- when there are dozens or hundreds or thousands of tenants, managing a single instance offers significant simplification.  Even in a containerized world, “get me an account on this shared system” is much easier to fulfil than “stand me up a new instance of this service.” And global problems may be obscured by being scattered across many instances.

Like geo-replication, multitenancy is hard to graft onto a system that wasn’t designed for it. Kafka is a single-tenant design; Pulsar builds multitenancy in at the core.


Pulsar allows us to manage multiple tenants across multiple regions from a single interface that includes authentication and authorization, isolation policy (Pulsar can optionally carve out hardware within the cluster that is dedicated to a single tenant), and storage quotas. CapitalOne wrote up a good overview of Pulsar multitenancy here.

DataStax’s new Admin Console for Pulsar makes this even easier.
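One concrete place the tenancy hierarchy shows up is in Pulsar topic names, which always spell out the tenant and namespace: `persistent://tenant/namespace/topic`. A minimal parser sketch in Python (illustrative only; the official clients ship their own topic-name handling):

```python
# Pulsar topic names encode the tenancy hierarchy directly:
#   {persistent|non-persistent}://tenant/namespace/topic

def parse_topic(name):
    """Split a Pulsar topic name into its hierarchy components."""
    domain, _, rest = name.partition("://")
    tenant, namespace, topic = rest.split("/", 2)
    return {"domain": domain, "tenant": tenant,
            "namespace": namespace, "topic": topic}

parts = parse_topic("persistent://acme/payments/orders")
print(parts["tenant"], parts["namespace"], parts["topic"])  # acme payments orders
```

Because every topic carries its tenant and namespace, policies like quotas, auth, and replication can be attached at any level of the hierarchy.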

Queuing (as well as Streaming)

Kafka offers a classic pub/sub (publish/subscribe) messaging model -- publishers send messages to Kafka, which orders them by partition within a topic, and sends a copy to every subscriber (or “consumer”).  


Kafka records which messages a consumer has seen with an offset into the log. This means that messages cannot be acknowledged out-of-order, which in turn means that a subscription cannot be shared across multiple consumers.  (Kafka allows mapping multiple partitions to a single consumer in its consumer group design, but not the other way around.)
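To see why a single offset forbids out-of-order acknowledgment, compare it with tracking acknowledgments per message, which is the alternative Pulsar takes. This is a conceptual sketch, not either system’s client API:

```python
# Contrast of the two acknowledgment models (conceptual sketch only).

# Kafka-style: one offset per subscription -- acking message 2 implies
# 0..2 are all done, so out-of-order acks silently skip work.
class OffsetSubscription:
    def __init__(self):
        self.offset = -1
    def ack(self, msg_id):
        self.offset = max(self.offset, msg_id)   # can only move forward
    def unacked(self, total):
        return [m for m in range(total) if m > self.offset]

# Pulsar-style: acks are tracked per message, so consumers can finish
# messages in any order without losing track of pending work.
class IndividualAckSubscription:
    def __init__(self):
        self.acked = set()
    def ack(self, msg_id):
        self.acked.add(msg_id)
    def unacked(self, total):
        return [m for m in range(total) if m not in self.acked]

kafka_like, pulsar_like = OffsetSubscription(), IndividualAckSubscription()
for sub in (kafka_like, pulsar_like):
    sub.ack(2)                        # message 2 finishes first, out of order

print(kafka_like.unacked(4))          # [3] -- 0 and 1 were silently skipped
print(pulsar_like.unacked(4))         # [0, 1, 3] -- 0 and 1 still pending
```

Per-message acknowledgment is what makes it safe for multiple consumers to share one subscription, which is the topic of the next section.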

This is fine for pub/sub use cases, sometimes called streaming. For streaming, it’s important to consume messages in the same order in which they were published.

Pulsar supports the pub/sub model, but it also supports the queuing model, where processing order is not important and we just want to load balance messages in a topic across an arbitrary number of consumers.
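The queuing pattern can be sketched as a round-robin dispatcher. This is a toy model of a Pulsar-style shared subscription; the real broker also respects per-consumer flow-control permits, so take the names and shape here as illustrative assumptions.

```python
# Toy round-robin dispatch for a Pulsar-style shared subscription.
from itertools import cycle

def dispatch(messages, consumers):
    """Load-balance messages across consumers, ignoring order."""
    assignments = {c: [] for c in consumers}
    for consumer, msg in zip(cycle(consumers), messages):
        assignments[consumer].append(msg)
    return assignments

out = dispatch(range(6), ["c1", "c2", "c3"])
print(out)   # {'c1': [0, 3], 'c2': [1, 4], 'c3': [2, 5]}
```

Adding a consumer just adds another destination for the dispatcher; no repartitioning of the topic is required, unlike Kafka consumer groups, where the consumer count is capped by the partition count.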


This (and queuing-oriented features like “dead letter queue” and negative acknowledgment with redelivery) means that Pulsar can often replace AMQP and JMS use cases as well as Kafka-style pub/sub, offering a further opportunity for cost reduction to enterprises adopting Pulsar.


Pulsar’s architecture gives it important advantages over Kafka in geo-replication, scaling, multitenancy, and queuing.  DataStax is excited to join the Pulsar community with today’s announcement of our acquisition of the Kesque Pulsar-as-a-service and open-sourcing the management and monitoring tools built by the Kesque team in our new Luna Streaming distribution of Pulsar.

Learn more about what Pulsar can do for Cassandra, and what Cassandra can do for Pulsar.
