June 2, 2021

Why Nutanix Beam Selected Apache Pulsar over Apache Kafka

Jonathan Ellis

Apache Pulsar™ is used by hundreds of companies to solve distributed messaging problems at scale. Some of these use cases are well-publicized, like Splunk’s or Tencent’s, but many are not. We’ve been sitting down with Pulsar users to increase awareness of this useful technology.

We recently spoke with Shivji Kumar Jha, senior member of technical staff at Nutanix, about why this leader in cloud software and hyperconverged infrastructure solutions chose Pulsar as its message streaming solution.

1. What industry do you work in and what is your role?

I work for Nutanix in a SaaS platform/enablement team. Among other things, we provide a data platform and devops services for multiple SaaS products. I am a senior member of technical staff in the data platform team, and a subject matter expert for Apache Pulsar at Nutanix.

2. What projects / use cases led to you adopting Apache Pulsar?

We use Apache Pulsar for message streaming (failover subscriptions) as well as queuing (shared subscriptions). We stream an infrastructure usage trail through Pulsar and then build stream analytics, materialised views, and similar derived views on top of those streams to surface insights for our customers. We also use Pulsar for event sourcing and general pub-sub, emitting change events from our microservices into Pulsar. You can learn more about our usage and experiences in my previous talks in the Pulsar community here.
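Pulsar expresses both patterns as subscription types on the same cluster. Here is a minimal sketch with the Pulsar Java client showing the two modes; the service URL, topic, and subscription names are hypothetical placeholders, not Nutanix's actual setup:

```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

public class SubscriptionModes {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")  // placeholder service URL
                .build();

        // Failover subscription: one active consumer at a time, giving
        // ordered stream consumption with automatic failover to a standby.
        Consumer<byte[]> streamConsumer = client.newConsumer()
                .topic("persistent://my-tenant/my-ns/usage-trail")  // hypothetical topic
                .subscriptionName("stream-analytics")
                .subscriptionType(SubscriptionType.Failover)
                .subscribe();

        // Shared subscription: messages are distributed round-robin across
        // consumers, giving work-queue semantics on the same cluster.
        Consumer<byte[]> queueConsumer = client.newConsumer()
                .topic("persistent://my-tenant/my-ns/change-events")  // hypothetical topic
                .subscriptionName("event-workers")
                .subscriptionType(SubscriptionType.Shared)
                .subscribe();

        Message<byte[]> msg = queueConsumer.receive();
        queueConsumer.acknowledge(msg);

        streamConsumer.close();
        queueConsumer.close();
        client.close();
    }
}
```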

3. How did you decide that Pulsar was the right technology to adopt? Were there specific features you wanted to take advantage of?

We were early adopters of Apache Pulsar. The following features caught our eye:

  1. One cluster for both stream and queue use cases
  2. Multi-tenancy
  3. Cloud-native architecture
  4. Modular design
  5. Distribution of topic persistence across nodes via the BookKeeper backend
  6. Offload to an object store (S3 in our case; see the sketch below)

There are more details on why we chose Pulsar over Kafka in this blog post.
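As a concrete illustration of point 6, tiered storage can be driven from Pulsar's admin API once the brokers are configured with an offload driver. A minimal sketch with the Pulsar Java admin client, assuming a hypothetical namespace and admin URL:

```java
import org.apache.pulsar.client.admin.PulsarAdmin;

public class OffloadSetup {
    public static void main(String[] args) throws Exception {
        // Assumes the brokers are already configured with an S3 offload
        // driver in broker.conf (managedLedgerOffloadDriver, bucket, region).
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")  // placeholder admin URL
                .build();

        // Once a topic's backlog in this namespace exceeds 10 GiB, older
        // ledgers are moved from BookKeeper to the object store.
        admin.namespaces().setOffloadThreshold("my-tenant/my-ns",
                10L * 1024 * 1024 * 1024);

        admin.close();
    }
}
```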

4. What was the most useful resource or content to you as you got started?

The Pulsar community is very active on Slack as well as GitHub, so that was a major help. We also leveraged a lot of blogs, videos, and the official documentation to get started, and we relied heavily on simply reading the code to understand how things work. Having used Kafka in the past, we already understood streaming technology and its design principles, so working through the code was easy.

Of course we ran our own POCs and performance tests because Pulsar was new at the time, and it clearly met our performance targets.

5. What advice would you have for other organizations who are considering Pulsar?

Pulsar, as of today, is mature, so it is easy to onboard, test, and use. The official docs, tutorials, and use-case videos have grown by leaps and bounds, and it is much easier to get started now. My recommendation is to run your own performance tests on your hardware, operating environment, and use cases, do the POCs, and understand the Pulsar source code and design principles to a fair degree before adopting it. You also need to create monitoring dashboards and alerts for operational readiness. That is how we started.

When we started out, we built wrappers to reduce the API surface area our teams had to use. We tested the Pulsar features behind those exposed libraries thoroughly, discussed the issues we discovered with the community, and fixed most of them ourselves.
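As an illustration of that wrapper approach, here is a minimal sketch in Java; the class name and the exposed surface are hypothetical, not Nutanix's actual library. The idea is that batching, retries, and schema choices live inside one tested facade rather than in every service:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.PulsarClientException;

// Hypothetical facade: only publish is exposed, so the Pulsar API
// surface that application teams depend on stays small and well tested.
public class EventBus implements AutoCloseable {
    private final PulsarClient client;
    private final Map<String, Producer<byte[]>> producers = new ConcurrentHashMap<>();

    public EventBus(String serviceUrl) throws PulsarClientException {
        this.client = PulsarClient.builder().serviceUrl(serviceUrl).build();
    }

    public void publish(String topic, byte[] payload) throws PulsarClientException {
        // Producers are cached per topic; creating one per send would be slow.
        producers.computeIfAbsent(topic, t -> {
            try {
                return client.newProducer().topic(t).create();
            } catch (PulsarClientException e) {
                throw new RuntimeException(e);
            }
        }).send(payload);
    }

    @Override
    public void close() throws PulsarClientException {
        client.close();  // closes all producers created by this client
    }
}
```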

6. Beyond your current use case, what are some additional use cases/projects where you would like to use Pulsar in your organization in the future?

As of today, we have roughly 1,700 topics in production in a single cluster, where a single producer at peak produces 0.2 million messages per minute, and we have seen a single consumer consume up to 0.7 million messages per minute on a nine-node cluster. We ingest close to 50 GB of data per day. We have tested Pulsar at 10x that load and are confident it would scale. Going forward, we plan to put Pulsar Functions, transactions, and delayed messages to use. These are all mature features now, and we are confident they will work well for us.
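Delayed messages, mentioned above, are a producer-side feature in the Pulsar Java client. A minimal sketch, with hypothetical service URL and topic; note that the delay is honored for consumers on shared (and key-shared) subscriptions:

```java
import java.util.concurrent.TimeUnit;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

public class DelayedDelivery {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")  // placeholder service URL
                .build();

        Producer<byte[]> producer = client.newProducer()
                .topic("persistent://my-tenant/my-ns/reminders")  // hypothetical topic
                .create();

        // The broker withholds this message from shared-subscription
        // consumers until the requested delay has elapsed.
        producer.newMessage()
                .value("recheck usage trail".getBytes())
                .deliverAfter(10, TimeUnit.MINUTES)
                .send();

        producer.close();
        client.close();
    }
}
```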

7. If you could wave a magic wand and change one thing about Pulsar, what would it be?

Pulsar has grown very fast in terms of the breadth of its features. The Pulsar team encourages committers to keep merging code that serves their use cases. While that is great, contributors sometimes only contribute enough to serve their own use cases, which can lead to rough edges and feature parity gaps. We would also like the core to be more stable and less fluid, along with much more test coverage; perhaps a community-based QA team or forum would go a long way. Kudos to the Pulsar team for working hard to keep the core stable. The Apache PMC [Project Management Committee] is doing a really good job prioritising the right things, and we are confident that the magic (of a stable, scalable Pulsar) will eventually happen!

Additional question for our developer relations team:

8. What topic or topics do you think are under-covered in the existing documentation and community content?

Real production environments and their use cases are not well documented. There are some on the StreamNative website, but there is much more that could be covered.

Want to try out Apache Pulsar? Sign up now for Astra Streaming, our fully managed Apache Pulsar service. We’ll give you access to its full capabilities entirely free through beta. See for yourself how easy it is to build modern data applications and let us know what you’d like to see to make your experience even better. 

Behind the Innovator takes a peek behind the scenes with learnings and best practices from leading architects, operators, and developers building cloud-native, data-driven applications with Apache Cassandra™, Apache Pulsar™ and open-source technologies in unprecedented times.

