April 21, 2021

How Apache Pulsar helps Vivy empower people to take charge of their health

Jonathan Ellis

Behind the Innovator takes a peek behind the scenes with learnings and best practices from leading architects, operators, and developers building cloud-native, data-driven applications with Apache Cassandra™, Apache Pulsar™, and open-source technologies. 

Apache Pulsar is used by hundreds of companies to solve distributed messaging problems at scale. Some of these use cases are well-publicized, like Splunk’s or Tencent’s, but many are not. We’ve been sitting down with Pulsar users to increase awareness of this useful technology.

Today’s interview is with Kirill Merkushev, Head of Backend at Vivy, a digital healthcare platform.

What industry do you work in and what is your role?

We’re an Allianz-backed startup on a mission to build the leading digital health platform for personalized interaction towards better health. We connect people to the services they need to manage, improve, and optimize their health. We have a mobile app, an organizational portal accessible via the web, and a backend system on top of AWS that glues it all together. As Head of Backend, I’m responsible for anything server-related: the data flows going through the backend, final tech-stack decisions, and infrastructure development and maintenance.

What projects or use cases led to you adopting Pulsar?

From the very beginning we wanted to build a scalable system whose components would be as independent from each other as possible. That led us to event-driven, reactive approaches, and in particular to an event-sourced system. Another point in its favor was that while event-sourced systems are usually hard to design at the beginning, they are quite easy to extend, which lets us try different approaches and features really quickly. We didn't use any framework, just the bare minimum: the idea of an event, storage for the events, and projections.
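
As a rough illustration of that bare-minimum model (a sketch of the general pattern, not Vivy's code; all names are hypothetical), an event-sourced system needs little more than an append-only log of events plus projections that fold over it:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal event-sourcing sketch: events are appended, never updated,
// and read models ("projections") are rebuilt by replaying the log.
record AccountEvent(String accountId, long amountCents) {}

class EventStore {
    private final List<AccountEvent> log = new ArrayList<>();

    void append(AccountEvent event) { log.add(event); }

    // Replaying the full log is why unlimited retention matters so much.
    List<AccountEvent> readAll() { return List.copyOf(log); }
}

class BalanceProjection {
    long balanceCents = 0;

    void replay(EventStore store) {
        for (AccountEvent e : store.readAll()) {
            balanceCents += e.amountCents();
        }
    }
}
```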

Initially we chose Kafka for a quick start, planning to offload it later to a service provider. However, we later decided to search for something we could operate internally and grow our own expertise on. By that point we'd built a simple event-gateway that abstracted the storage from our services, so we could try different systems more or less without changing the service code (the event-gateway project is here). We knew that Kafka isn't an easy system to operate, due to its tightly coupled brokers and data storage, and at that moment Kafka also wasn't the best system to integrate with reactive, non-blocking libraries. So we found Pulsar, an alternative we really liked both from the development point of view (the client is really nice) and from the operational point of view (storage lives in separate bookies, and there's less load on ZooKeeper).
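
The value of such a gateway is that services depend on a small messaging interface rather than on Kafka or Pulsar directly. A minimal sketch of the idea (this interface and its names are hypothetical, not Vivy's actual event-gateway API):

```java
import java.util.concurrent.CompletionStage;
import java.util.function.Consumer;

// Services publish and subscribe only through this interface, so the
// broker behind it (Kafka, Pulsar, ...) can be swapped without
// touching any service code.
interface EventGateway {
    CompletionStage<Void> publish(String topic, byte[] payload);

    void subscribe(String topic, Consumer<byte[]> handler);
}
```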

How did you decide that Pulsar was the right technology to adopt?  Were there specific features you wanted to take advantage of?

Among the features we considered were tiered storage, as we planned to have unlimited retention (which matters a lot for event sourcing); the flexible subscription model (we use exclusive at the moment, but we want to try per-key subscriptions); authorization via different methods, including certificates and JWT (JSON Web Token); and an easy way to get it up and running.
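
For reference, picking a subscription type with the Pulsar Java client is a one-line decision at subscribe time; a minimal sketch (topic, subscription name, and broker URL are made up here):

```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

public class ExclusiveConsumerExample {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // hypothetical broker URL
                .build();

        // Exclusive: exactly one consumer may hold this subscription;
        // Key_Shared would instead spread messages across consumers by key.
        Consumer<byte[]> consumer = client.newConsumer()
                .topic("persistent://public/default/patient-events")
                .subscriptionName("event-projections")
                .subscriptionType(SubscriptionType.Exclusive)
                .subscribe();

        Message<byte[]> msg = consumer.receive();
        // ... apply the event to a projection ...
        consumer.acknowledge(msg);

        consumer.close();
        client.close();
    }
}
```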

What was the most useful resource or content to you as you got started?

The official docs are for sure the most useful resource. They're not great in terms of programming examples and the admin API, and there are many things to improve there, but they're still the most valuable. Next comes the source code of the examples and tests. Later, Kafka-on-Pulsar, to see exactly how it compared to Kafka. We also experimented a lot within tests using the Pulsar Testcontainers module. We didn't spend much time on the performance side of things, as we had other priorities first; I just quickly checked a consumer/producer pair and found that it had plenty of headroom for us, so we focused more on the development and operational parts of the documentation and the overall experience.
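
That Testcontainers setup is straightforward. A minimal sketch using the org.testcontainers:pulsar module (the image tag here is an arbitrary choice):

```java
import org.apache.pulsar.client.api.PulsarClient;
import org.testcontainers.containers.PulsarContainer;
import org.testcontainers.utility.DockerImageName;

public class PulsarIntegrationTest {
    public static void main(String[] args) throws Exception {
        // Spins up a throwaway single-node Pulsar broker in Docker.
        try (PulsarContainer pulsar =
                 new PulsarContainer(DockerImageName.parse("apachepulsar/pulsar:2.10.2"))) {
            pulsar.start();

            // Point the regular client at the container's broker URL.
            PulsarClient client = PulsarClient.builder()
                    .serviceUrl(pulsar.getPulsarBrokerUrl())
                    .build();

            // ... produce and consume against a real broker in the test ...
            client.close();
        }
    }
}
```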

We also joined the official Slack channel and got answers to quite a few questions there. We even reported some memory leaks in a client, which got fixed in just a few hours.

What advice would you have for other organizations who are considering Pulsar?

Make sure you understand the subscription models. Pulsar supports multiple subscription types, like shared and exclusive. This is different from what Kafka provides, and we actually started with the wrong one before we realized where we'd gone wrong. So it's always better to reproduce a real environment as closely as possible and work through some edge cases, like losing the network while acknowledging messages, or handling new connections from the same consumer group; those gave us some trouble to understand in the early stages.
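
To make the difference concrete: under a Shared subscription, messages are distributed across all attached consumers, while Exclusive rejects a second consumer outright. A small sketch (topic and names hypothetical):

```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

public class SharedSubscriptionExample {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // hypothetical broker URL
                .build();

        // Both consumers attach to the same Shared subscription:
        // messages are spread across them, Kafka-consumer-group style,
        // but without a partition-count ceiling on parallelism.
        for (int i = 0; i < 2; i++) {
            Consumer<byte[]> worker = client.newConsumer()
                    .topic("persistent://public/default/jobs")
                    .subscriptionName("workers")
                    .subscriptionType(SubscriptionType.Shared)
                    .subscribe();
            // Each worker receives a disjoint share of the messages.
        }
        // With SubscriptionType.Exclusive, the second subscribe() above
        // would fail because one consumer already holds the subscription.
    }
}
```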

Beyond your current use case, what are some additional use cases/projects where you would like to use Pulsar in your organization in the future?

We're looking forward to trying Pulsar’s built-in cross-region replication, which would be extremely helpful for adding more resilience. We also experimented with Pulsar Functions; they look nice, and I think we could find good work for them in an international setup, to ease some synchronization challenges and replication across regions.
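
A Pulsar Function is just a class (or lambda) that the cluster runs against a topic. A minimal sketch with the Java SDK (the class name and transformation logic are hypothetical):

```java
import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

// Deployed with `pulsar-admin functions create ...`, this runs inside
// the cluster: each input message is transformed, and the return value
// is written to the function's output topic.
public class NormalizeEventFunction implements Function<String, String> {
    @Override
    public String process(String input, Context context) {
        context.getLogger().info("processing {}", input);
        return input.trim().toLowerCase();
    }
}
```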

If you could wave a magic wand and change one thing about Pulsar, what would it be?

That's for sure https://github.com/apache/pulsar/issues/5059. If we'd had that feature a year ago, it would have saved me personally hundreds of hours of coding and experiments!

Thank you, Kirill!

Want to try out Apache Pulsar? Sign up now for Astra Streaming, our fully managed Apache Pulsar service. We’ll give you access to its full capabilities entirely free through beta. See for yourself how easy it is to build modern data applications and let us know what you’d like to see to make your experience even better. 

