Success Stories | September 1, 2021

How Verinovum Helps Improve Healthcare with Cleaner, Enriched Data

Arlene Go

“The flexibility of the data model has been of huge benefit for our architecture. That resulted in significant time savings for our team. Data modeling now takes probably half or a third of the time that we used to spend.”

— Ryan Campbell, Chief Vision Officer, Verinovum
 

What if outcomes for common diseases like diabetes, congestive heart failure, asthma, and COPD could be improved with better, more actionable clinical data? Verinovum is on a mission to do just that. 

Headquartered in Tulsa, Oklahoma, Verinovum provides clean, complete, and accurate clinical data to enable healthcare payers, providers, and partner organizations to improve business and patient outcomes. We sat down with Ryan Campbell, Chief Vision Officer and cofounder of Verinovum, to chat about the company’s evolution as a specialist in healthcare data curation and enrichment.

1. Tell us about Verinovum’s mission. 

I helped found Verinovum in 2013 with a focus on the health information exchange (HIE) market. In the course of learning to manage clinical data and how to improve its quality, we naturally became experts in data curation. Today our business revolves around gathering clinical data from disparate sites, aggregating and cleaning it, then providing it to clients for initiatives like risk assessments and population health programs.

2. What's the biggest challenge that data presents to your organization? 

We've aligned with some of the industry standards and have developed our own data model based in part on feedback from clients. We have a lot of data that comes in daily, so our challenge is always keeping up with the scale and volume, yet still maintaining performance. Also, we need to make sure that the platforms we use are flexible so that we can add to our data model and change it quickly if needed. 

Obviously, data variation is an additional challenge. There can be different interfaces and operational processes related to electronic health record (EHR) systems, creating a lot of variation in data. Many hospital systems use their own code systems, so we need to translate those using standardized rules, for example, when it comes to units and provider name information.

In addition, a lot of the data that comes out of the EHR doesn't come out as molecules, so to speak; it comes out as atoms. So we have to split the data apart, process everything, then put the molecules back together so that we're handing over more complete information to our clients.

3. What’s the bigger barrier to success with data and data strategies in your business? Is it culture or technology? 

The data is gold, but the understanding of data is not very deep across healthcare in general. Everybody knows they need it, but the data's complex. There’s a lack of in-depth understanding of relationships and impacts. We spend a lot of time talking about these issues with our clients. I would say the technology is important, but it's probably secondary to barriers created by limited understanding of data.

4. How long have you been using Cassandra, and what makes Cassandra particularly well-suited to what you're doing?

We've had Cassandra for nearly three years. I would say we've only become effective at using it in the last year or so. Much of that had to do with partnering with DataStax, which gave us direct access to people who know how to use Cassandra more effectively.

With Cassandra, node replication is a significant benefit. We do a lot of write-read calculations throughout the curation process, and there are many transactional states that we maintain. It's kind of a blockchain-ish sort of process. We've found the solution to be high performing and agile for our most challenging initiatives involving high-volume transactions and our needs for flexibility in the data model.
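The replication Campbell describes is configured at the keyspace level in Cassandra. As a hypothetical sketch (the keyspace and datacenter names here are illustrative, not Verinovum's actual schema), a keyspace replicated across three nodes per datacenter might look like:

```sql
-- Hypothetical example: replicate every record to three nodes in each
-- datacenter, so reads and writes survive individual node failures.
CREATE KEYSPACE IF NOT EXISTS clinical_data
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'dc1': 3
  };
```

With a replication factor of 3, the cluster can serve strongly consistent reads and writes at QUORUM consistency even while one replica is down.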

5. With Cassandra, are you doing things now that you previously could not? And how were you doing things in the past?

Before Cassandra, we basically sat on a relational data structure with SQL Server. We were able to do a lot of things from a functional perspective, but weren’t able to scale or replicate to the level we needed. Plus with the relational data structure, when we needed to change the data model, it was a pretty significant process. The way we have it set up now, when we need to change the data model, we make a change, test it, and we're done. It doesn't take long.
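The "make a change, test it, and we're done" workflow is possible because CQL schema changes are metadata-only operations; unlike many relational migrations, adding a column doesn't rewrite or lock the underlying table. A hypothetical example (the table and column names are illustrative):

```sql
-- Hypothetical example: evolving the data model in place. Adding a column
-- in CQL updates cluster metadata only; existing rows simply return null
-- for the new column until it is populated.
ALTER TABLE clinical_data.lab_results
  ADD reference_range text;
```

Existing application reads continue uninterrupted while the change propagates through the cluster.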

6. Why did you choose to work with DataStax?

In choosing DataStax, it really came down to who we felt had the most expertise and where we could get the most value for our money. We felt we could effectively grow our use of the product with our DataStax partnership. We appreciate having a communication line to individuals at DataStax who are experts and can provide guidance, and maybe even offer some implementation help for other things we could do to better take advantage of our Cassandra environment.

For instance, I consulted with a DataStax expert about some different data patterns and characteristics we were seeing. He provided a recommendation about splitting those clusters apart to better optimize for those data patterns, helping us to avoid what could have been serious performance impacts. As we plan for the future and higher-scale activities, these conversations are particularly helpful.

7. How does your Cassandra environment help your architecture team?

Now when we architect, we no longer have to try to think about every potential variation in the data model that we may or may not need down the road. We can do it more in line with immediate needs. With Cassandra, it's so easy to make any necessary changes to the data model when those needs arise. And the way we've interfaced our application with Cassandra, it's easy for it to read those changes as they happen, without our having to do a whole lot of work. The flexibility of the data model has been of huge benefit for our architecture. That resulted in significant time savings for our team. Data modeling now takes probably half or a third of the time that we used to spend. 

8. What benefits do you see in terms of cost savings or unlocking new revenue opportunities?

From a total cost of ownership perspective, we save quite a bit with Cassandra and DataStax. It certainly doesn't cost as much to buy and maintain some nodes as it does to buy and maintain massive servers for running enterprise relational databases. Given the hardware expense and the cost of being able to scale, even with an on-premises environment, we definitely have a more favorable TCO now.

On the revenue side, we've gained extra agility in our go-to-market strategies. We can plan new projects with new clients, and because of the flexibility of our platform, we don't have to do a lot of pre-building. We don't have to follow the "build it and they will come" model. Instead, we can let them come, and then we can quickly build what they need from the foundational platform we have with Cassandra and the application we've built around it.

9. Are there benefits for developer productivity?

One of the things we're looking forward to with DataStax Astra DB is how it can help accelerate development cycles by giving each of our developers essentially their own Cassandra space to work in. If they need to try different data models and scenarios, they can do that easily.

10. Any advice you would give to other businesses out there that are starting on a similar road?

What I would say is: don't do what we did. We initially brought in Cassandra without professional service support. That was not a good move. We have some very smart folks on our team, and they did their research and thought they could do this. We got it up and running, but we weren't skilled enough at tuning it or understanding some of the specifics. It makes no sense to do it without the experts.

11. How do you see the future for Verinovum?

When the business side begins to understand what they can do with the data and the technology, and as the metadata and transactional data around the primary data expands, we’re in a strong position to help get it all cleansed, curated, and usable for expanding into risk management, adjudications, and other possibilities.

We've seen a data explosion in many other industries, but in healthcare I think this is only the beginning. The data is usable at certain micro levels, but it's not easy to use at the macro level. I think the data explosion is going to be much larger than what people perceive right now and hopefully Verinovum is a big part of making all that data usable.

12. How do your clients see the future and the benefits with cleaner, enriched data? 

Our growing client base of payers and large analytics providers is really interested in how we can help expand the ways they can use data and speed up operational processes. Payers typically run 60 to 90 days behind the data by the time a claim is filed or adjudicated.

The payers are getting better at this, but for a patient on an early disease threshold, that 60- or 90-day window can be pretty significant, in terms of the patient's health and quality of life going forward, and also in costs. Supplementing traditional processes with clean, trustworthy, reliable clinical data to help with early disease detection, and closing that 60-, 90-, or sometimes 120-day window, saves payers and providers a lot of money. And if you catch a condition early, you usually have better treatment options and better quality of life going forward, versus detecting it later.

Take for example a health intervention program that can help keep a pre-diabetic from progressing to full onset of diabetes. For the patient, that could mean avoiding years of health issues and insulin shots. It also means years of better cost performance for the insurance company, so it's a win-win. These are the things that help insurance companies, help patients, and over the long-term, should help drive down healthcare costs. 

To learn how DataStax helped Verinovum, Inc. reduce its IT infrastructure expenses, check out this case study.

Behind the Innovator takes a peek behind the scenes with learnings and best practices from leading architects, operators, and developers building cloud-native, data-driven applications with Apache Cassandra™, Apache Pulsar, and open-source technologies in unprecedented times.
