March 23, 2011

Deploying Cassandra across Multiple Data Centers

Eric Gilmore

Cassandra is designed as a distributed system, intended for deployment of large numbers of nodes across multiple data centers. Key features of Cassandra’s distributed architecture are specifically tailored for multiple-data center deployment. These features are robust and flexible enough that you can configure the cluster for optimal geographic distribution, for redundancy in failover and disaster recovery, or even for maintaining a dedicated analytics center replicated from your main data storage centers.

Settings central to multi-data center deployment include:

Replication Factor and Replica Placement Strategy – NetworkTopologyStrategy (the default placement strategy) allows fine-grained control over the number and placement of replicas at the data center and rack level.

Snitch – For multi-data center deployments, it is important to make sure the snitch has complete and accurate information about the network, either through automatic detection (RackInferringSnitch) or through details specified in a properties file (PropertyFileSnitch); a sample properties file is sketched after this list.

Consistency Level – Cassandra provides consistency levels that are specifically designed for scenarios with multiple data centers: LOCAL_QUORUM and EACH_QUORUM. Here, “local” means local to a single data center, while “each” means consistency is strictly maintained at the same level in each data center.
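With PropertyFileSnitch, for example, every node reads the cluster topology from cassandra-topology.properties, with one line per node in the form IP=datacenter:rack. A minimal sketch for a two-data-center cluster might look like the following (the addresses and the DC1, DC2, RAC1, and RAC2 names are illustrative placeholders, not values from any particular deployment):

# cassandra-topology.properties (illustrative addresses only)
175.50.12.105=DC1:RAC1
175.50.12.106=DC1:RAC2
120.53.24.101=DC2:RAC1
120.53.24.102=DC2:RAC2
# nodes not listed above fall back to this entry
default=DC1:RAC1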

Putting it all Together

Your specific needs will determine how you combine these ingredients in a “recipe” for multi-data center operations. For instance, an organization whose chief aim is to minimize network latency across two large service regions might end up with a relatively simple recipe for two data centers like the following:

Replica Placement Strategy: NetworkTopologyStrategy (NTS)

Replication Factor: 3 for each data center, as determined by the following strategy_options settings in cassandra.yaml:

strategy_options:
    DC1: 3
    DC2: 3

Snitch: RackInferringSnitch. Administrators lay out the network addressing of the two data centers so that Cassandra can infer the topology automatically with RackInferringSnitch; an example addressing scheme is sketched just after this recipe.

Write Consistency Level: LOCAL_QUORUM

Read Consistency Level: LOCAL_QUORUM
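RackInferringSnitch infers this topology from each node’s IP address: the second octet identifies the data center and the third octet identifies the rack. A hypothetical addressing plan that would let the snitch recognize the two data centers might look like this (the addresses are illustrative only):

# RackInferringSnitch assumes addresses of the form 10.<data center>.<rack>.<node>
10.1.1.101   # data center 1, rack 1
10.1.2.102   # data center 1, rack 2
10.2.1.101   # data center 2, rack 1
10.2.2.102   # data center 2, rack 2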

All applications that read from and write to Cassandra use LOCAL_QUORUM as their default consistency level for both reads and writes. This provides a reasonable level of data consistency while avoiding inter-data center latency.
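For comparison, in later Cassandra releases the same replica placement can be declared when creating a keyspace through CQL. The keyspace name below is a hypothetical placeholder, and the data center names must match the ones reported by the snitch:

CREATE KEYSPACE example_ks
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'DC1': 3,
    'DC2': 3
  };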

Visualizing It

In the following depiction of a write operation across our two hypothetical data centers, the darker grey nodes are the nodes that contain the token range for the data being written.

[Figure: a write operation replicated across the two data centers]

Note that LOCAL_QUORUM consistency allows the write operation to the second data center to be asynchronous: with a replication factor of 3 in each data center, the write needs acknowledgment from only two replicas (a quorum) in the local data center before it is considered complete. This way, the operation can be marked successful in the first data center – the data center local to the origin of the write – and Cassandra can serve read operations on that data without any delay from inter-data center latency.
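To experiment with this behavior by hand, later Cassandra releases let you set the session consistency level in cqlsh before issuing reads and writes. The keyspace, table, and values below are hypothetical placeholders, building on the keyspace sketched above:

CONSISTENCY LOCAL_QUORUM;
CREATE TABLE IF NOT EXISTS example_ks.users (id int PRIMARY KEY, name text);
INSERT INTO example_ks.users (id, name) VALUES (1, 'alice');
SELECT name FROM example_ks.users WHERE id = 1;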

Learning More about It

For more detail and further descriptions of multiple-data center deployments, see Multiple Data Centers in the DataStax reference documentation. And be sure to check this blog regularly for news about the latest progress in multi-DC features, analytics, and other exciting areas of Cassandra development.
