DataStax Enterprise 2.1 Documentation

Initializing Multiple Data Center Clusters on DataStax Enterprise

This documentation corresponds to an earlier product version. Make sure this document matches the version of the product you are using.


In this scenario, a mixed workload cluster has more than one data center for each type of node. For example, if the cluster has 4 Hadoop nodes, 4 Cassandra nodes, and 2 Solr nodes, the cluster could have 5 data centers: 2 data centers for Hadoop nodes, 2 data centers for Cassandra nodes, and 1 data center for Solr nodes. A single data center cluster has only 1 data center for each type of node.

Data replication can be distributed across multiple, geographically dispersed data centers, between different physical racks in a data center, or between public cloud providers and on-premises managed data centers. Data replicates across the data centers automatically and transparently; no ETL work is necessary to move data between different systems or servers. You can configure the number of copies of the data in each data center, and Cassandra handles the rest, replicating the data for you. To configure a single data center cluster, see Initializing a Multiple Node Cluster in a Single Data Center.

Prerequisites

Correctly configuring a multi-node cluster with multiple data centers requires:

  • DataStax Enterprise installed on each node.
  • The total number of nodes in the cluster.
  • A name for the cluster.
  • The IP address of each node in the cluster.
  • Which nodes will serve as the seed nodes. (DataStax Enterprise nodes use this host list to find each other and learn the topology of the ring.)
  • If the nodes are behind a firewall, the ports you need to open. See Configuring Firewall Port Access.
  • Other configuration settings you may need, as described in Choosing Node Configuration Options and Node and Cluster Configuration.

You use this information to configure properties on each node in the cluster, as shown in the following example.

Configuration Example

This example describes installing a six-node cluster spanning two data centers. The steps for configuring multiple data centers are the same for binary and packaged installations, except that the configuration files are located in different directories.

Location of the property files in packaged installations:

  • /etc/dse/cassandra/cassandra.yaml
  • /etc/dse/cassandra/cassandra-topology.properties
  • /etc/dse/dse.yaml

Location of the property files in binary installations:

  • <install_location>/resources/cassandra/conf/cassandra.yaml
  • <install_location>/resources/cassandra/conf/cassandra-topology.properties
  • <install_location>/resources/dse/conf/dse.yaml

Note

After changing properties in these files, you must restart the node for the changes to take effect.

To configure a cluster with multiple data centers:

  1. Suppose you install DataStax Enterprise on these nodes:

    10.168.66.41
    10.176.43.66
    10.168.247.41
    10.176.170.59
    10.169.61.170
    10.169.30.138
  2. Assign tokens so that data is evenly distributed within each data center. Calculate the token assignments with the Token Generating Tool and offset the tokens for the second data center (a short sketch of this calculation follows this step):

    Node   IP Address      Token                                        Offset   Data Center
    node0  10.168.66.41    0                                            NA       DC1
    node1  10.176.43.66    56713727820156410577229101238628035242      NA       DC1
    node2  10.168.247.41   113427455640312821154458202477256070485     NA       DC1
    node3  10.176.170.59   10                                           10       DC2
    node4  10.169.61.170   56713727820156410577229101238628035252      10       DC2
    node5  10.169.30.138   113427455640312821154458202477256070495     10       DC2
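
    The token assignments above can be reproduced with a short calculation. The following is a minimal Python sketch (not the official Token Generating Tool), assuming the default RandomPartitioner token range of 2^127: tokens are spaced evenly within each data center, and the second data center is offset so its tokens do not collide with the first.

      # Minimal sketch of the token math used in the table above.
      RING_RANGE = 2 ** 127

      def tokens_for_dc(nodes_in_dc, offset=0):
          """Evenly spaced tokens for one data center, shifted by offset."""
          return [(i * RING_RANGE) // nodes_in_dc + offset for i in range(nodes_in_dc)]

      print(tokens_for_dc(3, offset=0))   # DC1: 0, 56713727..., 113427455...
      print(tokens_for_dc(3, offset=10))  # DC2: 10, ...252, ...495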
  3. Stop the nodes and clear the data.

    • For packaged installs, run the following commands:

      $ sudo service dse stop (stops the service)

      $ sudo rm -rf /var/lib/cassandra/* (clears the data from the default directories)

    • For binary installs, run the following commands from the install directory:

      $ ps auwx | grep dse (finds the Cassandra and DataStax Enterprise Java process ID [PID])

      $ sudo kill <pid> (stops the process)

      $ sudo rm -rf /var/lib/cassandra/* (clears the data from the default directories)

  4. Modify the following property settings in the cassandra.yaml file for each node:

    • initial_token: <token assigned in step 2>
    • - seeds: <internal IP address of each seed node>
    • listen_address: <IP address of the node itself>

    node0:

    initial_token: 0
    seed_provider:
      - class_name: org.apache.cassandra.locator.SimpleSeedProvider
        parameters:
          - seeds: "10.168.66.41,10.176.170.59"
    listen_address: 10.168.66.41

    Note

    You must include at least one seed node from each data center. It is a best practice to have more than one seed node per data center.

node1 to node5

The properties for the rest of the nodes are the same as node0, except for the initial_token and listen_address values shown in the table below (a scripted rendering of these per-node values follows the table):

Node   initial_token                                listen_address
node1  56713727820156410577229101238628035242      10.176.43.66
node2  113427455640312821154458202477256070485     10.168.247.41
node3  10                                           10.176.170.59
node4  56713727820156410577229101238628035252      10.169.61.170
node5  113427455640312821154458202477256070495     10.169.30.138
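
If you prefer to script these per-node edits, the following hypothetical helper (not part of DSE) prints the values that differ per node, taken from the tables above. The seed list is identical on every node.

    # Hypothetical helper (not part of DSE): print the cassandra.yaml values that
    # differ per node, taken from the tables above.
    SEEDS = "10.168.66.41,10.176.170.59"

    NODES = {
        "node0": (0, "10.168.66.41"),
        "node1": (56713727820156410577229101238628035242, "10.176.43.66"),
        "node2": (113427455640312821154458202477256070485, "10.168.247.41"),
        "node3": (10, "10.176.170.59"),
        "node4": (56713727820156410577229101238628035252, "10.169.61.170"),
        "node5": (113427455640312821154458202477256070495, "10.169.30.138"),
    }

    for name, (token, address) in NODES.items():
        print(f"# {name}")
        print(f"initial_token: {token}")
        print(f"listen_address: {address}")
        print(f'seeds: "{SEEDS}"\n')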
  5. For each node, change the dse.yaml file to specify the snitch to be delegated by the DseDelegateSnitch. For more information about snitches, see About Snitches. For example, to specify the PropertyFileSnitch, enter:

    delegated_snitch: org.apache.cassandra.locator.PropertyFileSnitch
  6. Determine a naming convention for each data center and rack, for example: DC1, DC2 or 100, 200 and RAC1, RAC2 or R101, R102.

  7. In the cassandra-topology.properties file, assign data center and rack names to the IP addresses of each node, and assign a default data center name and rack name for unknown nodes. For example (a sanity-check sketch follows the file):

    # Cassandra Node IP=Data Center:Rack
    10.168.66.41=DC1:RAC1
    10.176.43.66=DC1:RAC1
    10.168.247.41=DC1:RAC1
    10.176.170.59=DC2:RAC1
    10.169.61.170=DC2:RAC1
    10.169.30.138=DC2:RAC1
    
    # default for unknown nodes
    default=DC1:RAC1
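
    A mismatch between this file and the token assignments from step 2 defeats the balancing scheme, so a quick check can help. The following is a hypothetical sanity check, assuming the file format shown above; it groups the configured IP addresses by data center for comparison with the token table.

      # Hypothetical sanity check: group the nodes in cassandra-topology.properties
      # by data center so the grouping can be compared with the token assignments.
      from collections import defaultdict

      def nodes_by_datacenter(path="cassandra-topology.properties"):
          groups = defaultdict(list)
          with open(path) as topology:
              for line in topology:
                  line = line.strip()
                  if not line or line.startswith("#") or line.startswith("default="):
                      continue
                  ip, dc_rack = line.split("=", 1)
                  dc, rack = dc_rack.split(":", 1)
                  groups[dc].append((ip, rack))
          return dict(groups)

      for dc, members in nodes_by_datacenter().items():
          print(dc, members)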
  8. After you have installed and configured DataStax Enterprise on all nodes, start the seed nodes one at a time, and then start the rest of the nodes.

    Note

    If the node has already restarted automatically, you must stop it and clear the data directories, as described above.

  9. Check that your ring is up and running:

    • Packaged installs: nodetool ring -h localhost
    • Binary installs:
    $ cd /<install_directory>
    $ bin/nodetool ring -h localhost
    

    [Figure: nodetool ring output showing the nodes in both data centers]

Balancing the Data Center Nodes

When you deploy a Cassandra cluster, you need the partitioner to distribute roughly equal amounts of data to the nodes. You also use identifiers for each data center (see step 6) in a formula to calculate tokens that balance the nodes within each data center (DC). For example, assign each DC a numerical name that is a multiple of 100. Then, for each DC, determine the tokens as follows: token = (2^127 / num_nodes_in_dc) * n + DC_ID, where n is the node for which the token is being calculated (starting at 0) and DC_ID is the numerical name. A worked example follows.
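
The following is a minimal sketch of this calculation (assuming the default RandomPartitioner token range of 2^127 used throughout this example), with three nodes per data center and data centers named 100 and 200:

    # Worked example of the balancing formula:
    #   token = (2**127 / num_nodes_in_dc) * n + DC_ID
    RING_RANGE = 2 ** 127

    def dc_tokens(num_nodes_in_dc, dc_id):
        return [(RING_RANGE // num_nodes_in_dc) * n + dc_id
                for n in range(num_nodes_in_dc)]

    print(dc_tokens(3, 100))  # tokens for the data center named 100
    print(dc_tokens(3, 200))  # tokens for the data center named 200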

Frequently Asked Questions

1. Can all the application data be 100% owned by Cassandra nodes?

There is no ownership. You set a replication factor (RF) for each data center, including the virtual analytics one. So, you might have one copy of the data in each of C1, C2, C3, and Analytics (AN), for example. Regardless of what data center or nodes you write to, the data is replicated to all four data centers. Replication is configured per keyspace.

For example, you might have one keyspace with RF = {C1:1, C2:1, C3:1, AN:0} and a different keyspace with RF = {C1:0, C2:0, C3:0, AN:1}. In such a configuration, if you write into the first keyspace, the analytics nodes do not have any copies of the data. If you write into the second keyspace, only the analytics nodes have copies. If you write data, such as flat files, directly into CFS, by default only the AN nodes have copies of the data. The assumption is that only the AN nodes need access to the flat files because their only use is for analytics. This is accomplished by having a Cassandra File System (CFS) keyspace where AN has an RF > 0 and the others have RF = 0.
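
As a small illustration of these replication maps (a sketch only, not a DSE API; the keyspace names are hypothetical), a data center with a replication factor of 0 simply holds no copies of that keyspace:

    # Illustration only: per-keyspace, per-data-center replication factors.
    # A data center with RF 0 stores no copies of that keyspace.
    keyspaces = {
        "operational": {"C1": 1, "C2": 1, "C3": 1, "AN": 0},
        "analytics_only": {"C1": 0, "C2": 0, "C3": 0, "AN": 1},
    }

    for name, rf in keyspaces.items():
        holders = [dc for dc, copies in rf.items() if copies > 0]
        print(f"{name}: replicas kept in {holders}, total copies = {sum(rf.values())}")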

2. Through replication, can you give the analytics nodes all of the application data?

Yes, as exemplified in the previous answer: RF = {C1:1, C2:1, ..., AN:1}

3. If all the analytics data is written to column families (CF), how can application nodes get a copy of the data?

The destination CFs used for the output of the analytics jobs are in a keyspace where only the Cassandra data centers have an RF > 0; that is, the output of the analytics jobs does not need to be stored on the analytics nodes. There are common exceptions. If the output does not belong in Cassandra for some reason, such as when the output is for analysis and is not part of the operational data set, you can write the output into a keyspace where only the analytics DC has an RF > 0. If you want both Cassandra and analytics nodes to have copies of the data, it is just a matter of setting the RF correctly on the keyspace you write to.

4. If you add or remove a node from a ring, do all nodes in the data centers need to be rebalanced?

You need to rebalance a data center after adding or removing a node. Nodes in other data centers do not have to be rebalanced. You need to stagger tokens between data centers for maximum data availability.
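
For example, growing the second data center of the configuration example from three nodes to four changes only that data center's token assignments; the sketch below reuses the same token math (with the offset of 10 for the second data center):

    # Sketch: only the data center that changed size needs new tokens.
    RING_RANGE = 2 ** 127

    def tokens_for_dc(nodes_in_dc, offset=0):
        return [(i * RING_RANGE) // nodes_in_dc + offset for i in range(nodes_in_dc)]

    print("DC1 (unchanged): ", tokens_for_dc(3, offset=0))
    print("DC2 (rebalanced):", tokens_for_dc(4, offset=10))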

More Information About Configuring Data Centers

Links to more information about configuring a data center: