DataStax Enterprise 3.1 Documentation

Multiple data center deployment

In this scenario, a mixed-workload cluster has more than one data center for each type of node. For example, if the cluster has 4 Hadoop nodes, 4 Cassandra nodes, and 2 Solr nodes, the cluster could have 5 data centers: 2 data centers for Hadoop nodes, 2 data centers for Cassandra nodes, and 1 data center for Solr nodes. A single data center cluster has only 1 data center for each type of node.

Data replication can be distributed across multiple, geographically dispersed data centers, between different physical racks in a data center, or between public cloud providers and on-premises managed data centers. Data replicates across the data centers automatically and transparently; no ETL work is necessary to move data between different systems or servers. You configure the number of copies of the data kept in each data center, and Cassandra handles the rest, replicating the data for you. To configure a single data center cluster, see Single data center deployment.
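The number of copies kept in each data center is set in a keyspace's replication options using NetworkTopologyStrategy. The following is a minimal sketch run in cqlsh, assuming a hypothetical keyspace named demo and the DC1/DC2 data center names used in the configuration example below; the replication counts are illustrative:

  CREATE KEYSPACE demo
    WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 2};

The data center names in the replication map must match the names assigned by the snitch, in this example the names defined in cassandra-topology.properties.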

Prerequisites

Correctly configuring a multi-node cluster with multiple data centers requires:

  • Installing DataStax Enterprise on each node.
  • Choosing a name for the cluster.
  • For a mixed-workload cluster, knowing the purpose of each node.
  • Getting the IP address of each node.
  • Determining which nodes are seed nodes. (DataStax Enterprise nodes use this host list to find each other and learn the topology of the ring.)
  • Developing a naming convention for each data center and rack, for example: DC1, DC2 or 100, 200 and RAC1, RAC2 or R101, R102.
  • Other possible configuration settings are described in The cassandra.yaml configuration file.

In DataStax Enterprise 3.0.1 and later, the default consistency level has changed from ONE to QUORUM for reads and writes to resolve a problem finding a CassandraFS block when using consistency level ONE on a Hadoop node.

Configuration example

This example describes installing a six-node cluster spanning two data centers. The steps for configuring multiple data centers are the same for binary and packaged installations, except that the configuration files are located in different directories.

Property file locations in packaged installations:

  • /etc/dse/cassandra/cassandra.yaml
  • /etc/dse/cassandra/cassandra-topology.properties
  • /etc/dse/dse.yaml

Property file locations in binary installations:

  • <install_location>/resources/cassandra/conf/cassandra.yaml
  • <install_location>/resources/cassandra/conf/cassandra-topology.properties
  • <install_location>/resources/dse/conf/dse.yaml

Note

After changing properties in these files, you must restart the node for the changes to take effect.
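For example, on a binary install you might apply a configuration change by stopping the process and starting it again with the same options the node normally runs with (a sketch; the equivalent packaged-install service commands appear in the steps below):

  $ bin/dse cassandra-stop

  $ bin/dse cassandra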

To configure a cluster with multiple data centers:

  1. Suppose you install DataStax Enterprise on these nodes:

    node0 10.168.66.41 (seed1)
    node1 10.176.43.66
    node2 10.168.247.41
    node3 10.176.170.59 (seed2)
    node4 10.169.61.170
    node5 10.169.30.138
  2. If the nodes are behind a firewall, open the required ports for internal/external communication. See Configuring firewall port access.
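
    For example, a minimal iptables sketch that allows inter-node traffic from one of the other nodes on the default Cassandra storage port (7000); the address and port are illustrative, and the authoritative list of required ports (including JMX, client, Hadoop, and Solr ports) is in Configuring firewall port access:

      $ sudo iptables -A INPUT -p tcp -s 10.176.43.66 --dport 7000 -j ACCEPT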

  3. If DataStax Enterprise is running, stop the nodes and clear the data:

    • Packaged installs:

      $ sudo service dse stop (stops the service)

      $ sudo rm -rf /var/lib/cassandra/* (clears the data from the default directories)

    • Binary installs:

      From the install directory:

      $ sudo bin/dse cassandra-stop (stops the process)

      $ sudo rm -rf /var/lib/cassandra/* (clears the data from the default directories)

  4. Modify the following property settings in the cassandra.yaml file for each node:

    • num_tokens: 256 (see Setting up virtual nodes in About virtual nodes)

    • -seeds: <internal IP address of each seed node>

    • listen_address: <IP address of the node>

    • auto_bootstrap: false (Add this setting only when initializing a fresh cluster with no data.)

    • Comment out auth_replication_strategy, auth_replication_options, and its replication_factor:

      # Replication strategy to use for the auth keyspace.
      # Following an upgrade from DSE 3.0 to 3.1, this should be removed
      #auth_replication_strategy: org.apache.cassandra.locator.SimpleStrategy
      
      # Replication options to use for the auth keyspace.
      # Following an upgrade from DSE 3.0 to 3.1, this should be removed
      #auth_replication_options:
      #   replication_factor: 1
      

    Note

    If you have not commented out both auth_replication_strategy and replication_factor, you will see an error. For information about correcting this error, see Issues in the release notes.

    node0:

    cluster_name: 'MyDemoCluster'
    num_tokens: 256
    seed_provider:
       - class_name: org.apache.cassandra.locator.SimpleSeedProvider
         parameters:
             - seeds:  "10.168.66.41,10.176.170.59"
    listen_address: 10.168.66.41

    Note

    The seeds list must include at least one node from each data center. It is a best practice to have more than one seed node per data center.

    node1 to node5:

    The properties for the rest of the nodes are the same as node0, except for the listen_address:

    Node     listen_address
    node1    10.176.43.66
    node2    10.168.247.41
    node3    10.176.170.59
    node4    10.169.61.170
    node5    10.169.30.138
  5. If necessary, change the dse.yaml file on each node to specify the snitch that the DseDelegateSnitch delegates to. For more information about snitches, see About Snitches. For example, to specify the PropertyFileSnitch:

    delegated_snitch: org.apache.cassandra.locator.PropertyFileSnitch
  6. In the cassandra-topology.properties file, use your naming convention to assign data center and rack names to the IP addresses of each node, and assign a default data center name and rack name for unknown nodes. For example:

    # Cassandra Node IP=Data Center:Rack
    10.168.66.41=DC1:RAC1
    10.176.43.66=DC2:RAC1
    10.168.247.41=DC1:RAC1
    10.176.170.59=DC2:RAC1
    10.169.61.170=DC1:RAC1
    10.169.30.138=DC2:RAC1
    
    # default for unknown nodes
    default=DC1:RAC1
  7. After you have installed and configured DataStax Enterprise on all nodes, start the seed nodes one at a time, and then start the rest of the nodes.

    Note

    If a node has restarted automatically, you must stop it and clear the data directories, as described above.
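
    For example, on packaged installs start each node with the service command, beginning with the seeds (node0, then node3 in this example) and then the remaining nodes; on binary installs run bin/dse cassandra from the install directory instead, adding -t on Analytics (Hadoop) nodes or -s on Solr nodes as appropriate to each node's workload:

      $ sudo service dse start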

  8. Check that your cluster is up and running:

    • Packaged installs: nodetool status
    • Binary installs: <install_location>/bin/nodetool status

    (Screenshot: nodetool status output for the multi-data center cluster.)

More information about configuring data centers

Links to more information about configuring a data center: