DataStax Enterprise 3.1 Documentation

Single data center deployment

This documentation corresponds to an earlier product version. Make sure you are viewing the documentation that corresponds to your version of DataStax Enterprise.


In this scenario, data replication is distributed across a single data center in mixed workload clusters. For example, if the cluster has 3 Hadoop nodes, 3 Cassandra nodes, and 2 Solr nodes, the cluster has 3 data centers: one for each type of node. (A multiple data center cluster has more than one data center for each type of node.)

Data replicates across the data centers automatically and transparently - no ETL work is necessary to move data between different systems or servers. You can configure the number of copies of the data in each data center and Cassandra handles the rest, replicating the data for you. To configure a multiple data center cluster, see Multiple data center deployment.
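
For example, once the cluster is running you set the number of copies per data center when you create a keyspace. The following cqlsh statement is a sketch that assumes the workload-based data center names (Cassandra, Analytics, Solr) assigned by DseSimpleSnitch and a hypothetical keyspace named demo; adjust the names and replication counts for your cluster:

  -- 3 copies on the Cassandra nodes, 2 on the Analytics nodes, 1 on the Solr nodes
  CREATE KEYSPACE demo
    WITH REPLICATION = {'class': 'NetworkTopologyStrategy',
                        'Cassandra': 3, 'Analytics': 2, 'Solr': 1};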

Prerequisites

Correctly configuring a multi-node cluster requires the following:

  • Installing DataStax Enterprise on each node.
  • Choosing a name for the cluster.
  • For a mixed-workload cluster, knowing the purpose of each node.
  • Getting the IP address of each node.
  • Determining which nodes are seed nodes. (DataStax Enterprise nodes use this host list to find each other and learn the topology of the ring.)
  • Reviewing the other configuration settings described in The cassandra.yaml configuration file.

In DataStax Enterprise 3.0.1 and later, the default consistency level has changed from ONE to QUORUM for reads and writes to resolve a problem finding a CassandraFS block when using consistency level ONE on a Hadoop node.

Configuration example

This example describes installing an 8-node cluster spanning 2 racks in a single data center.

Location of the property file:

You set properties for each node in the cassandra.yaml file. This file is located in different places depending on the type of installation:

  • Packaged installations: /etc/dse/cassandra/cassandra.yaml
  • Binary installations: <install_location>/resources/cassandra/conf/cassandra.yaml

Note

After changing properties in the cassandra.yaml file, you must restart the node for the changes to take effect.
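
For example, to pick up cassandra.yaml changes on a running node (a sketch; run the binary-install commands from the install directory, and restart analytics or search nodes with the same start options they were originally given):

  Packaged installs:

  $ sudo service dse restart (stops and restarts the service)

  Binary installs:

  $ bin/dse cassandra-stop (stops the process)
  $ bin/dse cassandra (starts it again with the new settings)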

To configure a mixed-workload cluster:

  1. Suppose the nodes have the following IPs and one node per rack will serve as a seed:

    • node0 110.82.155.0 (Cassandra seed)
    • node1 110.82.155.1 (Cassandra)
    • node2 110.82.155.2 (Cassandra)
    • node3 110.82.155.3 (Analytics seed)
    • node4 110.82.155.4 (Analytics)
    • node5 110.82.155.5 (Analytics)
    • node6 110.82.155.6 (Search - seed nodes are not required for Solr.)
    • node7 110.82.155.7 (Search)
  2. If the nodes are behind a firewall, open the required ports for internal/external communication. See Configuring firewall port access. (A minimal firewall sketch follows this procedure.)

  3. If DataStax Enterprise is running, stop the nodes and clear the data:

    • Packaged installs:

      $ sudo service dse stop (stops the service)

      $ sudo rm -rf /var/lib/cassandra/* (clears the data from the default directories)

    • Binary installs:

      From the install directory:

      $ sudo bin/dse cassandra-stop (stops the process)

      $ sudo rm -rf /var/lib/cassandra/* (clears the data from the default directories)

  4. Modify the following property settings in the cassandra.yaml file for each node:

    • num_tokens: 256. See Setting up virtual nodes in About virtual nodes.

    • -seeds: <internal IP address of each seed node>

    • listen_address: <localhost IP address>

    • auto_bootstrap: false (Add this setting only when initializing a fresh cluster with no data.)

    • Comment out the following sections:

      # Replication strategy to use for the auth keyspace.
      # Following an upgrade from DSE 3.0 to 3.1, this should be removed
      #auth_replication_strategy: org.apache.cassandra.locator.SimpleStrategy
      
      # Replication options to use for the auth keyspace.
      # Following an upgrade from DSE 3.0 to 3.1, this should be removed
      #auth_replication_options:
      #   replication_factor: 1
      

    Note

    If you have not commented out both the auth_replication_strategy and auth_replication_options sections, you will see an error. For information about correcting this error, see Issues in the release notes.

    node0:

    cluster_name: 'MyDemoCluster'
    num_tokens: 256
    seed_provider:
      - class_name: org.apache.cassandra.locator.SimpleSeedProvider
        parameters:
             - seeds: "110.82.155.0,110.82.155.3"
    listen_address: 110.82.155.0
    

    node1 to node7:

    The properties for the rest of the nodes are the same as node0 except for the listen_address:

    Node     listen_address
    node1    110.82.155.1
    node2    110.82.155.2
    node3    110.82.155.3
    node4    110.82.155.4
    node5    110.82.155.5
    node6    110.82.155.6
    node7    110.82.155.7
  5. After you have installed and configured DataStax Enterprise on all nodes, start the seed nodes one at a time, and then start the rest of the nodes. (Example start-up commands follow this procedure.)

    Note

    If a node has restarted because of automatic restart, you must first stop it and clear its data directories, as described in step 3.

  6. Check that your cluster is up and running:

    • Packaged installs: nodetool status
    • Binary installs: <install_location>/bin/nodetool status

    (Image: nodetool status output for the example single data center cluster)
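
The firewall sketch referenced in step 2, assuming iptables and the example subnet 110.82.155.0/28. The ports shown are the Cassandra defaults; a mixed-workload cluster also needs the Hadoop and Solr ports listed in Configuring firewall port access:

  $ sudo iptables -A INPUT -p tcp -s 110.82.155.0/28 --dport 7000 -j ACCEPT (internode communication)
  $ sudo iptables -A INPUT -p tcp -s 110.82.155.0/28 --dport 7199 -j ACCEPT (JMX, used by nodetool)
  $ sudo iptables -A INPUT -p tcp --dport 9160 -j ACCEPT (Thrift client connections)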
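
The start-up commands referenced in step 5 (a sketch; packaged installs typically select the node's workload in /etc/default/dse before the service starts, and binary-install commands run from the install directory):

  Packaged installs:

  $ sudo service dse start (starts the service)

  Binary installs:

  $ bin/dse cassandra (starts a real-time Cassandra node)
  $ bin/dse cassandra -t (starts an analytics node with the Hadoop trackers)
  $ bin/dse cassandra -s (starts a search node with Solr)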