Apache Cassandra™ 2.0

EC2MultiRegionSnitch

Use with Amazon EC2 in multiple regions.

Use the EC2MultiRegionSnitch for deployments on Amazon EC2 where the cluster spans multiple regions.

The region name is treated as the data center name and availability zones are treated as racks within a data center. For example, if a node is in the us-east-1 region, us-east is the data center name and 1 is the rack location. (Racks are important for distributing replicas, but not for data center naming.)

The dc_suffix option in the cassandra-rackdc.properties file defines the data centers used by this snitch by appending a suffix to the region-based data center name; any other lines in the file are ignored. A minimal sketch of the file appears after the following list of locations, which depend on the type of installation:
  • Packaged installs: /etc/cassandra/conf/cassandra-rackdc.properties
  • Tarball installs: install_location/conf/cassandra-rackdc.properties
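
For instance, a node's entire cassandra-rackdc.properties file can consist of the single dc_suffix line. This is a minimal sketch; the suffix value is just a placeholder taken from the example below:

    # cassandra-rackdc.properties
    # The snitch appends this suffix to the region-based data center name,
    # for example us-east + _1_cassandra = us-east_1_cassandra.
    dc_suffix=_1_cassandra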

For example, the following configuration names the data centers for their workloads, giving each of two regions two Cassandra data centers, an analytics data center, and a search data center:

For each node in the us-east region, specify its data center in the cassandra-rackdc.properties file:
  • node0

    dc_suffix=_1_cassandra

  • node1

    dc_suffix=_1_cassandra

  • node2

    dc_suffix=_2_cassandra

  • node3

    dc_suffix=_2_cassandra

  • node4

    dc_suffix=_1_analytics

  • node5

    dc_suffix=_1_search

This results in four us-east data centers:
  • us-east_1_cassandra
  • us-east_2_cassandra
  • us-east_1_analytics
  • us-east_1_search
For each node in the us-west region, specify its data center in the cassandra-rackdc.properties file:
  • node0

    dc_suffix=_1_cassandra

  • node1

    dc_suffix=_1_cassandra

  • node2

    dc_suffix=_2_cassandra

  • node3

    dc_suffix=_2_cassandra

  • node4

    dc_suffix=_1_analytics

  • node5

    dc_suffix=_1_search

This results in four us-west data centers:
  • us-west_1_cassandra
  • us-west_2_cassandra
  • us-west_1_analytics
  • us-west_1_search
Note: The data center naming convention in this example is based on the workload. You can use other conventions, such as DC1 and DC2, or 100 and 200.

Other configuration settings

This snitch uses the public IP address as the broadcast_address to allow cross-region connectivity, so you must configure each node for cross-region communication:

  1. Set the listen_address to the private IP address of the node, and the broadcast_address to the public IP address of the node, as shown in the cassandra.yaml sketch after these steps.

    This allows Cassandra nodes in one EC2 region to bind to nodes in another region, thus enabling multiple data center support. (For intra-region traffic, Cassandra switches to the private IP after establishing a connection.)

  2. Set the addresses of the seed nodes in the cassandra.yaml file to their public IP addresses (private IP addresses are not routable between networks). For example:
    seeds: 50.34.16.33, 60.247.70.52
    

    To find the public IP address, run the following command from each of the seed nodes in EC2:

    $ curl http://instance-data/latest/meta-data/public-ipv4
  3. Be sure that the storage_port or ssl_storage_port is open on the public IP firewall.
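
Putting these settings together, an excerpt of cassandra.yaml for one node might look like the following. This is a minimal sketch: the private and public IP addresses are placeholders (the public IPs reuse the seed example above), and the ports shown are the Cassandra defaults.

    # cassandra.yaml (excerpt)
    endpoint_snitch: Ec2MultiRegionSnitch
    listen_address: 10.0.1.15            # this node's private IP (placeholder)
    broadcast_address: 50.34.16.33       # this node's public IP (placeholder)
    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "50.34.16.33,60.247.70.52"   # public IPs of the seed nodes
    storage_port: 7000                   # must be open on the public IP firewall
    ssl_storage_port: 7001               # used instead if internode encryption is enabled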

Keyspace strategy options

When defining your keyspace strategy options, use the EC2 region name, such as us-east, as your data center name.
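
For example, a keyspace replicated three ways in each region could be created with CQL as shown below. This is a minimal sketch: the keyspace name is a placeholder, and it assumes no dc_suffix is set, so the data center names are simply the region names; with the suffixes from the example above you would use names such as us-east_1_cassandra instead.

    CREATE KEYSPACE my_keyspace
      WITH REPLICATION = {
        'class': 'NetworkTopologyStrategy',
        'us-east': 3,
        'us-west': 3
      };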
