Use the EC2MultiRegionSnitch for deployments on Amazon EC2 where the cluster spans multiple regions.
The region name is treated as the data center name and availability zones are treated as racks within a data center. For example, if a node is in the us-east-1 region, us-east is the data center name and 1 is the rack location. (Racks are important for distributing replicas, but not for data center naming.)
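To make the naming rule concrete, here is a minimal sketch (not Cassandra's actual implementation) of splitting a region name into the data center name and rack location, following the us-east-1 example above:

```python
# Minimal sketch (not Cassandra source code): split an EC2 region name
# such as "us-east-1" into the data center name and rack location the
# snitch would report.
def dc_and_rack(region: str):
    # Everything before the last "-" is the data center; the rest is the rack.
    dc, rack = region.rsplit("-", 1)
    return dc, rack

print(dc_and_rack("us-east-1"))  # ('us-east', '1')
```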
The location of the cassandra-rackdc.properties file depends on the type of installation:
- Packaged installs: /etc/cassandra/conf/cassandra-rackdc.properties
- Tarball installs: install_location/conf/cassandra-rackdc.properties
For example, for two regions, each with two Cassandra data centers plus analytics and search data centers named for their workloads:
us-east_1_cassandra us-east_2_cassandra us-east_1_analytics us-east_1_search
us-west_1_cassandra us-west_2_cassandra us-west_1_analytics us-west_1_search
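Names like these are produced by setting dc_suffix in each node's cassandra-rackdc.properties; a sketch for a node in the first us-east Cassandra data center (the suffix value here is an example, chosen per node):

```
# cassandra-rackdc.properties on a node in the us-east region
# (example suffix; the resulting data center name is us-east_1_cassandra)
dc_suffix=_1_cassandra
```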
Other configuration settings
This snitch uses the public IP as the broadcast_address to allow cross-region connectivity, so you must configure each node for cross-region communication:
- Set the listen_address to the private IP address of the node, and the broadcast_address to the public IP address of the node.
This allows Cassandra nodes in one EC2 region to bind to nodes in another region, thus enabling multiple data center support. (For intra-region traffic, Cassandra switches to the private IP after establishing a connection.)
- Set the addresses of the seed nodes in the cassandra.yaml file to the public IP addresses (private IPs are not routable between networks). For example:
seeds: 188.8.131.52, 184.108.40.206
To find the public IP address, from each of the seed nodes in EC2:
$ curl http://instance-data/latest/meta-data/public-ipv4
Note: Do not make all nodes seeds; see Internode communications (gossip).
- Be sure that the storage_port or ssl_storage_port is open on the public IP firewall.
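Putting the settings above together, a cassandra.yaml fragment might look like the following (all addresses are placeholders, not real nodes):

```
# cassandra.yaml (sketch; addresses are placeholders)
listen_address: 10.0.1.5            # private IP of this node
broadcast_address: 203.0.113.10     # public IP of this node
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "203.0.113.10,203.0.113.20"
```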