Configuring data consistency
Consistency refers to how up-to-date and synchronized a row of Cassandra data is on all of its replicas. Cassandra extends the concept of eventual consistency by offering tunable consistency. For any given read or write operation, the client application decides how consistent the requested data must be.
Tunable consistency for client requests
Consistency levels in Cassandra can be configured to manage availability versus data accuracy. A tutorial in the CQL documentation compares consistency levels using cqlsh tracing. You can configure consistency for a cluster, a data center, or an individual I/O operation: the level can be set globally and also overridden on a per-operation basis (for example, for an insert or update) using Cassandra's drivers and client libraries.
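For example, in cqlsh you can set and inspect a session-wide consistency level with the CONSISTENCY command (the transcript below is illustrative; exact messages can vary by cqlsh version):

```
cqlsh> CONSISTENCY QUORUM;
Consistency level set to QUORUM.
cqlsh> CONSISTENCY;
Current consistency level is QUORUM.
```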
About write consistency
The consistency level determines the number of replicas on which the write must succeed before returning an acknowledgment to the client application.
| Level | Description | Usage |
|---|---|---|
| ALL | A write must be written to the commit log and memtable on all replica nodes in the cluster for that partition. | Provides the highest consistency and the lowest availability of any level. |
| EACH_QUORUM | Strong consistency. A write must be written to the commit log and memtable on a quorum of replica nodes in each data center. | Used in multiple data center clusters to strictly maintain consistency at the same level in each data center. For example, choose this level if you want a read to fail when a data center is down and the QUORUM cannot be reached in that data center. |
| QUORUM | A write must be written to the commit log and memtable on a quorum of replica nodes. | Provides strong consistency if you can tolerate some level of failure. |
| LOCAL_QUORUM | Strong consistency. A write must be written to the commit log and memtable on a quorum of replica nodes in the same data center as the coordinator node. Avoids latency of inter-data center communication. | Used in multiple data center clusters with a rack-aware replica placement strategy (NetworkTopologyStrategy) and a properly configured snitch. Fails when using SimpleStrategy. Use to maintain consistency locally (within the single data center). |
| ONE | A write must be written to the commit log and memtable of at least one replica node. | Satisfies the needs of most users because consistency requirements are not stringent. |
| TWO | A write must be written to the commit log and memtable of at least two replica nodes. | Similar to ONE. |
| THREE | A write must be written to the commit log and memtable of at least three replica nodes. | Similar to TWO. |
| LOCAL_ONE | A write must be sent to, and successfully acknowledged by, at least one replica node in the local data center. | In multiple data center clusters, a consistency level of ONE is often desirable, but cross-DC traffic is not; LOCAL_ONE accomplishes this. For security and quality reasons, you can use this consistency level in an offline data center to prevent automatic connection to online nodes in other data centers if an offline node goes down. |
| ANY | A write must be written to at least one node. If all replica nodes for the given partition key are down, the write can still succeed after a hinted handoff has been written. If all replica nodes are down at write time, an ANY write is not readable until the replica nodes for that partition have recovered. | Provides low latency and a guarantee that a write never fails. Delivers the lowest consistency and the highest availability. |
| SERIAL | Achieves linearizable consistency for lightweight transactions by preventing unconditional updates. | Cannot be configured as a normal consistency level (the consistency level field set at the driver level). Instead, configure it using the serial consistency field as part of the native protocol operation. See the failure scenarios below. |
| LOCAL_SERIAL | Same as SERIAL but confined to the data center. A write must be written conditionally to the commit log and memtable on a quorum of replica nodes in the same data center. | Same as SERIAL. Used for disaster recovery. See the failure scenarios below. |
Even at low consistency levels, the write is still sent to all replicas for the written key, even replicas in other data centers. The consistency level just determines how many replicas are required to respond that they received the write.
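As a sketch of the SERIAL row above: with the Java driver 3.x API, the serial consistency field is set separately from the normal consistency level and applies only to the Paxos phase of a conditional write. The session handling, table, and values here are illustrative assumptions:

```java
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

final class SerialConsistencyExample {
    // Performs a lightweight transaction; the table and values are illustrative.
    static void conditionalInsert(Session session) {
        Statement stmt = new SimpleStatement(
                "INSERT INTO users (id, name) VALUES (1, 'alice') IF NOT EXISTS");
        stmt.setConsistencyLevel(ConsistencyLevel.QUORUM);       // normal consistency field
        stmt.setSerialConsistencyLevel(ConsistencyLevel.SERIAL); // serial consistency field
        session.execute(stmt);
    }
}
```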
SERIAL and LOCAL_SERIAL write failure scenarios
Suppose one of three replica nodes is down before a lightweight transaction starts, with:
- CQL query-configured consistency level of ALL
- Driver-configured serial consistency level of SERIAL
- Replication factor of 3
In that case, a WriteTimeout with a WriteType of CAS occurs and subsequent reads do not see the write. If the node instead goes down in the middle of the operation rather than before it started, the write is committed, the value is written to the live nodes, and a WriteTimeout with a WriteType of SIMPLE occurs.
Under the same conditions, if two of the nodes are down at the beginning of the operation, the Paxos commit fails and nothing is committed. If the two nodes go down after the Paxos proposal is accepted, the write is committed and written to the remaining live node, but a WriteTimeout with a WriteType of SIMPLE is returned.
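A client can distinguish these outcomes because the Java driver 3.x exposes the write type on the timeout exception. The following is a minimal sketch under that assumption; the class and method names are illustrative:

```java
import com.datastax.driver.core.Session;
import com.datastax.driver.core.Statement;
import com.datastax.driver.core.WriteType;
import com.datastax.driver.core.exceptions.WriteTimeoutException;

final class LwtTimeoutHandling {
    // Executes a conditional (lightweight transaction) statement and
    // interprets a timeout per the scenarios described above.
    static void executeLwt(Session session, Statement lwt) {
        try {
            session.execute(lwt);
        } catch (WriteTimeoutException e) {
            if (e.getWriteType() == WriteType.CAS) {
                // Timed out in the Paxos phase: subsequent reads do not
                // see the write.
            } else if (e.getWriteType() == WriteType.SIMPLE) {
                // Timed out in the commit phase: the write was committed
                // and written to the live replicas.
            }
        }
    }
}
```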
About read consistency
The consistency level specifies how many replicas must respond to a read request before returning data to the client application.
Cassandra checks the specified number of replicas for data to satisfy the read request.
| Level | Description | Usage |
|---|---|---|
| ALL | Returns the record after all replicas have responded. The read operation fails if a replica does not respond. | Provides the highest consistency and the lowest availability of all levels. |
| EACH_QUORUM | Returns the record once a quorum of replicas in each data center of the cluster has responded. | Same as LOCAL_QUORUM. |
| QUORUM | Returns the record after a quorum of replicas from any data center has responded. | Ensures strong consistency if you can tolerate some level of failure. |
| LOCAL_QUORUM | Returns the record after a quorum of replicas in the same data center as the coordinator node has responded. Avoids latency of inter-data center communication. | Used in multiple data center clusters with a rack-aware replica placement strategy (NetworkTopologyStrategy) and a properly configured snitch. Fails when using SimpleStrategy. |
| ONE | Returns a response from the closest replica, as determined by the snitch. By default, a read repair runs in the background to make the other replicas consistent. | Provides the highest availability of all the levels if you can tolerate a comparatively high probability of stale data being read. The replicas contacted for reads may not always have the most recent write. |
| TWO | Returns the most recent data from two of the closest replicas. | Similar to ONE. |
| THREE | Returns the most recent data from three of the closest replicas. | Similar to TWO. |
| LOCAL_ONE | Returns a response from the closest replica in the local data center. | Same usage as described in the table about write consistency levels. |
| SERIAL | Allows reading the current (and possibly uncommitted) state of data without proposing a new addition or update. If a SERIAL read finds an uncommitted transaction in progress, it commits the transaction as part of the read. Similar to QUORUM. | To read the latest value of a column after a user has invoked a lightweight transaction to write to the column, use SERIAL. Cassandra then checks the in-flight lightweight transaction for updates and, if found, returns the latest data. |
| LOCAL_SERIAL | Same as SERIAL, but confined to the data center. Similar to LOCAL_QUORUM. | Used to achieve linearizable consistency for lightweight transactions. |
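To read the latest value after lightweight transactions, as the SERIAL row above describes, a read statement sets SERIAL on the normal consistency level field (not the serial consistency field). A minimal Java driver 3.x sketch follows; the table and column names are illustrative:

```java
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

final class SerialReadExample {
    // Reads a column that lightweight transactions update; names are illustrative.
    static Row readLatest(Session session) {
        Statement read = new SimpleStatement("SELECT name FROM users WHERE id = 1");
        read.setConsistencyLevel(ConsistencyLevel.SERIAL); // normal consistency field
        return session.execute(read).one();
    }
}
```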
About the QUORUM level
The QUORUM level writes to the number of nodes that make up a quorum. A quorum is calculated, and then rounded down to a whole number, as follows:
(sum_of_replication_factors / 2) + 1
sum_of_replication_factors is the sum of the replication_factor settings for all data centers in the cluster.
For example, in a single data center cluster with a replication factor of 3, a quorum is 2 nodes, so the cluster can tolerate 1 replica node down. With a replication factor of 6, a quorum is 4, and the cluster can tolerate 2 replica nodes down. In a two data center cluster where each data center has a replication factor of 3, a quorum is 4 nodes, and the cluster can tolerate 2 replica nodes down. In a five data center cluster where each data center has a replication factor of 3, a quorum is 8 nodes.
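The arithmetic above condenses to integer division, which performs the round-down. This sketch (class and method names are illustrative) reproduces the four examples:

```java
// Quorum arithmetic from the formula above; names are illustrative.
public final class QuorumMath {

    // quorum = (sum_of_replication_factors / 2) + 1, rounded down.
    static int quorum(int... replicationFactorPerDc) {
        int sum = 0;
        for (int rf : replicationFactorPerDc) {
            sum += rf; // sum_of_replication_factors
        }
        return sum / 2 + 1; // integer division rounds down
    }

    public static void main(String[] args) {
        System.out.println(quorum(3));             // 2: one data center, RF 3
        System.out.println(quorum(6));             // 4: one data center, RF 6
        System.out.println(quorum(3, 3));          // 4: two data centers, RF 3 each
        System.out.println(quorum(3, 3, 3, 3, 3)); // 8: five data centers, RF 3 each
    }
}
```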
If consistency is top priority, you can ensure that a read always reflects the most recent write by using the following formula:
(nodes_written + nodes_read) > replication_factor
For example, if your application is using the QUORUM consistency level for both write and read operations and you are using a replication factor of 3, then this ensures that 2 nodes are always written and 2 nodes are always read. The combination of nodes written and read (4) being greater than the replication factor (3) ensures strong read consistency.
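The same inequality as a runnable check (names are illustrative):

```java
final class StrongConsistencyCheck {
    // True when a read is guaranteed to overlap the most recent write.
    static boolean readSeesLatestWrite(int nodesWritten, int nodesRead, int replicationFactor) {
        return nodesWritten + nodesRead > replicationFactor;
    }

    public static void main(String[] args) {
        System.out.println(readSeesLatestWrite(2, 2, 3)); // true: QUORUM/QUORUM at RF 3
        System.out.println(readSeesLatestWrite(1, 1, 3)); // false: ONE/ONE may read stale data
    }
}
```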
Configuring client consistency levels
You can use the cqlsh command CONSISTENCY to set the consistency level for queries in the current cqlsh session. The WITH CONSISTENCY clause has been removed from CQL commands. You set the consistency level programmatically at the driver level; for example, build a statement with QueryBuilder.insertInto and set its consistency with setConsistencyLevel. The consistency level defaults to ONE for all write and read operations.
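A minimal sketch with the Java driver 3.x QueryBuilder; the contact point, keyspace, table, and columns are illustrative assumptions:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.querybuilder.Insert;
import com.datastax.driver.core.querybuilder.QueryBuilder;

public final class ConsistencyExample {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("demo")) {
            Insert insert = QueryBuilder.insertInto("users")
                    .value("id", 1)
                    .value("name", "alice");
            // Override the default (ONE) for this one statement.
            insert.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
            session.execute(insert);
        }
    }
}
```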