The anti-patterns described here are implementation or design patterns that are ineffective or counterproductive in Cassandra production installations. Correct patterns are suggested in most cases.
Storing SSTables on a Network Attached Storage (NAS) device is of limited use. A NAS device often becomes a network-related bottleneck, showing up as high I/O wait time on both reads and writes. Typical causes include router latency and contention on the network interface cards (NICs) of both the node and the NAS device.
There are exceptions to this pattern. If you use NAS, ensure that each drive is accessed by only one machine and that each drive is physically close to its node.
DataStax recommends using the default heap space size for most use cases. Exceeding this size can impair the Java virtual machine's (JVM) ability to perform fluid garbage collection (GC). The following table shows a comparison of heap space performance reported by a Cassandra user:
| Heap  | CPU utilization    | Queries per second | Latency  |
|-------|--------------------|--------------------|----------|
| 40 GB | 50%                | 750                | 1 second |
| 8 GB  | 5% (not maxed out) | 8500               | 10 ms    |
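On Cassandra nodes, the heap is set through MAX_HEAP_SIZE in conf/cassandra-env.sh. As a quick sanity check, the following snippet (an illustrative standalone program, not part of Cassandra) prints the maximum heap a JVM actually received:

```java
public class HeapCheck {
    public static void main(String[] args) {
        // Runtime.maxMemory() reports the JVM's configured maximum heap,
        // useful for verifying that a MAX_HEAP_SIZE change took effect.
        double gb = Runtime.getRuntime().maxMemory() / (1024.0 * 1024 * 1024);
        System.out.printf("JVM max heap: %.1f GB%n", gb);
    }
}
```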
Defining one rack for the entire cluster is the simplest and most common implementation. Multiple racks are harder to set up, maintain, and debug, and unless nodes are assigned to racks correctly they can concentrate replicas and create hotspots. To use racks correctly, use the same number of nodes in each rack and place nodes in the racks in an alternating pattern.
Multiple gets (multigets) may cause problems. One sure way to kill a node is to make it buffer 300 MB of data, time out, and then have 50 different clients retry the same request.
Instead, architect your application around many single requests for different rows. This method ensures that if a read fails on a node, due to a backlog of pending requests, an unmet consistency level, or another error, only the failed request needs to be retried.
Use the same single-request approach whether you read the entire key or only slices of it, and keep row sizes in mind: reading too many entire ultra-wide rows in parallel can cause out-of-memory (OOM) errors.
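For example, the following sketch issues one asynchronous request per key with the DataStax Java driver (the keyspace, table, and keys are hypothetical), so each read can fail and be retried on its own:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SingleRowReads {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("my_keyspace"); // hypothetical keyspace

        PreparedStatement stmt =
            session.prepare("SELECT * FROM users WHERE user_id = ?"); // hypothetical table

        // One independent request per key: a failed or slow read affects only
        // that key and can be retried on its own, instead of re-running one
        // huge multiget from many clients at once.
        List<ResultSetFuture> futures = new ArrayList<ResultSetFuture>();
        for (String key : Arrays.asList("alice", "bob", "carol")) {
            futures.add(session.executeAsync(stmt.bind(key)));
        }
        for (ResultSetFuture future : futures) {
            System.out.println(future.getUninterruptibly().one());
        }
        cluster.close();
    }
}
```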
Do not use super columns. They are a legacy design from a pre-open-source release, structured for one specific use case that does not fit most use cases. Every read request on a super column loads the entire super column and all of its sub-columns into memory, resulting in severe performance issues. Additionally, super columns are not supported in CQL 3.
Use composite columns instead. Composite columns provide most of the same benefits as super columns without the performance issues.
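In CQL 3, the composite-column equivalent of a super column is a table with a compound primary key. The sketch below (hypothetical keyspace and table, shown with the DataStax Java driver) groups related rows under one partition key while letting reads fetch just a slice of the group:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class CompositeColumnsSketch {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("my_keyspace"); // hypothetical keyspace

        // 'user_id' is the partition key and 'posted_at' a clustering column;
        // internally the partition is stored as composite columns, providing
        // the grouping that super columns offered without reading the whole
        // group into memory on every request.
        session.execute(
            "CREATE TABLE user_posts ("
          + "  user_id text,"
          + "  posted_at timestamp,"
          + "  body text,"
          + "  PRIMARY KEY (user_id, posted_at))");

        // Read only a slice of the group rather than all of it:
        session.execute("SELECT * FROM user_posts WHERE user_id = 'alice' LIMIT 10");

        cluster.close();
    }
}
```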
The ByteOrderedPartitioner is not recommended.
The RandomPartitioner is recommended because all writes occur at the MD5 hash of the key and are therefore spread throughout the ring among tokens ranging from 0 to 2^127 - 1. This partitioner ensures that your cluster evenly distributes data by placing each key at the correct token using the key's hash value. Even when data becomes stale and must be deleted, the removal work stays evenly distributed around the cluster.
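As a rough illustration of the hashing, the sketch below mimics the RandomPartitioner's token calculation (a simplified model, not the production code path): the token is the absolute value of the 128-bit MD5 digest of the key, so even near-identical keys land far apart on the ring:

```java
import java.math.BigInteger;
import java.security.MessageDigest;

public class TokenSketch {
    // Simplified: the RandomPartitioner's token is abs(MD5(key)),
    // which falls in the range 0 to 2^127.
    static BigInteger token(String key) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        return new BigInteger(md5.digest(key.getBytes("UTF-8"))).abs();
    }

    public static void main(String[] args) throws Exception {
        // Similar keys hash to widely separated tokens, which is what
        // spreads data evenly around the cluster.
        for (String key : new String[] {"user1", "user2", "user3"}) {
            System.out.println(key + " -> " + token(key));
        }
    }
}
```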
Every read takes time, since uncached reads typically require multiple disk hits. In workflows that require a read before each write, this small amount of latency caps overall throughput. All write I/O in Cassandra is sequential, so writes are fast regardless of data size or key distribution; where possible, write directly instead of reading first.
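Because Cassandra treats INSERT and UPDATE as upserts, a client can usually write the new value directly instead of reading the row first. A minimal sketch, assuming a hypothetical user_settings table:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class BlindWriteSketch {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("my_keyspace"); // hypothetical keyspace

        // Anti-pattern: SELECT the row, modify it in the client, write it back.
        // Preferred: write the new state directly; no read latency is paid.
        session.execute(
            "UPDATE user_settings SET theme = 'dark' WHERE user_id = 'alice'");

        cluster.close();
    }
}
```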
Cassandra was designed to avoid the need for load balancers. Putting load balancers between Cassandra and Cassandra clients is harmful to performance, cost, availability, debugging, testing, and scaling. All high-level clients, such as Astyanax and pycassa, implement load balancing directly.
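As one illustration (Astyanax and pycassa have their own equivalents), the DataStax Java driver configures load balancing on the client itself; the contact points below are hypothetical:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.RoundRobinPolicy;
import com.datastax.driver.core.policies.TokenAwarePolicy;

public class ClientSideBalancing {
    public static void main(String[] args) {
        // The driver discovers every node from the contact points and spreads
        // requests across them itself, so no load balancer sits between the
        // application and the cluster.
        Cluster cluster = Cluster.builder()
            .addContactPoints("10.0.0.1", "10.0.0.2") // hypothetical seed nodes
            .withLoadBalancingPolicy(new TokenAwarePolicy(new RoundRobinPolicy()))
            .build();
        cluster.connect();
        System.out.println("Connected to " + cluster.getMetadata().getClusterName());
        cluster.close();
    }
}
```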
Be sure to test at scale with production loads. This is the best way to ensure your system functions properly when your application goes live. The information you gather from testing is the best indicator of the throughput per node to use in future expansion calculations.
To test properly, set up a small cluster and drive it with production loads. For each node count there is a maximum throughput beyond which the cluster can no longer increase performance. Measure that maximum throughput, apply it linearly to other cluster sizes, and graph the results to extrapolate the cluster size required for a given throughput, both for your production cluster and for future growth. The Netflix case study is an excellent example of this kind of testing.
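The extrapolation itself is simple arithmetic. In the sketch below, the per-node throughput, target throughput, and headroom factor are all made-up numbers to be replaced with your own measurements:

```java
public class ClusterSizingSketch {
    public static void main(String[] args) {
        double perNodeOpsPerSec = 5000.0;  // measured max per node (assumed value)
        double targetOpsPerSec = 120000.0; // production requirement (assumed value)
        double headroom = 1.3;             // 30% cushion for growth and failures (assumed)

        // Linear extrapolation: nodes = ceil(target * headroom / per-node max)
        int nodesNeeded = (int) Math.ceil(targetOpsPerSec * headroom / perNodeOpsPerSec);
        System.out.println("Estimated nodes needed: " + nodesNeeded);
    }
}
```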
Linux has a great collection of built-in tools. Becoming familiar with them will help you greatly and ease operation and management costs for normal, routine functions. Essential tools and techniques to learn include top, iostat, vmstat, dstat, df, and netstat.
Be sure to use the recommended settings in the Cassandra documentation.
Also be sure to consult the Planning a Cassandra Cluster Deployment documentation, which discusses hardware and other recommendations, before making your final hardware purchases.