Under the contrib/stress directory you can find the Java-based stress utility. This is a tool for benchmarking and load testing a Cassandra cluster.
Use Apache ant to build the stress testing tool (run ant from the contrib/stress directory).
There are four different modes of operation: insert, read, rangeslice, and indexedrangeslice.
If no operation is specified, stress will insert 1M rows.
The options available are:
-o, --operation <operation>
Sets the operation mode, one of 'insert', 'read', 'rangeslice', or 'indexedrangeslice'.
-n <NUMKEYS>, --num-keys <NUMKEYS>
Number of keys to write or read. Default is 1,000,000.
-c <COLUMNS>, --columns <COLUMNS>
Number of columns per key. Default is 5.
-d <NODES>, --nodes <NODES>
Nodes to perform the test against (comma separated, no spaces). Default is "localhost".
-y <TYPE>, --family-type <TYPE>
Sets the ColumnFamily type, one of 'Standard' or 'Super'. If using Super, also set the -u option.
-u <SUPERCOLUMNS>, --supercolumns <SUPERCOLUMNS>
Use the number of supercolumns specified. You must set the -y option appropriately, or this option has no effect.
-g <COUNT>, --get-range-slice-count <COUNT>
Sets the number of rows to slice at a time. Default is 1000. This is only used for the rangeslice operation and will NOT work with the RandomPartitioner. You must set the OrderPreservingPartitioner in your storage configuration (note that you will need to wipe all existing data when switching partitioners).
-r, --random
Only used for reads. By default, stress will perform reads on rows with a gaussian distribution, which will cause some repeats. Setting this option makes the reads completely random instead.
-i <INTERVAL>, --progress-interval <INTERVAL>
The interval, in seconds, at which progress will be output.
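For example, the -y and -u options described above can be combined to test a Super column family (a hypothetical invocation; the node address and supercolumn count are placeholders, substitute your own):

```shell
# 1M inserts into a Super column family with 3 supercolumns per key
# (hypothetical host address; combines -y and -u as described above)
contrib/stress/bin/stress -d 192.168.1.101 -y Super -u 3
```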
1M inserts to given host:
contrib/stress/bin/stress -d 192.168.1.101
1M reads from given host:
contrib/stress/bin/stress -d 192.168.1.101 -o read
10M inserts spread across two nodes:
contrib/stress/bin/stress -d 192.168.1.101,192.168.1.102 -n 10000000
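A range-slice run using the -g option might look like the following (a hypothetical invocation; the host address and slice count are placeholders, and as noted above this requires the OrderPreservingPartitioner):

```shell
# Range-slice reads, 500 rows per slice (hypothetical host address;
# requires the OrderPreservingPartitioner in your storage configuration)
contrib/stress/bin/stress -d 192.168.1.101 -o rangeslice -g 500
```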