During compaction, Cassandra combines multiple data files to improve the performance of partition scans and to reclaim space from deleted data.
Compaction is a periodic background process. During compaction, Cassandra merges SSTables by combining row fragments, evicting expired tombstones, and rebuilding indexes. Because SSTables are sorted by key, the merge is efficient and requires no random disk I/O. After a newly merged SSTable is complete, the input SSTables are reference counted and removed as soon as possible. While a compaction runs, disk space usage and disk I/O spike temporarily.
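The merge step can be illustrated with a minimal sketch, assuming each SSTable is modeled as a list of `(key, timestamp, value)` entries sorted by key; the function name and data layout are illustrative, not Cassandra internals:

```python
import heapq

def merge_sstables(sstables):
    """Merge several sorted SSTables into one, keeping only the newest
    version of each key (an illustrative sketch, not Cassandra code)."""
    # heapq.merge performs a sequential k-way merge over sorted inputs,
    # mirroring how compaction avoids random disk I/O.
    merged = {}
    for key, timestamp, value in heapq.merge(*sstables):
        # The entry with the later timestamp wins; fragments of the
        # same row from different SSTables are combined this way.
        if key not in merged or timestamp > merged[key][0]:
            merged[key] = (timestamp, value)
    # Emit a single sorted SSTable.
    return [(k, ts, v) for k, (ts, v) in sorted(merged.items())]
```

Merging `[("a", 1, "x"), ("c", 1, "y")]` with `[("a", 2, "z"), ("b", 1, "w")]` keeps the newer version of `"a"` and produces one sorted table.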
Cassandra provides two compaction strategies:
- SizeTieredCompactionStrategy: The default compaction strategy. This strategy gathers SSTables of similar size and compacts them together into a larger SSTable. You can configure the thresholds for this strategy using CQL.
- LeveledCompactionStrategy: Introduced in Cassandra 1.0, this strategy creates SSTables of a fixed, relatively small size (5 MB by default) that are grouped into levels. Within each level, SSTables are guaranteed to be non-overlapping. Each level (L0, L1, L2, and so on) is 10 times as large as the previous level. To enable this strategy, set the compaction strategy for the table.
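For example, to switch an existing table to leveled compaction in CQL (the table name `users` is a placeholder):

```cql
ALTER TABLE users
  WITH compaction = { 'class' : 'LeveledCompactionStrategy',
                      'sstable_size_in_mb' : 5 };
```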
Compaction can impact reads that are not fulfilled by the cache because of temporary increases in disk I/O and utilization. However, after a compaction completes, off-cache read performance improves because fewer SSTable files on disk need to be checked.
In Cassandra 1.2 and later, you can configure when compaction (eviction) occurs for tombstones for TTL-configured and deleted columns with the tombstone parameters. Setting these parameters helps you avoid having to perform compaction manually to recover disk space.
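For example, the tombstone subproperties can be set in CQL along with the compaction strategy (the table name `users` and the values shown are placeholders):

```cql
ALTER TABLE users
  WITH compaction = { 'class' : 'SizeTieredCompactionStrategy',
                      'tombstone_threshold' : 0.2,
                      'tombstone_compaction_interval' : 86400 };
```

Here `tombstone_threshold` is the ratio of garbage-collectable tombstones that makes an SSTable eligible for a single-SSTable compaction, and `tombstone_compaction_interval` is the minimum age, in seconds, before such a compaction is considered.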
You can set subproperties for the chosen compaction strategy:
You can specify a number of compaction parameters in the cassandra.yaml file:
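A sketch of the relevant cassandra.yaml settings, with illustrative values (defaults may differ by release):

```yaml
# Throttle compaction to this total throughput across the system.
compaction_throughput_mb_per_sec: 16
# Number of compactions that may run concurrently.
concurrent_compactors: 2
# Take a snapshot of the data before each compaction.
snapshot_before_compaction: false
```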
Starting with Cassandra 1.2.5, the compaction_throughput_mb_per_sec parameter works better with large partitions because compaction is throttled to the specified total throughput across the entire system. In older releases, Cassandra only checked the compaction throughput between partitions, so large partitions could cause spikes in I/O demand.
Cassandra provides a start-up option for testing compaction strategies without affecting the production workload.
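Assuming this refers to write survey mode, a node can be started so that it receives writes but serves no reads, letting you observe a candidate compaction strategy in isolation:

```shell
# Start the node in write survey mode (assumption: this is the
# start-up option referred to above). The node joins the ring for
# writes only, so experiments do not affect production reads.
bin/cassandra -Dcassandra.write_survey=true
```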
For information about compaction metrics, see Compaction metrics.
About full compactions
A full compaction applies only to SizeTieredCompactionStrategy. It merges all SSTables into one large SSTable. However, a full compaction is not recommended because the resulting large SSTable will not be compacted again until the amount of actual data increases four-fold (the min_threshold). Additionally, while it runs, a full compaction is I/O and CPU intensive and can temporarily double disk space usage when no old versions or tombstones are evicted.
To initiate a full compaction for all tables in a keyspace, use the nodetool compact command.
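For example (the keyspace name `demo` is a placeholder):

```shell
# Run a full compaction on every table in the demo keyspace.
nodetool compact demo
```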
Related Nodetool commands