
What’s New in Cassandra 2.0: Prototype Triggers Support

By Aleksey Yeschenko -  August 31, 2013 | 17 Comments

Warning: as of Cassandra 2.0.0, the ITrigger interface and the rest of the triggers implementation are not final - and will change in 2.1. Please be aware of this before using triggers in production until at least Cassandra 2.1.


New Cassandra 2.0 prototype triggers rely on logged batches, originally added in Cassandra 1.2, to implement a flexible, atomic, eventually consistent mechanism for reacting to - and augmenting - write operations.

Cassandra triggers differ from classic RDBMS triggers in their activation time and their partition-level granularity. A coordinator node executes triggers before actually applying the mutations (locally or on the remote nodes), giving you the ability to alter the mutations-to-be, augment them with extra mutations, or execute any arbitrary code, really *. The coordinator takes the original mutations (potentially modified by the trigger), adds the extra mutations created by the trigger, and applies them all together as one single logged batch, guaranteeing atomicity and eventual consistency.

It follows that triggers on counter tables are generally not supported (counter mutations are not allowed inside logged batches for obvious reasons - they aren't idempotent).

There are multiple potential use cases for Cassandra triggers:

  • extra input validation - enforcing constraints beyond the data type validation performed by Cassandra
  • replicating or migrating modifications from one table or keyspace to another
  • incrementally updating a materialised view derived from one or more tables
  • logging any mutations that meet particular conditions
  • implementing alerts/notifications
  • performing any other application-specific logic

Credit for the implementation goes to Vijay Parthasarathy.

Implementing a Trigger

The current (as of C* 2.0.0) ITrigger interface itself is extremely simple:

public interface ITrigger
{
    /**
     * Called exactly once per CF update; the returned mutations are applied atomically with it.
     *
     * @param key    row key for the update
     * @param update update received for the CF
     * @return modifications to be applied, or null if no action is to be performed
     */
    public Collection<RowMutation> augment(ByteBuffer key, ColumnFamily update);
}

It does (currently) expose some internal classes that deserve a brief explanation:

  • RowMutation represents changes to one or more tables, with two constraints: 1) all the tables belong to the same keyspace, and 2) all the changes share the same partition key. The changes themselves are grouped into ColumnFamily objects (source).
  • A ColumnFamily contains the cells to be inserted into and/or removed from its respective table - one ColumnFamily of changes per table (source).

The ColumnFamily object passed to the augment method is mutable, thus it's technically possible to interfere and alter the original mutation. It's also possible to create additional mutations for any table in any keyspace that will be performed together with the original changes as a single logged batch.

See the simplistic inverted index implementation for the augmented mutations example.
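To make the mechanics concrete, here is a minimal sketch in the spirit of that example: a trigger that, for every non-empty cell of an update, emits an inverted-index entry whose partition key is the cell value and whose cell maps the original column name back to the original partition key. The index keyspace and table names are hypothetical, and the RowMutation/ColumnFamily calls follow the 2.0-era internal API, which (per the warning above) will change in 2.1.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.apache.cassandra.db.Column;
import org.apache.cassandra.db.ColumnFamily;
import org.apache.cassandra.db.RowMutation;
import org.apache.cassandra.triggers.ITrigger;

public class InvertedIndexTrigger implements ITrigger
{
    // Hypothetical target table for the index entries
    private static final String INDEX_KEYSPACE = "demo";
    private static final String INDEX_TABLE = "inverted_index";

    public Collection<RowMutation> augment(ByteBuffer key, ColumnFamily update)
    {
        List<RowMutation> mutations = new ArrayList<RowMutation>();
        for (Column cell : update)
        {
            // Skip row markers and deletes, which carry empty values
            if (cell.value().remaining() == 0)
                continue;

            // Index entry: partition key = cell value, cell name = original column name,
            // cell value = original partition key
            RowMutation mutation = new RowMutation(INDEX_KEYSPACE, cell.value());
            mutation.add(INDEX_TABLE, cell.name(), key, System.currentTimeMillis());
            mutations.add(mutation);
        }
        return mutations.isEmpty() ? null : mutations;
    }
}
```

The returned mutations target a different partition (and could target a different keyspace), which is fine: the coordinator folds them into the same logged batch as the original write.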


To create a trigger, you must first build a jar with a class implementing the ITrigger interface and put it into the triggers directory on every node, then perform a CQL3 CREATE TRIGGER request to tie your trigger to a Cassandra table (or several tables).

conf/triggers is the default location for the trigger jars, but it can be redefined by setting the cassandra.triggers_dir system property.

To add the trigger to a table, run

CREATE TRIGGER <name> ON [<keyspace>.]<table> USING '<class>'

to remove one, use

DROP TRIGGER <name> ON [<keyspace>.]<table>
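As a concrete instantiation of the syntax above (the trigger name, table, and class name here are hypothetical):

```sql
CREATE TRIGGER myindex ON mykeyspace.mytable USING 'com.example.InvertedIndexTrigger';

DROP TRIGGER myindex ON mykeyspace.mytable;
```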

Future Work

The current implementation is experimental, and there is some work to do before triggers in Cassandra can be declared final and production-ready. CREATE TRIGGER should support parametrisation, so that triggers could be reused between different tables and configured without a need for external configuration files. It would be nice to be able to define triggers in CQL3 in addition to pure Java. And an API that doesn't reveal the internals (RowMutation and ColumnFamily classes) would be preferable to the current one.

That said, please do experiment with the current implementation and share your feedback - it will affect the final trigger design.

* while we do use a separate class loader for trigger classes, we don't sandbox the execution of triggers in any way. Be extra careful with the code that goes in augment - it can negatively affect the whole node.



  1. Great news. A few more questions about the triggers implementation remain:

    1. How does exception handling work? The interface doesn’t allow throwing checked exceptions. Will the exception be ignored, or will the mutation operation fail? I would prefer to have more options to work with here.

    2. It would be great to be able to execute triggers asynchronously from the actual operations, in the background.

    3. Add configurable timeout handling.

  2. Aleksey Yeschenko says:

    1. The operation will fail. No exceptions in 2.0, but anything could change in 2.1.

    2. You have to return a result fast-ish, but there is nothing blocking you from scheduling extra operations in the background (in a different thread).

    3. Maybe?

  3. Deepak Nulu says:

    I understand from reading this blog post that triggers are executed before the changes are written to the disk. Is my understanding correct? If so, is there a mechanism to get notified after changes are written to disk?

  4. Daniel Jones says:

    Is there any way of dealing with CQL types in triggers? The TypeCodec class is package protected, and so I’m not seeing how one would create a new key for a row mutation.

  5. Aleksey Yeschenko says:

    Daniel, you don’t need TypeCodec for that. Use org.apache.cassandra.db.marshal.* classes instead.

  6. Gerard says:

    Hi, Is it possible for a trigger to act on TTL expiration of records? Or is there any other way?

  7. Aleksey Yeschenko says:

    > Is it possible for a trigger to act on TTL expiration of records? Or is there any other way?

    No, there is no (possible) way to do that at all, given the way TTL cells are implemented.

  8. Tuan says:

    Aleksey Yeschenko,

    I have a problem like this when following the Cassandra Example.

    Please advise.

  9. Aleksey Yeschenko says:

    Tuan: I have replied to the SO question and updated the example that ships with Cassandra.

  10. Tuan says:

    Aleksey Yeschenko,

    Thank you for your update.
    Could you share the CQL commands for creating a ColumnFamily and its InvertedIndex column family?

    I still get the following error when executing the sample code:

  11. Tuan says:

    Sorry for the previous post. This is the error
    Caused by: java.lang.NullPointerException
    at org.apache.cassandra.db.RowMutation.addOrGet(
    at org.apache.cassandra.db.RowMutation.addOrGet(

  12. Ramesh says:

    I need some example code for triggers. Can someone please give me some sample code?

  13. Robert Walter says:

    When a write operation is performed, will the associated trigger be invoked on all nodes in the cluster or just the nodes/partitions where the write is performed?

  14. Kiran Kumar Dasari says:

    How can I fetch the data of all the columns of the row belonging to that primary/partition key, even though only a few of those columns were updated?

  15. suman says:

    I want to use the trigger mechanism for logging. Using the augment method, how do I get a handle on:

    the “query string” that was fired
    the authenticated user’s information
    Is there another way to get the query string programmatically?

  16. Christopher Smith says:

    As others have mentioned, I think it’d also be really helpful to have a way to have “read only” triggers that fired *after* the completion of a write (potentially with varying consistency levels). There are lots of cases where I’d like to be able to stream out completed mutations, even if they aren’t yet guaranteed to be completed everywhere. This would help not just with logging, but also things like maintaining an inverted index (where in a lot of cases it is perfectly fine to have the index be out of date, but it is a bit of a PITA to have it potentially be *ahead* of the row), and by being outside of the mutation you could also comfortably have it write to other partitions instead of being restricted to the local partition key.

    Right now it feels like the best way to do that would be to have my ITrigger use a ThreadPool/executor that executed with a microsleep, typically using “ANY” consistency writes. That exposes a lot of potential issues with memory pressure/workload balancing I’d rather avoid/have Cassandra manage. I can mitigate that somewhat by having the ITrigger write to a queue, but that requires serializing out all the mutation data, whereas Cassandra could just hold to a pointer to the changelog/SSTable row and use its own worker pools, which feels like it’d be much more efficient.

    Any thoughts on such a read-only, “after mutation” trigger?

  17. rektide says:

    Has anything changed in 2.1? Are any changes in the works that might go in to 2.2?

    I also agree with Chris Smith: “read-only” triggers would be great for many use cases- trying to tell other external systems, hey, stuff has changed.

