Data consistency in DSE Search
DSE Search provides a full-text search feature for Cassandra. Columns stored in the database can be indexed by Solr, which is plugged into Cassandra via the secondary index API. Each Solr index covers only the data stored on its own node. Let's take a closer look at what happens when you insert new data or query your database.
Solr-indexed data, just like any other kind of data, can be saved to Cassandra using any of the supported Cassandra clients, e.g. CQL. You open a connection to any node in the cluster and issue an UPDATE statement. We refer to this node as the coordinator. The coordinator selects the replicas to contact based on the row key, partitioner, and replication strategy. It then sends your new data there and waits for some or all of the replicas to confirm saving the data in their commit log, memtable, and Solr index. If you configured your replication factor to be 3, three nodes would get the data. Every node that received the new data confirms the save to the coordinator. The coordinator waits until enough nodes confirm, as determined by the configured write consistency level, and then returns control to the user.
What happens if one of those nodes is down? That depends on the consistency level you specified in the CQL statement. If the consistency level was ONE, only one replica has to confirm receiving the data, so your write would run just fine. You don't need to worry about the failed node: it will get the data once it is back up, and if you enabled the hinted handoff feature, the data will additionally be stored on the coordinator node in the meantime. If you set a higher consistency level, the coordinator has to wait for more nodes to respond. With consistency level ALL and one or more nodes down, your write would fail.
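The rule above can be sketched as a small function. This is an illustration, not DSE or Cassandra source code: it only models how a coordinator decides whether a write can satisfy the requested consistency level, given the replication factor and the number of live replicas.

```python
# Sketch (not DSE/Cassandra source): a coordinator's decision on whether a
# write can meet the requested consistency level, given the replication
# factor (rf) and the number of replicas that are up and acknowledge it.

def required_acks(consistency_level: str, rf: int) -> int:
    """Number of replica acknowledgements needed for the given level."""
    return {
        "ONE": 1,
        "TWO": 2,
        "QUORUM": rf // 2 + 1,
        "ALL": rf,
    }[consistency_level]

def write_succeeds(consistency_level: str, rf: int, replicas_up: int) -> bool:
    # The coordinator waits for acknowledgements; only live replicas respond.
    return replicas_up >= required_acks(consistency_level, rf)

# RF = 3 with one replica down: ONE and QUORUM succeed, ALL fails.
print(write_succeeds("ONE", 3, 2))     # True
print(write_succeeds("QUORUM", 3, 2))  # True
print(write_succeeds("ALL", 3, 2))     # False
```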
DSE Search also allows you to store or update data using the Solr HTTP interface. The write path is the same as if you had written the data using CQL. You can set the desired write consistency level by appending a “cl” parameter to the Solr POST request:
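For example, such a request can be built as follows. The host, port, keyspace, and table names are placeholders, and the document body follows the standard Solr XML update format; only the “cl” parameter is DSE-specific.

```python
# Building a Solr HTTP update request with a "cl" (consistency level)
# parameter appended. Host, port, and keyspace/table names below are
# placeholders; the body uses the standard Solr XML update format.
from urllib.parse import urlencode
from urllib.request import Request

base = "http://localhost:8983/solr/mykeyspace.mytable/update"
url = base + "?" + urlencode({"cl": "QUORUM"})

doc = b"<add><doc><field name='id'>1</field><field name='body'>hello</field></doc></add>"
req = Request(url, data=doc, headers={"Content-Type": "text/xml"}, method="POST")

print(req.full_url)
# http://localhost:8983/solr/mykeyspace.mytable/update?cl=QUORUM
```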
You can query Solr-indexed columns in two ways: through CQL or through the Solr HTTP interface. Unlike the write path, the read path depends on the interface used.
To issue a Solr query using CQL, append a solr_query predicate to the SELECT statement:
SELECT * FROM <column family> WHERE solr_query = '<solr query>';
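As a concrete instance of the template above (the keyspace, table, and field names here are made up for illustration):

```sql
SELECT * FROM mykeyspace.products WHERE solr_query = 'name:chocolate';
```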
The query is processed by the Cassandra storage engine in the following way:
- The coordinator selects a set of Solr shards that covers the entire token ring.
- The query is executed on each of the selected nodes using the Solr index to return the row keys matching the ‘solr_query’ predicate.
- The resulting row keys are sent back to the coordinator and merged.
- The coordinator fetches the documents corresponding to the merged row keys at consistency level LOCAL_ONE.
- The coordinator returns the resulting rows to the client.
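The scatter-gather steps above can be sketched in a few lines. This is an illustrative model, not DSE source: the per-node Solr hits and the row store are stubbed with plain dictionaries.

```python
# Sketch of the CQL solr_query read path: each selected shard returns the
# row keys its local Solr index matched, the coordinator merges them, then
# fetches the corresponding rows (at CL LOCAL_ONE in DSE; stubbed here).

# Hypothetical per-node Solr hits: node -> row keys matched locally.
shard_hits = {
    "node1": {"k1", "k4"},
    "node2": {"k2"},
    "node3": {"k3", "k5"},
}

# The coordinator merges row keys returned by all shards.
merged_keys = set().union(*shard_hits.values())

# Fetch each matched row (stubbed as a dict lookup instead of a real read).
rows = {k: {"key": k, "body": f"row {k}"} for k in merged_keys}

print(sorted(rows))  # ['k1', 'k2', 'k3', 'k4', 'k5']
```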
Cassandra versions up to 1.1 use a simple greedy algorithm for selecting the nodes that answer queries involving secondary indexes. It prefers nodes that are close to the coordinator according to the configured snitch, but it doesn't try very hard to minimize the number of queried nodes, nor to balance the load.
To query using Solr API, send a request in the following format:
http://<host>:<port>/solr/<keyspace>.<column family>/select?q=<Solr query>
You can specify a consistency level here, just as in the update request, but it will be ignored. Searches through the Solr API rely on Solr sharding, with a little help from a thin DSE integration layer that selects the shards. The only supported consistency level is ONE. The query is processed as follows:
- A proper list of Solr shards is selected by a dedicated DSE component. This component uses heuristics to query the closest shards and also tries to minimize the total number of shards queried. The algorithms used here are constantly being improved. Currently the selection depends only on the node initially queried, so if you query the same node over and over again, the set of shards is always the same and you should get repeatable results. However, if you query a different node, a different set of shards can be selected and the results for the same query might be different.
- Because you can have several replicas of the same document in your Solr data center (RF > 1), a token range predicate is added to each query before it is executed. Each query returns documents whose row keys fall within the appended token range(s). Token ranges are selected so that each range is handled by exactly one node. Therefore, there is no need for additional duplicate removal, and merging the results is a simple, fast union.
- Queries are executed by Solr and results are sent back to the coordinator node.
- Results are merged by Solr and returned to the user in the form of an XML document.
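The token-range trick in the second step can be demonstrated with toy data. The shards, ranges, and tokens below are made up: with RF > 1 every document lives on several nodes, but because each selected shard only answers for a disjoint token range, no document is returned twice and the merge is a plain union.

```python
# Sketch of why disjoint token ranges make the merge a duplicate-free union.
# Hypothetical token ownership: shard -> half-open token range [lo, hi).
selected_shards = {
    "node1": (0, 100),
    "node2": (100, 200),
    "node3": (200, 300),
}

# Documents and their tokens. With RF = 3 every node stores every document,
# but the appended range predicate filters what each shard actually returns.
documents = {"a": 42, "b": 150, "c": 250, "d": 99}

def shard_results(lo: int, hi: int) -> set:
    return {doc for doc, token in documents.items() if lo <= token < hi}

results = [shard_results(lo, hi) for lo, hi in selected_shards.values()]

# The ranges are disjoint and cover the ring, so the union has no duplicates:
merged = set().union(*results)
assert sum(len(r) for r in results) == len(merged)
print(sorted(merged))  # ['a', 'b', 'c', 'd']
```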
To maximize DSE Search query performance, we recommend querying through the Solr API with as high a replication factor as possible. The higher the replication factor, the fewer shards are selected and the lower the overall cluster load. Because shards are statically assigned based on the coordinator node, you should distribute your queries among all the nodes to avoid hotspots. If you need consistency with the Solr API, the only option is to use consistency level ALL for writes. The only supported consistency level for CQL Solr queries is LOCAL_ONE. CQL is also the only way to obtain columns that are not stored in the Solr index.
| | CQL | Solr HTTP API |
|---|---|---|
| Supported CL for writes | all levels | all levels |
| Supported CL for reads in a Solr-only cluster | LOCAL_ONE | ONE |
| Supported CL for reads in a mixed workload cluster | LOCAL_ONE | ONE |
| Minimum number of queries executed (per search) | N | N / RF |
| Maximum number of queries executed (per search) | N * RF | N |
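The N / RF lower bound is just ceiling division: each shard's data covers RF nodes' worth of the ring, so for a cluster of N nodes the Solr API needs only enough shards to cover the whole ring.

```python
# Minimum number of shard queries for a Solr API search in a cluster of
# n nodes with replication factor rf: ceil(n / rf) shards cover the ring.

def solr_api_min_queries(n: int, rf: int) -> int:
    # Ceiling division without floats: -(-n // rf) == ceil(n / rf).
    return -(-n // rf)

# 6 nodes, RF = 3: a search can be answered with 2 shard queries.
print(solr_api_min_queries(6, 3))  # 2
print(solr_api_min_queries(6, 2))  # 3
```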
Updated: May 21st, 2015