Robust and scalable distributed queries with DSE Search 4.0

By Sergio Bossa - April 10, 2014

Starting with version 4.0, DataStax Enterprise (DSE) Search gained more robust and scalable distributed queries, thanks to a custom implementation built on top of Netty.
But before talking about its advantages over the old HTTP-based queries, let’s look at how distributed queries work and why we improved them.

DSE Search provides a custom distributed search implementation on top of Apache Solr. The main challenge, and objective, is to “cover” all the token ranges in the Cassandra ring, so that the query spans the full distributed index while touching only the minimum number of required nodes: because of the configured replication factor, every node can own more than one token range, so a carefully chosen subset of nodes is enough.

[Figure: distributed query]

In the figure above, we have four nodes, each owning two token ranges (replication factor of 2); when node 1 receives a distributed query request, it acts as the coordinator for the distributed search, executing the following algorithm (sketched in code after the list):

  1. Select the optimum set of nodes needed to cover the ring, where “optimum” means minimal and with no overlapping token ranges (itself and node 3 in the example above).
  2. Send the query to the selected nodes and collect the identifiers of the documents satisfying the query.
  3. Merge the documents according to their score.
  4. Send a query by id to retrieve the actual data of the final documents chosen during the previous step.
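
To make the four steps concrete, here is a minimal Java sketch of the coordinator’s scatter/gather flow. It is our own illustration, not the actual DSE Search code: the Node, ScoredDoc, and Document types and the queryIds/fetchById stubs are hypothetical, and the greedy selection is only an approximation of DSE’s minimal, non-overlapping coverage:

```java
import java.util.*;

// Illustrative sketch of the coordinator algorithm described above;
// names and types are hypothetical, not the actual DSE Search code.
public class DistributedQueryCoordinator {

    record Node(String id, Set<String> tokenRanges) {}
    record ScoredDoc(String docId, double score) {}
    record Document(String docId, String data) {}

    // Step 1: greedily pick nodes until every token range is covered;
    // a simple stand-in for DSE's minimal, non-overlapping selection.
    // Assumes the ring's nodes can cover all ranges between them.
    static List<Node> selectCoverage(Collection<Node> ring, Set<String> allRanges) {
        Set<String> uncovered = new HashSet<>(allRanges);
        List<Node> selected = new ArrayList<>();
        while (!uncovered.isEmpty()) {
            Node best = ring.stream()
                    .max(Comparator.comparingLong(
                            n -> n.tokenRanges().stream().filter(uncovered::contains).count()))
                    .orElseThrow();
            selected.add(best);
            uncovered.removeAll(best.tokenRanges());
        }
        return selected;
    }

    // Steps 2-4: scatter the query, merge hits by score, fetch the winners by id.
    static List<Document> distributedQuery(String query, Collection<Node> ring,
                                           Set<String> allRanges, int limit) {
        List<Node> nodes = selectCoverage(ring, allRanges);                 // step 1
        List<ScoredDoc> hits = new ArrayList<>();
        for (Node node : nodes) {
            hits.addAll(queryIds(node, query));                             // step 2
        }
        hits.sort(Comparator.comparingDouble(ScoredDoc::score).reversed()); // step 3
        List<Document> results = new ArrayList<>();
        for (ScoredDoc hit : hits.subList(0, Math.min(limit, hits.size()))) {
            results.add(fetchById(hit.docId()));                            // step 4
        }
        return results;
    }

    // Placeholders standing in for the per-node request/response phases.
    static List<ScoredDoc> queryIds(Node node, String query) { return List.of(); }
    static Document fetchById(String docId) { return new Document(docId, ""); }
}
```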

Steps #2 and #4 each involve a request/response phase with possibly the coordinator node itself as well as remote nodes; this was previously implemented by relying on vanilla Solr distributed query capabilities, that is, by running queries against either the local or a remote HTTP server, which caused a few major problems.

The first one was a distributed deadlock:

[Figure: deadlock1]

As the figure above shows, a distributed deadlock could happen if two (or more) clients concurrently queried two nodes (1), exhausting the HTTP server’s request-serving threads and preventing each node from talking to the other (2) to complete the distributed query.

To make things worse, even the local node was queried via HTTP:

[Figure: deadlock2]

Here, the two nodes concurrently query the same coordinator node (1), exhausting its threads and making it impossible for it to even satisfy the local query (2).
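
The failure mode is easy to reproduce in miniature. The following self-contained Java snippet (our own demonstration, not DSE code) deadlocks for exactly the reason shown in the figures: every worker in a bounded pool blocks on a sub-request that only the same, already exhausted pool can serve:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Minimal demonstration of the thread-per-request deadlock, not DSE code.
public class DeadlockDemo {
    public static void main(String[] args) throws Exception {
        // Stands in for the HTTP server's bounded request-serving threads.
        ExecutorService requestThreads = Executors.newFixedThreadPool(2);

        Callable<String> distributedQuery = () -> {
            // The coordinating thread blocks here, waiting for a "shard query"
            // that needs a thread from the same, already exhausted pool.
            Future<String> shard = requestThreads.submit(() -> "shard result");
            return shard.get();
        };

        // Two concurrent client queries occupy both threads; each waits
        // forever on its sub-query, so the pool never frees up.
        Future<String> q1 = requestThreads.submit(distributedQuery);
        Future<String> q2 = requestThreads.submit(distributedQuery);
        System.out.println(q1.get() + " / " + q2.get()); // never prints
    }
}
```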

Overall, regardless of whether the distributed deadlock actually occurred, we clearly had significant performance problems as well: querying the local node via HTTP makes no sense, and above all, the thread-per-request model does not scale with the number of requests.

Hence, we rewrote the distributed search implementation on top of Netty, because of its:
* Top performance (see the graph below).
* Transparent transport APIs, which allowed us to keep a unified API to talk with both local and remote nodes (sketched in code after this list).
* Easy-to-use codec APIs, which allowed us to keep the same binary encoding as in vanilla Solr.
* Out-of-the-box SSL support.
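
As a rough illustration of that transport transparency, the sketch below (our own code, with a hypothetical address and port, not the DSE implementation) builds the same Netty Bootstrap for either an in-VM LocalChannel or a TCP NioSocketChannel; only the channel class and the address differ, while the handler pipeline stays the same:

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandler;
import io.netty.channel.DefaultEventLoopGroup;
import io.netty.channel.local.LocalAddress;
import io.netty.channel.local.LocalChannel;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;

import java.net.InetSocketAddress;

// Our own illustration of Netty's transport transparency, not DSE code.
public class TransportSketch {

    // The same bootstrap logic serves both cases; only the channel class
    // and the address change, while the handler pipeline stays identical.
    static Channel connect(boolean local, ChannelHandler pipeline) throws InterruptedException {
        Bootstrap bootstrap = new Bootstrap()
                .group(local ? new DefaultEventLoopGroup() : new NioEventLoopGroup())
                .channel(local ? LocalChannel.class : NioSocketChannel.class)
                .handler(pipeline);
        return local
                // In-VM transport: assumes a LocalServerChannel was bound
                // to this (hypothetical) address by the local server side.
                ? bootstrap.connect(new LocalAddress("local-shard")).sync().channel()
                // Regular TCP transport to a remote node (hypothetical port).
                : bootstrap.connect(new InetSocketAddress("remote-node", 8984)).sync().channel();
    }
}
```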

The way it works is completely transparent to other parts of the system, as well as to the user, but the performance difference is amazing:

[Graph: Tomcat vs. Netty latency quantiles]

The graph above compares Tomcat and Netty latencies on our test setup, showing a drop of more than 50% at the higher percentiles.

  • Netty uses non-blocking IO: rather than waiting for the response to come back, as in the thread-per-request model, each request thread becomes immediately available to process the next request and is notified later, when the response arrives; this avoids the deadlock problem altogether and lets us serve a higher number of concurrent requests, faster and with far fewer threads.
  • Our implementation uses Netty’s local in-VM transport to query the local index, avoiding expensive HTTP requests to localhost.
  • Our implementation is a little smarter than that as well: a connection pooling algorithm allocates connections fairly, to avoid creating too many connections toward too few hosts (which would starve clients trying to query other hosts), and prioritizes less busy connections, in order to process queued requests as fast as possible (see the sketch below).
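
A minimal sketch of those two pooling ideas, assuming a simple per-host cap and an in-flight counter per connection; this is our own illustration rather than the actual DSE implementation:

```java
import java.util.*;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch, not DSE code: a fixed per-host cap keeps one host
// from hoarding connections, and the least busy connection is used first.
public class FairPool {

    static class Conn {
        final String host;
        final AtomicInteger inFlight = new AtomicInteger();
        Conn(String host) { this.host = host; }
    }

    private final Map<String, List<Conn>> byHost = new HashMap<>();
    private final int perHostCap;

    FairPool(int perHostCap) { this.perHostCap = perHostCap; }

    synchronized Conn acquire(String host) {
        List<Conn> conns = byHost.computeIfAbsent(host, h -> new ArrayList<>());
        // Fair allocation: never open more than the per-host cap, so a hot
        // host cannot starve clients talking to other hosts.
        if (conns.size() < perHostCap) {
            Conn fresh = new Conn(host);
            conns.add(fresh);
            fresh.inFlight.incrementAndGet();
            return fresh;
        }
        // Prioritize the least busy connection so queued requests drain fast.
        Conn least = conns.stream()
                .min(Comparator.comparingInt(c -> c.inFlight.get()))
                .orElseThrow();
        least.inFlight.incrementAndGet();
        return least;
    }

    void release(Conn conn) { conn.inFlight.decrementAndGet(); }
}
```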

If you’re impatient to try it (and you should be), look no further than your dse.yaml: set shard_transport_options.type to netty on all your DSE Search cluster nodes.
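
For reference, the relevant fragment should look roughly like this (a sketch assuming the standard nested layout of the shard_transport_options section in dse.yaml):

```yaml
# dse.yaml, on every DSE Search node in the cluster
shard_transport_options:
    type: netty
```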



Comments

  1. sanjay says:

    Great article, thanks! 4.0 will give unparalleled query performance.

  2. best says:

    A bit like comparing apples to oranges. Netty is a low level API, while Tomcat is a Servlet container.
    And since Tomcat supports servlet spec 3.1, you could have used non-blocking I/O in Tomcat too.

  3. Sergio Bossa says:

    Unfortunately, Tomcat non-blocking IO would not have solved all of the issues (such as the HTTP overhead even on local requests), at least not without much more invasive changes.
    Also, Netty is much more optimized for performance.
