Dial C* for Operator: Unlocking Advanced Cassandra Configurations
DataStax Kubernetes Operator for Apache Cassandra™ manages Cassandra clusters in Kubernetes. It provides configuration management, scaling (adding and removing nodes), and error handling, and as part of K8ssandra it makes the configuration changes required for common operations like repair, backup, and restore. This post focuses on some of the configuration management capabilities of Cass Operator through a series of examples. For some general background on Cass Operator, check out the DataStax documentation.
The first set of examples does not require any advanced understanding of Cass Operator or of Kubernetes in general. The advanced examples assume a deeper understanding of Kubernetes, as they cover topics like init containers and StatefulSets.
The examples have been tested against Cass Operator 1.6.0, the latest version as of this writing. The full source of the examples can be found in the cassandradatacenter-examples repository on GitHub.
These examples demonstrate how to do basic configuration of Cassandra and of Kubernetes resources.
Cassandra and JVM
This snippet of a CassandraDatacenter manifest demonstrates how to configure cassandra.yaml properties and jvm-options for a Cassandra 3.11.10 deployment:
spec:
  config:
    cassandra-yaml:
      authenticator: PasswordAuthenticator
      authorizer: CassandraAuthorizer
      compaction_throughput_mb_per_sec: 128
      tombstone_warn_threshold: 5000
      tombstone_failure_threshold: 200000
      unlogged_batch_across_partitions_warn_threshold: 25
    jvm-options:
      initial_heap_size: "512m"
      max_heap_size: "512m"
      heap_size_young_generation: "256m"
      garbage_collector: CMS
      max_tenuring_threshold: 5
The properties under cassandra-yaml map directly to properties in cassandra.yaml. The properties under jvm-options configure heap and garbage collector settings in the jvm.options configuration file.
Note: Most, but not all, properties in cassandra.yaml and in jvm.options are supported at this time by Cass Operator.
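To make the one-to-one mapping concrete, here is a minimal Python sketch (purely illustrative; Cass Operator's real config builder is a separate component with far more logic) that renders a cassandra-yaml block into the key: value lines you would expect to find in the generated cassandra.yaml:

```python
# Illustration only: the real config builder handles nested structures,
# defaults, and version-specific properties.
def render_cassandra_yaml(config: dict) -> str:
    """Render the flat `cassandra-yaml` block of a CassandraDatacenter
    manifest into cassandra.yaml-style `key: value` lines."""
    return "\n".join(f"{key}: {value}" for key, value in config.items())

cassandra_yaml = {
    "authenticator": "PasswordAuthenticator",
    "authorizer": "CassandraAuthorizer",
    "compaction_throughput_mb_per_sec": 128,
}

print(render_cassandra_yaml(cassandra_yaml))
# authenticator: PasswordAuthenticator
# authorizer: CassandraAuthorizer
# compaction_throughput_mb_per_sec: 128
```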
Cass Operator configures the cluster to use racks and GossipingPropertyFileSnitch. This example configures a multi-rack cluster spread across three availability zones:
spec:
  size: 3
  racks:
  - name: rack1
    nodeAffinityLabels:
      topology.kubernetes.io/zone: us-east1-a
  - name: rack2
    nodeAffinityLabels:
      topology.kubernetes.io/zone: us-east1-b
  - name: rack3
    nodeAffinityLabels:
      topology.kubernetes.io/zone: us-east1-c
The CassandraDatacenter declares a three-node cluster with three racks. Cass Operator makes a best effort to evenly distribute nodes across racks. In this example the racks will be balanced, with one node each. Cass Operator uses node affinity to pin each rack to a different availability zone.
topology.kubernetes.io/zone is a common label that is applied to all worker nodes.
Note: Your Kubernetes cluster must have at least three worker nodes with the appropriate labels for this example to work.
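The best-effort balancing can be sketched in Python as base-plus-remainder assignment; this is an illustration of the invariant (rack sizes differ by at most one), not Cass Operator's actual placement code:

```python
def distribute(size: int, racks: list[str]) -> dict[str, int]:
    """Best-effort even spread of `size` nodes across `racks`:
    each rack gets size // len(racks) nodes, and any remainder
    goes to the earlier racks, one extra node each."""
    base, extra = divmod(size, len(racks))
    return {rack: base + (1 if i < extra else 0)
            for i, rack in enumerate(racks)}

# Three nodes across three racks: perfectly balanced.
print(distribute(3, ["rack1", "rack2", "rack3"]))
# {'rack1': 1, 'rack2': 1, 'rack3': 1}

# Seven nodes: rack sizes differ by at most one.
print(distribute(7, ["rack1", "rack2", "rack3"]))
# {'rack1': 3, 'rack2': 2, 'rack3': 2}
```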
Cass Operator uses pod anti-affinity by default to ensure that Cassandra pods are isolated from one another. Kubernetes will not schedule multiple Cassandra pods on the same worker node.
Cassandra pod resources
In general it is a good practice to specify resource requirements for Kubernetes applications and services. Cassandra is no exception. A pod can have multiple containers. The Cassandra pod includes one init container and two main containers.
The server-config-init container generates all of the configuration files and writes them to /config.

The cassandra container runs Cassandra. The container runs the management-api as PID 1, and the management-api in turn manages the lifecycle of Cassandra. Because it is the primary process in the container, the management-api's logs go to stdout. This means that if you execute kubectl logs <cassandra-pod> -c cassandra you will get back the management-api's logs, not Cassandra's.

The server-system-logger container is a lightweight busybox container that tails /var/log/cassandra/system.log. You can view Cassandra's logs with kubectl logs <cassandra-pod> -c server-system-logger.
This example illustrates how to specify CPU and memory requirements for each of these containers.
spec:
  resources:
    requests:
      cpu: 2
      memory: 2Gi
  configBuilderResources:
    requests:
      cpu: 1
      memory: 125Mi
    limits:
      cpu: 1
      memory: 250Mi
  systemLoggerResources:
    requests:
      cpu: 100m
      memory: 20Mi
    limits:
      cpu: 100m
      memory: 30Mi
Kubernetes will only schedule the Cassandra pods on worker nodes with enough resources to satisfy the requests.
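The cpu and memory values above use Kubernetes quantity notation: m means millicores (thousandths of a CPU) and Mi/Gi are binary (power-of-two) units. A hypothetical parse_quantity helper, covering only the suffixes used in this example rather than the full Kubernetes quantity grammar, shows how they convert:

```python
def parse_quantity(q: str) -> float:
    """Convert a few common Kubernetes quantity suffixes to a base unit:
    cores for CPU, bytes for memory. Hypothetical helper; the real
    grammar supports more suffixes (k, M, G, Ti, exponent notation)."""
    suffixes = {
        "m": 1e-3,        # millicores: 100m == 0.1 CPU
        "Ki": 1024,       # binary kilobytes
        "Mi": 1024 ** 2,  # binary megabytes
        "Gi": 1024 ** 3,  # binary gigabytes
    }
    # Check longer suffixes first so "Mi" is not mistaken for "m".
    for suffix, factor in sorted(suffixes.items(), key=lambda s: -len(s[0])):
        if q.endswith(suffix):
            return float(q[: -len(suffix)]) * factor
    return float(q)  # plain number, e.g. cpu: 2

print(parse_quantity("100m"))  # 0.1 CPU cores
print(parse_quantity("2Gi"))   # 2147483648.0 bytes
```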
The following examples are advanced for a couple of reasons. First, they require more understanding of Kubernetes types and concepts like StatefulSets, init containers, and volumes. Second, the examples touch on implementation details of Cass Operator.
Cass Operator creates a StatefulSet for each rack. The template property of a StatefulSet fully describes the pods that will be created.
Custom pod labels
Suppose you want to add custom labels to the Cassandra pods. There is no specific property, like podLabels, in the CassandraDatacenter object to do this. The podTemplateSpec property, however, makes it possible.
spec:
  podTemplateSpec:
    metadata:
      labels:
        env: dev
        app.kubernetes.io/part-of: examples
        app.kubernetes.io/version: "0.47.3"
    spec:
      containers: []
Cass Operator will add each of these labels to the template of the StatefulSet. This in turn means that they will be added to each of the pods.
Note: If the containers property is omitted, we get a validation error about a null value; thus, we set it to an empty array.
This next example demonstrates how to add an environment variable to the cassandra container.
spec:
  podTemplateSpec:
    spec:
      containers:
      - name: cassandra
        env:
        - name: HELLO
          value: WORLD
In and of itself this may not seem particularly useful, but it actually highlights something very interesting: how Cass Operator applies merge semantics with podTemplateSpec. As previously mentioned, Cass Operator already defines the cassandra container, along with several default environment variables for it. The declaration in this podTemplateSpec does not replace the default cassandra container, nor does it replace the default environment variables. The new environment variable is simply added to the list along with the default ones.
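That merge behavior can be approximated as a merge keyed on the env var name, similar to Kubernetes strategic-merge-patch semantics for lists; the sketch below is illustrative, and the default variable names in it are placeholders, not the actual defaults Cass Operator sets:

```python
def merge_env(defaults: list[dict], overrides: list[dict]) -> list[dict]:
    """Merge container env lists by `name`: an override with a new name
    is appended; an override reusing a default's name replaces it.
    Rough approximation of strategic-merge-patch list semantics."""
    merged = {var["name"]: var for var in defaults}
    for var in overrides:
        merged[var["name"]] = var
    return list(merged.values())

# Placeholder defaults standing in for the operator's real ones.
defaults = [{"name": "DEFAULT_A", "value": "1"},
            {"name": "DEFAULT_B", "value": "2"}]
user = [{"name": "HELLO", "value": "WORLD"}]

print([v["name"] for v in merge_env(defaults, user)])
# ['DEFAULT_A', 'DEFAULT_B', 'HELLO']
```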
Now let's take a look at how to add an init container to the Cassandra pod.
spec:
  podTemplateSpec:
    spec:
      containers: []
      initContainers:
      - name: hello
        image: busybox
        args:
        - /bin/sh
        - -c
        - echo Hello World
Init containers run in the order declared. The hello container will run before the server-config-init container. If we want the server-config-init container to run first, we can make the following change:
spec:
  podTemplateSpec:
    spec:
      containers: []
      initContainers:
      - name: server-config-init
      - name: hello
        image: busybox
        args:
        - /bin/sh
        - -c
        - echo Hello World
Cass Operator will run server-config-init first using its default configuration.
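One way to picture this reconciliation: entries are kept in declared order, a bare - name: entry for a container the operator knows picks up its default definition, and any unmentioned default is appended at the end. The following Python sketch is an illustration of that behavior, not the operator's implementation (the image name is a placeholder too):

```python
def reconcile_init_containers(declared: list[dict], defaults: dict) -> list[dict]:
    """Keep the user's declared order; a bare `- name: ...` entry for a
    container the operator knows is filled in from its default
    definition; known defaults the user did not mention are appended."""
    result, seen = [], set()
    for entry in declared:
        seen.add(entry["name"])
        # Merge the default definition (if any) under the user's entry.
        result.append({**defaults.get(entry["name"], {}), **entry})
    for name, spec in defaults.items():
        if name not in seen:  # unmentioned default still has to run
            result.append({"name": name, **spec})
    return result

defaults = {"server-config-init": {"image": "example/config-builder"}}

# Declared first, so it runs first and inherits its default image.
print([c["name"] for c in reconcile_init_containers(
    [{"name": "server-config-init"}, {"name": "hello", "image": "busybox"}],
    defaults)])
# ['server-config-init', 'hello']

# Not declared, so the default is appended after the user's container.
print([c["name"] for c in reconcile_init_containers(
    [{"name": "hello", "image": "busybox"}], defaults)])
# ['hello', 'server-config-init']
```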
This final example builds off of the previous ones and is borrowed from the K8ssandra project. K8ssandra deploys Cass Operator along with additional components, one of which is Reaper. Reaper manages repair operations for Cassandra and relies on JMX to perform them. Cassandra has remote JMX disabled by default, so it has to be enabled in order for Reaper to function properly. JMX authentication has to be configured as well, because Cassandra enables it when remote JMX is enabled.
JMX access is configured in the /etc/cassandra/cassandra-env.sh script. It checks to see if the LOCAL_JMX environment variable is set: if LOCAL_JMX is unset or set to yes, JMX access is local-only, while any other value (the example below uses no) enables remote JMX along with authentication. JMX credentials are stored in /etc/cassandra/jmxremote.password, and we need to add a set of credentials to that file. We can use an init container for this, but there is another detail to be worked out first: Cass Operator does not create a volume mount for /etc/cassandra by default.
Cass Operator creates an emptyDir volume named server-config. The server-config-init init container and the cassandra container both mount this volume at /config. server-config-init generates configuration files and writes them into this directory. When the cassandra container starts, it copies everything from /config into /etc/cassandra. This provides us with the information needed to build the init container.
spec:
  podTemplateSpec:
    spec:
      containers:
      - name: cassandra
        env:
        - name: LOCAL_JMX
          value: "no"
      initContainers:
      - name: server-config-init
      - name: jmx-credentials
        image: busybox
        env:
        - name: JMX_USERNAME
          valueFrom:
            secretKeyRef:
              name: jmx-credentials
              key: username
        - name: JMX_PASSWORD
          valueFrom:
            secretKeyRef:
              name: jmx-credentials
              key: password
        args:
        - /bin/sh
        - -c
        - echo "$JMX_USERNAME $JMX_PASSWORD" > /config/jmxremote.password
        volumeMounts:
        - name: server-config
          mountPath: /config
Try to run nodetool without credentials using kubectl exec -it <cassandra-pod> -c cassandra -- nodetool status. It should fail with a SecurityException that says credentials are required. It should succeed if you run kubectl exec -it <cassandra-pod> -c cassandra -- nodetool -u cassandra -pw cassandra status.
Cass Operator provides a variety of options to configure Cassandra, the JVM, and the StatefulSets that it generates. When you need to configure something that Cass Operator does not expose or when you need something more advanced like an init container with additional volumes, podTemplateSpec offers tremendous flexibility. That flexibility comes with risks, though. The remote JMX example is entirely dependent on implementation details of Cass Operator. It would be a nice enhancement for Cass Operator to enable you to configure things like init containers and sidecar containers without being tightly coupled to implementation details.