Under either of these conditions, restart the nodes as real-time Cassandra nodes before upgrading, as described in the following procedures:
If neither condition applies, restarting the nodes as real-time Cassandra nodes is an extra step that you can skip. Restarting the nodes as real-time Cassandra nodes prevents unwanted schema changes from occurring when you start the upgraded node.
To upgrade a tarball release
Stop the first node to be upgraded and restart it as a real-time Cassandra node:
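In a tarball installation, a real-time Cassandra node is started with no -t (Hadoop) or -s (Solr) option. A minimal sketch, assuming the default tarball layout with the dse script in the install directory's bin folder:

```shell
# Stop the node first (run from the install directory; path is illustrative).
bin/dse cassandra-stop

# Restart the node in real-time Cassandra mode
# (no -t Hadoop or -s Solr flag).
bin/dse cassandra
```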
Follow the instructions in Upgrading a Tarball Installation of DataStax Community 1.0.x/1.1 to DataStax Enterprise 2.2.x.
Start each node as a real-time Cassandra node during the rolling upgrade.
Validate the upgrade of each node.
After all nodes are upgraded and the schemas agree, use another rolling restart to return the nodes to their original roles as Hadoop or Solr nodes:
Hadoop node: dse cassandra -t
Solr node: dse cassandra -s
To upgrade a packaged release
Stop the dse service, and then disable Hadoop or Solr by setting options in /etc/default/dse:
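The relevant options in /etc/default/dse are the per-workload enable flags. A sketch of the edit, assuming the standard HADOOP_ENABLED and SOLR_ENABLED variables used by packaged installs:

```shell
# /etc/default/dse -- disable the Hadoop and Solr workloads so the node
# starts as a real-time Cassandra node (variable names assume the
# standard packaged-install defaults file).
HADOOP_ENABLED=0
SOLR_ENABLED=0
```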
Run the Yum (CentOS/RHEL/Oracle Linux) or APT (Debian/Ubuntu) update commands.
Run the install commands shown in Installing the DataStax Enterprise Package on Debian and Ubuntu or Installing the DataStax Enterprise Package on RHEL-based Distributions.
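The update and install commands might look like the following sketch; the package name dse-full is an assumption, and the linked install guides have the authoritative commands and repository setup for your release:

```shell
# CentOS / RHEL / Oracle Linux (package name is an assumption)
sudo yum update
sudo yum install dse-full

# Debian / Ubuntu
sudo apt-get update
sudo apt-get install dse-full
```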
Start the first node.
Configure the node: open the old cassandra.yaml and the new cassandra.yaml, and diff the two files. Merge the differences by hand from the old file into the new one, except do not merge the snitch setting.
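The diff-and-merge step can be sketched with stand-in files; the paths and yaml contents below are illustrative only, not your actual configuration:

```shell
# Create two stand-in cassandra.yaml versions to demonstrate the diff.
mkdir -p /tmp/yaml_diff_demo && cd /tmp/yaml_diff_demo
printf 'cluster_name: prod\nendpoint_snitch: PropertyFileSnitch\n' > cassandra.yaml.old
printf 'cluster_name: Test Cluster\nendpoint_snitch: SimpleSnitch\nkey_cache_size_in_mb: 100\n' > cassandra.yaml.new

# In diff's normal output, lines prefixed with < come from the old file
# and lines prefixed with > come from the new file. Carry old values
# (for example, cluster_name) into the new file by hand, but leave the
# endpoint_snitch line alone per the upgrade procedure.
diff cassandra.yaml.old cassandra.yaml.new || true
```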
Configure the snitch setting and complete the upgrade as described in Completing the Configuration and Starting Up the Upgraded Node.
Start up each node as a real-time Cassandra node during the rolling upgrade (leave HADOOP or SOLR disabled).
After all nodes are upgraded and the schemas agree, you can use a rolling restart to set the nodes back to their original roles as Hadoop or Solr nodes.
To restart nodes
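For a packaged install, returning a node to its original role would typically mean re-enabling its workload and restarting the service. A sketch, assuming the HADOOP_ENABLED/SOLR_ENABLED flags in /etc/default/dse and a service named dse:

```shell
# Stop the real-time node (service name assumes a packaged install).
sudo service dse stop

# Re-enable the node's original workload in /etc/default/dse:
#   HADOOP_ENABLED=1   (for a Hadoop node)
#   SOLR_ENABLED=1     (for a Solr node)

# Start the node in its restored role.
sudo service dse start
```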