DataStax Enterprise 4.5

Installing DataStax Enterprise using the binary tarball

Use this install method for 64-bit Linux platforms.

For a complete list of supported platforms, see DataStax Enterprise Supported Platforms.

Prerequisites

  • All Linux platforms:
    • Latest version of Oracle Java SE Runtime Environment 7, not OpenJDK. See Installing the Oracle JRE.
    • Java Native Access (JNA). The recommended versions are 3.2.4 to 3.2.7. Do not install version 4.0 and above. See Installing the JNA.
  • Debian/Ubuntu distributions:
  • RedHat-compatible distributions:
    • If installing on a 64-bit Oracle Linux distribution, first install the 32-bit versions of glibc libraries.
    • If you are using an older RHEL-based Linux distribution, such as CentOS-5, you may need to replace the Snappy compression/decompression library; see the DataStax Enterprise 4.5.0 Release Notes.
    • Before installing, make sure EPEL (Extra Packages for Enterprise Linux) is installed. See Installing EPEL.
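
For example, on a RedHat-compatible distribution, the 32-bit glibc libraries can usually be installed with yum, and on CentOS the EPEL repository can be enabled the same way. This is a sketch only; package names vary by distribution and release, and on RHEL the epel-release RPM comes from the Fedora Project as described in Installing EPEL:

  $ sudo yum install glibc.i686      # 32-bit glibc on 64-bit Oracle Linux
  $ sudo yum install epel-release    # enables the EPEL repository on CentOS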

Also see Recommended production settings and the DataStax Enterprise Reference Architecture white paper.

The binary tarball runs as a stand-alone process.

Procedure

These steps install DataStax Enterprise. After installing, you must configure and start DataStax Enterprise.

In a terminal window:

  1. Check which version of Java is installed:
    $ java -version

    If not Oracle Java 7, see Installing the Oracle JRE.

    Important: Package management tools do not install Oracle Java.
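
    For reference, the Oracle JRE 7 version banner typically looks like the following (build numbers differ); an OpenJDK build reports "OpenJDK" in the runtime and VM lines instead:
      java version "1.7.0_xx"
      Java(TM) SE Runtime Environment (build 1.7.0_xx-bxx)
      Java HotSpot(TM) 64-Bit Server VM (build xx.xx-bxx, mixed mode)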
  2. Download and extract the DataStax Enterprise and OpsCenter (optional) tarballs:
    • $ curl -L http://username:password@downloads.datastax.com/enterprise/dse.tar.gz | tar xz
    • $ curl -L http://downloads.datastax.com/community/opscenter.tar.gz | tar xz
    where username and password are the DataStax account credentials from your registration confirmation email.
    Note: For production installations, DataStax recommends installing OpsCenter separately from the cluster. See the OpsCenter documentation.

    The files are downloaded and extracted into the dse-4.5.x directory.
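
    If you prefer to keep the downloaded archives, you can download and extract in two steps instead of piping to tar; for example, for the DataStax Enterprise tarball:
      $ curl -O -L http://username:password@downloads.datastax.com/enterprise/dse.tar.gz
      $ tar xzf dse.tar.gz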

  3. If you do not have root access to the default directory locations, either define your own directory locations as described in the following steps or change the ownership of the following directories:
    • /var/lib/cassandra
    • /var/log/cassandra
    • /var/lib/spark
    • /var/log/spark
    $ sudo mkdir -p /var/lib/cassandra; sudo chown -R $USER:$GROUP /var/lib/cassandra
    $ sudo mkdir -p /var/log/cassandra; sudo chown -R $USER:$GROUP /var/log/cassandra
    $ sudo mkdir -p /var/lib/spark; sudo chown -R $USER:$GROUP /var/lib/spark
    $ sudo mkdir -p /var/log/spark; sudo chown -R $USER:$GROUP /var/log/spark
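    To confirm that the directories are now owned by the account that will run DataStax Enterprise, you can list them:
      $ ls -ld /var/lib/cassandra /var/log/cassandra /var/lib/spark /var/log/spark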
  4. (Optional) If you do not want to use the default data and logging directories, you can define your own directory locations:
    1. Make the data and logging directories:
      $ mkdir install_location/dse-data
      $ cd install_location/dse-data
      $ mkdir commitlog
      $ mkdir saved_caches
    2. Go to the directory containing the cassandra.yaml file:
      $ cd install_location/resources/cassandra/conf
    3. Edit the following lines in the cassandra.yaml file:
      data_file_directories: install_location/dse-data
      commitlog_directory: install_location/dse-data/commitlog
      saved_caches_directory: install_location/dse-data/saved_caches
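      For example, assuming a hypothetical install location of /home/dse/dse-4.5.x, the edited entries might look like the following. Note that data_file_directories is a YAML list, so its value goes on the next line with a leading dash:
        data_file_directories:
            - /home/dse/dse-4.5.x/dse-data
        commitlog_directory: /home/dse/dse-4.5.x/dse-data/commitlog
        saved_caches_directory: /home/dse/dse-4.5.x/dse-data/saved_caches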
  5. (Optional) If you do not want to use the default Spark directories, you can define your own directory locations:
    1. Make the Spark lib and log directories.
    2. Go to the directory containing the spark-env.sh file:
      • Installer-Services and Package installations: /etc/dse/spark/spark-env.sh
      • Installer-No Services and Tarball installations: install_location/resources/spark/conf/spark-env.sh
    3. Edit the spark-env.sh file to match the locations of your Spark lib and log directories, as described in Spark configuration.
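    For illustration only, a minimal sketch of this step, assuming hypothetical directories under /home/dse and the standard Spark environment variables SPARK_WORKER_DIR and SPARK_LOG_DIR; the exact settings DataStax Enterprise uses are listed in Spark configuration:
      $ mkdir -p /home/dse/spark/worker /home/dse/spark/log

      # In spark-env.sh:
      export SPARK_WORKER_DIR="/home/dse/spark/worker"
      export SPARK_LOG_DIR="/home/dse/spark/log"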

Results

DataStax Enterprise is ready for configuration.

What's next

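Configure and start DataStax Enterprise. As a quick orientation only (a sketch, not a substitute for the documentation on starting DataStax Enterprise), a tarball installation is started from the install directory; for example, to start a node:

  $ install_location/bin/dse cassandra

The node then runs as a stand-alone process, as noted above.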