DataStax Enterprise 3.0 Documentation

Getting Started with Analytics and Hadoop in DataStax Enterprise

This documentation corresponds to an earlier product version. Make sure this document corresponds to your version.


In DataStax Enterprise, you can run analytics on your Cassandra data via the platform's built-in Hadoop integration. The Hadoop component in DataStax Enterprise is not meant to be a full Hadoop distribution; rather, it enables analytics to run across DataStax Enterprise's distributed, shared-nothing architecture. Instead of using the Hadoop Distributed File System (HDFS), DataStax Enterprise uses Cassandra File System (CassandraFS) keyspaces for the underlying storage layer. This provides replication and data location awareness, and takes full advantage of Cassandra's peer-to-peer architecture.

DataStax Enterprise supports running analytics on Cassandra data with Hadoop components such as Hive, Pig, Mahout, and Sqoop.

Starting and stopping a DSE Analytics node

The way you start up a DSE Analytics node depends on the type of installation: tarball or packaged.

Tarball installation

From the install directory, use this command to start the analytics node:

bin/dse cassandra -t

The analytics node starts up.

From the install directory, use these commands to stop the analytics node:

  1. Get the dse process ID from the top of output from this command:

    ps auwx | grep dse
  2. Run the cassandra-stop command using the process ID (PID) from the top of the output.

    bin/dse cassandra-stop <PID>
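The two stop steps can be combined into a small script. This is only a sketch: the canned `sample` line below stands in for real `ps auwx | grep dse` output, and the final stop command is echoed rather than run.

```shell
# Sketch: pull the PID (second column) from the first line of process output.
# A canned sample line stands in for real `ps auwx | grep dse` output.
sample='cassandra  9505  1.2  3.4  dse ...'
pid=$(echo "$sample" | awk 'NR==1 {print $2}')
# Echo the stop command instead of running it in this sketch:
echo "bin/dse cassandra-stop $pid"
```

On a live node, you would pass the extracted PID directly to bin/dse cassandra-stop as shown in step 2.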

Packaged installation

  1. Enable Hadoop mode by setting this option in /etc/default/dse: HADOOP_ENABLED=1

  2. Start the dse service using this command:

    sudo service dse start

    The analytics node starts up.

You stop an analytics node using this command:

sudo service dse stop
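As a quick sanity check after editing /etc/default/dse, you can grep for the HADOOP_ENABLED setting before starting the service. A minimal sketch, using a temporary sample file so the snippet is self-contained (on a real packaged installation the path is /etc/default/dse):

```shell
# Write a sample defaults file standing in for /etc/default/dse
cat > /tmp/dse-defaults <<'EOF'
HADOOP_ENABLED=1
EOF
# Hadoop mode is on only if HADOOP_ENABLED=1 is set
if grep -q '^HADOOP_ENABLED=1' /tmp/dse-defaults; then
  echo "Hadoop mode enabled"
fi
```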

Hadoop getting started tutorial

In this tutorial, you download a text file containing a State of the Union speech and upload the file to the CassandraFS. Next, you run a classic MapReduce job that counts the words in the file and creates a sorted list of word/count pairs as output. The mapper and reducer are provided in a JAR file. Download the State of the Union speech now.
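Before running the job, it can help to see what word count computes. The job's map step emits one (word, 1) pair per word, and the reduce step sums the counts per word. The same result can be sketched locally with standard shell tools; this is illustrative only and does not touch CassandraFS or Hadoop:

```shell
# Map: split text into one word per line.
# Reduce: count occurrences per word, then sort by count descending.
printf 'four score and seven years ago four score\n' \
  | tr -s ' ' '\n' \
  | sort \
  | uniq -c \
  | sort -rn
```

Words that appear twice ("four", "score") come out first, each with its count, mirroring the word/count pairs the MapReduce job writes in step 9.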

This tutorial assumes you started a DataStax Enterprise 3.0.7 analytics node on Linux. It also assumes you have permission to perform Hadoop and other DataStax Enterprise operations, for example, that you preface commands with sudo if necessary.


  1. Unzip the downloaded file into a directory of your choice on your file system.

    This file will be the input for the MapReduce job.

  2. Create a directory in the CassandraFS for the input file using the dse command version of the familiar hadoop fs command.

    cd <dse-install>
    bin/dse hadoop fs -mkdir /user/hadoop/wordcount/input
  3. Copy the input file that you downloaded to the CassandraFS.

    bin/dse hadoop fs -copyFromLocal <path to downloaded file> /user/hadoop/wordcount/input
  4. Check the version number of the hadoop-examples-<version>.jar. On tarball installations, the JAR is in the DataStax Enterprise resources directory. On packaged and AMI installations, the JAR is in the /usr/dse/hadoop directory.

  5. Get usage information about how to run the MapReduce job from the jar file. For example:

    bin/dse hadoop jar /<install_location>/resources/hadoop/hadoop-examples-<version>.jar wordcount

    The output is:

    2013-10-02 12:40:16.983 java[9505:1703] Unable to load realm info from SCDynamicStore
    Usage: wordcount <in> <out>

    If you see the SCDynamicStore message, you can safely ignore it; it is a harmless Java warning on Mac OS X and is widely documented.

  6. Run the Hadoop word count example in the JAR.

    bin/dse hadoop jar /<install_location>/resources/hadoop/hadoop-examples-<version>.jar \
      wordcount /user/hadoop/wordcount/input /user/hadoop/wordcount/output

    The output is:

    13/10/02 12:40:36 INFO input.FileInputFormat: Total input paths to process : 0
    13/10/02 12:40:36 INFO mapred.JobClient: Running job: job_201310020848_0002
    13/10/02 12:40:37 INFO mapred.JobClient:  map 0% reduce 0%
    . . .
    13/10/02 12:40:55 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=19164
    13/10/02 12:40:55 INFO mapred.JobClient:   Map-Reduce Framework
    . . .
  7. List the contents of the output directory on the CassandraFS.

    bin/dse hadoop fs -ls /user/hadoop/wordcount/output

    The output looks something like this:

    Found 3 items
    -rwxrwxrwx   1 root wheel      0 2013-10-02 12:58 /user/hadoop/wordcount/output/_SUCCESS
    drwxrwxrwx   - root wheel      0 2013-10-02 12:57 /user/hadoop/wordcount/output/_logs
    -rwxrwxrwx   1 root wheel  24528 2013-10-02 12:58 /user/hadoop/wordcount/output/part-r-00000
  8. Using the output file name from the directory listing, get more information about the output file using the dsetool utility.

    bin/dsetool checkcfs /user/hadoop/wordcount/output/part-r-00000

    The output is:

    Path: cfs://
      INode header:
        File type: FILE
        User: root
        Group: wheel
        Permissions: rwxrwxrwx (777)
        Block size: 67108864
        Compressed: true
        First save: true
        Modification time: Wed Oct 02 12:58:05 PDT 2013
        Block count: 1
        Blocks:                               subblocks   length   start     end
          (B) f2fa9d90-2b9c-11e3-9ccb-73ded3cb6170:   1    24528       0   24528
              f3030200-2b9c-11e3-9ccb-73ded3cb6170:        24528       0   24528
        Block locations:
        f2fa9d90-2b9c-11e3-9ccb-73ded3cb6170: [localhost]
        All data blocks ok.
  9. Finally, look at the output of the MapReduce job, the list of word/count pairs, using the dse version of the familiar hadoop fs -cat command.

    bin/dse hadoop fs -cat /user/hadoop/wordcount/output/part-r-00000

    The output is:

    "D."  1
    "Don't  1
    "I  4
    . . .
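The part-r-00000 file contains tab-separated word/count lines sorted by word. To see the most frequent words first, you can pipe the -cat output through sort. A sketch, using hypothetical sample lines in place of the real CassandraFS output:

```shell
# Sample word<TAB>count lines standing in for the real
# `bin/dse hadoop fs -cat .../part-r-00000` output
printf 'the\t81\nof\t40\nAmerica\t7\n' \
  | sort -t"$(printf '\t')" -k2,2nr
```

Sorting numerically (-n) in reverse (-r) on the second tab-separated field puts the highest counts at the top.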

Hadoop demos

After starting Hadoop, you can run these additional Hadoop demos:

  • Portfolio Manager demo: Demonstrates a hybrid workflow using DataStax Enterprise.
  • Hive Demo: Demonstrates using Hive to access data in Cassandra.
  • Mahout Demo: Demonstrates Mahout support in DataStax Enterprise by determining which entries in the sample input data file remained statistically in control and which did not.
  • Pig Demo: Create a Pig relation, perform a simple MapReduce job, and put the results back into CassandraFS or into a Cassandra column family.
  • Sqoop Demo: Migrates data from a MySQL database containing information from the North American Numbering Plan.

Setting the replication factor

The default replication for system keyspaces is 1. This replication factor is suitable for development and testing on a single node, not for a production environment. For production, increase the replication factor to at least 2 to ensure resilience to single-node failures. For example:

[default@unknown] UPDATE KEYSPACE cfs
   WITH placement_strategy = 'org.apache.cassandra.locator.NetworkTopologyStrategy'
   AND strategy_options={Analytics:3};

For more information, see Changing replication settings.

Configuration for running jobs on a remote cluster

This information is intended for advanced users.

To connect to external addresses:

  1. Make sure that hostname resolution works properly on the local host for the remote cluster nodes.

  2. Copy the dse-core-default.xml and dse-mapred-default.xml files from any working remote cluster node to your local Hadoop conf directory.

  3. Run the job with dse hadoop.

  4. If you need to override the job tracker (JT) location, or if DataStax Enterprise cannot detect it automatically, define the HADOOP_JT environment variable before running the job:

    HADOOP_JT=<jobtracker host>:<jobtracker port> dse hadoop jar ....
  5. If you need to connect to many different remote clusters from the same host:

    1. Before starting the job, copy the remote Hadoop conf directories fully to the local node (into different locations).
    2. Select the appropriate location by defining HADOOP_CONF_DIR.
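The two sub-steps above can be sketched as follows. The cluster names and paths are hypothetical, and the dse hadoop invocation is echoed rather than run:

```shell
# One conf directory per remote cluster (each copied from that cluster's nodes)
mkdir -p /tmp/remote-confs/cluster-east /tmp/remote-confs/cluster-west
# Select the target cluster for this job by pointing HADOOP_CONF_DIR at it
export HADOOP_CONF_DIR=/tmp/remote-confs/cluster-east
echo "dse hadoop jar my-job.jar (using $HADOOP_CONF_DIR)"
```

To submit the same job to the other cluster, you would only change HADOOP_CONF_DIR before invoking dse hadoop.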