DataStax Enterprise 3.0 Documentation

About the Cassandra File System

This documentation corresponds to an earlier product version. Make sure you are reading the documentation that matches your product version.

The Cassandra File System (CassandraFS) replaces the Hadoop Distributed File System (HDFS). It is designed to reduce the operational overhead of Hadoop by removing the single point of failure, the Hadoop NameNode, and to offer easy Hadoop integration for Cassandra users. When an analytics node starts up, DSE creates a default CassandraFS rooted at cfs:/ and an archive file system named cfs-archive.
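
For example, once an analytics node is running, you can check both default file systems with the standard Hadoop shell. This is a quick sanity check, assuming the node was started with Hadoop (analytics) enabled:

hadoop fs -ls cfs:///
hadoop fs -ls cfs-archive:///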

Configuring a CFS superuser

A CFS superuser is the DSE daemon user, that is, the user who starts DataStax Enterprise. A Cassandra superuser, set up using the CQL CREATE USER ... SUPERUSER command, is also a CFS superuser.

A CFS superuser can modify files in the CassandraFS without any restrictions. Files that a superuser adds to the CassandraFS are password-protected.
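
For example, a minimal sketch of adding another CFS superuser from cqlsh, assuming Cassandra authentication is enabled; the user name and password here are placeholders:

CREATE USER analytics_admin WITH PASSWORD 'choose_a_password' SUPERUSER;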

Using multiple Cassandra File Systems

DataStax Enterprise 2.1 and later support multiple Cassandra File Systems. Typical reasons for using an additional CassandraFS include:

  • To isolate Hadoop-related jobs
  • To configure keyspace replication by job
  • To segregate file systems in different physical data centers
  • To separate Hadoop data in some other way

Creating multiple Cassandra File Systems

To create an additional CassandraFS:

  1. Open the core-site.xml file for editing. This file is located in:

    • Packaged installations: /etc/dse/hadoop
    • Binary installations: /<install_location>/resources/hadoop/conf
  2. Add one or more property elements to core-site.xml using this format (a complete example appears after these steps):

    <property>
      <name>fs.cfs-<filesystemname>.impl</name>
      <value>com.datastax.bdp.hadoop.cfs.CassandraFileSystem</value>
    </property>
    
  3. Save the file and restart Cassandra.

    DSE creates the new CassandraFS.
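
Following the format above, the property element for a file system named NewCassandraFS (the name used in the examples below) would look like this:

    <property>
      <name>fs.cfs-NewCassandraFS.impl</name>
      <value>com.datastax.bdp.hadoop.cfs.CassandraFileSystem</value>
    </property>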

To access the new CassandraFS, construct a URL using the following format:

cfs-<filesystemname>:<path>

For example, assuming the new file system name is NewCassandraFS:

hadoop fs -copyFromLocal /tmp/giant_log.gz cfs-NewCassandraFS://cassandrahost/tmp

To copy the entire contents of HDFS into the new file system, use distcp:

hadoop distcp hdfs:/// cfs-NewCassandraFS:///
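
To confirm that the copied files are in the new file system, list the target directory; cassandrahost is the placeholder host name from the example above:

hadoop fs -ls cfs-NewCassandraFS://cassandrahost/tmp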