DataStax Enterprise (DSE) includes a Cassandra-enabled Hive MapReduce client. Hive is a data warehouse system for Hadoop that allows you to project a relational structure onto data stored in Hadoop-compatible file systems and to query the data using a SQL-like language called HiveQL. HiveQL is extensible and lets you upload custom user-defined functions to manipulate data in your queries. DSE removes the need to run a stand-alone Hive metastore and automatically maps any existing column families into Hive tables, so you can start the Hive client on any analytics node and run MapReduce queries directly on data already stored in Cassandra. Additionally, you can use Hive with CassandraFS just as you would in a regular Hadoop implementation, defining Hive tables and loading them with data using the regular HiveQL SQL-like syntax.
DataStax Enterprise 2.0 and later support Hive 0.8.1. This version includes a JDBC-compliant interface for connecting to and working with Hive from inside the server, as well as support for binary data and for wide rows (up to 2 billion columns).
Metadata about the objects you define in Hive is stored in a database called the metastore. In regular HDFS-based Hive, when you run Hive on your local machine, your DDL commands create objects in a local metastore that is not available to other Hive clients. In DataStax Enterprise, the Hive metastore is implemented as a keyspace within Cassandra, which automatically makes it a shared metastore with no additional configuration required.
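Because the metastore is an ordinary keyspace, you can see it with standard Cassandra tools. A minimal sketch using cassandra-cli (the metastore keyspace name varies by release, so look for it in the output):
cassandra-cli -h localhost
[default@unknown] show keyspaces;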
Hive generates MapReduce jobs for most of its queries. Hive MapReduce jobs are submitted to the job tracker node for the DataStax Enterprise cluster; Hive clients automatically select the correct job tracker node on startup. You set the job tracker node for Hive as you would for any analytics node, and you use dsetool commands to manage it.
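For example, you might check and move the job tracker like this (a sketch; dsetool subcommands vary by DSE version):
dsetool jobtracker            # report the node currently acting as job tracker
dsetool movejt 10.10.1.1      # move the job tracker to another analytics node (address is illustrative)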
When you install DataStax Enterprise using the packaged or AMI distributions, you can start Hive as follows:
dse hive
or in a binary distribution:
<install_location>/bin/dse hive
To connect to Hive via the JDBC driver, start Hive on one of the Hadoop nodes as follows:
dse hive --service hiveserver
or in a binary distribution:
<install_location>/bin/dse hive --service hiveserver
DataStax Enterprise allows you to use Hive with CassandraFS just as you would in a regular Hadoop implementation. You can define Hive tables and load them with data using the regular HiveQL SQL-like syntax. In this type of usage, you create your Hive tables using the CREATE TABLE command.
hive> CREATE TABLE invites (foo INT, bar STRING) PARTITIONED BY (ds STRING);
You can then load a table using the LOAD DATA command. The HiveQL Manual provides more information about the HiveQL syntax. In this usage, your loaded data resides in the cfs keyspace. Your Hive metadata store also resides in Cassandra in its own keyspace.
hive> LOAD DATA LOCAL INPATH '<install_location>/resources/hive/examples/files/kv2.txt'
      OVERWRITE INTO TABLE invites PARTITION (ds='2008-08-15');
hive> LOAD DATA LOCAL INPATH '<install_location>/resources/hive/examples/files/kv3.txt'
      OVERWRITE INTO TABLE invites PARTITION (ds='2008-08-08');
hive> SELECT count(*), ds FROM invites GROUP BY ds;
The paths to the Hive example files shown in the example LOAD commands above are for the binary distribution.
Using DataStax Enterprise 2.1 and later, you can change the Hive storage properties listed in SERDEPROPERTIES and TBLPROPERTIES on the fly. Set the properties in the Hive session using the Hive SET command; the settings take effect for the next query. In DataStax Enterprise 2.0 and earlier, you had to use ALTER TABLE to change the storage properties.
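For example, to change the number of rows fetched per range scan for subsequent queries in the session (a sketch using the cassandra.range.size property described later in this section; the value is illustrative):
hive> SET cassandra.range.size=1000;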
DataStax Enterprise uses a custom storage handler to allow direct access to data stored in Cassandra through Hive.
To access data stored in Cassandra, first define a database in Hive that maps to a keyspace in Cassandra. One way you can map them is by making sure that the name is the same in both Hive and Cassandra. For example:
hive> CREATE DATABASE PortfolioDemo;
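After creating the database, you can switch to it so that unqualified table names resolve against the matching keyspace; SHOW TABLES then lists the automatically mapped column families:
hive> USE PortfolioDemo;
hive> SHOW TABLES;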
Optionally, if your Hive database and Cassandra keyspace use different names (or the Cassandra keyspace does not exist), you can declare keyspace properties in your external table definition using the TBLPROPERTIES clause. If the keyspace does not yet exist in Cassandra, Hive will create it.
For example, in the case where the keyspace exists in Cassandra but under a different name:
hive> CREATE DATABASE MyHiveDB;
hive> CREATE EXTERNAL TABLE MyHiveTable(row_key string, col1 string, col2 string)
      STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler'
      TBLPROPERTIES ("cassandra.ks.name" = "MyCassandraKS");
Or if the keyspace does not exist in Cassandra yet and you want to create it:
hive> CREATE EXTERNAL TABLE MyHiveTable(row_key string, col1 string, col2 string)
      STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler'
      TBLPROPERTIES ("cassandra.ks.name" = "MyCassandraKS",
                     "cassandra.ks.repfactor" = "2",
                     "cassandra.ks.strategy" = "org.apache.cassandra.locator.NetworkTopologyStrategy");
If you do not specify a Cassandra host, the default host is localhost.
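If the cluster is not local, you can point a table definition at a specific node. A sketch, assuming the storage handler's cassandra.host and cassandra.port properties (the address shown is illustrative; 9160 is the usual Thrift port):
hive> CREATE EXTERNAL TABLE MyHiveTable(row_key string, col1 string, col2 string)
      STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler'
      TBLPROPERTIES ("cassandra.ks.name" = "MyCassandraKS",
                     "cassandra.host" = "10.10.1.1",
                     "cassandra.port" = "9160");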
In DataStax Enterprise 2.0 and earlier, all automatically created Hive tables relied on the SERDE property to map typed data in the Cassandra column family to strings; Hive did not store Cassandra data in a typed manner. In DataStax Enterprise 2.1 and later, automatically created Hive tables use the following logic for mapping Cassandra types to Hive types:
| Cassandra Type  | Hive Type |
| all other types | binary    |
An external table in Hive maps to a column family in Cassandra. The STORED BY clause specifies the storage handler to use, which for Cassandra is org.apache.hadoop.hive.cassandra.CassandraStorageHandler. The WITH SERDEPROPERTIES clause specifies the properties used when serializing/deserializing data passed between the Hive table and Cassandra.
Validate types using the cassandra.cf.validatorType property. Set its value to a comma-separated list of the Cassandra types that correspond to the Hive columns.
hive> DROP TABLE IF EXISTS StockHist;
hive> CREATE EXTERNAL TABLE StockHist(row_key string, column_name string, value double)
      STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler'
      WITH SERDEPROPERTIES ("cassandra.ks.name" = "PortfolioDemo",
                            "cassandra.cf.validatorType" = "UTF8Type,UTF8Type,DoubleType");
This forces the columns to be deserialized using the specified Cassandra validator types rather than as raw bytes.
The TBLPROPERTIES clause specifies CassandraFS and MapReduce properties for the table. For example:
hive> CREATE EXTERNAL TABLE Users(userid string, name string, email string, phone string)
      STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler'
      WITH SERDEPROPERTIES ("cassandra.columns.mapping" = ":key,user_name,primary_email,home_phone")
      TBLPROPERTIES ("cassandra.range.size" = "100",
                     "cassandra.slice.predicate.size" = "100");
For static Cassandra column families that model objects (such as users), mapping them to a relational structure is straightforward. In the example above, the first column of the Hive table (userid) maps to the row key in Cassandra. The row key in Cassandra is similar to a PRIMARY KEY in a relational table and should be the first column in your Hive table. If you know what the column names are in Cassandra, you can map the Hive column names to the Cassandra column names as shown above.
However, for dynamic column families (such as time series data), all rows likely have a different set of columns, and in most cases you do not know what the column names are. To convert this type of column family to a Hive table, you would convert a wide row in Cassandra to a collection of short rows in Hive using a special set of column names (row_key, column_name, value). For example:
hive> CREATE EXTERNAL TABLE PortfolioDemo.Stocks (row_key string, column_name string, value double) STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler';
Optionally, you can add a WITH SERDEPROPERTIES clause to map meaningful column names in Hive to the Cassandra row key, column names and column values. For example:
hive> CREATE EXTERNAL TABLE PortfolioDemo.PortfolioStocks (portfolio string, ticker string, number_shares int) STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler' WITH SERDEPROPERTIES ("cassandra.columns.mapping" = ":key,:column,:value" );
Using cassandra.columns.mapping, you map the meaningful column names you assign in the Hive table to the Cassandra row key, column/subcolumn names, and column/subcolumn values. In the mapping, :key is a special name reserved for the column family row key, :column for column names, :subcolumn for subcolumn names (in super column families), and :value for column (or subcolumn) values. If you do not provide a mapping, the first column of the Hive table is assumed to be the row key of the corresponding Cassandra column family.
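For a super column family, the same mapping mechanism applies with :subcolumn added. A sketch (the table and column names here are hypothetical):
hive> CREATE EXTERNAL TABLE MySuperCF(row_key string, super_name string, sub_name string, sub_value string)
      STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler'
      WITH SERDEPROPERTIES ("cassandra.columns.mapping" = ":key,:column,:subcolumn,:value");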
Once you have defined your external tables in Hive, you can query the database to select from the Hive table. For example:
hive> SELECT * FROM PortfolioDemo.Stocks;
Any query other than a SELECT * runs in Hive as a MapReduce job.
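For example, this aggregate over the Stocks table defined above compiles into a MapReduce job and is submitted to the job tracker:
hive> SELECT row_key, MAX(value) FROM PortfolioDemo.Stocks GROUP BY row_key;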
Once you have defined an external table object in Hive that maps to a Cassandra column family, you can move the results of MapReduce queries back into Cassandra using the INSERT OVERWRITE TABLE command. For example:
hive> CREATE EXTERNAL TABLE PortfolioDemo.HistLoss (row_key string, worst_date string, loss string) STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler';
hive> INSERT OVERWRITE TABLE PortfolioDemo.HistLoss
      SELECT a.portfolio, rdate, cast(minp as string)
      FROM (SELECT portfolio, MIN(preturn) as minp
            FROM portfolio_returns
            GROUP BY portfolio) a
      JOIN portfolio_returns b
      ON (a.portfolio = b.portfolio AND a.minp = b.preturn);
The SERDEPROPERTIES clause specifies the properties used when serializing and deserializing data passed between the Hive table and Cassandra. You can add a WITH SERDEPROPERTIES clause to map meaningful column names in Hive to the Cassandra row key, column names, and column values. Properties you can declare in a WITH SERDEPROPERTIES clause include those shown in the examples in this section: cassandra.columns.mapping, cassandra.ks.name, and cassandra.cf.validatorType.
The TBLPROPERTIES clause specifies CassandraFS and MapReduce properties for the table. Properties you can declare in a TBLPROPERTIES clause include those shown in this section: cassandra.ks.name, cassandra.ks.repfactor, cassandra.ks.strategy, cassandra.range.size, cassandra.slice.predicate.size, and cassandra.enable.widerow.iterator.
You can change performance settings in the following ways:
Using the Hive SET command. For example: SET mapred.reduce.tasks=32;
In the mapred-site.xml file.
Packaged installations: /etc/dse/hadoop/mapred-site.xml
Binary installations: <install_location>/resources/hadoop/conf/mapred-site.xml
Settings in mapred-site.xml are system settings; if you change them, you must restart the analytics nodes.
Speeding up MapReduce jobs:
Increase your mappers to one per CPU core by setting mapred.tasktracker.map.tasks.maximum in mapred-site.xml.
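For example, on an 8-core analytics node the mapred-site.xml entry might look like this (the value is illustrative; match it to your core count):
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>8</value>
</property>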
Accessing rows with 100,000 columns or more:
In the TBLPROPERTIES clause, set cassandra.range.size and cassandra.slice.predicate.size so that a row with 100,000 columns is fetched in a single pass. Although this requires more disk usage per scan, fetching one row of 100,000 columns at once is better than fetching that same row 1,000 columns at a time.
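A sketch of a table definition that fetches wide rows in one pass (the table name and values are illustrative):
hive> CREATE EXTERNAL TABLE WideRows(row_key string, column_name string, value string)
      STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler'
      TBLPROPERTIES ("cassandra.range.size" = "1000",
                     "cassandra.slice.predicate.size" = "100000");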
Increasing the number of map tasks to maximize performance:
Improving Counter Performance:
For example, when performing select count(1) from <column family>;, you can improve the speed of the counter by setting cassandra.enable.widerow.iterator=false. This setting causes all columns after the 1000th column to be ignored for each row, thus improving the speed of the counter.
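A minimal sketch in the Hive session, using the StockHist table defined earlier:
hive> SET cassandra.enable.widerow.iterator=false;
hive> SELECT COUNT(1) FROM StockHist;    -- counts at most the first 1000 columns per row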
Out of Memory Errors:
When your mapper or reducer tasks fail with out-of-memory (OOM) errors, adjust the mapred.child.java.opts setting in Hive:
SET mapred.child.java.opts="-server -Xmx512M"
You can also lower memory usage by turning off map output compression in mapred-site.xml.
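The mapred-site.xml entry that turns map output compression off looks like this (a sketch; mapred.compress.map.output is the standard Hadoop property):
<property>
  <name>mapred.compress.map.output</name>
  <value>false</value>
</property>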