Splice Machine

The Hadoop RDBMS

The Splice Machine Hadoop RDBMS combines the best of a traditional RDBMS with a modern scale-out infrastructure. Enterprises will find the key functionality of their current RDBMS in a Hadoop RDBMS (see the sketch after this list):

  • Joins
  • Secondary indexes
  • Aggregations
  • Reliable updates through ACID transactions
  • Support for highly concurrent small reads and writes
  • Support for both OLTP and OLAP workloads
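
As an illustration, here is a minimal sketch of how these features might be exercised from a Java application over standard JDBC. The connection URL, credentials, and table names are assumptions for illustration, not documented defaults.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SpliceFeaturesSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder connection settings; URL, port, and credentials
            // are assumptions, not documented defaults.
            String url = "jdbc:splice://localhost:1527/splicedb";
            try (Connection conn = DriverManager.getConnection(url, "appuser", "secret");
                 Statement stmt = conn.createStatement()) {

                // Standard ANSI DDL, including a secondary index.
                stmt.execute("CREATE TABLE customers (id INT PRIMARY KEY, region VARCHAR(32))");
                stmt.execute("CREATE TABLE orders (id INT PRIMARY KEY, customer_id INT, amount DECIMAL(10,2))");
                stmt.execute("CREATE INDEX orders_by_customer ON orders (customer_id)");

                // A join and an aggregation, expressed in plain SQL.
                ResultSet rs = stmt.executeQuery(
                    "SELECT c.region, SUM(o.amount) AS total " +
                    "FROM customers c JOIN orders o ON o.customer_id = c.id " +
                    "GROUP BY c.region");
                while (rs.next()) {
                    System.out.println(rs.getString("region") + " -> " + rs.getBigDecimal("total"));
                }
            }
        }
    }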

Learn more about the features


Splice Machine designed the Hadoop RDBMS to replace overwhelmed RDBMSs such as Oracle, MySQL, IBM DB2, and Microsoft SQL Server. Companies find these databases too expensive to scale, yet want to keep traditional SQL rather than rewrite applications or lose functionality by moving to NoSQL.

What is it?

The Hadoop RDBMS by Splice Machine delivers the best of both worlds: traditional SQL together with the scale-out infrastructure of Hadoop.

It is a true ANSI SQL database that can affordably scale out to petabytes of data. The Hadoop RDBMS offers real-time updates with transactional integrity, distributed and parallelized query execution, and high concurrency on a flexible, general-purpose database platform.
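
To make "real-time updates with transactional integrity" concrete, the following sketch groups two writes into a single ACID transaction over JDBC. The connection URL, credentials, and accounts table are assumptions for illustration.

    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class TransferSketch {
        public static void main(String[] args) throws SQLException {
            // Placeholder connection details (assumed, not documented defaults).
            String url = "jdbc:splice://localhost:1527/splicedb";
            try (Connection conn = DriverManager.getConnection(url, "appuser", "secret")) {
                conn.setAutoCommit(false); // group the writes into one transaction
                try (PreparedStatement debit = conn.prepareStatement(
                         "UPDATE accounts SET balance = balance - ? WHERE id = ?");
                     PreparedStatement credit = conn.prepareStatement(
                         "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
                    debit.setBigDecimal(1, new BigDecimal("100.00"));
                    debit.setInt(2, 1);
                    debit.executeUpdate();

                    credit.setBigDecimal(1, new BigDecimal("100.00"));
                    credit.setInt(2, 2);
                    credit.executeUpdate();

                    conn.commit();   // both updates become visible atomically
                } catch (SQLException e) {
                    conn.rollback(); // neither update is applied on failure
                    throw e;
                }
            }
        }
    }

Either both updates succeed or neither does, which is the transactional guarantee a Hadoop RDBMS preserves on a scale-out platform.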

Splice Machine is built on two proven technology stacks: Apache Derby and HBase/Hadoop. This enables distributed, parallelized query execution that works with all of the standard Hadoop distributions.

Splice Machine is a general-purpose RDBMS suited to a variety of applications. Common use cases that benefit from the scale of Hadoop include Digital Marketing, Operational Data Lakes, Operational Applications, Operational Analytics, and the Internet of Things.


More information

Forbes Magazine talks about the Splice Machine Hadoop RDBMS