Standard ANSI SQL

Key SQL functionality includes:

  • Transactions
  • Joins
  • Secondary indexes
  • Aggregations
  • Sub-queries
  • Triggers
  • Constraints
  • User-defined functions (UDFs)
  • Column-level security
  • Stored Procedures
  • Views
  • Virtual Tables
  • Window Functions
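
Several of the listed features (transactions, joins, aggregations, sub-queries, views) can be exercised with ordinary ANSI SQL. As a purely illustrative sketch — using Python's built-in SQLite driver as a stand-in database, not Splice Machine itself — the same statements would run unchanged against any standards-compliant engine:

```python
import sqlite3

# Illustration only: exercises transactions, a join, an aggregation,
# and a view using standard SQL against an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, cust TEXT, amount REAL)")
conn.execute("CREATE TABLE customers (name TEXT PRIMARY KEY, region TEXT)")

# Transaction: both inserts commit atomically, or neither does.
with conn:
    conn.execute("INSERT INTO customers VALUES ('acme', 'west')")
    conn.executemany("INSERT INTO orders (cust, amount) VALUES (?, ?)",
                     [("acme", 10.0), ("acme", 25.0)])

# Join + aggregation wrapped in a view.
conn.execute("""CREATE VIEW region_totals AS
                SELECT c.region, SUM(o.amount) AS total
                FROM orders o JOIN customers c ON o.cust = c.name
                GROUP BY c.region""")
print(conn.execute("SELECT region, total FROM region_totals").fetchall())
# [('west', 35.0)]
```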

Splice Machine provides a true ANSI SQL database on Apache Hadoop® through the proven SQL processing of Apache Derby™.

Standard SQL lets companies tap into their existing SQL-trained staff, while the alternative SQL variants often used with Hadoop require retraining and code changes.

Splice Machine provides ODBC and JDBC drivers so companies can have seamless connectivity to BI tools such as Tableau® and MicroStrategy® and SQL tools such as Toad® and DbVisualizer.

Affordable Scale-Out Architecture

Leveraging the proven auto-sharding of Apache HBase®, the Splice Machine Hadoop RDBMS can easily scale out with commodity servers from terabytes to petabytes of data.

As part of this auto-sharding, HBase horizontally partitions (splits) each table into smaller chunks, or shards, that are distributed across multiple servers.
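
The partitioning idea can be sketched in a few lines. This is a hypothetical simplification (the split points and the `shard_for` helper are invented for illustration): HBase assigns each row to a shard, called a region, based on which row-key range it falls into.

```python
import bisect

# Hypothetical sketch of HBase-style range partitioning: a table is split
# into shards ("regions") at chosen row keys; each shard can live on a
# different commodity server. The split points here are invented.
split_points = ["g", "n", "t"]

def shard_for(row_key: str) -> int:
    """Return the index of the shard whose key range contains row_key."""
    return bisect.bisect_right(split_points, row_key)

# Keys below "g" land in shard 0, keys in ["g", "n") in shard 1, and so on.
print([shard_for(k) for k in ["apple", "grape", "melon", "zebra"]])
# [0, 1, 1, 3]
```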

Using the inherent failover and replication capabilities of HBase and Hadoop, Splice Machine can support applications that demand high availability.

Advanced, In-Memory Technology

Splice Machine embeds Apache Spark™ – a fast, open source engine for large-scale data processing – to accelerate OLAP queries.

With advanced features such as spill-to-disk, resiliency to node failure and computation pipelining, Spark in-memory processing delivers unprecedented performance. It recently set a record for the fastest sort of 1 petabyte of data.

Introducing Lambda Architecture-in-a-Box

Although the Lambda Architecture enables continuous processing of real-time data, it has traditionally been painful to build and operate, getting the job done only at great cost. With the new scale-out RDBMS systems, you can now get all the benefits of Lambda with a much simpler architecture.

Developers can use standard SQL to ingest, access, update, and analyze the database without worrying about what compute engine to use because the Splice Machine optimizer picks the right compute engine, already integrated, based on the nature of the query.
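
A rough sketch of this routing decision follows. The threshold, function name, and inputs are all hypothetical — the real Splice Machine optimizer is cost-based and driven by statistics — but the shape of the decision is the same: short transactional operations go to the low-latency engine, large analytical work goes to the in-memory engine.

```python
# Hypothetical sketch of engine selection. OLAP_ROW_THRESHOLD is an invented
# tuning knob, not a real Splice Machine parameter.
OLAP_ROW_THRESHOLD = 10_000

def pick_engine(estimated_rows: int, has_aggregation: bool) -> str:
    """Route a query to a compute engine based on its estimated shape."""
    if has_aggregation or estimated_rows > OLAP_ROW_THRESHOLD:
        return "spark"   # large scans / analytics -> in-memory OLAP engine
    return "hbase"       # short lookups / updates -> low-latency OLTP engine

print(pick_engine(5, has_aggregation=False))          # point lookup -> hbase
print(pick_engine(2_000_000, has_aggregation=True))   # big aggregate -> spark
```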

This makes Splice Machine ideal to build new, real-time, reactive applications as well as a platform that can offload data from existing databases for existing applications.

Unprecedented Support for Simultaneous OLTP & OLAP Workloads

With in-memory technology from Spark and scale-out capabilities from Hadoop, the Splice Machine RDBMS provides outstanding performance for simultaneous OLAP and OLTP workloads.

The Splice Machine RDBMS was designed to offload workloads from overwhelmed RDBMSs like Oracle, MySQL, IBM DB2, and Microsoft SQL Server that companies find too expensive to scale. Splice Machine provides cost-effective scale-out on commodity hardware, but unlike NoSQL databases, it provides standard SQL, eliminating the need to rewrite existing applications.

Splice Machine automatically sends OLTP queries to HBase/Hadoop and OLAP queries to Spark. With separate processes for HBase and Spark, Splice Machine isolates the workloads and ensures that OLTP response times remain flat as OLAP loads increase.

Real-Time Updates with Transactional Integrity

Splice Machine has distributed Snapshot Isolation

Database transactions ensure that real-time updates can be reliably executed without data loss or corruption (e.g., guarantee that the data and secondary indexes are updated atomically).

Splice Machine provides full ACID (Atomicity, Consistency, Isolation, Durability) transactions across rows and tables using a lockless snapshot isolation design, in which Multi-Version Concurrency Control (MVCC) creates a new version of a record every time it is updated.

With each transaction having its own virtual “snapshot”, transactions can execute concurrently without any locking. This leads to very high throughput and avoids troublesome deadlock conditions.
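
The mechanism can be sketched with a toy versioned store. Everything here (class name, timestamp scheme) is a hypothetical simplification of MVCC, not Splice Machine's actual implementation: each write creates a new timestamped version, and each reader sees only versions committed before its own snapshot timestamp, so readers never block writers.

```python
# Hypothetical sketch of lockless snapshot isolation via MVCC.
class MVCCStore:
    def __init__(self):
        self.versions = {}   # key -> list of (commit_timestamp, value)
        self.clock = 0

    def begin(self) -> int:
        """Start a transaction; its snapshot is the current timestamp."""
        self.clock += 1
        return self.clock

    def write(self, key, value):
        """Writes never overwrite: they append a new timestamped version."""
        self.clock += 1
        self.versions.setdefault(key, []).append((self.clock, value))

    def read(self, key, snapshot_ts):
        """Return the newest version visible at snapshot_ts; no locks taken."""
        visible = [v for ts, v in self.versions.get(key, []) if ts <= snapshot_ts]
        return visible[-1] if visible else None

store = MVCCStore()
store.write("balance", 100)
t1 = store.begin()            # t1's snapshot sees balance = 100
store.write("balance", 250)   # a concurrent update adds a new version
print(store.read("balance", t1))             # 100 (t1's snapshot is stable)
print(store.read("balance", store.begin()))  # 250 (a later snapshot sees it)
```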

High-Performance, Distributed Computing Architecture

Splice Machine’s high-performance, distributed computing architecture delivers massive parallelization for predicates, joins, aggregations, and functions by pushing computation down to each distributed data shard.

On each HBase physical node, the Splice Machine RDBMS has a separate process and memory space for HBase and Spark.

For OLTP queries, Splice Machine uses HBase co-processors to distribute OLTP computation across regions (i.e., shards).

For OLAP queries, Splice Machine creates RDDs on Spark from HBase and uses Spark operators to distribute processing across Spark Workers.
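
The payoff of pushing computation down to the shards is that only small partial results cross the network. The sketch below is a hypothetical stand-in (using threads for the shard-local workers) for what co-processors or Spark tasks do: each shard aggregates its own rows locally, and the coordinator merges the tiny partials.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch of computation pushdown: each shard runs the
# aggregation where its rows live, and only per-shard partials are merged.
shards = [
    [("west", 10), ("east", 5)],   # rows co-located on shard 0
    [("west", 7)],                 # shard 1
    [("east", 3), ("west", 1)],    # shard 2
]

def local_sum(rows):
    """Runs next to the data; returns a small partial aggregate."""
    partial = {}
    for region, amount in rows:
        partial[region] = partial.get(region, 0) + amount
    return partial

with ThreadPoolExecutor() as pool:
    partials = list(pool.map(local_sum, shards))

# The final merge on the coordinator is cheap: one small dict per shard.
totals = {}
for partial in partials:
    for region, amount in partial.items():
        totals[region] = totals.get(region, 0) + amount
print(totals)   # {'west': 18, 'east': 8}
```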

Leverages Hadoop and Spark Ecosystems

Splice Machine enables developers to leverage powerful libraries and tools in the Spark and Hadoop ecosystems.

Splice Machine can execute federated queries on data in external databases, libraries and files using Virtual Table Interfaces (VTIs). This includes pre-built Spark libraries (over 140 and growing) for machine learning and stream analysis.
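
The idea behind a VTI is that external data is presented to the query engine as ordinary rows. Splice Machine's actual VTIs are Java classes; the sketch below is only a Python stand-in showing the concept, with the CSV content and the `vti_rows` adapter both invented for illustration: an adapter turns an external source into tuples that standard SQL can then query.

```python
import csv
import io
import sqlite3

# Hypothetical sketch of a Virtual Table Interface: an adapter exposes an
# external file as rows, which standard SQL then queries like a table.
external_csv = "name,score\nann,90\nbob,75\n"   # stands in for an external file

def vti_rows(text):
    """Adapter that yields the external data as (name, score) tuples."""
    for row in csv.DictReader(io.StringIO(text)):
        yield row["name"], int(row["score"])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (name TEXT, score INTEGER)")
conn.executemany("INSERT INTO scores VALUES (?, ?)", vti_rows(external_csv))
print(conn.execute("SELECT name FROM scores WHERE score > 80").fetchall())
# [('ann',)]
```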

Splice Machine also provides a MapReduce Input/Output API that lets Hadoop ecosystem tools (e.g., MapReduce, Hive™, Spark™, Storm™, Kafka™, Pig™, MLlib, Mahout™) run custom, batch-oriented analyses against data in Splice Machine.

High Concurrency

In the age of Big Data, companies need applications to provide the right results, right now, to users.

Since thousands, if not millions, of people are reading and updating data simultaneously, high concurrency of small reads and writes is vital to act on this data in real time.

Data warehouses, MPP databases, and in-memory analytics databases cannot sustain high levels of concurrency, making them inadequate for operational applications.
