Online Predictive Processing (OLPP)
Make Predictive Analytics Actionable

Splice Machine OLPP is a scale-out SQL RDBMS that performs fast OLTP and in-memory OLAP on the same platform with machine learning and streaming.

The Power of Predictive Applications

With Splice Machine’s OLPP Platform, companies can deploy predictive applications quickly and with less complexity, enabling business transformation without business disruption. Predictive applications can generate lasting business and customer benefits in a variety of use cases, such as:

Supply Chain

Predict supply chain events and proactively optimize inventory allocation

Maintenance

Predict IoT outages and proactively deploy parts and service to keep capital equipment running

Healthcare

Predict patient conditions and proactively advise doctors and nurses to save lives

Fraud Detection

Predict fraud to proactively safeguard business and consumers

Flexible Deployment Options

The Splice Machine OLPP Platform fits your organization through multiple deployment options: as a service through AWS Marketplace, or on your own clusters, on premises or in a co-located facility.

A Scale-Out Architecture for Predictive Applications

Splice Machine delivers an open-source data platform that incorporates the proven scalability of HBase™ and the in-memory performance of Apache Spark™. The cost-based optimizer uses advanced statistics to choose the best compute engine, storage engine, index access, join order and join algorithm for each task. In this way, Splice Machine can concurrently process transactional and analytical workloads at scale.
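The dual-workload idea above can be sketched with a plain Python DB-API session: short keyed transactions and a full-table analytical aggregate issued over the same connection. This is an illustrative sketch only; the stdlib sqlite3 module stands in for the database so the example is self-contained, the table and column names are hypothetical, and in a real Splice Machine deployment the client connects via JDBC/ODBC while the cost-based optimizer, not the client, routes each statement to HBase or Spark.

```python
import sqlite3

# Illustrative only: sqlite3 stands in for a SQL database reachable over
# the standard Python DB-API. Splice Machine itself is accessed via
# JDBC/ODBC, and its optimizer routes each statement to the appropriate
# engine (HBase for keyed transactions, Spark for large scans).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)"
)

# OLTP-style work: short, keyed transactional writes.
cur.executemany(
    "INSERT INTO orders (region, amount) VALUES (?, ?)",
    [("east", 120.0), ("west", 75.5), ("east", 33.25)],
)
conn.commit()

# OLAP-style work on the same connection: a full-table aggregate.
cur.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
)
totals = cur.fetchall()
print(totals)  # [('east', 153.25), ('west', 75.5)]
```

The point of the sketch is that both statement shapes go through one SQL interface; the platform, not the application, decides which engine executes each one.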

Support All the Players on Your Team

  • Architects
  • Software Developers
  • Data Scientists
  • DevOps

Architects

Power Predictive Applications

Modern applications are predictive. They learn from experience. That requires a new architecture that can ingest voluminous amounts of data and process large-scale transactions and analytics concurrently.

Splice Machine replaces traditional RDBMS and Data Warehouse solutions, simplifying your architecture, reducing cost and improving scalability and performance.

Software Developers

Develop Applications That Rock

Applications cannot wait for MapReduce to crawl through big data. They need to produce results in the moment, and they need to do that consistently, regardless of data growth and exploding usage.

Splice Machine powers big data applications using industry standard SQL on a scale-out architecture so you can focus on the business logic.

Data Scientists

Maximize Efficiency

Data scientists continuously clean and transform raw data into features that provide machine learning models with true predictive signal.

Using Splice Machine’s notebook environment and Spark integration, data scientists can easily leverage the speed of in-memory computation and transactional in-place data updates to rapidly experiment with new features, parameters, and models. The result is continuously improving predictive power: more accurate models, trained more frequently, alongside real-time reports and dashboards.
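The clean-and-transform loop described above can be sketched in plain Python. Everything here is hypothetical: the record fields, the derived features, and the stdlib-only code standing in for the notebook and Spark environment. The sketch only shows the shape of the work, raw records in, cleaned numeric features with predictive signal out.

```python
from statistics import mean

# Hypothetical raw event records (e.g. sensor readings per machine).
# Field names and derived features are illustrative, not a Splice
# Machine API; in practice this step would run in a notebook against
# Spark DataFrames.
raw = [
    {"machine": "m1", "temp": "78.2", "status": "ok"},
    {"machine": "m1", "temp": "91.7", "status": "warn"},
    {"machine": "m1", "temp": None,   "status": "ok"},   # incomplete row
    {"machine": "m2", "temp": "65.0", "status": "ok"},
]

def build_features(records):
    """Clean raw records and roll them up into per-machine features."""
    by_machine = {}
    for r in records:
        if r["temp"] is None:  # cleaning: drop rows with missing values
            continue
        by_machine.setdefault(r["machine"], []).append(
            (float(r["temp"]), r["status"] == "warn")
        )
    return {
        m: {
            "mean_temp": round(mean(t for t, _ in rows), 2),
            "warn_rate": round(sum(w for _, w in rows) / len(rows), 2),
        }
        for m, rows in by_machine.items()
    }

feats = build_features(raw)
print(feats["m1"])  # {'mean_temp': 84.95, 'warn_rate': 0.5}
```

Each pass through this loop (new features, new cleaning rules) is the experimentation cycle the paragraph describes; running it against fresh, transactionally updated data is what keeps retrained models current.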

DevOps

Scale Out Your Database Without the Headaches

Scaling databases can be a headache when data grows as quickly as it does these days. Most databases hit a maximum capacity and, beyond that, it becomes very hard to scale further. Often, you have to spend lots of money and start over.

With Splice Machine, you can scale out dynamically when the need for capacity grows, or back when it decreases, so you only pay for what you really need. Plus, as a DBaaS, we have eliminated the complexity of the Hadoop stack. You provision, connect, and query. We make sure the containers are healthy, backed up, and secure.
