Splice Machine’s New Open-Source RDBMS Sandbox Goes Live on Amazon Web Services (AWS)
Company invests in developer adoption by going open source, launching a community site and creating a turnkey test and evaluation platform
San Francisco, CA – July 18, 2016 – Splice Machine, the open-source RDBMS powered by Hadoop and Spark, today announced a cloud-based sandbox for developers to put its new open-source 2.0 Community Edition to the test. In addition, the company announced the general availability of V2.0, released as an open-source standalone and cluster download, and the launch of its developer community site.
The Splice Machine V2.0 sandbox is powered by Amazon Web Services (AWS) and lets developers spin up a cluster in minutes. Developers can choose the number of nodes in the cluster and each node's instance type to accommodate a range of tests, from small to enterprise scale.
Open Source and Community Site
Splice Machine is now available in a free, full-featured Community edition and a licensed Enterprise edition. The Enterprise edition license includes 24/7 support as well as DevOps features such as backup and restore, LDAP support, Kerberos support, encryption, and column-level access privileges.
“I am very excited about Splice Machine opening its software and developing a community,” said Monte Zweben, co-founder and CEO, Splice Machine. “We are committed to making it as easy as possible for developers to get Splice Machine and test it at scale. Our Community edition is a fully functional RDBMS that enables teams to completely evaluate Splice Machine, while our Enterprise edition contains additional DevOps features needed to securely operate Splice Machine, 24×7.”
To support the growing Splice Machine community, the Company has launched a community website that includes tutorials, videos, a developer forum, a GitHub repository, a StackOverflow tag and a Slack channel. These resources are rich with code to help developers, data scientists and DevOps teams learn to use Splice Machine, and will continue to grow with contributions from the entire community.
V2.0 General Availability
Now generally available, Splice Machine 2.0 integrates Apache Spark, a fast, open-source engine for large-scale data processing, into its existing Hadoop-based architecture, creating a flexible, hybrid database that enables businesses to perform simultaneous OLAP and OLTP workloads.
Splice Machine 2.0 features include:
Scale-Out Architecture – Cost-effectively scales out on commodity hardware with proven auto-sharding on HBase and Spark
Transactional SQL – Supports full ACID properties in a real-time, concurrent system
In-Memory Technology – Achieves outstanding performance for OLAP queries with in-memory technology from Apache Spark
Resource Isolation – Allows allocation of CPU and RAM resources to operational and analytical workloads, and enables queries to be prioritized for workload scheduling
Management Console – A web UI that lets users see currently running queries, drill down into each job to track its progress, and identify potential bottlenecks
Compaction Optimization – The compaction of storage files is now managed in Spark rather than HBase, providing significant performance enhancements and operational stability
Apache Kafka-enabled Streaming – Enables the ingestion of real-time data streams
Virtual Table Interfaces – Allows developers and data scientists to use SQL with data that is external to the database, such as Amazon S3, HDFS, or Oracle
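The transactional SQL and Virtual Table Interface features above can be illustrated with a short sketch. The table names, column layouts, file path, and the `com.example.CsvVTI` class below are all hypothetical placeholders for illustration only, not Splice Machine's actual VTI signature:

```sql
-- Transactional SQL: both updates commit atomically, or neither does
-- (hypothetical "accounts" table)
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;

-- Virtual Table Interface (illustrative): query an external file on S3
-- as if it were a table, joining it with data already in the database
SELECT o.order_id, c.name
FROM orders o
JOIN new com.example.CsvVTI('s3a://my-bucket/customers.csv')
     AS c (id INT, name VARCHAR(64))
  ON o.customer_id = c.id;
```

The point of the sketch is that external data sources and native tables are addressed through one SQL surface, rather than through separate compute engines.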
Ideal for powering real-time operational and analytical applications, Splice Machine simplifies the Lambda architecture, so businesses no longer have to manage the complexity of integrating multiple compute engines to ingest, serve, or analyze data. With Splice Machine’s “Lambda in a Box” architecture, developers and data scientists can store their data all in one place and just write SQL.
“We are excited about v2.0 of Splice Machine,” said Tom Beale, Chief Technology Officer, Corax. “The new version has a closer relationship between Spark and relational data storage. This enables us to continue to utilize big data computation and storage technology while interfacing directly with our cyber risk quantification SaaS platform.”
The new architecture includes the ability to easily access external data and libraries. The Splice Machine RDBMS can execute federated queries on data in external databases and files using Virtual Table Interfaces (VTIs). It can also execute all pre-built Spark libraries (over 130 and growing) for machine learning, stream analysis, data integration and graph modeling.
Added Zweben, “Digital marketers, financial institutions, life science and cybersecurity companies all need to process mixed OLTP and OLAP workloads and prefer technologies with a vibrant community. The open source community provides a lifespan beyond any single company’s tenure, and a rich pool of skills that can expand, customize, and operate the technology.”
To experiment in the sandbox or download Splice Machine, please visit http://www.splicemachine.com/get-started. To learn more about what’s new with Splice Machine, register for our webinar on Thursday, July 28: https://attendee.gotowebinar.com/register/3117132333371213570.
About Splice Machine
Splice Machine is disrupting the $30 billion traditional database market with its open-source RDBMS powered by Hadoop and Spark for mixed operational and analytical workloads. The Splice Machine RDBMS executes operational workloads on Apache HBase and analytical workloads on Apache Spark.
Splice Machine makes it easy to create modern, real-time, scalable applications, or to offload operational and analytical workloads from expensive Oracle, Teradata, and Netezza systems. Typical use cases include ETL, operational reporting and real-time applications.
Splice Machine is headquartered in the South of Market (SOMA) neighborhood of San Francisco. For more information about Splice Machine, please visit splicemachine.com.