Splice Machine set to debut managed relational database service in the cloud for OLTP and OLAP workloads



Open source relational database company Splice Machine announced Wednesday that its data platform will be available on AWS as a database-as-a-service (DBaaS) later this year. With Splice Machine, users can both power applications and perform analytics without the need for ETL or a separate analytical database such as Amazon Redshift or Snowflake.

With Splice Machine’s Cloud RDBMS, companies get full ANSI SQL support, ACID-compliant transactions, secondary indexes, referential integrity, triggers, stored procedures and more – capabilities that applications depend upon. Because Splice Machine is a hybrid transactional/analytical processing (HTAP) system, there is no need to stitch together RDBMSs and data warehouses with fragile ETL processes.
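To make those capabilities concrete, the sketch below shows what they look like in generic ANSI-style SQL. The schema and names are invented for illustration, and trigger and transaction-control syntax varies slightly between engines, so this should be read as an approximation rather than verbatim Splice Machine code.

```sql
-- Hypothetical schema, for illustration only; not taken from Splice Machine docs.
CREATE TABLE customers (
    customer_id INT PRIMARY KEY,
    name        VARCHAR(100) NOT NULL
);

CREATE TABLE orders (
    order_id    INT PRIMARY KEY,
    customer_id INT NOT NULL REFERENCES customers(customer_id),  -- referential integrity
    total       DECIMAL(10,2)
);

CREATE TABLE order_audit (
    order_id   INT,
    audited_at TIMESTAMP
);

-- Secondary index to speed up lookups of a customer's orders.
CREATE INDEX idx_orders_customer ON orders(customer_id);

-- Trigger that records every new order (trigger syntax differs slightly across engines).
CREATE TRIGGER trg_order_audit
    AFTER INSERT ON orders
    REFERENCING NEW AS n
    FOR EACH ROW
    INSERT INTO order_audit VALUES (n.order_id, CURRENT_TIMESTAMP);

-- An ACID transaction spanning both tables; the exact transaction-control
-- statement (START TRANSACTION, BEGIN, or autocommit off) depends on the client.
START TRANSACTION;
INSERT INTO customers VALUES (1, 'Acme Corp');
INSERT INTO orders VALUES (100, 1, 250.00);
COMMIT;
```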

Splice Machine manages time-consuming database administration tasks for its users. Traditional scale-up databases penalize growing datasets, charging steep fees for incremental capacity, whereas Splice Machine lets users add or remove capacity as needed. Its scale-out architecture means a virtually unlimited number of concurrent users and applications can access the database without eroding performance or sacrificing ACID properties.

Splice Machine’s incremental backup and recovery backs up to Amazon S3 for true disaster recovery, and its scale-out architecture uses replication to ensure service availability.

“Modern data-intensive applications typically ingest Big Data at high speeds and require transactional and analytical capabilities in the same package,” said Monte Zweben, co-founder and CEO of Splice Machine. “To address this challenge, companies often build complex systems consisting of multiple compute and storage engines. Splice Machine already simplifies this process by providing a hybrid solution, where an optimizer chooses between compute engines. Now, we are taking the next logical step by removing the need to manage the database. Users only need to know SQL – Splice Machine does the rest.”

“While there are other DBaaS offerings already out there that are just the legacy databases remotely hosted, or a No/NewSQL variant built for specialized analytics, caching or object use cases, we see Splice Machine as now getting ahead of potentially the largest market opportunity – every large enterprise database-driven application,” said Mike Matchett, senior analyst, Taneja Group. “This new service presents a way to migrate almost any existing traditional or legacy database application to the cloud to gain scalability and cloud economics, and immediately enable big data, IoT, and machine learning initiatives. That’s at least three “wins” in one move (i.e. cloud transformation, big data analytics and application refresh).”

Splice Machine is currently accepting early adopters to evaluate its Cloud RDBMS and is offering incentives for companies to sign up. Ideal customers power an application with an RDBMS, perform extensive analytics, want to avoid moving data back and forth between data engines, and have data volumes between 5 terabytes and 2 petabytes. Customers will be able to start a trial during this quarter.

Last November, Splice Machine released version 2.5 of its data platform for intelligent applications. The new version strengthens the platform’s ability to concurrently run enterprise-scale transactional and analytical (HTAP) workloads.

Version 2.5 of Splice Machine introduces columnar external tables that enable hybrid columnar and row-based querying. Columnar external tables can be created in Apache Parquet, Apache ORC or text formats. Columnar storage improves large table scans, joins, aggregations and groupings, while the native row-based storage handles write-optimized ingestion, single-record lookups/updates and short scans.
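As a rough sketch of how such a table might be declared, the DDL below follows the common Hive-style external-table convention (STORED AS PARQUET with an S3 location). The table name, columns and path are invented for illustration, and Splice Machine’s exact syntax may differ.

```sql
-- Illustrative only: Hive-style external-table DDL; names, path and the exact
-- Splice Machine syntax are assumptions.
CREATE EXTERNAL TABLE sales_history (
    sale_id     BIGINT,
    customer_id INT,
    amount      DECIMAL(10,2),
    sale_date   DATE
)
STORED AS PARQUET
LOCATION 's3a://example-bucket/warehouse/sales_history/';

-- Analytical queries such as large scans and aggregations read the columnar
-- files directly, while row-based tables keep serving OLTP traffic.
SELECT customer_id, SUM(amount)
FROM sales_history
GROUP BY customer_id;
```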

Version 2.5 also adds in-memory caching via pinning, which moves tables and columnar data files into memory for very fast data access and avoids repeated table scans against high-latency file systems such as Amazon S3. This lets data sit on inexpensive storage while remaining highly performant in memory when an application needs it.
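The pinning statement itself is not shown in the announcement; the sketch below uses Spark SQL’s CACHE TABLE / UNCACHE TABLE statements as a stand-in to convey the idea, reusing the hypothetical sales_history table from the previous example.

```sql
-- Sketch of the pinning idea using Spark-SQL-style caching; Splice Machine's
-- actual pinning statement may differ.
CACHE TABLE sales_history;          -- pull the columnar data into memory

-- Subsequent analytical queries hit the in-memory copy instead of re-reading
-- Parquet files from S3 on every scan.
SELECT sale_date, SUM(amount)
FROM sales_history
GROUP BY sale_date;

UNCACHE TABLE sales_history;        -- release memory when the hot path is done
```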
