New Striim v3.7.4 release bolsters SQL-based streaming and database connectivity for Kafka

Striim Inc., provider of an end-to-end, real-time data integration and streaming analytics platform, launched version 3.7.4 of the Striim platform on Monday, bolstering its ease of use, connectivity, manageability, and scalability for delivering streaming analytics applications involving Apache Kafka.

Striim 3.7.4 introduces new utilities specifically designed to speed the adoption of Kafka as part of an end-to-end flow. These utilities help users quickly and easily scale Kafka applications by gathering baseline performance metrics for real-world applications that involve parsing, formatting, buffer management, and external connectivity. New Kafka producer, consumer, and broker metrics improve the monitoring, manageability, and scalability of streaming applications.
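The article does not describe the utilities' internals, but the idea of a baseline performance metric for a pipeline stage can be sketched in plain Python. The `ThroughputMeter` class below is a hypothetical illustration, not a Striim API: it records per-event latency and reports throughput and average latency for a stage such as parsing or formatting.

```python
import time

class ThroughputMeter:
    """Hypothetical sketch (not a Striim API): collect baseline
    throughput and latency for one stage of a streaming pipeline."""

    def __init__(self):
        self.count = 0
        self.total_latency = 0.0
        self.start = time.monotonic()

    def record(self, event_latency_s):
        # Called once per processed event with its observed latency.
        self.count += 1
        self.total_latency += event_latency_s

    def snapshot(self):
        elapsed = time.monotonic() - self.start
        return {
            "events": self.count,
            "events_per_sec": self.count / elapsed if elapsed > 0 else 0.0,
            "avg_latency_ms": (self.total_latency / self.count * 1000.0)
                              if self.count else 0.0,
        }

meter = ThroughputMeter()
for latency in (0.002, 0.003, 0.001):   # three sample events
    meter.record(latency)
stats = meter.snapshot()
print(stats["events"], round(stats["avg_latency_ms"], 1))
```

Gathering such a baseline on a representative workload is what allows a later production deployment to be compared against known-good numbers.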

SQL-query-based processing and analytics, a drag-and-drop UI, configuration wizards, and custom utilities such as these make the Striim platform the easiest solution for delivering end-to-end streaming integration and analytics applications involving Kafka.
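To make "SQL-query-based processing" concrete: a continuous query applies a SELECT/WHERE-style filter to events as they arrive, rather than to a stored table. The Python generator below is a conceptual analogue only; it does not reflect Striim's actual query language or syntax.

```python
def continuous_query(stream, predicate, projection):
    """Conceptual analogue of a SQL continuous query:
    SELECT <projection> FROM <stream> WHERE <predicate>.
    Results are yielded as events arrive, not computed over a table."""
    for event in stream:
        if predicate(event):
            yield projection(event)

# Sample event stream (hypothetical sensor readings).
events = [
    {"sensor": "a", "temp": 71},
    {"sensor": "b", "temp": 99},
    {"sensor": "c", "temp": 102},
]

# Analogue of: SELECT sensor FROM events WHERE temp > 95
hot = list(continuous_query(events,
                            lambda e: e["temp"] > 95,
                            lambda e: e["sensor"]))
print(hot)  # → ['b', 'c']
```

In a real streaming platform the input would be an unbounded stream (e.g. a Kafka topic) and the query would run indefinitely; the generator form captures that push-based evaluation model.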

In addition, Striim has expanded the platform’s connectivity, which spans hundreds of data sources and targets, with a new real-time Smart NetFlow Reader. The Striim Cloud Readiness offering for Kafka has also been extended, enabling writing from Kafka queues to Amazon Redshift and S3, Google Cloud, and several Microsoft Azure services, including Azure SQL Server, Azure Storage, and Azure HDInsight.

The Striim platform offers end-to-end, real-time data integration and streaming analytics across all aspects of streaming data management, including Kafka.

With the Striim solution, companies can continuously ingest from Kafka as a data source and/or continuously write to Kafka as a target. The Striim offering also ships with Apache Kafka built into the platform, making Kafka transparent to the user and allowing its capabilities to be harnessed without coding against its APIs.

The Striim platform’s enterprise-grade, SQL-based integration with Apache Kafka has been generally available for several product releases, and boasts numerous deployments among Fortune 500 customers. These customers are using the Striim solution to enable high-volume, high-velocity data correlation and analytics involving Kafka data, along with other enterprise data sources.

Based on input from these production customers, Striim has further strengthened the platform’s ease-of-use, connectivity, manageability, and scalability in support of Kafka-related deployments.

“It’s a daunting challenge, integrating multiple tiers when building Streaming Applications with Kafka as an underlying message store. Striim makes that problem go away,” said Alok Pareek, co-founder and EVP of Products at Striim. “For several years, Striim has been the leader in defining an integrated Streaming Data Platform that includes not just Kafka, but also SQL-based applications and universal connectivity with a wide variety of event delivery semantics. With the 3.7.4 release, we have added Kafka diagnostic utilities, advanced monitoring metrics, and additional connectors to reduce the complexity of managing Kafka in production environments.”

In the area of stream processing, Striim has augmented its solution’s Exactly Once Processing (E1P) guarantees across the data pipeline, spanning the entire end-to-end streaming architecture. Because Apache Kafka is built into the Striim platform, a parallel Kafka stream can act as an intermediary persistent store, helping to ensure that users are processing and writing data once and only once.
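The exactly-once guarantee described above rests on a general pattern: events flow through a durable intermediate log, and the consumer's read offset is committed atomically with its output, so a restart resumes from the committed position instead of reprocessing. The following is a simplified simulation of that pattern, not Striim's implementation.

```python
# Simplified simulation (not Striim's implementation): a persistent
# intermediate log plus an atomically committed offset yields
# exactly-once output even when processing is interrupted.
log = ["e1", "e2", "e3", "e4"]           # intermediary persistent store
state = {"offset": 0, "output": []}       # offset + output committed as a unit

def process(state, crash_after=None):
    steps = 0
    while state["offset"] < len(log):
        if crash_after is not None and steps == crash_after:
            return  # simulate a failure mid-stream
        event = log[state["offset"]]
        # "Transaction": emit output and advance the offset together,
        # so a crash can never leave one without the other.
        state["output"].append(event.upper())
        state["offset"] += 1
        steps += 1

process(state, crash_after=2)   # fail after two events
process(state)                  # restart: resumes from the committed offset
print(state["output"])          # each event appears exactly once
```

If the output and the offset were committed separately, a crash between the two would cause the restarted consumer to emit a duplicate; committing them together is what makes the semantics "once and only once."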

In June last year, Striim announced that it had launched version 3.6.1 of its end-to-end streaming integration and intelligence platform, delivering advances in streaming data integration with a focus on enterprise-grade deployments and enhanced ease of use.

As companies move toward hybrid cloud infrastructures, they need to be able to move data easily from on-premises to cloud environments in real time. With this in mind, the Striim platform introduced support for both Amazon Redshift and S3 as data targets. Users can now ingest and process data from virtually any data source, including transactional data from enterprise databases, and load that data in real time into their Amazon cloud environment.

Native integration with Apache Kafka not only gives users the ability to ingest from or write to Kafka; it also lays the foundation for streaming replay and application decoupling. Striim users can quickly and easily leverage Kafka within the Striim platform to make every data source replayable, enabling recovery even for streaming sources that cannot be rewound. This also decouples applications, enabling multiple recoverable applications to be powered by the same data source.
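The replay and decoupling described above follow from one property of a Kafka-style log: events are appended durably, and each consumer tracks its own read position independently. The `ReplayableLog` class below is an illustrative in-memory sketch of that property, not Striim or Kafka code.

```python
class ReplayableLog:
    """Sketch of Kafka-style decoupling: one durable append-only log,
    many independent consumers, each tracking its own read position."""

    def __init__(self):
        self._events = []

    def append(self, event):
        self._events.append(event)

    def read_from(self, offset):
        # Any consumer can (re)read from any retained position,
        # which is what makes a non-rewindable source replayable.
        return self._events[offset:]

log = ReplayableLog()
for e in ("login", "click", "purchase"):
    log.append(e)

# Two decoupled applications consume the same source independently.
analytics_offset = 0   # a fresh application reads the full history
audit_offset = 2       # a recovering application replays from its own offset

analytics_view = log.read_from(analytics_offset)
audit_view = log.read_from(audit_offset)
print(analytics_view, audit_view)
```

Because the log, not the original source, is what consumers read, a source that cannot be rewound (a network tap, a sensor feed) still becomes recoverable once its events land in the log.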

A parallel Kafka stream can also act as an intermediary persistent store, helping to ensure that users are processing and writing the data “once and only once.” This is critical for enterprise-grade applications that need exactly once processing.



WWPI – Covering the best in IT since 1980