insideBIGDATA Latest News – 9/12/2020


In this regular column, we’ll bring you all the latest industry news centered around our main topics of focus: big data, data science, machine learning, AI, and deep learning. Our industry is constantly accelerating with new products and services being announced every day. Fortunately, we’re in close touch with vendors from this vast ecosystem, so we’re in a unique position to inform you about all that’s new and exciting. Our massive industry database is growing all the time, so stay tuned for the latest news items describing technology that may make you and your organization more competitive.

Hazelcast Advances Leadership, Lowers the Barrier to Adoption of In-Memory Digital Integration Hubs

Hazelcast, a leading open source in-memory computing platform, announced a new major feature and a number of enhancements to its in-memory data grid (IMDG), Hazelcast IMDG. Among the updates are preview support for managing distributed data using SQL, out-of-the-box support for Kerberos, additional tuning options for Intel® Optane™ DC Persistent Memory Modules and quicker cluster rebalancing. With the latest version of Hazelcast IMDG, use cases such as digital integration hubs gain improved performance, scalability and resiliency.

The combination of business leaders requiring tailored views of data and the proliferation of data sources is straining legacy architectures and infrastructure. An emerging answer to these challenges is the digital integration hub, a data architecture that provides a single access point and a standardized API that multiple applications can call upon. These hubs are often deployed to reduce workloads on backend systems, to accelerate access to data hosted in backend databases and mainframes, and to provide a common API across a variety of data sources so that new technologies can be integrated into legacy architectures. Hazelcast powers proven in-memory digital integration hubs by providing object storage in RAM, write-through caching, distributed processing and predefined connectors to many popular data sources.
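The sketch below shows, in minimal form, how an application might use a Hazelcast map as the in-memory access layer of such a hub via the hazelcast-python-client. The cluster address, map name and keys are placeholders, and the SQL preview mentioned above is noted only in a comment because the exact client invocation varies by client version.

```python
import hazelcast

# Connect to the IMDG cluster; the member address and map name are placeholders.
client = hazelcast.HazelcastClient(cluster_members=["127.0.0.1:5701"])

# A distributed map acting as the hub's fast access layer. In a real digital
# integration hub, a MapStore/connector would write these entries through to
# the backend system of record (write-through caching).
orders = client.get_map("orders").blocking()
orders.put("order-1001", '{"customer": "acme", "total": 125.0}')

# Any application sharing the hub can read the same view with low latency.
print(orders.get("order-1001"))

# The release's SQL preview would let the same data be queried with standard
# SQL (e.g. SELECT * FROM orders); the exact API depends on the client version.

client.shutdown()
```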

“Evolving legacy architectures, especially those with heterogeneous backends, toward new data channels poses the technological challenge of putting all the pieces together while keeping the overall complexity maintainable. Digital integration hubs provide a central access point to those backend data sources that multiple applications can call upon uniformly,” said Kelly Herrell, CEO of Hazelcast. “Given that the hub is the central component of these architectures, it needs to be fast, scalable, reliable and secure. The Hazelcast In-Memory Computing Platform provides the necessary capabilities that not only fulfill the requirements of the largest enterprises, but also significantly simplify the deployment and operational experience when working with these innovative new digital integration hub architectures.”

Dgraph Labs Launches Slash GraphQL, the GraphQL-Native Database Backend-As-A-Service

Dgraph Labs, creator of the advanced graph database Dgraph, announced the launch of Slash GraphQL, a fully managed GraphQL backend service powered by Dgraph, the industry’s first native GraphQL database.

GraphQL is fast becoming the data language of the modern Web. Since Facebook released GraphQL as an open source query language in 2015, the technology has introduced major breakthroughs that simplify how developers access, query and iterate on data.

Despite the growing interest in GraphQL, a common stumbling block is that GraphQL is an API language, not a running system. To get started with GraphQL, developers face a hard choice: either build and maintain their own GraphQL backend, which consumes significant time and engineering resources, or use a complicated overlay on top of a relational database that inevitably runs into the n+1 problem, slow deep-traversal query speeds, and other scalability issues. Both choices impose additional complexity in underlying services and table structures, which slows developers down.

“Slash GraphQL takes away the work of building a fast and scalable GraphQL backend,” said Manish Jain, CEO and founder at Dgraph. “With Slash GraphQL, developers click a button and are presented with a /graphql endpoint. They set their GraphQL schemas and immediately get a production-ready backend. Right away they can start querying and mutating data, without any coding whatsoever.”
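As an illustration of the “click a button, get a /graphql endpoint” workflow described above, here is a minimal sketch of querying a Slash GraphQL backend over HTTP. The endpoint URL and the Product type are hypothetical and assume a schema that has already been set in the console.

```python
import requests

# Hypothetical endpoint issued by Slash GraphQL after deploying a backend.
ENDPOINT = "https://your-backend.cloud.dgraph.io/graphql"

# Assumes a schema with `type Product { id: ID! name: String! price: Float }`,
# for which Dgraph auto-generates query fields such as queryProduct.
query = """
query {
  queryProduct {
    id
    name
    price
  }
}
"""

resp = requests.post(ENDPOINT, json={"query": query}, timeout=30)
resp.raise_for_status()
print(resp.json()["data"]["queryProduct"])
```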

SirionLabs Launches SirionM&A, an AI-Driven M&A Platform that Expedites Legal Review and Contract Analysis by 70%

SirionLabs announced the release of its new platform SirionM&A. The AI-powered platform simplifies the complex M&A process by reducing the time associated with manual diligence by 70%, enabling full transparency into contract clauses and obligations, and shortening contract review cycle times. 

“We’ve heard from many organizations who want an easier way to complete M&A due diligence at the speed they require without sacrificing the integrity of the review process,” said Ajay Agrawal, founder and chairman of SirionLabs. “SirionM&A cuts review and analysis down to hours versus weeks and months, which will be game-changing in terms of how fast corporate transactions get done. With far better accuracy, speed and insights than traditional due diligence solutions, firms can now allocate more time to strategies focused on the best way to merge and integrate to realize true synergy savings.”

Teradata Expands Vantage Support for Data Science

Teradata (NYSE: TDC), the cloud data analytics platform company, announced enhancements to its Vantage platform, making collaborative and frictionless data science a reality. By significantly increasing the collaboration between data scientists, business analysts, data engineers, business leads and others who may use different tools and languages, Vantage allows organizations to realize faster time to value and reduced costs with stronger data governance and security.

With this enhanced level of support for data science on Vantage, businesses can achieve end-to-end data science workflows on a single, scalable, reliable and secure platform, without the need to create data silos or sample data. This enables a wide range of personas to collaboratively run complex analytics in a self-service manner, on one platform and with the same data, ensuring efficient operationalization. This shared journey ensures consistent stakeholder buy-in, a fail-fast approach that delivers timely course corrections, and a consensus-based method for delivering long-term business outcomes.
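To make the idea of in-database, multi-persona data science concrete, here is a minimal sketch using the teradataml Python package against Vantage. The host, credentials and the retail_sales table are placeholders, and the package itself is an assumption about how a Python-oriented data scientist would connect; analysts might use SQL or R against the same data.

```python
from teradataml import create_context, DataFrame, remove_context

# Connection details are placeholders for a Vantage system.
create_context(host="vantage.example.com", username="analyst", password="********")

# A teradataml DataFrame is a reference to a table in Vantage; transformations
# are pushed down and run in-database, so no data is copied into a silo.
sales = DataFrame("retail_sales")  # hypothetical table
print(sales.head(5))

# Pull a small sample into pandas only when a local tool genuinely needs it.
local_sample = sales.head(100).to_pandas()

remove_context()
```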

“As businesses increasingly rely on virtual connectivity to keep operations running, collaboration has emerged as the key success factor for many functions, especially analytics,” said Sri Raghavan, Director of Data Science and Advanced Analytics Product Marketing at Teradata. “With greater functionality and expanded support for data science to help analytics users better communicate and collaborate, Vantage helps remove friction from the data science process so customers can derive mission-critical insights at record speed and scale. And given the broad deployment options for Vantage – public clouds, including AWS, Azure, and Google Cloud Platform later this year, as well as hybrid and multi-cloud – customers are given the flexibility to leverage the platform’s enhanced data science capabilities in the environment of their choice.”

Qlik Expands Insight Advisor To Deliver Industry’s Most Robust AI-Driven Cloud Analytics Experience

Qlik® announced enhancements to Insight Advisor, its intelligent AI assistant built directly into Qlik Sense®, to deliver the industry’s most robust augmented intelligence capabilities for cloud analytics. Drawing on Qlik’s unique Associative Engine, combined with investments in natural language processing (NLP) and cognitive technology, Insight Advisor deepens Qlik’s augmented analytics with AI-driven assistance that adds value to every interaction with data for cloud analytics users.

“Analytics users want to do more with their data, but often struggle with where to look or what next steps to take. Insight Advisor gives these users a complete and powerful AI assistant, built directly into Qlik Sense, to help guide them along every step of their data exploration and analysis journey,” said James Fisher, Chief Product Officer at Qlik. “Qlik Sense users are only a click or question away from the assistance needed to derive more insights and value from data. And, with every interaction, Insight Advisor learns alongside them, creating a virtuous cycle where they become smarter together, increasing users’ data literacy and data usage.”

The Kinetica Streaming Data Warehouse Eliminates Legacy Analytics Latency with Simultaneous Streaming and Data Analysis

Kinetica announced the latest release of The Kinetica Streaming Data Warehouse, a unified data analytics platform that delivers real-time analysis on incoming data streams, while incorporating all of an organization’s data and applying cutting-edge location intelligence and machine learning-powered predictive analytics. This release of The Kinetica Streaming Data Warehouse serves the most demanding enterprise requirements, including best-in-class security, resiliency, and ad hoc analytics on petabyte-scale data sets.

Whereas the traditional data warehouse ingests and stores batch data for analysis, The Kinetica Streaming Data Warehouse ingests and stores a wide variety of data types and analyzes the data in real time as it is received. The key difference is that traditional data warehouses can perform analytics, but cannot run them in real time and are limited in the types of analytics they support. The Kinetica Streaming Data Warehouse is the only platform able to transform data in motion into immediate, usable insights, a capability that requires a streaming data warehouse.

“The Kinetica Streaming Data Warehouse serves organizations running analytics at scale that are blocked by unacceptably stale analytical results,” said Irina Farooq, CPO at Kinetica. “It is an ideal solution for organizations trying to both incorporate high-velocity streaming data and glean the full business context of all of their data, taking into account factors like location, time, and interrelationships. The platform is uniquely powerful because it analyzes and runs inference on streaming and historical data of many different types in real time at petabyte scale.”

University of Maryland Launches New Social Data Science Center with Support from Facebook

The University of Maryland’s College of Behavioral and Social Sciences (BSOS) and the College of Information Studies (iSchool) are launching a Social Data Science Center to help researchers better access, analyze and use powerful social science data. Such data is critical to understanding and addressing many of the pressing challenges facing the nation and world.

UMD’s new Social Data Science Center (SoDa) leverages the university’s strengths in survey methods, measurement, information management, data visualization, and analytics. Facebook is providing support for the center’s research and education programs over the next three years. SoDa is already collaborating with Facebook and with other universities to address the COVID-19 pandemic through a public survey tool.

DataStax Lowers Barriers to NoSQL Adoption with Storage-Attached Indexing for Apache Cassandra

DataStax announced the general availability of Storage-Attached Indexing (SAI), a fundamental advance in indexing for Apache Cassandra™. Storage-Attached Indexing is a highly scalable, globally distributed index for Apache Cassandra available on Astra and DataStax Enterprise (DSE). DataStax has also opened a Cassandra Enhancement Proposal (CEP) with the Apache Cassandra project to share this with the open source community so all users of the popular open source database can benefit.

Developers require a simple experience to leverage the power of Apache Cassandra for application development. Apache Cassandra is the proven open source NoSQL database behind the internet’s largest applications, hardened by the world’s top enterprises. Storage-Attached Indexing is a robust and powerful index for Apache Cassandra that makes the open source, scale-out, cloud-native NoSQL database more usable. With Storage-Attached Indexing, developers now have access to familiar indexing and queries, such as WHERE clauses on non-primary-key columns, in Apache Cassandra.
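A minimal sketch of what this looks like in practice, using the Python cassandra-driver and CQL; the contact point, keyspace and the orders table are placeholders, and connection setup is simplified (Astra, for example, would use a secure connect bundle).

```python
from cassandra.cluster import Cluster

# Contact point and keyspace are placeholders for a Cassandra/DSE cluster.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect("store")

# Create a storage-attached index on a non-primary-key column of a
# hypothetical orders table.
session.execute("""
    CREATE CUSTOM INDEX IF NOT EXISTS orders_status_idx
    ON orders (status)
    USING 'StorageAttachedIndex'
""")

# The indexed column can now be used directly in a WHERE clause.
rows = session.execute(
    "SELECT order_id, status, total FROM orders WHERE status = 'SHIPPED'"
)
for row in rows:
    print(row.order_id, row.total)

cluster.shutdown()
```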

“Developers have typically faced a tradeoff between scalability, ease of use, and operations when choosing NoSQL,” said Ed Anuff, Chief Product Officer at DataStax. “Storage-Attached Indexing gives developers robust, new indexing that eliminates many of these tradeoffs, making development and data modeling in Apache Cassandra easier, while also increasing stability and performance and giving architects and operators fewer moving parts to manage.”

Cloudian Announces Flash-optimized Object Storage Software Enabling Over 3x Better Price/Performance Than Competitive Offerings

Cloudian® announced that its HyperStore® object storage software is now flash-optimized, enabling enterprises to meet the needs of performance-intensive workloads such as data analytics, AI/ML, cloud-native applications and rapid backup and recovery. Compared to competitive flash-based object storage systems, Cloudian’s new solution delivers more than 3X better price/performance. It also features the unique ability to deploy flash- and HDD-based nodes within an adaptive hybrid architecture, allowing customers to further reduce total cost by 40% by tiering less frequently used data to lower-cost HDD-based storage. Cloudian’s flash-optimized HyperStore is available both as a software-only solution and as a pre-configured appliance, the HyperStore Flash 1000 Series.
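HyperStore exposes an S3-compatible API, so workloads typically reach it through standard S3 tooling. The sketch below uses boto3 with a placeholder endpoint, credentials and bucket to show how an analytics job might read and write objects on the flash tier.

```python
import boto3

# Endpoint, credentials and bucket are placeholders for a HyperStore deployment.
s3 = boto3.client(
    "s3",
    endpoint_url="https://hyperstore.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Write an object that a performance-intensive workload will read back later.
with open("events.parquet", "rb") as f:
    s3.put_object(Bucket="analytics-data", Key="events/2020-09-12.parquet", Body=f)

# Read it back; on the flash-optimized tier this is the latency-sensitive path.
obj = s3.get_object(Bucket="analytics-data", Key="events/2020-09-12.parquet")
print(obj["ContentLength"])
```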

“With object storage increasingly displacing legacy SAN and NAS systems, our customers need new options for performance-intensive workloads such as analytics and ultra-rapid data restore,” said Jon Toor, chief marketing officer at Cloudian. “Our new flash solution not only addresses this need but also includes an industry-first feature that automatically manages data across flash and HDD-based platforms to deliver the optimal mix of performance, cost and capacity.”

Hasura Brings Instant GraphQL to MySQL and SQL Server for Rapid API Development to Unlock Siloed Data

Hasura, the data access infrastructure company, announced that it has added GraphQL support for MySQL and early access support for SQL Server to its existing support for PostgreSQL. Hasura now supports three of the most popular database technologies. A great deal of valuable data lives inside existing MySQL and SQL Server databases, and developers want to access that data to build new applications. By expanding its support to more database types, Hasura makes it easy for developers to access that data with a modern GraphQL-based API with inbuilt security, scalability and governance.
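To illustrate the kind of instant API this provides, here is a minimal sketch that queries a Hasura endpoint over HTTP. The host, admin secret and the customers table are placeholders, and it assumes the table has been tracked so Hasura generates the corresponding GraphQL root field.

```python
import requests

# Host, admin secret and table are placeholders for a Hasura deployment.
HASURA_URL = "https://hasura.example.com/v1/graphql"
HEADERS = {"x-hasura-admin-secret": "replace-me"}

# Hasura auto-generates root fields for tracked tables, e.g. a `customers`
# table in MySQL, SQL Server or PostgreSQL becomes queryable like this.
query = """
query {
  customers(limit: 5) {
    id
    name
  }
}
"""

resp = requests.post(HASURA_URL, json={"query": query}, headers=HEADERS, timeout=30)
resp.raise_for_status()
print(resp.json()["data"]["customers"])
```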

“Data lives in lots of places, and in many different databases. We want our users to be able to access that data instantly with Hasura’s secure, scalable data access infrastructure so adding support for MySQL and SQL Server was our obvious next step. It opens up huge potential for all the developers who need to access the vast amounts of data that lives in MySQL and SQL Server today. Now they can enjoy instant data access with a modern GraphQL API, and Hasura’s built-in security, governance and scalability features will get their applications into production quickly and safely,” said Hasura co-founder and CEO Tanmai Gopal.

ScyllaDB Unveils One-Step Migration from Amazon DynamoDB to Scylla NoSQL Database

ScyllaDB announced a major new update to its Scylla Migrator tool, which now enables live replication of DynamoDB databases to Scylla NoSQL databases in a single step. The new capabilities give DynamoDB users an easy way to tap Scylla’s superior performance and lower overall costs.

Database cloning has never been simpler. Scylla Migrator uses Spark to duplicate existing tables, automatically capturing the live stream of changes and directing them to the Scylla database. Organizations using DynamoDB can now quickly move to hybrid or multi-cloud database deployments, with some data on DynamoDB servers and some on Scylla with complete synchronization. Scylla Migrator supports entire DynamoDB database migrations as well.

“We’ve taken our DynamoDB-compatible API to another level by integrating DynamoDB’s streaming capabilities into Scylla,” said Dor Laor, CEO and Co-Founder, ScyllaDB. “Now DynamoDB users can easily migrate or extend their systems to a faster, more reliable, more cost-effective platform.”
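Because Scylla exposes a DynamoDB-compatible API, applications migrated this way can keep using their existing DynamoDB tooling. The sketch below points the standard boto3 client at a Scylla endpoint, with the endpoint URL, credentials and the user_events table all placeholders.

```python
import boto3

# Endpoint, credentials and table are placeholders for a Scylla cluster
# exposing its DynamoDB-compatible API.
dynamodb = boto3.resource(
    "dynamodb",
    endpoint_url="http://scylla.example.com:8000",
    region_name="us-east-1",
    aws_access_key_id="unused",
    aws_secret_access_key="unused",
)

table = dynamodb.Table("user_events")

# Existing DynamoDB application code works unchanged against the new endpoint.
table.put_item(Item={"user_id": "u-123", "ts": 1599900000, "event": "login"})
resp = table.get_item(Key={"user_id": "u-123", "ts": 1599900000})
print(resp.get("Item"))
```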

dunnhumby Reduces Time to Action for Data Scientists via New Tool on Microsoft Azure

dunnhumby, a leader in customer data science, has launched its new web-based application on Microsoft Azure, enabling data scientists to deliver customer insights faster, driving profitability and customer loyalty.

dunnhumby Model Lab automates many of the repetitive, time-consuming tasks for data scientists, allowing them to focus instead on the modeling that delivers the greatest value. The application uses machine-learning technology hosted in Azure to achieve high performance, reduce run time, and allow data scientists to quickly explore many algorithms.

Model Lab is designed to solve complex retail challenges, such as understanding customer churn and predicting propensity to purchase and in which channel, in-store versus online. The tool helps retailers and brands build loyalty and profitability by focusing on the shopper experience. Azure enables data scientists to take advantage of Model Lab through a simple subscription and get up and running virtually instantly. The Azure-based service gives users the benefit of always working with the latest software, with no need to worry about updates. New features and experiments are provided automatically, so users can experience the very latest in advanced machine-learning technology.

“dunnhumby Model Lab already empowers dunnhumby’s data scientists, and it has been integral in enabling them to create millions of models for retailers and brands around the world rapidly and efficiently,” said Kyle Fugere, Head of Innovation and Ventures at dunnhumby. “We are democratizing customer data science for everyone, and making Model Lab available on Microsoft Azure means that it is now accessible to retailers, brands, and businesses large and small.”

Fujitsu Offers High-End, Software-Defined Qumulo Solution to Master Petabytes of Unstructured Data

Fujitsu has introduced a new storage solution that leverages software-defined storage technology to enable enterprises to master petabytes of data distributed across multiple data centers and the cloud. The disruptive file data platform from Qumulo is the most advanced solution for managing and accessing file data, paving the way for the creation of new services and applications in large-scale enterprise storage environments.

Enterprises are recognizing the opportunities from analyzing multiple sources of data to supercharge business operations. Processes as diverse as diagnostic imaging, modeling, simulations, LIDAR, GIS, genetic sequencing and video production all revolve around the creation and use of unstructured data. However, managing substantial amounts of file data is often challenging, especially as it can be distributed between the network edge – coming from IoT devices – as well as on-premises and the cloud.

“We’re seeing plenty of claims that data is the new gold,” says Olivier Delachapelle, Head of Category Management, Product Sales Europe at Fujitsu. “That’s certainly the case, but before the data is of any value, all those petabytes of unstructured data must be mined, managed and refined. Fujitsu’s new approach with the Qumulo solution is a custom-built, high-performance information repository. It scales without limits and enables customers to rapidly sift through vast amounts of data to find the nuggets of true value.”

GigaSpaces Announces Version 15.5, Simplifying and Scaling Hybrid and Multi-Cloud Deployments to Empower Digital Transformation Initiatives

GigaSpaces, the provider of InsightEdge, the fast in-memory data and analytics processing platform, announced the release of GigaSpaces InsightEdge version 15.5, which simplifies and scales hybrid and multi-cloud deployments to empower digital transformation initiatives. GigaSpaces version 15.5 adds new efficiency and automation capabilities to its Ops Manager module for easier monitoring, management and automatic provisioning of resources across all environments, to ensure maximum performance and availability at the optimal cost. The platform also provides smooth testing and deployment of microservices with no management overhead or service disruption. With version 15.5, GigaSpaces is the only in-memory computing vendor that enables the creation of a hybrid cluster that can be provisioned both on-premises and in a public cloud, eliminating the need for data replication between clusters.

“Our customers are accelerating their digital transformation initiatives and moving processes and services from offline to online,” said Yoav Einav, Vice President Product at GigaSpaces. “With version 15.5 we continue to deliver on our goal to simplify deployments and management across all environments including hybrid and multi-cloud with smart monitoring and automatic scaling to ensure the highest service levels even during unplanned peaks, while eliminating time consuming manual operations and resource overprovisioning.”

ThoughtSpot 6.2 Launches to Empower Enterprises to Connect, Share, and Utilize Insights Faster Than Ever

With distributed workforces now the norm, cross-team collaboration and instant access to insights for an array of business planning purposes have never been more critical. ThoughtSpot’s search and AI-driven analytics platform was designed to unlock valuable insights for employees at every level of an organization, from frontline employee to C-suite executive. The company is announcing ThoughtSpot 6.2, which includes new exploration, collaboration and visualization capabilities, to help organizations unlock unprecedented value from their data in record time.

Organizations can connect their data, share and collaborate on analysis, and take action on insights faster than ever before with ThoughtSpot 6.2. With new features like DataFlow, Embrace for SAP and Teradata, and the ThoughtSpot bulk loader, enterprises have more flexibility and choice on how they leverage their data for search and AI-driven analytics, wherever that data originates. Users can also now share a particular visualization with a colleague so that they can work off of the same view within one pinboard through Individual Chart Sharing. With the same ease of use as Google Docs, users can share and request edit access for a specific visualization, rather than an entire report or dashboard, making it easy to work together on specific and relevant data sets. With a new Subscribe Assistant feature, creators can also now subscribe users to specific headlines within a pinboard, making it even easier to get the appropriate data and KPIs in front of the people who need them.

“At ThoughtSpot, we know that data is only as valuable as the insights it produces, and it’s our mission to make it as quick and easy as possible to get access to the right insights that allow our customers to make the most informed decisions. Now, more than ever, time is of the essence for businesses, who need intelligence infused with every decision they make,” said Ajeet Singh, Cofounder & Executive Chairman, ThoughtSpot. “The combination of features we’re launching in ThoughtSpot 6.2 is a huge step forward in helping our customers collaborate, share insights, and most importantly, take action, faster than ever before.”

Sign up for the free insideBIGDATA newsletter.
