insideBIGDATA Latest News – 1/20/2020


In this regular column, we’ll bring you all the latest industry news centered around our main topics of focus: big data, data science, machine learning, AI, and deep learning. Our industry is constantly accelerating with new products and services being announced every day. Fortunately, we’re in close touch with vendors from this vast ecosystem, so we’re in a unique position to inform you about all that’s new and exciting. Our massive industry database is growing all the time so stay tuned for the latest news items describing technology that may make you and your organization more competitive.

MariaDB Platform Goes Cloud Native, Powers a New Generation of Modern Applications with Smart Transactions

MariaDB® Corporation announced the availability of MariaDB Platform X4, a cloud-native, open source database that makes it easier than ever for developers to build modern applications using smart transactions and cloud-native data storage. Modern applications require access to vast amounts of data optimized for analytical queries and machine learning models so that transactions can be augmented with data-driven insights, turning them into smart transactions. With a new breed of smart engines and a significantly simplified design, MariaDB Platform X4 puts smart transactions in the hands of everyone, including tens of millions of developers who already use MariaDB for transactional-only workloads, changing the way applications are built.

“The use of mobile devices and the rapid pace of technology has fundamentally changed how we interact with applications and what we expect from them,” said Gregory Dorman, vice president of distributed systems and analytics, MariaDB Corporation. “This creates different requirements for how these modern applications work. The trick is to add the smarts without impacting the performance of transactions, which is why we implemented a dual storage layout for data: row based for transactions and columnar for true analytics. MariaDB Platform X4 is a huge step to make modern applications easy to develop and gives everyone an opportunity to experience the benefits without a huge upfront investment.”
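The dual storage layout Dorman describes is the crux of smart transactions. As a rough illustration (plain Python, not MariaDB internals), the same records can be laid out row-wise, which favors the point reads and writes of transactional work, or column-wise, which favors the full-column scans of analytics:

```python
# Toy illustration (plain Python, not MariaDB internals) of the
# row-vs-columnar tradeoff behind smart transactions.
rows = [
    {"id": 1, "amount": 120.0, "region": "EU"},
    {"id": 2, "amount": 75.5,  "region": "US"},
    {"id": 3, "amount": 210.0, "region": "EU"},
]

# Row layout: a transaction reads or writes one complete record.
def lookup(txn_id):
    return next(r for r in rows if r["id"] == txn_id)

# Columnar layout: an analytical aggregate scans only the column it needs.
columns = {key: [r[key] for r in rows] for key in rows[0]}

def total_amount():
    return sum(columns["amount"])

print(lookup(2)["region"])   # point read, served naturally by a row store
print(total_amount())        # full-column scan, served by a column store
```

Platform X4’s approach is to pair a row-based engine for transactions with a columnar engine for analytics behind a single SQL interface, so the application never has to choose.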

AtScale Brings Big Data Analytics Scale and Performance across Heterogeneous Data Platforms with 2020.1 Release

AtScale, the intelligent data virtualization provider for advanced analytics, announced an unprecedented leap in multi-cloud and hybrid cloud analytics, data platform flexibility and time-to-analysis with the launch of its Adaptive Analytics 2020.1 platform release. Redefining traditional data virtualization and delivering on the promise of cloud transformation, AtScale 2020.1 provides secure, self-service analysis while reducing compute costs by 10x, improving query performance by more than 12.5x, and increasing user concurrency by 61x. Delivering on the promise of a single enterprise view of all analytics data, AtScale’s enhanced autonomous data engineering alleviates the performance and scale challenges of traditional data federation, manual data engineering and reliance on query caches. Additional enhancements in AtScale 2020.1 include a virtual cube catalog for simplified management of data assets and granular policy control that integrates natively with existing enterprise data catalog offerings.

“AtScale 2020.1 is a major step toward achieving our long-term vision of delivering intelligent data virtualization to every enterprise,” said Christopher Lynch, Executive Chairman and CEO of AtScale. “This release enables enterprises to alleviate the scale and performance limitations associated with their legacy analytics platforms and seamlessly embrace agile, hybrid cloud and multi-cloud data platforms, ensuring organizations have the ability to make informed decisions based upon all of their data.”   

ArangoDB Boosts Multi-Model Database Performance with Release of ArangoDB 3.6

ArangoDB, a leading open source native multi-model database, announced the GA release of ArangoDB 3.6. ArangoDB 3.6 introduces OneShard, the ability to restrict individual databases to one node in a cluster, to ArangoDB’s Enterprise offering, and also includes major performance improvements that increase query speeds by up to 30x. A database created with OneShard enabled is bound to a single database server node, but still replicated synchronously to additional nodes. This ensures the high availability and fault tolerance of a cluster setup and performance similar to a single instance, as well as the ability to run transactions with ACID guarantees. OneShard is ideal for use cases with graph traversals and JOIN-heavy queries, as well as multi-tenant applications.
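To see why pinning a database to one node helps, consider a toy placement sketch (hypothetical node names and placement functions, not ArangoDB’s actual algorithm): with default sharding, a query joining two collections may touch several nodes, while with OneShard every collection of a database resolves to the same server, so joins and traversals execute locally:

```python
# Toy shard-placement sketch (hypothetical node names, not ArangoDB's
# actual placement algorithm).
NODES = ["node-a", "node-b", "node-c"]

def place_sharded(collection, shard_key):
    # Default clustering: documents hash across all nodes, so a join
    # between collections can require cross-node communication.
    return NODES[hash((collection, shard_key)) % len(NODES)]

def place_oneshard(database, collection):
    # OneShard: placement depends only on the database, so every
    # collection (and hence every join or traversal) lands on one node,
    # which is still replicated synchronously for fault tolerance.
    return NODES[hash(database) % len(NODES)]

# All collections of the "tenant_42" database resolve to the same node:
nodes = {place_oneshard("tenant_42", c) for c in ("users", "orders", "edges")}
print(nodes)  # a single node
```

This is also why OneShard suits multi-tenant applications: each tenant’s database behaves like a single instance while the cluster as a whole stays fault tolerant.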

“In conversations with our community, we found many of our users expressed the need for the high-availability and fault-tolerant benefits of a cluster, but they didn’t necessarily want to scale horizontally and sacrifice performance,” said Claudius Weinberger, CEO and co-founder of ArangoDB. “With the release of ArangoDB 3.6, we are pleased to offer developers a solution with OneShard, as well as a plethora of additional performance improvements.”

KNIME on Amazon Web Services Now Available to Productionize AI/ML

KNIME, a unified software platform for creating and productionizing data science, announced the availability of KNIME on AWS, its commercial offering for productionizing artificial intelligence (AI)/machine learning (ML) solutions on Amazon Web Services (AWS). KNIME on AWS is designed to allow customers to assemble and deploy ML solutions across the enterprise at scale and securely on AWS and to gain tangible value quickly. The offering is now featured in AWS Marketplace, including free trials.

Many enterprises seek to create value by deploying ML and AI solutions but can lack the data scientists, data platform engineers, experience, money and time necessary to make a meaningful impact quickly. The result is that teams and individuals lacking this set of highly technical skills are left out of the innovation loop and are unable to realize the potential that their data offers. Further, there are many steps in the process of bringing an AI/ML solution into production that require a transfer of context and knowledge from data preparation to analysis and modeling to deployment.

KNIME on AWS is a visual data workflow editor that allows customers of all skill levels to extract and prepare their data from Amazon Simple Storage Service (Amazon S3), Amazon Redshift, or other sources; utilize AWS AI/ML services along with custom data science to build an impactful model; and deploy this solution “as a service” or to an analytics application. In each step, the solution is underpinned by the storage, compute, security and scale of AWS. This end-to-end solution from data to deployment can be realized with no coding required, and scheduling/automation can be employed in order to create a continuous stream of insights or decisions with minimal manual effort required.

“As an Advanced Consulting Partner in the AWS Partner Network (APN) that has developed KNIME connectors for seamless data integration and visualization, EPAM is uniquely positioned to help enterprise customers realize the full benefits of KNIME on AWS,” explained Eli Feldman, CTO of Advanced Technology at EPAM. “As more companies leverage next-gen technologies like AI and ML for better decision-making, we’re proud to work with AWS and KNIME to help our customers gain data-driven insights to achieve greater agility in driving innovation.”

Splice Announces 3.0 Version of its Intelligent, Distributed SQL Data Platform

Splice Machine, provider of a scalable SQL database that enables companies to modernize their legacy and custom applications to be agile, data-rich, and intelligent, announced the next major release of its platform, Splice Machine 3.0, available in early Q1 2020. With exhaustive SQL support, excellent performance for all workloads, native machine learning and AI capabilities, and unified deployment on-premises and in the cloud, Splice Machine’s intelligent SQL platform is uniquely positioned as the database of choice for application modernization.

“We are excited about Splice Machine’s new 3.0 version with its new disaster recovery capability, offering the ability to have any number of replicated clusters ready for failover instantly,” said Charles Boicey, co-founder and Chief Innovation Officer at Clearsense LLC. “Additionally, Splice’s new ML Manager 2.0 has Jupyter notebook integration with MLflow and in-database deployment, making machine learning productization easier than ever before.”

Noodle.ai and SMS digital Launch AI-Fueled Application for the Steel Industry

Noodle.ai, a leading Enterprise Artificial Intelligence® provider, and the digitalization team at SMS group, a trailblazer in digitalization for plant and equipment used in steel and nonferrous-metals production and processing, launched MPV, the first joint AI-driven application for the steel industry.

As steel industry margins continue to shrink, one promising way for manufacturers to increase profitability is to pursue more advanced, high-strength steel production for applications such as automotive and electrical. However, production of these advanced steel grades requires much tighter control of the overall production process, which is impacted by numerous parameters across the mill.

The MPV (Mechanical Properties Variability) application utilizes artificial intelligence and machine learning to create a unique ‘sense, predict, and recommend’ framework that addresses challenges associated with the variability of mechanical properties, such as yield strength, tensile strength, and elongation, in steel production. The application senses patterns within mill data to understand the drivers of mechanical property variability. It then predicts when increased variability will occur and recommends the optimal input parameters, or PDI settings, required to achieve target mechanical properties. As a result, the MPV application can help steel manufacturers achieve cost savings in three ways: by reducing mechanical properties variability, reducing alloy costs through better variability control, and minimizing out-of-spec production, which is sold as secondary grades or scrapped. One steel manufacturer using MPV anticipates savings of $2M per year.
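The ‘predict, then recommend’ loop can be sketched with a toy, single-parameter model (synthetic data and a hypothetical coiling-temperature setting; the real application models many PDI parameters at once):

```python
import random

random.seed(0)

# Toy mill data (hypothetical): one PDI setting (coiling temperature, deg C)
# vs. measured yield strength (MPa). Real models use many parameters.
temps = [550 + 5 * i for i in range(21)]
strength = [900 - 0.8 * t + random.gauss(0, 3) for t in temps]

# "Sense/predict": fit a simple least-squares line to the mill data.
n = len(temps)
mt, ms = sum(temps) / n, sum(strength) / n
slope = (sum((t - mt) * (s - ms) for t, s in zip(temps, strength))
         / sum((t - mt) ** 2 for t in temps))
intercept = ms - slope * mt

def predict(temp):
    return slope * temp + intercept

# "Recommend": pick the setting whose predicted strength is closest to
# the target spec, shrinking variability and out-of-spec production.
target = 440.0
best = min(temps, key=lambda t: abs(predict(t) - target))
print("recommended coiling temperature:", best)
```

The production system replaces the fitted line with learned models over many mill parameters, but the loop is the same: predict the property, then search the input settings for the value closest to spec.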

“Our ability to deploy AI to produce steel with tighter tolerances allows us to address the requirements of high margin segments such as automotive and electrical, which immediately impacts our top line revenues in addition to the obvious cost savings,” said Denis Hennessy, Director of Product Development at Big River Steel, after implementing the MPV application.

Netradyne Captures and Analyzes Over 1 Billion Minutes and 500 Million Miles of Driving Video Data using its Vision-based Driver and Road Safety Platform, Driveri®

Netradyne, a leader in artificial intelligence (AI) and edge computing focusing on driver and fleet safety, has announced that its Driveri® vision-based driver recognition safety program has captured and analyzed one billion minutes of driving video data across 500 million miles, 3D mapping more U.S. roads than any previous effort.

Powered by AI, Driveri captures every minute of every driving day of thousands of fleet drivers across the country, analyzing road conditions, driving events and violations. The collection of analyzed data provides new opportunities that haven’t existed before to shape safety standards for the future of transportation. By 3D mapping millions of miles of U.S. roads, Netradyne’s contextual data is powering accident risk reduction, driver recognition, statistical modeling, and advanced autonomous vehicle development.

“We believe transportation can be transformed without sacrificing safety,” said Avneesh Agrawal, chief executive officer of Netradyne. “One of the main issues facing autonomous vehicles today is missing, deep contextual data. Capturing video of a road one time is not enough. Our data represents hundreds of trips, capturing changing nuances and conditions of roads and highways.”

AI + Quantum Flow Boosts Deep Learning Speed 10x–15x – Powered by pqlabs.ai

PQ Labs Inc. unveiled its QuantaFlow AI architecture. The new architecture includes a classical RISC-V processor, a QuantaFlow Generator, and a QF Evolution Space.

QuantaFlow simulates a virtual transformation/evolution space for qf-bit registers. A classical single-core RISC-V processor provides logical control, retrieval of observed results, and related housekeeping. The QuantaFlow Generator converts input data from a low-dimensional space to a high-dimensional space and then starts a continuous transformation/evolution. The process is of minimum granularity, highly parallel in nature, and asynchronous. At the end of the process, information is extracted from the evolution space by the Bit Observer unit. In addition, Hot-Patching can be used to change the evolution path of qf-bits dynamically. When a more significant deformation of the evolution space is needed, the RISC-V processor issues a warm “reboot” of the evolution space. All of these operations execute in the blink of an eye. With the help of these dynamic operations, QuantaFlow can run all kinds of neural network models (e.g., ResNet-50 (2015), MobileNet (2017), EfficientNet (2019)) without speed degradation or hitting the “memory wall.” By comparison, GPUs and ASIC AI accelerators lose performance on newer models (MobileNet, EfficientNet) because these models are memory-bound.
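The “memory wall” claim can be made concrete with a back-of-the-envelope roofline calculation (illustrative layer shapes and fp16 storage assumed; this is a sketch of the general phenomenon, not a statement about PQ Labs’ hardware). Arithmetic intensity is FLOPs performed per byte of memory moved, and depthwise convolutions, the building block of MobileNet and EfficientNet, perform far fewer FLOPs per byte than dense convolutions, so they are limited by memory bandwidth rather than compute:

```python
# Back-of-the-envelope arithmetic intensity (FLOPs per byte of memory
# traffic) for one conv layer, assuming fp16 (2-byte) tensors and
# illustrative layer shapes. Low intensity => memory-bound.
BYTES = 2  # fp16

def conv_intensity(h, w, k, c_in, c_out):
    # Dense KxK convolution on an HxW feature map.
    flops = 2 * h * w * k * k * c_in * c_out
    traffic = BYTES * (h * w * c_in + k * k * c_in * c_out + h * w * c_out)
    return flops / traffic

def depthwise_intensity(h, w, k, c):
    # Depthwise KxK convolution (MobileNet/EfficientNet building block).
    flops = 2 * h * w * k * k * c
    traffic = BYTES * (h * w * c + k * k * c + h * w * c)
    return flops / traffic

dense = conv_intensity(56, 56, 3, 128, 128)   # ~487 FLOPs/byte
dw = depthwise_intensity(56, 56, 3, 128)      # ~4.5 FLOPs/byte
print(f"dense: {dense:.0f} FLOPs/byte, depthwise: {dw:.1f} FLOPs/byte")
```

An accelerator whose peak compute needs hundreds of FLOPs per byte to stay busy will idle on the depthwise layer, which is the effect the newer models expose.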

Sign up for the free insideBIGDATA newsletter.
