insideBIGDATA Latest News – 10/20/2020


In this regular column, we’ll bring you all the latest industry news centered on our main topics of focus: big data, data science, machine learning, AI, and deep learning. Our industry is constantly accelerating, with new products and services being announced every day. Fortunately, we’re in close touch with vendors from this vast ecosystem, so we’re in a unique position to inform you about all that’s new and exciting. Our massive industry database is growing all the time, so stay tuned for the latest news items describing technology that may make you and your organization more competitive.

MariaDB SkySQL Adds Distributed SQL for Scalability and Elasticity in Major New Update

MariaDB® Corporation today announced a major expansion of its MariaDB SkySQL cloud database. With this update, SkySQL now runs the latest version of MariaDB Platform X5, which most notably adds distributed SQL capabilities for global scale. With the ability to be deployed as clustered or distributed, MariaDB SkySQL addresses customers’ specific needs all within one powerful, indestructible cloud database.

“We built MariaDB SkySQL to reduce the complexities introduced by first-generation cloud databases,” said Michael Howard, CEO, MariaDB Corporation. “The current landscape requires a smorgasbord of cloud services to get a single job done – AWS RDS for simple transactions, Aurora for availability and performance, Redshift for cloud data warehousing and Google Spanner for distributed SQL. SkySQL gives you all these capabilities in one elegant cloud database that delivers a consistent MariaDB experience regardless of the way you deploy it.”

TigerGraph Demonstrates Scalability to Support Massive Data Volumes, Complex Workloads and Real-World Business Challenges

TigerGraph, the scalable graph database for the enterprise, announced the results of the first comprehensive graph data management benchmark study using nearly 5TB of raw data on a cluster of machines – and the performance numbers prove graph can scale with real data, in real time. The company used the Linked Data Benchmark Council Social Network Benchmark (LDBC SNB), recognized as the reference standard for evaluating graph technology performance with intensive analytical and transactional workloads. TigerGraph is the industry’s first vendor to report LDBC benchmark results at this scale. TigerGraph is able to run deep-link OLAP queries on a graph of almost nine billion vertices (entities) and more than 60 billion edges (relationships), returning results in under a minute.
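
A “deep-link” query of the kind benchmarked here is, at its core, a constrained multi-hop graph traversal. The pure-Python sketch below (not TigerGraph’s GSQL; the graph and names are invented for illustration) shows the essential operation: collecting every vertex reachable within k hops of a starting vertex.

```python
from collections import deque

def k_hop_neighbors(adj, start, k):
    """Breadth-first search that collects every vertex reachable
    within k hops of `start` -- the core of a deep-link query."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue  # do not expand past the hop limit
        for neighbor in adj.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    seen.discard(start)  # the start vertex is not its own neighbor
    return seen

# Toy social graph: person -> people they follow (invented data)
graph = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": ["dave", "erin"],
    "dave": ["frank"],
}

print(sorted(k_hop_neighbors(graph, "alice", 2)))  # -> ['bob', 'carol', 'dave', 'erin']
```

At LDBC SNB scale the same traversal runs over billions of vertices and tens of billions of edges, which is why distributed execution and careful frontier management matter.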

“This benchmark and these results are significant, both for TigerGraph and the overall market. While TigerGraph has multiple customers in production with 10X data size and number of entities/relationships, this is the first public benchmark report where anyone can download the data, queries, and perform the benchmark. No other graph database vendor or relational database vendor has demonstrated equivalent analytical capabilities or performance numbers,” said Dr. Yu Xu, CEO and founder, TigerGraph. “If there was lingering uncertainty about graph’s ability to scale to accommodate large data volumes in record time, these results should eliminate those doubts. Graph is the engine that enables us to answer high-value business questions with complex real data, in real time, at scale. TigerGraph’s ongoing work in advanced graph analytics has been validated by market recognition, innovative customer applications and continued product evolution – and these benchmark results confirm the company’s position as a clear market leader, succeeding where other vendors have failed.”

NXP Announces Expansion of its Scalable Machine Learning Portfolio and Capabilities 

NXP Semiconductors N.V. (NASDAQ: NXPI) announced that it is enhancing its machine learning development environment and product portfolio. Through an investment, NXP has established an exclusive, strategic partnership with Canada-based Au-Zone Technologies to expand NXP’s eIQ™ Machine Learning (ML) software development environment with easy-to-use ML tools and expand its offering of silicon-optimized inference engines for Edge ML.

Au-Zone’s DeepView™ ML Tool Suite will augment eIQ with an intuitive, graphical user interface (GUI) and workflow, enabling developers of all experience levels to import datasets and models, rapidly train, and deploy NN models and ML workloads across the NXP Edge processing portfolio. To meet the demanding requirements of today’s industrial and IoT applications, NXP’s eIQ-DeepViewML Tool Suite will provide developers with advanced features to prune, quantize, validate, and deploy public or proprietary NN models on NXP devices. Its on-target, graph-level profiling capability will provide developers with unique, run-time insights to optimize NN model architectures, system parameters, and run-time performance. By adding Au-Zone’s DeepView run-time inference engine to complement open source inference technologies in NXP eIQ, users will be able to quickly deploy and evaluate ML workloads and performance across NXP devices with minimal effort. A key feature of this run-time inference engine is that it optimizes the system memory usage and data movement uniquely for each SoC architecture.

“NXP’s scalable applications processors deliver an efficient product platform and a broad ecosystem for our customers to quickly deliver innovative systems,” said Ron Martino, Senior Vice President and General Manager of Edge Processing at NXP Semiconductors. “Through these partnerships with both Arm and Au-Zone, in addition to technology developments within NXP, our goal is to continuously increase the efficiency of our processors while simultaneously increasing our customers’ productivity and reducing their time to market. NXP’s vision is to help our customers achieve lower cost of ownership, maintain high levels of security with critical data, and to stay safe with enhanced forms of human-machine-interaction.”

Neo4j Announces the First Graph Machine Learning for the Enterprise

Neo4j, a leader in graph technology, announced the latest version of Neo4j for Graph Data Science™, a breakthrough that democratizes advanced graph-based machine learning (ML) techniques by leveraging deep learning and graph convolutional neural networks.

Until now, few companies outside of Google and Facebook have had the AI foresight and resources to leverage graph embeddings. This powerful and innovative technique calculates the shape of the surrounding network for each piece of data inside of a graph, enabling far better machine learning predictions. Neo4j for Graph Data Science version 1.4 democratizes these innovations to upend the way enterprises make predictions in diverse scenarios, from fraud detection and tracking customer or patient journeys to drug discovery and knowledge graph completion.

Graph embeddings are a powerful tool to abstract the complex structures of graphs and reduce their dimensionality. This technique opens up a wide range of uses for graph-based machine learning.
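
To make the idea concrete, here is a minimal pure-Python sketch of the neighbor-averaging intuition behind embedding techniques such as FastRP: every node starts with a random vector and repeatedly blends in the mean of its neighbors’ vectors, so nodes with similar neighborhoods drift toward similar embeddings. This is an illustrative toy, not Neo4j’s implementation; the graph and parameters are invented.

```python
import random

def embed(adj, dim=4, iters=3, seed=42):
    """Toy graph embedding: assign each node a random vector, then
    repeatedly blend it with the mean of its neighbors' vectors.
    Nodes with similar neighborhoods end up with similar vectors,
    which is the intuition behind random-projection embeddings."""
    rng = random.Random(seed)
    vec = {n: [rng.uniform(-1, 1) for _ in range(dim)] for n in adj}
    for _ in range(iters):
        new = {}
        for node, nbrs in adj.items():
            if not nbrs:
                new[node] = vec[node]
                continue
            mean = [sum(vec[m][i] for m in nbrs) / len(nbrs)
                    for i in range(dim)]
            # blend half own vector, half neighborhood mean
            new[node] = [0.5 * a + 0.5 * b
                         for a, b in zip(vec[node], mean)]
        vec = new
    return vec

# Undirected toy graph with two tight clusters (invented data)
adj = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"],
    "x": ["y", "z"], "y": ["x", "z"], "z": ["x", "y"],
}
emb = embed(adj)
```

The resulting low-dimensional vectors can then feed any downstream classifier or similarity search, which is what makes embeddings useful for ML pipelines.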

“We are thrilled to bring cutting-edge graph embedding techniques into easy-to-use enterprise software,” said Dr. Alicia Frame, Lead Product Manager and Data Scientist at Neo4j. “What we’ve brought to bear with the latest version of Neo4j for Graph Data Science democratizes state-of-the-science techniques, and makes it possible for anyone to use graph machine learning. This is a game-changer for what can be achieved with predictive analysis.”

Sisense Empowers SDL with Data and Analytic Insight to Help Drive Global Business Expansion

Sisense, a leading analytics platform for builders, announced that SDL has provided all employees with instant access to easily consumable customer and project analytics using the Sisense platform. This has accelerated data-driven decision-making and the ability to manage customer projects for SDL, the intelligent language and content company. Holistic, near real-time visibility into customer demand was not possible before the company adopted the Sisense platform.

“As a company of our size — and global reach — we’re inundated with data. Sometimes it can be difficult to find simple answers,” said Marion Shaw, Director of Data and Analytics, SDL. “We’re really delighted with the capability of the Sisense platform; it’s far beyond what we’ve ever had, and its flexibility helps us give the business the answers they need. It’s also constantly evolving, helping us to stay ahead of market demands.”

Fluree Open Sources Its Entire Web3 Data Platform 

Fluree, a market leader in secure data management, is releasing its core source code under the AGPL open source license. Developers can now pull from and contribute to Fluree on GitHub, in turn building a new internet ecosystem that promotes data-centric security, traceability, and global interoperability.

“By open sourcing our technology, we reject the status quo practice of locking data up in proprietary format, and instead solidify our commitment to building best-in-class open source solutions to modern data management problems,” said Fluree Co-CEO Brian Platz. “We are offering enterprises a bridge from vendor lock-in towards a future of complete data ownership, portability, and interoperability.”

Espressive Barista Understands 1.3 Billion Employee Language Phrases Out-of-the-Box, Bridging Gap Between AI and the Semantics of Human Language

Espressive, a pioneer in artificial intelligence (AI) for enterprise service management (ESM), announced its AI-based virtual support agent, Espressive Barista, now understands 1.3 billion phrases across 14 major enterprise services teams and nine languages. Barista represents a paradigm shift in the delivery of AI-based employee self-help. Powered by an advanced natural language processing (NLP) engine and sophisticated machine learning capabilities, Barista bridges the gap between AI and the semantics of human language. Espressive also announced that Barista is the first virtual support agent to integrate with Interactive Voice Response (IVR) systems for enterprise service management. The new integration reduces help desk calls by offering direct access to Barista for employees on hold, providing answers in three seconds. With Barista, enterprises can benefit from the highest help desk call deflection, while increasing employee adoption and workforce productivity.

“Many people question whether AI really understands what it reads. After all, it doesn’t have the common sense to understand human language,” said Pat Calhoun, founder and CEO of Espressive. “We believe you can solve this issue by bridging the gap between AI and the semantics of human language with enough data, sophisticated technology, and talent. We recognized that the success of Amazon Alexa and Google Home is predicated on a high degree of accuracy due to millions of consumers who use them daily in addition to an army of data scientists, computational linguists, developers, and machine learning engineers tuning the AI engine behind the scenes. So, we designed Barista to replicate that model. I’m proud to announce that today, Barista understands over 1.3 billion enterprise phrases with a high degree of accuracy, and that number grows daily. That’s why our customers experience the highest ticket deflection in the industry.”

RudderStack Launches RudderStack Cloud – Customer Data Platform Built for Developers Offers Key Integrations Including Snowflake and DBT

RudderStack, which provides a Customer Data Platform (CDP) designed specifically for developers, launched the next generation of its SaaS offering, RudderStack Cloud, the most efficient, affordable and sophisticated customer data product for developers. Offering integrations with platforms such as Snowflake and DBT, RudderStack Cloud solves the data silo problem by enabling data engineers to unify their data and add CDP functionality on top of their own warehouse.

Traditional CDPs have tried to solve for data collection and activation, but unfortunately most of them make the problem worse by creating additional data silos and integration gaps. Data engineers often find themselves stuck in the middle, only partially leveraging the power of tools like Snowflake and DBT because other components of the stack don’t integrate with their larger data workflow. RudderStack puts developers, their preferred tools and modern architectures front and center, helping data engineers and their companies discover powerful new opportunities in the way they connect these critical systems and put them to work across the organization.

“It’s time for a new approach in the way companies architect their customer data stacks and how CDPs fit into the toolset, and that’s exactly what we’re building at RudderStack,” said Soumyadeb Mitra, CEO of RudderStack. “By enabling developers to turn their warehouse into a CDP, we’re removing data silos, solving security concerns, and making the richest possible data more widely available across the entire organization.”

Treasure Data Provides Game-Changing Analytics for Brands with Launch of Treasure Insights

Treasure Data™ introduced new product capabilities for its Customer Data Platform (CDP) that provide game-changing analytics to brands. Treasure Data announced 15 new integrations, bringing the total number of connectors in its network to more than 170. Finally, with this release Treasure Data also launched an in-store SDK (software development kit) that provides retailers a complete, unified view of the shoppers’ journey. 

“Treasure Data empowers businesses to build insights at the speed of customer decisions,” said Rob Parrish, Vice President of Product, Treasure Data. “Backed by our industry-leading customer data management capabilities, Treasure Data continues to build on its comprehensive solution to further accelerate time to value for our customers.” 

Informatica Announces Advanced Capabilities in Enterprise Cloud Data Management to Help Businesses Swiftly Transform in the Cloud

Informatica, the enterprise cloud data management leader, announced new advanced capabilities designed to help customers rapidly become cloud-first and cloud-native during the global pandemic. IDC predicts continued double-digit growth in infrastructure digital transformation in 2020 during the pandemic as companies increasingly invest in the cloud to accelerate their digital transformation efforts. Informatica has been at the forefront of enterprise cloud data management, continuously innovating to help its customers succeed in the Cloud-AI era.

“Customer-focused innovation with a pulse on the industry is what drives Informatica’s market leadership in enterprise cloud data management,” said Jitesh Ghai, SVP and GM, Data Management, Informatica. “We have made significant enterprise scale, cloud-native and AI-powered investments in product and platform innovation as shown in our Gartner Magic Quadrant leadership in all five key categories of data management. As businesses transform themselves using cloud analytics to stay competitive amidst a global pandemic, Informatica is well-positioned to help them succeed in the Cloud-native and AI era.”

GoSpotCheck Builds on Google Cloud for Real-Time Activity Tracking for Some of the World’s Biggest Brands

GoSpotCheck, the software company reimagining how tomorrow’s workforce works, announced that it has integrated Looker, the business intelligence (BI) and analytics platform from Google Cloud, to create a platform for building customized data experiences that accelerate business outcomes for its customers.

GoSpotCheck (GSC) is a mobile task management platform that connects frontline workers with corporate goals and directives, creates a shared view of the field, and helps leaders make better decisions, faster. By deploying Looker, GSC was able to create 225 customized data experiences that seamlessly fit into existing workflows to deliver real-time data at the point of need, and reduce the overall time needed to build reports by 70%. Today, GSC delivers insights 95% faster to hundreds of its top enterprise customers worldwide, including Dole, Fruit of the Loom, Save A Lot, and Under Armour.

“A lot of our customers operate in complex ecosystems where they have a lot going on, and with Looker we ensure that data isn’t one of the things they need to worry about. We provide the right amount of data to different levels of users in the enterprise and visualize it in the ways they want to consume it based on their role or business objectives. Being able to get the right reporting to these different layers of stakeholders provides incredible value to our customers and a serious competitive advantage,” said Jeff Wrona, VP of Strategic Implementations at GSC.

Kespry Collaborates with Microsoft to Deliver Kespry Perception Analytics for Intuitively Searching and Analyzing Complex Visual and Geospatial Data

Kespry, a leading visual search and analytics solution provider, announced the availability of Kespry Perception Analytics. The solution is designed for industrial use cases requiring comprehensive analysis of complex visual data, including asset condition tracking and identifying business-impacting anomalies. Kespry Perception Analytics vertically integrates as an ISV solution for the Microsoft Dynamics 365 and Power Platform.

At the heart of Kespry Perception Analytics is a knowledge graph that accurately maps a company’s entire library of visual data, including media files and photogrammetric output, by types of physical assets, their specific geographic location, and the types and times of issues identified. The platform provides a comprehensive toolset to ingest and index the data, as well as leverage Microsoft’s Azure machine learning (ML) to generate insights on the data. What differentiates Kespry Perception Analytics is its intuitive search and analytics capabilities that enable reliability and maintenance teams to query data without any coding knowledge. It offers interactive dashboards and data visualization tools to analyze the health of assets across the company.

“Kespry Perception Analytics delivers unprecedented business insight and solves major problems for industrial companies that have struggled to get meaningful value from visual data in a timely manner,” said George Mathew, CEO, Kespry. “It provides companies with a more complete view of the state of assets than just depending on telemetry data alone. It’s designed with a simple interface to help users intuitively navigate through complex analysis with ease.”

Privacera Platform 4.0 Automates Enterprise Data Governance Lifecycle

Privacera, a cloud data governance and security leader founded by the creators of Apache Ranger™, announced the general availability of version 4.0 of the Privacera Platform, an enterprise data governance and security solution for machine learning and analytic workloads in the public cloud. Driven by increasing customer demand, Privacera 4.0’s new features include: access workflows for faster on-boarding and customized data access; expanded discovery for seamless data tagging in complex infrastructures; and an encryption gateway for automated encryption and decryption abilities.

“For enterprises to truly maximize the value of their data, they must ensure they know exactly where their sensitive data is located and who has access to it, which can be a very time-consuming and manual process for many,” said Srikanth Venkat, VP of Product at Privacera. “Privacera 4.0 is a direct response to this need, and we’ve made significant improvements to provide our customers the most seamless experience possible. We’ve made the entire governance lifecycle completely automated for our customers, ensuring they’re protected across even the most complex of infrastructures.” 

Data Analytics Customers Value Choice and Simplicity; Teradata’s New Flexible Cloud Pricing Provides Both

Recognizing that data analytics workloads, usage patterns, and utilization rates can vary widely across an organization, Teradata (NYSE: TDC), the cloud data analytics platform company, announced flexible cloud pricing options to make it easy for enterprises to grow, and benefit from data analytics in the cloud. In keeping with Teradata’s aim to provide its customers with simplicity and choice, the company now offers two flexible cloud pricing models: Blended and Consumption. Blended Pricing is best suited for high usage and provides the ultimate in billing predictability while delivering the lowest cost at scale. Consumption Pricing is an affordable, pay-as-you-go option best suited for ad hoc queries and workloads with typical or unknown usage that delivers cost transparency for easy departmental chargeback. With broad availability of both models, enterprises can expect more choice, lower risk, higher efficiency, and greater transparency from Teradata. These options are crucial in today’s unpredictable market where technologies, supply chains, and customer expectations can shift abruptly, leaving companies with stranded data analytics investments if their software fails to provide enough flexibility to evolve as needs change.
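
The tradeoff between the two models comes down to simple arithmetic: a flat blended rate wins for heavy, steady usage, while metered consumption wins for light or sporadic usage. The sketch below uses invented rates purely to illustrate that crossover; they are not Teradata’s actual prices.

```python
# Illustrative comparison of the two pricing styles described above.
# All rates are invented for this sketch -- not Teradata's actual prices.

def blended_cost(months, monthly_rate=10_000):
    """Flat subscription: a predictable bill regardless of usage."""
    return months * monthly_rate

def consumption_cost(units_used, rate_per_unit=2.5):
    """Pay-as-you-go: the bill tracks actual usage."""
    return units_used * rate_per_unit

# A heavy, steady workload favors blended pricing...
heavy = consumption_cost(units_used=60_000)  # 150000.0 if metered
steady = blended_cost(months=12)             # 120000 on a flat rate

# ...while a light, ad hoc workload favors consumption pricing.
light = consumption_cost(units_used=8_000)   # 20000.0 if metered
```

The crossover point depends entirely on the actual rates and utilization, which is why offering both models lets each workload pick the cheaper side of the line.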

“If 2020 has taught us anything, it’s that change happens fast, and having simple, flexible cloud pricing options gives customers the freedom needed to optimize their data analytics investments,” said Hillary Ashton, Chief Product Officer at Teradata. “Different analytic use cases have vastly different utilization patterns at different points in time, which means that having choice in pricing models enables Teradata to offer the best one for each customer scenario ranging from a small ad hoc discovery system to a large production analytics environment.”

Cyxtera Brings Innovative AI/ML Compute as a Service Offering to Federal Market

Cyxtera, a global leader in data center colocation and interconnection services, announced the availability of its landmark Artificial Intelligence/Machine Learning (AI/ML) compute as a service offering for government agencies needing leading-edge infrastructure to power AI workloads. The first-of-its-kind offering in the market leverages the NVIDIA DGX™ A100 system and will be delivered via Cyxtera’s Federal services exchange platform, which is certified as FedRAMP Ready at the High Impact Level, from the company’s highly secure data centers in Northern Virginia and Dallas-Fort Worth.

The availability of Cyxtera’s AI/ML compute as a service solution provides government agencies, as well as their contractors and sub-contractors, greater performance, agility and rapid deployment of infrastructure to support AI workloads. The offering also eliminates the need for significant capital outlays and extended provisioning cycles typically required for systems and supporting infrastructure designed to meet the needs of government AI-related projects.

“Bringing the power of the NVIDIA DGX-powered AI/ML compute as a service offering to the government market further enhances Cyxtera’s ability to deliver a robust set of secure, leading-edge infrastructure options to meet the evolving needs of our federal government customers,” said Leo Taddeo, President of Cyxtera Federal Group and Chief Information Security Officer for Cyxtera. “With Cyxtera’s FedRAMP Ready status at the High Impact Level for on-demand infrastructure and interconnection solutions built for sensitive federal government data, our team is able to provide public sector customers with a leading-edge solution aligned with the federal government’s ‘Cloud Smart’ prioritization in IT modernization efforts.”

Couchbase Advances Edge Computing Innovation with 2.8 Release of Couchbase Lite and Sync Gateway

Couchbase, the creator of the enterprise-class, multicloud to edge NoSQL database, announced version 2.8 of Couchbase Lite and Couchbase Sync Gateway for mobile and edge computing applications. Available now, the release gives organizations the power to take full advantage of a distributed cloud architecture, creating always-fast, always-on applications that guarantee business uptime even in a disconnected computing environment.

“Enterprises of all types are continuing to explore what edge computing has to offer, and we’re starting to approach the point where it reaches its full potential,” said Ravi Mayuram, Senior Vice President of Engineering and CTO, Couchbase. “By making applications less and less reliant on synchronization with a central server, we’re giving enterprises the tools they need to take full advantage of edge. Regardless of their environment, enterprises can seamlessly spin up new edge deployments as and when they’re needed and take advantage of enhanced data transfer capabilities that make edge applications smarter than ever.”

Machine Learning Comes to MariaDB Open Source Database with MindsDB Integration

MindsDB, the open source AI layer for existing databases, announced its official integration with the widely used open source relational database, MariaDB Community Server. The integration answers a longstanding demand from database users: bringing machine learning capabilities into the database itself and democratizing ML use. MindsDB applies machine learning models directly in the database by providing an AI layer that lets database users deploy state-of-the-art machine learning models using standard SQL queries. AI-Tables let database users leverage predictive data inside the database for easier and more effective machine learning projects.
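
In practice the workflow looks like ordinary SQL: one statement trains a predictor from existing rows, and a second queries the resulting AI-Table as if it were a normal table. The sketch below is a hypothetical illustration; the table and column names are invented, and the exact MindsDB statement syntax varies by version, so treat it as a shape of the workflow rather than copy-paste syntax.

```python
# Hypothetical sketch of an AI-Tables workflow against MariaDB.
# Table/column names are invented; MindsDB syntax varies by version.

train_sql = """
CREATE PREDICTOR rentals_model
FROM mariadb_datasource (SELECT * FROM home_rentals)
PREDICT rental_price;
"""

query_sql = """
SELECT rental_price
FROM mindsdb.rentals_model
WHERE sqft = 900 AND neighborhood = 'downtown';
"""

# In a real deployment these statements would be executed through any
# standard MariaDB client, e.g.:
#   cursor.execute(train_sql)
#   cursor.execute(query_sql)
print(train_sql.strip())
print(query_sql.strip())
```

The appeal of this pattern is that the prediction step reuses the database’s existing access control, drivers, and SQL tooling instead of introducing a separate ML serving stack.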

“As MindsDB sets out to democratize machine learning, we’re excited to offer ML capabilities to the MariaDB community,” said MindsDB co-founder, Adam Carrigan. “MariaDB shares our vision and understands that putting machine learning tools in the hands of the users that know their data best is the most effective way to solve their problems.”

Nexla Launches Nexsets to Reimagine DataOps and Drive Self-Service

Nexla, a converged data fabric, announced the launch of Nexsets, an innovative technology that makes data operations collaborative and enables data teams to easily enrich, secure, share, and validate data. The Nexla platform applies continuous intelligence on data to automate time-consuming engineering tasks. The result is a new way to drive self-service data integration and transformation. With the launch of Nexsets, business users now have access to curated data views that make it easy to connect data with any application with minimal engineering support.

Data environments are increasingly complex, and working with data across disparate systems is challenging. Integrating data, creating and managing APIs, maintaining security, and transforming and preparing data for a wide variety of systems and applications all place a huge burden on engineering resources. As a result, data engineers struggle to meet the needs of business users, and in turn, business users struggle to leverage data to move the business forward.

Upsolver Creates First-Ever Truly Open Cloud Lakehouse, Releases Native Ingestion Connectors To Redshift And Snowflake

Upsolver, provider of a cloud-native lakehouse engine, announced the release of native ingestion connectors to Amazon Redshift and Snowflake (NYSE: SNOW), creating the first-ever truly open cloud lakehouse. Using Upsolver’s platform, enterprises can now easily switch between data warehouses and data lake query engines, across multiple vendors.

While data warehouses are excellent for business intelligence, they cannot address all modern enterprises’ data processing needs, such as streaming, text search, and machine learning. And although data lakes are a cost-effective way to store vast amounts of data, they are complex to manage and require expensive engineering expertise (on-premises and in the cloud). Upsolver’s cloud lakehouse engine empowers organizations to achieve the cost and flexibility advantages of a data lake combined with the ease-of-use of a data warehouse.

“Solutions like Redshift and Snowflake are amazing for making data valuable, but one database cannot solve all use cases,” said Ori Rafael, CEO of Upsolver. “Organizations should be able to leverage multiple database engines and easily switch between them according to their use case, in-house skills, and cost restrictions. This is the vision of the open cloud lakehouse and Upsolver is the engine that powers it.”

Micro Focus Introduces New Data Analytics Capability to Drive Full-Stack AIOps

Micro Focus (LSE: MCRO; NYSE: MFGP) announced the release of its ITOM “Collect Once Store Once” Data Lake (COSO), utilizing an open access data platform built on Vertica to drive full-stack AIOps across the broad set of Micro Focus monitoring and automation solutions. COSO is now an integrated part of Micro Focus Operations Bridge, Network Operations Management and Data Center Automation. This approach to providing a full spectrum of reporting and insights across multi-domain monitoring, management and patch compliance is only available from Micro Focus.

“The diversity of data available to IT operations today makes it challenging to solve complex issues across multi-cloud and on-premises IT services,” said Tom Goguen, Micro Focus Chief Product Officer. “COSO now offers a unique collection and storage capability built on Vertica’s powerful, high-speed data analytics platform. Combine COSO with our world-class discovery, monitoring, process automation and patch management tools, and you can have full-stack AIOps today to identify root cause and restore service faster than ever before.”

O’Reilly Launches Powerful New Tool for Learning in the Flow of Work: O’Reilly Answers

O’Reilly, the source for insight-driven learning on technology and business, announced the launch of O’Reilly Answers, an advanced natural language processing (NLP) engine that delivers quick, contextually relevant answers to challenging technical questions posed by users through O’Reilly online learning. With a one-click integration into Slack, O’Reilly Answers helps users learn from and discover the content that moves business forward.

Leveraging advanced machine learning techniques, the O’Reilly Answers search engine provides relevant highlights and snippets from O’Reilly’s library of expert content across thousands of O’Reilly’s titles, pointing users directly to only the most applicable resources and eliminating noise. To encourage deeper discovery, the feature allows users to drill down into full content pieces from referenced titles. To further improve productivity, all functions of O’Reilly Answers are available through a simple Slack integration.

“Over half of all O’Reilly usage is non-linear learning – finding fast solutions that can quickly be applied to work. Taking time to dig up resources can mean the difference between moving to the next step or stalling on a project,” said Laura Baldwin, President, O’Reilly Media. “As we fall into step with the new pace of organizational change, O’Reilly Answers helps bridge the gap between learning and knowledge, eliminating the need for lengthy training sessions and helping users get back to work with the tools they need to get the job done.”

Sign up for the free insideBIGDATA newsletter.
