insideBIGDATA Latest News – 5/19/2020


In this regular column, we’ll bring you all the latest industry news centered around our main topics of focus: big data, data science, machine learning, AI, and deep learning. Our industry is constantly accelerating, with new products and services being announced every day. Fortunately, we’re in close touch with vendors from this vast ecosystem, so we’re in a unique position to inform you about all that’s new and exciting. Our massive industry database is growing all the time, so stay tuned for the latest news items describing technology that may make you and your organization more competitive.

Zebra Medical Vision Secures Its 5th FDA Clearance, Making Its Vertebral Compression Fractures AI Solution Available in the U.S.

Zebra Medical Vision, the deep-learning medical imaging analytics company, announced its fifth FDA 510(k) clearance for its Vertebral Compression Fractures (VCF) product. The company’s latest AI solution automatically identifies findings suggestive of compression fractures, enabling clinicians to place patients who are at risk of osteoporosis in treatment pathways to prevent potentially life-changing fractures. The VCF product expands the company’s growing AI1™ (all-in-one) bundle of FDA-cleared AI solutions, which has now received a fourth US patent in its bone health series. Nearly half of all women and a quarter of men over the age of 50 will suffer an osteoporotic fracture in their lifetime. According to the National Osteoporosis Foundation (NOF), the cost of osteoporosis-related fragility fractures to the U.S. is estimated to be $52 billion annually. Osteoporosis, also referred to as “The Silent Killer,” is the most common preventable cause of fractures, causing more than 2 million broken bones in the U.S. alone every year.

Zebra-Med is the first AI start-up in medical imaging to receive FDA clearance for a population health solution, which leverages AI to stratify risk, improve patients’ quality of life, and reduce the cost of care. The software substantially increases detection rates, raising the number of patients eligible for treatment, without the need for additional staff, imaging or radiation. Sites that run fracture-prevention programs or population-management programs use Zebra Medical Vision to systematically onboard people to these programs and initiate further examinations and treatment.

“Identifying patients at risk for osteoporosis has a significant impact on patients’ well-being, as 70 percent of vertebral compression fractures are underdetected globally,” says Ohad Arazi, CEO of Zebra Medical Vision. “These missed care opportunities are especially vital during this era of COVID-19, when many patient procedures have been postponed, and providers are dealing with substantial backlogs. The VCF product—our fifth FDA-cleared solution on the market—will allow us to expand our reach in the U.S. and help more clinicians and caregivers identify a large number of these fractures.”

Supahands Announces Launch of The Supahands Opus Infrastructure to Support Global Growth and Competitive Advantage

Supahands, the global data labeling partner for quality Machine Learning and Artificial Intelligence training data sets, today announced the launch of its Opus Infrastructure (OI). 

The OI offers a fully-managed customer experience for a wide variety of data labeling needs, such as image annotation, sentiment tagging and data transcription. Featuring Supahands’ sophisticated proprietary technology, the OI enables organizations to boost operational agility and accelerate competitive advantages by engaging an end-to-end managed service that produces quality training data for Machine Learning and Artificial Intelligence at scale, with customisable technology and project-specific workflows.

“We are committed to helping our customers stay at the forefront of their industry by understanding the need and importance of high-quality training data, and we pride ourselves on ensuring that every project we deliver is completed at high quality and high accuracy – with minimal disruption to the customers,” said Mark Koh, CEO and co-founder at Supahands.

Hazelcast Simplifies High-Performance Stream Processing for Edge Computing Environments

Hazelcast, a leading in-memory computing platform, announced significant optimizations to its real-time event streaming engine, Hazelcast Jet, further establishing it as a premier solution for edge computing environments. In addition to increased platform performance, this release simplifies integrations with Red Hat OpenShift and Kubernetes and expands support for additional transactional sinks.

Whether the use case is the data streaming from a sensor in a manufacturing plant or an autonomous vehicle, the time it takes to process data can mean the difference between success and catastrophic failure. As such, edge computing is about placing compute power as close to the point of data origination as possible to derive value and implement the resulting insights – all within microseconds.

“We view the blink of an eye as a minuscule amount of time, but in reality, it takes 300,000 microseconds,” said Kelly Herrell, CEO of Hazelcast. “By placing processing power at the edge and turbocharging it with Hazelcast’s ultra-low latency stream processing capabilities, new applications will be enabled that will innovate our business capabilities and improve our daily lives.”

Spark NLP 2.5 Delivers State-of-the-art Accuracy for Spell Checking and Sentiment Analysis

John Snow Labs announced the immediate availability of the new major version of Spark NLP – the widely used natural language processing library in the enterprise. The library can be used from Python, Java, and Scala APIs and comes with over 150 pre-trained models and pipelines.

“When we started planning for Spark NLP 2.5 a few months ago, the world was a different place. We have been blown away by the use of Natural Language Processing for early outbreak detections, question-answering chatbot services, text analysis of medical records, monitoring efforts to minimize the spread of COVID-19, and many more,” said Maziyar Panahi, a lead contributor to Spark NLP.

Franz Inc. Advances Knowledge Graph Visualization with Enhanced Gruff Integrated into AllegroGraph

Franz Inc., an early innovator in Artificial Intelligence (AI) and leading supplier of Semantic Graph Database technology for Knowledge Graph Solutions, announced Gruff 8, a browser-based graph visualization software tool for exploring and discovering connections within enterprise Knowledge Graphs. Gruff 8, which has been integrated into AllegroGraph 7, enables users to visually build queries and visualize connections between data without writing code, which speeds discoveries and enhances the ability to uncover hidden connections within data.

“By augmenting Knowledge Graphs with visualizations, users can determine insights that would otherwise elude them,” said Jans Aasman, CEO of Franz Inc. “Gruff’s dynamic data visualizations increase users’ understanding of data by instantly illustrating relevant relationships, hidden patterns and data’s significance to outcomes. Gruff also helps make data actionable by displaying it in a way that decision-makers can see the significance of data relative to a business problem or solution.”

Iguazio GPU-as-a-Service Solution Becomes Certified for NVIDIA DGX-Ready Software Program

Iguazio, the data science platform for real-time machine learning applications, announced that the company’s solution for utilizing GPU-as-a-Service has been certified as part of the NVIDIA DGX-Ready Software program. Managing and controlling resources as part of ML pipelines, which span data processing, model training, and inference, can pose challenges such as inefficient resource sharing, workload and data scaling, and high development and operations overhead.

“Iguazio provides a platform that automates the entire machine learning pipeline from start to finish,” said Yaron Haviv, Co-Founder & CTO at Iguazio. “Certifying the Iguazio Data Science Platform in the DGX-Ready Software program allows users to leverage our optimized AI workflow solution on NVIDIA DGX systems with ease and confidence.”

MariaDB SkySQL Adds ‘Power Tier’ for Enterprises That Demand Distinction

MariaDB® Corporation announced the immediate availability of MariaDB SkySQL Power, the first database-as-a-service (DBaaS) offering that lets enterprises customize options and configurations to fit their distinct requirements. Built on top of SkySQL’s Foundation, which delivers the complete MariaDB Platform experience in the cloud, Power adds important benefits such as the ability to customize instance types to maximize efficiency and resource utilization for a lower total cost of ownership (TCO), and the ability to meet specific enterprise security, high availability or disaster recovery requirements.

“With SkySQL Power, we’re listening to our customers instead of telling them how they should work in the cloud,” said Michael Howard, CEO of MariaDB Corporation. “Traditional DBaaS solutions don’t let enterprises express their uniqueness through their deployments. They offer standard database templates forged from spreadsheets, rather than real usage. With SkySQL, we’re taking a different approach. Our customers get convenience through SkySQL Foundation and, if they have specific requirements, they can get a custom deployment that meets their needs through Power.”

Deep Lens First to Integrate Cancer Genetic Data into AI Platform to Rapidly Match Patients to Precision Therapies and Clinical Trials

Deep Lens, a software company focused on a groundbreaking approach to faster recruitment of the best-suited cancer patients to clinical trials, has integrated proprietary molecular data parsing and management technology into the company’s award-winning clinical trial screening and enrollment platform, VIPER™. This breakthrough integration will enable cancer care teams, clinical trial sponsors, and trial coordinators to immediately and automatically match patients based on the genetic profile of their cancers to the best precision therapies and oncology clinical trials.

“Integrating this molecular data parsing and management technology into VIPER and continuing to work with University of Miami on integrating future genomics advancements will ensure that we have the leading and most up-to-date information and technologies,” stated TJ Bowen, Ph.D., co-founder and Chief Scientist at Deep Lens. “Now, cancer care teams, clinical trial sponsors, and trial coordinators can leverage an AI-enabled workflow platform that aggregates all relevant data sources for even faster automation and improved patient matching to increase clinical trial enrollment.”  

MemVerge Introduces Big Memory Computing

MemVerge™, the inventor of Memory Machine software, introduced what’s next for in-memory computing: Big Memory Computing. This new category is sparking a revolution in data center architecture where all applications will run in memory. Until now, in-memory computing has been restricted to a select range of workloads due to the limited capacity and volatility of DRAM and the lack of software for high availability. Big Memory Computing is the combination of DRAM, persistent memory and Memory Machine software technologies, where the memory is abundant, persistent and highly available.

GoodData Announces New Collaborative Data Modeling Solution

GoodData®, a leader in end-to-end analytics solutions, announced the release of a new web-based logical data model (LDM) modeler. The new LDM modeler brings new capabilities and tooling, simplifying data modeling when starting a new data product or extending existing enterprise reporting.

“Achieving smooth collaboration between data engineers and data analysts is the greatest challenge facing enterprises building new data products for customers, or a reporting portal with a roll-out plan for every company branch. With our recent release, companies will bring analytics faster to their markets, both internal and external,” said Zdenek Svoboda, GoodData’s VP of Product & Marketing and co-founder.

DataStax Astra Now Available, Bringing Apache Cassandra Performance, Reliability, and Scale to the Cloud

DataStax announced the general availability of DataStax Astra, a database-as-a-service (DBaaS) for Apache Cassandra™ applications, simplifying cloud-native Cassandra application development. The DBaaS reduces deployment time from weeks to minutes, removing the biggest obstacle to using Cassandra, which is behind many of the most heavily used applications in the world.

“Astra represents a breakthrough for anyone who wants to use Cassandra in the cloud,” said Ed Anuff, chief product officer at DataStax. “We’ve been delivering products built on Cassandra to enterprises that deploy global-scale data for over a decade. Our enterprises and users have been asking for Cassandra-as-a-Service in the cloud. We’re happy to offer Astra as that experience.”

Harbr Emerges From Stealth to Help Organizations Deliver Secure Virtual Collaboration On Data And Models

Harbr, a private data ecosystem platform for enterprise data managers, emerged from stealth. The Harbr platform enables organizations to rapidly exchange, monetize and collaborate on data and models with their customers, suppliers and partners.

The Harbr team spent decades building and deploying data platforms in large enterprises and witnessed data consumers spending too much time acquiring data, making it useful and getting it to where it needed to be. Internal data was often locked inside data lakes and data warehouses that could never satisfy all the use cases in a typical enterprise. Meanwhile, the external data opportunities to increase revenue, reduce cost and accelerate innovation were growing dramatically, but most enterprises were ill-equipped to take advantage of them.

“All large enterprises struggle to realize the value of their data, in part because every company needs data that starts or ends with their customers, suppliers and partners,” said Richard Winter, CEO, Wintercorp. “Perhaps the most critical issue now, is the speed at which companies can exchange and collaborate on data and models with other organizations. This will likely become even more critical in the coming years.”

Databricks Launches Global University Engagement Program to Help Train the Next Generation of Data Scientists

Databricks, the data and AI company, announced the Databricks University Alliance, a global program to help university students get hands-on experience using Databricks for both in-person learning and in virtual classrooms. The program, available at no cost to institutions of higher education, aims to expose students to the online tools and cloud platforms that enable highly scalable data science and machine learning, allowing them to gain critical knowledge and experience needed to drive innovation as they join data teams across the global workforce.

“At Databricks, our mission to help data teams solve the world’s toughest problems extends far beyond the present. We are committed to driving the next generation of business innovation through machine learning and AI,” said Matei Zaharia, chief technologist at Databricks and assistant professor of Computer Science at Stanford University. “Our goal with this program is to continue to help build the capacity of qualified and prepared data scientists, engineers, and analysts who can work with industry-scale data sets on public cloud environments.”

Scale Computing Launches New Performance Tier of HC3 Appliances for Databases and VDI

Scale Computing, a leader in edge computing, virtualization and hyperconverged solutions, announced the HC3250DF, the first of a new class of HC3 appliances designed to enhance support for performance-intensive use cases such as database analytics and high-density Virtual Desktop Infrastructure (VDI) deployments. With faster storage, more CPU, and faster networking options, the HC3250DF is specifically designed for the needs of performance computing for both enterprises and SMBs, from the core data center to the edge.

“Enterprise and SMB customers have a need for an HCI solution that can service larger databases, mid-range VDI, and other performance-intensive use cases,” said Steve McDowell, Senior Technology Analyst at Moor Insights & Strategy. “Scale Computing hits the mark with this latest addition to its core HC3 virtualization system, which has become well-known for its ability to be quickly deployed and easily managed and scaled.”

DeepCube Launches Industry-First Deep Learning Software Accelerator to Enable Real-World AI Deployments

DeepCube, a deep learning pioneer, announced the launch of the only software-based inference accelerator that drastically improves deep learning performance on any existing hardware.

Today, deep learning deployments are very limited and are primarily optimized for the cloud; and, even in these cases, they incur extensive processing costs, significant memory requirements, and high power costs, due to intensive computing demands. These challenges also plague deep learning deployments on edge devices, including drones, mobile devices, security cameras, agricultural robots, medical diagnostic tools and more, where the current size and speed of deep neural networks have limited their potential.

DeepCube focuses on research and development of deep learning technologies that improve the real-world deployment of AI systems. The company’s numerous patented innovations include methods for faster and more accurate training of deep learning models and drastically improved inference performance on intelligent edge devices. DeepCube’s proprietary framework can be deployed on top of any existing hardware (CPU, GPU, ASIC) in both data centers and edge devices, enabling over 10x speed improvement and memory reduction.

“Many deep learning frameworks were developed by researchers, for researchers, and are not applicable to commercial deployment, as they are hindered by technological limitations and high cost requirements for real-world applications,” said Dr. Eli David, Co-Founder of DeepCube. “DeepCube’s technology can enable true deep learning capabilities within autonomous cars, agricultural machines, drones, and could even help potentially monitor for and prevent future global health crises, much like the one we are facing now in 2020.”

TYAN Launches AI-Optimized Server Platforms Powered by NVIDIA V100S Tensor Core GPUs

TYAN®, an industry-leading server platform design manufacturer and a subsidiary of MiTAC Computing Technology Corporation, has launched the latest GPU server platforms that support the NVIDIA® V100S Tensor Core and NVIDIA T4 GPUs for a wide variety of compute-intensive workloads including AI training, inference, and supercomputing applications.

“The use of AI is increasingly infusing into data centers. More organizations plan to invest in AI infrastructure that supports rapid business innovation,” said Danny Hsu, Vice President of MiTAC Computing Technology Corporation’s TYAN Business Unit. “TYAN’s GPU server platforms with NVIDIA V100S GPUs as the compute building block enable enterprises to power their AI infrastructure deployments and help solve the most computationally intensive problems.”

Tableau 2020.2 Introduces New Data Model for Powerful Multi-Source Analysis, Metrics for KPI Monitoring

Tableau Software, a leading analytics platform, announced the general availability of Tableau 2020.2, which delivers a brand new data model that simplifies the analysis of complex data with no coding or scripting skills required. Now it’s even easier for customers to answer complex business questions that span multiple database tables at different levels of detail. The latest release also introduces Metrics, a new mobile-first way for customers to instantly monitor key performance indicators (KPIs).

“Now more than ever, organizations need a combination of speed, agility, and empowerment to ensure everyone is able to make data-driven decisions quickly,” said Francois Ajenstat, Chief Product Officer at Tableau Software. “With new data modeling capabilities, Tableau 2020.2 reduces the effort needed to analyze even the most complex datasets and simplifies analysis for anyone, regardless of expertise. In addition, Metrics enables everyone to instantly access the most vital data at their fingertips so they can make better decisions about their business from anywhere.”

Diffbot Announces Access Through Microsoft Excel and Google Sheets to Supercharge Data Collection Needs

Diffbot – an AI startup that’s indexed the entire web – announced the availability of its Diffbot Knowledge Graph (DKG) query services within Google Sheets and Microsoft Excel to instantly power users’ existing databases with comprehensive and publicly available information about companies and organizations, all with human-level discernment and zero manual research or entry. 

“Millions of people, including myself, rely on spreadsheets daily to gain valuable insights,” said Diffbot CEO Mike Tung. “And now, with the seamless integration of the DKG with both Sheets and Excel, everyday users have access to an extra layer of rich, clean and accurate information, all in one place.”

ScaleOut Software Announces Digital Twin Streaming Service, Introducing Breakthrough Real-Time Analytics for Data-in-Motion

ScaleOut Software announced the general availability of its Digital Twin Streaming Service™, a breakthrough SaaS solution harnessing in-memory cloud computing to provide real-time analytics. By creating “real-time digital twins” that simultaneously analyze telemetry from thousands of streaming data sources, the service enables customers to take advantage of deeper introspection and better real-time decision-making without waiting to query data at rest in data lakes.

With its ability to immediately analyze data in motion from individual data sources in milliseconds and make immediate use of dynamic context for each data source, the ScaleOut Digital Twin Streaming Service can fundamentally change the way industries like real-time monitoring, healthcare, logistics, retail and financial services process live data streams to make critical decisions in the moment.

“Processing streaming data with real-time digital twins revolutionizes streaming analytics by enabling deep introspection in real-time and individualizing it on the fly for each data source,” said Dr. William L. Bain, founder and CEO of ScaleOut Software. “We built the ScaleOut Digital Twin Streaming Service to help our customers dramatically improve situational awareness in their live systems, spanning thousands or even millions of data sources. Whether tracking a fleet of rental cars or a population of smart watch users, real-time digital twins are a game changer for streaming analytics.”
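The per-source idea behind real-time digital twins can be illustrated with a toy sketch: keep a small state object for each data source and update it as each telemetry event arrives, so decisions use that source’s own history immediately rather than a later batch query. This is purely illustrative pseudologic under assumed names (`ingest`, `make_twin`, a hypothetical temperature threshold), not ScaleOut’s actual API.

```python
# Toy "real-time digital twin": one small state object per data source,
# updated in-line as telemetry arrives. Illustrative only.
from collections import defaultdict

def make_twin():
    # Minimal per-source state: event count, running max, alert flag.
    return {"events": 0, "max_temp": float("-inf"), "alert": False}

twins = defaultdict(make_twin)

def ingest(source_id, temp_c):
    # Each event is analyzed in the context of its own source's history,
    # and the decision is made in the moment, not by a query over data at rest.
    twin = twins[source_id]
    twin["events"] += 1
    twin["max_temp"] = max(twin["max_temp"], temp_c)
    twin["alert"] = twin["max_temp"] > 90.0  # hypothetical threshold

for source, temp in [("truck-1", 72.5), ("truck-2", 95.1), ("truck-1", 88.0)]:
    ingest(source, temp)

print(twins["truck-2"]["alert"])  # True: only this source crossed its threshold
```

A production service would, of course, distribute these state objects across an in-memory cluster and fan events in from message hubs, but the key design point survives even in this sketch: the analysis is individualized per data source, on the fly.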

Run:AI creates first fractional GPU sharing for Kubernetes deep learning workloads

Run:AI, a company virtualizing AI infrastructure, today released the first fractional GPU sharing system for deep learning workloads on Kubernetes. Especially suited for lightweight AI tasks at scale such as inference, the fractional GPU system transparently gives data science and AI engineering teams the ability to run multiple workloads simultaneously on a single GPU, enabling companies to run more workloads such as computer vision, voice recognition and natural language processing on the same hardware, lowering costs. 

Today’s de facto standard for deep learning workloads is to run them in containers orchestrated by Kubernetes. However, Kubernetes is only able to allocate whole physical GPUs to containers, lacking the isolation and virtualization capabilities needed to allow GPU resources to be shared without memory overflows or processing clashes. 

Run:AI’s fractional GPU system effectively creates virtualized logical GPUs, with their own memory and computing space that containers can use and access as if they were self-contained processors. This enables several deep learning workloads to run in containers side-by-side on the same GPU without interfering with each other. The solution is transparent, simple and portable; it requires no changes to the containers themselves.

To create the fractional GPUs, Run:AI had to modify how Kubernetes handled them. “In Kubernetes, a GPU is handled as an integer,” said Dr. Ronen Dar, co-founder and CTO of Run:AI. “You either have one or you don’t. We had to turn GPUs into floats, allowing for fractions of GPUs to be assigned to containers.” Run:AI also solved the problem of memory isolation, so each virtual GPU can run securely without memory clashes.
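The integer-to-float shift Dar describes can be sketched with a toy allocator: each physical GPU exposes a capacity of 1.0, and workloads request fractions of it rather than whole devices. This is a hypothetical illustration of the scheduling idea only (the class and method names are invented), not Run:AI’s implementation, which also handles the memory isolation the article mentions.

```python
# Toy fractional-GPU allocator: capacities are floats, not integers,
# so several sub-GPU workloads can share one physical device.
class FractionalGPUAllocator:
    def __init__(self, num_gpus):
        # Free capacity per physical GPU, as a float in [0.0, 1.0].
        self.free = [1.0] * num_gpus

    def allocate(self, fraction):
        """Place a workload needing `fraction` of a GPU; return the GPU index."""
        for gpu, capacity in enumerate(self.free):
            if capacity >= fraction:
                self.free[gpu] = round(capacity - fraction, 6)
                return gpu
        raise RuntimeError("no GPU has enough free capacity")

alloc = FractionalGPUAllocator(num_gpus=2)
# Four 0.5-GPU inference workloads fit on two physical GPUs.
placements = [alloc.allocate(0.5) for _ in range(4)]
print(placements)  # [0, 0, 1, 1]
```

With integer allocation, the same four workloads would have needed four whole GPUs; treating capacity as a float is what lets lightweight inference tasks pack onto shared hardware.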

