insideBIGDATA Latest News – 2/13/2023


In this regular column, we’ll bring you all the latest industry news centered around our main topics of focus: big data, data science, machine learning, AI, and deep learning. Our industry is constantly accelerating, with new products and services being announced every day. Fortunately, we’re in close touch with vendors from this vast ecosystem, so we’re in a unique position to inform you about all that’s new and exciting. Our massive industry database is growing all the time, so stay tuned for the latest news items describing technology that may make you and your organization more competitive.

Domino Data Lab Announces Domino Code Assist to Help Close Data Science Talent Gap and Democratize Advanced Analytics 

Domino Data Lab, provider of a leading Enterprise MLOps platform trusted by over 20% of the Fortune 100, announced the general availability of Domino Code Assist (DCA), a groundbreaking new product for business analysts that enables enterprises to rapidly close the data science talent gap and democratize data science. With DCA, Domino Data Lab enters the low-code space to elevate business analysts using Python and R into the data science world. 

“Python is the new Excel and Domino Code Assist is about pushing the adoption of code-first data science in a way that has not been conceived of before,” said Sean Otto, PhD, Director of Analytics at AES Corporation. “By enabling both our advanced and low-code team members with a common platform to deliver AI and ML with Domino, we can accelerate the innovative use of our data across a wide variety of technical and non-technical data practitioners and data-adjacent parts of our business.”

ConnectWise integrates with OpenAI to solve complex problems and save time for MSPs

ConnectWise, the software company dedicated to the success of IT solution providers, announced an integration with OpenAI’s ChatGPT, the cutting-edge generative AI language model, with its remote monitoring and management (RMM) tools—ConnectWise Automate™ and ConnectWise RMM™.

“The explosive growth of artificial intelligence tools like OpenAI’s ChatGPT presents huge potential in the technology sector, particularly in use cases for MSPs,” said Raghu Bongula, chief technology officer, ConnectWise. “ConnectWise has long been a pioneer in building innovative solutions for MSPs, and we’ve been prototyping with AI for some time; our engineers love challenging tasks and finding new ways to solve problems for our partners. We had plans to bring AI to our RMM and the ConnectWise Asio™ platform later this year, but with OpenAI, this accelerated our launch plans. We’re excited to add this as a core capability within Asio.”

Aiven Continues Open Source Streaming Ecosystem Innovation with General Availability for Aiven for Apache Flink®

Aiven, the open source cloud data platform, announced that its fully managed service, Aiven for Apache Flink®, is now generally available, delivering the first managed service for Apache Flink® that can be deployed in the cloud of your choice, including all three major cloud service providers: AWS, Google Cloud, and Microsoft Azure.

Over the past year, the ability for companies to analyze data in real time has transformed from a ‘nice-to-have’, to a ‘must-have’ tool to provide rich and contextual experiences to their customers and end users while staying ahead of the competition. With the addition of Aiven for Apache Flink, Aiven now offers a complete open source streaming ecosystem complementing and integrating with Aiven for Apache Kafka® to satisfy most real time data application scenarios and bring a robust, scalable and production-grade streaming infrastructure to the cloud of your choice.
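To make the idea of stream processing concrete, here is a minimal pure-Python sketch of a tumbling-window count, the kind of aggregation a Flink streaming job expresses declaratively over a Kafka topic. This is an illustration of the concept only, not the Aiven for Apache Flink API; a real job would be written in Flink SQL or the Table API.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size=60):
    """Count events per user in fixed, non-overlapping time windows.

    events: iterable of (timestamp_seconds, user_id) pairs.
    Returns {(window_start, user_id): count}.
    """
    counts = defaultdict(int)
    for ts, user in events:
        # Each event belongs to exactly one tumbling window.
        window_start = (ts // window_size) * window_size
        counts[(window_start, user)] += 1
    return dict(counts)

events = [(5, "a"), (42, "a"), (61, "b"), (70, "a"), (130, "b")]
result = tumbling_window_counts(events)
# Window [0, 60) holds two events for "a"; [60, 120) holds one each
# for "a" and "b"; [120, 180) holds one for "b".
```

A streaming engine like Flink performs this continuously and incrementally over unbounded input, emitting results as each window closes, rather than over a finished list.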

“Having a production-grade Apache Flink offering is a game changer for real-time data processing use cases, driving business insights. Aiven for Apache Flink enables developer productivity with the best open source technologies and makes developers’ lives easier,” said Jonah Kowall, Vice President of Product Management at Aiven. “Aiven for Apache Flink is the glue and intelligence tying together our platform capabilities between Apache Kafka® along with popular databases such as PostgreSQL, OpenSearch, and more integrations coming every week. Our experience brings a cloud streaming data infrastructure that is resilient, scalable and can be easily migrated and managed across cloud vendors through the Aiven console. I am very excited about what the future of this service holds for our customers and the stream processing market in general, as more organizations harness the power of real-time data processing.”

SOCi Releases Revolutionary OpenAI Integration into Review Response Management Tool 

SOCi, Inc., the marketing platform for multi-location brands, announced the release of its newest innovation – the integration of OpenAI’s ChatGPT natural language model into SOCi’s award-winning review response management tool to enable instant intelligent responses to online reviews. The release is the first in a line of “Genius” products to be released by SOCi that enable highly intelligent and automated workflows across numerous major marketing channels like search, social, reviews, ads, and more.

SOCi’s latest release integrates with major review sites, including Google, Facebook, and Yelp, and utilizes advanced machine learning algorithms to provide fast and accurate responses to customer reviews in real time. The integration with ChatGPT enables the tool to promptly respond to reviews in a way that is personalized, engaging, and highly relevant to each customer, helping businesses to build better relationships with their customers and increase customer satisfaction.

“We’re excited to marry the power of the SOCi platform with the intelligence of ChatGPT to empower businesses to care for their customers more efficiently and effectively in a brand positive manner,” said Alo Sarv, CTO of SOCi, Inc. “Our goal is to intelligently streamline critical tasks for our clients, transforming the way they interact with our software from merely a workflow tool to a strategic marketing ally.”

KnowledgeLake Unveils Enhanced Process Automation and Digital Workflow Tools

KnowledgeLake, a leading provider of Intelligent Document Processing (IDP) solutions, announced the immediate availability of its “Ontario” Update through the company’s cloud-first software platform.

The Ontario update expands the KnowledgeLake platform’s capabilities to help organizations solve their most complex business process and workflow automation challenges on a single platform. The update also elevates the customer experience by simplifying document and data collection and providing greater transparency into the status of applications, transactions and other processes.

“We’ve taken a major step forward in our process automation capabilities,” said Ron Cameron, founder and CEO of KnowledgeLake. “We know that not every workflow is the same, and business processes are often messy or non-linear. We’ve developed an even more powerful process automation engine to support all the various contingencies, conditions and applications modern business processes require. Organizations will be able to address their most complex workflows and automate them beautifully in one easy to use interface. At the same time, they can bring customers directly into those workflows.” 

DataStax Launches New Cloud Service to Spur Rapid Web3 Innovation

DataStax, the real-time AI company, announced the launch of Astra Block, a new service available in the company’s Astra DB cloud database that empowers developers to quickly build and scale Web3 applications fueled by the entire, real-time dataset from the Ethereum blockchain. With Astra Block, developers can stream enhanced data from the industry’s dominant blockchain in a matter of minutes, then scale Web3 experiences virtually infinitely on Astra DB — the serverless cloud database built on the open source Apache Cassandra® database.

Astra Block is making the previously daunting task of cloning the entire blockchain dataset possible with the click of a button, leveraging the real-time speed, massive scale and zero downtime provided by Cassandra. The new service allows advanced querying and real-time analytics to be run at sub-second speeds, enabling developers to build new blockchain-based functionalities into their applications. For example, developers can build applications with the capability to analyze any transaction from the entire blockchain history, including crypto or NFTs, for instant, accurate insights. Astra DB’s CDC and streaming features ensure that the clone of the chain is updated in real time, as new blocks are mined. 
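The core pattern described above — keep a queryable clone of the chain and update it as new blocks arrive — can be sketched in a few lines of plain Python. The structures and field names below are purely illustrative; Astra Block does this against Cassandra with CDC at blockchain scale.

```python
# Index of all transactions seen so far, keyed by address for fast lookup.
transactions_by_address = {}

def ingest_block(block):
    """Fold a newly arrived block into the queryable index."""
    for tx in block["transactions"]:
        # Index each transaction under both sender and recipient.
        for addr in (tx["from"], tx["to"]):
            transactions_by_address.setdefault(addr, []).append(tx)

genesis = {"number": 0, "transactions": [
    {"from": "0xabc", "to": "0xdef", "value": 10},
]}
new_block = {"number": 1, "transactions": [
    {"from": "0xdef", "to": "0xabc", "value": 4},
]}

ingest_block(genesis)
ingest_block(new_block)   # arriving blocks update the index in real time

history = transactions_by_address["0xabc"]   # full history for one address
```

The value of a managed service is doing exactly this continuously, durably, and at sub-second query latency over the full Ethereum history, which an in-memory dict obviously cannot.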

“Blockchains are changing the way information is stored and used, but their potential is largely unrealized,” said Ed Anuff, chief product officer at DataStax. “These distributed ledgers open up a whole new world of innovation similar to what we saw with digital content 20 years ago or social media 15 years ago – that is, the possibilities are only limited by our imaginations. Crypto currencies, non-fungible tokens, and smart contracts have drawn a lot of attention, but there are many other areas that will benefit from blockchain innovation—healthcare, real estate, IoT, cybersecurity, music, identity management, and logistics, to name a few.” 

Qrvey 8.0 Embedded Analytics Product Release

Qrvey, the embedded analytics platform built for SaaS companies, announced that it has released Qrvey 8.0, a major new release that incorporates features and functionality that further increase the ability of its customers to leverage this complete embedded analytics layer. These key elements make it even easier to optimize data structures and processes, incorporate a wider array of formats, and provide end-users a richer experience that makes the resulting applications “stickier” across organizations.

“This release reflects our core focus on offering a platform that fits the way SaaS companies need to operate,” said David Abramson, CTO of Qrvey. “Our customers have an incredibly wide variety of use cases they must adapt to, and the choices and flexibility we deliver fits the demands of their product lifecycle. By incorporating these types of features, we’ve done the work, so they don’t have to.”

Weaviate releases a generative search module

Weaviate announced the release of a generative search module for OpenAI’s GPT-3, with support for other generative AI models (Cohere, LaMDA) to follow. This module allows Weaviate users and customers to easily integrate with those models and eliminates hurdles that currently limit the utility of such models in business use cases.

OpenAI’s ChatGPT chatbot (based on GPT-3) has captivated the world with its surprisingly human-like ability to respond to queries in natural language. However, such generative models have so far been limited by a centralized and generic knowledge base that leaves them unable to answer business-specific questions.

Weaviate’s generative module removes this limitation by allowing users to specify that the model work from users’ own Weaviate vector database. This best-of-both-worlds solution combines language abilities like those of ChatGPT with a vector database that is relevant, secure and easily updated in real time. Such a solution is also far less prone to hallucination.
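The pattern Weaviate’s module automates — retrieve relevant private documents from a vector store, then hand them to a generative model as grounding context — can be sketched in plain Python. Everything below is a toy stand-in, not Weaviate’s actual API: the “embedding” is just a bag-of-words count, and a real deployment would use a neural embedding model and an LLM call.

```python
def embed(text):
    # Stand-in "embedding": bag-of-words counts. Real systems use a
    # neural embedding model producing dense vectors.
    words = [w.strip(".,?!").lower() for w in text.split()]
    return {w: words.count(w) for w in set(words)}

def similarity(a, b):
    # Sparse dot product between two bag-of-words vectors.
    return sum(a.get(w, 0) * b.get(w, 0) for w in a)

def retrieve(query, documents, k=1):
    # Rank private documents by similarity to the query.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: similarity(q, embed(d)),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    # Ground the generative model in retrieved context instead of its
    # generic training data -- this is what curbs hallucination.
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "The cafeteria opens at 8am.",
]
prompt = build_prompt("What is the refund policy?", docs)
```

The prompt would then be sent to GPT-3 (or another model); because the context comes from the user’s own database, answers can cover business-specific facts the model was never trained on.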

“Generative models like the one used to create ChatGPT display very impressive language abilities but their business applications are limited because they are trained on data that’s freely available on the internet. Meanwhile, about 80% of the world’s data exists behind firewalls–for good reasons, because it is confidential or proprietary. So, we created a generative AI module to integrate with GPT-3 and other models (like LaMDA), allowing our open-source users and customers to leverage the model’s language abilities with their own Weaviate vector databases that are easily updated and capable of protecting sensitive data.”

SaaS Data Protection Leader Keepit Launches Cutting-Edge Solution: Keepit for Power BI 

Keepit, a leader in cloud data protection and management, announced the launch of its backup and recovery solution for Power BI, Microsoft’s business intelligence solution platform for aggregating, analyzing, visualizing, and sharing data. With the release of Keepit for Power BI, Keepit is extending its lead as the premier data protection service for Microsoft’s cloud solutions. Power BI is the first of the Microsoft Power Platform services to be added to Keepit’s solutions, with support for Power Apps and Power Automate planned for later in 2023.

“Microsoft is currently investing heavily in the Power Platform, and Power BI is a major part of that platform,” said Paul Robichaux, Keepit’s Senior Director of Product Management and Microsoft MVP. “Power BI is a market leader in the business intelligence space, and the business intelligence space is growing exponentially. With Keepit for Power BI, organizations can protect the data they use to drive their business decisions against data loss and downtime. Keepit is thrilled to add this product to our market-leading range of Microsoft cloud data protection solutions.” Releases Production Boards Designed to Meet the Challenges of Integrating ML Into Next Generation Embedded Edge Applications, the machine learning company enabling effortless ML deployment and scaling at the embedded edge, announced availability of two new PCIe-based production boards that scale embedded edge ML deployments for key customers. The availability of these two new commercially deployable board-level products demonstrates’s commitment to its mission of simplifying ML scalability at the embedded edge. The company also announced its Palette™ software, which provides a pushbutton experience for developing complete end-to-end ML applications targeting the heterogeneous Machine Learning SoC (MLSoC™) platform.

“We’re excited to bring these new form factor boards, programmed with our Palette software, to market for our customers because they address a growing need for a combined complete software and hardware solution within the developer community,” said Krishna Rangasayee, CEO and Founder of “Developers being empowered to not only develop but to deploy any ML vision application with 10x better performance will be a game changer for our ever-expanding list of customers.”

AtScale Expands Databricks Integration with Support for Databricks SQL

AtScale, a leading provider of semantic layer solutions for modern business intelligence and data science teams, announced an expansion of its integration with Databricks, the lakehouse company.

AtScale enables Databricks customers to build a “Semantic Lakehouse” to democratize data, enable self-service business intelligence (BI), and deliver a high-performance analytics experience without extracting data from their cloud lakehouse. AtScale autonomously orchestrates Databricks infrastructure to optimize analytics performance and radically simplify analytics data pipelines while leveraging the full capabilities of Databricks. Analytics consumers interact with data managed by AtScale through SQL, MDX, DAX, REST or Python APIs, or with common BI platforms including Excel, Microsoft Power BI, Tableau, and Looker.

“Databricks has redefined the economics of cloud data management by building a scalable infrastructure for enterprise AI,” said Dave Mariani, Founder and CTO at AtScale. “Our focus on extending the analytics experience for organizations pursuing lakehouse architectures is strategic for both AtScale and Databricks.”

Daasity Builds ELT+ for Commerce on the Snowflake Data Cloud

Daasity Inc. announced that it has launched ELT+ for Commerce, Powered by Snowflake. ELT+ for Commerce will benefit customers by enabling consumer brands selling via eCommerce, Amazon, retail, and/or wholesale to implement a full or partial data and analytics stack. 

“Brands using Daasity and Snowflake can rapidly implement a customizable data stack that benefits from Snowflake’s dynamic workload scaling and Secure Data Sharing features,” said Dan LeBlanc, Daasity co-founder and CEO. “Additionally, customers can leverage Daasity features such as the Test Warehouse, which enables merchants to create a duplicate warehouse in one click and test code in a non-production environment. Our goal is to make brands, particularly those at the enterprise level, truly data-driven organizations.”

One AI Powers 50,000 Companies and Developers with Advanced Language AI Capabilities

One AI, a platform that enables developers to add language AI to products and services, announced that its technology is powering over 50,000 companies and developers. The language AI startup presents a strong growth story as more businesses begin to discover the uses for the powerful and customizable technology solution – from summarizing articles, to analyzing customer service calls for emotion and sentiment, to extracting interest rates and terms from financial documents and websites.

“For years humans have learned to adapt themselves to work and communicate with machines. Now, we are teaching computers to speak like humans using Natural Language Processing (NLP),” said Amit Ben, co-founder and CEO, One AI. “Last year, the technology finally reached mainstream recognition and utilization, partially thanks to the success of language generation models and tools such as ChatGPT.”

causaLens launches the first operating system for decision making powered by Causal AI

causaLens, the London deep tech company and pioneer of Causal AI, announced the launch of decisionOS, the first operating system using cause-and-effect reasoning for all aspects of an enterprise’s decision-making.

Causal AI is the technology that identifies the underlying web of causes to provide critical decision-making insights that current machine learning fails to deliver. The technology is gaining momentum rapidly, with ‘big tech’ companies such as Microsoft, Amazon, Google, Meta, Spotify, Netflix and Uber investing heavily in the development of Causal AI, and it was recently featured in various Gartner hype cycles.

decisionOS optimizes business decisions by embedding Causal AI models into decision workflows at any level of an organization. Now enterprise users in all industry sectors will be able to go beyond relying on past patterns and correlations to make predictions, instead understanding cause and effect relationships to generate actionable insights that factor in business objectives and resource constraints. For example, retailers can use the recommendations and insights provided by decisionOS to decide on the best pricing for individual products, across specific locations, while considering the prevailing business environment.
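The difference between “past patterns and correlations” and genuine cause-and-effect can be shown with the classic backdoor-adjustment formula, P(Y=1 | do(X=1)) = Σ_z P(Y=1 | X=1, Z=z) · P(Z=z). The toy numbers below are invented purely for illustration and have nothing to do with causaLens’s products: when a confounder Z influences both the action X and the outcome Y, the naive conditional rate and the causal effect disagree.

```python
# Joint counts over (Z = confounder, X = action, Y = outcome).
counts = {
    (0, 0, 0): 10, (0, 0, 1): 30,   # Z=0 units: rarely acted on
    (0, 1, 0): 1,  (0, 1, 1): 9,
    (1, 0, 0): 5,  (1, 0, 1): 5,    # Z=1 units: often acted on
    (1, 1, 0): 30, (1, 1, 1): 30,
}
total = sum(counts.values())

def p(predicate):
    # Probability of the event described by predicate(z, x, y).
    return sum(c for k, c in counts.items() if predicate(*k)) / total

# Naive correlation-style estimate: P(Y=1 | X=1).
p_naive = p(lambda z, x, y: x == 1 and y == 1) / p(lambda z, x, y: x == 1)

# Backdoor adjustment over the confounder Z: P(Y=1 | do(X=1)).
p_do = sum(
    (p(lambda zz, x, y, z=z: zz == z and x == 1 and y == 1)
     / p(lambda zz, x, y, z=z: zz == z and x == 1))
    * p(lambda zz, x, y, z=z: zz == z)
    for z in (0, 1)
)
# p_naive = 39/70 ~ 0.557, but p_do = 2/3 ~ 0.667: the action helps more
# than the raw correlation suggests, because it is applied more often to
# the harder (Z=1) cases.
```

Decisions optimized against p_naive would underweight the action; a causal model recovers p_do, which is the quantity a decision actually changes.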

“All decisions in the enterprise require a causal understanding,” said Darko Matovski, causaLens CEO. “The best decisions are made when domain experts’ knowledge is combined with AI that truly understands cause-and-effect, and reasons like humans do. decisionOS makes it possible to evaluate different scenarios and design optimal actions for your business while providing full transparency of the decision-making process.”

Mona Announces New AI Fairness Feature for Its Monitoring Platform

Mona, a leading intelligent AI monitoring platform, announced a new feature that advances its monitoring capabilities: the ability to assess AI fairness.

As many organizations have begun relying on AI in critical business functions, it has become important to identify and address algorithmic bias. The key to detecting such bias is understanding the behavior of machine learning models across different protected segments of the data within the context of the business function that the AI serves. Mona’s new AI fairness features make it simple to do that by providing a comprehensive view of the behavior of the AI system, as well as automatically detecting any potential issues of bias.
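Segment-level bias detection of this kind reduces, in its simplest form, to comparing a model’s outcome rates across protected segments. The sketch below computes a demographic-parity gap on toy data; it is a generic illustration of the idea, not Mona’s actual API or methodology.

```python
def positive_rates(records, segment_key="segment", outcome_key="approved"):
    """Positive-outcome rate of a model, broken down by segment."""
    totals, positives = {}, {}
    for r in records:
        seg = r[segment_key]
        totals[seg] = totals.get(seg, 0) + 1
        positives[seg] = positives.get(seg, 0) + (1 if r[outcome_key] else 0)
    return {seg: positives[seg] / totals[seg] for seg in totals}

def parity_gap(rates):
    # Gap between the best- and worst-treated segments; values near 0
    # suggest parity, large values flag a segment for investigation.
    return max(rates.values()) - min(rates.values())

predictions = [
    {"segment": "A", "approved": True},
    {"segment": "A", "approved": True},
    {"segment": "A", "approved": False},
    {"segment": "B", "approved": True},
    {"segment": "B", "approved": False},
    {"segment": "B", "approved": False},
]
rates = positive_rates(predictions)   # A: 2/3 approved, B: 1/3 approved
gap = parity_gap(rates)               # 1/3 -- worth investigating
```

A production monitor would track many such metrics continuously, per segment and per business KPI, rather than on a one-off batch.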

“Unlike other existing tools for detecting bias and fairness issues in AI, Mona is the only platform that connects the model’s performance with the business outcomes, and also enables bias detection at a granular level within specific segments of the data,” says Itai Bar-Sinai, CPO & Co-founder at Mona. “We’re proud to offer a solution that helps organizations trust that their AI-driven applications are ethical and up to the highest standards.”

Section’s Latest Platform Enhancements Make it Easy to Stand Up and Deploy Mastodon at Scale

Section, a leading cloud-native hosting platform, announced it is making it easier than ever to deploy and scale a Mastodon server; in just a few clicks, developers can use Section’s global platform to ensure a superior user experience at a fraction of the cost. With the open-source Mastodon software seeing explosive growth in interest and adoption, communities find themselves looking for solutions that can help run self-hosted social networks for a geographically dispersed base of users. Section’s platform automates the management of workloads like Mastodon using easily adjusted, rules-based parameters, making it ideally suited to easily distribute and scale these Mastodon instances globally. Simultaneously, the company has announced support for Persistent Volume storage, better enabling distributed deployment of Mastodon and other complex workloads.

“For those looking to create their own Mastodon server, the technical headaches, management decisions and ballooning costs that come with getting started can be incredibly debilitating,” said Stewart McGrath, Section’s CEO. “Even the so-called ‘one-click’ solutions quickly begin to show their cracks as communities grow and become geographically dispersed. Given the growth Mastodon is experiencing, this requires planning ahead – and Section has customized its platform to make it painless and cost-effective to deploy your Mastodon server at scale.” Unveils Generative AI, Large Language Models in XO Platform V10.0 to Enhance Enterprise Communications, a leading conversational AI platform and solutions company, announced the release of the Experience Optimization (XO) Platform Version 10.0. The upgrade enables easier and more open integration with global enterprise systems, allows businesses and individuals to deploy intelligent virtual assistants with minimal-to-no training, and can be continuously scaled and enhanced for high performance.

One of the key features has introduced in this release is support for large language models (LLMs) such as OpenAI’s GPT-3 and other generative AI technologies, drastically simplifying the design, development and management of virtual assistants.

The introduction of cutting-edge zero-shot and few-shot models leverages the power of LLMs and generative AI and eliminates the need for initial training data. By helping design conversations, create training and test data, and rewrite responses with emotion, these technologies minimize the effort needed to efficiently create virtual assistants that are truly intelligent and intuitive. It’s an exciting new era for virtual assistants, and is at the forefront of this technology.
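Few-shot prompting, the technique behind this “no initial training data” approach, replaces a conventional training set with a handful of labeled examples embedded directly in the prompt. The sketch below builds such a prompt for intent classification; the helper, example utterances, and labels are all hypothetical illustrations, not’s API.

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: task description, labeled examples,
    then the unlabeled query for the LLM to complete."""
    lines = [task, ""]
    for text, label in examples:
        lines.append(f"Utterance: {text}")
        lines.append(f"Intent: {label}")
        lines.append("")
    lines.append(f"Utterance: {query}")
    lines.append("Intent:")   # left unfinished for the model to fill in
    return "\n".join(lines)

examples = [
    ("I want to check my balance", "check_balance"),
    ("Send $50 to my savings account", "transfer_funds"),
]
prompt = build_few_shot_prompt(
    "Classify the banking intent of each utterance.",
    examples,
    "How much money do I have?",
)
```

Sent to an LLM, a prompt like this typically elicits the right label for the new utterance even though no model was fine-tuned, which is what lets virtual assistants be bootstrapped with minimal-to-no training data.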

“Our customers have been deploying highly complex use cases involving voice automation, personalization, omnichannel experience, and fulfillment,” said CEO and Founder Raj Koneru. “This has underscored the need for a perpetual cycle of improvisation for virtual assistants in terms of ease of development, training, scalability, personalization, and performance, which we have addressed with some of the industry-first innovations in V10.0. Also, by creatively tapping into the potential of generative AI models like GPT-3 and other LLMs, we’ve paved the way for future innovations.” 

New Colossal-AI Released: Offers Hardware Savings Up To 46x for AI-Generated Content and Boasts Novel Automatic Parallelism

Colossal-AI, a leading open source system for maximizing the speed and scale of training, inference and fine-tuning of large deep learning models, today unveiled a new product release version 0.2.0. This new version includes two pre-configured recipes, one for Stable Diffusion 2.0 and the other for BLOOM. The recipes are designed to support instant training and inference of models for AI-Generated Content (AIGC) and allow for significantly reduced hardware costs, of up to 46 times. In addition, the new release also features automatic parallelism, which is not offered by other solutions and enables instant distributed training on multiple computing devices with a single line of code.

“Our customers are always looking for ways to optimize their deep learning infrastructure to reduce costs, and with our latest release, we’ve given them two important integrations into Stable Diffusion and BLOOM as well as automatic parallelism, a world-first technology, making our product the most efficient solution on the market today for AIGC and other large AI models,” says Prof. James Demmel, Co-founder and CSO at HPC-AI Tech, and professor at UC Berkeley. “By making our innovations available as open-source software, we are bringing the latest advances in AI within reach for everyone worldwide at lower costs, leading to rapid and wide adoption of Colossal-AI.”

d-Matrix Launches New Chiplet Connectivity Platform to Address Exploding Compute Demand for Generative AI

d-Matrix, a leader in high-efficiency AI-compute and inference processors, announced Jayhawk, the industry’s first Open Domain-Specific Architecture (ODSA) Bunch of Wires (BoW) based chiplet platform for energy-efficient die-to-die connectivity over organic substrates. Building on the Nighthawk chiplet platform launched in 2021, the second-generation Jayhawk silicon platform extends the company’s scale-out, chiplet-based inference compute platform. d-Matrix customers will be able to use the inference compute platforms to manage Generative AI applications and Large Language Model transformer applications with a 10-20X improvement in performance.

Large transformer models are creating new demands for AI inference at the same time that memory and energy requirements are hitting physical limits. d-Matrix provides one of the first Digital In-Memory Compute (DIMC) based inference compute platforms to come to market, transforming the economics of complex transformers and Generative AI with a scalable platform built to handle the immense data and power requirements of inference AI. Improving performance can make energy-hungry data centers more efficient while reducing latency for end users in AI applications.

“With the announcement of our 2nd generation chiplet platform, Jayhawk, and a track record of execution, we are establishing our leadership in the chiplet ecosystem,” said Sid Sheth, CEO of d-Matrix. “The d-Matrix team has made great progress towards building the world’s first in-memory computing platform with a chiplet-based architecture targeted for power hungry and latency sensitive demands of generative AI.”

Sign up for the free insideBIGDATA newsletter.
