insideBIGDATA Latest News – 9/21/2023


In this regular column, we’ll bring you all the latest industry news centered around our main topics of focus: big data, data science, machine learning, AI, and deep learning. Our industry is constantly accelerating, with new products and services being announced every day. Fortunately, we’re in close touch with vendors from this vast ecosystem, so we’re in a unique position to inform you about all that’s new and exciting. Our massive industry database is growing all the time, so stay tuned for the latest news items describing technology that may make you and your organization more competitive.

Lenovo Delivers AI at the Edge, Bringing Next Generation Intelligence to Data

Lenovo (HKSE: 992) (ADR: LNVGY) announced new edge AI services and solutions designed to enable mass deployment of remote computing capabilities that will significantly accelerate AI readiness and empower new AI applications for any business. New Lenovo TruScale for Edge and AI brings the proven cost benefits of Lenovo TruScale’s Infrastructure as-a-Service model to the broadest and most comprehensive edge portfolio on the market, enabling customers to leverage a pay-as-you-go model to quickly deploy powerful edge computing and gain AI-powered insights directly at the source of data creation. Lenovo is also expanding its broad portfolio with the new Lenovo ThinkEdge SE455 V3, bringing the most powerful edge server to the market and delivering breakthrough efficiency to support the most intensive remote AI workloads.  

Coupled with Lenovo’s AI Innovators program, end-to-end solutions and AI-ready technology, the breakthrough edge offerings simplify the creation and deployment of next generation AI applications to help businesses of any size pioneer transformation with AI-powered insights that can be immediately used to improve outcomes across store aisles, manufacturing floors, hospital rooms, commercial kitchens and service desks all over the world. 

“Lenovo is committed to being the most trusted partner and empowering our customers’ intelligent transformation by simplifying AI deployment,” said Kirk Skaugen, President of Lenovo Infrastructure Solutions Group. “With today’s news, Lenovo continues to push the boundaries of what is possible at the edge, making it easier than ever before to deploy critical edge computing that efficiently delivers transformative AI-powered insights for any business, anywhere.”  

Deci Unveils Generative AI Foundation Models and Dev Suite, Enabling Rapid Performance and Cost Efficiency

Deci, the deep learning company harnessing AI to build AI, announced the launch of innovative generative AI Foundation Models, DeciDiffusion 1.0 and DeciLM 6B, as well as its inference Software Development Kit (SDK) – Infery LLM. These groundbreaking releases are setting a new benchmark for performance and cost efficiency in the realm of generative AI.

The intensive computational requirements for training and inference of generative AI models hinder teams from cost-effectively launching and scaling gen AI applications. Deci’s innovations directly address this gap, making scaling inference efficient, cost-effective, and ready for enterprise-grade integration. By using Deci’s open-source generative models and Infery LLM, AI teams can reduce their inference compute costs by up to 80% and use widely available and cost-friendly GPUs such as the NVIDIA A10 while also improving the quality of their offering. The models introduced by Deci cater to diverse applications, ranging from content and code generation to image creation and chat applications, among many others.

Models introduced by Deci include DeciDiffusion 1.0, a blazing-fast text-to-image model that generates quality images in less than a second, 3 times faster than the renowned Stable Diffusion 1.5 model. Next in the spotlight is DeciLM 6B, a 5.7 billion parameter model. While its accuracy stands toe-to-toe with industry giants like LLaMA 2 7B, Falcon-7B, and MPT-7B, what truly sets it apart is its blazing inference speed—clocking in at an astonishing 15 times faster than the Meta LLaMA 2 7B. Rounding out the lineup is DeciCoder, a 1 billion parameter code generation LLM released a few weeks ago. Not only do these models deliver unparalleled inference speed, but they also provide equivalent or better accuracy.
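The speed claims above ultimately come down to inference latency. As a rough illustration only, here is a minimal sketch of how such a head-to-head latency comparison is typically measured; the stub "models" below are placeholders simulating relative cost, not the real DeciLM or LLaMA weights.

```python
import time

def measure_latency(generate, prompt, runs=20):
    """Average wall-clock latency of a text-generation callable."""
    generate(prompt)  # warm-up run, excluded from the measurement
    start = time.perf_counter()
    for _ in range(runs):
        generate(prompt)
    return (time.perf_counter() - start) / runs

# Stub "models": stand-ins for real inference calls; the sleeps only
# simulate a faster and a slower model for demonstration purposes.
fast_model = lambda p: time.sleep(0.001) or p.upper()
slow_model = lambda p: time.sleep(0.015) or p.upper()

fast = measure_latency(fast_model, "hello")
slow = measure_latency(slow_model, "hello")
print(f"speedup: {slow / fast:.1f}x")
```

In real benchmarks the callables would wrap actual model inference, and batch size, sequence length, and hardware would all be held constant across the models being compared.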

“For generative AI to truly revolutionize industries, teams need mastery over model quality, the inference process, and the ever-pivotal cost factor,” said Yonatan Geifman, CEO and co-founder of Deci. “At Deci, our journey and extensive collaborations with the world’s AI elite have equipped us to craft a solution that’s nothing short of transformative for enterprises diving into Generative AI. With our robust array of open-source models and cutting-edge tools, we’re setting the stage for teams to redefine excellence in their generative AI ventures.”

JFrog Introduces Native Integration for Hugging Face, Delivering Robust Support for ML Models to Harmonize DevOps, Security and AI

JFrog Ltd. (“JFrog”) (Nasdaq: FROG), the Liquid Software company and creators of the JFrog Software Supply Chain Platform, introduced ML Model Management capabilities, an industry-first set of functionality designed to streamline the management and security of Machine Learning [ML] models. The new ML Model Management capabilities in the JFrog Platform bring AI deliveries in line with an organization’s existing DevOps and DevSecOps practices to accelerate, secure and govern the release of ML components.

“Today, Data Scientists, ML Engineers, and DevOps teams do not have a common process for delivering software. This can often introduce friction between teams, difficulty in scale, and a lack of standards in management and compliance across a portfolio,” said Yoav Landman, Co-founder and CTO, JFrog. “Machine learning model artifacts are incomplete without Python and other packages they depend on and are often served using Docker containers. Our customers already trust JFrog as the gold standard for artifact management and DevSecOps processes. Data scientists and software engineers are the creators of modern AI capabilities, and already JFrog-native users. Therefore, we look at this release as the next logical step for us as we bring machine learning model management, as well as model security and compliance, into a unified software supply chain platform to help them deliver trusted software at scale in the era of AI.”  

MFour Mobile Unleashes Cutting-Edge AI Solutions to Democratize and Simplify Access to Consumer Data

MFour, a leading consumer intelligence platform, announced the launch of DANI and its new AI Survey Builder – two state-of-the-art AI tools designed to empower anyone to conduct, analyze and implement market research with more accuracy and speed than a team of experts. 

Traditional market research is costly and can take months for collection and analysis. DANI, short for “Data Analysis & Navigation Instructor,” makes it possible to gain immediate insights after data collection. The intuitive, user-friendly tool enables anyone to use language prompts to instantly query and analyze surveys and insights to better understand consumer preferences and behaviors. DANI harnesses MFour’s unified dataset, drawing from over 2.4 billion proprietary data points from survey responses, location data, and app and web traffic to provide accurate, multi-source validated responses. 

In parallel, AI Survey Builder, integrated with Surveys on the Go, the largest mobile panel in the U.S., simplifies the creation of tailored surveys. The AI-powered survey tool leverages advanced algorithms and over a decade of survey creation data and implementation for higher efficiency and accuracy. Responses are then generated from MFour’s active panel of survey participants and validated against their behavioral data to provide the highest accuracy. 

“Traditional market research requires a great deal of time, money and effort. MFour is moving market research forward by removing the middleman to put expert-quality research directly into the hands of every professional,” said Chris St. Hilaire, Founder & CEO of MFour. “AI Survey Builder and DANI make it easy for anyone to collect and analyze data instantly at a fraction of the cost. In today’s fast-paced business landscape data is key to success, and we’re leveling the playing field.”

DataStax Delivers New JSON API: Enabling the World’s Largest Community of Developers to Easily Build Generative AI Applications with the Astra DB Vector Database

DataStax, the real-time AI company, announced a new JSON API for Astra DB – the popular database-as-a-service built on the open source Apache Cassandra® – delivering on one of the most highly requested user features, and providing a seamless experience for JavaScript developers building AI applications.

Available via the open source data API gateway, Stargate, the new JSON API lets JavaScript developers easily leverage Astra DB as a vector database for their large language model (LLM), AI assistant, and real-time generative AI projects. It provides a naturally integrated way to work with Astra DB as a document database and now has compatibility with Mongoose, the most popular open source object data modeling library for MongoDB. This makes it simple for JavaScript developers – the largest community of developers in the world – to build generative AI applications with vector search, using the library they know and love.
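For readers new to the term, "vector search" means ranking stored items by the similarity of their embedding vectors to a query vector. A tiny self-contained sketch of the core idea (with made-up 3-dimensional embeddings; a real vector database such as Astra DB stores far higher-dimensional vectors produced by an embedding model):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "documents" with hypothetical embeddings.
docs = {
    "cassandra tuning guide": [0.9, 0.1, 0.2],
    "banana bread recipe":    [0.1, 0.8, 0.3],
    "javascript tutorial":    [0.2, 0.2, 0.9],
}

def vector_search(query_vec, k=1):
    """Return the k documents most similar to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

print(vector_search([0.85, 0.05, 0.3]))
```

In an LLM application, the query vector would be the embedding of a user's question, and the retrieved documents would be fed to the model as grounding context.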

The Astra DB vector database is designed for building real-world, production-level AI applications with real-time data. With simultaneous search and update on distributed data and streaming workloads, Astra DB provides ultra-low latency and highly relevant results that eliminate redundancies. This creates more responsive, more accurate production generative AI applications that reduce hallucinations with real-time data updates, and increase responsiveness with concurrent queries and updates.

With the JSON API, JavaScript developers are no longer required to have a deep understanding of Cassandra Query Language (CQL) to work with Astra DB and Cassandra. Instead, they can continue to write in the language they’re familiar with to quickly develop AI applications – a necessity in the current leading-edge business environment.

“Traditionally, in order for developers to use Astra DB they had to have familiarity with CQL, a powerful but sometimes intimidating programming language,” said Ed Anuff, chief product officer, DataStax. “With the introduction of the JSON API, we’re democratizing access to Astra DB’s capabilities, making it more intuitive and accessible for the more than 13 million global JavaScript developers. With Astra DB’s new vector search capabilities, virtually any developer can now use Astra DB to build powerful generative AI applications using real-time data.”

Elastic Announces AI Assistant for Observability and General Availability of Universal Profiling

Elastic® (NYSE: ESTC), the company behind Elasticsearch®, announced the launch of Elastic AI Assistant for Observability and general availability of Universal Profiling™, providing site reliability engineers (SREs) at all levels of expertise with context-aware, relevant, and actionable operational insights that are specific to their IT environment. 

“With the Elastic AI Assistant, SREs can quickly and easily turn what might look like machine gibberish into understandable problems that have actionable steps to resolution,” said Ken Exner, chief product officer, Elastic. “Since the Elastic AI Assistant uses the Elasticsearch Relevance Engine on the user’s unique IT environment and proprietary data sets, the responses it generates are relevant and provide richer and more contextualized insight, helping to elevate the expertise of the entire SRE team as they look to drive problem resolution faster in IT environments that will only grow more complex over time.” 

ibi Unleashes the Power of Legacy Systems with Open Data Hub for Mainframe

ibi, a scalable data and analytics software platform that makes data easy to access and analytics easy to consume, announced the launch of ibi™ Open Data Hub for Mainframe. A breakthrough solution for organizations struggling with data retrieval from legacy data environments, the offering revolutionizes the access and utilization of mainframe data by directly bringing this data into web-based business analytics capabilities.

“The launch of the ibi Open Data Hub solution marks a significant milestone for ibi customers and mainframe users across industries, as it provides real-time access to static mainframe data, which to date has been difficult to access, directly by BI analytics solutions,” said Dan Ortolani, vice president, support engineering, ibi, a business unit of Cloud Software Group. “This innovative offering democratizes access to the full power of mainframe data while maintaining its security and compliance. The implications for business users, developers, and data scientists are unparalleled, as it can be used in virtually any BI client. The solution is also compelling for existing ibi WebFOCUS users who can seamlessly extend their efforts to mainframe data.”

Qlik Announces Qlik Staige to Help Organizations Manage Risk, Embrace Complexity and Scale the Impact of AI

Qlik® announced Qlik Staige™, a holistic set of solutions to help customers confidently embrace the power of Artificial Intelligence (AI) and deliver tangible value. With Qlik Staige, customers can innovate and move faster by making secure and governed AI part of everything they can do with Qlik – from experimenting with and implementing generative AI models to developing AI-powered predictions.

Every organization is looking to AI for competitive advantage, but adoption is difficult. Leaders are cautious about going too fast due to risk, governance, and trust concerns. Qlik Staige helps organizations build a trusted data foundation for AI, leverage modern AI-enhanced analytics, and deploy AI for advanced use cases.

“Qlik understands that organizations are looking for pragmatic ways to leverage AI to make better, faster decisions, right now,” said Mike Capone, CEO of Qlik. “Our competitors have made many announcements that promise future products or vision. Our difference is that Qlik customers are already using AI, including leveraging a proven and trusted LLM and a full range of AI-enhanced analytics. Additionally, with Qlik Staige, our customers and partners are transforming organizations – as evidenced by more than 100,000 AI models built using Qlik AutoML.”

Granica Introduces Chronicle AI for Deep Data Visibility for Amazon S3 and Google Cloud Storage Customers

Granica, the AI efficiency platform, announced Chronicle, a new generative AI-powered SaaS offering that provides visibility and analytics into how data is accessed in cloud object stores. Chronicle is part of Granica’s AI efficiency platform, the company’s enterprise solution bringing novel, fundamental research in data-centric AI to the commercial market. Granica Chronicle provides rich analytics and observability for data with a deep focus on access. Chronicle also speeds time to value for new customers of Granica Crunch, the platform’s data reduction service, and Granica Screen, the data privacy service.

Cloud object stores — such as Amazon S3 and Google Cloud Storage — represent the largest surface area for breach risk given these repositories are optimized to store large volumes of file and object data for AI/ML training and big data analytics. With this surface area continuing to rapidly expand at petabyte scale, lack of visibility makes it hard for teams to optimize application environments for cost, ensure compliance, enable chargebacks, improve performance and more.
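The visibility problem described above boils down to rolling up object-store access events by bucket, prefix, and operation. A simplified sketch of that aggregation (the log fields and values here are hypothetical; real S3 or GCS access logs carry many more fields, and a tool like Chronicle does this at petabyte scale):

```python
from collections import defaultdict

# Hypothetical, simplified access-log records: (bucket, key, operation).
events = [
    ("ml-data", "train/part-000.parquet", "GET"),
    ("ml-data", "train/part-001.parquet", "GET"),
    ("ml-data", "raw/dump.csv", "PUT"),
    ("backups", "2023/09/snap.tar", "GET"),
]

# Roll up request counts per (bucket, top-level prefix, operation).
usage = defaultdict(int)
for bucket, key, op in events:
    prefix = key.split("/", 1)[0]  # first path segment of the object key
    usage[(bucket, prefix, op)] += 1

for k in sorted(usage):
    print(k, usage[k])
```

Aggregates like these are what make it possible to answer cost, compliance, and chargeback questions about who is touching which data, and how often.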

“Historically, the options for free visibility tools are either too simplistic or overly complex to solve cloud storage access issues businesses face day to day,” said Rahul Ponnala, CEO and co-founder of Granica. “These datasets are also typically siloed off, making it hard for teams to see the full picture around how their data is being accessed. Granica Chronicle delivers the sweet spot customers are looking for: a user-friendly analytics environment bringing data visibility across disparate silos together in one, cohesive place.”

Astronomer Introduces New Capabilities to Enable The Future of Effortless, Cost-Effective Managed Airflow

Astronomer, a leader in modern data orchestration, introduced new capabilities to Astro, its Apache Airflow-powered platform. These capabilities include a new architecture and deployment model with competitive consumption-based pricing, ensuring teams of all sizes can harness the advanced capabilities of Astro for data orchestration. Airflow’s flexibility and extensibility have placed it at the center of the machine learning data stack as the critical orchestrator of Machine Learning Operations (MLOps). With the explosion of demand for Artificial Intelligence (AI), Astronomer’s new capabilities enable organizations to rapidly harness the potential of AI and natural language processing, accelerating the development of next-gen applications with precision and agility.
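At its core, the orchestration Airflow provides means executing tasks in dependency order across a directed acyclic graph (DAG). A minimal, self-contained sketch of that model using only the Python standard library (the task names are invented for illustration; a real Airflow DAG would declare operators and schedules rather than plain dependency sets):

```python
from graphlib import TopologicalSorter

# Hypothetical ML pipeline: task name -> set of upstream dependencies,
# mirroring how an Airflow DAG orders extract/transform/train steps.
dag = {
    "extract": set(),
    "clean": {"extract"},
    "train_model": {"clean"},
    "publish_metrics": {"clean"},
    "deploy": {"train_model"},
}

# A valid execution order: every task runs only after its upstreams.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

Orchestrators add scheduling, retries, and parallelism on top of this ordering, but the dependency-resolution core is the same.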

Today, Astronomer is excited to announce a new component of the Astro platform: The Astro Hypervisor. Unlike conventional architectures that focus on the bare minimum required to run open-source projects on cloud-based containers, the introduction of Astro Hypervisor adds an entirely new dimension to the Astro platform that allows greater visibility and control over the Airflow deployments that Astronomer runs for their customers. 

Pete DeJoy, Founder and SVP of Product at Astronomer, shared “With Astro, customers can now set up and optimize their Airflow deployments faster than ever before via a self-serve product that is priced on infrastructure consumption. However, this is just the beginning. Our ambitious product roadmap is poised to drive rapid innovation while delivering best-in-class cost and performance profiles. We’re excited to continue pushing the boundaries of modern orchestration and collaborating with the most advanced companies in the world to solve their toughest data problems.”

Timeplus Open Sources its Powerful Streaming Analytics Engine for Developers Globally

Timeplus, creator of one of the industry’s fastest and most powerful streaming analytics platforms, announced that it has licensed its core engine, Proton, as open source for developers worldwide. Timeplus has developed an innovative, unified streaming + historical analytics platform, with its historical online analytical processing (OLAP) using ClickHouse. This means businesses can now seamlessly generate ad hoc reports over very large datasets, using both historical data and live streaming data. And they can accomplish this faster and at lower cost than with other streaming frameworks.
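The "streaming + historical" idea is that one query can aggregate over rows already at rest and rows still arriving on the live stream. A toy sketch of that unification (the page-view events and sources here are invented; a real engine would read the historical side from an OLAP store and the live side from something like a Kafka topic):

```python
from collections import Counter

# Historical rows already at rest, e.g. in the OLAP store: (page, views).
historical = [("checkout", 3), ("home", 5)]

def live_stream():
    """Stand-in for a real streaming source emitting new events."""
    yield ("home", 2)
    yield ("checkout", 1)

# One ad hoc aggregate over both sides, the way a unified
# streaming + historical engine answers a single query.
totals = Counter()
for page, n in historical:
    totals[page] += n
for page, n in live_stream():
    totals[page] += n

print(dict(totals))
```

In production the streaming side is unbounded, so real engines maintain such aggregates incrementally as events arrive rather than recomputing them per query.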

“Timeplus is a company built by engineers, for engineers,” said Ting Wang, co-founder and CEO at Timeplus. “While developers have been thrilled with the simplicity and elegance of our product, many have asked us to go open source. We listened and are excited to license our software as open source and contribute code to ClickHouse that will benefit developers everywhere. Users will gain value from an amazing combination of best-in-class real-time OLAP analytics and powerful, lightweight stream processing.”

SolarWinds Continues Ongoing Business Evolution With New and Upgraded Service Management and Database Observability Solutions 

SolarWinds (NYSE:SWI), a leading provider of simple, powerful, secure observability and IT management software, announced the launch of new service management and database observability solutions designed to help companies achieve operational excellence, better business outcomes and accelerated innovation across the enterprise. 

Announced at today’s SolarWinds Day virtual summit, the new Enterprise Service Management  (ESM) and upgraded SQL Sentry® solutions are part of the company’s ongoing transformative strategy to uniquely unify observability and service management. As a leader in enterprise software for nearly 25 years, SolarWinds provides customers with the tools they need to get maximum value from their digital innovation efforts and increase productivity within complex IT environments. With a broad customer base across IT ops, DevOps, SecOps, AIOps, and CloudOps teams, SolarWinds has built on this success with transformative efforts to evolve its business, expand its product portfolio of industry-leading, AI-enabled solutions, and enhance its go-to-market strategy. 

“As the challenges our customers face evolve, we are committed to evolving our solutions and business alongside them to ensure we consistently meet their needs,” said Sudhakar Ramakrishna, SolarWinds President and CEO. “At SolarWinds, we have a simple but basic strategy for success: listening to, and learning from, our customers. Our foremost priority is helping our customers stay innovative, competitive, and productive. By giving them precisely the tools they need for today’s most pressing challenges and keeping pace as they—and the industry—transform, we’ve been able to grow, mature, and advance our own business.”

Telmai Redefines Data Reliability, New Release Simplifies and Accelerates Enterprise Adoption of Data Observability 

Telmai, the AI-driven data observability platform built for open architecture, unveiled its latest release featuring seven category-defining features designed to simplify and accelerate data observability adoption for the enterprise. With the growth of the data ecosystem over the past few years, enterprises are seeing an accelerated need for continuous, reliable data flowing through their pipelines. High-quality, consistent, and reliable data is the foundation for AI/ML, including generative AI and analytics-based data products.

Telmai’s release empowers data engineers/architects and product owners to harness powerful time travel features, build multi-attribute data contracts, conduct in-depth root cause analysis of data failures, and gain greater control over data privacy and residency via its new private cloud offerings.
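A "multi-attribute data contract" is a set of per-column rules every row must satisfy. As a rough, hypothetical sketch of the concept (the column names and rule format below are illustrative only, not Telmai's actual API):

```python
# Each rule names a column and a predicate that every row must satisfy.
contract = {
    "user_id": lambda v: isinstance(v, int) and v > 0,
    "country": lambda v: isinstance(v, str) and len(v) == 2,  # ISO-2 code
    "amount":  lambda v: isinstance(v, (int, float)) and v >= 0,
}

def validate(rows, contract):
    """Return (row_index, column) pairs for every contract violation."""
    violations = []
    for i, row in enumerate(rows):
        for col, check in contract.items():
            if not check(row.get(col)):
                violations.append((i, col))
    return violations

rows = [
    {"user_id": 7, "country": "US", "amount": 12.5},
    {"user_id": -1, "country": "USA", "amount": 3.0},  # two bad fields
]
print(validate(rows, contract))
```

An observability platform runs checks like these continuously against live pipelines and, when violations appear, helps trace them back to the upstream change that caused them.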

“We are excited to bring Telmai’s most significant release to the market to date,” said Mona Rakibe, co-founder and CEO of Telmai. “We’ve drawn inspiration from our enterprise customers to build a product that redefines the future of data reliability.”

TruEra Launches TruEra AI Observability, the First Full Lifecycle AI Observability Solution Covering Both Generative and Traditional AI

TruEra launched TruEra AI Observability, the first full-lifecycle AI observability solution providing monitoring, debugging, and testing for ML models in a single SaaS offering. TruEra AI Observability now covers both generative and traditional (discriminative) ML models, meeting customer needs for observability across their full portfolio of AI applications, as interest in developing and monitoring LLM-based apps is accelerating.

Initial development of LLM-based applications is dramatically increasing since the launch of ChatGPT. However, LLM-based applications have well-known risks for hallucinations, toxicity and bias. TruEra AI Observability offers new capabilities for testing and tracking LLM apps in development and in live use, so that risks are minimized while accelerating LLM app development. The product capabilities were informed by the traction of TruLens – TruEra’s open source library for evaluating LLM applications.

“TruEra’s initial success was driven by customers in banking, insurance, and other financial services, whose high security requirements were well met by existing TruEra on-prem solutions,” said TruEra Co-founder, President and Chief Scientist Anupam Datta. “Now, with TruEra AI Observability, we are bringing ML monitoring, debugging, and testing to a broader range of organizations, who prefer the rapid deployment, scalability, and flexibility of SaaS. We were excited to see hundreds of users sign up in the early beta period, while thousands have engaged with our hands-on educational offerings and community. The solution brings incredible monitoring and testing capabilities to everyone developing machine learning models and LLM applications.”

YugabyteDB 2.19: Bridging an Application’s Journey from Lift-and-Shift Migration to Massive Scale

Yugabyte, the modern transactional database company, announced the general availability of YugabyteDB 2.19 with bimodal query execution and built-in cloud native connection management. Together, these capabilities expand the reach of distributed PostgreSQL to applications at every scale while simplifying application architecture and migration.

“Developers around the world turn to PostgreSQL for its familiarity and ease of use,” said Karthik Ranganathan, co-founder and CTO of Yugabyte. “YugabyteDB turbocharges PostgreSQL with built-in resilience, seamless scalability, and more. YugabyteDB 2.19 builds upon our market-leading compatibility to meet the evolving needs of modern applications and future-proof them with a unified platform that can power applications that need global scale and those that are still growing—all through the power of new dynamic query execution.”

DiffusionData Releases Diffusion 6.10

DiffusionData, a pioneer and leader in real-time data streaming and messaging solutions, announced the release of Diffusion 6.10. The latest developer-centric enhancements to the framework aim to free up resources, speed up and simplify development, and reduce operational costs.

Riaz Mohammed, CTO at DiffusionData, said: “We spend a lot of time speaking to developers and architects about how we can improve our framework, what they need that we’re not providing, and what we can do better. This latest release is mainly based on the feedback we’ve received. The enhancements address architectural and technical challenges many of our customers face on a daily basis. We are very grateful for the community and client input we get, which helps us to further strengthen our market position as a global leader in real-time data streaming.” 

Dremio Unveils Next-Generation Reflections: A Leap Forward in Accelerating SQL Query Performance on the Data Lakehouse 

Dremio, the easy and open data lakehouse, announced the launch of its next-generation Reflections technology, revolutionizing the landscape of SQL query acceleration. Dremio Reflections pave the way for sub-second analytics performance across an organization’s entire data ecosystem, regardless of where the data resides. This transformative technology redefines data access and analysis, ensuring that insights can be derived swiftly and efficiently and at 1/3 the cost of a cloud data warehouse.

Reflections are Dremio’s innovative SQL query acceleration technology. Queries using Reflections often run 10 to 100 times faster than unaccelerated queries. The new launch introduces Dremio Reflection Recommender, a ground-breaking capability that lets you accelerate Business Intelligence workloads in seconds. Reflection Recommender automatically evaluates an organization’s SQL queries, and generates a recommended Reflection to accelerate them. Reflection Recommender eliminates arduous manual data and workload analysis, ensuring the fastest, most intelligent queries are effortless and only a few keystrokes away. Reflection Recommender is easy to use and puts advanced query acceleration technology into the hands of all users, saving significant time and cost.
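The principle behind this kind of query acceleration is to materialize an aggregate once and answer repeated queries from the materialization instead of rescanning raw rows. A toy sketch of the idea (Dremio applies it transparently to SQL at lakehouse scale; the data and names here are purely illustrative):

```python
# Raw fact rows: (region, sales_amount).
raw_rows = [("east", 100), ("west", 250), ("east", 50), ("west", 25)]

def query_unaccelerated(region):
    """Full scan of the raw rows on every query."""
    return sum(v for r, v in raw_rows if r == region)

# Build the materialized aggregate once, up front.
materialized = {}
for r, v in raw_rows:
    materialized[r] = materialized.get(r, 0) + v

def query_accelerated(region):
    """O(1) lookup against the precomputed aggregate."""
    return materialized[region]

# Both paths return the same answer; only the cost per query differs.
assert query_unaccelerated("east") == query_accelerated("east") == 150
```

What a recommender adds on top is automation: analyzing the query workload to decide which aggregates are worth materializing, rather than leaving that analysis to a human.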

“Dremio Reflections accelerate SQL queries by orders of magnitude, eliminating the need for BI extracts/imports and enabling companies to run their most mission-critical BI workloads directly on a lakehouse,” said Tomer Shiran, founder of Dremio. “With automatic recommendations and next-generation incremental updates, we’ve made it even easier for organizations to take advantage of this innovative technology.”

Sign up for the free insideBIGDATA newsletter.
