insideBIGDATA Latest News – 5/18/2023

In this regular column, we’ll bring you all the latest industry news centered on our main topics of focus: big data, data science, machine learning, AI, and deep learning. Our industry is constantly accelerating, with new products and services being announced every day. Fortunately, we’re in close touch with vendors from this vast ecosystem, so we’re in a unique position to inform you about all that’s new and exciting. Our massive industry database is growing all the time, so stay tuned for the latest news items describing technology that may make you and your organization more competitive.

Crux Announces SaaS Offering to Intelligently Integrate and Accelerate the Adoption of External Data for Analytics

Crux, a pioneer in the external data integration, transformation, and observability space, announced the launch of the Crux External Data Platform (“EDP”), the first SaaS offering that enables enterprises to automate the onboarding of any external dataset directly from vendors into their organization, driving better, faster decisions. The new cloud platform allows data teams to onboard and transform external data for analytics use up to ten times faster than traditional manual methods.

“Advances in external data integration capabilities are disrupting a multi-billion-dollar category. By eliminating the pre-processing bottlenecks organizations now face, this platform will do for data integration what automation did for infrastructure with the rise of the cloud,” said Will Freiberg, CEO of Crux. “The cloud made it possible for enterprises to reduce infrastructure and maintenance costs, consolidate on-premises data warehouses, scale on-demand, and access critical resources in minutes. The Crux External Data Platform is similarly transformative, allowing data engineers to onboard external data products into their data warehouse or cloud analytics environment in minutes.”

AWS Announces Amazon Aurora I/O-Optimized

Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company (NASDAQ: AMZN), announced Amazon Aurora I/O-Optimized, a new configuration for Amazon Aurora that offers improved price performance and predictable pricing for customers with input/output (I/O)-intensive applications. With the new Aurora configuration, customers only pay for their database instances and storage consumption with no charges for I/O operations. Customers can now confidently predict costs for their most I/O-intensive workloads, regardless of I/O variability, helping to accelerate their decision to migrate more of their database workloads to AWS. Today, hundreds of thousands of customers, including Airbnb, Atlassian, and Samsung, rely on Aurora, a fully managed MySQL- and PostgreSQL-compatible relational database that provides the performance and availability of commercial databases at up to one-tenth the cost. For customers with I/O-intensive applications like payment processing systems, ecommerce, and financial applications, I/O-Optimized offers improved performance, increasing throughput and reducing latency to support customers’ most demanding workloads. With Aurora I/O-Optimized, customers can maximize the value of their cloud investment and optimize their database spend by choosing the Aurora configuration that best matches their I/O consumption patterns. 

“We launched Amazon Aurora with the aim of providing customers with a relational database, built for the cloud, that offered the performance and availability of commercial databases at up to one-tenth the cost. Since then, we have continued innovating to improve performance while offering customers simplicity and flexibility with solutions like Amazon Aurora Serverless v2,” said Rahul Pathak, vice president of Relational Database Engines at AWS. “Now, with Aurora I/O-Optimized, we’re giving customers great value for their high-scale I/O-intensive applications, and an even better option for customers looking to migrate their most demanding workloads to Aurora and the cloud.”
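
For readers who want to see where the new configuration is selected, here is a minimal sketch using boto3: the cluster identifier, username, and password are placeholders, and the “aurora-iopt1” storage type value should be verified against current AWS documentation before use.

```python
# Minimal boto3 sketch (not production code): request the Aurora I/O-Optimized
# storage configuration when creating a cluster. Identifiers and credentials
# below are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_cluster(
    DBClusterIdentifier="example-aurora-cluster",   # placeholder
    Engine="aurora-postgresql",                     # or "aurora-mysql"
    MasterUsername="dbadmin",
    MasterUserPassword="change-me",                 # use Secrets Manager in practice
    StorageType="aurora-iopt1",                     # I/O-Optimized; default is "aurora"
)
```

Instances are then added to the cluster separately (for example with create_db_instance), and billing covers database instances and storage only, with no per-I/O charges.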

Lightbend Launches Akka Distributed Cluster for Next-Gen Edge Computing Across Multiple Data Centers

Lightbend, the company providing cloud-native microservices frameworks for some of the world’s largest brands, announced the launch of Akka Distributed Cluster (Akka v.23.05), which brings innovative new capabilities for accelerating data delivery to users; maintaining application availability even in the event of a cloud provider outage; cutting costs by minimizing data storage expenses and reducing server data traffic; and conserving developer time. These new features, combined with Akka, one of the most powerful platforms for distributed computing, help enterprises build mission-critical cloud-native applications in less time than ever.

“I have long maintained that the demarcation between ‘cloud’ and ‘edge’ is not a clear boundary, but a spectrum of environments based on the specific requirements of the application use case,” said Jonas Bonér, Lightbend’s founder and CEO. “Akka Distributed Cluster brings industry-first capabilities that blur the distinctions between cloud and edge and continue our progress in building a new paradigm for tomorrow’s cloud-to-edge continuum.”   

Smartling introduces Smartling Translate, a translation portal enabling instantaneous high-quality on-brand translations for enterprises requiring a secure environment

Smartling, Inc., the enterprise translation company, announced the introduction of Smartling Translate, a translation portal enabling instantaneous, high-quality, secure and on-brand machine translations into hundreds of languages using Smartling’s patent-pending LanguageAI platform. 

For global enterprises with worldwide translation requirements and employees spread across time zones and departments, Smartling Translate complements their localization teams by letting them translate documents and text quickly, securely, and at much lower cost than human translation. Unlike public cloud translation tools, all content entered into Smartling Translate is processed in a private, safe, and secure environment.

Smartling Translate enables quick and easy translation by allowing users to copy and paste text or drag and drop files up to 200MB, eliminating the need to set up complex workflows or training. Powered by Smartling’s AI-based Neural-Machine Translation Hub, Smartling Translate produces the highest quality, most relevant translations based on language pair, content complexity, term bases, machine translation engines, a customer’s translation memory, and optional GPT-enabled enhancements, ensuring on-brand translations based on each customer’s brand voice, style, and terminology.

“Smartling Translate is a self-service translation portal that can be used by anyone in the company, and leverages their brand terminology, style guide and translation memory in a safe and secure platform to create more accurate, on brand and fluent translations. It is the fastest and easiest way to translate virtually any file type into over 140 languages by simply dragging and dropping,” said Bryan Murphy, CEO, Smartling.

Clootrack’s Customer Experience (CX) Analytics is now Powered by GPT-4, ChatGPT & GPT-3 AI

Clootrack, an AI-driven platform capable of analyzing billions of Customer Experience (CX) reviews so enterprises can gain qualitative insights in real time, has enhanced the platform with the launch of AskClootrack, which uses ChatGPT’s transformative, powerful, and versatile language model to deliver highly accurate and efficient insights. This addition has leapfrogged CX analytics to a new level by enabling insights professionals to get qualitative insights from public and private enterprise data with high accuracy in seconds. These include data from eCommerce sites, forums, blogs, social media, customer care tickets, open-ended NPS surveys, website/app feedback, and chatbots.

AskClootrack, powered by GPT-4, ChatGPT, and GPT-3, has been designed to generate actionable, highly reliable, and verifiable granular insights from millions of customer reviews. Users will be able to understand the context of customer reviews and generate responses grounded in qualitative data. This unprecedented level of understanding of their customers enables businesses to make more informed decisions about product development, innovation, marketing, and customer service. It answers questions precisely and thoroughly from the customer data, so companies can use the feature to immediately improve their products, services, and customer service processes, leading to increased customer satisfaction and loyalty.
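
The announcement does not include code, but the underlying pattern of “conversing with customer feedback” is easy to illustrate. The sketch below is a generic example, not Clootrack’s implementation: it passes a handful of invented reviews to a GPT-class model through the OpenAI Python SDK as it existed at the time and asks an analyst-style question; the API key, reviews, and prompt are placeholders.

```python
# Conceptual sketch only -- not Clootrack's implementation. A batch of reviews
# is supplied as context and a GPT-class model answers an analyst's question,
# using the OpenAI Python SDK's ChatCompletion interface (pre-1.0, mid-2023).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

reviews = [
    "Checkout kept failing on mobile during the sale.",
    "Love the new packaging, but delivery took two weeks.",
    "Support resolved my refund quickly -- great experience.",
]

question = "What are the top recurring complaints, and which should we fix first?"

response = openai.ChatCompletion.create(
    model="gpt-4",
    temperature=0,
    messages=[
        {"role": "system",
         "content": "You analyze customer reviews and answer with specific, verifiable observations."},
        {"role": "user",
         "content": "Reviews:\n" + "\n".join(f"- {r}" for r in reviews) + f"\n\nQuestion: {question}"},
    ],
)

print(response["choices"][0]["message"]["content"])
```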

Shameel Abdulla, CEO of Clootrack, commented on the integration into the Clootrack platform: “The newly added feature is a game-changer for customer experience analytics and offers a distinct competitive advantage. The ability for customer experience leaders to converse with customer feedback and make instant decisions will enable brands to fast-track their journey to customer-centricity in action. Introducing AskClootrack into the Clootrack platform will significantly enhance collaboration in enterprise decision-making and speed up execution.”

SolarWinds Adds Transformative AI Features to IT Service Management Solutions

SolarWinds (NYSE:SWI), a leading provider of simple, powerful, and secure observability and IT management software, announced it is adding transformative artificial intelligence (AI) and machine learning (ML) capabilities to its IT service management (ITSM) solutions. The new AI features include a virtual agent to help users solve everyday IT problems and guided incident resolution to empower agents with the information they need to resolve complex issues effectively. 

“Digital transformation, application modernization, and the move to the cloud have dramatically increased the complexity of digital services,” said Cullen Childress, GVP of product management at SolarWinds. “This means the number of potential problems impacting user experience has also increased substantially. SolarWinds’ ITSM solutions are a significant focus that we are investing in. This includes its Service Desk, which enables teams to focus more on important business priorities rather than mundane, time-consuming tasks. By leveraging advanced AI and powerful automation, SolarWinds makes users more productive, supports agents more efficiently, and helps ensure companies are more successful.”

NeuroBlade Announces Industry’s First Processor for Analytics, Speeding Workloads up to 100x 

NeuroBlade, pioneering the new standard for data analytics acceleration that will speed time to insight and improve query performance on petabyte-sized datasets, announced that the NeuroBlade SQL Processing Unit (SPU™) will be available with select Dell PowerEdge servers. This solution will provide customers with the reliability and security they have come to expect from Dell Technologies, coupled with the industry’s first processor architecture proven to accelerate high throughput data analytics workloads.

“This collaboration with Dell Technologies significantly strengthens our go-to-market strategy and reinforces the rapidly increasing market demand for new innovative and powerful solutions,” said Elad Sity, CEO and co-founder of NeuroBlade. “The work we have done enables organizations to keep up with their exponential data growth, while taking their analytics performance to new levels, and creating a priceless competitive advantage for them. This success couldn’t have been achieved without our engineering team, who have been collaborating with companies like Dell Technologies to unlock this new standard for data analytics.” 

The NeuroBlade SPU G200 PCIe acceleration card, announced today, is a processor solely built for data analytics, uniquely delivering consistently high throughput regardless of query complexity. The NeuroBlade system is designed to integrate seamlessly into existing data center environments. It connects to any database query engine without requiring changes to existing data, queries, or code, and can improve performance of analytics workloads such as business intelligence, data warehouses, data lakes, ETL, and more.

data.world Launches a Data Catalog Platform with Generative AI Bots

data.world announced the introduction of the data.world Data Catalog Platform with new generative AI-powered capabilities for improving data discovery. data.world is the industry’s most-used data catalog with more than 2 million users, including enterprise customers with tens of thousands of active users. Now with native generative AI integrations, even more people can use data.world to discover data and unlock organizational knowledge – regardless of expertise level.

This is the first time that data.world has introduced generative AI capabilities into its Data Catalog Platform. Archie Bots integrate the power and flexibility of data.world’s knowledge graph architecture with LLMs, including, but not limited to, OpenAI GPT. These capabilities were developed through data.world’s AI Lab and in partnership with customer design partners who tested early integrations.

On the Heels of Google I/O, PaLM 2 AI Debuts in Sendbird’s Chatbot API

Sendbird, the global in-app conversations platform with over 300 million monthly active users, announced it has integrated PaLM 2, Google Bard’s new large language model (LLM), into its low-code chatbot API. The integration is available now, making Sendbird one of the first organizations to power chatbots with Google’s latest conversational AI engine in a commercial product.

“We were given early access to fully integrate Bard’s LLM PaLM 2 into our chatbot API by Google I/O’s release,” said John S. Kim, CEO and Co-founder of Sendbird. “This gives our customers even more ways to supercharge chatbots. We’ve already seen fantastic use cases taking off with our ChatGPT integration. Adding Google’s generative AI opens the door to additional possibilities, and this is only the beginning. We expect to announce more generative AI capabilities throughout this year.”

Boost.ai Unveils Large Language Model Enhancements to Conversational AI Platform

Boost.ai, a leading conversational AI solution provider, announced Version 12 of its platform, the first of a series of planned updates by the company to incorporate Large Language Model (LLM)-enriched features. This iteration is focused on key customer experience (CX) improvements, including content suggestion, content rewriting and accelerated generation of training data. The new update will take advantage of Generative AI to suggest messaging content to AI Trainers within the boost.ai platform, generating suggested responses and resulting in drastically reduced implementation times for new intents. With this latest release, boost.ai reinforces its commitment to researching, developing, releasing, and maintaining responsible implementations of LLM-powered, enterprise-quality conversational AI features in order to further enhance the customer experience.

“LLM technology offers great promise, but most applications just aren’t properly designed to securely and scalably support real-world businesses. With worries about accuracy or even inappropriate behavior, established institutions like banks could not risk direct access to this iteration of generative AI – until now,” said Jerry Haywood, CEO of boost.ai. “By pairing LLMs with our conversational AI, we’re able to ensure accuracy and open the door for customers in sensitive industries like financial services. We’re proud to be pioneering a way forward for businesses to harness this tech right now. It’s available for customers to use and enhance their existing solution, and to help them achieve speed to value significantly sooner whilst minimizing the risks currently dominating headlines.”

Airtable’s New Embedded Artificial Intelligence Capabilities Make Modern AI Accessible Across the Enterprise

Airtable introduced Airtable AI, the easiest and fastest way to deploy AI-powered applications for the enterprise. As companies evaluate the breakthroughs in modern AI and the best way to implement them across their organizations, Airtable’s new AI components and intuitive no-code interface make it simple for teams to integrate powerful AI capabilities into their own data and workflows. With these capabilities embedded into Airtable’s next-generation platform, organizations can power a wide range of processes all in one place – from producing job requirements, to managing marketing campaigns, to planning new research and product development initiatives.

“With AI breakthroughs that are capable of a broad range of reasoning and creative work, every form of knowledge work faces imminent transformation,” says Airtable co-founder and CEO Howie Liu. “Our vision is to help enterprises embed AI into every workflow across their organization. Airtable’s no-code approach, which enables companies to rapidly build highly engaging apps, now offers the ability to embed and customize AI components to power any use case.” 

Moises.ai Unveils Orchestrator, Plug-and-Play AI For Music Tech Companies

With Orchestrator by Moises.ai, any business, service, content creator, artist, or rightsholder can channel AI technology to support their vision. With an easy and intuitive interface, Orchestrator allows anyone to drag and drop different modules, such as AI-powered stem separation or lyric transcription, and see what happens. Orchestrator provides a drag-and-drop environment for both quickly testing novel concepts and seamlessly implementing them at scale.

The Orchestrator interface removes barriers like cost and timing when businesses want to experiment with AI but have limited resources. From a few tracks to a million tracks, Moises.ai can handle it, offering the most competitive pricing and the most exciting features via Orchestrator. 

“Orchestrator’s no-code interface opens doors for fast, intuitive, and easy adoption by companies interested in adopting emerging tech like AI. It is in tune with our mission to democratize access to state-of-the-art technology, no matter the bandwidth,” says CEO and co-founder Geraldo Ramos. “With an easy and intuitive interface, anyone can get up and running in less than 5 minutes and start processing their first batch of media.”

Credo AI unveils GenAI Guardrails to help organizations harness generative AI tools safely and responsibly

Credo AI, a global leader in Responsible AI governance software, announced the general availability of GenAI Guardrails, a powerful new set of governance capabilities designed to help organizations understand and mitigate the risks of generative AI. GenAI Guardrails is powered by Credo AI’s policy intelligence engine and provides organizations with a control center to ensure the safe and responsible use of generative AI across the enterprise.

“In 2023, every company is becoming an artificial intelligence company,” said Navrina Singh, CEO and founder of Credo AI. “Generative AI is akin to a massive wave that is in the process of crashing—it’s unavoidable and incredibly powerful. Every single business leader I’ve spoken with this year feels urgency to figure out how they can ride the wave, and not get crushed underneath it. At Credo AI, we believe the enterprises that maintain a competitive advantage — winning in both the short and long term — will do so by adopting generative AI with speed and safety in equal measure, not speed alone. We’re grateful to have a significant role to play in helping enterprise organizations adopt and scale generative artificial intelligence projects responsibly.” 

Application of Graph Technology to Geospatial Data: Meet the Foursquare Graph

Foursquare, a leading independent geospatial technology platform, announced its geospatial knowledge graph, a novel way of organizing geospatial datasets using graph technologies and the H3 grid system to transform how businesses derive value from location data.

“Data is an essential resource for every company today, but rarely is it maximized to its full potential,” said Gary Little, President and CEO of Foursquare. “A pioneering use of H3 and graph technologies, the Foursquare Graph will harmonize the company’s full product suite, allowing for unprecedented querying, visualization capabilities, and advanced analytics to solve complex technical challenges that enable customers to unlock key business insights with ease and speed. This innovation will empower businesses to realize more value in geospatial data insights than previously possible.”
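
Foursquare has not published the Graph’s internals here, but the core idea of pairing H3 with graph structure can be sketched with open-source tools. The example below is a conceptual illustration, not Foursquare’s data model: a few invented venues are indexed to H3 cells with the h3-py library (v3 API), and neighboring cells are linked in a networkx graph so that proximity questions become short traversals.

```python
# Conceptual sketch of a geospatial knowledge graph on H3 (not Foursquare's
# implementation): index places to hexagonal cells, link adjacent cells, and
# answer "what is nearby?" with a graph traversal. Venues are invented.
import h3          # h3-py, v3 API
import networkx as nx

venues = {
    "coffee_shop": (40.7411, -73.9897),
    "bookstore":   (40.7420, -73.9889),
    "gym":         (40.7308, -73.9973),
}

RES = 9  # H3 resolution (hexagons of roughly 0.1 km^2)
graph = nx.Graph()

for name, (lat, lng) in venues.items():
    cell = h3.geo_to_h3(lat, lng, RES)
    graph.add_edge(name, cell, relation="located_in")   # place -> cell
    for neighbor in h3.k_ring(cell, 1):                 # link adjacent cells
        if neighbor != cell:
            graph.add_edge(cell, neighbor, relation="adjacent_to")

# "What venues sit within a couple of hops of the coffee shop's cell?"
start_cell = h3.geo_to_h3(*venues["coffee_shop"], RES)
nearby = nx.single_source_shortest_path_length(graph, start_cell, cutoff=2)
print([node for node in nearby if node in venues])
```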

UiPath Unveils New AI-powered Features and Developer Experiences to Speed Automation Across All Knowledge Work

UiPath (NYSE: PATH), a leading enterprise automation software company, announced its latest platform features that help customers discover, automate, and operate at scale with AI-powered automation. The new features are designed to elevate how organizations can take action on information flows between the various systems, people, and communications necessary to get work done. The result is faster automation creation, time-to-value, and productivity gains.

AI is fundamentally changing how people work, altering the digital transformation strategies of business leaders who must accomplish more with fewer resources, drive growth, and maximize the value across business models in increasingly compressed time frames. With organizations under pressure to exceed these objectives, the UiPath Business Automation Platform is expanding its suite of products that provide every worker and developer with opportunities to shift from idea to action by automating more processes with access to enterprise-grade AI, new developer experiences, and enhanced governance and support capabilities.

“The continuous innovation of AI-powered automation in the UiPath Platform equates to limitless potential for organizations to meet their goals faster. Our open, flexible, and enterprise-ready platform enables customers to harness innovation through the AI ecosystem, including the newest foundational models and generative experiences,” said Graham Sheldon, Chief Product Officer at UiPath. “Customers want a single platform that enables end-to-end business process transformation. Developers, IT professionals, and business users can use AI responsibly with UiPath’s built-in enterprise-grade security, governance, and compliance. The release of new technologies and capabilities in our platform further accelerates how the C-level leaders can transform their businesses with automation.”

Grafana Labs Announces New Tools for Metrics Cost Management in Grafana Cloud

Grafana Labs, the company behind the open and composable operational dashboards, announced updates to its fully managed Grafana Cloud observability platform: The powerful new Adaptive Metrics feature, which enables teams to aggregate unused and partially used time series data to lower costs, is now available for broader public access. This feature leverages enhanced insights into metrics usage recently added to Grafana Cloud’s Cardinality Management dashboards, which are now available in all Grafana Cloud tiers, both free and paid. Together these advancements, powered by the open source project Grafana Mimir, help organizations rapidly scale at cloud native pace while optimizing metric cardinality and controlling costs.

“While we’ve seen the value that Prometheus brings to organizations, we’ve also seen its popularity lead to rapid adoption and uncontrolled costs,” said Tom Wilkie, CTO at Grafana Labs. “In fact, we even had this problem at Grafana Labs, running our own Prometheus monitoring for Grafana Cloud. One of our clusters had grown to over 100 million active series, and 50% of them were unused. We started thinking about how we could solve this problem, and Adaptive Metrics was the answer. We’ve reduced that cluster by 40%, and we’re excited to share this powerful capability with our Grafana Cloud users.” 
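
Adaptive Metrics works by aggregating away series dimensions that nobody queries. As a rough, hedged sketch of that workflow, the snippet below uses the standard Prometheus-compatible HTTP query API to surface high-cardinality metrics and then expresses the intended fix as an aggregation rule; the endpoint and credentials are placeholders, and the rule dictionary is an illustrative shape, not Grafana’s exact Adaptive Metrics schema.

```python
# Hedged sketch: spot cardinality hot spots via the Prometheus-compatible
# query API, then describe the aggregation you want to apply. The endpoint,
# credentials, metric, and rule shape below are illustrative placeholders.
import requests

PROM_URL = "https://prometheus.example.com"   # placeholder endpoint
AUTH = ("tenant_id", "api_token")             # placeholder credentials

# Count active series per metric name.
resp = requests.get(
    f"{PROM_URL}/api/v1/query",
    params={"query": 'count by (__name__) ({__name__=~".+"})'},
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()

top = sorted(
    resp.json()["data"]["result"],
    key=lambda r: int(r["value"][1]),
    reverse=True,
)[:5]
for series in top:
    print(series["metric"]["__name__"], series["value"][1])

# Illustrative aggregation: keep a sum over the labels dashboards actually use,
# dropping the per-pod dimensions that inflate cardinality.
aggregation_rule = {
    "metric": "http_requests_total",
    "drop_labels": ["pod", "instance"],
    "aggregations": ["sum"],
}
```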

Pega Launches Pega Process Mining with Generative AI-Ready APIs to Enable Continuous Workflow Optimization

Pegasystems Inc. (NASDAQ: PEGA), the low-code platform provider empowering the world’s leading enterprises to Build for Change®, announced the launch of Pega Process Mining, which will make it easier for Pega users of all skill levels to find and fix process inefficiencies hindering their business operations. These intuitive process mining capabilities – along with generative AI-ready APIs – will be seamlessly integrated within Pega Platform™, providing organizations with a unified solution to continuously optimize their Pega workflows.

“To ensure an exceptional experience for your customers and employees, workflow optimization must be an ongoing pursuit and not just an occasional effort. But today’s process mining tools and methods are too cumbersome and time consuming to perform on a regular basis,” said Eric Musser, general manager, intelligent automation, Pega. “Pega Process Mining makes it more accessible for anyone in the business to quickly and easily root out process inefficiencies. This helps organizations continuously optimize their employee and customer experiences and brings them one step closer to becoming an autonomous enterprise.”

Galileo Unveils ML Data-Quality Intelligence Platform for Faster, More Accurate Computer Vision Models

Galileo, the machine-learning (ML) data intelligence company for unstructured data, announced the launch of its proprietary data-quality intelligence platform, called Galileo Data Intelligence for Computer Vision. The first-ever solution to solve data quality issues across the entire ML workflow, the Galileo platform will allow data scientists and ML engineers to automate the ‘needle in a haystack’ search for problematic data, reducing model production time by 10x, improving model accuracy by 15% across the board, and reducing data labeling costs for human-labeled datasets by 40%.

As the global datasphere expands, 80% of the anticipated 163 zettabytes available by 2025 will be unstructured, increasing the risk of errors and model production inefficiencies by forcing data scientists and ML engineers to manually track down and diagnose problems within models. The vast majority — 84% — of data scientists and ML engineers report that this ‘needle in a haystack’ approach to model error detection is “an issue for their teams at least some of the time,” according to a recent survey.

By adding just a few lines of Python code during the model training process, the innovative Galileo Data Intelligence for Computer Vision platform automatically identifies problematic data that negatively impacts model performance, then suggests effective solutions for data-science teams to seamlessly address the issue. With the Galileo platform, engineers will be able to address a major bottleneck in the data-science workflow, which will allow for more efficiency and accuracy in iterations as well as in image classification, object detection and semantic segmentation (pixel-level) models.
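
The press materials do not show those lines of Python, so the sketch below is a generic stand-in for the underlying idea rather than Galileo’s SDK or algorithm: track a per-sample loss during training and flag the consistently hardest samples, which are often mislabeled or otherwise problematic. It uses plain PyTorch on synthetic data with a few labels deliberately flipped.

```python
# Generic illustration of automated data-quality surfacing (NOT Galileo's SDK):
# record a per-sample loss while training and flag the hardest samples, which
# frequently turn out to be mislabeled. Synthetic data, tiny model.
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic features and labels, with ten labels deliberately flipped.
X = torch.randn(512, 64)
y = (X[:, 0] > 0).long()
flipped = torch.randperm(512)[:10]
y[flipped] = 1 - y[flipped]

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss(reduction="none")   # keep one loss per sample

per_sample_loss = torch.zeros(len(X))
for epoch in range(20):
    opt.zero_grad()
    losses = loss_fn(model(X), y)
    losses.mean().backward()
    opt.step()
    per_sample_loss = losses.detach()

# The highest-loss samples are the "needles": candidates for relabeling review.
suspects = per_sample_loss.topk(10).indices
print("flagged:", sorted(suspects.tolist()))
print("actually flipped:", sorted(flipped.tolist()))
```

In a real pipeline the flagged indices would be routed to a review or relabeling queue rather than printed.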

“Data science applications across industries are rapidly expanding. Unfortunately, so too are the challenges for ML and data science practitioners, many of whom are forced to spend untold amounts of time managing data quality issues to create high-quality models — an issue our team has experienced firsthand, and one we sought to resolve by founding Galileo,” said Vikram Chatterji, co-founder and CEO of Galileo. “Galileo Data Intelligence for Computer Vision will create significant efficiencies for our customers — allowing data scientists to work more quickly and effectively than ever before across the cyclical ML workflow, whether that be data preparation ahead of labeling, during training iterations or in monitoring production models.”

Zendesk announces powerful AI designed exclusively for intelligent CX

Zendesk, Inc. introduced Zendesk AI, an intelligence layer that makes personalized, efficient and more empathetic customer experiences (CX) accessible for all companies. The new offering combines decades of Zendesk’s unique data and insights with new AI technologies, including the company’s proprietary models, as well as large language models (LLMs). 

“More than 90% of our customers already use AI within Zendesk, and we are building on this great foundation with a new solution that any business can use immediately,” said Tom Eggemeier, CEO, Zendesk. “Generative AI has significant benefits for agents, admins and businesses that want to deliver the best customer experience, and Zendesk AI will help them instantly see tangible value in cost savings and thousands of hours a month in gained productivity.”

Sign up for the free insideBIGDATA newsletter.

Join us on Twitter: https://twitter.com/InsideBigData1

Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/

Join us on Facebook: https://www.facebook.com/insideBIGDATANOW
