insideBIGDATA Latest News – 5/8/2023


In this regular column, we’ll bring you all the latest industry news centered around our main topics of focus: big data, data science, machine learning, AI, and deep learning. Our industry is constantly accelerating, with new products and services being announced every day. Fortunately, we’re in close touch with vendors from this vast ecosystem, so we’re in a unique position to inform you about all that’s new and exciting. Our massive industry database is growing all the time, so stay tuned for the latest news items describing technology that may make you and your organization more competitive.

Zetaris Introduces the Fluid Data Vault

Zetaris, a leading data analytics platform, announced that its Fluid Data Vault™ toolkit automates the integration of big data and streaming data sources. According to Zetaris, users can now go directly from source systems to a Data Vault without having to replicate the raw data across multiple storage layers.

“One of the key goals at Zetaris is improving access to your data,” said Vinay Samuel, founder and CEO of Zetaris. “By introducing Fluid Data Vault, we can demonstrate how Zetaris enables the Data Vault of the future by integrating big data and streaming data sources in real-time with less replication.”

DataStax Simplifies Real-time AI Deployment with the Launch of Luna ML

DataStax, the real-time AI company, announced the launch of Luna ML, a support service for Kaskada Open Source, the unified event processing engine for real-time machine learning (ML). Luna ML supports customers with modern, open source event processing for ML, enabling customers to deploy Kaskada with professional support from DataStax.

“Adding an open source component to an ML platform that supports experimentation and production environments can be complex,” said Davor Bonaci, chief technology officer and executive vice president, DataStax. “With Luna ML, customers can confidently take steps toward implementing a real-time AI platform, and deploy Kaskada Open Source with the benefit of support from the Kaskada experts at DataStax.”

Metabob Launches New Generative AI VS Code Extension for Its Software Code Debugging and Refactoring Tool

Metabob announced a new artificial intelligence (AI)-powered Visual Studio Code (VS Code) extension for its debugging and refactoring tool that speeds software code review by 60%. It provides users with functional, context-sensitive code recommendations to fix errors right in the integrated development environment (IDE), and it is specifically trained to fix code generated by AI tools such as GitHub Copilot and ChatGPT.

“With recent advancements in AI, we set out to solve this problem and free up time for developers. Software developers are valuable for their ability to produce code, and their productivity is reduced when fixing bugs,” said Massi Genta, CEO, Metabob. “The VSCode extension helps to make our tool available to even more developers. This way, Metabob’s AI can quickly find and explain errors in plain English and recommend solutions on how to fix them.”

Kore.ai Unveils Experience Optimization (XO) Platform V10.1 Equipped with Smart Co-Pilot and Advanced Generative AI Capabilities 

Kore.ai, a leading conversational AI platform and solutions company, announced the release of the Kore.ai Experience Optimization (XO) Platform Version 10.1, featuring enhanced capabilities for building chatbots using generative AI. Kore.ai has introduced capabilities that leverage generative AI and large language models (LLMs) for the creation and deployment of intelligent conversational experiences, making the process five times faster with one-third of the operational effort required by conventional methods.

“Generative AI language models and conversational AI work at their highest potential when used together,” said Kore.ai CEO and Founder, Raj Koneru. “LLMs are especially helpful in handling tasks that would otherwise require creating large datasets and training various blank slate models. Through the V10.1 release, we are bringing the power of generative language models to all stages of bot development, simplifying the process of building sophisticated virtual assistants.” 

Trovata Launches First Generative AI Finance & Treasury Tool

Trovata, a leader in bank APIs and cash management, announced the first generative AI entrant in the fintech space. Trovata AI leverages OpenAI’s ChatGPT technology to accelerate the company’s vision of automating cash workflows and business intelligence for corporate finance, accounting, and treasury departments. The company is rolling out a beta of Trovata AI to select customers beginning in May.

“The economic environment is rapidly shifting and finance teams are under more pressure than ever to provide business intelligence and manage their cash effectively, but they have few real tech-driven resources to help them do that,” said Brett Turner, founder and CEO of Trovata. “Trovata AI completely changes that, helping finance teams operate with great leverage and proactivity while still managing business risk.”

MosaicML Launches Inference API and Foundation Series for Generative AI; Leading Open Source GPT Models, Enterprise-Grade Privacy and 15x Cost Savings 

MosaicML, a leading Generative AI infrastructure provider, announced MosaicML Inference and its foundation series of models for enterprises to build on. This new offering allows developers to quickly, easily, and affordably deploy Generative AI models at up to 15x lower cost than comparable services. With the addition of inference capabilities, MosaicML now offers a complete, end-to-end solution for Generative AI training and deployment at the most efficient cost available today. Generative AI models have quickly become a catalyst for innovation across industries, from healthcare to financial services to e-commerce. However, off-the-shelf models have well-documented issues around data security, model transparency, and availability. Access to the alternative, custom Generative AI models, has been limited until now.

“We believe that MosaicML Inference is a game-changer for Generative AI. It radically reduces the cost of serving large models and enables enterprises to do so in their own secure environments. Together with the MosaicML Foundation Series, enterprises now have more capabilities than ever before to achieve their own state-of-the-art AI without concerns about cost, scale, and security.” – Naveen Rao, CEO

Precisely Announces New Data Quality Service and Powerful Data Catalog-Driven User Experience in its Market-Leading Data Integrity Suite

Precisely, a leader in data integrity, announced the latest innovations in the Precisely Data Integrity Suite. Customers can now quickly build data pipelines, integrate data into new cloud platforms using hundreds of connectors, and easily access enhanced data observability, geo addressing, and data enrichment capabilities. Its unified data catalog also enables Suite services to seamlessly interoperate and powers a new searchable user interface that reveals a full inventory of business and technical metadata.

“With these latest advancements, the Data Integrity Suite provides a seamless experience like no other, empowering businesses to harness trusted data for critical decision-making,” said Anjan Kundavaram, CPO at Precisely. “Customers can now effortlessly access the highest quality data, proactively observe it to prevent issues downstream, unlock essential context, and make it available in the environment of their choosing. The data catalog provides the unique common thread, meaning different services can be easily combined for maximum value.”

Honeycomb Launches First-of-its-Kind Natural Language Querying for Observability Using Generative AI

Honeycomb, a leading observability platform used by high-performing engineering teams to investigate the behavior of cloud applications, announced that it is the first observability platform to launch fully executing Natural Language Querying using generative AI, via its new capability, Query Assistant. This development dramatically scales the platform’s query power and makes observability more usable for all engineering levels.

“The best developer tools are increasingly going to be the ones that get out of your way and become invisible,” said Charity Majors, CTO of Honeycomb. “Observability shouldn’t require you to master complicated tools or languages that force you to constantly switch context and piece together clues to get answers to complex problems. The only thing observability tools should encourage you to focus on is your own curiosity about what’s happening in your system.”

Qrvey Incorporates Advanced AI into Embedded Analytics

Qrvey, the embedded analytics layer built specifically for SaaS companies, has added advanced AI functionality to its product roadmap after initial proofs of concept quickly demonstrated their value. The use of ChatGPT within Qrvey’s platform allows end users to quickly identify outliers, patterns, and forecasts, and even suggests follow-up questions or alternative ways of visually presenting the findings. Together, these functions make it even easier for business users to quickly derive actionable information from even the most complex data.

“We’ve been exploring the use of AI since the inception of our product and have offered Natural Language processing since the beginning for things like sentiment analysis. The emergence and refinement of technologies like ChatGPT allow us to take this type of functionality to the next level and allow our customers to deliver faster and easier insights to their end users,” said David Abramson, CTO of Qrvey. “Analytics is really at the nexus of the AI revolution, a trend which is singularly focused on answering complex questions from a wide array of data. Embedded AI allows these answers to be quickly synthesized and presented in a way anyone can access and understand. We’re very focused on fully leveraging this technology in upcoming releases.”

Introducing New Relic Grok, the Industry’s First Generative AI Observability Assistant

New Relic (NYSE: NEWR), the all-in-one observability platform for every engineer, announced New Relic Grok, the generative AI assistant for observability. New Relic Grok reduces the toil of manually sifting through data, makes observability accessible to all regardless of prior experience, and unlocks insights from any telemetry data source. Leveraging OpenAI’s large language models (LLMs) and New Relic’s unified telemetry data platform, New Relic Grok allows engineers to use natural language prompts to perform tasks previously done via traditional user interfaces: set up instrumentation, troubleshoot issues, build reports, manage accounts, and more. This accelerates New Relic customers’ efforts to consolidate telemetry data in its platform, increases the volume of queries that uncover insights, and enables new teams to adopt observability.

“Ever since we invented cloud APM in 2008, we have pioneered innovations years ahead of competitors. New Relic Grok is the continuation of this DNA and defines how generative AI will transform our industry,” said New Relic CEO Bill Staples. “New Relic Grok makes observability dramatically simpler, democratizes access to instant insights, and helps engineering teams realize the true potential of observability.”

Teradata Operationalizes Artificial Intelligence (AI) Models at Scale with Integration of Google Cloud’s Vertex AI, Teradata VantageCloud and ClearScape Analytics

Teradata (NYSE: TDC) announced the integration and general availability of Google Cloud’s Vertex AI with Teradata VantageCloud and ClearScape Analytics, the complete cloud analytics and data platform. By operationalizing sophisticated Vertex AI models with the scalability and performance of ClearScape Analytics, customers can move from experimenting with AI to achieving AI-driven business success across a multitude of use cases.

“Our customers are investing in the power of AI to fuel their digital transformations and achieve tangible business outcomes that have a real-world impact on their businesses,” said Hillary Ashton, Chief Product Officer at Teradata. “Our openness and scalability facilitate the operationalization of Vertex AI’s models across an organization and its mission-critical use cases – such as customer churn, fraud detection, predictive maintenance, and supply chain optimization. Customers are able to make bold business decisions, driven by data, that keep them ahead of the competition.”

InfluxData Unveils Future of Time Series Analytics with InfluxDB 3.0 Product Suite

InfluxData, creator of InfluxDB, a leading time series platform, announced expanded time series capabilities across its product portfolio with the release of InfluxDB 3.0, its rebuilt database and storage engine for time series analytics. InfluxDB 3.0 is available today in InfluxData’s cloud products, including InfluxDB Cloud Dedicated, a new product for developers that delivers the performance, power, and flexibility of InfluxDB with the security of a fully managed service. InfluxData also announced InfluxDB 3.0 Clustered and InfluxDB 3.0 Edge to give developers next-gen time series capabilities in a self-managed database.

“InfluxDB 3.0 is a major milestone for InfluxData, developed with cutting-edge technologies focused on scale and performance to deliver the future of time series,” said Evan Kaplan, CEO, InfluxData. “Built on Apache Arrow, the most important ecosystem in data management, InfluxDB 3.0 delivers on our vision to analyze metric, event, and trace data in a single datastore with unlimited cardinality. InfluxDB 3.0 stands as a massive leap forward for both time series and real-time analytics, providing unparalleled speed and infinite scalability to large data sets for the first time.”

dotData Announces dotData Feature Factory, Signaling a Paradigm Shift in Enterprise Data Solutions

dotData, a leading provider of platforms for feature discovery, announced the public availability of dotData Feature Factory. The newly released platform provides advanced functionality that empowers data scientists with a data-centric approach to feature engineering, powered by reusable feature discovery assets that were not previously available. dotData Feature Factory enables a paradigm shift in enterprise data solutions and will replace dotData Py, a Python-based data science automation engine first introduced in 2018.

“This new product provides our heart and core as an independent product,” said Ryohei Fujimaki, Ph.D., founder and CEO of dotData. “In past years, we have kept validating that Feature Discovery is the biggest pain in enterprise data solutions. The vision of the new dotData Feature Factory is to enhance all data solutions for enterprise organizations.”

Smartling announces patent-pending technology that enables breakthrough improvements in translation using AI

Smartling, Inc., the enterprise translation company, announced the introduction of patent-pending technology that brings human-quality machine translation closer to reality. With the help of Smartling’s AI technology, enterprises can now achieve breakthrough improvements in translation quality. This includes implementing style guidelines, brand voice, locale-specific conventions, grammatically accurate terminology handling and the proper use of linguistic gender preferences in translations.

The main challenge in implementing state-of-the-art generative AI in localization is having a scalable, end-to-end platform to make use of large language models (LLMs) that render the desired output with the necessary linguistic, factual and cultural level of precision. Smartling’s patent-pending technology addresses this challenge by designing a repeatable and predictable prompt engineering process to obtain the most suitable prompt templates for each specific use case. 
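In practice, a "repeatable and predictable" prompt engineering process often comes down to maintaining a library of vetted templates keyed by use case and filling them mechanically rather than writing ad-hoc prompts. The sketch below illustrates that general idea in Python; the template text and use-case names are hypothetical, not Smartling's patent-pending process:

```python
# Hypothetical template-per-use-case prompt library. The use cases and
# wording are illustrative only; a production system would version and
# evaluate these templates per language pair and content type.
PROMPT_TEMPLATES = {
    "brand_voice": (
        "Translate the text below into {target_lang}, matching this brand "
        "voice: {style_notes}.\n\nText:\n{text}"
    ),
    "glossary": (
        "Translate the text below into {target_lang}. Always render these "
        "terms exactly as given: {glossary}.\n\nText:\n{text}"
    ),
}

def render_prompt(use_case: str, **fields) -> str:
    """Pick the template registered for a use case and fill it in,
    producing the same prompt shape every time for that use case."""
    return PROMPT_TEMPLATES[use_case].format(**fields)
```

Because each use case always flows through the same template, output quality can be measured and improved template-by-template instead of prompt-by-prompt.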

Smartling employs patent-pending, generative AI-powered technology that enables the use of LLMs, such as GPT-4 and ChatGPT, to ensure machine translation follows a company’s style guidelines, maintains brand voice and further perfects the grammar of the automated translation output. Additionally, this technology uses LLMs for machine translation to greatly improve a customer’s preferred terminology handling and eliminate gender bias in translations. 

“Smartling’s pioneering use of generative AI and LLM technology is reshaping the way our customers translate and localize,” said Bryan Murphy, CEO of Smartling. “As a result of this new, patent-pending technology, Smartling can provide significantly more accurate and fluent translations for companies in their own brand voice, in their own style and at a fraction of the cost. In fact, our customers have been increasing their volume of machine translated content by over 225% per year while reducing their cost of translation by 60%.”

Tasq.ai Releases First Human Guidance Solution for Generative AI at Mass Scale

Tasq.ai, the data management, collection, and annotation company, announced the release of a first-of-its-kind one-stop-shop solution to build and fine-tune generative AI models at scale. The platform will dramatically decrease the time it takes to create AI models by centralizing the entire data process, from data collection and labeling to model training, validation, and regular fine-tuning to improve performance.

Globally, enterprises are already leveraging the power of foundation models (large AI models trained on unlabeled data) as OpenAI releases GPT-4, Google launches Bard, Meta launches LLaMA in beta, and over 70% of organizations plan to increase their AI budgets within the next three years. However, most companies and organizations are not equipped to take advantage of foundation models due to their enormous size. Foundation models are difficult and expensive to host, and using off-the-shelf versions in production could lead to poor performance and a high risk of governance and compliance violations. Even the seemingly straightforward task of fine-tuning ChatGPT for a specific professional use case is often costly, complicated, and produces ineffective results.

“Generative AI is responsible for a modern day ‘Gold Rush’ of applicability that enables companies to provide their customers with tremendous value and potentially a significant first-mover advantage if they do so quickly,” says Erez Moscovich, CEO of Tasq.ai. “Our new human guidance solution for Generative AI enables the adaptation of these large foundation models to serve specific purposes faster than ever while maintaining the highest quality, all with little effort on behalf of the data scientists. It’s a game changer that finally allows companies to overcome the biggest bottleneck to unlocking the value that Generative AI creates.”

AtScale Introduces Code-first Data Modeling Capabilities for its Semantic Layer Platform

AtScale, a leading provider of semantic layer solutions for modern business intelligence and data science teams, announced new capabilities within its semantic layer platform to support code-first data modelers, including developers, analytics engineers, and data scientists. These new capabilities tightly integrate with AtScale’s existing no-code visual modeling framework and provide flexibility to build and manage data models and metric definitions within the semantic layer using code-based modeling frameworks.

“Analytics engineers and other code-first data modelers need the flexibility of a markup language and automation scripts to build and maintain the sophisticated data models underlying a robust semantic layer,” said Dave Mariani, founder and CTO for AtScale. “AtScale’s modeling language is built on best practices of dimensional analytics and seamlessly integrates with our metrics serving engine, ensuring optimal performance and cost efficiency of analytics queries, while maintaining tight integration with analytics layer tools.”

Moveworks Launches Creator Studio: A No-Code, Generative AI Platform for Building Any Conversational AI Use Case Across the Enterprise

Moveworks, a leading conversational AI platform for the enterprise, announced Creator Studio — a no-code, generative AI platform for building any conversational AI use case in minutes. The platform leverages advanced large language models (LLMs) and generative AI to provide a natural language interface that serves as an enterprise-wide co-pilot for employees. Creator Studio makes it easy for anyone — regardless of their department or technical skillset — to quickly generate conversational AI use cases, search for information and take action across every application, and integrate Moveworks with any system across the enterprise.

“Employees face daily challenges when interacting with various business systems — from updating an account in a CRM, to checking the status of a purchase order,” said Varun Singh, Founder and President of Moveworks. “They often rely on service owners and admins to bridge this gap, which ultimately results in gross inefficiency. With Creator Studio, we are finally empowering service owners to automate their repetitive tasks within minutes using the power of generative AI.” 

project44 Unveils “Movement GPT” Providing First-Ever Generative AI Assistant for Supply Chain 

project44, a leading supply chain visibility platform, announced the launch of Movement GPT, a new generative AI assistant within project44’s platform, Movement. Using the capabilities of generative AI, Movement GPT is a breakthrough enhancement that finally delivers on the decades-old promise of autonomous supply chains.

As the launch of ChatGPT has shown, generative AI has the potential to reshape entire industries, including the supply chain industry. However, generative AI is only as good as the data used to train it. project44 is uniquely positioned to deliver a capability like Movement GPT because of its dataset, gathered from tracking 1 billion shipments representing $1 trillion in customer inventory across 181 countries. project44 is also the only visibility company to cover all modes and geographies within a single platform.

“For the first time, we are training generative AI models to become experts in supply chain,” said Jett McCandless, project44 Founder and CEO. “We are thrilled to deliver Movement GPT to supply chain professionals to make their work more efficient and enable them to overcome the complexities of today’s supply chains.” 

Airbyte Releases API to Automate Data Movement for Large-scale Deployments

Airbyte, creators of the fastest-growing open-source data integration platform, announced the public launch of the Airbyte API. Now, engineers can manage large deployments efficiently, integrate Airbyte Cloud with other tools in the modern data stack, and leverage Airbyte functionality to automate data movement.

Software and data engineers can now automate various data integration tasks by interacting with Airbyte Cloud programmatically through the API, rather than only manually through the Airbyte Cloud user interface. Other companies that want to provide data movement within their own products can also use the API to embed Airbyte Cloud’s data movement capabilities directly into their product offerings.
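Interacting with Airbyte Cloud programmatically means calling REST endpoints with an API token. The sketch below, in Python with only the standard library, triggers a sync job for one connection; the endpoint path and field names follow Airbyte's public API documentation as I understand it, so verify them against the API reference for your Airbyte version:

```python
import json
import urllib.request

AIRBYTE_API = "https://api.airbyte.com/v1"  # Airbyte Cloud public API base

def build_sync_request(connection_id: str, token: str) -> urllib.request.Request:
    """Build (but do not send) the request that triggers a sync job for
    one connection via POST /v1/jobs."""
    body = json.dumps({"connectionId": connection_id, "jobType": "sync"}).encode()
    return urllib.request.Request(
        f"{AIRBYTE_API}/jobs",
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

def trigger_sync(connection_id: str, token: str) -> dict:
    """Send the sync request and return the response, which includes a
    job id that can be polled for completion."""
    with urllib.request.urlopen(build_sync_request(connection_id, token)) as resp:
        return json.load(resp)
```

Scheduling this call from an orchestrator (or looping it over many connection ids) is the kind of large-deployment automation the API is meant to enable.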

“The Airbyte API is enabling data engineers to manage their data infrastructure efficiently, freeing up time to concentrate on the core value of their business,” said Riley Brook, product lead at Airbyte. “Since the beta release, we’ve seen a fast rate of adoption as a go-to tool for data teams. Now with the Airbyte API being publicly available, we expect to see that accelerate even further.”

Bedrock Analytics Unveils ChatGPT Integration for CPG Insight Automation at Scale

Bedrock Analytics, a prominent leader in data analytics and artificial intelligence (AI) for Consumer Packaged Goods (CPG) manufacturers, announced its latest platform upgrade, which leverages generative AI to automate insights that help CPG brands better compete in an omni-channel retail environment.

This advanced technology has been developed using OpenAI’s ChatGPT, which Bedrock has further enhanced to offer an unparalleled solution exclusively designed for the CPG industry. With the latest upgrade, Bedrock now delivers instantaneous and relevant insights to help CPG brands make data-driven decisions in real-time. This upgrade is effective immediately for all customers in all countries and powers nearly all visualizations and analyses within the Bedrock Platform.

“The CPG industry is seeking productivity and competitive advantages as the market cools,” said Will Salcido, Bedrock CEO and Founder. “Not only do they have to harmonize disparate data sources into a single source of truth to get a true view of their performance, but they also need to find the time to extract the insights to tell the stories that retail buyers will find persuasive. Without the proper tools in place, this can be an incredibly time-consuming process that only a handful of experts can pull off.  We empower our CPG customers to fly at hyperspeed.”

LivePerson upgrades its Conversational Cloud platform with trustworthy AI capabilities to redefine how businesses put Generative AI and LLMs to work

LivePerson (Nasdaq: LPSN), a global leader in Conversational AI, announced the launch of its upgraded Conversational Cloud platform, which now features trustworthy Generative AI and Large Language Model (LLM) capabilities. These new features are built on the foundation of LivePerson’s unique expertise and data set powering AI conversations across every major industry for the world’s biggest brands. The upgraded Conversational Cloud combines the power of LLMs with LivePerson’s safe and responsible AI to boost human productivity across voice and messaging channels, driving better outcomes for businesses, their employees, and their customers.

“Businesses have always dreamed of automating truly human-like conversations on a massive scale, but the effort and expense put this dream out of reach until the dawn of generative AI and LLMs. But the hard truth is, these technologies are not fit for business use right out of the box,” said LivePerson founder and CEO Rob LoCascio. “Our new trustworthy AI capabilities offer guardrails designed to make LLMs safe and effective for even the world’s biggest brands — all while bringing digital experiences for their employees and end-consumers to new heights.”

Satori Announces Availability of Contextual Data Access to Cut Time-to-Data from Weeks to Seconds for Analytics, Data Science and Engineering

Satori, a leading data security platform, announced the availability of contextual data access. This capability allows users to obtain instant, just-in-time access to relevant data when they need it. Companies that manage sensitive data can now get time-to-value from data faster than ever before while meeting security and compliance requirements.

Data access management is taking a huge toll. Companies are unable to realize their true data-driven potential because manual processes for getting access to data create significant delays, which incur high operational costs and add significant security, compliance and privacy risks. 

“It’s 2023 – users should no longer wait weeks for data,” said Ben Herzberg, Chief Scientist and VP of Marketing, Satori. “Nor should data or DevOps engineers be spending large chunks of their time babysitting data, manually enabling and disabling access to it. Companies should fly with their data.”

HEAVY.AI Launches HEAVY 7.0, Introducing Machine Learning Capabilities

HEAVY.AI, an innovator in advanced analytics, announced general availability of HEAVY 7.0. The new product adds innovative machine learning capabilities, enabling telcos and utilities to perform in-database predictive modeling and simulate any scenario to uncover key insights. HEAVY 7.0 also incorporates a streamlined Heavy Immerse experience and enhanced support for HeavyRF’s operational configuration. 

“For telcos and utilities, delivering the best service to their customers means constantly analyzing, investigating and learning from the immense amounts and vast sources of data available to them. But analyzing complex geospatial data, combined with customer and radiofrequency data, is a cumbersome and error-prone process,” said Jon Kondo, CEO, HEAVY.AI. “HEAVY 7.0 provides tools and features that make it fast and easy for these organizations to analyze any type of data and uncover insights that are critical for their business.”

Retrocausal Revolutionizes Manufacturing Process Management with Industry-First Generative AI LeanGPT Offering

Retrocausal, a leading manufacturing process management platform provider, announced the release of LeanGPT™, its proprietary foundation models specialized for the manufacturing domain. The company also launched Kaizen Copilot™, Retrocausal’s first LeanGPT application, which assists industrial engineers in designing and continuously improving manufacturing assembly processes and integrates Lean Six Sigma and Toyota Production System (TPS) principles favored by Industrial Engineers (IEs). The industry-first solution gathers intelligence from Retrocausal’s computer vision and IoT-based floor analytics platform Pathfinder. In addition, it can be connected to an organization’s knowledge bases, including Continuous Improvement (CI) systems, Quality Management Systems (QMS), and Manufacturing Execution Systems (MES), in a secure manner.

“We trained Retrocausal’s generative AI LeanGPT models on specialized knowledge needed for manufacturing,” said Dr. Zeeshan Zia, CEO of Retrocausal. “Using our new LeanGPT-powered Kaizen Copilot application with our Pathfinder floor analytics platform gives IEs all the information they need to excel in their roles, including domain expertise, organizational knowledge, and automated process observations, eliminating the need for tedious field studies or combing through unwieldy knowledge bases, while staying firmly rooted in Lean principles.”

Northern Light Announces Generative AI “Question Answering” Capability for SinglePoint Strategic Research Portals

Northern Light announced a generative AI “question answering” capability within Northern Light’s SinglePoint™ knowledge management platform for market research and competitive intelligence.

Northern Light’s question answering capability is built on top of OpenAI’s GPT-3.5 Turbo large language model. Last month, OpenAI, an AI research and deployment company and the developer of ChatGPT, exposed a developer API for GPT-3.5 Turbo, making it practical for commercial developers of enterprise-class applications to use it.

Northern Light’s implementation of generative AI in SinglePoint addresses a critical requirement for AI-based research systems that synthesize published content: source citations. When a user asks SinglePoint a direct question, the answer is automatically generated and presented in narrative form, with live links to the source material from which the answer is derived so users can click through to the source documents of greatest interest to explore a given answer in more detail. Sources may be from any content collection within the client organization’s SinglePoint portal, including business news, original primary or licensed secondary market research, thought leaders’ commentary, technology white papers, conference abstracts, or industry and government databases.
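A citation-preserving question-answering flow of this kind can be sketched against OpenAI's chat completions API: number the retrieved source snippets, instruct the model to answer only from them with bracketed citations, and then map those markers back to source links. The payload below uses the real `gpt-3.5-turbo` chat completions endpoint, but the prompt wording and snippet format are my illustrative assumptions, not Northern Light's actual implementation:

```python
import json
import urllib.request

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_cited_answer_payload(question, sources, model="gpt-3.5-turbo"):
    """Assemble a chat-completions payload that asks the model to answer
    only from numbered source snippets and to cite them as [1], [2], ...
    Each source is a dict with 'title' and 'text' keys (illustrative)."""
    context = "\n\n".join(f"[{i}] {s['title']}: {s['text']}"
                          for i, s in enumerate(sources, start=1))
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Answer using ONLY the numbered sources below and "
                        "cite each claim like [1]. If the sources do not "
                        "contain the answer, say so.\n\n" + context},
            {"role": "user", "content": question},
        ],
        "temperature": 0,
    }

def ask(question, sources, api_key):
    """Send the payload and return the model's narrative answer; the [n]
    markers in it can then be linked back to the source documents."""
    req = urllib.request.Request(
        OPENAI_URL,
        data=json.dumps(build_cited_answer_payload(question, sources)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because every claim in the generated answer carries a bracketed index, the front end can render each marker as a live link to the underlying document, which is the source-citation requirement the paragraph above describes.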

“In generative AI, last month seems like a decade ago, given how quickly both user interest and the technology are evolving,” Northern Light CEO C. David Seuss said. “We are working with OpenAI’s GPT-3.5 Turbo model, which has the merits of being both fast and impressively accurate when using text from high quality market and competitive intelligence content of the type that Northern Light provides to its clients. OpenAI has made switching to another model supported by the API easy and quick, so if GPT-4 or some future model becomes a better option down the road, we will switch to it then.”

Sign up for the free insideBIGDATA newsletter.

Join us on Twitter: https://twitter.com/InsideBigData1

Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/

Join us on Facebook: https://www.facebook.com/insideBIGDATANOW
