Heard on the Street – 10/19/2021


Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this new regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

Informatica’s Recent IPO Announcement. Commentary by Matthew Scullion, CEO of Matillion.

“Informatica filing for its IPO highlights the fast-accelerating demand for data integration in a world where data has become the new commodity; indeed, where data is changing every aspect of how we work, live and play. And that phenomenon has only accelerated post-pandemic as companies race to digitize and become more data-led. But this demand is being driven mostly in the cloud, where legacy practices for moving, transforming, synchronizing and orchestrating data no longer apply. As organizations continue to put data to work in the cloud, they are looking, and will continue to look, to solutions that are born in and built for the cloud.”

For data investments to succeed, organizations need to focus on their data health. Commentary by Christal Bemont, CEO, Talend.

“The data management industry is evolving as more companies realize that being data-driven is required for their success. This realization has thrown all things data into the spotlight, from Gartner naming data fabric as an emerging technology in its annual Hype Cycle report to Informatica filing for IPO. However, the data management market has fundamentally changed since Informatica was last public. Today, 78% of executives still face challenges in effectively working with data to make decisions, and most of the data solutions in the market today aren’t solving the right problems. Data integration cannot succeed without data quality and governance. And a successful enterprise data strategy isn’t just about moving data around anymore. It’s about providing healthy data that everyone can trust and access to guide their intelligent business decisions, ultimately driving organizational success.”

Frances Haugen Testimony Analysis and Reaction. Commentary by Jenny Lee, Partner at the law firm Arent Fox LLP.

“What is most striking about Haugen’s complaints to the Securities and Exchange Commission is what they reveal to be lacking in the U.S. legal system. They are an indirect mechanism for confronting the concerns at hand. The SEC regulations address harm inflicted on investors, not on consumers, children, or social media users per se. So the heart of the SEC complaints is the misrepresentations Facebook allegedly made to investors. What’s missing in the U.S. as of yet is any single federal regulator whose process or tip line can be triggered to help oversee corporate conduct affecting consumer protection, children’s protection, or online content users. As the testimony to the Commerce Committee shows, part of the whistleblower’s mission is to get Congress to act to create a new regime, or to clarify existing consumer-protection or communications rules, so that they impose substantive restrictions on social media products, restrictions that we currently do not have. Haugen’s whistleblower reports come at a time when multiple consumer-protection concerns are coalescing in numerous spaces: political expression online, online bullying, public health/pandemic or democracy misinformation, attention-economy harms, and data privacy rules. In Silicon Valley, experts in this area, especially in the new field of the “attention economy,” have been writing about these issues for years. Ultimately, Haugen’s reports, focusing on divisive algorithms and Instagram’s effects on girls, are two micro-level examples of the broader issues that have long been the subject of debate among tech industry veterans, and are just now starting to be debated on Capitol Hill and among regulators: how to supervise a new product, social media services. It’s an extremely poignant time for all stakeholders to jump into the dialogue, as federal lawmakers pick up steam in efforts to scrutinize and potentially restrict social media companies’ conduct. Possible agencies that may become active participants in this debate include the CFPB, FTC, and FCC. It will be easier for agencies to wield existing regulations in the consumer-protection space than for Congress to write and pass legislation. However, with increasing disclosures of corporate information such as Haugen’s whistleblower reports, there is more data available that could serve as a catalyst for swifter Congressional action as well.”

What’s Holding DevOps Back. Commentary by Brian Rue, CEO and Co-founder of Rollbar.

“One of the biggest challenges teams face when moving to DevOps is that releasing new code more often means that, no matter how much testing you do, there will be more code-related incidents. Developers get woken up in the middle of the night and get burned out. The best way to address this is with automatic remediation: automatically execute known remediation runbooks (e.g., restart an erroring server) or automatically restore the system to a known-good state (e.g., turn off a feature flag, or roll back to a previous version). Additionally, teams adopting DevOps are often surprised that while their monitoring/observability tools work great for infra issues, they don’t help much at all for code issues. The best solution is to introduce a code-focused tool that uncovers code errors.”
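The remediation pattern Rue describes can be made concrete with a short sketch. The snippet below is a minimal, hypothetical illustration, assuming placeholder hooks (`current_error_rate`, `disable_feature_flag`, `rollback_release`) that a real system would wire to its monitoring, feature-flag, and deployment tooling; it is not any particular product’s API.

```python
# A minimal sketch of automatic remediation: when the error rate spikes,
# try the cheapest known-good action first (disable the suspect feature
# flag); if errors persist, roll back to the previous release.
# All hooks below are hypothetical placeholders, not any product's API.
import time

ERROR_RATE_THRESHOLD = 0.05  # trip remediation above a 5% error rate


def current_error_rate() -> float:
    """Placeholder: query your monitoring/observability system here."""
    return 0.01


def disable_feature_flag(flag: str) -> None:
    print(f"remediation: disabled feature flag '{flag}'")


def rollback_release() -> None:
    print("remediation: rolled back to last known-good release")


def remediation_loop(suspect_flag: str, poll_seconds: int = 30) -> None:
    flag_disabled = False
    while True:
        if current_error_rate() > ERROR_RATE_THRESHOLD:
            if not flag_disabled:
                disable_feature_flag(suspect_flag)  # cheapest fix first
                flag_disabled = True
            else:
                rollback_release()  # flag toggle didn't help; restore state
                return
        time.sleep(poll_seconds)
```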

No-Code Artificial Intelligence is the Next Frontier. Commentary by Michelle Zhou, Co-Founder & CEO at Juji.

“Creating artificial intelligence (AI) is not rocket science, but in many ways it’s harder. Building or managing custom AI requires deep AI expertise, sophisticated coding and a long development cycle, not to mention massive amounts of training data that few organizations have. In fact, only a small number of organizations can even afford to create their own AI, leaving many organizations behind and creating an AI gap. This is why no-code AI is the future. One place no-code AI is being embraced is conversational AI. One-on-one conversations are perhaps the most effective way to engage with people. Since human-driven conversations are hard to scale, organizations have begun enlisting the help of conversational AI agents, or chatbots. Through this, we have learned that to be successful, conversational no-code AI must support the entire AI development life cycle, including setup, deployment, and management; offer rapid no-code customization by reusing pre-built AI instead of starting from scratch; and bring real-time monitoring of AI behavior with the ability to upgrade AI instantly without interrupting existing deployments. No-code conversational AI democratizes the adoption of AI and bridges the AI divide, as it enables every organization to rapidly set up, deploy and manage powerful AI solutions.”

10 Years on and AI/ML are Finally Getting the Attention They Deserve. Commentary by Mon Ray, Researcher in Applied ML and Anti-Abuse at GitLab.

“AI has come a long way, but in terms of technology and fairness, it still has a way to go. For optimal, productive adoption and deployment of AI technology, enterprises should apply DevOps best practices to AI. Similar to the principles of a CI/CD pipeline, AI should always be improving through iterative delivery and collaboration. A collaborative culture of continuous integration is essential to AI’s success. When AI/ML is used in the right way (i.e., responsible AI/ML), it will help humans solve real problems. In our industry, we’re starting to see responsible AI/ML, MLOps, applied ML, and changes in decision science evolving and consolidating. We are using AI/ML for many novel causes, from clinical trials to medical coding and billing to Netflix recommendations and emails, and it is genuinely elevating the way the community makes decisions. The applications of AI/ML are endless and can help software make smarter suggestions and more personalized recommendations, and automate tasks that were previously unthinkable. The more we understand how to use AI/ML responsibly, the closer we can bring humans together as a community.”

Utilizing emerging technologies to connect and streamline disconnected processes and accelerate revenue. Commentary by Randy Littleson, Chief Marketing Officer, Conga.

“As companies pivot from legacy to more modern, subscription- and usage-based business models, the importance of customer self-service and digitized contracting processes will continue to grow. However, despite the array of hardware and software available to businesses today, most organizations’ internal processes for quoting and proposing, negotiating, and agreeing to a contract are still fragmented and slow. While some may consider disparate business processes like these ‘business as usual,’ these siloed operations can cause more harm than leaders realize, such as drops in employee productivity and a negative customer perception of the company. Even worse, hindered operational procedures can negatively impact a company’s revenue operations (RevOps), the mission-critical tasks associated with revenue generation for a company. To avoid this harm, organizations can use emerging technologies to connect and streamline disconnected processes. For example, with automation, organizations can provide more frictionless experiences for employees and customers through configure, price, quote (CPQ) applications, automated billing cycles and the facilitation of sales contracts. Additionally, companies can use AI and ML to expedite revenue operations by quickly identifying key terms and clauses in an agreement. This helps streamline the contract renewal process and close business deals faster, ensuring businesses don’t leave revenue on the table. In short, there are numerous benefits when organizations adopt and implement emerging technologies. By taking the time to digitize an organization’s commercial operations, businesses will see an increase in employee productivity, improved customer perception of their company and, overall, a faster RevOps pipeline.”

Utilizing automation to streamline disconnected processes and accelerate revenue. Commentary by Rachel Sokol, Head of Healthcare Research, Olive.

“For 19 months now, the COVID-19 pandemic has forced businesses, especially hospitals and health systems, to take a critical look at how to streamline disconnected processes and alleviate the administrative burden on workers. According to a recent report, 64% of healthcare executives agree that there will never be enough staff to handle the volume of patient and member data at their organizations. In a world with significant stressors, businesses are looking for new ways of working that can improve their employees’ well-being. For example, 92% of clinicians agree that too much time spent on administrative tasks is a major contributor to healthcare worker burnout, and 93% believe AI will be good for their career. AI further supports ongoing growth, with 31% of healthcare executives citing margin pressures as a consequence of not automating. This past year has caused many employers across the globe to reevaluate their business operations and seek new ways to improve revenue, and healthcare is no exception. Across the country, the healthcare workforce remains overburdened, giving AI an opportunity to improve daily workflows.”

Powering the Modern World Through Open-Source Data. Commentary by Shen Li, Head of Global Business at PingCAP.

“Organizations of all types depend on massive, rapidly growing, and evolving datasets to deliver more intelligent services and achieve business growth. Given its importance, data should be treated as a utility. Just like water, gas, or electricity, data should be accessible, simple to leverage, and reliable. But data can only become a utility with the support of an open-source database, robust data integration, and a data management framework. Next-generation databases provide the underpinning that makes big data and AI a reality: a scale-to-fit attribute ensures the right amount of data is available for training ML algorithms in real time, which enables fast dataset convergence and ultimately more accurate inferences. Ultimately, by positioning data as a resource as important as any utility today, organizations are able to get the right data, at the right time, to enhance their services and improve their business outcomes.”

The imperative investment that can enhance any business’s ROI. Commentary by Arun Kumar, SVP of Data and Insights at Hero Digital.

“In today’s digital-first, and in many cases digital-only, world, customer expectations are sky-high for every interaction they have with a brand. Uber, Amazon, and Airbnb have redefined CX, and customers now expect nothing less from the digital shop around the corner, so to speak. That means every brand needs to step in and step up their data game. Rapidly generating insights from the data they collect on their customers through the use of technology, surfacing insights to the right stakeholders, creating and modifying experiences at key touchpoints, and learning on the go are all par for the course. Brands need to get into the minds of consumers at every stage of the journey and deliver precisely what customers are looking for, and that is precisely what a good insights-oriented data analytics program can help them do.”

Year-old Machine Learning Models Already Obsolete: How Enterprises Can Keep Up. Commentary by Andy Wishart, CPO at Agiloft.

“Today, there are few areas in business that remain untouched by the impact of AI and machine learning. According to 2020 data, reducing operational costs is the top reason enterprises adopt ML. However, the data you used to train your ML model twelve months ago may not provide the accuracy you need today. Further, ML technology is evolving so rapidly that base models are now obsolete after only 12 months, causing technical debt that hinders enterprise innovation and productivity, which in turn hurts the bottom line. This expensive dilemma has led to the rise of ModelOps, a practice that proactively manages the lifecycle of different kinds of operationalized AI and ML models. 76% of AI-focused executives say achieving cost reductions is a main benefit of investing in ModelOps, with almost half (42%) describing it as crucial to the business. Since this remains a top priority, many CTOs and CIOs are tasked with using ModelOps to ensure their enterprise ML models and technology are constantly updated, which is no small task. For enterprise leaders, the simplest, yet often overlooked, resolution to this problem is to adopt only enterprise systems that are agile and configurable enough to adapt to the business’s growth and technology evolution. There is high demand for data scientists as companies lack the technical talent required to build scalable AI solutions, and companies that are unable to hire the right talent face the risk of being left behind. To remedy this, a growing number of companies are turning to no-code platforms for machine learning. With a no-code ML system, pre-built and custom modules can be tailored by your enterprise to meet its exact needs without writing custom code, so deployment times and costs are a fraction of those required for other systems. Enterprise tools with no-code platforms, like contract lifecycle management (CLM), allow for easy integration with other systems, enable easy data migration, and provide tools for annotating training data without requiring expertise in data science or ML. For these reasons, it is important for CIOs and IT leaders to adopt flexible business solutions that will enable the organization to scale and adapt quickly.”
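As a concrete illustration of the ModelOps monitoring Wishart alludes to, the sketch below tracks a deployed model’s live accuracy against its training-time baseline and flags it for retraining once performance decays. The class name, window size, and tolerance are illustrative assumptions, not part of any specific ModelOps product.

```python
# A minimal sketch of ModelOps-style monitoring: track a deployed model's
# live accuracy over a sliding window of fresh, labeled outcomes and flag
# it for retraining once accuracy decays past a tolerance. The window size
# and tolerance are illustrative assumptions.
from collections import deque


class ModelMonitor:
    def __init__(self, baseline_accuracy: float,
                 tolerance: float = 0.05, window: int = 500):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, prediction, actual) -> None:
        self.recent.append(1 if prediction == actual else 0)

    def needs_retraining(self) -> bool:
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough fresh evidence yet
        live_accuracy = sum(self.recent) / len(self.recent)
        return (self.baseline - live_accuracy) > self.tolerance


# In production, record() would be called as ground truth arrives, and a
# retraining job would be scheduled whenever needs_retraining() is True.
monitor = ModelMonitor(baseline_accuracy=0.92)
```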

Why We Need AIOps to Win the AI Race. Commentary by Steve Escaravage, Senior Vice President at Booz Allen.

“National security is increasingly becoming a digital enterprise, and winning in the digital battlespace of the future requires continued advancements in artificial intelligence focused on speed, collaboration, and scale. Taking algorithms from the lab to the field will enable us to scale AI, which is key in the global race for AI supremacy. For AI to work sooner, better, and faster, we need to operationalize more AI programs now so we can start collecting real-world data and learning from them. A crucial step in scaling AI is to adopt an AI Operations (AIOps) framework, which increases an organization’s success rate in deploying AI while ensuring critical ethics, security, and privacy components are prioritized early in development. An AIOps framework helps both evolve the AI development process and integrate that process across the organization, ultimately enabling a scalable, sustainable, and coordinated AI capability. This framework should have several key components, including mission engineering, responsible AI with human-centered design, data engineering and data operations (DataOps), machine learning (ML) engineering and ML operations (MLOps), systems engineering and DevSecOps, reliability engineering, infrastructure and cybersecurity engineering, operational feedback loops, and a dedicated AI team. Ensuring your organization has these needs met before making a substantial investment in AI will help close the gap between conceptual innovation and real-world deployment.”

AI-Powered Forecasting for Successful Peak Planning. Commentary by Steve Denton, CEO of Ware2Go, a UPS company.

“Disruptors in the supply chain industry are moving to a technology-first approach powered by machine learning. Implementing machine learning and AI is a novel idea in our space, but it shouldn’t be. The most powerful asset in the hands of the supply chain industry is data. We have tons of it. The integration of OMS, WMS, and TMS systems within top-tier WMS provides rich data sets that enable greater speed and efficiency in an industry that has long needed a technological overhaul. Rather than relying on tedious legacy forecasting models, we can use machine learning to deliver quick analyses based on historical order and shipping data. Integrating shipping data into our forecasting helps merchants plan more accurately on both the procurement and fulfillment sides of the supply chain. By aggregating historical sales data, demand planning, promotional calendars, and supply chain insights in our reporting, supply chain technology can show merchants real-world levers they can pull to optimize their supply chain. Some of those levers are: placing inventory in the closest proximity to the end consumer, getting ahead of and minimizing stockouts, and optimizing available inventory for the appropriate sales channels, all while keeping merchants nimble enough to make real-time adjustments during peak and to adapt to changes in the market. Ultimately, merchants who prioritize forecasting this year will be better equipped to meet holiday delivery deadlines and grow consumer preference for their brand.”
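To make the forecasting idea tangible, here is a minimal sketch that fits a linear trend to historical weekly unit sales and projects the next few weeks. The sample figures are invented for illustration; a production demand-planning model would also fold in seasonality, promotions, and shipping data, as the commentary notes.

```python
# A minimal sketch of demand forecasting from historical order data: fit a
# linear trend to past weekly unit sales and project the next four weeks.
# The sales figures are invented for illustration; a real system would add
# seasonality, promotions, and shipping data.
import numpy as np

weekly_units = np.array([120, 135, 128, 150, 160, 158, 175, 190])
weeks = np.arange(len(weekly_units))

# Fit a degree-1 polynomial (linear trend) to the history.
slope, intercept = np.polyfit(weeks, weekly_units, deg=1)

# Project the next four weeks so inventory can be positioned ahead of peak.
future_weeks = np.arange(len(weekly_units), len(weekly_units) + 4)
forecast = slope * future_weeks + intercept
print(np.round(forecast))  # approximate units to stock per week
```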

How AI-based Cameras Can Fix America’s Road Congestion Issue. Commentary by Gabriel McFadden, senior business development manager at Cubic Transportation Systems.

“Imagine a world where traffic flows through the busiest intersections with ease. Now, imagine all of this being achieved through strategically placed AI-enabled intelligent traffic system (ITS) cameras. According to recent research, over 155,000 AI-based cameras will be in use for traffic management by 2025, a significant jump from the 33,000 AI-based cameras being used for traffic management in 2020. AI-based cameras can be installed on busy roads and highways, blending traditional computer vision and AI to detect and track all the moving objects in their proximity. The cameras then use AI to determine exactly what the objects are, which helps cities establish patterns: which roads are the most congested, where traffic delays occur the most, and which types of vehicles are typically involved. AI-based cameras can be deployed at intersections across the globe, from urban locations with a high density of drivers, bikers, and pedestrians to suburban and rural communities in mid-size cities and small towns. Cities like Seattle, Tacoma, San Francisco, and Reno have already adopted ITS technology and are improving traffic flow. No matter the population or congestion level, AI-based cameras are an effective solution for making cities’ drivers safer and more efficient.”

Defining and Mitigating Bias in AI. Commentary by Alix Melchy, Vice President of AI at Jumio.

“Bias can infiltrate artificial intelligence (AI) algorithms in numerous ways, skewing results and producing information that isn’t fair or objective. Not to mention, it threatens the credibility and evolution of modern AI technology. As AI is adopted for an increasing number of business functions and data analyses, this bias is concerning to many experts. So, to prevent bias in AI, we need to understand the three different types of AI bias: model bias, sampling bias and fairness bias. (i) Model bias: this occurs when machine learning models are overly simple and fail to capture the trends present in the dataset. To confirm whether the machine learning model contains model bias, you must ask it many questions and test different scenarios within the data. Running these tests confirms whether the model’s performance changes when one data point changes or when a different sample of data is used to train or test the model. (ii) Sampling bias: this happens when the machine learning model uses datasets in which the population sample is not representative of the broader population. For example, a dataset that used to be considered the benchmark for testing facial recognition software contained data that was 70% male and 80% white. Sampling bias can be prevented by prioritizing diversity in your design and development teams, because teams that lack any diverse perspective will create experiences based on their homogeneous backgrounds and abilities. (iii) Fairness bias: even if sensitive variables such as gender, ethnicity and sexual identity are excluded, AI systems learn to make decisions based on training data, which may contain skewed human decisions or represent historical or social inequities. To mitigate fairness bias, AI algorithms must be adjusted to account for these biases and attempt to right the scales. By understanding the different types of AI bias, your organization can take proactive steps to prevent bias from getting in the way of accurate data collection and skewing results.”
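As a small illustration of point (ii), the sketch below compares a training set’s group proportions against reference population shares and flags over- or under-represented groups. The function name, reference shares, and the 10-percentage-point tolerance are illustrative assumptions, not a method from the commentary.

```python
# A minimal sketch of a sampling-bias check: compare a training set's group
# proportions against reference population shares and flag groups that are
# over- or under-represented. The reference shares and the 10-point
# tolerance are illustrative assumptions.
from collections import Counter


def sampling_bias_report(samples: list, reference: dict,
                         tolerance: float = 0.10) -> dict:
    """Return {group: observed_share} for groups whose share deviates
    from the reference by more than `tolerance`."""
    counts = Counter(samples)
    total = len(samples)
    flagged = {}
    for group, expected_share in reference.items():
        observed_share = counts.get(group, 0) / total
        if abs(observed_share - expected_share) > tolerance:
            flagged[group] = observed_share
    return flagged


# Mirroring the benchmark example above: a 70% male dataset checked
# against a roughly 50/50 reference population.
data = ["male"] * 70 + ["female"] * 30
print(sampling_bias_report(data, {"male": 0.5, "female": 0.5}))
# -> {'male': 0.7, 'female': 0.3}
```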

A “human-in-the-loop” approach that includes a diverse team of data annotators is needed to mitigate bias in AI and combat malicious online content. Commentary by Michael Ringman, CIO at TELUS International.

“AI programs are created by humans, and whether we like it or not, humans inherently have biases that can be taught to the algorithms that power AI solutions. For instance, AI-powered content review and moderation solutions are becoming increasingly commonplace to help brands manage their online communities and combat malicious and inappropriate user-generated content (UGC). However, if the algorithms supporting these solutions inherit human biases, customers may be subjected to toxic content and ultimately lose trust in a given brand. Vast amounts of UGC are created each day (every minute, Instagram users post 347,222 stories), and humans alone don’t have the capacity to review content, identify what is harmful, and remove it fast enough. This is where AI and machine learning come in, helping to quickly flag and remove inappropriate content. But to achieve reliable and sustainable success in this regard, brands must train and develop their machine learning algorithms to mitigate discrimination and biases. Incorporating a “human-in-the-loop” approach, with a diverse team labeling the content that will become the training datasets for these content moderation platforms, is an essential starting point. Recognizing that AI models are not “set it and forget it” solutions and must be continuously monitored and improved by human experts, who provide insightful feedback into the process, can further mitigate instances of bias and more accurately identify inappropriate content before it does harm. Without human involvement, brands risk the AI and machine learning’s assumptions being flawed, leaving a negative impression on customers if inappropriate content is kept online.”
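A minimal sketch of the routing logic behind such a “human-in-the-loop” setup: confident model decisions are automated, uncertain ones go to a human review queue, and the resulting labels feed the next round of training. The thresholds and names below are illustrative assumptions, not TELUS International’s implementation.

```python
# A minimal sketch of human-in-the-loop moderation routing: confident model
# decisions are automated, uncertain ones are queued for human annotators,
# and human labels become training data for the next model iteration.
# Thresholds and names are illustrative assumptions.

REMOVE_ABOVE = 0.95   # auto-remove when the model is highly confident
APPROVE_BELOW = 0.05  # auto-approve when the model is confident it's benign

review_queue = []    # items awaiting human annotation
training_data = []   # human labels fed back into retraining


def moderate(item: dict, toxicity_score: float) -> str:
    if toxicity_score >= REMOVE_ABOVE:
        return "removed"
    if toxicity_score <= APPROVE_BELOW:
        return "approved"
    review_queue.append(item)  # humans decide the ambiguous middle band
    return "pending_human_review"


def record_human_label(item: dict, label: str) -> None:
    # Each human decision becomes a labeled example for the next model.
    training_data.append((item["text"], label))


print(moderate({"text": "example post"}, toxicity_score=0.60))
# -> pending_human_review
```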

On the paradigm shifts needed to establish greater harmony between artificial intelligence (AI) and human intelligence (HI). Commentary by Kevin Scott, CTO of Microsoft.

“Comparing AI to HI involves a long history of people assuming that things that are easy for them will be easy for machines, and vice versa, but it’s more the opposite. Humans find becoming a Grand Master of chess, or performing very complicated and repetitive data work, difficult, whereas machines are easily able to do those things. But on things we take for granted, like common sense reasoning, machines still have a long way to go. … AI is a tool to help humans do cognitive work. It’s not about whether AI is becoming the exact equivalent of HI; that’s not even a goal I’m working toward.”

Sign up for the free insideBIGDATA newsletter.

Join us on Twitter: @InsideBigData1 – https://twitter.com/InsideBigData1
