Heard on the Street – 9/27/2021


Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this new regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

Amplitude Filing for Direct Listing on Nasdaq. Commentary by Jeremy Levy, CEO at Indicative.

“To some extent, Amplitude’s valuation and filing are wins for everyone in product analytics, including Indicative. Amplitude’s achievement is a massive validation for our market. If the company launched today, though, it would not have this same level of success, because the market is clearly transitioning to the cloud data warehouse model, something Amplitude is simply not compatible with. And while this model has been written about at length by firms like Andreessen and Kleiner, the more tangible predictor of this trend is the continued acceleration of growth at Snowflake and other cloud data providers like Amazon and Google. Amplitude has been able to leverage strong word of mouth and an easy integration to this point. But being incompatible with what has rapidly become accepted as the ideal way to build a data infrastructure, meaning products that can interface directly with the cloud data warehouse, is a serious threat to their continued growth. Amplitude’s requirements for replicating and operationalizing customers’ data reflect a decades-old approach. Their solution is built for today but not for tomorrow. In time, especially given the increased scrutiny of shareholders and earnings reports, the shortcomings of Amplitude’s approach will catch up with them.”
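
For readers unfamiliar with the warehouse-native pattern Levy describes, here is a minimal sketch of product analytics computed as SQL directly against a cloud data warehouse, with no replication of events into a separate analytics silo. The connection details and the events table and columns are hypothetical placeholders, not any vendor’s actual schema.

```python
# A minimal sketch of "warehouse-native" product analytics: funnel metrics
# computed as SQL where the event data already lives. The table, columns,
# and credentials below are hypothetical.
import snowflake.connector  # pip install snowflake-connector-python

FUNNEL_SQL = """
SELECT
    COUNT(DISTINCT CASE WHEN event = 'signup'   THEN user_id END) AS signups,
    COUNT(DISTINCT CASE WHEN event = 'activate' THEN user_id END) AS activations,
    COUNT(DISTINCT CASE WHEN event = 'purchase' THEN user_id END) AS purchases
FROM analytics.product_events
WHERE event_time >= DATEADD(day, -30, CURRENT_TIMESTAMP());
"""

def run_funnel(account: str, user: str, password: str) -> dict:
    """Query event data in place instead of copying it into a separate tool."""
    conn = snowflake.connector.connect(account=account, user=user, password=password)
    try:
        cur = conn.cursor()
        cur.execute(FUNNEL_SQL)
        signups, activations, purchases = cur.fetchone()
        return {"signups": signups, "activations": activations, "purchases": purchases}
    finally:
        conn.close()
```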

A New Culture of AI Operationalization is Needed to Bring Algorithms from the Playground to the Business Battleground. Commentary by Mark Palmer, SVP, Engineering at TIBCO.

“Data science is taking off and failing at the same time. A recent survey by NewVantage Partners found that 92% of companies are accelerating their investment in data science; however, only 12% of these companies deploy artificial intelligence (AI) at scale, down from the previous year. Companies are spending more on data science but using less of it, so we need to bring AI from the playground to the battleground. The problem is that most firms have yet to establish a culture of AI operationalization. Technology, while not the answer, helps put wind in the sails of that cultural change. For example, model operationalization (ModelOps) helps AI travel the last mile from the data science laboratory, or playground, to the business user, or battleground, like an Uber Eats for algorithms. ModelOps makes it easy to understand how to secure and manage algorithm deployment, allowing business leaders to get comfortable with AI. It also encourages collaboration between data scientists and business leaders, allowing them to bond as a team. The other benefit of a culture of AI operationalization is bias identification and mitigation. Reducing bias is hard, but the solution is often hidden in plain sight: AI operationalization teams help firms more easily assess bias and decide how to act to reduce it. A culture of AI operationalization helps data scientists focus on research and deliver algorithms to the business in a transparent, safe, secure, unbiased way.”
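
As a rough illustration of the “last mile” Palmer describes, the sketch below trains a model in the data science “playground” and exposes it to the business behind a small, versioned service. The model, features, and endpoint are illustrative assumptions, not a description of TIBCO’s ModelOps product.

```python
# A minimal ModelOps-style sketch: train a model, persist a versioned
# artifact, and serve it so business applications can call it.
import joblib
from flask import Flask, jsonify, request
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# "Playground": train and persist a versioned model artifact.
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
joblib.dump(model, "model-v1.joblib")

# "Battleground": load the artifact and expose it behind an API.
app = Flask(__name__)
deployed = joblib.load("model-v1.joblib")

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]  # e.g. [5.1, 3.5, 1.4, 0.2]
    prediction = int(deployed.predict([features])[0])
    return jsonify({"model_version": "v1", "prediction": prediction})

if __name__ == "__main__":
    app.run(port=8080)
```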

Strong DevOps Culture Starts with AIOps and Intelligent Observability. Commentary by Phil Tee, CEO and founder of Moogsoft.

“DevOps is a culture about the collective ‘we’ and building a blameless, team-centric workplace. But DevOps must be supported by tools that enable collaboration on solutions that will impact the collective whole. AIOps with intelligent observability helps shore up a strong DevOps culture by encouraging collaboration, trust, transparency, alignment and growth. By leveraging AIOps with intelligent observability, DevOps practitioners remove individual silos and give teams the visibility they need to collaborate on incidents and tasks. By getting their eyes on just about everything, employees can connect across teams, tools and systems to find the best solutions. And professional responsibilities seamlessly transfer between colleagues. AI also automates the toil out of work, so teams leave menial tasks at the door, do more critical thinking and bond over building better technologies. AIOps with intelligent observability enhances the transparency and collaboration of your DevOps culture, encourages professional growth and shuts down toxic workplace egos to create a more innovative, more agile organization.”
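
To make the idea of correlating signals across tools more concrete, here is a minimal sketch of alert correlation: raw alerts from different monitoring tools are grouped into shared incidents by service and time proximity, so teams see one correlated issue instead of many noisy alerts. The grouping heuristic is purely illustrative and is not Moogsoft’s algorithm.

```python
# A toy alert-correlation heuristic: group alerts for the same service that
# arrive within a time window into a single incident.
from dataclasses import dataclass, field

@dataclass
class Alert:
    service: str
    message: str
    timestamp: float  # seconds since epoch

@dataclass
class Incident:
    service: str
    alerts: list = field(default_factory=list)

def correlate(alerts: list[Alert], window: float = 300.0) -> list[Incident]:
    """Group alerts for the same service that occur within `window` seconds."""
    incidents: list[Incident] = []
    for alert in sorted(alerts, key=lambda a: a.timestamp):
        for incident in incidents:
            if (incident.service == alert.service
                    and alert.timestamp - incident.alerts[-1].timestamp <= window):
                incident.alerts.append(alert)
                break
        else:
            incidents.append(Incident(service=alert.service, alerts=[alert]))
    return incidents
```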

Machine Learning Tech Makes Product Protection Accessible to Retailers of All Sizes. Commentary by Chinedu Eleanya, founder and CEO of Mulberry.

“More and more companies are turning to machine learning — but often too late in their development. Yes, machine learning can open up new product opportunities and increase efficiency through automation. But to truly take advantage of machine learning in a tech solution, a business needs to plan for that from the beginning. Attempting to insert aspects of machine learning into an existing product can, at worst, result in features for the sake of “machine learning features” and, at best, require rebuilding aspects of the existing product. Starting early with machine learning can require more upfront development but can end up being the thing that separates a business from existing solutions.”

Artificial intelligence risks to privacy demand urgent action. Commentary by Patricia Thaine, CEO of Private AI.

“The misuse of AI is undoubtedly one of the most pressing human rights issues the world is facing today, from facial recognition used to monitor minority groups to the ubiquitous collection and analysis of personal data. ‘Privacy by Design’ must be core to building any AI system for digital risk protection. Thanks to the excellent data minimization tools and other privacy-enhancing technologies that have emerged, even the most strictly regulated data (such as healthcare data) are being used to train state-of-the-art AI systems in a privacy-preserving way.”
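
As a toy illustration of the data minimization Thaine mentions, the sketch below scrubs obvious personal identifiers from text before it reaches a training pipeline. Production PII detection, including Private AI’s, relies on ML-based entity recognition; the regexes here are illustrative assumptions only.

```python
# A minimal data-minimization sketch: replace obvious identifiers with
# placeholder tokens before the text is used downstream.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def minimize(text: str) -> str:
    """Replace detected identifiers with placeholder tokens."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(minimize("Contact Jane at jane.doe@example.com or 555-867-5309."))
# -> "Contact Jane at [EMAIL] or [PHONE]."
```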

Why iPaaS is a fundamental component of enterprise technology stacks. Commentary by André Lemos, Vice President of Products for iText.

“Integration Platform-as-a-Service (iPaaS) is rapidly becoming a fundamental component of enterprise technology stacks. And it makes total sense. IT organizations worldwide are dealing with an increasing number of software systems. Whether they are installed within the corporate network, in a cloud service provider’s infrastructure, or offered by a third-party SaaS provider, business groups want to use more software. And that creates a lot of fragmentation and complexity, especially when those systems need to be connected or data needs to be shared between them. Selecting an iPaaS platform has as much to do with its features as with its ecosystem. Without a healthy catalog of systems to choose from, the platform is practically useless. Remember that the goal of an iPaaS platform is to make connecting disparate systems easier and simpler. Before there was iPaaS, companies had to create their own middleware solutions, which took valuable engineering resources to both develop and maintain. With iPaaS, developers and IT resources can simply select systems to include in their workflow.”
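
For context, the sketch below shows the kind of hand-rolled middleware glue that iPaaS replaces: polling one system’s REST API and forwarding records to another. Both endpoints and the field names are hypothetical; an iPaaS platform would let teams configure this as a workflow rather than write and maintain the code themselves.

```python
# A toy point-to-point integration of the sort iPaaS platforms abstract away.
# The URLs, fields, and tokens are hypothetical.
import requests

SOURCE_URL = "https://crm.example.com/api/v1/contacts"       # hypothetical
TARGET_URL = "https://billing.example.com/api/v1/customers"  # hypothetical

def sync_contacts(source_token: str, target_token: str) -> int:
    """Copy recently updated contacts from the CRM into the billing system."""
    resp = requests.get(
        SOURCE_URL,
        headers={"Authorization": f"Bearer {source_token}"},
        params={"updated_since": "2021-09-01"},
        timeout=30,
    )
    resp.raise_for_status()
    synced = 0
    for contact in resp.json().get("contacts", []):
        payload = {"name": contact["name"], "email": contact["email"]}
        requests.post(
            TARGET_URL,
            json=payload,
            headers={"Authorization": f"Bearer {target_token}"},
            timeout=30,
        ).raise_for_status()
        synced += 1
    return synced
```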

The data scientist shortage and potential solutions. Commentary by Digital.ai’s CTO and GM of AI & VSM Platform, Gaurav Rewari.

“More than a decade ago, the McKinsey Global Institute called out an impending data scientist shortage of more than 140,000 in the US alone. Since then, the projections for a future shortfall have only become more dire. A recent survey from S&P Global Market Intelligence and Immuta indicates that 40% of the 500+ respondents working as data suppliers said they “lacked the staff or skills to handle their positions.” Further, while the Chief Data Officer role was gaining prominence, 40% of organizations did not have this position staffed. All of this comes against the backdrop of increasing business intelligence user requests from organizations desperate to use their own data as a competitive advantage. Addressing this acute shortage requires a multi-faceted approach, not least of which is broadening the skills of existing students and professionals to include data science capabilities through dedicated data science certificates and programs, as well as company-sponsored cross-training for adjacent talent pools such as BI analysts. On the product front, key capabilities that can help alleviate this skills shortage include: (i) greater self-service capabilities, so that business users with little-to-no programming expertise and knowledge of the underlying data structures can still ask questions using a low-code or no-code paradigm; and (ii) pre-packaged AI solutions that have all the data source integrations, pipelines, ML models and visualization capabilities prebuilt for specific domains (e.g., CRM, finance, IT/DevOps), so that business users can obtain best-practice insights and predictive capabilities in those domains. When successfully deployed, such capabilities can expand the reach of a company’s data scientists many times over.”

Unemployment Fraud Is Raging and Facial Recognition Isn’t The Answer. Commentary by Shaun Barry, Global Lead for Government and Healthcare at SAS.

“Since March 2020, approximately $800 billion in unemployment benefits has been distributed to over 12 million Americans, reflecting the impact of COVID-19 on the U.S. workforce. While unemployment benefits have increased, so have bad actors taking advantage of them: an estimated $89 billion to $400 billion has been lost to unemployment fraud. To combat fraudsters and “promote equitable access,” Congress passed the American Rescue Plan Act, which provides $2 billion to the U.S. Dept. of Labor. However, two technology approaches the government has been pursuing to combat UI fraud, facial recognition and data matching, introduce unintended consequences: inequities that unfairly limit access to unemployment benefits for minority and disadvantaged communities. For example, facial recognition has struggled to accurately identify individuals with darker skin tones, and most facial recognition requires the citizen to own a smartphone, which impacts certain socioeconomic groups more than others. Data matching and identity solutions rely on credit history-based questions such as type of car owned, previous permanent addresses, strength of credit, and existence of credit and banking history, all requirements that negatively impact communities of color, the young, the unbanked, immigrants, and others. There is a critical need to evaluate the value of a more holistic approach that draws on identity analytics from data sources that do not carry the same type of inherent equity and access bias. By leveraging sources with fewer inherent biases, such as digital devices, IP addresses, mobile phone numbers and email addresses, agencies can ethically combat unemployment fraud. Data-driven identity analytics is key to not only identifying and reducing fraud, but also reducing friction for citizens applying for legitimate UI benefits. The analytics happens on the backend, requiring only the data the user has already provided. Only when something suspicious is flagged would the system introduce obstacles, like having to call a phone number to verify additional information. By implementing a more holistic, data-driven approach, agencies can avoid the pitfalls of bias and inequity that penalize the communities who need UI benefits the most.”
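
As a simplified illustration of backend identity analytics, the sketch below scores a claim using low-bias digital signals and adds friction only when the score crosses a threshold. The signals, weights, and threshold are illustrative assumptions, not SAS’s actual model.

```python
# A toy risk-scoring sketch using digital-identity signals: only flagged
# claims get extra verification steps; legitimate claimants pass through.
from dataclasses import dataclass

@dataclass
class Claim:
    email_domain_age_days: int
    device_seen_on_other_claims: int
    ip_state_matches_claim_state: bool

def risk_score(claim: Claim) -> float:
    score = 0.0
    if claim.email_domain_age_days < 30:
        score += 0.4  # newly created email domains are a common fraud signal
    score += min(claim.device_seen_on_other_claims, 5) * 0.1  # device reuse
    if not claim.ip_state_matches_claim_state:
        score += 0.3  # geolocation mismatch
    return min(score, 1.0)

def needs_review(claim: Claim, threshold: float = 0.6) -> bool:
    """Introduce extra verification only when the backend score is high."""
    return risk_score(claim) >= threshold
```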

How boards can mitigate organizational AI risks. Commentary by Jeb Banner, CEO of Boardable.

“AI has proven to be beneficial for digital transformation efforts in the workplace. However, few understand the risks of implementing AI, including tactical errors, biases, compliance issues and security, to name a few. While public sentiment toward AI is positive, only 35% of companies intend to improve the governance of AI this year. The modern boardroom must understand AI, including its pros and cons. A key responsibility of the board of directors is to advise their organization on implementing AI responsibly while overcoming the challenges and risks associated with it. To do so, the board should deploy a task force dedicated to understanding AI and how to use the technology ethically. The task force can work in tandem with technology experts and conduct routine audits to ensure AI is being used properly throughout the organization.”

How a digital workforce can meet the real-time expectations of today’s consumer. Commentary by Bassam Salem, Founder & CEO at AtlasRTX.

“Consumer expectations have never been higher. We want digital. We want on-demand. We want it on our mobile phones. We want to control our own customer journey, and we expect that immediacy 24 hours a day because “business hours” no longer exist. Thanks to the likes of Amazon and Tesla, the best experiences are the ones with the least friction, the most automation, and minimal need for human intervention. Interactions that rely solely on a human-powered team cannot meet this new demand, so advanced AI technology must be implemented to augment and support staff. AI-powered digital assistants empower consumers to find answers on their terms, in their own language, at any time of day. These complex, intelligent chatbots do more than just answer simple questions; they connect with customers through social media, text message, and webchat, humanizing interactions through a mix of Machine Learning (ML) and Natural Language Processing (NLP). Today’s most advanced chatbots are measured by intelligence quotient (IQ) and emotional intelligence (EQ), continually “learning” from every conversation. As new generations emerge that are equally, if not more, comfortable interacting with machines, companies must support their human teams with AI-powered digital colleagues that serve as the frontline for delivering Real-Time Experiences (RTX), powered and managed by an RTX platform that acts as the central nervous system of the augmented digital workforce.”
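
To ground the ML and NLP piece of this, here is a minimal sketch of the intent classification at the core of such assistants: free-text messages are mapped to intents the bot can act on. The training phrases and intents are toy examples, not AtlasRTX’s models.

```python
# A toy intent classifier: TF-IDF features plus logistic regression map
# customer messages to intents a bot can route or answer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_phrases = [
    "what time are you open", "are you open on sunday",
    "where is my order", "track my delivery",
    "i want to talk to a person", "connect me with an agent",
]
intents = ["hours", "hours", "order_status", "order_status", "handoff", "handoff"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(training_phrases, intents)

print(classifier.predict(["when do you close today"])[0])     # likely "hours"
print(classifier.predict(["has my package shipped yet"])[0])  # likely "order_status"
```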

What sets container-attached and container-native storage apart. Commentary by Kirby Wadsworth of ionir.

“The advent of containers has revolutionized how developers create and deliver applications. The impact is huge; we’ve had to re-imagine how to store, protect, deliver and manage data. The container-attached (CAS) or container-ready approach is attractive because it uses existing traditional storage, promising to reuse existing storage investments, and it may make sense as an initial bridge to the container environment. What’s different about container-native storage (CNS) is that it is built for the Kubernetes environment. CNS is a software-defined storage solution that itself runs in containers on a Kubernetes cluster. Kubernetes spins up more storage capacity, connectivity services, and compute services as additional resources are required. It copies and distributes application instances. If anything breaks, Kubernetes restarts it somewhere else. As the orchestration layer, Kubernetes generally keeps things running smoothly. CNS is built to be orchestrated, but container-ready or container-attached storage isn’t easily orchestrated. Organizations have many storage options today, and they need more storage than ever. With containers added to the mix, the decisions can become harder to make. Which approach will best serve your use case? You need to understand the difference between container-attached and container-native storage to answer this question. Carefully consider your needs and your management capabilities, and choose wisely.”
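
As a small illustration of how container-native storage is consumed, the sketch below uses the Kubernetes Python client to request a volume through a PersistentVolumeClaim backed by a CNS StorageClass, letting Kubernetes orchestrate the provisioning. The StorageClass name "cns-block" is a hypothetical placeholder.

```python
# A minimal sketch of requesting storage from a CNS-backed StorageClass via
# a PersistentVolumeClaim; the CNS layer, running on the cluster, provisions
# and manages the volume. The class name "cns-block" is hypothetical.
from kubernetes import client, config  # pip install kubernetes

def request_volume(namespace: str = "default") -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="app-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="cns-block",  # hypothetical CNS-backed class
            resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace, pvc)
```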

Data Quality Issues Organizations Face. Commentary by Kirk Haslbeck, Vice President of Data Quality at Collibra.

“Every company is becoming a data-driven business – collecting more data than ever before, establishing data offices, and investing in machine learning. But there are also more data errors than ever before, including duplicate data, inaccurate or inconsistent data, and data downtime. Many people start machine learning or data science projects but end up spending the bulk of their time (studies suggest around 80%) trying to find and clean data, rather than engaging in productive data science activities. Data quality and governance have traditionally been seen as a necessity rather than a strategic pursuit. But a healthy data governance and data quality program equates to more trust in data, greater innovation, and better business outcomes.”
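
As a concrete illustration of the basic checks Haslbeck describes, here is a minimal sketch of data quality profiling with pandas: counting duplicates, missing values, and out-of-range records. It is a toy stand-in, not Collibra’s data quality product.

```python
# A toy data-quality report over a DataFrame: duplicates, nulls, and
# invalid values. Column names are illustrative.
import pandas as pd

def quality_report(df: pd.DataFrame) -> dict:
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "null_counts": df.isna().sum().to_dict(),
        "negative_amounts": int((df["amount"] < 0).sum()) if "amount" in df else 0,
    }

orders = pd.DataFrame({
    "order_id": [1, 2, 2, 4],
    "amount": [19.99, None, None, -5.00],  # missing and invalid values
})
print(quality_report(orders))
# {'rows': 4, 'duplicate_rows': 1, 'null_counts': {'order_id': 0, 'amount': 2}, 'negative_amounts': 1}
```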
