Heard on the Street – 2/24/2022


Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

Microsoft/Mandiant buyout rumors. Commentary by Jeannie Warner, Director of Product Marketing, Exabeam

Recent reports indicate that Microsoft is interested in purchasing cybersecurity giant Mandiant. While the rumors aren’t confirmed, this is in no way surprising. Despite the fact that Microsoft already dominates the software industry, it is steadily working to gain momentum in the global services space. The motivation is likely to fill current portfolio gaps and make a managed services play in the market. Mandiant would specifically bolster Microsoft’s incident response bench strength and managed detection and response (MDR) offerings at a time when heightened fears of nation-state attacks are widespread and security teams are increasingly short staffed. It would also add world-class incident response and threat intelligence personnel to Microsoft’s team, increasing its security clout. We’ve seen a number of similar mergers and acquisitions in which a massive company absorbs an indirect competitor to appeal to a broader range of prospects and take on the smaller organization’s customer base for instant growth. Less than a year ago, CrowdStrike purchased cloud log management and observability startup Humio for $352 million in cash and $40 million in stock and options to help its customers address non-security use cases and strengthen its detection and response capabilities. Accenture also purchased Broadcom’s Symantec Cyber Security Services business in 2020 – to name just two examples of many. And with market swings delaying or completely halting IPOs in the tech industry to kick off the year, buyouts and private investments may become security companies’ go-to exit strategies in the coming months. I imagine many product teams will be placing even bigger bets than normal on who their firms will be purchased by! The industry will be watching closely.

Federal Ransomware Advisory. Commentary by Brian Spanswick, Cohesity CISO and Head of IT

Ransomware continues to plague organizations globally and is becoming an increasingly sophisticated challenge. As the recent advisory warns, malicious actors are focusing their attacks on the cloud, data backups, and MSPs, looking to disrupt supply chain software or industrial processes. To help thwart these attacks and minimize their impact, organizations need to enhance their security postures by embracing next-gen data management capabilities that enable them to utilize immutable backup snapshots, detect potential anomalies via AI/ML that could signal an attack in progress, and address mass data fragmentation challenges that can also reduce data proliferation. Organizations are also encouraged to embrace data management platforms that enable customers to adopt a 3-2-1 rule for data backups, ensure data is encrypted both in transit and at rest, enable multi-factor authentication, and employ zero trust principles. Next-gen data management not only goes beyond zero trust security principles; it can also enable simplicity at scale, AI-powered monitoring and alerts, and third-party app extensibility, all of which can play a role in advancing overall security protocols and helping organizations restore operations promptly in the event of a successful attack.
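
As a concrete illustration of the 3-2-1 backup rule mentioned above (at least three copies of the data, on two different media types, with at least one copy offsite), here is a minimal Python sketch that checks a hypothetical inventory of backup copies against that rule. The record fields and helper function are illustrative assumptions, not any vendor’s API.

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    name: str
    media: str       # e.g. "disk", "tape", "object-storage"
    offsite: bool    # stored outside the primary site?
    immutable: bool  # write-once / locked snapshot?

def satisfies_3_2_1(copies) -> bool:
    """Return True if the copies meet the 3-2-1 rule:
    at least 3 copies, on at least 2 media types, with 1 copy offsite."""
    enough_copies = len(copies) >= 3
    enough_media = len({c.media for c in copies}) >= 2
    has_offsite = any(c.offsite for c in copies)
    return enough_copies and enough_media and has_offsite

# Hypothetical backup inventory for a single dataset.
inventory = [
    BackupCopy("primary-snapshot", "disk", offsite=False, immutable=True),
    BackupCopy("replica-snapshot", "object-storage", offsite=True, immutable=True),
    BackupCopy("archive", "tape", offsite=True, immutable=False),
]

print("3-2-1 compliant:", satisfies_3_2_1(inventory))
```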

The importance of quality data when it comes to powering AI. Commentary by Siddhartha Bal, Director of Autonomous Mobility at iMerit

As businesses look to level up with the new year underway, many are revamping their day-to-day operations by incorporating AI technologies, whether to get rid of product defects, better understand when and why customers opt for a competitor, or eliminate manual labor. However, in order to train machine-learning models to accurately and efficiently complete the tasks they’re assigned, companies need to prioritize quality data. There’s a common misconception that the algorithm is the most valuable piece of the machine-learning puzzle, but the differentiator is actually precise data, which results in fewer failures and, thus, greater progress during the testing phase and beyond. During the testing phase, companies should assess and identify edge cases as early as possible in order to eliminate errors and ensure data is top-notch. Companies should also keep experts in the loop who can logically contextualize data, strengthening AI systems to be more human-like and, therefore, more accurate.
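
One common way to put the “experts in the loop” idea into practice is to route a model’s low-confidence predictions to human reviewers, since those samples are often the edge cases worth inspecting early. The sketch below, using scikit-learn on a toy dataset, is an illustrative assumption about what such a triage step might look like, not a description of iMerit’s workflow.

```python
# Flag low-confidence predictions for expert review (human-in-the-loop triage).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)
proba = model.predict_proba(X_test)

# Samples the model is unsure about (low top-class probability) are likely
# edge cases; these get sent to domain experts for labeling or correction.
THRESHOLD = 0.8
needs_review = [i for i, p in enumerate(proba) if p.max() < THRESHOLD]
print(f"{len(needs_review)} of {len(X_test)} test samples flagged for expert review")
```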

Closing the data literacy gap that exists today within organizations. Commentary by Ajay Khanna, CEO of Tellius

COVID has caused unexpected shifts in consumer purchasing patterns as well as a dramatic slowdown in supply chains. As a result, decision-making strategies across industries have been unable to rely on historical management experience and intuition, highlighting the critical need for data to make sense of quickly changing business environments. Gaps in data literacy and a dependence on antiquated analytics processes have slowed the delivery of timely insights to decision makers, putting companies at a strategic disadvantage through missed opportunities and increased business risk. To solve this, data-driven organizations need to empower everyone within the organization – even non-technical workers and decision makers – to access and leverage robust analytical tools. Enabling business users of all skill levels to participate in data insights discovery alongside data analysts generates better and faster insights while also relieving the dependency on data experts as the sole providers of advanced insights. AI-driven decision intelligence combines the powers of machine learning and automation to upskill the analytics abilities of multiple personas: business users get an easier way to explore data and answer ad hoc questions with natural language search, analysts can diagnose changes in metrics without manual analysis, and citizen data scientists can create models that are easily explained and transparent to the business. Decision intelligence offers a way for enterprises large and small to tackle challenges in data literacy and leapfrog their analytics maturity.

Future of AI & Touchless Interaction, Authentication. Commentary by Blaine Frederick, VP of Product at Alcatraz AI

Artificial Intelligence (AI), Machine Learning (ML), and Deep Neural Networks (DNN) are terms that are being discussed more and more frequently these days. They are often used interchangeably to describe software and algorithms that are capable of making real-time decisions based on several different inputs after being “trained” on reference data. This technology is applied to products and solutions ranging from automated chat programs for customer service to self-driving cars to Face Authentication for Physical Access Control. Traditionally, Face Authentication, like many biometric technologies, was based on calculating the probability that the data representing a person’s identity in real time mirrors the data collected during enrollment (the reference sample). These calculations employed techniques such as Hamming distance to determine how many bits in a binary string are different. A drawback to this approach is that the reference sample is not continuously updated. Human beings change, however, which affects the probability calculation, so the accuracy of these systems degrades over time. Fundamental to AI, ML, and DNNs is the ability to update the reference data in real time. When applied to biometrics, this allows the reference data (a person’s face profile, for example) to be modified as a person’s appearance changes, whether from losing or gaining weight, growing or shaving a beard, or simply aging. Updating the reference data has the positive effect of making the systems easier to use over a long period of time and allows them to maintain their accuracy. The outcome is that biometric systems, which in some cases have not enjoyed mass deployments due to poor user experience, are now being deployed, having overcome one of the larger barriers to large-scale adoption.
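
For readers unfamiliar with the Hamming-distance matching mentioned above, the following minimal Python sketch compares two binary biometric templates and accepts the match if the fraction of differing bits falls below a threshold. The templates and threshold are made-up illustrations of the general technique, not Alcatraz AI’s implementation.

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two equal-length binary strings differ."""
    if len(a) != len(b):
        raise ValueError("templates must be the same length")
    return sum(bit_a != bit_b for bit_a, bit_b in zip(a, b))

def is_match(probe: str, reference: str, max_fraction_different: float = 0.25) -> bool:
    """Accept the identity claim if few enough bits differ from the enrolled template."""
    return hamming_distance(probe, reference) / len(reference) <= max_fraction_different

# Toy 16-bit templates; real biometric templates are far longer.
enrolled = "1011001110001101"   # captured at enrollment
live     = "1011001010001111"   # captured at the door; 2 bits differ

print(is_match(live, enrolled))  # True: 2/16 = 12.5% of bits differ
```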

How Companies Can Utilize Their Data and AI To Improve Customer Experiences. Commentary by Richard Stevenson, CEO of Red Box

While the tremendous improvements in AI in recent years have provided enterprises with data-driven tools and analytics to help meet their goals, businesses still face challenges collecting valuable real-time data and using these AI tools to address growing consumer demands and the evolving customer experience. To optimize ROI from investments in CX and address the growing need for real-time data, businesses must be creative with the information they collect and how they utilize it, especially when it comes to voice data. Even with customer interactions as commonplace as a phone call, the actionable insights contained within this voice data are frequently overlooked in the collection and AI application process, yet they hold untapped potential for the quality management, personalization, and other customer experience factors that are fundamental to business success today. For businesses to start using their voice data to shape their customer experiences, they must prioritize data sovereignty and take complete control of their data; having access to such valuable information without roadblocks sets a solid foundation for it to be collected and pulled at any time. Once open access to data is established, they are free to make data-driven choices and inform their AI tools to create meaningful change in their customer interactions in real time. Paired with the proper voice-analytics tools, the impact AI has on improving customer experiences will be noticeable not only to the enterprise implementing these strategies but also to the customers they serve and their sentiment toward their interactions with the business.

What data science will look like in 2022. Commentary by Wei Wang, Chair of ACM Special Interest Group on Knowledge Discovery and Data Mining (ACM SIGKDD) and a professor of computer science at UCLA

Data has become a critical resource that nourishes the next generation of technologies in many fields. In fact, it has already enabled ground-breaking advances in AI and machine learning models that have transformed many fields, such as natural language processing, robotics, autonomous driving, drug design/repurposing, and epidemiology. We will continue to witness an upward trend of other disciplines adopting data science approaches in the future. The focus of data science research has expanded from improving accuracy and scalability (in the early days) to tackling challenges in data heterogeneity, scarcity, interpretability, fairness, and trustworthiness. There is also increasing interest in building novel ecosystems that enable convergence among domain scientists and data scientists. Trustworthiness has become, and will continue to be, an active research area. Trust is contextual and multidimensional, and it requires going beyond optimizing specific technical attributes. Several emerging topics have attracted increasing attention, including (but not limited to) trust modeling, trustworthy infrastructure, and trust dynamics. It’s important to note that trustworthiness is also one of the priority areas the National Science Foundation wants to support in its latest call for AI Institutes. We anticipate major breakthroughs in the next few years.

Data Mesh & Data Fabric: Why the Combination Could Deliver a Quantum Leap in Distributed Data Analytics. Commentary by Lewis Carr, Senior Director of Marketing at Actian

Just when you thought you’d figured out Data Warehouse versus Data Lake, or their market-spun complementary union, the Lakehouse, two new terms pop up: Data Fabric and Data Mesh. At a high level, they’re conceptual and not specific products or platforms: a Data Fabric is about the distribution of data across disparate processes and underlying silos, getting the data to the right consumers, while a Data Mesh is about processing data closest to the community of interest and point of action. Both are very powerful, and in some ways you can expect all Data Hub, Data Warehouse, Data Lake, and visualization and reporting tool vendors to start associating their market strategies and products with these two terms. However, vendors of these technologies have historically been tethered to a concept that is antithetical to both the Data Fabric and the Data Mesh: a single source of truth in a centralized location. That made sense when all processes were manual or, later, tied to structured and static ERP systems. But in the age of IoT, 5G, and multi-cloud, with their heterogeneous environments and latency and sovereignty issues, data, AI, and analytics will be far more distributed. As more tech-savvy lines of business and IT push vendors to buy in to Data Fabric and Data Mesh to support their distributed environments, vendors will have to get off the ‘one ring to rule them all’ bandwagon. Doing so will open a world of innovative possibilities for all.

How Decision Intelligence is democratizing big data analytics for better business decisions. Commentary by Omri Kohl, Co-founder and CEO at Pyramid Analytics  

Traditional BI is broken. Despite the accessibility of modern data analytics and AI, in most organizations business users—from executives to team leaders—remain highly dependent on technical teams for their data. They rely on a wide range of systems, integrations, workarounds, and ‘key persons’ to access the data they need to inform critical decisions. This common scenario introduces unnecessary confusion and is very time-consuming. Decision intelligence is emerging as a strategic technology that can democratize business analytics, turning fragmented decision-making into a frictionless, data-driven decision process that spans the entire organization, from the C-suite to analysts to front-line workers. With an integrated decision intelligence approach, organizations can achieve broad business user adoption and support personalized decision-making.

AI will be the defining factor of advertising after cookies. Commentary by Melinda Han Williams, Chief Data Scientist for Dstillery

We’re heading toward a time when the challenge for advertisers won’t be having too much data; it’ll be having too little. And what’s interesting is that both scenarios are leading companies to use artificial intelligence — either to distill the massive amount of user-targeting data that exists today or to draw conclusions about audiences based on a proportionately much smaller first-party data set. Either way, it can’t be done effectively without AI. Cookie retirement will greatly reduce the available data, and AI will become the single most important piece of technology moving forward to help build the audiences that brands need to reach. Even those collecting massive amounts of first-party data today can only take their business so far by marketing back to their existing customers.

Customers are willing to share their data; but they expect more for it. Commentary by Cindi Howson, chief data strategy officer, ThoughtSpot and host of The Data Chief podcast

Recent industry research shows that 70% of business executives increased collection of consumer personal data over the last year, despite growing consumer concern over how their data is being used. Consumers are willing to provide personal data to a business, so long as it’s used to deliver highly personalized services and an exceptional experience – a better fan experience at a sporting event or tailored clothing styles, for example. Protecting this personal data from breaches is a given. Consumers are creating more data with their digital interactions. That data is often sold to third parties, creating enormous value for the business capturing it. For example, in the early days of the pandemic, United Airlines’ customer data was worth more than the company’s market cap. Likewise, as Apple removed the ability for apps to track user behavior, Facebook ad spending plummeted, with only 18% of Apple users enabling such detailed tracking. And yet, when businesses respect consumers’ privacy and use data for the sole purpose of improving their service, consumers are willing to provide more. Such is the case with the NBA’s Orlando Magic, which is improving fan experiences at basketball games and providing specialized pricing for families traveling from abroad. Daily Harvest is another example of using data to create personalized meals for customers. That a consumer loves chocolate may not be particularly sensitive, but healthcare and DNA details certainly are. Surprisingly, an estimated 80% of subscribers to 23andMe opt to share their data with researchers, showing that consumers do want to contribute their data to cure diseases, but don’t want to be spammed by marketers. Consumers are looking far beyond the basic requests for their data, and the bar has been raised for bespoke experiences; it’s on brands and businesses to be proactive in ensuring trust and to be more creative in how they deliver on expectations for how they use that data and return value to the consumer.

When hiring an in-house AI team, manage expectations and consider your needs. Commentary by Ksenia Palke, Director of AI at Airspace

So, you think your company needs an AI team. What’s next? The first step is to evaluate whether AI has a place in your company. Many organizations that hire a data scientist or an entire AI team expect massive, fast, and magical gains. Even though most people realize that these expectations are naive, some business leaders and even venture capitalists are still attracted to the notion of miraculously making everything better with AI. It is a tempting idea that is often impractical and idealistic. When deciding to start using AI at your company, consider how much real value AI can bring and the costs of implementing and maintaining it. To do this, consult with an AI expert to help answer the following questions: What are you specifically trying to solve with AI? What data do you have available right now, and what data should you start collecting? And what level of expertise and bandwidth do your current employees have? Some straightforward and small-scale AI systems are easy to build with automated ML tools, provided the problem statement is clear and you have relevant, abundant, and clean data. Off-the-shelf AI systems can help generate the momentum needed to prove that AI can bring value to the company and convince stakeholders. You will still likely need a person who is well-versed in machine learning and data, but they do not have to be an AI expert, and you definitely do not need an entire team to start. For a larger and more complex AI system, you will need to grow your team. A common trap is to keep hiring data scientists. But at the growing stage, you need to invest in data engineers and machine learning engineers. If you hire data scientists without adequate engineering support, you will be left with many proofs of concept that never become products.
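
Since no specific automated ML tool is named above, here is a minimal stand-in sketch of what a “hands-off” first model might look like using scikit-learn’s built-in hyperparameter search on a toy dataset. Dedicated AutoML products go much further, but this illustrates how little hand-tuning a small, well-scoped problem with clean data may need.

```python
# A small "hands-off" baseline: let grid search pick the model settings.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 5, 10]},
    cv=5,
)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("test accuracy:", round(search.score(X_test, y_test), 3))
```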

The convergence of AIOps and observability and its benefits. Commentary by Spiros Xanthos, General Manager of Observability, Splunk

In today’s hyper-digital world, IT and DevOps teams are feeling pressure from all sides of the business to innovate faster, keep services reliable, and deliver exceptional customer experiences. With the stakes of digital experiences higher than ever, the historically distinct practices of AIOps and observability have converged. Modern observability tools can capture structured, full-fidelity data at very large volumes, giving site reliability engineers (SREs) and developers a complete understanding of their infrastructure, applications, and users so they can mitigate system failures, especially within complex systems. Large volumes of structured and unstructured data enable true AIOps in ways that were not possible in the past, when data was siloed and sampled heavily. DevOps teams now have the power to automate the path from development to production. AIOps trains on data: identified problems turn into long-term solutions, past behaviors inform improved workflows, and faults and failures fuel training algorithms. The unification of these practices not only increases the collaboration and speed of SREs and developers but also allows businesses to create an environment that is continuously observing, learning, and improving itself to prevent devastating downtimes.
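
As a toy illustration of the kind of anomaly detection an AIOps pipeline might run over observability metrics, the sketch below flags latency samples whose rolling z-score is unusually high. The data, window size, and threshold are assumptions for illustration only, not a description of how Splunk’s tooling works.

```python
# Flag anomalous points in a service-latency series using a rolling z-score.
import statistics

latencies_ms = [120, 118, 125, 122, 119, 121, 430, 124, 120, 117, 123, 119, 455, 121]
WINDOW = 5       # how many previous samples form the baseline
THRESHOLD = 3.0  # how many standard deviations counts as anomalous

for i in range(WINDOW, len(latencies_ms)):
    baseline = latencies_ms[i - WINDOW:i]
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # avoid division by zero
    z = (latencies_ms[i] - mean) / stdev
    if abs(z) > THRESHOLD:
        print(f"sample {i}: {latencies_ms[i]} ms looks anomalous (z = {z:.1f})")
```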

Not All Data was Created Equally: Distinguishing between ‘Deep’ and ‘Shallow’ Data. Commentary by Assaf Eisenstein, Co-Founder and President, Lusha

The digital age has brought with it a growing reliance on data, and businesses across all industries have become increasingly dependent on it to drive unprecedented productivity. With the rise of data’s popularity, regulators and lawmakers have grown concerned with data privacy and the protection of user information. Worried about potential exploitability, they introduced the GDPR and CCPA standards in the EU and the US respectively, which imposed blanket regulatory guidelines intended to protect data on all levels. While the effort to protect the privacy of users is well intentioned, the blanket coverage data receives has significantly affected business. Limiting businesses’ access to data impacts their efficiency, accuracy, and the service they are able to give their customers. A major issue with the current privacy regulations is that they paint all data with the same brush and don’t differentiate protection based on risk or exploitability. Much of the ‘shallow’, publicly available data businesses use to thrive, such as job titles and work emails, is often subject to the same privacy standards as sensitive medical or financial information. In order for these regulations to be effective without hindering business, regulators need to aim for a balance between users’ rights to data privacy and businesses’ need for data to improve efficiency, accuracy, and service. By reassessing the way data is currently handled and differentiating between publicly available data and data that is private, sensitive, or potentially exploitable, organizations can move away from blanket protection toward a win-win privacy paradigm that benefits consumers and businesses alike.

The Importance of Celebrating AI Milestones. Commentary by Kurt Muehmel, Chief Customer Officer at Dataiku

Like research and development, data and AI are essential fuel for a company’s long-term ability to innovate. Yet many companies become disillusioned with slow-seeming AI even while they remain patient for R&D dividends. To maintain momentum along the AI journey, companies can benefit from celebrating more successes along the way. There are three phases of AI development where companies can celebrate significant milestones: understanding, acting, and culture. By using data to better understand how the business operates, companies can inform smarter decisions through AI. Companies that act wisely by automating decisions based on AI will move the business forward in a more efficient way. Finally, it is important to build a culture that provides employees with AI tools to inspire new questions and solutions. This will ensure companies are building data and AI literacy with a team approach in mind. In each of these phases, it’s important to celebrate not only the “moonshot” successes, but also the “mundane” improvements. A simple AI model that is well integrated with business processes and leads to improved outcomes is a milestone worth celebrating, as it marks an important step toward making AI ubiquitous in business processes.

Sign up for the free insideBIGDATA newsletter.

Join us on Twitter: @InsideBigData1 – https://twitter.com/InsideBigData1
