Heard on the Street – 11/14/2022


Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

Striking it Rich with the Untapped Data in Contracts. Commentary by Ajay Agrawal, CEO and co-founder of SirionLabs 

Data is the new oil. It’s the energy that powers the information economy. Contracts are the basis of all commercial relationships and contain a treasure trove of unused data. The future is about unlocking that data and leveraging it to make better business decisions. We’re just now seeing companies awaken to the possibility that parsing these fuzzy objects offers a key to huge value. Just like retailers have monetized data at the point of sale, the next wave will see the monetization of contract data across millions of commercial contracts. Monitoring the actual performance of obligations, service levels and other commitments against what is promised in the contract, and making that information accessible, drives business value for enterprises and helps them realize hard-dollar savings.
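
To make the idea concrete, here is a minimal, hypothetical sketch of tracking delivered performance against contractual commitments; the obligation names, fields, and figures are illustrative assumptions, not SirionLabs’ product.

```python
# Hypothetical sketch: compare delivered service levels against contractual commitments.
# Obligation names, fields, and figures are illustrative only.
from dataclasses import dataclass


@dataclass
class Obligation:
    name: str
    promised: float   # committed level, e.g. 99.9 (% uptime)
    delivered: float  # observed performance for the period


def review(obligations):
    """Return and report obligations where delivered performance falls short of the contract."""
    shortfalls = [o for o in obligations if o.delivered < o.promised]
    for o in shortfalls:
        print(f"{o.name}: promised {o.promised}, delivered {o.delivered} "
              f"(gap {o.promised - o.delivered:.2f})")
    return shortfalls


review([
    Obligation("Uptime (%)", 99.9, 99.4),
    Obligation("Tickets resolved within 4h (%)", 95.0, 97.1),
])
```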

Low-code/No-code. Commentary by Vignesh Ravikumar, Partner, Sierra Ventures

Low-code/no-code apps are fueling the next digital transformation by swiftly democratizing the ability to create new software applications. Sierra recently convened our 17th annual CXO Summit, which brings together the Fortune 1000 C-Suite. This year, enterprise solutions around low-code/no-code applications were a pressing topic. One of the most interesting aspects we heard was that adoption was driven by CIOs’ desire to drive agility. Low/no-code accomplishes this by either pushing tasks out to various departments or driving efficiency within IT organizations. Furthermore, data silos are now being unlocked as a result of the shift to the cloud and modern data stacks, allowing departments to directly configure and personalize the applications and environments they use. Modern CIOs must ensure visibility and compliance while maintaining velocity across IT functions. This next generation of low/no-code apps is enabling that movement. As we look to the future, we believe there is potential for organizations to adopt more tools as opposed to consolidating. In order to do this, software needs to be built to play well with others. Entrepreneurs have been told for years to dig moats around their products, and now we are seeing the market demand that bridges be built as well.

Combating Health Inequity With Patient Data. Commentary by Chelsea King Arthur, Get Well’s Vice President of Population and Digital Health

Data can tell us a great deal about how an individual is accessing the health care system and what that experience is like for them. The problem is not the data itself, but the elements we are extracting. We have to ensure we are eliciting patient feedback in safe and non-retaliatory ways, and that patients can see and feel how their personal experiences are driving changes across the delivery system. The data that’s collected then needs to be shared so that providers know how best to interact with patients.

The rise of AIOps is raising the bar for cyber asset management. Commentary by Keith Neilson, Technical Evangelist at CloudSphere

AIOps is on the rise as companies that operate large IT estates increasingly turn to automation to help with alert management and auto-resolution of issues to maximize operational reliability and uptime. However, for any firm introducing or developing AIOps, success hinges on prioritized management of cyber assets, because the supporting algorithms require optimal visibility and control of cyber assets and data. IT assets and data must be vetted to improve business context and usability before installing AIOps. AIOps has evolved from traditional application performance monitoring (APM) of network traffic, data flows, and IT applications. The observability and control AIOps provides go beyond a limited set of applications and into an ecosystem of interconnected and interdependent systems. In fact, AIOps capabilities can now optimize beyond apps and data to support many system types, generating complex, predictive insights. As a result, simple network traffic analysis has grown to include advanced anomaly detection, root cause analysis, and predictive steps for auto-resolution. Agent-based discovery may be appropriate for limited application monitoring, but it rapidly becomes overwhelming when AIOps algorithms reveal additional operational relationships. AIOps requires agentless discovery techniques to track assets and dependencies across a growing number of systems and datasets. Cyber asset management must grow as performance monitoring becomes a holistic and predictive AIOps approach; it is what matures asset visibility, interoperability, and management so AIOps systems can draw maximum agility and business context from an organization’s cyber assets. Larger and more complex IT estates will drive AIOps adoption, and enterprise leaders should recognize the significance of cyber asset management for AIOps success. Leaders must evolve cyber asset management techniques to give AIOps a solid basis of well-defined, well-governed IT assets.
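
As one illustration of the anomaly detection such pipelines run, here is a minimal sketch using a rolling z-score over a metric stream; the window, threshold, and synthetic data are assumptions, not any vendor’s implementation.

```python
# Minimal sketch: flag values in a metric stream (e.g. request latency) that deviate
# strongly from the recent baseline. Window and threshold values are illustrative.
from collections import deque
import statistics


def detect_anomalies(samples, window=30, threshold=3.0):
    """Yield (index, value) pairs that deviate strongly from the rolling baseline."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) >= window:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history) or 1e-9
            if abs(value - mean) / stdev > threshold:
                yield i, value
        history.append(value)


latencies = [100 + (i % 5) for i in range(200)]
latencies[150] = 400  # injected spike
print(list(detect_anomalies(latencies)))
```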

Uncovering Untapped Opportunities with Digital Communication Analytics. Commentary by Kon Leong, CEO of ZL Technologies

If data is the new oil, then unstructured data is the newest oil field, comprising email, files, chats, social media, voice, and video. There is one commonality amongst all of these data sources: they are human, created by humans for humans. It then follows that if we gather, manage, and analyze this data, we would for the first time enable a system for organizational memory, governance and intelligence. As you might expect, human data has a high impact on the organization. It is the repository of all human activity and collaboration, providing a complete picture of who said what, to whom, when, and with what intent. Its significance is validated by the fact that, whenever there is an event or scandal of note, the immediate focus of regulators and litigators is on unstructured data. For the last eight decades, we’ve ignored unstructured data, largely because we didn’t have the technology to harness it. In the last decade or two, however, events converged to make management of unstructured data possible and necessary. First, the required technologies emerged, such as large-scale full-text indexing, textual analytics, and natural language processing. Second, new regulations such as GDPR and CCPA, along with governance practices in litigation and compliance, now require vigorous control of unstructured data. If you believe that it is not systems but humans that run the organization, the next realization is that management of unstructured data opens up the next generation of IT.
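
For a flavor of the full-text indexing mentioned above, here is a toy inverted index over messages; the sample documents and field names are hypothetical, and real systems layer on NLP, analytics, and access controls.

```python
# Toy sketch: build an inverted index over human-generated messages and query it.
# Sample documents are fabricated for illustration.
from collections import defaultdict
import re


def build_index(documents):
    """Map each term to the set of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in re.findall(r"[a-z0-9']+", text.lower()):
            index[term].add(doc_id)
    return index


def search(index, *terms):
    """Return document ids containing all query terms (who said what, where)."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()


docs = {
    "email-001": "Please confirm the Q3 vendor payment before Friday.",
    "chat-042": "The vendor payment was approved by finance.",
}
idx = build_index(docs)
print(search(idx, "vendor", "payment"))
```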

Unsecured Databases. Commentary by Amit Shaked, CEO, Laminar

Unsecured Elasticsearch databases are extremely common and can affect nearly any company, leading to important information being exposed to potential compromise. Because these cloud-hosted databases often fall outside data and security teams’ visibility, incidents like these serve as a reminder for business leaders to ask: where is our sensitive data? As companies transition to primarily cloud-based environments, data stores become scattered, instantly increasing organizational security risk. Many companies do not know where their sensitive data is located within the cloud. The presence of unknown or ‘shadow’ data, like these exposed databases, is increasing and is a top concern for 82% of data security professionals. To safeguard against the majority of today’s cyberthreats and accidental exposures, organizations must have complete observability of their cloud data. It is critical to know where it resides, who is accessing it and what its security posture is.
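
As a minimal illustration of checking for this kind of exposure, the sketch below probes an Elasticsearch endpoint for unauthenticated access via the _cat/indices API; the host is a placeholder, and this should only ever be run against systems you are authorized to test.

```python
# Minimal sketch: check whether an Elasticsearch endpoint you own answers
# unauthenticated requests. The host below is a placeholder; probe only systems
# you are authorized to test. Real data-discovery tooling goes much further.
import requests


def check_open_elasticsearch(base_url, timeout=5):
    """Return index names if the cluster responds without credentials, else None."""
    try:
        resp = requests.get(f"{base_url}/_cat/indices?format=json", timeout=timeout)
    except requests.RequestException:
        return None
    if resp.status_code == 200:
        return [row.get("index") for row in resp.json()]
    return None  # 401/403 (auth required) or other non-open responses


indices = check_open_elasticsearch("http://elasticsearch.internal.example:9200")
if indices is not None:
    print("Unauthenticated access is possible; indices visible:", indices)
```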

The data issue behind election security. Commentary by Keith Barnes, Head of Public Sector at Tamr 

Governments can reduce threats to election infrastructure through data mastering. Next-generation data mastering that uses machine learning to provide clean, curated data helps detect fraud in the election process and increases the integrity of ballots. Government agencies capture data about individuals and organizations, e.g., dates of birth, school records, and tax records. With next-generation data mastering, the government can use machine learning to consolidate, clean, and categorize data across agencies. By doing so, states would be able to cross-reference voter registration rolls with other agencies, creating a cleaner golden record for the voter roll. The key is for government agencies to share data while still protecting it, then use that data to help combat threats against the integrity of the electoral system. The hard part is getting government agencies to legally share the critical data needed to create the checks and balances that will give everyone confidence in safeguarding elections.
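
A minimal sketch of the record matching at the heart of data mastering appears below; the names, fields, and similarity threshold are hypothetical, and production pipelines use trained machine-learning matchers rather than simple string similarity.

```python
# Hypothetical sketch: link a voter-roll entry to records held by another agency.
# Real data-mastering pipelines use trained ML matchers, not simple string similarity.
from difflib import SequenceMatcher


def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def match_records(voter, agency_records, threshold=0.85):
    """Return agency records that likely describe the same person as the voter entry."""
    matches = []
    for rec in agency_records:
        if voter["dob"] == rec["dob"] and similarity(voter["name"], rec["name"]) >= threshold:
            matches.append(rec)
    return matches


voter = {"name": "Jonathan Smith", "dob": "1980-04-12"}
agency_records = [
    {"name": "Jonathon Smith", "dob": "1980-04-12", "source": "tax"},    # likely the same person
    {"name": "Joan Smyth", "dob": "1979-01-02", "source": "education"},  # different person
]
print(match_records(voter, agency_records))
```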

How to Protect Big Data from Ransomware and Data Corruption. Commentary by Anthony Cusimano, Director of Technical Marketing, Object First

October was Cybersecurity Awareness Month, and data scientists should have paid attention. Ransomware gangs have been trying out a new tactic called data corruption, in which the threat actor corrupts data after exfiltrating it to their servers and then asks for money to return it to the victim. It is particularly effective against companies that lack a solid plan to recover their data or that rely solely on the cloud to store it. Despite the recent increase in investments in cybersecurity and data backup, ransomware groups will continue developing new methods to avoid detection, compromise sensitive company data, and make money. Companies must implement immutability into their data backup plans if they want to guarantee data protection. This means ensuring that the information in your backup storage cannot be deleted, modified, or corrupted, making your data ransomware-proof.
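
One way to get that immutability, sketched below under the assumption of AWS S3 with Object Lock enabled on the bucket, is to write backups with a compliance-mode retention period; the bucket and key names are placeholders.

```python
# Hypothetical sketch: write an immutable backup copy with S3 Object Lock.
# Assumes the bucket was created with Object Lock enabled; names are placeholders.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")


def write_immutable_backup(bucket, key, data, retain_days=30):
    """Store an object that cannot be modified or deleted until the retention date passes."""
    retain_until = datetime.now(timezone.utc) + timedelta(days=retain_days)
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=data,
        ObjectLockMode="COMPLIANCE",              # retention cannot be shortened or removed
        ObjectLockRetainUntilDate=retain_until,
    )


write_immutable_backup("example-backup-bucket", "db/2022-11-14.dump", b"backup bytes")
```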

Advancements in Cyber Attacks: How Can Organizations Respond? Commentary by Rick Vanover, Senior Director of Product Strategy, Veeam

According to the 2022 Ransomware Trends Report, 76 percent of organizations experienced at least one cyberattack in 2021, and attackers show no signs of stopping. Cyberattacks are evolving, and companies are becoming more concerned about their data protection infrastructure. An increasing number of threat actors are leveraging multiple techniques to eliminate companies’ ability to plan and communicate, which can ultimately produce a more lethal attack. To achieve this, threat actors are utilizing three to four different chains of attack simultaneously. Methods like intermittent and temporal encryption pose a large threat to organizations because they create data quality issues and allow threat actors to use subtle tactics to move under the radar. Actors are gaining access in many ways, but the most common is still phishing emails. From a Big Data perspective, it’s clear among data professionals that not all data gets the same treatment. This is true with databases and is definitely the case with data collections in an organization. It only takes one data source to be an ingest point for a threat, a testing area for a cybersecurity event or the launchpad for destruction of more important data. Although the threat continues to rise and it becomes more challenging to detect a bad actor’s next move, organizations can still fight back and strengthen data security. Organizations should continue educating employees on phishing emails along with other methods of entry. Additionally, they should strengthen the security supply chain by analyzing their security plan for gaps, utilizing complementary software and applications, and re-evaluating their security strategy.
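
As a small illustration of one detection idea relevant to intermittent encryption, the sketch below samples file chunks and flags those whose byte entropy looks encrypted; the chunk size and threshold are assumptions, and real detection combines many signals.

```python
# Illustrative sketch: flag file chunks whose byte entropy suggests encrypted content,
# one possible signal for spotting intermittent encryption. Values are assumptions.
import math
from collections import Counter


def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; values near 8.0 look random or encrypted."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def flag_suspicious_chunks(path, chunk_size=64 * 1024, threshold=7.5):
    """Return byte offsets of chunks whose entropy suggests encrypted content."""
    suspicious, offset = [], 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            if shannon_entropy(chunk) > threshold:
                suspicious.append(offset)
            offset += len(chunk)
    return suspicious
```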

Developments in Fintech: Deep Learning’s Application for Massive Amounts of Data. Commentary by Kay Giesecke, Founder, Chairman and Chief Scientist at Infima Technologies and Professor of Management Science & Engineering at Stanford University

The $10+ trillion mortgage-backed securities (MBS) market is the perfect example of how large-scale deep learning technologies can be applied to a market that’s traditionally been stuck in an analog era. The options are endless. Deep learning brings performance-boosting edges to MBS market participants including mutual funds, hedge funds and investment banks. Deep learning uncovers hidden patterns in mortgage borrower behavior from billions of data points spanning decades of monthly records from tens of millions of homeowners across the United States. By encoding these patterns, deep learning systems generate highly accurate projections of MBS and market behavior under alternative macroeconomic scenarios. The projections and associated risk and return analytics enable investors, traders and other market participants to identify attractive opportunities and spot risks in MBS markets, even in volatile periods such as the pandemic and the unprecedented rise of interest rates in 2022. In 2023, we’ll continue to see more deep learning and AI technologies applied across the financial industry, merging technology with finance in a way that will modernize how business is conducted.
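
To give a sense of the kind of borrower-behavior modeling described, here is a minimal sketch of a small neural network predicting prepayment from loan features; the features and data are synthetic stand-ins, not Infima’s models or real MBS data.

```python
# Synthetic sketch: a small neural network predicting prepayment from loan features.
# Features, labels, and data are fabricated stand-ins for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 5_000
rate_incentive = rng.normal(0.5, 1.0, n)   # prevailing rate minus the borrower's note rate
loan_age = rng.uniform(0, 120, n)          # months since origination
ltv = rng.uniform(40, 100, n)              # loan-to-value, %
X = np.column_stack([rate_incentive, loan_age, ltv])

# Synthetic label: prepayment becomes more likely as the rate incentive grows.
p = 1 / (1 + np.exp(-(1.5 * rate_incentive - 0.01 * ltv)))
y = rng.binomial(1, p)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```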

Bringing Explainable AI to the Insurance Sector. Commentary by Bill Schwaab, VP of North America for boost.ai

Advancing digital transformation in established industries invites its own unique challenges. As the insurance industry opens up to enabling efficiency through technology, artificial intelligence will be key to providing the optimization firms are looking for. Risk assessment, claims processing, and customer service each present their own set of nuances that will need to be proactively addressed. Illuminating the processes behind artificial intelligence creates avenues to better leverage customer data and chart courses of action for advancement. Moving beyond the traditional black box, technology providers should offer insight into their solutions that not only explains backend processes but also empowers users to be confident in their management of a technical product. In the case of conversational AI, firms must address the limitations of legacy systems and provide customers with information that accounts for the logistical lift required throughout insurance processes. When offloading some of that work to a virtual agent, decision points in conversation flows and suggestions made by an AI need to be clear so firms can ensure proper communication with customers. The end goal should be to keep communication straightforward and simple where possible, so that customers feel they are playing an active role in their plan.
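
As a rough illustration of surfacing decision points in a virtual agent, the sketch below routes an utterance to an intent and returns the evidence behind the decision; the intents, keywords, and threshold are hypothetical, and production conversational AI is far richer.

```python
# Hypothetical sketch: route an utterance to an intent and expose the decision point
# (chosen intent, confidence, matched evidence) so it can be audited.
import re

INTENTS = {
    "file_claim": {"claim", "accident", "damage"},
    "check_coverage": {"coverage", "policy", "covered"},
}


def route(utterance, threshold=0.5):
    """Return the routing decision along with the evidence behind it."""
    tokens = set(re.findall(r"[a-z]+", utterance.lower()))
    best_intent, best_score, best_hits = None, 0.0, set()
    for intent, keywords in INTENTS.items():
        hits = tokens & keywords
        score = len(hits) / len(keywords)
        if score > best_score:
            best_intent, best_score, best_hits = intent, score, hits
    decision = best_intent if best_score >= threshold else "handoff_to_agent"
    return {"decision": decision, "confidence": round(best_score, 2), "evidence": sorted(best_hits)}


print(route("Is water damage covered by my policy?"))
```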

Blended approach of machine learning & human expertise essential to analyze data, fight fraud. Commentary by Christina Luttrell, Chief Executive Officer of GBG Americas (Acuant and IDology)

With digital adoption surging, businesses are under pressure to verify a growing number of customer identities without friction while, at the same time, detecting and stopping fraud attempts that are growing in both number and sophistication. Achieving this balance requires a careful mix of technology and fraud expertise. Machine learning has become an important player in helping businesses overcome the challenge of verifying identities at scale. However, machines alone aren’t enough to get the job done. They’re great at detecting trends that have already been identified as suspicious, but because machines use data patterns and assume future activity will follow those same patterns, they are ineffective at spotting new forms of fraud. Layering human fraud expertise onto machine learning is necessary to counter this critical blind spot. Human fraud experts can review and flag transactions that may have initially passed identity verification controls but are suspected of being fraudulent. When a new form of fraud is identified, updated data is fed back into the machine learning system, which then encodes that knowledge and adjusts its algorithms accordingly. Another important consideration when applying machine learning to identity verification is its lack of transparency. Businesses need transparency to explain how and why certain decisions are made, but most machine learning systems provide a simple pass or fail score. Without transparency into the process behind decisions, it can be difficult to justify them when regulators come calling. Identity verification solutions that offer continuous data feedback from machine learning systems alleviate this problem and help businesses make informed decisions and adjustments to verification processes.
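
A minimal sketch of that feedback loop follows: analyst-confirmed labels are folded back into the training data and the model is refit. The features, labels, and logistic-regression model are illustrative stand-ins, not any vendor’s system.

```python
# Illustrative sketch: fold analyst-confirmed fraud labels back into training data and refit.
# Features, labels, and the logistic-regression model are stand-ins for a real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 4))   # e.g. amount, velocity, device risk, geo risk
y_train = (X_train[:, 0] + X_train[:, 1] > 1.5).astype(int)
model = LogisticRegression().fit(X_train, y_train)


def incorporate_review(model, X_train, y_train, reviewed_X, reviewed_y):
    """Append human-reviewed cases (including novel fraud patterns) and retrain."""
    X_new = np.vstack([X_train, reviewed_X])
    y_new = np.concatenate([y_train, reviewed_y])
    return model.fit(X_new, y_new), X_new, y_new


# A human expert flags a novel pattern the model initially passed as legitimate.
flagged_X = rng.normal(loc=[-1.0, -1.0, 2.5, 2.5], size=(20, 4))
flagged_y = np.ones(20, dtype=int)
model, X_train, y_train = incorporate_review(model, X_train, y_train, flagged_X, flagged_y)
```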

Sign up for the free insideBIGDATA newsletter.

Join us on Twitter: https://twitter.com/InsideBigData1

Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/

Join us on Facebook: https://www.facebook.com/insideBIGDATANOW
