Heard on the Street – 8/26/2021

Welcome to insideBIGDATA’s inaugural “Heard on the Street” round-up column! In this brand new regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

Explainability will remain the backbone of assuring high-quality and trustworthy AI/ML models. Commentary by: Shameek Kundu, Chief Strategy Officer at TruEra.

As societal concern around the opacity, ethics and fairness of Artificial Intelligence (AI) systems rises, multiple stakeholders – regulators, customers, internal clients of data science teams – are becoming increasingly vocal about the need to make such models explainable. In the United States, the Federal Trade Commission, the National Association of Insurance Commissioners and federal banking regulators have all highlighted this in recent months. In Europe, a draft law governing the use of AI in high-risk use cases promises to be at least as significant as the General Data Protection Regulation (GDPR).

Not surprisingly, a huge volume of recent academic and commercial activity in the data science space has focused on “AI explainability”. However, viewing explainability in isolation – as an end in itself – misses the bigger picture. Gaining transparency into how a model makes its predictions is useful, but the broader goal must surely be to build trust in AI by ensuring AI Quality. This encompasses not just model performance metrics, but a much richer set of attributes that capture how well the model will generalize, including its conceptual soundness, explainability, stability, robustness, reliability and data quality. It also includes attributes embodying societal and legal expectations of transparency, fairness and privacy.
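For readers who want to see what one slice of this looks like in practice, here is a minimal, purely illustrative sketch (not TruEra’s tooling) of probing two of those attributes, explainability and robustness, with off-the-shelf scikit-learn utilities on synthetic data:

```python
# Illustrative only: probe explainability (permutation importance) and robustness
# (accuracy under small input perturbations) for a simple model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Explainability proxy: which features actually drive the predictions?
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1][:3]:
    print(f"feature {i}: importance {imp.importances_mean[i]:.3f}")

# Robustness proxy: does accuracy hold up under small Gaussian noise?
noisy = X_test + np.random.default_rng(0).normal(scale=0.1, size=X_test.shape)
print("clean accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("noisy accuracy:", accuracy_score(y_test, model.predict(noisy)))
```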

The Future Is Now for the Edge – Why Your Company–Yes, Yours–Will Benefit from Edge Computing. Commentary by: John DesJardins, CTO of Hazelcast.

There’s been a lot of talk about edge computing lately, but I’d argue that the conversation is limited–making too many companies feel like the edge is only for certain types of companies with certain kinds of workloads, or that it’s something to think about “someday.” In truth, any company that is working in the cloud or a hybrid environment is poised to make effective use of the edge–if they think of edge computing and the cloud as part of a data processing continuum rather than as a discrete technology model.

Edge computing enables organizations to take action on data as close to the source as possible–or, as soon as data is “born.” Having visibility into data the moment it’s created then provides insights that empower businesses to immediately act upon user activity while also leveraging technology that reduces latency and bandwidth use. Indeed, not everything needs to go back to the cloud or data center, a point that organizations must consider as their data grows. (And, it will grow!)
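As a purely illustrative sketch of that idea (the function names and threshold below are assumptions, not any particular vendor API), an edge node might act on raw readings locally and forward only a summary or an alert upstream:

```python
# Toy sketch of "act on data where it's born": aggregate locally, uplink only
# what matters. send_to_cloud() is a hypothetical stand-in for an HTTPS/MQTT call.
from statistics import mean

THRESHOLD = 80.0  # assumed alert threshold for illustration

def send_to_cloud(payload: dict) -> None:
    print("uplink:", payload)  # placeholder for the real uplink

def process_window(readings: list[float]) -> None:
    if max(readings) > THRESHOLD:
        # Act immediately at the edge, then notify upstream.
        send_to_cloud({"event": "threshold_exceeded", "max": max(readings)})
    # Ship one aggregate instead of every raw reading, saving bandwidth.
    send_to_cloud({"event": "window_summary", "avg": round(mean(readings), 2), "n": len(readings)})

process_window([71.2, 73.9, 68.4, 82.5, 70.1])
```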

Yes, organizations will need to do a great deal of transformational (there’s that word again) work to close the gaps between requirements and solutions, leaning in on real-time app platforms, among other solutions. But will it be worth it? Without a doubt. The value in making effective use of data through intelligent interplay of the edge and the cloud is nothing short of your organization’s competitive edge.   

Eliminating Bias in Recruiting with AI-Powered Behavioral Assessments. Commentary by Maaz Rana, Co-founder and COO at Knockri.

AI is making waves in the recruitment industry, with AI-powered behavioral assessments changing the landscape of recruiting for the better. With companies committing themselves more to advancing DEI initiatives, it is no wonder we are all on the path to creating better tools to help eliminate bias from the hiring process. A McKinsey study found that more diverse companies are 43% more likely to outperform their less diverse counterparts, which suggests that implementing tools that support DEI is a no-brainer for all companies. Furthermore, according to Forbes, “AI-based software platforms that are both data-driven and taught to ignore traditional prejudices rely on algorithms that prevent historical patterns of underrepresentation.”

Now you may be wondering, what exactly is an ‘AI-powered behavioral assessment’? To begin, according to the APA Dictionary of Psychology, behavioral assessments are “the systematic study and evaluation of an individual’s behavior.” By bringing these assessments together with machine learning, we can help eliminate bias when assessing candidates. It all comes down to marrying I/O psychology with machine learning. I/O psychology has been a driving force in creating AI recruitment tools that recognize behaviors in a candidate’s assessment. By teaching the models to identify key behaviors, a candidate’s response can be analyzed to home in on those skills and find the best fit for the job. AI-powered behavioral assessments are intended to observe and predict behavior without identifying race, ethnicity, age, disability, gender, or any of the other characteristics that trigger the systemic biases we all inherently carry. When we use representative datasets and implement critical guardrails for AI, we can fully grasp a candidate’s skill set and remove barriers that we might not even realize we put up when meeting a candidate for the first time.
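To make the mechanics a little more concrete, here is a deliberately simplified sketch, not Knockri’s actual system, of scoring free-text responses against a single labeled behavior with a generic text classifier; the example responses and labels are invented for illustration:

```python
# Illustrative only: train a text model on responses labeled for one behavior
# ("collaboration") so new responses can be scored on that behavior alone,
# with no demographic inputs in the feature set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

responses = [
    "I brought the two teams together and we agreed on a shared plan",
    "I paired with a colleague to debug the issue and we split the follow-up work",
    "I prefer to finish tasks on my own without checking in",
    "I completed the report alone and submitted it before the deadline",
]
labels = [1, 1, 0, 0]  # 1 = shows collaboration, 0 = does not

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(responses, labels)

new_answer = ["We ran a joint retrospective and divided the action items as a group"]
print("collaboration score:", clf.predict_proba(new_answer)[0][1])
```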

Why companies mistakenly trust they have complete data governance when it often misses key elements of policy or security. Commentary by: James Beecham, Co-founder and CTO, ALTR.

Unfortunately, perfectly well-intentioned leaders mistakenly believe their organizations have complete data governance when in reality, they tend to miss key elements of policy and security, leaving gaps in their governance strategy. Why is this happening? Because modern data infrastructure and its risks have changed significantly over the past few years. Early on, data governance required handling metadata management, discovery/classification, and data quality. Now, data experts have to keep up with a changing, complicated landscape for data sharing and access. When organizations start migrating data to the cloud, they are still responsible for managing both the data itself and the people who use it in that environment. Therefore, companies need to look at data governance and data security holistically to track access, set policies, enforce proper data governance, and secure the data. 

This requires a different approach to interacting with data, which includes creating governance policies that can work in real-time, controlling access to sensitive data, and automatically responding to potential data security threats. It’s one thing to protect data at rest, but in order to protect all data, it must be protected during its use as well. As the target and goals change, it’s essential for us to redefine data governance with a strong security mindset and strategy for a complete picture. With increasingly punitive monetary fines and brand damage associated with data misuse, companies should not consider themselves compliant or prepared without including a security strategy. 
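As a rough illustration of what “governance plus security in one place” can mean at the code level (the roles, classifications, and thresholds below are invented for the example, not ALTR’s product), a single access check might combine a policy lookup with an anomaly guardrail and logging:

```python
# Illustrative only: one access check that enforces a governance policy
# (who may read which data classification) and a security response
# (log and block anomalous requests).
import logging

logging.basicConfig(level=logging.INFO)

POLICY = {  # classification -> roles allowed to read it
    "public": {"analyst", "engineer", "admin"},
    "pii": {"admin"},
}

def request_access(user_role: str, classification: str, rows_requested: int) -> bool:
    allowed = user_role in POLICY.get(classification, set())
    suspicious = rows_requested > 10_000  # crude volume guardrail
    if not allowed or suspicious:
        logging.warning("blocked: role=%s class=%s rows=%d", user_role, classification, rows_requested)
        return False
    logging.info("granted: role=%s class=%s rows=%d", user_role, classification, rows_requested)
    return True

request_access("analyst", "pii", 50)      # blocked by policy
request_access("admin", "pii", 50)        # granted
request_access("admin", "pii", 500_000)   # blocked as anomalous volume
```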

Alphabet Earnings + Cloud Implications. Commentary by: Heikki Nousiainen, CTO at Aiven.

Google’s growth is a direct indicator of the upward trajectory of the cloud. The public cloud market is expected to total $304.9B in 2021 due to the popularity of managed database and cloud services, and another overlooked area that I anticipate will continue to gain traction is open source. This year, Google continues to be a leading corporate contributor to open source software, having significantly increased the number of employees actively contributing to open source projects from June 2016 to June 2021. However, an unexpected result from recent research showed a 10% reduction in the number of contributions to code (commits) in the last year. Does this mean Google is cooling its commitment to open source? I don’t think so; it’s been an exceptional year, and Google has championed open source since its early days. It’s most likely a result of the increased maturity of its open source projects combined with the turbulence of the past year.

Despite this, those who back the open source community know how valuable players like Google are to its growth and overall success. While such enterprises were once known for doubting the capabilities of open source projects, they have since come around to support major strides in the community. In fact, Google co-founded the Open Source Security Foundation (OpenSSF) with Microsoft and even recently updated its Security Scorecards project to double down on security for open source projects and keep users and developers better informed, making it clear that tech giants know their customers rely on the power of open source in the services they provide. 

Just as their customers depend on open source, we depend on industry giants like Google to back the true nature of open source alongside other community members, whether they’re smaller cloud providers or individual contributors. Google’s resources and influence help develop a stronger community that will benefit many people now and in the future.

Ways that businesses can ensure their use of AI is responsible, ethical and trustworthy. Commentary by: Sudhir Jha, Head of AI company Brighterion and SVP of Mastercard.

The rapid acceleration in AI adoption across businesses plays a huge role in how consumers interact with and perceive a brand. In addition to consumers, employees want to feel good about the businesses they work for. Responsible use of the technology is necessary to ensure a positive impact.

Transparency is key to building trust and should be integral to any AI solution within an organization. It should start with how the data is collected, stored and managed. There should be very clear policies around the insights and decisions that can be derived from the data. All possible impacts of actions taken by the AI solution should be well understood and documented. There should also be periodic audits of the solution to ensure no biases or unwanted impacts have been introduced over time. There is no reason to treat AI as a black box or “magic.” Increased transparency will result in increased trust, which will drive adoption across the organization as people feel more comfortable with the technology.
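One simple, illustrative form such a periodic audit could take, with invented group labels and the common four-fifths rule as an assumed threshold, is a comparison of positive-decision rates across groups in a recent window of decisions:

```python
# Illustrative only: flag the model for review if the ratio of positive-decision
# rates between groups falls below an assumed four-fifths (0.8) threshold.
from collections import defaultdict

decisions = [  # (group, model_decision) pairs from a recent audit window
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    positives[group] += decision

rates = {g: positives[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())
print("positive rates:", rates)
print("flag for review" if ratio < 0.8 else "within threshold", f"(ratio={ratio:.2f})")
```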

The Benefits of Quantum Computing in Supply Chain Management. Commentary by: Yuval Boger, Chief Marketing Officer at Classiq.

Global shipping companies had to scramble when the Suez Canal was recently blocked. Quantum computers can solve this class of problem (known as “the traveling salesperson problem”) very efficiently, and much more quickly than a classical computer can. Through quantum computing, shippers can quickly determine optimal shipping sequences, including which route is fastest, which is most cost-effective, which has the least environmental impact, and even how these might change during the day with changing traffic or weather conditions. If Uber, UPS, or a global shipping company could save, for instance, 15-20% of their transportation costs, quantum computing becomes less of an investment and more of a formidable competitive advantage with huge bottom-line value.
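For readers unfamiliar with the problem class, a tiny brute-force classical example (with made-up port distances) shows why route optimization explodes combinatorially as the number of stops grows, which is exactly the scale at which quantum optimization approaches become interesting:

```python
# Illustrative only: brute-force a tiny traveling-salesperson instance.
# With n stops there are n! candidate tours, which is what makes large
# instances intractable for exhaustive classical search.
from itertools import permutations

ports = ["Rotterdam", "Suez", "Singapore", "Shanghai"]
dist = {  # invented, symmetric "distances" between ports
    ("Rotterdam", "Suez"): 6, ("Rotterdam", "Singapore"): 15, ("Rotterdam", "Shanghai"): 19,
    ("Suez", "Singapore"): 9, ("Suez", "Shanghai"): 13, ("Singapore", "Shanghai"): 4,
}

def d(a, b):
    return dist.get((a, b)) or dist[(b, a)]

def tour_length(order):
    legs = zip(order, order[1:] + order[:1])  # close the loop back to the start
    return sum(d(a, b) for a, b in legs)

best = min(permutations(ports), key=tour_length)
print("best route:", " -> ".join(best), "| length:", tour_length(best))
```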

AI Will Not Displace Human Beings Any Time Soon. Commentary by: James Isaacs, President of Cyara.

When you look at the use of AI in consumer-facing operations today, it’s mainly used in AI-supported chatbots and customer personalization features. If we look at how consumers have taken advantage of AI-supported features during the pandemic, we can see that they’re actually using them to resolve issues faster through human agents. Companies like Bank of America, which has a consumer-facing AI-powered chatbot named Erica, saw consumers using Erica to find the best way to engage customer support teams. Rather than asking Erica questions to fix any issues directly, customers simply asked Erica how they should go about reaching out to the customer service team to rapidly resolve their problem with the appropriate human agent.

Code is the New English. Commentary by: Alok Kulkarni, CEO of Cyara.

English is the must-know language to conduct business around the globe, although other languages like Mandarin are becoming equally important. Several hundred years ago, the trade language was Latin. Soon, the must-know trade language will be code. Companies are beginning to apply DevOps and coding practices, such as continuous testing and analyzing production data to improve future product designs, to other business operations. The adoption of these practices across the enterprise will speed up the innovation process and offer significant improvements to customer personalization in contact centers. More and more organizations are requiring that C-suite-level leaders possess an understanding of coding and DevOps practices for this very reason. As more and more services and products become fully digital, companies will make the previously predicted shift towards leveraging their data to drive revenue. To do this, understanding coding and the flow of data between applications is vital.

Sign up for the free insideBIGDATA newsletter.

Join us on Twitter: @InsideBigData1 – https://twitter.com/InsideBigData1
