Heard on the Street – 8/30/2022


Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

Data science shaping the future of SEO. Commentary by BrightEdge co-founder and CTO Lemuel Park

The fusion of data science and SEO will continue and accelerate in 2022. SEO success now requires the completion of data-heavy tasks like research, on-site analysis and user intent modeling. Additionally, modern SEO marketers and data scientists share more commonalities, including forecasting and predicting future trends with business insights, researching new market opportunities and analyzing complex datasets. As such, a forward-thinking, innovative SEO platform should have a data science framework at the heart of its tech stack, especially in an enterprise environment. As a CTO, I strive to make data science the top resource for SEOs of today and the next generation.

Fixing the Retail Returns Problem with Data. Commentary by Slava Kokaev, Director of Enterprise Data Architecture, goTRG

The continued growth of eCommerce sales, where roughly 20% of all purchases are returned ($218 billion of eCommerce returns in 2021), makes reverse logistics one of the most complex challenges in the retail industry. On the reverse logistics side, companies seek to tackle problems such as making the right planning and sourcing decisions, managing inventory, handling logistics, fulfilling orders in a timely manner, dealing with pricing fluctuations, hiring and retaining workers, and prioritizing sustainability through waste reduction and recycling. Managing these activities effectively requires a strong data foundation that can produce the right actionable insights at the right time. There are three main pillars to a strong and successful data solution: (i) a centralized cloud data repository that serves as a single source of truth for fast and easy data access, self-service reporting, and advanced analytics; (ii) a scalable and agile data infrastructure that can run large AI and machine learning workloads and aggregate massive data sets; (iii) a modern predictive modeling platform to enable data science and advanced analytics. Implementing these components helps companies understand the demand for returned items, identify an item’s value in the secondary market and its most profitable channel(s), forecast consumer behavior and demand, deliver better customer experiences and retention, and optimize the cost of operations.
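To make the third pillar a little more concrete, here is a minimal sketch, in Python, of a predictive model estimating weekly demand for returned items from historical return counts. The data, column names, and the choice of a simple linear trend are illustrative assumptions, not a description of goTRG’s actual platform.

```python
# Minimal sketch: forecasting weekly returned-item volume with a simple
# linear trend fit. The data and column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical weekly return counts for one product category.
history = pd.DataFrame({
    "week": np.arange(1, 13),
    "returns": [120, 135, 128, 150, 160, 155, 170, 182, 175, 190, 205, 198],
})

model = LinearRegression()
model.fit(history[["week"]], history["returns"])

# Forecast the next four weeks of returned-item demand.
future = pd.DataFrame({"week": np.arange(13, 17)})
forecast = model.predict(future)
print(dict(zip(future["week"].tolist(), forecast.round(1))))
```

In practice, a returns forecast of this kind would fold in far richer signals (category, item condition, channel, seasonality), but the shape of the workflow is the same: historical data in, a fitted model, a forward prediction out.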

Will Automation Save Airlines’ Reputations? Commentary by Rebecca Jones, senior vice president and GM of Mosaicx at Intrado

Global airlines are currently facing an onslaught of calls from unhappy summer travelers, while also managing staffing shortages and newly remote contact center operations. Travelers and customers across all industries demand fast, personalized and reliable support from businesses. If a business’ contact center wait times are too long, a customer might end the call and never call back, or leave a negative review if they were unhappy with the resolution and service the company provided. Both of these scenarios threaten a company’s reputation, customer loyalty and bottom line. To combat high call volumes with fewer staff, airlines and brands across industries are embracing automation and AI tools. AI and automation tools can alleviate the pain points contact centers face during the summer “flightmare” saga and beyond. Contact center solutions such as intelligent virtual agents (IVAs) enable rapid automation of common requests and lift some of the weight off contact center agents. IVAs sound like real people and understand natural language used by customers, making them more capable than automation tools used in the past. During times of higher call volumes, an IVA can supplement human agents, shorten wait times and streamline navigation. They can also quickly deliver contextual, personalized and thorough answers using customer data. Implementation of IVAs also offers significant cost savings for contact centers. These AI-powered virtual agents generate revenue through personalized and proactive lead nurturing efforts. They also help companies save overhead costs because the virtual agent can supplement existing staff and is often paid per productive hour. Greater reliance on automation in the contact center is a strategic imperative to provide better customer service and experiences.

Twilio suffers phishing attack, compromising customer data. Commentary by Arti Raman (She/Her), CEO & Founder, Titaniam

In the recent Twilio phishing attack, attackers gained access to Twilio’s systems by tricking multiple targeted employees into handing over their credentials, then used those stolen credentials to gain unauthorized access to information related to a limited number of Twilio customer accounts. As this incident proved, despite the security protocols in place, privileged credentials can still be used to reach systems, allowing hackers to steal the underlying data. The most effective solution for keeping customer PII safe and minimizing the risk of extortion is data-in-use encryption, also known as encryption-in-use. Encryption-in-use provides enterprises with unmatched immunity to data-focused cyberattacks. Should adversaries gain access to data by any means, data-in-use encryption keeps the sensitive information encrypted and protected even when it is actively being utilized. This helps neutralize all possible data-related leverage and dramatically limits the impact of a data breach.
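For readers unfamiliar with the underlying idea, the sketch below shows plain field-level encryption of customer PII using the `cryptography` library. It is only a simplified stand-in: the encryption-in-use approach described above keeps data protected even while it is being queried and processed, which this example does not attempt to demonstrate.

```python
# Minimal sketch: field-level encryption of customer PII with Fernet.
# Stolen records are unreadable without the key, but this is NOT full
# encryption-in-use, which protects data even during active processing.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, kept in a KMS/HSM, never in code
fernet = Fernet(key)

customer = {"email": "jane@example.com", "phone": "+1-555-0100"}

# Encrypt sensitive fields before they are stored.
encrypted = {field: fernet.encrypt(value.encode()) for field, value in customer.items()}

# An attacker who exfiltrates `encrypted` without the key sees only ciphertext.
print(encrypted["email"][:20], b"...")

# An authorized service decrypts a field only when strictly necessary.
print(fernet.decrypt(encrypted["email"]).decode())
```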

Digital solutions will accelerate business planning with precision. Commentary by Jochen Heßler, Senior Director Product Management at Jedox

While the manufacturing industry, which is reliant on robotics, artificial intelligence (AI), and machine learning, is the most automated in the world, these solutions have seeped into nearly every sector in the last decade. Once thought to be futuristic, AI has begun to infiltrate many aspects of the organization including the Office of the CFO. According to a PwC report, 86% of those surveyed viewed AI as a “mainstream” technology. The report also claimed that those who adopt AI and ML solutions this year will see higher revenue growth compared to those who cling to manual processes. Decisions based on accurate, real-time data can be accelerated by leveraging widespread expertise in Excel to design, implement, and explain large-scale forecasting and planning initiatives efficiently. Using flexible modeling is essential to configure solutions for finance, sales, and workforce planning. At the same time, leaders can adapt models and integrate internal and external drivers quickly as the business scales. Machine learning predictions can augment human-driven decisions. Accelerated cloud adoption is another trend worth closely watching. According to a recent Gartner report, spending on public cloud services will experience double-digit growth in 2022, and cloud adoption is forecast to exceed 45% of all enterprise IT spending by 2026. In 2021, that number was less than 17%. Private cloud (on-premises) adoption will also accelerate this year. Whether businesses expand public, private or hybrid cloud models, tightening cloud security will continue to be on the minds of decision-makers, in particular for businesses in sectors with tight regulations and strict data privacy rules. High scalability and reduced complexity ensure agility for innovation.

How economic uncertainty and inflation are forcing companies to revisit their budgets, why the cost of corporate data is under scrutiny, and how a multi-cloud approach can help. Commentary by Sean Charnock, CEO of Faction

With the consumer price index jumping 8.5%, economic uncertainty is forcing businesses to revisit their budgets. One key item under scrutiny is the actual cost of corporate data: the spend associated with storing, accessing, sharing and governing data. Enterprises today spend billions of dollars on data, often without a long-term vision supporting efforts around data efficiency, economic efficiency and a focus on data as a strategic asset. Without this vision, companies are battling data sprawl, expensive on-demand fees related to cloud computing and data transfer, and an extreme burden of technical debt. So how can companies regain control of their data, and rein in their data management spend in the process? By adopting a true multi-cloud approach. This means maintaining a single copy of data in a central location accessible to all clouds at once, reducing both cost and complexity. Any market downturn forces companies to examine solutions that reduce costs, but adopting a true multi-cloud approach to data services also opens new opportunities for innovation.

Standardized Data Locality API for Data-intensive Workloads in Kubernetes. Commentary by Bin Fan, VP Open Source and Founding Engineer at Alluxio

While Kubernetes has made it exceptionally easy to deploy and scale data-intensive applications elastically, accessing data from cloud-native data sources (like AWS S3 or, sometimes, remote data warehouses) becomes more challenging. Platform engineers often have to copy data to optimize I/O throughput, which is error-prone and time-consuming. As the Kubernetes ecosystem matures and becomes more efficient, addressing this challenge becomes more pressing, and different attempts are being made to bring back data locality and influence workload scheduling. The ability to factor data locality into scheduling decisions is becoming more important for Kubernetes schedulers. We believe this ability will become a crucial part of the Kubernetes interface, helping applications and schedulers operate more efficiently.
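To make the scheduling idea concrete, here is a minimal sketch of how a workload might express a data-locality preference today with a node-affinity rule keyed on a hypothetical `data-locality/dataset` node label. The label, image, and names are assumptions for illustration; they are not part of any standardized Kubernetes data-locality API.

```python
# Minimal sketch: a pod manifest that prefers nodes labeled as already
# holding the "clickstream-2022" dataset (a hypothetical locality label).
import json

pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "analytics-job"},
    "spec": {
        "containers": [{"name": "worker", "image": "example.com/analytics:latest"}],
        "affinity": {
            "nodeAffinity": {
                "preferredDuringSchedulingIgnoredDuringExecution": [
                    {
                        "weight": 100,
                        "preference": {
                            "matchExpressions": [
                                {
                                    "key": "data-locality/dataset",
                                    "operator": "In",
                                    "values": ["clickstream-2022"],
                                }
                            ]
                        },
                    }
                ]
            }
        },
    },
}

# Written out as YAML or JSON, this could be applied with `kubectl apply -f`.
print(json.dumps(pod_manifest, indent=2))
```

A standardized locality API would let schedulers derive this kind of preference from the data layer itself rather than relying on hand-maintained node labels.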

Integration should live within the technology team. Commentary by Tam Ayers, Digibee Field CTO

96% of the respondents using alternate platforms experienced stagnation during solution implementation. That number should FLOOR people. This industry has evolved exponentially, leaving no excuse for downtime. To find that the majority of those surveyed have encountered it and company executives are complacent about it? Shocking. Lately, many integration solutions have been attempting to federate integration development to the business users. The reality is that integrations, which are mission-critical to the business, still live in IT. Whether with a principal integration architect, development operations lead, or even a CIO/CTO, integration ends up living with IT. When the business alone is driving the integration discussion to achieve a necessary business objective, IT is left in the dark. This opens the door to unanticipated and unnecessary risks like outages, rework and security issues. These consequences are simply not something the business can tolerate.

Modern Enterprises, Real-Time Reality. Commentary by Chetan Venkatesh, CEO & Co-Founder of Macrometa

Modern enterprises aren’t just operating within a single realm, nor are they limited to a single cloud. The modern enterprise is global; it’s operating within multiple systems in multiple countries, managing compliance and data privacy everywhere, with customers and partners everywhere. There are multiple clouds, layers of security, legacy systems, data lakes, serverless applications transmitting data, and so on. True modernization is a journey that happens over time, one step at a time. What I’m getting at is that the modern enterprise is complex, and running that level of complexity on a decades-old model of centralization is expensive, not ideal, and not the only reality. When we talk about a data mesh, or a global data mesh, we’re talking about connecting these existing databases, warehouses, lakes, and apps across systems and querying that data regardless of where it lives. It’s about enterprises getting value out of their data faster – without being stuck in the mire of antiquity. Modern enterprises are embracing the real-time reality.

On the difficulty of adopting real-time analytics and gaining insights quickly. Commentary by Li Kang, VP of Strategy at StarRocks

In today’s enterprises, data freshness and responsiveness (query latency) are critical ingredients for success and survival. Although data professionals have always faced these challenges, they are being amplified by ever-increasing data volumes, the proliferation of citizen analysts and data scientists, the emergence of embedded analytics, and growing data pipeline complexity. Current cloud data warehouses and lakehouses cannot process and query real-time data fast enough or in a unified way, so adopting real-time analytics and gaining insights quickly is proving very difficult. Most organizations that are able to address these challenges rely heavily on their data engineering teams to support their data users. This has proven to be a time-consuming process that leaves users waiting for extended periods for the tools and connections they need to be built, significantly slowing their organization’s ability to innovate. Existing real-time analytics databases are not able to support these business needs and add unnecessary complexity to data architectures. As a result, real-time processing has caught on quickly, but real-time analytics, which involves discovering patterns in data and working with real-time streams, has been slower to gain traction because of the many technical challenges in analytical queries.

Women’s Equality Day. Commentary by Annemie Vanoosterhout, release and project manager, Datadobi

I was hopeful that the pandemic would bring about a shift in work-life balance and create a better work environment for all. However, a number of companies are going back to their old norms. It might be that many enterprises want to be as cost-effective as possible and don’t want to invest in implementing improvements because of economic constraints. However, changes like allowing remote work would greatly benefit the mental health of employees, especially women and mothers balancing family life with their job. On Women’s Equality Day, my advice to young women aspiring for a career in tech is to walk your own path. Even if it takes a detour, you will end up where you are supposed to be and where you will shine. Don’t shy away from going outside your comfort zone, and set small, intermediate goals and targets to work toward. It’s easier to make small changes regularly than to make big adjustments later on. Keep learning, not only in the form of courses but also from colleagues and people you admire, whatever their gender is.

Women’s Equality Day. Commentary by Loretta Jones, VP of Growth, Acceldata

We continue to see growth and welcome change in diversity and inclusion, especially in the tech sector. However, Women’s Equality Day is a reminder that gender equality is still a work in progress, and there is still a way to go to eliminate sexism in the workplace. To make real progress, we all need to be honest with ourselves about our implicit biases and be open to change. Companies can’t address implicit bias through policies and procedures or compliance sessions, but through open dialog with practical tips on recognizing and addressing implicit bias. My advice to women breaking into the tech industry is to keep at it. Don’t be daunted or intimidated, and don’t be afraid to ask for help because that’s how you learn, grow and advance. The tech industry is vast with many opportunities out there – you will find the right place for your skills.

AIOps & how organizations are missing out on its full potential. Commentary by Patrick Lin, VP Product Management, Observability, Splunk

In a world with swelling data volumes, businesses are constantly looking for new and optimal ways to improve reliability, prevent downtime and drive operational efficiency. Observability has been in the spotlight as a leading solution for some time, and leaders are now turning to AIOps as a critical component of an observability practice. AIOps applies AI/ML to the large amounts of data used in observability to drive monitoring, troubleshooting, and remediation use cases, spanning adaptive thresholding, anomaly detection and prediction, event correlation and analysis, root cause analysis, intelligent incident response and auto-remediation. The reality, however, is that many organizations are seriously underutilizing AIOps in their observability practices. A recent report shows that AIOps use cases vary significantly between observability beginners and leaders – the former are confined to mean time to detection (MTTD) and root-cause analysis, and often fail to get the most out of their AIOps/observability relationship. Leaders, or those with 24 months of experience with observability, take similar action but use automation to get a predictive edge on the health of their applications and infrastructure at a much higher rate than beginners. The vast majority of teams adopting AIOps see a measurable improvement in MTTD and root-cause analysis, and the natural next step should always be improved prediction. Once that is achieved, IT Operations, DevOps and software development teams will be empowered to ramp up innovation without worrying about manual, resource-draining data tasks.
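As a rough illustration of one technique on that list, adaptive thresholding, the sketch below flags latency anomalies against a rolling baseline rather than a fixed limit. The metric values, window size, and three-sigma rule are assumptions for illustration only.

```python
# Minimal sketch: adaptive thresholding on a latency metric. The baseline
# (rolling mean and standard deviation) excludes the current point, so a
# spike cannot mask itself.
import pandas as pd

latency_ms = pd.Series(
    [102, 98, 105, 101, 99, 103, 100, 240, 104, 97, 102, 101],
    name="p95_latency_ms",
)

window = 5
baseline = latency_ms.rolling(window).mean().shift(1)
spread = latency_ms.rolling(window).std().shift(1)
upper = baseline + 3 * spread   # adaptive threshold instead of a fixed value

anomalies = latency_ms[latency_ms > upper]
print(anomalies)  # the 240 ms spike is flagged once a baseline exists
```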

On release of NIST’s AI best practices playbook. Commentary by Byron Gaskin, Lead Scientist at Booz Allen

The release of the National Institute of Standards and Technology (NIST) draft playbook for AI best practice is exciting and welcome. The draft guide treats a socio-technical focus as crucial to building Responsible AI and addressing bias in AI algorithms. Three aspects mentioned in the draft playbook are particularly encouraging: (i) First, NIST researchers recommend categorizing bias into three types: statistical, systemic, and human. At Booz Allen, we applaud this decision, as we have found that each of these types of bias requires a unique approach for detection and mitigation. We believe viewing bias through each of these three lenses is necessary to reduce unintended harm caused by AI models; (ii) Second, the playbook aims to provide a checklist of actionable items that help organizations be more aware of the impact that AI technology can have on individuals, groups, and societies. This is immensely important given the nascent and largely research-focused state of the field of AI Ethics. These checklists will allow organizations to put theory into practice; (iii) Third, the playbook calls for an AI risk management process that involves all members of the AI technology team – from AI researchers to senior leadership. We wholly agree with this approach and believe it fosters organization-wide responsibility for the responsible implementation and use of AI models. With AI maintaining an omnipresent role in modern life, the NIST playbook, and the previously released Risk Management Framework, will serve as an essential human-centered authority for industry development of Responsible AI. We believe the development of Responsible and Ethical AI solutions requires constant awareness of AI’s human impact while providing teams with the governance structure and tools needed to achieve positive impact. We look forward to seeing the impact this playbook will have on current and future AI developments.

Emotional AI: Sentience and Optimization. Commentary by Dr. Matthew Putman, CEO of Nanotronics

We have made impressive progress within the field of Artificial Intelligence in recent years and are continuing to witness exciting new developments in automation across many different industries. However, we have not yet reached a point where algorithms are able to relate to or truly experience the outside world—cognitive associations that define us as uniquely human. The essential attribute of sentience that advanced AI lacks is self-awareness. AI can give us the impression that it understands the world around it through language analysis, but the sense of self that is a necessary element of consciousness has not yet been developed. Current AI models operate by isolating a memory of a singular moment, which results in a fragmented representation of life as it happens around us. That in-between state connecting moments is necessary for sentience, hence the current effort to build up continuous experience as we design for the future of automation. AI is able to imitate us by taking human-created things, whether those be language, art, or other forms of expression, as its input data, so the output appears legible and believable to us as humans. But the idea that AI has gained human consciousness is not exactly correct, as the algorithm is simply passing content through transformers that have been auto-labeled to generate predictable solutions to questions or problems. It performs as it is expected to perform, engineered to optimize, and as humans we do not optimize. We often hesitate when faced with a moral quandary because it elicits an emotional response within us, which is something AI is not designed to do because it has been created purely for pragmatic purposes. AI can simulate emotion quite convincingly, but it is not an emotional being. The emergent properties of our brains responsible for the self-awareness that positions our feelings are still confusing and unknown, which makes the future of intelligence all the more exciting as we look towards building neural nets that respond emotionally to given situations.

Transforming into a data-driven enterprise. Commentary by Kevin Young, Senior Data & Analytics Consultant at SPR.

Data needs to be shared across departments. The value chain within any organization flows across all departments. If data isn’t flowing alongside this value chain, there are both growth opportunities and revenue pitfalls being missed. Sharing data also needs to be valued within the organization. A good way to begin data sharing is by creating a data lake. Members across the organization can add their data to the lake for other members across all departments to consume. Tracking the consumption of data is a vital step for quantifying the importance of the data. Owners of the more valuable datasets need to be recognized as delivering substantial value to the business. This is oftentimes the step that executives get wrong about becoming more data-driven. Maintaining focus on the value of the data itself and recognizing colleagues who bring forth that data creates an environment where sharing value is encouraged. Finally, automation should be leveraged to make fragile data more resilient. Manual data collection processes are error-prone, require a colleague with domain knowledge and can lead to inaccurate data. Automating these processes will improve data reliability, alleviate the employee’s repetitive task burden and free that employee up to conduct greater value-added work. Just make sure to include a few data quality checks in the automation.
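In that spirit, here is a minimal sketch of an automated load step that runs a couple of data quality checks before publishing a dataset to a shared location. The column names, rules, and destination path are assumptions for illustration, not a prescribed toolchain.

```python
# Minimal sketch: automated load with basic data quality checks before a
# dataset is published for others to consume.
import pandas as pd

def load_to_lake(df: pd.DataFrame, destination: str) -> None:
    # Check 1: required columns are present.
    required = {"order_id", "order_date", "amount"}
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")

    # Check 2: no null keys and no negative amounts.
    if df["order_id"].isna().any():
        raise ValueError("null order_id values found")
    if (df["amount"] < 0).any():
        raise ValueError("negative amounts found")

    # Publish only after the checks pass; in practice the destination would
    # be a data lake path (e.g., object storage) rather than a local file.
    df.to_parquet(destination, index=False)

orders = pd.DataFrame({
    "order_id": [1, 2, 3],
    "order_date": pd.to_datetime(["2022-08-01", "2022-08-02", "2022-08-03"]),
    "amount": [19.99, 42.50, 7.25],
})
load_to_lake(orders, "orders.parquet")
```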

Diving into the Metaverse with Data. Commentary by Dineshwar M, VP — Data Science, Polestar Solutions 

As we enter a new world of innovation with hyper-worlds, metaverses and Web 4.0, the possibilities for data analytics & AI-ML are exciting as AI-ML interacts with new-age technologies such as augmented reality (AR), virtual reality (VR) and IoT. These worlds are accessed using VR headsets, AR glasses and apps. The metaverse enables users to own their own digital assets using blockchain technology in virtual worlds which are composed of layers and layers of data. Now, data is transferred at every level of user activity, which enables users to act and react in the virtual worlds. Imagine the number of people interacting within these virtual worlds, and how much insight one can garner from user behavior alone. The possibilities are endless for users as well as brands for providing personalized suggestions based on their historical activities. As data usage and volume increase and the data becomes richer, AI algorithms also become more efficient. So it becomes extremely important for brands to identify the correct data to train and enrich AI models as well. This comes with its own set of challenges, such as data integrity, security, and privacy. How we solve these issues is going to be important, as it decides the rate at which the metaverse grows. The uses are rich – right from the gaming industry to personalized shopping experiences, entertainment, finance, and banking. I believe NLP and conversational AI are going to be the front runners among all AI disciplines with respect to the metaverse value chain.

