Heard on the Street – 9/22/2022


Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

Artificial Intelligence and Clean Energy: How AI is Modernizing the Grid. Commentary by Chuck Wells, Chief Technology Architect at PXiSE Energy Solutions 

When it comes to managing the electric power grid of the 21st century, artificial intelligence (AI) offers solutions to stabilize the intermittent nature of renewables, enabling their widespread adoption and reducing reliance on fossil fuels. In addition to helping utilities manage their power grids, AI also empowers smaller energy producers to take control of their energy production by automating energy management, thereby facilitating a more democratic approach to energy production. AI can be central to a grid or microgrid controller’s forecasting, control, optimization and scheduling features. Using an algorithm’s ability to compare predicted energy consumption to anticipated production 24-48 hours in advance, automated grid controllers can optimize renewable energy production and storage, improving the consistency of renewable power and reducing reliance on fossil fuel generation. AI already plays a major role in clean energy, with proprietary algorithms running under the hood of controllers to keep grids with a high share of renewables stable. AI-enhanced automated grid controllers include microgrid controllers, distributed energy resource management systems (DERMS), and power plant controllers. Systems equipped with machine learning are well-positioned to quickly process and respond to vast quantities of energy data with a high level of accuracy. Machine learning also plays a key role in interpreting weather forecasts, allowing commercial entities and utilities relying on solar energy to store energy ahead of cloudy periods to avoid power disruptions or brownouts. With the addition of AI capabilities, energy controllers can make split-second adjustments based on data arriving as often as 60 times per second, a processing speed beyond the capabilities of even the most skilled grid operator.
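
As a rough illustration of the forecast-versus-demand comparison described above, the following minimal sketch shows how a controller might decide, hour by hour over the next 24-48 hours, when to charge or discharge storage. The data, the simple surplus/deficit rule, and the `schedule_storage` helper are all invented for the example; this is not PXiSE’s actual controller logic.

```python
# Minimal sketch: compare forecast renewable production to forecast demand
# and schedule battery charge/discharge for each upcoming hour.
# All numbers are illustrative; a real controller would use live telemetry
# and a proper optimizer, not this simple surplus/deficit rule.

from dataclasses import dataclass

@dataclass
class HourlyForecast:
    hour: int             # hours ahead (1..48)
    production_mw: float  # anticipated renewable production
    demand_mw: float      # predicted consumption

def schedule_storage(forecasts):
    """Return a list of (hour, action, megawatts) decisions."""
    plan = []
    for f in forecasts:
        surplus = f.production_mw - f.demand_mw
        if surplus > 0:
            plan.append((f.hour, "charge", surplus))      # store excess renewables
        elif surplus < 0:
            plan.append((f.hour, "discharge", -surplus))  # cover the shortfall
        else:
            plan.append((f.hour, "hold", 0.0))
    return plan

if __name__ == "__main__":
    sample = [
        HourlyForecast(hour=1, production_mw=120.0, demand_mw=95.0),
        HourlyForecast(hour=2, production_mw=60.0, demand_mw=110.0),
    ]
    for hour, action, mw in schedule_storage(sample):
        print(f"hour +{hour}: {action} {mw:.1f} MW")
```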

Transforming Healthcare With Artificial Intelligence. Commentary by Dr. Peter Fish, CEO of Mendelian

Along with economic challenges and a growing global population, COVID-19 has placed healthcare services under more pressure than ever before. Thankfully, cutting-edge AI, machine learning and data science technologies are becoming progressively more impactful in assisting clinical practitioners. To help healthcare systems thrive over the next few years, it will be essential to trial a range of AI-based solutions that assist clinicians in delivering high-quality care to patients across the entire scope of services. Leveraging analytics to mine largely untapped reserves of clinical data will help clinicians provide more personalized treatment and care by improving areas such as remote monitoring, health management, patient interaction, objective symptom collection, disease diagnosis and drug discovery. While separate technologies will, and already do, contribute value alone, the true potential lies in the synergy of multiple advances across the entire patient journey.

Ban of AI hardware to China/Russia. Commentary by Ambuj Kumar, co-founder and CEO of Fortanix

The US government effectively blocked the sale of advanced AI chips from NVIDIA and AMD to China. My prediction is that in five years, you will be able to count on one hand the types of computing devices that operate in both the US and China. Most manufacturers will have to choose between the US and China. I’m calling this digital decoupling. Right now, the focus may be on advanced computing, but wait till they realize how easy it would be to snoop on the other side using the microphones and cameras found in almost every device. Most TVs, IoT devices, phones, laptops, servers, and the like will not operate across these countries. Digital decoupling will happen faster than most anticipate, and democracies will be better because of it.

How AI and Blockchain Can Transform the Prescription Drug Market. Commentary by Luyuan Fang, Chief AI and Data Officer at Prescryptive 

Half of patients in the US have abandoned necessary prescription drugs at the pharmacy because of high cost and a lack of price transparency. This is a greater problem within the healthcare industry as a whole, but one way to help both patients and pharmacists mitigate challenges around prescription drug prices is by utilizing the power of AI and blockchain. AI can create customized, dynamic pricing and enhanced analytics by looking at factors like geolocation, population, therapeutic value and competition. This creates a competitive pricing model that enables pharmacies to provide the best value for customers while still maintaining the profitability they need to stay in business and continue to serve their communities. AI is also the foundation for a true intelligent “pass-through” pricing model for pharmacies, which eliminates “spread pricing” and other practices that lead to opaque pricing. Blockchain is then the technology that decentralizes control of healthcare and prescription data away from a few large healthcare institutions, insurers and payers, letting consumers access their healthcare and prescription drug data directly and securely on any device. For pharmacists, blockchain tracks every payment record, so all transactions are auditable in real time, making pharmacy benefits audits easier.
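
As a very rough sketch of the kind of factor-based dynamic pricing described above, the snippet below scores a price from a few of the factors mentioned (geography, therapeutic value, competition). The function name, weights and formula are invented purely for illustration and are not Prescryptive’s model.

```python
# Illustrative only: adjust a base drug price using the kinds of factors
# mentioned above. The weights and formula are invented for the example.

def dynamic_price(base_price, regional_cost_index, therapeutic_value, competitor_count):
    """Return an adjusted price for a given market context (all inputs hypothetical)."""
    competition_discount = min(0.30, 0.05 * competitor_count)  # more competitors -> lower price
    value_premium = 0.10 * therapeutic_value                   # therapeutic_value in [0, 1]
    return base_price * regional_cost_index * (1 + value_premium) * (1 - competition_discount)

print(dynamic_price(base_price=100.0, regional_cost_index=1.05,
                    therapeutic_value=0.8, competitor_count=4))
```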

Eliminate the pain of data migrations as tech M&As continue. Commentary by Paul Lechner, AVP of Product Management at Appfire

Merger and acquisition spending typically slows down during economic downturns, but so far this year, the tech industry is an exception to this rule. As the value of tech M&As continues to rise, companies need to be prepared to make the transition a smooth one. One of the biggest challenges during M&As is migrating multiple organizations’ data into a single, coherent system. During migration, organizations risk duplicating data such as usernames, schemes and naming conventions, as well as corrupting data. To create a more refined data migration strategy, the merging organizations must first map their individual data environments to understand how data will be affected by the migration and how it should be integrated into the newly combined system. Organizations should be aware of the potential issues related to data migration; for example, if a process breaks during a migration, it might create duplicate usernames or misattribute the actual owner of the data. Comprehensive migration solutions can help facilitate the process by identifying any issues related to the data migration and intervening to correct them.
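
To make the duplicate-username risk concrete, here is a minimal sketch of a pre-migration check that flags usernames present in both organizations’ systems. The user lists and the `find_username_collisions` helper are hypothetical; a real migration tool would check far more than this.

```python
# Minimal pre-migration check: flag usernames that exist in both
# organizations' user exports, since merging them blindly could create
# duplicates or assign data to the wrong owner.

def find_username_collisions(org_a_users, org_b_users):
    """Return the set of usernames (case-insensitive) that appear in both systems."""
    a = {u.lower() for u in org_a_users}
    b = {u.lower() for u in org_b_users}
    return a & b

org_a = ["jsmith", "mlee", "admin"]
org_b = ["JSmith", "pkumar", "admin"]

collisions = find_username_collisions(org_a, org_b)
print(f"{len(collisions)} colliding usernames to resolve before migration: {sorted(collisions)}")
```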

Will the data sovereignty movement be short-lived? Commentary by Bryan Kirschner, VP of Strategy at DataStax

Given how valuable data is, it’s no wonder that countries ranging from the U.S. to China are accelerating plans to achieve data sovereignty. Data sovereignty, otherwise known as digital sovereignty, is the concept that countries should have control over the digital information that their citizens and organizations produce. Yet this view of data is short-sighted; today, data sharing is an essential element of the digital economy. While regulators play a role in ensuring that data is shared securely and with privacy in mind, there is too much value in sharing data to take the “trade wars”-style approach of data sovereignty. Those who support the movement fear that borderless data will be misused, but the inherent value of data implies that efforts will be made to guarantee that shared data is used fairly. By analyzing metadata and enacting policies to prevent the misuse of data, we can avoid perverse outcomes and use data to our collective advantage as humans, not just within but also across borders. The real key to the open sharing of data is the recognition that data is non-rival – it can be used by many, and re-used. It can be shared in a variety of ways, whether it be anonymized, temporary, or protected. The focus should be on a positive-sum game and better outcomes for both businesses and society. In fact, the World Economic Forum found that the social value of digitalization is greater than the commercial value. As the scope and scale of data increases, so does its potential. The benefits of sharing data – whether it be construction companies sharing data to increase site safety or nations sharing data to help control disease – far outweigh the disadvantages. Just as the initial critics of the open source movement – including the president of Microsoft – quickly became its champions, those calling for data sovereignty will soon change course and embrace the open sharing of data and its benefits.

More data isn’t the problem – it’s less security. Commentary by Val Tsitlik, VP, Head of Big Data Practice and VP of Technology Solutions at EPAM Systems, Inc.

With demand for raw data to drive analytics higher than ever before, businesses must also protect the privacy of their customers and partners. Although data is foundational for companies to grow and optimize processes, failure to safeguard this vital information can result in hefty penalties from government regulators, not to mention the loss of customers, revenue and reputation. This tension – the need for more data while maintaining data security – is a key challenge to unlocking data’s value. Various tools such as tokenization, anonymization and synthetic data can make data more secure. However, creating a robust data governance strategy is often the best way for businesses to understand and defend their data. Today, there is growing investment in data governance, which will only increase as data privacy and security become ever more critical.
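
As one small illustration of the tokenization technique mentioned above, the sketch below swaps sensitive fields for opaque tokens while keeping the original values in a separate mapping. This is a toy in-memory example with an invented `Tokenizer` class; production systems use vaulted or cryptographic tokenization services, not a plain dictionary.

```python
# Toy tokenization sketch: replace sensitive values with opaque tokens and
# keep the mapping in a separate (ideally vaulted) store so analytics can run
# on the tokens without exposing the raw data.

import secrets

class Tokenizer:
    def __init__(self):
        self._vault = {}  # token -> original value (would live in a secured vault)

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

tokenizer = Tokenizer()
record = {"name": "Jane Doe", "ssn": "123-45-6789", "purchase_total": 42.50}
safe_record = {
    "name": tokenizer.tokenize(record["name"]),
    "ssn": tokenizer.tokenize(record["ssn"]),
    "purchase_total": record["purchase_total"],  # non-sensitive field left in the clear
}
print(safe_record)
```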

On hyperautomation. Commentary by Avantra CEO John Appleby 

In a bid to improve efficiency and productivity, more enterprises are turning to hyperautomation to boost their bottom line. What exactly is hyperautomation? Where do we stand on the hyperautomation curve at the moment? And what is in it for your business? More than a collection of technology buzzwords, hyperautomation is about turning manual or partially automated tasks into systematic, rules-based processes. It can help large enterprises streamline processes to save time and money, improve quality and accuracy, deliver the right customer experiences by providing rapid, error-free services, and help employees focus on higher-level work by automating routine tasks. And that’s what gives a competitive edge to any business. Automation is evolving from a set of solutions that improve efficiency and productivity into an agent of change that fundamentally transforms how enterprises function. Hyperautomation is but an early milestone on that larger transformation journey, one that every business aspires to complete first.

Proceed into the enterprise metaverse with caution. Commentary by Ramprakash Ramamoorthy, Director of Research at ManageEngine

The metaverse is the latest fad within Big Tech’s surveillance economy, and it brings a host of privacy and security problems as organizations adopt enterprise metaverse concepts. Organizations leveraging enterprise metaverses are expanding their cyber-attack surfaces significantly, because the metaverse ecosystem includes IoT devices and wearables from multiple vendors, as well as sensors throughout offices and homes, all actively processing a colossal amount of user behavior data in real time. Plus, companies with an enterprise metaverse are using AR/VR devices that collect a ton of personally identifiable information (PII), including financial and personal data. Even more problematic, to verify users, these businesses will want to collect biometrics, including fingerprints and facial recognition data. Not only does this create more data to protect from bad actors, it also gives the company access to more employee data than ever before. Further, enterprise metaverses will be particularly ripe for social engineering attacks. It’s one thing to receive a fake request from a work colleague via email, and quite another to look at a colleague’s avatar face and process that same request. Avatars will be falsified, stolen, and weaponized more broadly by bad actors to commit fraud. On top of this, the prevalence of cryptocurrency transactions in the metaverse will make it easier to hide ill-gotten gains.

On multi-cloud data management. Commentary by Betsy Doughty, Vice President of Corporate Marketing, Spectra Logic

The ability to ensure data availability at any location at any time is becoming increasingly important. Organizations continue to explore how best to integrate cloud into their IT strategies to enable low-latency data availability. This has opened the door to distributed multi-cloud data management solutions capable of providing universal access to and placement of data across multi-site and multi-cloud storage, leading to highly efficient hybrid and multi-cloud workflows. The ability to integrate on-premises data storage with multiple cloud services under a single global namespace delivers far more agility, flexibility and data accessibility for organizations. The question then isn’t cloud or no cloud, but which data should be stored in the cloud, on-premises, in a hybrid set-up or in multiple clouds, as ideally all files should appear in their native format. Since workflows are getting more complex, the seamless integration of applications, regardless of location, needs to be supported. Ultimately, deploying strategic multi-cloud data management software enables organizations to harness the full power of data-intensive applications using lifecycle rules that determine where objects are stored and for how long, accelerating business operations and reducing costs and complexity.
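
As a hypothetical illustration of the lifecycle rules mentioned above, the sketch below expresses a policy that moves objects between tiers as they age and eventually expires them. The policy format, tier names and `placement_for_age` helper are invented for the example and do not reflect any particular vendor’s configuration syntax.

```python
# Hypothetical lifecycle policy: where objects live at each age, and when they
# expire. The format is invented for illustration; real multi-cloud data
# management software has its own rule syntax.

lifecycle_policy = {
    "bucket": "project-media",
    "rules": [
        {"after_days": 0,    "store_in": ["on-prem-nvme"]},             # hot data stays local
        {"after_days": 30,   "store_in": ["cloud-a-standard"]},         # warm data moves to cloud
        {"after_days": 180,  "store_in": ["cloud-a-archive", "tape"]},  # cold copies in two tiers
        {"after_days": 2555, "expire": True},                           # delete after ~7 years
    ],
}

def placement_for_age(policy, age_days):
    """Return the most recent rule that applies to an object of a given age."""
    applicable = [r for r in policy["rules"] if age_days >= r["after_days"]]
    return applicable[-1] if applicable else None

print(placement_for_age(lifecycle_policy, age_days=45))
```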

Streamlining AI Boosts Productivity and Beyond. Commentary by Shirish Nimgaonkar, President of Nanoheal

With the rise in remote work and the continual push for digital transformation across all industries, artificial intelligence (AI) has become imperative for organizations and workplaces to stay efficient and competitive. Because of these rapid changes and the current talent shortages, IT teams find themselves struggling to keep their heads above water. By modernizing aging IT systems with AI, technology processes are streamlined, effectively recapturing value from operations workflows and positioning teams to add value with the recovered time and experience. When implemented effectively, predictive automation solutions boost employee efficiency and productivity by at least 15 times, and businesses can see cost savings of more than 19 times. The adoption of predictive AI platforms takes over many rote and undesirable tasks, leading to increased productivity, significantly higher ROI on such investments, an elevated user experience and sustained competitive advantage. When companies focus on end-user engagement and employee efficiency and productivity, they will see that without predictive automation, they are working harder, not smarter. AI-based strategies are a game-changer and will be the solution to slow IT resolution, disengaged and unproductive employees and negative customer experiences.

How Process Automation Helps Mitigate Data Silos. Commentary by Alessio Alionco, Founder and CEO, Pipefy

Data silos present a challenge that companies of all sizes struggle to overcome. Data silos disrupt productivity by introducing task repetition and duplicate data entry into workflows. They also create fragmented processes that inhibit collaboration and make visibility difficult. Process automation is a critical strategy companies can use to resolve and prevent data silos. Data silos are increasingly common as tech stacks become more complex and teams integrate more apps and systems into their workflows. Low-code process automation is one of the most effective tools companies can use to dissolve these silos. Not only do these tools eliminate duplicative work, they can also provide a crucial orchestration layer that fills process gaps and creates a cohesive whole out of many individual parts. Low-code process automation options provide the additional benefit of helping IT teams and business units work together to identify and resolve data silos.  

Insider Threat Awareness Month. Commentary by Patrick Harr, CEO, SlashNext

When we think of insider threats, it’s important to remember we’re not just talking about disgruntled employees. Malicious intent is not required to constitute an insider threat. Even well-meaning employees, contractors and partners equipped with tools and training on cybersecurity risks can be a danger. This is because humans are not infallible. We can be lured into providing personal information or credentials, or into installing malicious apps that can undermine even the most sophisticated cybersecurity defenses. Social engineering phishing scams continue to be a serious problem for organizations because they target the weakest part of the organization – your “insiders,” or humans. These attacks are moving to SMS, collaboration tools, and social channels. We have seen an increase in requests for SMS and messaging protection as business text compromise, like its cousin business email compromise, becomes a growing problem for organizations to detect and block. For the cybercriminal, it’s much more straightforward to launch an attack against a human because personal targeting, automation, and the availability of free legitimate domains have increased the speed and success of their attacks. Of course, employee education and training on cybersecurity risks should continue, but it’s important to keep the curriculum up to date with the current threat landscape – for example, the use of legitimate, trusted services to launch attacks. In August 2021, 12% of all malicious URLs identified by SlashNext Threat Labs came from legitimate cloud hosting infrastructure, and preliminary data for 2022 shows this trend is increasing rapidly. While sophisticated phishing coming from trusted services is very hard for humans to identify, training that enhances users’ analytical skills is critical for phishing that makes it through security defenses. A good training program combined with AI-powered behavioral learning technology is the combination needed to stop phishing – a common source of insider threats – from impacting your organization.

Inverse data uncovers hidden insights. Commentary by Maitreya Natu, Senior Scientist at Digitate

Enterprises analyze their historical data to fine-tune IT systems. But sometimes organizations lack enough historical data about an event to derive any reliable insights. To solve this problem, organizations are increasingly leveraging inverted data. In these situations, flipping the data gives a very different perspective. It is easier to predict when a rare event does not occur than when it does, because the rarer the event is, the larger its inverse will be. This technique is more efficient at finding complex patterns. For example, an event that occurs on the second Monday of each month for three months will have only three data points, but its inverse will have 87 data points (i.e., all the days that aren’t the second Monday of the month). So instead of modeling abnormal behavior with few data points, this approach allows enterprises to model normal behavior with many more data points. They can then use those insights to better understand system behavior and detect, diagnose, and predict anomalies.
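
A minimal sketch of the idea follows: instead of modeling the three rare-event days directly, model the 87 “normal” days and flag anything that deviates from them. The daily metric, the event days and the simple mean/standard-deviation model are all invented for illustration; real implementations are far richer.

```python
# Sketch of the "inverse data" idea: model the many normal days rather than
# the few rare-event days, then flag days that deviate from normal behavior.
# The metric values are invented for illustration.

import random
import statistics

random.seed(7)
days = list(range(90))                     # roughly three months of daily data
event_days = {7, 35, 63}                   # e.g. the "second Monday" events: 3 points
normal_days = [d for d in days if d not in event_days]  # the inverse: 87 points

# Invented daily metric: normal days cluster around 100, event days spike.
metric = {d: random.gauss(100, 5) if d not in event_days else random.gauss(160, 5)
          for d in days}

# Model "normal" behavior from the 87 inverse data points.
normal_values = [metric[d] for d in normal_days]
mu, sigma = statistics.mean(normal_values), statistics.stdev(normal_values)

# Flag anything more than 3 standard deviations from normal as anomalous.
anomalies = [d for d in days if abs(metric[d] - mu) > 3 * sigma]
print(f"normal model: mean={mu:.1f}, stdev={sigma:.1f}; flagged days: {anomalies}")
```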

Controlling Data in the Metaverse. Commentary by Sunil Senan, SVP and Business Head of Data and Analytics at Infosys

The metaverse’s functionality will depend on the ability to process a great deal of information very quickly, which will be facilitated by AI and edge processing. However, these experiences need to be built on a core foundation of trust, ethics, data privacy and governance. The vast amount of personal data involved in metaverse experiences, and how that data is used, requires due consideration from privacy, security, control and compliance standpoints. While the metaverse is still in its early stages, applications within the metaverse will build on existing digital-twin technology and use data from real-world sensors to create immersive experiences for users. However, there is a growing backlash around the capture and use of data about individuals, especially since this data will include personal information such as body measurements, brainwaves, biometric data, health information, and even financial details. Organizations that want to build metaverse applications will need to demonstrate they are responsible, secure, and respectful with people’s most personal data, which is already a challenge. In addition, organizations will need to deploy new consent-based solutions and innovations that help keep control of personal and sensitive data in the hands of end consumers. Beyond the obvious cybersecurity infrastructure, organizations will need to adopt strict data and security policies that allow users to control what data they share, for how long and for what purpose. Several governments across the globe are contemplating or implementing new regulations to this end.

