Heard on the Street – 1/4/2023


Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

Is AI becoming boring? Yes — and this is really good for enterprise AI. Commentary by Dr. Lewis Z. Liu, CEO and Co-Founder, Eigen Technologies

Not long ago, AI technology was reserved for businesses with extravagant resources and innovative visions, but a major change is underway. Over 50% of businesses now report using AI in one or more of their functions, and that percentage keeps rising. Integrating AI into critical business functions has become a staple, which raises the question: is AI losing its cutting-edge allure and becoming “boring”? The answer: Yes — but this doesn’t have to be a bad thing. As the hype fades, it’s time to recognize that AI has become as integral to society and business as smartphones or the internet. As the innovations of the last decade have made their way into everyday technology, AI platforms have become routine necessities for most businesses, with low/no-code platforms paving the way for smaller businesses to reliably use AI. This is an especially promising trend in light of tightening tech budgets and uncertain economic conditions, as businesses lean toward reliable programs with tangible benefits and a strong ROI. By taking a small data approach, businesses can supercharge automation and practically implement AI in areas such as document processing. This helps companies extract key data from business-critical documents — for reviewing mortgages, automating insurance claims processing and securing critical systems — and enables organizations to unlock the valuable insights that were historically trapped within their data.
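
The “small data” document-extraction idea above can be made concrete with a minimal sketch. The snippet below is illustrative only: it uses simple patterns to pull a few key fields out of a mortgage-style document, and the field names, patterns, and sample text are hypothetical stand-ins for what a trained extraction model would produce.

```python
import re

# Illustrative only: a lightweight, rule-based extractor for key fields in a
# business document (e.g., a mortgage agreement). The field names and patterns
# are hypothetical; a small-data AI platform would learn these from a handful
# of labeled examples instead of hand-written rules.
FIELD_PATTERNS = {
    "borrower":      re.compile(r"Borrower:\s*(.+)"),
    "loan_amount":   re.compile(r"Loan Amount:\s*\$?([\d,]+(?:\.\d{2})?)"),
    "interest_rate": re.compile(r"Interest Rate:\s*([\d.]+)\s*%"),
}

def extract_fields(document_text: str) -> dict:
    """Return whichever key fields can be found in the document text."""
    extracted = {}
    for field, pattern in FIELD_PATTERNS.items():
        match = pattern.search(document_text)
        if match:
            extracted[field] = match.group(1).strip()
    return extracted

sample = """Borrower: Jane Doe
Loan Amount: $250,000.00
Interest Rate: 5.25 %"""

print(extract_fields(sample))
# {'borrower': 'Jane Doe', 'loan_amount': '250,000.00', 'interest_rate': '5.25'}
```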

Training and designing machine learning algorithms to recognize patterns within patient data to provide a better analysis of patients’ overall health. Commentary by Austin Jordan, Head of Data Science, AI and ML at Apixio

The shift toward electronic health records has unlocked oceans of patient data – yet this is just the tip of the healthcare data iceberg. Unstructured data, or free-form text entries, accounts for an estimated 80% of all healthcare data, yet it has remained largely untapped. These massive amounts of patient data, which cannot be stored in relational databases, contain critical information for matching patient symptoms with potential health conditions and assessing risk levels. As healthcare organizations transition from a fee-for-service model to value-based care, they need to find ways to improve outcomes and lower costs. Unstructured data analytics provides that opportunity. By implementing AI technologies, healthcare providers can sift through unstructured data such as medical charts and social worker and behavioral health notes to gain a more complete picture of their patients’ health. With this holistic view of the patient, physicians can make more accurate diagnoses, improving overall patient care.
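
As a rough illustration of the kind of pattern recognition described above, the sketch below trains a tiny text classifier on free-form notes with scikit-learn. The notes, labels, and risk flag are invented for illustration; a real system would be trained on large volumes of de-identified clinical text and validated by clinicians.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy example: flag unstructured clinical notes that may indicate an
# elevated-risk condition. The notes and labels are invented for illustration.
notes = [
    "Patient reports shortness of breath and chest tightness on exertion.",
    "Routine follow-up, vitals stable, no new complaints.",
    "Behavioral health note: increased anxiety, poor sleep, weight loss.",
    "Annual physical, labs within normal limits.",
]
labels = [1, 0, 1, 0]  # 1 = flag for risk review, 0 = routine

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(notes, labels)

new_note = "Patient complains of chest pain radiating to left arm."
print(model.predict([new_note]))  # e.g., [1] -> route to clinician review
```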

The convergence of AI and IoT on the march toward building industry 5.0. Commentary by Jens Beck, Partner Data Management and Innovation at Syntax

AI and collaborative robotics will see continued growth in 2023 and beyond in order to meet evolving customer needs and fluctuating market demands. And as the manufacturing, logistics and retail industries become increasingly “intelligent,” we see the convergence of AI and IoT – the artificial intelligence of things (AIoT) – rising as a dominant emerging technology. I also expect predictive scenarios and computer vision to become more widely used and required as markets understand the need for the mature solutions their competitors are already using. For example, visual inspection will be leveraged not only on the shop floor but on the front end as well, assisting with logistics operations or analysis of inventory movement. As more factories and supply chains become intelligent, I see financial and administrative processes also stepping forward. Even retail and front-line customer service workers will come to understand AI and its importance in shaping the consumer experience.
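
As a hedged illustration of the visual-inspection use case, the sketch below scores a camera frame with a pretrained image backbone standing in for a fine-tuned defect classifier. The two-class head, threshold, and random frame are placeholders; a real deployment would fine-tune on labeled inspection images from the production line.

```python
import torch
from torchvision import models

# Sketch of a visual-inspection step on the shop floor: a pretrained backbone
# (an ImageNet ResNet-18 standing in for a fine-tuned defect classifier)
# scores incoming camera frames. The classes and threshold are hypothetical.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # 2 classes: ok / defect
model.eval()

frame = torch.rand(1, 3, 224, 224)  # placeholder for a normalized camera frame
with torch.no_grad():
    scores = torch.softmax(model(frame), dim=1)

defect_probability = scores[0, 1].item()
if defect_probability > 0.8:  # hypothetical threshold
    print(f"Flag item for manual inspection (p_defect={defect_probability:.2f})")
else:
    print(f"Pass (p_defect={defect_probability:.2f})")
```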

Low Code Automation Is Key to Resolving the IT Worker Shortage. Commentary by Alessio Alionco, CEO and Founder, Pipefy

As organizations continue investing in digital transformation initiatives in 2023, they need to prioritize technology that will ease the IT backlog. Despite projections that IT spending will exceed $4.6 trillion in 2023, there are not enough skilled IT workers to meet growing demand. To resolve this, organizations should consider easy-to-use tools like low-code automation. Low-code software has an intuitive interface that is much easier to navigate than traditional coding environments. By reducing the amount of code needed to complete a project, non-technical users (like business teams) can solve IT issues on their own – giving developers valuable time back. A common misconception is that low-code automation will replace IT teams. The reality is that companies face increasing pressure to deliver efficiency, agility, and innovation in an environment characterized by talent shortages. For IT teams, finding ways to streamline and simplify their workload is a top priority.

Start Thinking Ahead to Cybersecurity Concerns in the Metaverse. Commentary by Patrick Harr, CEO, SlashNext

The metaverse, digital twins, and similar advanced technologies will present new security challenges for organizations and individual users, and artificial intelligence solutions will be needed to validate the legitimacy of identities and controls. When we think of the metaverse today, we often envision immersive gaming environments such as Fortnite. However, the metaverse will eventually reach beyond gaming into nearly all aspects of business and society. This new type of digital interface will present unforeseen security risks as avatars impersonate other people and trick users into giving away personal data. We are already seeing significant attack patterns that compromise users who click on a bad file or a malicious link. It could be a credential-harvesting ploy conducted through a spoofed URL, or a social engineering attack launched through a natural language message that triggers malware or ransomware. Then there are “deep fakes,” doctored videos of synthetic media that cause viewers to question whether someone or something they see is real. We also see this trend with digital twins, which allow users to conduct physical facility maintenance remotely through a digital environment. We can expect more of these holographic-type phishing attacks and fraud scams as the metaverse develops. In turn, defenders will have to fight AI with stronger AI, because we can no longer rely solely on the naked eye or human intuition to solve these complex security problems.
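
One narrow slice of the “fight AI with AI” point can be sketched in a few lines: flagging URLs whose domains closely resemble, but do not match, trusted domains (a common credential-harvesting pattern). The trusted list, threshold, and examples below are hypothetical; production anti-phishing systems combine many such signals with trained models.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Illustrative only: flag lookalike domains that imitate trusted brands.
# The trusted list and similarity threshold are hypothetical.
TRUSTED_DOMAINS = ["paypal.com", "microsoft.com", "fortnite.com"]

def looks_spoofed(url: str, threshold: float = 0.8) -> bool:
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    for trusted in TRUSTED_DOMAINS:
        similarity = SequenceMatcher(None, domain, trusted).ratio()
        if domain != trusted and similarity >= threshold:
            return True  # close to a trusted domain but not an exact match
    return False

print(looks_spoofed("https://paypa1.com/login"))     # True  (lookalike domain)
print(looks_spoofed("https://paypal.com/settings"))  # False (exact match)
```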

Contextual Data Loss Prevention (DLP) will be the number one technology to thwart data breaches. Commentary by Sundaram Lakshmanan, CTO, Lookout

With its powerful applications and strong features, there’s no doubt that DLP will remain the best solution for preventing data loss, and there are a couple of main areas where I see it evolving over the next five to ten years. One is how the technology understands data content. It’s one thing to identify sensitive data or personally identifiable information (PII); it’s a completely different thing to identify whether a document, file, or object contains sensitive information. Modern DLP solutions give organizations the tools to understand the contents of a file without manually reading a 100-megabyte document. Within moments, a DLP can tell you whether a document should be classified for HIPAA or PCI. Once you understand the data, you can then put in controls for protecting it. DLP provides a single point of coverage for every lane of traffic your data travels, including email, internet, and sharing traffic. User and entity behavior analytics (UEBA) integrated with DLP can provide context that enables you to predict and detect data infiltration and exfiltration. With ransomware attacks especially, attackers may live undetected in your network for up to six months before they start moving data to another site. With DLP in place, as soon as data starts to move to another web server or site, the traffic is inspected. In the same way, DLP can catch when important files are exposed in an Amazon S3 bucket or GDrive. In these ways, DLP can change the tide against data breaches by aiding breach detection and prevention.
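
A minimal sketch of the content-inspection piece described above: scan text for patterns that suggest regulated data and tag the document so traffic controls can act on it. The patterns and tags below are simplified illustrations, not a compliance-grade HIPAA or PCI classifier.

```python
import re

# Simplified content inspection: match patterns that suggest regulated data
# and return sensitivity tags. Patterns and tags are illustrative only.
PATTERNS = {
    "PCI":   re.compile(r"\b(?:\d[ -]?){13,16}\b"),           # card-number-like digits
    "HIPAA": re.compile(r"\b(MRN|medical record number)\b", re.IGNORECASE),
    "PII":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # SSN-like pattern
}

def classify_document(text: str) -> set:
    """Return the set of sensitivity tags whose patterns appear in the text."""
    return {tag for tag, pattern in PATTERNS.items() if pattern.search(text)}

doc = "Patient MRN 48213, card on file 4111 1111 1111 1111, SSN 123-45-6789."
print(classify_document(doc))  # e.g., {'HIPAA', 'PCI', 'PII'}
```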

How to move from a data swamp to clean, integrated data. Commentary by Andy Palmer, Co-Founder & CEO of Tamr

For years, data lakes held the promise of taming data chaos. Many organizations dumped their ever-growing body of data into a data lake with the hope that having it all in one place would bring order to it. But data lakes are overhyped and often lack proper governance, and without clean, curated data they simply do not work. That’s why many organizations that implemented data lakes are realizing that what they actually have is a data swamp. Having clean, curated data is valuable. That’s a fact. Dirty data swamps are not. Organizations must prioritize accurate and integrated data, develop a strategy for eliminating data silos, and make cleaning data everyone’s responsibility.
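
One concrete first step out of the swamp is deduplicating records that refer to the same entity. The sketch below uses simple fuzzy matching from the standard library; the records and threshold are illustrative, and production entity resolution relies on trained models and human feedback at scale.

```python
from difflib import SequenceMatcher

# Illustrative data-cleaning step: group customer records that likely refer
# to the same entity. Records and the 0.6 threshold are made up for the sketch.
records = [
    {"id": 1, "name": "Acme Corporation", "city": "New York"},
    {"id": 2, "name": "ACME Corp.",       "city": "New York"},
    {"id": 3, "name": "Globex Ltd",       "city": "Chicago"},
]

def similar(a: dict, b: dict, threshold: float = 0.6) -> bool:
    name_score = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    return name_score >= threshold and a["city"] == b["city"]

duplicates = [
    (r1["id"], r2["id"])
    for i, r1 in enumerate(records)
    for r2 in records[i + 1:]
    if similar(r1, r2)
]
print(duplicates)  # likely [(1, 2)]
```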

The Rise of Data Lakehouse Technology. Commentary by Tomer Shiran, CPO and Co-Founder, Dremio

Enterprise adoption of lakehouse technology is skyrocketing, and we’re seeing lots of movement as companies embrace open source file and table formats. Will data warehouses go away in a year? We can’t say yes for sure, but trends are pointing in that direction.
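
The open-format point can be illustrated with a minimal sketch: data written once in an open file format such as Parquet can be read back by many engines (Spark, Dremio, DuckDB, and others) without warehouse lock-in. The schema and file path below are assumptions for illustration.

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Write data once in an open file format (Parquet); any engine that speaks
# the format can read it back. Schema and path are illustrative.
table = pa.table({
    "order_id": [1001, 1002, 1003],
    "amount":   [29.99, 14.50, 99.00],
})
pq.write_table(table, "orders.parquet")  # would land in object storage in practice

reloaded = pq.read_table("orders.parquet")
print(reloaded)
```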

AWS Steps Toward Zero-ETL Future Don’t Go Far Enough. Commentary by Dan DeMers, CEO of Cinchy

No one could argue with the need to reduce data complexity, especially in a post-GDPR universe. And while AWS is getting some buzz for its announcements, the truth is that its vision of a zero-ETL future doesn’t go nearly far enough. Here’s the reality: Even the biggest multinational conglomerate with unlimited resources does not have the ability to control all sensitive and personal data in its possession. That’s because of the way today’s apps and systems fragment information into databases, data warehouses, and even spreadsheets. We won’t have efficient and effective data control and privacy until we can eliminate new data silos and data integration altogether. Without those fundamental changes, even the most well-meaning initiative couched in the most noble pledge is just kicking the can down the road. Instead of endless company pledges and government sanctions, we need technology advances like dataware, which supports a more controlled approach to application development, and emerging standards like the Zero-Copy Integration framework, which offers the best path forward for developers to deliver rapid digital innovation and meaningful data protection.

Harnessing the power of GPUs for stronger, faster enterprises. Commentary by Razi Shoshani, Co-Founder and CTO of SQream

Graphics Processing Units (GPUs), traditionally used for video and image rendering, are now being put to the test in a broader range of applications. For serious IT managers and data scientists, GPUs have long been an exotic hardware choice for big data infrastructure. But times are changing, and GPUs are gaining popularity for big data analytics: they can handle many parallel processing tasks simultaneously, allowing large amounts of data to be processed much faster, and they deliver significantly more analytics acceleration on a fraction of the hardware required for CPU-only solutions. The GPU is therefore well suited for operations that perform the same instruction on large amounts of data at the same time. A great example is data preparation for machine learning operations, which lets data scientists train their models on larger datasets and produce more accurate results in far less time. The ML models themselves use the GPU for the same reason: massive parallel processing. When preparation and training take place on the same machine, the time and money saved allow businesses to utilize more of their data and, in turn, gain new insights at exceptional speed.
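
A minimal sketch of the parallelism argument: the same instruction applied to millions of values at once, using CuPy’s NumPy-like API when a GPU is available and falling back to NumPy otherwise. The array size and the standardization step are arbitrary examples of ML data preparation.

```python
import numpy as np

try:
    import cupy as cp  # GPU array library with a NumPy-like API
    xp = cp            # run on the GPU if CuPy and a CUDA device are present
except ImportError:
    xp = np            # fall back to the CPU so the sketch still runs

# The same instruction applied to millions of values in parallel: a typical
# ML data-preparation transform (feature standardization) on an arbitrary array.
n = 10_000_000
values = xp.random.rand(n)

standardized = (values - values.mean()) / values.std()

print(type(standardized), float(standardized.mean()))
```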

