Heard on the Street – 12/16/2021


Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this new regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

How recruiters can implement AI to increase their talent pool and find the best candidates for jobs. Commentary by Gal Almog, CEO of Talenya

The recruitment market presents one of the largest opportunities for AI. So much of the recruitment process consists of time-consuming, manual tasks, and AI has the potential to automate them, eliminating hours of unnecessary labor in the process.

Big data can be leveraged to create a larger talent pool – The vast amount of talent data is a goldmine for AI. AI tools can not only consolidate talent data from multiple sources, but also keep it comprehensive and up to date. According to LinkedIn, 70% of the workforce is made up of passive job seekers, and 87% of them are open to new opportunities. AI can help recruiters find the best passive job seekers at the best time – moments before they start their job search – by taking job descriptions from a company’s applicant tracking system (ATS), identifying whether a person is likely to change jobs, and creating an automated search. Once potential candidates are identified, they are automatically contacted to schedule an interview with the company.

AI instead of keyword search – AI can help eliminate tedious and discriminatory keyword searches. Algorithms read job descriptions and create searches automatically, leveling the playing field for candidates who failed to include all their skills on their resumes (common among women and minorities) by predicting and adding the missing skills with a high level of accuracy.

Machine learning to refine searches – A talent search cannot be created in a vacuum; it needs to reflect recruiters’ preferences and priorities. Machine learning algorithms can “learn” from a recruiter’s candidate selections and automatically refine future searches to reflect them (a minimal sketch follows below).

AI to optimize candidate engagement – 50% of recruiters’ work is spent messaging candidates. AI can automate this process end to end, automatically sourcing candidates and engaging with them to grow the interview pipeline for employers.
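
To make the “machine learning to refine searches” idea concrete, here is a minimal sketch of a model learning from a recruiter’s past candidate selections and ranking new candidates accordingly. The features, synthetic data, and model choice are all illustrative assumptions; this is not Talenya’s actual system.

```python
# Minimal sketch (NOT Talenya's system): learn a recruiter's preferences
# from past accept/pass decisions, then rank new candidates by predicted fit.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic candidate features: [years_experience, skill_match, recent_activity]
past_candidates = rng.random((200, 3))
# 1 = recruiter advanced the candidate, 0 = recruiter passed (synthetic labels)
recruiter_choices = (
    past_candidates @ np.array([0.5, 2.0, 1.0]) + rng.normal(0, 0.3, 200) > 1.7
).astype(int)

# Fit a simple preference model on the recruiter's historical selections
model = LogisticRegression().fit(past_candidates, recruiter_choices)

# Rank a new batch of sourced candidates by predicted fit
new_candidates = rng.random((10, 3))
scores = model.predict_proba(new_candidates)[:, 1]
ranking = np.argsort(scores)[::-1]
print("Candidates ranked by learned recruiter preference:", ranking)
```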

Could AI Save Christmas? Commentary by Polly Mitchell-Guthrie, VP, Industry Outreach and Thought Leadership, Kinaxis

With the supply chain disruptions of the past two years showing no signs of waning, companies are warning, and consumers worrying, that delays may get the best of their holiday season. This year a miracle on retail street is likely restricted to the movies, but one unexpected avenue of help could come from artificial intelligence. As one of the major drivers of digital transformation, AI is being leveraged by companies around the world across their supply chains to navigate ongoing disruptions, including those predicted to impact Christmas. Today’s supply chain challenges demand rapid adjustments, making short-term signals critical to improving forecasts as retailers and their suppliers prepare for a busy season and determine how best to weigh short-term disruptions. Demand sensing technologies are bolstered by AI, which increases short-term forecast accuracy by augmenting sales history with external signals. These signals might include data from downstream suppliers, market conditions, product searches, social media engagement or commodity prices. Moreover, AI can provide early alerts to disruptions, which helps reduce supply chain instability by decreasing the latency of response. Supply disruptions themselves can become a signal, which means forecasts can be adjusted based on AI-driven insights, constraining a demand forecast or limiting production according to real-time data. AI alone is not enough; it must be combined with agility and transparency to help supply chains deliver good cheer this holiday season.
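
As a rough illustration of demand sensing, the sketch below augments lagged sales history with two hypothetical external signals (search interest and social mentions) to fit a short-term forecast. The data is synthetic and the model choice is an assumption; production systems use far richer signal sets and models.

```python
# Toy demand-sensing sketch: sales history + external signals -> short-term forecast
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n_weeks = 120

# Synthetic weekly demand, partly driven by external signals
search_interest = rng.random(n_weeks)
social_mentions = rng.random(n_weeks)
demand = 100 + 30 * search_interest + 15 * social_mentions + rng.normal(0, 5, n_weeks)

# Features: last week's demand (sales history) plus current external signals
X = np.column_stack([
    np.roll(demand, 1),   # lagged demand
    search_interest,      # external signal 1
    social_mentions,      # external signal 2
])[1:]
y = demand[1:]

# Train on all but the last 10 weeks, then forecast those weeks
model = GradientBoostingRegressor().fit(X[:-10], y[:-10])
print("Forecast:", np.round(model.predict(X[-10:]), 1))
print("Actual:  ", np.round(y[-10:], 1))
```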

Harnessing the power of AI during a talent shortage. Commentary by Manu Thapar, CTO of Brighterion, a Mastercard company

The most persistent challenges in implementing a new AI system are time and talent. Many businesses lack the data science expertise to build a model that addresses their specific needs, populate it with data, and get it up and running quickly. Further, today’s talent shortage creates more competition in hiring, making it harder to build out an AI solution in-house. Fortunately, an AI model can be built relatively quickly and without in-house talent. By working with an experienced yet impartial third party, organizations can reduce the risks of adoption and see immediate ROI upon deployment. Once they have clearly defined objectives, expectations and data sets, businesses can tap third-party developers that bring extensive expertise in developing an AI implementation strategy, without waiting months or years to build and deploy.

Edge Computing. Commentary by Nicolas Barcet, Sr. Director of Technology Strategy at Red Hat 

In 2022 and beyond, organizations will use edge computing to address major global issues like sustainability, climate change and fragile supply chains, as a natural extension of the operations-transformation use cases they are already pursuing. This push is driven by an effort to ramp up revenue streams (both new and sustained) and enhance reliability as technology and business opportunities and challenges continue to move at record-breaking speed. It will be an ecosystem effort, pushing all industries to work together. One major player will be the telecommunications sector, which will use 5G and edge computing to help address the chronic under-investment in these key areas. These technologies have now matured, becoming cheaper, more secure and more global in reach, and they open a real window of opportunity to make a dent in what have been considered insurmountable problems. The use cases are almost limitless.

A Resurgence of Apache Cassandra. Commentary by Patrick McFadin, VP of Developer Relations for DataStax

There has been a perfect storm of positives for the planet’s most scalable database, Apache Cassandra. First is the release of the much-anticipated Cassandra 4.0. Cassandra already had a reputation for being highly resilient, and this release took an almost obsessive approach to quality and correctness, resulting in one of the most stable databases you can put into production today. The pandemic pushed companies to deliver faster digital transformation, and Cassandra, always known as the best global database for being where customers are (around the world), shined as developers built applications to help their companies make the shift to digital. That shift is paying dividends: companies that lead with a digital-first approach are driving incremental revenue with improved customer experiences that will last a lifetime, all built on the advantage of a highly scalable, resilient data store that enables faster innovation. Lastly, Cassandra is a database whose time is now. With the massive growth in cloud-native applications, choosing a database that was designed and built for the cloud is a no-brainer. Cassandra was born in the early days of cloud computing, running on commodity hardware and fully distributed with no single point of failure. What used to be a hyperscaler problem is now an everybody problem. Whether deployed in Kubernetes or consumed through the DataStax Astra DB serverless cloud database-as-a-service, Cassandra is proving to be the database that CIOs, developers and data architects can deploy and not have to worry about.
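
For a feel of what developing against Cassandra looks like, here is a minimal sketch using the open-source Python driver (cassandra-driver). The contact points, keyspace, and schema are placeholders; the replication factor in the keyspace definition is what spreads data across nodes so there is no single point of failure.

```python
# Minimal Cassandra sketch (pip install cassandra-driver); hosts/schema are placeholders.
import time
from decimal import Decimal
from uuid import uuid4

from cassandra.cluster import Cluster
from cassandra.util import uuid_from_time

# Contact points are placeholders; a real cluster would list several nodes
cluster = Cluster(["10.0.0.1", "10.0.0.2"])
session = cluster.connect()

# Replication across nodes is what removes any single point of failure
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS shop
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS shop.orders (
        customer_id uuid, order_id timeuuid, total decimal,
        PRIMARY KEY (customer_id, order_id)
    )
""")

# Reads and writes can go to any node; the driver handles routing and failover
session.execute(
    "INSERT INTO shop.orders (customer_id, order_id, total) VALUES (%s, %s, %s)",
    (uuid4(), uuid_from_time(time.time()), Decimal("42.50")),
)
for row in session.execute("SELECT * FROM shop.orders LIMIT 10"):
    print(row)
cluster.shutdown()
```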

Is Data Mesh Architecture the “next big thing”? Commentary by Aviv Noy, Rivery Co-founder & CTO

The concept of a data mesh isn’t necessarily new. Large data-oriented enterprises have long had to figure out how to decentralize and manage access to data across their organizations. However, thanks to the cloud, thousands of smaller companies and startups can now access and benefit from enterprise-grade data tools, systems and platforms – and they have quickly realized that a central BI or data team can become a bottleneck if analysts and engineers across the business can’t access the data they need when they need it. The upside of a data mesh approach is empowering people across the business with access to the data they need. To achieve this, it is important to take data governance into consideration, since people should only be able to access what is relevant to their role or department. In addition, data cataloguing and data lineage will be crucial for end users to navigate the system – helping teams avoid duplication, trace errors, and adhere to a single source of truth for shared metrics and KPIs. Ultimately, data mesh isn’t a product; data is the product. Without a DataOps platform that connects all the dots and manages the entire operation, the idea of a data mesh can’t be executed. Just as DevOps revolutionized the way teams manage continuous delivery and the build lifecycle, DataOps will be at the core of embracing a data mesh or data fabric approach across an organization.
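
As a hypothetical illustration of the metadata a data mesh catalog entry might carry so that teams can discover, govern, and trace a data product, consider the sketch below. The field names and values are invented for illustration and are not drawn from any particular DataOps platform.

```python
# Hypothetical data product catalog entry; fields are illustrative, not a standard.
from dataclasses import dataclass

@dataclass
class DataProduct:
    name: str
    domain: str           # owning business domain (a core data mesh principle)
    owner: str            # accountable team, not a central BI bottleneck
    allowed_roles: list   # governance: who may access this product
    lineage: list         # upstream sources, for tracing errors
    metrics: dict         # shared KPI definitions: the single source of truth

orders_product = DataProduct(
    name="orders_daily",
    domain="sales",
    owner="sales-data-team@example.com",
    allowed_roles=["analyst.sales", "finance.reporting"],
    lineage=["raw.orders", "raw.refunds"],
    metrics={"net_revenue": "SUM(total) - SUM(refunds)"},
)
print(orders_product)
```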

Topline thoughts on Data Strategy. Commentary by Carolyn Duby, Cloudera’s Field CTO and Cybersecurity Lead

Organizations need to have a plan for their data. And it doesn’t start with what they’re going to collect, how they’re going to clean it, and what type of database they’ll put it in. They must start with what they want to do with the data and work backwards from there. This is a different way of thinking about it. Many organizations start by bringing in this table and that table and putting the data together, but they’re often not really thinking about what they need to achieve and how they’re going to make use of the data over the longer term. Many businesses hold on to old data, including customer information they didn’t really need and shouldn’t have been keeping. So the data strategy starts with figuring out what you want to do, and then what data you need to do that, and then figuring out how to collect it, prioritize it, clean it and turn it into the data you need for everyday use. This is not only more efficient, but also much safer.

It’s surprising how top-down the data strategy is at a lot of our customers. CTOs typically stay only a few years and then move on to a new organization, and when this happens, everyone can feel paralyzed. So it’s important for an organization to think about data in a more strategic way and put in place a strategy that will survive changes at the top. Being a data-driven organization is a different way of thinking. It’s a journey that customers must go on, and they have to start with the basics. How do they make decisions? How do they use data? Do they look at dashboards? What kinds of dashboards? How do they interpret them? Those are all big questions that organizations need to answer to start the data-driven journey. They must figure out where they are now, what their plan is for the immediate and long-term future, and then put themselves on a course to get there. This is a complicated journey, sometimes involving multiple lines of business gathering data from multiple sources and putting it in a common location, which raises security and governance issues. These are the kinds of things we help our customers understand, because they can’t just snap their fingers and finish the data projects they want to finish. The platform is great, but it’s not magic. We help them solve large-scale problems that have a lot of moving pieces. We help them understand which use cases are going to be most helpful, and we have a value management team to help them understand the return on investment of each particular use case. And then we help them get it done.

Employers will sell themselves on their tech stack, not their offices. Commentary by Rafael Sweary, President and Co-Founder of WalkMe

It’s hard to turn on the news without hearing about the Great Resignation. It’s affecting all of us in some way, and the reality is, it’s most likely happening within your company right now. Combined with the continued impact COVID-19 has had on the workforce, this means organizations need to adopt new onboarding strategies to make the employee experience seamless, productive, and successful. That’s where AI can make a big impact. AI can help understand how humans interact with software and proactively recommend ways to improve the user experience, with actions that can be taken immediately. It’s a win for businesses, which can glean valuable data on technology usage and where end users are having issues. And it’s a win for employees, who can quickly navigate the company’s tech stack instead of struggling to learn it, especially remotely. The organization can deliver a better user experience aligned to business processes. Everything is done automatically, powered by AI and machine learning to extract data. Better user experiences equal better digital adoption and greater value derived from digital transformation.
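
One way to picture how interaction data can surface friction points is a simple onboarding-funnel analysis, as in the sketch below. The events and step names are made up, and real digital adoption platforms do far more than this.

```python
# Toy funnel analysis over synthetic interaction events: find where users drop off.
from collections import Counter

# Each tuple: (user, onboarding step the user completed)
events = [
    ("u1", "create_account"), ("u2", "create_account"), ("u3", "create_account"),
    ("u1", "setup_profile"), ("u2", "setup_profile"),
    ("u1", "connect_tools"),  # only one user made it past setup_profile
]

steps = ["create_account", "setup_profile", "connect_tools"]
reached = Counter(step for _, step in events)

# The biggest drop-off is where in-app guidance should be added
for prev, nxt in zip(steps, steps[1:]):
    drop = reached[prev] - reached[nxt]
    print(f"{prev} -> {nxt}: {drop} user(s) dropped off")
```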

On the recent AWS outage. Commentary by Barr Moses, CEO & Co-founder of Monte Carlo

In 2021, the question isn’t ‘will my application break?’ It’s ‘when will my application break?’ The AWS outage serves as a reminder of the importance of application reliability and the need to ensure reliability across all parts of these systems. Companies like Amazon need to focus not just on the performance of their software and servers, but also on the reliability of the data powering these technologies. In fact, ‘data downtime’ can be even more detrimental to businesses than software outages, costing companies millions of dollars per year in lost revenue when data is missing, wrong, or inaccurate. When data is “down,” digital applications lag, services break, and users lose trust in the product.
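
A ‘data downtime’ monitor can start as simply as freshness and volume checks on critical tables. The sketch below is a self-contained toy using sqlite3; the table, thresholds, and alert are illustrative assumptions, not Monte Carlo’s product.

```python
# Toy data-downtime check: alert when a table goes stale or its volume drops.
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, updated_at TEXT)")
conn.execute(
    "INSERT INTO orders VALUES (1, ?)",
    ((datetime.utcnow() - timedelta(hours=30)).isoformat(),),
)

# Freshness check: has anything landed in the last 24 hours?
(last_update,) = conn.execute("SELECT MAX(updated_at) FROM orders").fetchone()
stale = datetime.fromisoformat(last_update) < datetime.utcnow() - timedelta(hours=24)

# Volume check: did the row count fall below an expected floor?
(row_count,) = conn.execute("SELECT COUNT(*) FROM orders").fetchone()
too_small = row_count < 100  # illustrative threshold

if stale or too_small:
    print(f"ALERT: possible data downtime (stale={stale}, rows={row_count})")
```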

A look ahead to 2022. Commentary by John DesJardins, Hazelcast CTO

In 2022, we’ll see streaming platforms evolve to better address global, edge-to-cloud, multi-cloud and hybrid-cloud architectures. We’ll also see accelerated adoption of real-time streaming platforms, driven by SQL and improved integration across data platforms (databases, data warehouses, data lakes) and messaging platforms. Edge deployments with IoT will drive digital innovation stemming from the post-COVID new normal in areas such as customer engagement and logistics automation. Just-in-time product replenishment will be driven by granular, localized, real-time visibility of supply and demand. For example, a coffee shop chain will know how a specific store’s customers consume goods versus another store a few blocks away, and have the right products at the right time. I also believe that 2022 will be the year of advancements in developer-centric platforms. The key to a good internal developer platform is finding the balance between self-service for developers and abstracting away the least valuable tasks, without making developers feel restricted.
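
To illustrate the kind of rolling, per-store computation a streaming platform would run continuously, here is a toy sliding-window aggregation in plain Python. The store IDs, window size, and events are made up; a real deployment would express this as streaming SQL over live event streams.

```python
# Toy sliding-window aggregation: per-store sales totals over the last hour,
# the sort of signal that could feed just-in-time replenishment.
from collections import defaultdict, deque

WINDOW_SECONDS = 3600
windows = defaultdict(deque)  # store_id -> deque of (timestamp, qty)

def on_sale(store_id, timestamp, qty):
    """Ingest one sale event and return the store's rolling total."""
    w = windows[store_id]
    w.append((timestamp, qty))
    # Evict events that have fallen out of the window
    while w and w[0][0] < timestamp - WINDOW_SECONDS:
        w.popleft()
    return sum(q for _, q in w)

# Two stores a few blocks apart can show very different local demand
print(on_sale("store_5th_ave", 1000, 3))   # -> 3
print(on_sale("store_5th_ave", 1500, 2))   # -> 5
print(on_sale("store_8th_ave", 1500, 1))   # -> 1
```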

Sign up for the free insideBIGDATA newsletter.

Join us on Twitter: @InsideBigData1 – https://twitter.com/InsideBigData1
