Heard on the Street – 4/27/2023

Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

Human touch vs. ChatGPT: Which is better for your SEO strategy? Commentary by Dustin Talley, Marketing Business Manager at Bonsai Media Group

The emergence of AI chatbots since the release of ChatGPT in November 2022 has already wildly changed the landscape of SEO. Content production has become far simpler, and the time required to research topics in depth has dropped sharply, which is a significant win for marketers boosting their SEO output. Still, the technology is far from perfect, and tools like ChatGPT have raised the question of how the search landscape will evolve over the next few years. Traditionally, SEOs have needed robust processes and teams to scale content effectively. Now, with AI, SEOs can produce content for their clients at volume. But how does all this new AI-generated content impact the SERPs? Content quality has always been one of the most important factors in Google’s ranking algorithm. Regurgitated, sparse content has never been an effective way to climb the SERPs, and the same holds for AI-generated content. Simply asking ChatGPT to create an article on topic “X” will result in lackluster copy that needs heavy editing and fact-checking. While it is an incredibly effective tool for speeding up writing, SEOs need to focus on the strategy and purpose behind their content, using effective prompting, priming, and data to get ChatGPT to produce much higher-quality output than the AI can create on its own. We’ve already seen huge improvements from GPT-3 to GPT-3.5 to GPT-4 in the models’ ability to give us more of what we want, but it is still not a one-click solution. With a strategic mind behind the keys, however, it’s an incredible tool for reshaping how SEO is done. Only time will tell what it means for the industry, but one thing is certain: every marketer should be using AI and following its saga. It’s not going anywhere, and it is already changing the game.
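
For readers who want to see what effective prompting and priming can look like in practice, here is a minimal sketch using OpenAI’s Python SDK. The model name, agency persona, keyword, and pricing data are illustrative assumptions, not anything from Bonsai Media Group’s actual workflow.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supplied via env var or config

# Priming: give the model a role, a voice, and hard constraints.
system = (
    "You are an SEO content strategist for a web design agency. "
    "Write in a plain, expert voice and use only facts supplied in the brief."
)

# The brief carries the strategy: keyword, intent, audience, and real data.
brief = (
    "Target keyword: 'responsive web design cost'. "
    "Audience: small-business owners researching pricing. "
    "Deliver an H2 outline plus a 40-word meta description. "
    "Known data point (hypothetical): typical project range is $5k-$50k."
)

resp = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": brief},
    ],
    temperature=0.4,  # lower temperature keeps the copy factual and on-brief
)
print(resp.choices[0].message.content)
```

The point is not this specific prompt but the structure: a role, constraints, and supplied data move the output well beyond what a bare “write an article on X” request produces.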

Revolutionizing Data Analysis: The Power of Real-Time Analytics for Lightning-Fast Insights and Unmatched Competitive Advantage. Commentary by David Wang, Vice President, Product Marketing, Imply

Real-time analytics are the holy grail of data analysis, where fresh data and lightning-fast insights reign supreme. They are essential for applications that require immediate insights and demand event-to-insight latency measured in seconds; traditional analytics, in contrast, serve those who query historical data for reporting purposes. The key to real-time analytics is the architecture, and not all databases can handle it at scale. Today’s use cases require a database that can ingest millions of events, run aggregations on massive datasets, and sustain concurrency of hundreds, if not thousands, of queries per second. The architecture pairs event streaming, which collects and delivers large streams of event data from platforms like Apache Kafka in real time, with a purpose-built real-time analytics database like Apache Druid. Data professionals should pay attention to real-time analytics because it is becoming a pivotal aspect of data analytics, and it will only grow in importance as companies fight for more immediate insights. To stay competitive, it’s vital to have a solid grasp of the underlying components and data structures of real-time analytics, and employing the appropriate tools to construct these systems is critical to generating insights quickly and efficiently.
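
As a concrete illustration of the Kafka-plus-Druid pattern, the sketch below submits a Kafka ingestion supervisor to a Druid cluster through Druid’s documented supervisor REST API, after which events stream from the topic into a queryable datasource within seconds. The host, topic name, and schema are placeholder assumptions.

```python
import json
import requests

# Assumptions: a Druid Overlord at localhost:8081 and a Kafka topic named
# "clickstream" at localhost:9092; adjust both for a real deployment.
spec = {
    "type": "kafka",
    "spec": {
        "ioConfig": {
            "type": "kafka",
            "consumerProperties": {"bootstrap.servers": "localhost:9092"},
            "topic": "clickstream",
            "inputFormat": {"type": "json"},
        },
        "dataSchema": {
            "dataSource": "clickstream",
            "timestampSpec": {"column": "ts", "format": "iso"},
            "dimensionsSpec": {"dimensions": ["user_id", "page", "country"]},
        },
        "tuningConfig": {"type": "kafka"},
    },
}

# Once the supervisor is accepted, Druid ingests from Kafka continuously.
resp = requests.post(
    "http://localhost:8081/druid/indexer/v1/supervisor",
    data=json.dumps(spec),
    headers={"Content-Type": "application/json"},
)
resp.raise_for_status()
print("Supervisor submitted:", resp.json())
```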

What do search abandonment and ChatGPT have in common? Commentary by Sanjay Mehta, Head of Industry, Ecommerce at Lucidworks

In a few short months, ChatGPT has taken the world by storm. It has suddenly made the power and possibility of AI uniquely accessible, useful, and relevant to anyone with a smartphone. There is no limit to the number of industries that could be impacted by more accessible AI that can generate what you want at the drop of a hat, and the ways it can be applied are often unexpected. For online retailers, for example, the implications are perhaps less about how well AI bots can perform traditionally human tasks (like composing hilarious birthday limericks) and more about how these technologies are shifting people’s habits and expectations every time they interact with your brand. If ChatGPT can spontaneously compose a college-level essay on string theory, why can’t a shopper’s favorite home improvement store’s search function point them to the perfect shelving option for their next project, no matter how they describe it? ChatGPT may not replace us, but it most certainly raises the bar for digital experiences.
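
To make “no matter how they describe it” concrete, here is a minimal sketch of embedding-based product search, one common way retailers move past literal keyword matching. The model choice and tiny in-memory catalog are illustrative assumptions; a production store would use a vector index.

```python
from sentence_transformers import SentenceTransformer, util

# Assumption: a tiny in-memory catalog; a real store would index embeddings
# in a vector database rather than a Python list.
model = SentenceTransformer("all-MiniLM-L6-v2")

catalog = [
    "Floating wall shelf, solid oak, 36 inch, holds 50 lbs",
    "5-tier garage shelving unit, steel, adjustable",
    "Corner ladder bookshelf, white, 5 shelves",
]
catalog_emb = model.encode(catalog, convert_to_tensor=True)

# The shopper describes the need in their own words, not in catalog terms.
query = "something sturdy to display plants on my living room wall"
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, catalog_emb)[0]
best = int(scores.argmax())
print(f"Top match: {catalog[best]} (score={scores[best].item():.2f})")
```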

The Emergence of AI-as-a-Service. Commentary by Sean Mullaney, Chief Technology Officer at Algolia

Build vs. buy is an age-old argument in software development, but when it comes to AI, the “buy, not build” ethos will take home the gold. Big Tech is engaged in the AI wars, and most organizations now realize it’s impossible to keep up with the pace of innovation by building their own AI products in-house. Instead of pouring countless hours into expensive AI projects, many companies will turn to AI-as-a-Service (AIaaS) providers for ready-made AI solutions. For providers, it’s time to explore AI’s capabilities to the fullest by finding cost-effective ways to help enterprises deploy AI at scale. Once that is achieved, the cutting-edge applications of AI will be endless.

What GPT-4 means for the future of language AI. Commentary by Olga Beregovaya, VP of Machine Translation + AI at Smartling

The gap between AI-generated translation and human translation is getting smaller, and we are closer to human parity than ever before. The GPT-4 model can handle various multimodal tasks, such as producing images whose embedded text is rendered in a foreign language, which reduces reliance on costly DTP (desktop publishing) tasks and the in-image translation process. Last but not least, because GPT-4 is trained on more foreign-language data than earlier versions of this LLM, we now have opportunities to produce translations for long-tail languages (like Bengali and Swahili) with much higher accuracy than before.
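
As a rough illustration of how translation quality can be steered, the sketch below primes the model with a small client glossary before translating into Bengali. The glossary entries, model choice, and prompt are assumptions for demonstration, not Smartling’s production pipeline.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supplied via env var or config

# Assumption: a tiny client glossary; real localization workflows pull
# terminology from a termbase or translation memory.
glossary = {"dashboard": "ড্যাশবোর্ড", "invoice": "চালান"}

resp = openai.ChatCompletion.create(
    model="gpt-4",
    temperature=0,
    messages=[{
        "role": "user",
        "content": (
            "Translate the following UI string into Bengali. "
            f"Use this glossary for key terms: {glossary}. "
            "Text: 'Open the dashboard to download your invoice.'"
        ),
    }],
)
print(resp.choices[0].message.content)
```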

Generative AI will go from a Great Assistant to a Co-marketer. Commentary by Damian Rollison, Director of Market Insights at SOCi

Generative AI is taking the world by storm, but many questions remain in marketers’ minds: how should they leverage generative AI, is it coming for their jobs, and how will its capabilities evolve? Luckily for marketers, AI should be considered a high-powered research assistant and copywriter, not a replacement. For now, generative AI should only be used for low-level tasks that don’t carry much risk. It can analyze data to help marketers make data-driven decisions and automate time-consuming marketing tasks at scale; for example, it was recently integrated into review response management tools to instantly respond to and collect insights from consumer reviews (see the sketch below). In the future, specialized models will be trained for increasingly specific use cases. These models could create automated recommendations that improve marketing strategies and performance. Eventually, generative AI will pair with pre-existing tools to become a kind of co-marketer, performing tasks, making automated recommendations, and helping humans get the most out of the data in SaaS platforms.
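
Here is a minimal sketch of the review-response use case: the loop drafts replies that a human then approves, keeping the AI on low-risk, assistive duty. The prompt guardrails and sample reviews are illustrative assumptions, not a specific product’s integration.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supplied via env var or config

# Hypothetical reviews pulled from a review-management platform.
reviews = [
    {"stars": 2, "text": "Waited 40 minutes for a table we reserved."},
    {"stars": 5, "text": "Best brunch spot in the neighborhood!"},
]

for r in reviews:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0.5,
        messages=[{
            "role": "user",
            "content": (
                "You draft replies to customer reviews for a local restaurant. "
                "Be brief and empathetic, and never promise compensation. "
                f"Review ({r['stars']} stars): {r['text']}"
            ),
        }],
    )
    # Drafts still go to a human for approval: low-risk, assistive use only.
    print(resp.choices[0].message.content, "\n---")
```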

The role of AI in the payments journey. Commentary by Louis Joubert, Chief Technology Officer at PPS

Having joined PPS from Refinitiv, one of the largest providers of financial markets data and infrastructure, I’m really excited about how AI, machine learning, and deep learning are all quietly shaping payments and fintech today. Electronic payment systems, and fintech more generally, are increasingly using AI tools to provide practical solutions that meet everyday consumer needs. For example, machine learning algorithms are detecting fraudulent transactions and reducing the number of false positives, as well as helping to detect emerging fraud patterns. The most groundbreaking current application of generative AI is ChatGPT. While the technology is still in its infancy, the industry is trying to understand its potential and how to leverage it. There’s no doubt it represents both major opportunities and potential risks. On the one hand, it will accelerate innovation and improve customer service and responsiveness while breaking down complexity. On the other, the models tend to fabricate facts, so they can’t be entirely trusted without oversight. Since our industry will inevitably be impacted by ChatGPT, we need to solve the issues surrounding IP protection and data privacy. Looking further ahead, we’ll see more applications for deep learning, which has already produced several models that tackle issues within the industry, particularly around credit risk and financial investment.
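
To show one way “reducing the number of false positives” plays out in practice, here is a sketch that tunes a fraud classifier’s decision threshold for high precision, so fewer legitimate payments are flagged. The data is synthetic and the features are stand-ins for real transaction attributes.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

# Synthetic stand-in for transaction features (amount, hour, merchant risk, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Rather than the default 0.5 cutoff, choose the threshold that keeps
# precision high: fewer legitimate payments flagged as fraud.
probs = clf.predict_proba(X_te)[:, 1]
precision, recall, thresholds = precision_recall_curve(y_te, probs)
idx = int(np.argmax(precision[:-1] >= 0.95))  # first threshold at 95% precision
print(
    f"threshold={thresholds[idx]:.2f} "
    f"precision={precision[idx]:.2f} recall={recall[idx]:.2f}"
)
```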

Rescue and Recovery After Earthquakes. Commentary by Jon Kondo, CEO HEAVY.AI

Natural disasters have shifted and shaped the global landscape for millions of years; today, however, technology has advanced to improve rescue and response in these emergencies. One such technology is geospatial intelligence, which, combined with advanced analytics, can help rescue workers react faster, more efficiently, and more safely when responding to disasters.

With the plethora of data sources available, such as satellite imagery and fixed-wing and drone images, an advanced analytics tool offers first responders actionable insights to prioritize rescue and recovery efforts. For example, layering before-and-after satellite images and applying machine learning to them allows first responders to quickly identify the areas most affected by disasters like earthquakes or floods. Using this combined data, rescue workers can focus and deploy crucial aid faster and more accurately to reach and save as many people as possible.
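
To make the before-and-after comparison concrete, here is a minimal sketch that flags the most-changed pixels between two co-registered satellite tiles using rasterio. A real pipeline would add cloud masking and a trained damage-classification model; the file names and threshold are assumptions.

```python
import numpy as np
import rasterio

# Assumptions: two co-registered, single-band satellite tiles; real pipelines
# add orthorectification, cloud masking, and an ML damage classifier.
with rasterio.open("before.tif") as src:
    before = src.read(1).astype(float)
    profile = src.profile
with rasterio.open("after.tif") as src:
    after = src.read(1).astype(float)

# A simple normalized difference highlights the pixels that changed most
# between the two passes.
diff = np.abs(after - before) / (np.abs(before) + 1e-6)
damaged = diff > np.percentile(diff, 99)  # flag the top 1% as candidate damage

print(f"{int(damaged.sum())} pixels flagged for priority ground assessment")

# Write the mask out so it can be layered with other data in a GIS tool.
profile.update(count=1, dtype="uint8", nodata=None)
with rasterio.open("damage_mask.tif", "w", **profile) as dst:
    dst.write(damaged.astype("uint8"), 1)
```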

The Biggest Challenge for GPT-4 Will Be Data Infrastructure. Commentary by Tony Pialis, cofounder and CEO of Alphawave Semi

While it is crucial to deliver massive computing power to run wildly popular applications like ChatGPT, compute is only one piece of the puzzle. High-speed connectivity solutions are critical to moving the enormous amount of AI-generated data quickly and efficiently within and across data centers. Applications like GPT-4 consume 100x more data than earlier models, which requires a comprehensive focus on, and investment in, data connectivity infrastructure. As companies including Google and OpenAI double down on generative AI, accelerating data connectivity bandwidth through advances in semiconductor technologies will help unlock the full potential of generative AI, making it universally accessible to organizations and consumers alike.

Data lakehouse trend: focus solely on the semantic layer architecture. Commentary by Ohad Shalev, Product Marketing Manager at SQream 

Data lakehouse vendors usually combine a processing engine (often an open-source one) with a semantic layer for data governance and performance improvement, but lately the industry has seen vendors break this combination apart and focus solely on the semantic layer. The creators of the open table formats Apache Iceberg and Apache Hudi now offer to manage the semantic layer for customers and, as a result, claim to improve processing performance and lower cloud storage costs. Because table formats are traditionally difficult to configure, this recent uptick in companies focused on managing, and improving the performance of, a specific format is something new and important for the development of the analytics industry. While table formats were not widely adopted a few years ago, as data lakehouses gain popularity and momentum, enterprises must maintain a strong table format to keep pace and secure potential partnerships. By focusing on a singular aspect of the data lakehouse concept (simulating the “warehouse” part), enterprises can significantly improve the overall performance of this architecture and lead the industry toward wider adoption of the data lakehouse.
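
For readers unfamiliar with open table formats, the sketch below creates and queries an Apache Iceberg table from PySpark, including the snapshot metadata that makes time travel and storage optimization possible. The catalog configuration and warehouse path are local-testing assumptions; production deployments point at cloud object storage.

```python
from pyspark.sql import SparkSession

# Assumptions: the Iceberg Spark runtime jar is on the classpath and a local
# Hadoop catalog suffices for experimentation.
spark = (
    SparkSession.builder
    .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.local.type", "hadoop")
    .config("spark.sql.catalog.local.warehouse", "/tmp/iceberg-warehouse")
    .getOrCreate()
)

spark.sql(
    "CREATE TABLE IF NOT EXISTS local.db.events "
    "(id BIGINT, ts TIMESTAMP, country STRING) USING iceberg"
)
spark.sql("INSERT INTO local.db.events VALUES (1, current_timestamp(), 'US')")

# The table format records every snapshot, which is what enables time travel
# and the performance and storage management discussed above.
spark.sql("SELECT snapshot_id, committed_at FROM local.db.events.snapshots").show()
```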

ChatGPT and intelligent automation in healthcare. Commentary by Dr. Yan Chow, Global Healthcare Industry Leader and Strategist at Automation Anywhere 

The healthcare system will benefit most from AI technologies like ChatGPT when they are leveraged in combination with intelligent automation. ChatGPT acts as the brain that offers ideas and direction, while intelligent automation provides the arms and legs that carry out the work. These technologies can streamline data entry and processing in electronic health records and reduce the errors associated with them, facilitate more informed diagnoses, and improve outcomes through patient instruction and engagement. For instance, healthcare organizations generate and process immense amounts of data daily, a daunting and error-prone task. Combining ChatGPT with intelligent automation makes it possible to understand, classify, and contextualize text much faster and with greater accuracy. This improves healthcare data management and reporting and enables automated execution of rules-based actions. Together, these technologies will empower healthcare workers and greatly improve how healthcare is delivered.
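
Here is a minimal sketch of that “brain plus arms and legs” division of labor: an LLM classifies a free-text message, and simple rules-based handlers route the result. The label set, message, and handlers are hypothetical placeholders rather than any vendor’s actual integration, and a real deployment would add patient-privacy safeguards.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supplied via env var or config

# Hypothetical free-text message from a patient portal.
note = "Pt reports worsening shortness of breath; requests refill of albuterol."

# Assumption: a fixed label set the downstream automation understands.
labels = ["refill_request", "new_symptom", "billing", "scheduling"]

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,
    messages=[{
        "role": "user",
        "content": (
            f"Classify this clinical message with one or more of {labels}. "
            f"Reply with labels only.\nMessage: {note}"
        ),
    }],
)
predicted = resp.choices[0].message.content

# The "arms and legs": hypothetical rules-based handlers that an automation
# platform would supply, routing each record into the right workflow.
if "refill_request" in predicted:
    print("-> queue refill task in the EHR")
if "new_symptom" in predicted:
    print("-> flag chart for triage nurse review")
```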

Worldwide searches about AI safety spike by 614% as ChatGPT hype continues. Commentary by a spokesperson for Cryptomaniaks.com

Searches for “is ChatGPT safe?” have skyrocketed in recent days as people worry about the impact of AI on their future: Google Trends data shows that worldwide searches for the phrase have increased by a massive 614% since March 16th. As AI technology like ChatGPT continues to advance and integrate into our daily lives, it’s vital to address the safety concerns that are emerging. This surge in searches highlights the need for greater public education and transparency around AI systems and their potential risks. Systems like ChatGPT hold immense potential to revolutionize various industries, but it’s essential to strike a balance between innovation and safety. Developers, users, and regulators must work together to create guidelines that ensure responsible development and deployment of AI technology. As AI systems become increasingly sophisticated and integrated into society, it is our collective responsibility to stay informed about potential risks and to actively engage in discussions about AI safety. We must embrace the benefits of AI while diligently mitigating potential harm, ensuring a secure and beneficial future for all.
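
For anyone who wants to reproduce this kind of measurement, below is a short sketch using the unofficial pytrends client. Note that Google Trends reports normalized relative interest on a 0-100 scale rather than raw search counts, so percentage changes describe growth in that index.

```python
from pytrends.request import TrendReq

# Assumption: the unofficial pytrends client for Google Trends.
pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(["is chatgpt safe"], timeframe="2023-03-01 2023-04-27")

df = pytrends.interest_over_time()
# Values are relative interest (0-100); compare points in the series
# to spot spikes like the one described above.
print(df["is chatgpt safe"].tail(10))
```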

It’s time for AI developers to prioritize the quality of the training data. Commentary by Armando Gonzalez, CEO and Co-founder of RavenPack

The potential of language AI is boundless, but so are the disappointments when expectations are not met, and it’s becoming increasingly obvious that large language models can produce misleading, inaccurate, or even uncomfortably off-putting content. The problem lies not in the structure of the models but in the quality of the training data: the set of examples used to teach machine learning models to make accurate predictions. Training data is the foundation models learn from, and its quality determines the success or failure of language AI applications. Language models are expected to be trustworthy, from a basic chatbot answering clients’ questions to complex content-generating applications like ChatGPT. To earn the trust of users, AI developers need to be transparent about how these models work, accounting for the sources and mechanisms behind the outputs and the quality of the training data. A lack of trust in AI language models is, at bottom, a lack of viable training data that can produce coherent outputs. Creating high-quality training datasets that accurately represent the world and its complexities is the key; it is the only way AI developers can build better language models that deliver outstanding, domain-specific applications across different use cases while avoiding bias.
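
To ground “quality of training data” in something concrete, here is a minimal sketch of basic corpus hygiene: length and prose heuristics plus exact deduplication. Real pipelines add language identification, near-duplicate detection, and provenance tracking; the thresholds here are arbitrary assumptions.

```python
import hashlib

def clean_corpus(docs):
    """Keep prose-like, non-duplicate documents from a raw text corpus."""
    seen, kept = set(), []
    for text in docs:
        text = text.strip()
        if not 50 <= len(text) <= 20_000:   # drop fragments and giant blobs
            continue
        if text.count(" ") < 5:             # drop non-prose lines
            continue
        digest = hashlib.sha256(text.encode()).hexdigest()
        if digest in seen:                  # exact-duplicate filter
            continue
        seen.add(digest)
        kept.append(text)
    return kept

raw = ["Buy now!!!", "A well-sourced paragraph about market structure... " * 3]
print(f"kept {len(clean_corpus(raw))} of {len(raw)} documents")
```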

ChatGPT is the Next Sales Revolution. Commentary by Tony Grout, Chief Product Officer at Showpad

ChatGPT has already made ripples across the business sector, and its impact will become even more pronounced as it drives customer-led sales and marketing strategy. With generative AI, sales and marketing teams can create more specific customer personas, allowing them to build more targeted, aligned, and purposeful content for customers. This approach enables account teams to understand an individual buyer’s niche needs and pain points and to deliver messaging that resonates more effectively. ChatGPT can also help elevate purchase intent by turning the insights from generative AI into action. With the ability to generate natural-sounding language and even entirely new content, ChatGPT can help create a compelling brand voice that helps businesses stand out from competitors, enabling teams to improve customer engagement and conversion rates and, ultimately, increase revenue. With the efficiency that ChatGPT brings to everyday work, companies can also benefit from improved team effectiveness and productivity: sales and marketing teams can refine their collaboration, streamline workflows, and deliver better results in less time. With ChatGPT and the current wave of generative AI reaching the mainstream, companies can optimize their sales and marketing efforts, reaching buyers in new ways and driving better bottom-line impact.

The harmful effects of data silos and how to avoid them. Commentary by Shams Chauthani, CTO and SVP of Engineering at Zilliant

Data is the most important asset in today’s digital age, but compiling it into silos is a disservice to organizations. A company may possess a tremendous amount of data, but if that data is not easy to access or operationalize, it will be nearly impossible to spot inconsistencies across departments, and leaders will be prevented from having a comprehensive view of company activities. Given the volume of unstructured data coming in and how often it changes hands, data will never be 100% perfect. However, there’s plenty of room for improvement and clear groundwork that can prevent the worst. There are two answers for better data governance that allow a business to fully exploit its data’s value. First, acquire the right tools, such as automated integration platforms, to easily connect storage systems and external customer-facing data. Second, bring in outside expertise for a fresh perspective to help reshape and reorganize data collection practices. For example, a company may not know what data is actually missing until a data scientist is brought in to evaluate its governance policies against a use case that requires specific data. If those datasets turn out not to be combined with data from another part of the organization, it is clear that silos are wreaking havoc.

