Video Highlights: Attention Is All You Need – Paper Explained

In this video presentation, Mohammad Namvarpour presents a comprehensive study of Ashish Vaswani and his coauthors’ renowned paper, “Attention Is All You Need.” This paper marked a major turning point in deep learning research. The transformer architecture it introduced now powers a variety of state-of-the-art models in natural language processing and beyond, and transformers are the basis of the large language models (LLMs) we’re seeing today.
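
As a quick refresher on the paper’s core idea, here is a minimal sketch of scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, written in NumPy. It is an illustrative re-implementation of the formula, not the authors’ code, and the toy shapes and inputs are assumptions:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    Q, K have shape (seq_len, d_k); V has shape (seq_len, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of the values

# Toy example: 4 tokens with 8-dimensional queries, keys, and values
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)   # -> (4, 8)
```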

AI Under the Hood: Interactions

We asked our friends over at Interactions to do a deep dive into their technology. Mahnoosh Mehrabani, Ph.D., Senior Principal Scientist at Interactions, shared some fascinating information about how Interactions’ Intelligent Virtual Assistants (IVAs) combine speech recognition and advanced machine learning with natural language understanding (NLU) models. The company uses these models to help some of today’s largest brands understand customer speech and respond appropriately.

Video Highlights: Change Data Capture With Apache Flink

The featured video resource provided by Decodable is a webinar in which CDC experts provide an overview of CDC with Flink and Debezium. Change Data Capture (CDC) plays a growing role in real-time data analytics, specifically in stream processing with open source tools like Debezium and Apache Flink. CDC lets users analyze data as it’s generated by leveraging streaming systems like Apache Kafka, Amazon Kinesis, and Azure Event Hubs to track and transport changes from one data system to another.
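
For a sense of what these change streams look like in practice, here is a minimal sketch that consumes Debezium-style change events from a Kafka topic using the kafka-python client. The broker address, topic name, and event handling are assumptions for illustration; a production pipeline would typically hand this processing to Flink:

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical topic written by a Debezium connector (server.schema.table)
consumer = KafkaConsumer(
    "dbserver1.inventory.customers",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")) if v else None,
)

for message in consumer:
    event = message.value
    if event is None:                    # tombstone record (follows a delete)
        continue
    payload = event.get("payload", event)
    op = payload.get("op")               # "c" create, "u" update, "d" delete, "r" snapshot read
    before, after = payload.get("before"), payload.get("after")
    print(f"op={op} before={before} after={after}")
```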

Video Highlights: Google Engineer on His Sentient AI Claim

Google engineer Blake Lemoine (who worked for the company’s Responsible AI unit) joins Emily Chang of Bloomberg Technology in the video below to talk about some of the experiments he conducted that led him to believe that LaMDA (a large language model) was a sentient AI, and to explain why he was placed on administrative leave and ultimately fired.

Video Highlights: Modernize Your IBM Mainframe & Netezza With Databricks Lakehouse

In the video presentation below, learn from experts how to architect modern data pipelines to consolidate data from multiple IBM data sources into Databricks Lakehouse, using the state-of-the-art replication technique—Change Data Capture (CDC).

Video Highlights: Why Does Observability Matter?

Why does observability matter? Isn’t observability just a fancier word for monitoring? Observability has become a buzzword in the big data space. It’s thrown around so often that it can be easy to forget what it really means. In this video presentation, our friends over at Pepperdata provide some important insights into this increasingly popular technology.

Video Highlights: JuliaHub – The Best Way to Run Large Scale Computing in the Cloud

JuliaHub helps you read, write, run, and share Julia code and applications. JuliaHub is the quickest and easiest on-ramp to Julia, one of the fastest and most powerful languages for scientific, mathematical, and statistical computing.

Video Highlights: The Rise of DeBERTa for NLP Downstream Tasks

In episode seven of the NVIDIA Grandmaster Series, you’ll learn from four members of the Kaggle Grandmasters of NVIDIA (KGMON) team. Watch this video to learn how they used natural language processing to analyze argumentative writing elements from students and to identify key phrases in patient notes from medical licensing exams.
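
For readers who want to experiment with DeBERTa on a downstream task of their own, here is a minimal sketch using the Hugging Face transformers library. The checkpoint name, label count, and toy input are assumptions for illustration, and fine-tuning on real competition data is omitted:

```python
# pip install transformers sentencepiece torch
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed checkpoint; any DeBERTa variant on the Hugging Face Hub works similarly.
model_name = "microsoft/deberta-v3-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# Toy input standing in for a student essay sentence or a patient-note phrase.
inputs = tokenizer("The evidence strongly supports the author's claim.",
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities (head is untrained, so roughly uniform)
```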

Backprop Bonanza

Many new data scientists have voiced what they feel is a lack of satisfying ways to learn backpropagation and gradient computation in neural networks in undergraduate-level ML classes. So I thought I’d put together a number of useful learning resources to jump-start an understanding of this important process. The following list, curated from an informal Twitter poll, appears in no particular order.
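
Before diving into the resources, here is a minimal worked sketch of the process itself: backpropagation written out by hand for a tiny one-hidden-layer network in NumPy. The architecture, data, and learning rate are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Tiny network: 2 inputs -> 3 hidden units (tanh) -> 1 output, squared-error loss.
x = rng.normal(size=(4, 2))             # 4 training examples
y = rng.normal(size=(4, 1))             # targets
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)
lr = 0.1

for step in range(100):
    # Forward pass
    h = np.tanh(x @ W1 + b1)            # hidden activations
    y_hat = h @ W2 + b2                 # predictions
    loss = 0.5 * np.mean((y_hat - y) ** 2)

    # Backward pass: apply the chain rule layer by layer
    d_yhat = (y_hat - y) / len(x)       # dL/dy_hat
    dW2 = h.T @ d_yhat                  # dL/dW2
    db2 = d_yhat.sum(axis=0)
    d_h = d_yhat @ W2.T                 # gradient flowing back into the hidden layer
    d_pre = d_h * (1 - h ** 2)          # tanh'(z) = 1 - tanh(z)^2
    dW1 = x.T @ d_pre
    db1 = d_pre.sum(axis=0)

    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {loss:.4f}")
```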

Video Highlights: How to Optimize Deep Learning Models for Production

In the video presentation embedded below, our friends over at Neural Magic present a compelling workshop: How to Optimize Deep Learning Models for Production. After watching this video, you’ll be able to optimize your NLP and/or computer vision model, apply your own data with a few lines of code, and deploy it on commodity CPUs at GPU-level speeds.
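
The workshop covers Neural Magic’s own tooling; as a more generic illustration of the same goal (smaller, faster models on commodity CPUs), and not Neural Magic’s method, here is a minimal sketch of post-training dynamic quantization in PyTorch, which stores Linear-layer weights as int8 to cut memory use and speed up CPU inference. The stand-in model is an assumption for the example:

```python
import torch
import torch.nn as nn

# Stand-in model; in practice this would be a trained NLP or computer vision model.
model = nn.Sequential(
    nn.Linear(768, 768),
    nn.ReLU(),
    nn.Linear(768, 2),
).eval()

# Dynamic quantization: Linear weights are stored as int8 and dequantized on the fly.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 768)
with torch.no_grad():
    print(quantized(x))
```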