Research Highlights: Pen and Paper Exercises in Machine Learning

In this regular column we take a look at highlights of breaking research topics of the day in the areas of big data, data science, machine learning, AI and deep learning. For data scientists, it’s important to stay connected with the research arm of the field in order to understand where the technology is headed. Enjoy!

Research Highlights: Interactive continual learning for robots: a neuromorphic approach

Research Highlights: An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion

Research Highlights: Why Do Tree-based Models Still Outperform Deep Learning on Tabular Data?

Research Highlights: Emergent Abilities of Large Language Models

Research Highlights: Transformer Feed-Forward Layers Are Key-Value Memories

In this edition, if you (like me) have wondered what the feed-forward layers in transformer models are actually doing, this is a pretty interesting paper on that topic. Enjoy!
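The paper’s central reading, roughly, is that each feed-forward block acts as an unnormalized key-value memory: the rows of the first linear layer are “keys” matched against the input, and the rows of the second layer are “values” mixed according to those matches. A minimal NumPy sketch of that view (the dimensions and the ReLU nonlinearity here are illustrative assumptions, not the paper’s exact configuration):

```python
import numpy as np

def ffn_as_memory(x, K, V):
    # Each row of K is a "key" pattern; the coefficient measures how
    # strongly the input activates that memory slot (ReLU gate).
    coeffs = np.maximum(x @ K.T, 0.0)
    # The output is a coefficient-weighted sum of the "value" rows of V --
    # exactly the computation of a standard two-layer feed-forward block.
    return coeffs @ V

rng = np.random.default_rng(0)
d_model, d_ff = 8, 32                      # hidden size, number of "memories"
K = rng.normal(size=(d_ff, d_model))       # first linear layer (keys)
V = rng.normal(size=(d_ff, d_model))       # second linear layer (values)
x = rng.normal(size=(d_model,))            # one token's hidden state

y = ffn_as_memory(x, K, V)                 # shape: (d_model,)
```

Seen this way, the `d_ff` inner dimension is not just “more capacity” but a bank of addressable entries, which is the lens the paper uses to probe what individual slots respond to.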

Research Highlights: Deep Neural Networks and Tabular Data: A Survey

In this edition, we feature a survey of deep learning approaches to tabular data, which finds that algorithms based on gradient-boosted tree ensembles still tend to outperform deep learning models on such data. Enjoy!
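For a feel of why gradient-boosted tree ensembles are such strong tabular baselines, here is a toy sketch of the core idea: each new tree (here, a one-split “stump”) is fit to the residuals of the ensemble so far, and added with a small learning rate. This is a minimal illustration under simplifying assumptions (1-D input, squared error, stumps), not the tuned libraries such as XGBoost or LightGBM that the benchmark papers actually compare:

```python
import numpy as np

def fit_stump(x, residual):
    # Find the single split threshold minimizing squared error.
    best = None
    for t in np.unique(x):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    return best[1:]  # (threshold, left value, right value)

def gradient_boost(x, y, n_rounds=50, lr=0.1):
    pred = np.full_like(y, y.mean(), dtype=float)  # start from the mean
    stumps = []
    for _ in range(n_rounds):
        # For squared error, the negative gradient is just the residual.
        t, lv, rv = fit_stump(x, y - pred)
        pred += lr * np.where(x <= t, lv, rv)      # shrink each stump's vote
        stumps.append((t, lv, rv))
    return pred, stumps

x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x)
pred, stumps = gradient_boost(x, y)
```

Each round greedily carves the input space along one axis, which is a natural fit for the irregular, axis-aligned decision boundaries common in tabular features.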

Research Highlights: Generative Adversarial Networks

In this edition, we feature a paper on Generative Adversarial Networks. Enjoy!
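As a quick refresher on what makes GANs “adversarial”: the generator G and discriminator D play a minimax game over the value V(D, G) = E_x[log D(x)] + E_z[log(1 − D(G(z)))]. A tiny NumPy sketch (the Gaussian data, the hand-picked logistic discriminator, and the two toy generators are all illustrative assumptions) shows the fixed discriminator scoring higher against a generator whose samples sit far from the data:

```python
import numpy as np

def gan_value(D, G, real, noise):
    # V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))]
    return np.mean(np.log(D(real))) + np.mean(np.log(1.0 - D(G(noise))))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=10_000)   # "real" data ~ N(0, 1)
noise = rng.normal(0.0, 1.0, size=10_000)  # generator input z

# A fixed logistic discriminator that calls small values "real".
D = lambda x: 1.0 / (1.0 + np.exp(x - 1.5))

bad_G = lambda z: z + 3.0   # samples land far from the data
good_G = lambda z: z        # samples match the data distribution

v_bad = gan_value(D, bad_G, real, noise)
v_good = gan_value(D, good_G, real, noise)
# V is higher against the bad generator: the discriminator separates its
# samples easily. That gap is exactly the signal the generator descends.
```

Training alternates the two sides: D ascends V while G descends it, until (ideally) generated samples are indistinguishable from the data.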

Research Highlights: Using Theory of Mind to improve Human Trust in Artificial Intelligence

eXplainable Artificial Intelligence (XAI) has become an active research area for both scientists and industry. XAI develops models that provide explanations intended to shed light on the underlying mechanisms of AI systems, bringing transparency to the process and making results easier for expert and non-expert end users alike to interpret. New research by a team of UCLA scientists aims to boost human trust in these increasingly common systems by substantially improving upon XAI with a Theory of Mind approach. Their study was recently published in the journal iScience.

Research Highlights: AutoDC: Automated Data-centric Processing

Most AutoML solutions are developed with a model-centric approach. In contrast, the research paper “AutoDC: Automated Data-centric Processing,” accepted into last year’s highly selective NeurIPS conference, describes an automated data-centric tool (AutoDC) that was found to save an estimated 80% of the manual time needed for data set improvement – typically a bespoke and costly process.