
Best of arXiv.org for AI, Machine Learning, and Deep Learning – April 2019

In this recurring monthly feature, we will filter all the recent research papers appearing in the arXiv.org preprint server for subjects relating to AI, machine learning and deep learning – from disciplines including statistics, mathematics and computer science – and provide you with a useful “best of” list for the month.

UC Berkeley Graduate Receives ACM Doctoral Dissertation Award

ACM, the Association for Computing Machinery, announced that Chelsea Finn is the recipient of the 2018 ACM Doctoral Dissertation Award for her dissertation, “Learning to Learn with Gradients.” In her thesis, Finn introduced meta-learning algorithms that enable deep networks to solve new tasks from small data sets, and demonstrated how these algorithms can be applied in areas including computer vision, reinforcement learning and robotics.
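The gradient-based meta-learning idea can be illustrated with a minimal first-order sketch in the style of MAML, the best-known algorithm from the dissertation. The linear-regression tasks, single-step inner loop, and learning rates below are illustrative assumptions for the demo, not Finn's exact setup:

```python
import numpy as np

def loss(w, X, y):
    # Mean squared error for a linear model y ~ X @ w
    return np.mean((X @ w - y) ** 2)

def grad(w, X, y):
    # Analytic MSE gradient
    return 2 * X.T @ (X @ w - y) / len(y)

def maml_step(w, tasks, inner_lr=0.01, outer_lr=0.001):
    """One first-order meta-update: adapt to each task with a single
    inner gradient step, then move the shared weights toward values
    that adapt well (first-order MAML approximation)."""
    meta_grad = np.zeros_like(w)
    for X_train, y_train, X_val, y_val in tasks:
        w_adapted = w - inner_lr * grad(w, X_train, y_train)  # inner loop
        meta_grad += grad(w_adapted, X_val, y_val)            # outer gradient
    return w - outer_lr * meta_grad / len(tasks)

# Tiny synthetic demo: each "task" is a linear regression with its own slope
rng = np.random.default_rng(0)
tasks = []
for slope in (1.0, 2.0, 3.0):
    X = rng.normal(size=(20, 1))
    y = slope * X[:, 0]
    tasks.append((X[:10], y[:10], X[10:], y[10:]))

w = np.zeros(1)
for _ in range(200):
    w = maml_step(w, tasks)
```

After meta-training, a single inner gradient step on a handful of examples from a new task moves the shared initialization much closer to that task's solution than the same step from a random start, which is the "solve new tasks from small data sets" property the blurb describes.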

Accelerating Training for AI Deep Learning Networks with “Chunking”

At the International Conference on Learning Representations on May 6, IBM Research will present a deeper look at how chunk-based accumulation can speed up the training of deep learning networks used for artificial intelligence (AI).
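The numerical intuition behind chunk-based accumulation can be sketched with a toy float16 example. The chunk size, values, and pure-Python loops below are illustrative assumptions, not IBM's implementation; the point is that splitting a long reduced-precision sum into chunks keeps partial sums small, so small addends are not "swamped" by a large running accumulator:

```python
import numpy as np

def naive_sum(values):
    # Single running accumulator in float16: once the partial sum grows
    # large, each small addend rounds away to nothing.
    acc = np.float16(0.0)
    for v in values:
        acc = np.float16(acc + np.float16(v))
    return float(acc)

def chunked_sum(values, chunk_size=32):
    # Chunk-based accumulation: sum each chunk with its own float16
    # accumulator, then combine the per-chunk partial sums. Intermediate
    # sums stay small relative to the addends, so less precision is lost.
    partials = []
    for start in range(0, len(values), chunk_size):
        acc = np.float16(0.0)
        for v in values[start:start + chunk_size]:
            acc = np.float16(acc + np.float16(v))
        partials.append(acc)
    total = np.float16(0.0)
    for p in partials:
        total = np.float16(total + p)
    return float(total)

values = [0.1] * 4096      # true sum is 409.6
print(naive_sum(values))   # stalls far below the true value
print(chunked_sum(values)) # lands close to 409.6
```

This matters for training because low-precision gradient accumulation over many terms hits exactly this swamping effect; chunking the accumulation is one way to keep reduced-precision arithmetic accurate enough for training.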

Advanced Performance and Massive Scaling Driven by AI and DL

In this contributed article, Kurt Kuckein, Director of Marketing for DDN Storage, discusses how current enterprise and research data center IT infrastructures are woefully inadequate in handling the demanding needs of AI and DL. Designed to handle modest workloads, minimal scalability, limited performance needs and small data volumes, these platforms are highly bottlenecked and lack the fundamental capabilities needed for AI-enabled deployments.

Best of arXiv.org for AI, Machine Learning, and Deep Learning – March 2019

In this recurring monthly feature, we will filter all the recent research papers appearing in the arXiv.org preprint server for subjects relating to AI, machine learning and deep learning – from disciplines including statistics, mathematics and computer science – and provide you with a useful “best of” list for the month.

The insideBIGDATA IMPACT 50 List for Q2 2019

The team here at insideBIGDATA is deeply entrenched in following the big data ecosystem of companies from around the globe. We’re in close contact with most of the firms making waves in the technology areas of big data, data science, machine learning, AI and deep learning. Our inbox is filled each day with new announcements, commentaries, and insights about what’s driving the success of our industry, so we’re in a unique position to publish our quarterly IMPACT 50 List of the most important movers and shakers in our industry. These companies have proven their relevance by the way they’re impacting the enterprise through leading edge products and services. We’re happy to publish this evolving list of the industry’s most impactful companies!

Distributed GPU Performance for Deep Learning Training

If training must finish by a deadline, or simply takes too long on a single device, distributing the workload across many GPUs can reduce training time. This flexibility allows GPU resources to be fully utilized and delivers high ROI, since time to results is minimized. HPE highlights recent research that explores the performance of GPUs in scale-out and scale-up scenarios for deep learning training.
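The basic data-parallel pattern behind such scale-out training can be sketched as follows. The simulated "workers", the linear model, and the learning rate are illustrative assumptions for the demo, not the hardware setup HPE benchmarked:

```python
import numpy as np

def grad(w, X, y):
    # MSE gradient for a linear model on one worker's data shard
    return 2 * X.T @ (X @ w - y) / len(y)

def data_parallel_step(w, shards, lr=0.1):
    # Each "GPU" computes a gradient on its own shard; an all-reduce
    # (here simply a mean) combines them into one update. The math
    # matches a single large-batch step, but the work is split N ways,
    # which is where the training-time reduction comes from.
    grads = [grad(w, X, y) for X, y in shards]
    return w - lr * np.mean(grads, axis=0)

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

# Split the data across 4 simulated workers
shards = [(X[i::4], y[i::4]) for i in range(4)]

w = np.zeros(3)
for _ in range(300):
    w = data_parallel_step(w, shards)
```

Because the shards partition the full batch evenly, averaging the per-worker gradients reproduces the full-batch gradient exactly, so the distributed run converges to the same answer as a single-device run in the same number of steps, just with each step's compute divided across workers.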

Best of arXiv.org for AI, Machine Learning, and Deep Learning – February 2019

In this recurring monthly feature, we will filter all the recent research papers appearing in the arXiv.org preprint server for subjects relating to AI, machine learning and deep learning – from disciplines including statistics, mathematics and computer science – and provide you with a useful “best of” list for the month.

How to Get to the Data-Enabled Data Center

Despite their many promising benefits, advancements in Artificial Intelligence (AI) and Deep Learning (DL) are creating some of the most challenging workloads in modern computing history, putting significant strain on the underlying I/O, storage, compute and network resources. An AI-enabled data center must be able to concurrently and efficiently service the entire spectrum of activities involved in the AI and DL process, including data ingest, training and inference.

Book Review: Deep Learning Revolution by Terrence J. Sejnowski

The new MIT Press title “Deep Learning Revolution,” by Professor Terrence J. Sejnowski, offers a useful historical perspective coupled with a contemporary look at the technologies behind the fast-moving field of deep learning. This is not a technical book about deep learning principles or practices in the same class as my favorite “Deep Learning” […]