
Distributed GPU Performance for Deep Learning Training

When training must finish by a deadline, or when it simply takes too long on a single device, distributing the workload across many GPUs can reduce training time. This flexibility allows GPU resources to be utilized fully and provides high ROI, since time to results is minimized. HPE highlights recent research that explores the performance of GPUs in scale-out and scale-up scenarios for deep learning training.
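The research itself is not reproduced here, but as an illustration of what distributing training across multiple GPUs looks like in practice, below is a minimal sketch of data-parallel training using PyTorch's DistributedDataParallel. The toy model, batch size, and launch via `torchrun` are assumptions for illustration only, not details from the HPE study.

```python
# Minimal sketch of data-parallel training across multiple GPUs with
# PyTorch DistributedDataParallel. Assumed launch on one node with N GPUs:
#   torchrun --nproc_per_node=N train_ddp.py
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, WORLD_SIZE, and LOCAL_RANK for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model for illustration; a real workload would use a full network
    # and a DataLoader with a DistributedSampler so each GPU sees its own
    # shard of the dataset.
    model = nn.Linear(1024, 10).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):
        inputs = torch.randn(32, 1024, device=local_rank)
        targets = torch.randint(0, 10, (32,), device=local_rank)

        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()   # gradients are all-reduced across GPUs here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

In this data-parallel pattern, each GPU holds a full copy of the model and processes a different slice of each batch; gradients are averaged across GPUs during the backward pass, so adding GPUs shortens time to results, subject to communication overhead.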