One Stop Systems, Inc. (OSS), a leader in PCI Express® (PCIe®) expansion technology, introduces two new deep learning appliances, the OSS-PASCAL4 and OSS-PASCAL8. The OSS-PASCAL8 is a 170-TeraFLOP engine with 80GB/s NVIDIA® NVLink™ interconnect for the largest deep learning models. The OSS-PASCAL4 provides 21.2 TeraFLOPS of double-precision performance, also with 80GB/s GPU peer-to-peer NVLink™. Both systems are tuned for out-of-the-box operation and quick, easy deployment.
The OSS-PASCAL4 and OSS-PASCAL8 feature the latest NVIDIA® Tesla™ P100 SXM2 GPUs, delivering up to 170 TeraFLOPS of half-precision performance. They utilize NVLink™ for GPU peer-to-peer transfers at speeds up to 80GB/s. These GPU-accelerated servers pair dual Broadwell (v4) CPUs with up to 2TB of DDR4 memory. Both appliances can integrate into the GPUltima rack-level solution, using 100Gb EDR InfiniBand interfaces to form large-scale multi-root peer-to-peer RDMA networks.
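The headline figures follow directly from NVIDIA's published per-GPU peak rates for the Tesla P100 SXM2 (roughly 5.3 TFLOPS FP64, 21.2 TFLOPS FP16, and four NVLink links at 20GB/s each). A minimal sketch of the arithmetic, assuming those per-GPU figures:

```python
# Sanity-check the appliance-level figures from per-GPU Tesla P100 SXM2
# peak rates (assumed values from NVIDIA's published specs).
P100_FP64_TFLOPS = 5.3     # per-GPU double-precision peak
P100_FP16_TFLOPS = 21.2    # per-GPU half-precision peak
NVLINK_LINKS = 4           # NVLink connections per P100 SXM2
NVLINK_GBPS_PER_LINK = 20  # bandwidth per NVLink link

pascal8_fp16 = 8 * P100_FP16_TFLOPS              # 8-GPU half-precision total
pascal4_fp64 = 4 * P100_FP64_TFLOPS              # 4-GPU double-precision total
nvlink_bw = NVLINK_LINKS * NVLINK_GBPS_PER_LINK  # aggregate per-GPU NVLink

print(f"OSS-PASCAL8 FP16: {pascal8_fp16:.1f} TFLOPS")  # 169.6, marketed as ~170
print(f"OSS-PASCAL4 FP64: {pascal4_fp64:.1f} TFLOPS")  # 21.2
print(f"NVLink aggregate: {nvlink_bw} GB/s")           # 80
```

Eight P100s at 21.2 FP16 TFLOPS each yield 169.6 TFLOPS, rounded to the quoted 170; four at 5.3 FP64 TFLOPS give exactly the quoted 21.2.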
The OSS Deep Learning Appliances come with a choice of machine learning frameworks, including Caffe, Torch, TensorFlow and Theano, and a choice of machine learning libraries such as MLPython, NVIDIA® cuDNN, DIGITS™ and CaffeOnSpark. GPU drivers, NVIDIA® CUDA® drivers, CUB and NCCL round out the supporting software stack. Pre-installed GPU management and monitoring software provides both health and workload management: it samples all of the metrics exposed by the NVIDIA® GPUs, automatically performs health checks on every GPU, and integrates with the popular HPC workload managers, automatically configuring GPUs within the workload manager.
“One Stop Systems’ deep learning appliances are designed for augmented performance in machine learning and deep learning applications. These appliances provide the ultimate power for performing deep learning training and exploring neural networks,” said Steve Cooper, OSS CEO. “The OSS-PASCAL4 and OSS-PASCAL8 round out One Stop Systems’ deep learning offerings by providing high performance with a low price tag.”