Dr. Eng Lim Goh on New Trends in Big Data and Deep Learning for Artificial Intelligence

In this video from SC16, Dr. Eng Lim Goh from HPE/SGI discusses new trends in HPC Energy Efficiency and Deep Learning for Artificial Intelligence. “Recently acquired by Hewlett Packard Enterprise, SGI is a trusted leader in technical computing with a focus on helping customers solve their most demanding business and technology challenges.”

Nimbix Unveils Expanded Cloud Product Strategy for Enterprises and Developers

Nimbix, a leading provider of high performance and cloud supercomputing services, announced its new combined product strategy for enterprise computing, end users and developers. This new strategy will focus on three key capabilities – JARVICE™ Compute for high performance processing, including Machine Learning, AI and HPC workloads; PushToCompute™ for application developers creating and monetizing high performance workflows; and MaterialCompute™, a brand new intuitive user interface, featuring the industry’s largest high performance application marketplace available from a cloud provider.

HPC Storage Performance in the Cloud

In this contributed article, technical storyteller Ken Strandberg discusses feeding high-performance computing (HPC) and enterprise technical computing clusters with data using Lustre, the open source parallel file system that provides the performance and scalability to meet the demands of workloads on these systems.

Cray Works with Industry Leaders to Reach New Performance Milestone for Deep Learning at Scale

Cray Inc. announced the results of a deep learning collaboration between Cray, Microsoft, and the Swiss National Supercomputing Centre (CSCS) that expands the horizons of running deep learning algorithms at scale using the power of Cray supercomputers. Running larger deep learning models is a path to new scientific possibilities, but conventional systems and architectures limit the problems that can be addressed, as models take too long to train. Cray worked with Microsoft and CSCS, a world-class scientific computing center, to leverage their decades of high performance computing expertise to profoundly scale the Microsoft Cognitive Toolkit.

MapD Builds Out Award-Winning GPU and Visual Analytics Platform

MapD, a leader in GPU-powered analytics, announced significant new feature and performance enhancements to its Core database and Immerse visual analytics platform. The new capabilities extend the company’s pioneering work in using GPUs to both query and visualize billions of records with millisecond latency. The performance characteristics of MapD’s approach are anywhere from 75 to 3,500 times faster than traditional CPU-powered databases.

Cray Systems Power Deep Learning in Supercomputing at Scale

Supercomputer leader Cray Inc. (Nasdaq: CRAY) announced new deep learning capabilities across its line of supercomputing and cluster systems.

SC16 – The International Conference for High Performance Computing, Networking, Storage and Analysis

SC16 returns to Salt Lake City on Nov. 13-18. The six-day supercomputing event features internationally known expert speakers, cutting-edge workshops and sessions, a non-stop student competition, the world’s largest supercomputing exhibition, panel discussions and much more.

The Convergence of Big Data and HPC

In this special guest feature, Barry Bolding, Senior VP and Chief Strategy Officer at Cray Inc., discusses a highly germane topic for many enterprises today: the intersection of big data and high performance computing.

SC16 – The International Conference for High Performance Computing

The International Conference for High Performance Computing, Networking, Storage and Analysis – SC16, is coming November 13-18, 2016 to the Salt Palace Convention Center, Salt Lake City, Utah.

Intel Scalable System Framework Facilitates Deep Learning Performance

In this special guest feature, Rob Farber from TechEnablement writes that the Intel Scalable System Framework is pushing the boundaries of machine learning performance. “Machine learning and other data-intensive HPC workloads cannot scale unless the storage filesystem can scale to meet the increased demands for data.”