The Importance of Vectorization Resurfaces

Vectorization offers potential speedups in codes with significant array-based computation, and those gains multiply the performance already obtained from higher-level parallelism through threads and distributed execution on clusters. Key enablers of vectorization include array sizes tuned to the processor's cache and vector-instruction capabilities, and stride-1 (unit-stride) accesses within inner loops.
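
As a minimal illustration of these ideas (a sketch, not code from the article), the following C routine shows the pattern a vectorizing compiler looks for: a unit-stride inner loop over contiguous arrays. The array size N and the saxpy routine are assumptions chosen for this example; in practice N would be tuned to the target processor's cache and vector width.

    #include <stdio.h>
    #include <stddef.h>

    #define N 4096  /* tunable array size; illustrative value only */

    /* Vectorization-friendly loop: x[i] and y[i] are stride-1 (contiguous)
       accesses, so compilers such as gcc, clang, or icc at -O2/-O3 can map
       the loop body onto SIMD instructions. The restrict qualifiers assert
       that the arrays do not alias, removing a common barrier to
       auto-vectorization. */
    static void saxpy(float a, const float *restrict x, float *restrict y)
    {
        for (size_t i = 0; i < N; i++)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        static float x[N], y[N];
        for (size_t i = 0; i < N; i++) {
            x[i] = (float)i;
            y[i] = 1.0f;
        }
        saxpy(2.0f, x, y);
        printf("y[1] = %.1f\n", y[1]);  /* expect 3.0 */
        return 0;
    }

A strided access such as y[2*i], by contrast, breaks the contiguity the compiler needs and typically prevents or degrades vectorization.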

Intel® Parallel Studio XE Helps Developers Take Their HPC, Enterprise, and Cloud Applications to the Max

Intel® Parallel Studio XE is a comprehensive suite of development tools that makes it fast and easy to build modern code that gets every last ounce of performance out of the newest Intel® processors. This tool-packed suite simplifies creating code with the latest techniques in vectorization, multi-threading, multi-node computation, and memory optimization.

Identifying Health Risks Using Pattern Recognition and AI

Physicians are increasingly using AI technologies to treat patients with superhuman speed, and predictive analytics will be key to delivering more effective, proactive, high-quality care. Stephen Wheat, Director of HPC Pursuits at Hewlett Packard Enterprise, explores how pattern recognition and AI can be used to identify health risks.

Julia: A High-Level Language for Supercomputing and Big Data

Julia is a new language for technical computing, created to address a long-standing problem: language environments that were not designed to run efficiently on large compute clusters. It reads like Python or Octave but performs like C. Built-in primitives for multi-threading and distributed computing allow applications to scale to millions of cores. Beyond HPC, Julia is also gaining traction in the data science community.

Cray Powers Breakthrough Discoveries with Urika-XC – Delivering Analytics and AI at Supercomputing Scale

Global supercomputer leader Cray Inc. (Nasdaq: CRAY) announced the launch of the Cray® Urika®-XC analytics software suite, bringing graph analytics, deep learning, and robust big data analytics tools to the Company’s flagship line of Cray XC™ supercomputers. The Cray Urika-XC analytics software suite empowers data scientists to make breakthrough discoveries previously hidden within massive data sets, and achieve faster time-to-insight while leveraging the scale and performance of Cray XC supercomputers.

TERATEC 2017 Forum – The International Meeting for HPC, Simulation, Big Data

The TERATEC Forum is a major event in France and Europe that brings together the best international experts in HPC, Simulation, and Big Data. It reaffirms the strategic importance of these technologies for developing industrial competitiveness and innovation capacity. The TERATEC Forum welcomes more than 1,300 attendees, highlighting the technological and industrial dynamism of HPC and the essential role that France plays in this field.

Cycle Computing Helps HyperXite Build Transportation for the Future

Cycle Computing, a leader in Big Compute and Cloud HPC orchestration, announced that the HyperXite team is using Cycle Computing's CycleCloud to manage the Microsoft Azure compute hours needed to perform detailed simulations with ANSYS Fluent®.

Nimbix Unveils Expanded Cloud Product Strategy for Enterprises and Developers

Nimbix, a leading provider of high performance and cloud supercomputing services, announced its new combined product strategy for enterprise computing, end users, and developers. The new strategy focuses on three key capabilities: JARVICE™ Compute for high performance processing, including Machine Learning, AI, and HPC workloads; PushToCompute™ for application developers creating and monetizing high performance workflows; and MaterialCompute™, a brand-new, intuitive user interface featuring the industry's largest high performance application marketplace available from a cloud provider.

HPC Storage Performance in the Cloud

In this contributed article, technical storyteller Ken Strandberg discusses feeding data to high-performance computing (HPC) and enterprise technical computing clusters with Lustre, the open source parallel file system that provides the performance and scalability to meet the demands of workloads on these systems.

Cray Works with Industry Leaders to Reach New Performance Milestone for Deep Learning at Scale

Cray Inc. announced the results of a deep learning collaboration between Cray, Microsoft, and the Swiss National Supercomputing Centre (CSCS) that expands the horizons of running deep learning algorithms at scale using the power of Cray supercomputers. Running larger deep learning models is a path to new scientific possibilities, but conventional systems and architectures limit the problems that can be addressed, as models take too long to train. Cray worked with Microsoft and CSCS, a world-class scientific computing center, to leverage their decades of high performance computing expertise to profoundly scale the Microsoft Cognitive Toolkit.