In this special guest feature, Ubuntu Evangelist Randall Ross writes that the OpenPOWER Foundation is hosting an all-new type of developer event. “The OpenPOWER Foundation envisioned something completely different. In its quest to redefine the typical developer event, the Foundation asked a simple question: What if developers at a developer event actually spent their time developing?”
In this video from SC16, Dr. Eng Lim Goh from HPE/SGI discusses new trends in HPC Energy Efficiency and Deep Learning for Artificial Intelligence. “Recently acquired by Hewlett Packard Enterprise, SGI is a trusted leader in technical computing with a focus on helping customers solve their most demanding business and technology challenges.”
In this video from SC16, Intel demonstrates how Altera FPGAs can accelerate Machine Learning applications with greater power efficiency. “The demo was put together using OpenCL design tools and then compiled to FPGA. From an end-user perspective, they tied it together using Intel MKL-DNN with Caffe on top of that. This week, Intel announced the DLIA Deep Learning Inference Accelerator that brings the whole solution together in a box.”
In this video from the Intel HPC Developer Conference, Noah Rosenberg and Karl Stiefvater from Pikazo describe the company’s innovative Pikazo App for smartphones. “Pikazo was developed in 2015 using neural style transfer algorithms. It is a collaboration between human, machine, and our concept of art. It is a universal art machine that paints any image in the style of any other, producing sometimes-beautiful, sometimes-disturbing, always-surprising artworks. Pikazo allows novice artists to create impressive imagery via a technique known as neural style transfer.”
In this video from the Intel HPC Developer Conference, Franz Kiraly from Imperial College London and the Alan Turing Institute describes why many companies and organizations are beginning to assess their potential for applying rigorous quantitative methodology and machine learning.
In this video from the Intel HPC Developer Conference, Elmoustapha Ould-ahmed-vall from Intel describes how the company is doubling down to optimize Machine Learning frameworks for Intel Platforms. Using open source frameworks as a starting point, surprising speedups are possible using Intel technologies.
Deep learning is one of the hottest topics at SC16. Now, DK Panda and his team at Ohio State University have announced an exciting new High-Performance Deep Learning project that aims to bring HPC technologies to the DL field. “Welcome to the High-Performance Deep Learning project created by the Network-Based Computing Laboratory of The Ohio State University. Availability of large data sets like ImageNet and massively parallel computation support in modern HPC devices like NVIDIA GPUs have fueled a renewed interest in Deep Learning (DL) algorithms. This has triggered the development of DL frameworks like Caffe, Torch, TensorFlow, and CNTK. However, most DL frameworks have been limited to a single node. The objective of the HiDL project is to exploit modern HPC technologies and solutions to scale out and accelerate DL frameworks.”
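The core idea behind scaling out DL training, as the HiDL project describes, is data parallelism: each node computes gradients on its own shard of the data, and the gradients are averaged across nodes before each update. The toy sketch below illustrates that pattern in plain Python; it is an illustration only, with a made-up linear model and data, and HiDL itself builds on MPI-based communication (e.g. MVAPICH2) rather than anything like this code.

```python
# Minimal sketch of data-parallel training: each "node" computes a gradient
# on its local data shard, and an allreduce-style average synchronizes them.

def local_gradient(w, shard):
    """Gradient of mean squared error for a 1-D linear model y = w*x,
    computed on one node's shard of (x, y) pairs."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def allreduce_average(values):
    """Stand-in for MPI_Allreduce: every node ends up with the mean."""
    return sum(values) / len(values)

def train(shards, steps=100, lr=0.05):
    w = 0.0
    for _ in range(steps):
        grads = [local_gradient(w, s) for s in shards]  # computed in parallel
        w -= lr * allreduce_average(grads)              # synchronized update
    return w

# Two "nodes", each holding half of the data for y = 3x.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
print(round(train(shards), 3))  # 3.0
```

In a real framework the allreduce is the communication bottleneck, which is exactly where HPC interconnect technologies come into play.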
Today SGI announced that enterprises can now leverage the Intel-based SGI UV 300H server in a multi-node cluster (scale out) to run SAP Business Warehouse (SAP BW) on SAP HANA or new SAP BW/4HANA. Unique to SGI, the cluster nodes can later be reconfigured as single-node systems with 1 to 32TB of shared memory (scale up) to run SAP S/4HANA and other real-time applications. “For large enterprises that plan to migrate to SAP S/4HANA but wish to begin their journey to SAP HANA with SAP BW, our new SGI cluster offering is unquestionably the optimal solution,” said Jorge Titinger, president and CEO, SGI. “The scalability of the SGI UV 300H architecture coupled with our expertise in mission-critical environments provides an ideal path to real-time business with SAP HANA.”
In this video from the 2016 HPC User Forum in Austin, John Feo from PNNL presents: Why use Tables and Graphs for Knowledge Discovery System? “GEMS software provides a scalable solution for graph queries over increasingly large data sets. As the computing tools and expertise used in conducting scientific research continue to expand, so do the size and diversity of the data being collected. Developed at Pacific Northwest National Laboratory, the Graph Engine for Multithreaded Systems, or GEMS, is a multilayer software system for semantic graph databases. In their work, scientists from PNNL and NVIDIA Research examined how GEMS answered queries on science metadata and compared its scaling performance against generated benchmark data sets. They showed that GEMS could answer queries over science metadata in seconds and scaled well to larger quantities of data.”
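To make “graph queries over science metadata” concrete, a semantic graph database stores facts as (subject, predicate, object) triples and answers queries by pattern matching over them. The sketch below is a toy illustration of that data model only; GEMS itself is a multithreaded, distributed engine, and the triples and names here are invented for the example.

```python
# Toy in-memory triple store: science metadata as (subject, predicate,
# object) triples, queried by pattern matching with None as a wildcard.

TRIPLES = [
    ("dataset42", "producedBy", "instrumentA"),
    ("dataset42", "topic", "climate"),
    ("dataset77", "producedBy", "instrumentA"),
    ("dataset77", "topic", "genomics"),
]

def query(pattern, triples=TRIPLES):
    """Return all triples matching an (s, p, o) pattern; None matches anything."""
    return [t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))]

# Which datasets were produced by instrumentA?
hits = query((None, "producedBy", "instrumentA"))
print([s for s, _, _ in hits])  # ['dataset42', 'dataset77']
```

Answering such patterns quickly over billions of triples is the scaling problem GEMS targets.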
“A lot of times when people think about big data, they think about it in ahistorical times…outside of this political context,” said Ruby Mendenhall, an associate professor of sociology at UIUC. “It’s really important to think about whose voice is digitized, in journals and newspapers. A lot of that for black women has been lost and you need to make a concerted effort to recover it.” Mendenhall’s study employs Latent Dirichlet allocation (LDA) algorithms and comparative text mining to search 800,000 periodicals in JSTOR (Journal Storage) and HathiTrust from 1746 to 2014 to identify the types of conversations that emerge about Black women’s shared experience over time.
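LDA, the method Mendenhall’s study applies, treats each document as a mixture of latent topics and each topic as a distribution over words; a collapsed Gibbs sampler repeatedly reassigns each word to a topic in proportion to how common that topic is in the document and how common the word is in the topic. The following minimal sampler is a sketch of the algorithm only, on an invented four-document corpus; a study at the scale of 800,000 periodicals would use an optimized library.

```python
# Minimal collapsed Gibbs sampler for Latent Dirichlet Allocation (LDA).
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, iters=200, alpha=0.1, beta=0.01, seed=0):
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})       # vocabulary size
    z = [[rng.randrange(n_topics) for _ in d] for d in docs]  # topic per word
    ndk = [[0] * n_topics for _ in docs]        # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]  # topic-word counts
    nk = [0] * n_topics                         # words per topic
    for di, d in enumerate(docs):
        for wi, w in enumerate(d):
            t = z[di][wi]
            ndk[di][t] += 1; nkw[t][w] += 1; nk[t] += 1
    for _ in range(iters):
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                t = z[di][wi]
                ndk[di][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
                # Resample: P(topic | doc) * P(word | topic), with smoothing.
                weights = [(ndk[di][k] + alpha) * (nkw[k][w] + beta) / (nk[k] + V * beta)
                           for k in range(n_topics)]
                t = rng.choices(range(n_topics), weights=weights)[0]
                z[di][wi] = t
                ndk[di][t] += 1; nkw[t][w] += 1; nk[t] += 1
    # Report the top three words per topic.
    return [sorted(nkw[k], key=nkw[k].get, reverse=True)[:3] for k in range(n_topics)]

docs = [["labor", "wages", "work"], ["church", "faith", "community"],
        ["labor", "work", "strike"], ["faith", "church", "hymn"]]
print(lda_gibbs(docs, n_topics=2))
```

Run over dated periodicals, the per-document topic mixtures are what let a researcher track how conversations shift across decades.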