
2nd Generation Intel® Xeon® Platinum 9200 Processors Offer Leadership Performance and Advance AI

Simulation, modeling, data analytics, and other workloads commonly use high performance computing (HPC) to advance research and business in many ways. However, as converged workloads involving AI grow in adoption, HPC systems must keep pace with evolving needs. 2nd Generation Intel® Xeon® Platinum processors, with built-in AI acceleration technologies, offer leadership performance to speed the most demanding HPC workloads.

How Astera Labs is Revolutionizing Semiconductor Product Development—100% in the Cloud

For any established semiconductor product developer, designing a next-generation PCIe 5.0 chipset in less than a year is no small feat. For a brand-new startup with no compute infrastructure other than laptops, however, it is a huge ask. That’s why, with time being of the essence, Astera Labs decided to take a chance on the efficiencies it would gain from a 100% cloud-based approach.

Six Platform Investments from Intel to Facilitate Running AI and HPC Workloads Together on Existing Infrastructure

Because HPC technologies today offer substantially more power and speed than their legacy predecessors, enterprises and research institutions benefit from combining AI and HPC workloads on a single system. Six platform investments from Intel will help reduce obstacles and make HPC and AI deployment even more accessible and practical.

DAOS Delivers Exascale Performance Using HPC Storage So Fast It Requires New Units of Measurement

Forget what you previously knew about high-performance storage and file systems. New I/O models for HPC such as Distributed Asynchronous Object Storage (DAOS) have been architected from the ground up to make use of new NVM technologies such as Intel® Optane™ DC Persistent Memory Modules (Intel Optane DCPMMs). With latencies measured in nanoseconds and bandwidth measured in tens of GB/s, new storage devices such as Intel DCPMMs redefine the measures used to describe high-performance nonvolatile storage.

Interview: Terry Deem and David Liu at Intel

I recently caught up with Terry Deem, Product Marketing Manager for Data Science, Machine Learning and Intel® Distribution for Python, and David Liu, Software Technical Consultant Engineer for the Intel® Distribution for Python*, both from Intel, to discuss the Intel® Distribution for Python (IDP): the classes of developers it targets, its use with commonly used Python packages for data science, benchmark comparisons, the solution’s use in scientific computing, and a look to the future of IDP.

Develop Multiplatform Computer Vision Solutions with Intel® Distribution of OpenVINO™ Toolkit

Realize your computer vision deployment needs on Intel® platforms—from smart cameras and video surveillance to robotics, transportation, and much more. The Intel® Distribution of OpenVINO™ Toolkit (which includes the Intel® Deep Learning Deployment Toolkit) enables the development of deep learning inference solutions for multiple platforms.
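For a flavor of what that looks like in practice, here is a minimal sketch of loading an Intermediate Representation (IR) model and running inference with the OpenVINO Inference Engine Python API. The model and weights file names and the random input are placeholders, and API details vary across toolkit versions.

```python
# Minimal sketch: load an IR model and run inference with the OpenVINO
# Inference Engine Python API. "model.xml"/"model.bin" are placeholder
# file names for IR files produced by the Model Optimizer.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))

# Target any supported device ("CPU", "GPU", "MYRIAD", ...) without
# changing the rest of the code.
exec_net = ie.load_network(network=net, device_name="CPU")

# Synthetic input matching the network's expected shape, for illustration only.
shape = net.input_info[input_name].input_data.shape
image = np.random.rand(*shape).astype(np.float32)

result = exec_net.infer(inputs={input_name: image})
print(result[output_name].shape)
```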

The AI Opportunity

The tremendous growth in compute power and explosion of data is leading every industry to seek AI-based solutions. In this Tech.Decoded video, “The AI Opportunity – Episode 1: The Compute Power Difference,” Vice President of Intel Architecture and AI expert Wei Li shares his views on the opportunities and challenges in AI for software developers, how Intel is supporting their efforts, and where we’re heading next.

Fast-track Application Performance and Development with Intel® Performance Libraries

Intel continues its concerted efforts to refine libraries optimized to yield the utmost performance from Intel® processors. The Intel® Performance Libraries provide developers with a large collection of prebuilt, tested, performance-optimized functions. By using these libraries, developers can reduce the cost and time associated with software development and maintenance, and focus their efforts on their own application code.
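As an illustrative sketch (not taken from the article): in an MKL-backed NumPy build, dense linear algebra calls are dispatched to Intel® MKL automatically, and the optional mkl-service module exposes threading controls. The array sizes below are arbitrary.

```python
# Illustrative only: with an MKL-backed NumPy build (e.g., from Intel's
# distribution channels), the BLAS/LAPACK calls below run on Intel MKL
# without any changes to user code.
import numpy as np

try:
    import mkl  # provided by the mkl-service package in Intel builds
    print("MKL max threads:", mkl.get_max_threads())
except ImportError:
    print("mkl-service not installed; NumPy may be using a different BLAS backend")

a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)

c = a @ b                         # GEMM routed to the optimized BLAS
w = np.linalg.eigvalsh(c + c.T)   # LAPACK symmetric eigensolver
print(c.shape, w[:3])
```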

Supercharge Data Science Applications with the Intel® Distribution for Python

Intel® Distribution for Python is a performance-oriented distribution of commonly used packages for computation- and data-intensive domains such as scientific and engineering computing, big data, and data science. It supercharges Python applications by speeding up the core computational packages they rely on. Professionals who can benefit from this product include machine learning developers, data scientists, numerical and scientific computing developers, and HPC developers.
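As a hedged example of what that acceleration means in practice: the snippet below is ordinary NumPy and scikit-learn code; installed under the Intel® Distribution for Python, the same calls run against the distribution’s accelerated package builds. The dataset shape and cluster count are arbitrary.

```python
# Ordinary data-science code, unchanged. Under the Intel Distribution for
# Python, NumPy's FFT and scikit-learn's KMeans below are served by the
# distribution's optimized package builds; no user-code changes are needed.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
X = rng.rand(100_000, 20)

spectrum = np.fft.rfft(X, axis=0)  # FFT backed by the optimized build
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)
print(spectrum.shape, np.bincount(labels))
```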

DarwinAI Generative Synthesis Platform and Intel Optimizations for TensorFlow Accelerate Neural Networks

DarwinAI, a startup based in Waterloo, Canada, creating next-generation technologies for Artificial Intelligence development, announced that the company’s Generative Synthesis platform – when used with Intel technology and optimizations – generated neural networks with a 16.3X improvement in image classification inference performance. Intel shared the optimization results in a recently published solution brief.
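Generative Synthesis itself is proprietary and not shown here, but as a hedged sketch, the snippet below illustrates the kind of threading configuration commonly recommended when running CPU inference with Intel® Optimizations for TensorFlow (TensorFlow 1.x style). The thread counts and environment values are example settings, not the ones used in the solution brief.

```python
# Illustrative sketch only: typical CPU-inference threading setup for
# Intel Optimizations for TensorFlow (TF 1.x API). Values are examples
# and should be tuned to the host's core count.
import os
import tensorflow as tf

os.environ["OMP_NUM_THREADS"] = "28"   # example: physical cores per socket
os.environ["KMP_BLOCKTIME"] = "1"
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"

config = tf.ConfigProto(
    intra_op_parallelism_threads=28,   # parallelism within a single op
    inter_op_parallelism_threads=2,    # parallelism across independent ops
)

with tf.Session(config=config) as sess:
    # Graph loading and inference would go here, e.g., running a frozen
    # image-classification graph such as one produced by Generative Synthesis.
    pass
```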