Tachyum Prodigy Hits Milestone With 96 Percent Of Silicon Designed

Tachyum™ Inc. announced that it has reached another milestone toward its goal of volume production of the Prodigy Universal Processor in 2021, with 96 percent of the silicon designed and its layout completed; only a stable netlist layout remains before the final netlist and tape-out. The company has been making steady progress in its march toward Prodigy’s product release next year.

Accelerated Machine Learning Available from Your Browser

InAccel, a pioneer in application acceleration, makes the power of FPGA acceleration accessible from your browser. Data scientists and ML engineers can now easily deploy and manage FPGAs, speeding up compute-intensive workloads and reducing total cost of ownership with zero code changes.

Intel Announces AI and Analytics Platform with New Processor, Memory, Storage and FPGA Solutions

Intel today introduced its 3rd Gen Intel Xeon Scalable processors and additions to its hardware and software AI portfolio, enabling customers to accelerate the development and use of AI and analytics workloads running in data center, network and intelligent-edge environments. As the industry’s first mainstream server processors with built-in bfloat16 support, Intel’s new 3rd Gen Xeon Scalable processors make artificial intelligence (AI) inference and training more widely deployable on general-purpose CPUs for applications that include image classification, recommendation engines, speech recognition and language modeling.
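
For context, bfloat16 keeps float32’s 8-bit exponent (and therefore its dynamic range) while cutting the mantissa to 7 bits, which is why it suits AI training and inference on general-purpose CPUs. The sketch below is purely illustrative, not Intel code: it shows how a float32 value maps to bfloat16 by keeping only its upper 16 bits (production hardware and libraries typically round to nearest even rather than truncate).

```cpp
// Illustrative bfloat16 <-> float32 conversion: bfloat16 is simply the upper
// 16 bits of an IEEE-754 float32 (1 sign bit, 8 exponent bits, 7 mantissa bits).
#include <cstdint>
#include <cstring>
#include <cstdio>

// float32 -> bfloat16 by truncation (real implementations usually round).
uint16_t float_to_bf16(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    return static_cast<uint16_t>(bits >> 16);
}

// bfloat16 -> float32 by zero-filling the low 16 bits.
float bf16_to_float(uint16_t h) {
    uint32_t bits = static_cast<uint32_t>(h) << 16;
    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}

int main() {
    float x = 3.14159265f;
    uint16_t h = float_to_bf16(x);
    std::printf("%.8f -> bf16 -> %.8f\n", x, bf16_to_float(h));  // ~3.140625
    return 0;
}
```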

Heterogeneous Computing Programming: oneAPI and Data Parallel C++

Sponsored Post: What you missed at the Intel Developer Conference, and how to catch up today. By James Reinders. In the interests of full disclosure … I must admit that I became sold on DPC++ after Intel approached me (as a consultant – 3 years retired from Intel) asking if I’d help with a book on […]
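
For readers new to Data Parallel C++: it is Intel’s oneAPI implementation of the Khronos SYCL standard for single-source heterogeneous programming. The vector-add sketch below is our own minimal example (not taken from the book) and shows the basic pattern of a queue, buffers and a parallel_for kernel that the compiler can target at a CPU, GPU or FPGA.

```cpp
// Minimal DPC++/SYCL vector add: one queue, three buffers, one kernel.
// Compile with the oneAPI DPC++ compiler, e.g.: dpcpp vadd.cpp
#include <CL/sycl.hpp>   // newer oneAPI releases also accept <sycl/sycl.hpp>
#include <iostream>
#include <vector>

using namespace cl::sycl;

int main() {
  constexpr size_t N = 1024;
  std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

  queue q;  // selects a default device: CPU, GPU, or FPGA emulator
  {
    buffer<float, 1> A(a.data(), range<1>(N));
    buffer<float, 1> B(b.data(), range<1>(N));
    buffer<float, 1> C(c.data(), range<1>(N));

    q.submit([&](handler &h) {
      auto x = A.get_access<access::mode::read>(h);
      auto y = B.get_access<access::mode::read>(h);
      auto z = C.get_access<access::mode::write>(h);
      h.parallel_for<class vector_add>(range<1>(N),
                                       [=](id<1> i) { z[i] = x[i] + y[i]; });
    });
  }  // buffers are destroyed here, copying results back into c

  std::cout << "c[0] = " << c[0] << "\n";  // expect 3
  return 0;
}
```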

Data Centers Get a Performance Boost From FPGAs

With the advent of next-generation workloads, such as Big Data and streaming analytics, artificial intelligence (AI), the Internet of Things (IoT), genomics, and network security, CPUs are seeing different data types, mixtures of file sizes, and new algorithms with different processing requirements. Hewlett Packard Enterprise’s Bill Mannel explores how, as big data continues to explode, data centers are benefiting from a relatively new type of offload accelerator: FPGAs.

FPGAs Speed Machine Learning at SC16 Intel Discovery Zone

In this video from SC16, Intel demonstrates how Altera FPGAs can accelerate Machine Learning applications with greater power efficiency. “The demo was put together using OpenCL design tools and then compiled to FPGA. From an end-user perspective, they tied it together using Intel MKL-DNN with CAFFE on top of that. This week, Intel announced the DLIA Deep Learning Inference Accelerator that brings the whole solution together in a box.”
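
As a rough idea of what the OpenCL-based flow looks like from the host side, here is a generic saxpy sketch of our own, not the demo’s code; with Intel’s FPGA tooling the kernel would typically be compiled offline into an FPGA image rather than built at runtime as shown here.

```cpp
// Generic OpenCL host example (C++ bindings) with a small kernel of the kind
// an FPGA OpenCL flow would compile; here it is built at runtime for any device.
#define CL_HPP_MINIMUM_OPENCL_VERSION 120
#define CL_HPP_TARGET_OPENCL_VERSION 120
#include <CL/opencl.hpp>
#include <iostream>
#include <vector>

static const char* kSrc = R"CLC(
__kernel void saxpy(__global const float* x, __global float* y, float a) {
    int i = get_global_id(0);
    y[i] = a * x[i] + y[i];
}
)CLC";

int main() {
    const size_t n = 1024;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);

    cl::Context ctx(CL_DEVICE_TYPE_DEFAULT);
    cl::CommandQueue q(ctx);
    cl::Program prog(ctx, kSrc, /*build=*/true);
    cl::Kernel k(prog, "saxpy");

    cl::Buffer dx(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                  n * sizeof(float), x.data());
    cl::Buffer dy(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                  n * sizeof(float), y.data());

    k.setArg(0, dx);
    k.setArg(1, dy);
    k.setArg(2, 3.0f);
    q.enqueueNDRangeKernel(k, cl::NullRange, cl::NDRange(n));
    q.enqueueReadBuffer(dy, CL_TRUE, 0, n * sizeof(float), y.data());

    std::cout << "y[0] = " << y[0] << std::endl;  // expect 5
    return 0;
}
```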