Fast-track Application Performance and Development with Intel® Performance Libraries


Sponsored Post

Intel continues its sustained effort to refine libraries optimized to extract the utmost performance from Intel® processors. The Intel® Performance Libraries provide developers with a large collection of prebuilt, tested, performance-optimized functions. By using these libraries, developers can reduce the cost and time of software development and maintenance and focus their efforts on their own application code. The suite comprises five libraries, each targeting a different performance domain:

  • Intel® Data Analytics Acceleration Library – boosts machine learning and big data analytics, optimizing across all data analysis stages
  • Intel® Integrated Performance Primitives – highly optimized image, signal, data compression, and cryptography functions
  • Intel® Math Kernel Library – features highly optimized, threaded, and vectorized functions to maximize performance on each processor family
  • Intel® MPI Library – focuses on enabling Message Passing Interface (MPI) applications to perform better for clusters based on Intel® architecture
  • Intel® Threading Building Blocks – a scalable parallel programming model for task-based parallelism

The functions in these libraries have been carefully optimized to capitalize on specific performance features built into current Intel processors, and they will continue to be optimized for future Intel processors. An important advantage of using the Intel Performance Libraries is that they provide transparent portability of application programs across the full range of Intel processors.

The Intel® Data Analytics Acceleration Library (Intel® DAAL) boosts machine learning and big-data analytics and helps data engineers reduce the time it takes to develop high-performance applications. Intel DAAL enables applications to make better predictions faster and to analyze larger data sets with the available compute resources. Simply link to the newest version and your code is ready for the latest processors. The library addresses all stages of the data analytics pipeline: preprocessing, transformation, analysis, modeling, validation, and decision-making.
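To give a sense of how the library is used from C++, here is a minimal sketch that runs a principal component analysis in batch mode. It assumes the classic Intel DAAL C++ interface (the daal.h umbrella header) and uses placeholder data; exact type and factory names may vary slightly between library versions (newer releases favor HomogenNumericTable<double>::create).

    // Sketch: PCA in batch mode with the classic Intel DAAL C++ API.
    // Data values are placeholders; type/factory names may differ by version.
    #include "daal.h"
    #include <iostream>

    using namespace daal;
    using namespace daal::algorithms;
    using namespace daal::data_management;

    int main() {
        const size_t nRows = 4, nCols = 3;
        double data[nRows * nCols] = {
            1.0, 2.0, 3.0,
            2.0, 4.0, 6.0,
            3.0, 5.0, 7.0,
            4.0, 8.0, 9.0
        };

        // Wrap the raw row-major array in a numeric table (nCols columns).
        NumericTablePtr table(new HomogenNumericTable<double>(data, nCols, nRows));

        // Configure and run PCA in batch processing mode.
        pca::Batch<> algorithm;
        algorithm.input.set(pca::data, table);
        algorithm.compute();

        // Retrieve the eigenvalues from the result.
        pca::ResultPtr result = algorithm.getResult();
        NumericTablePtr eigenvalues = result->get(pca::eigenvalues);
        std::cout << "Computed " << eigenvalues->getNumberOfColumns()
                  << " eigenvalues" << std::endl;
        return 0;
    }

The same algorithms are also available in online (streaming) and distributed processing modes, which follow a similar input/compute/result pattern.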

[Figure: Intel DAAL fits in the data analytics ecosystem]

Intel® Integrated Performance Primitives (Intel® IPP) is a library of software building blocks highly optimized for a wide range of Intel® architectures (Intel Atom®, Intel® Core™, and Intel® Xeon® processors). These ready-to-use APIs are used by software developers, integrators, and solution providers to tune their applications and get the best performance.

Intel IPP software building blocks are highly optimized using the Intel® Streaming SIMD Extensions (Intel® SSE), Intel® Advanced Vector Extensions 2 (Intel® AVX2), and Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instruction sets. Plug in these primitives to make your applications run faster than an optimizing compiler can achieve on its own.

Intel IPP offers thousands of optimized functions for commonly used algorithms, including those for creating digital media, enterprise data, embedded communications, and scientific, technical, and security applications. The library includes more than 2,500 image processing, 1,300 signal processing, 500 computer vision, and 300 cryptography primitives.
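For a feel of how these primitives drop into C or C++ code, here is a minimal sketch that adds two single-precision vectors using the signal-processing domain; it assumes the ipp.h umbrella header and that the application links against the IPP core and signal-processing libraries (the array contents are placeholders).

    // Sketch: element-wise vector addition with an Intel IPP signal-processing primitive.
    // Assumes <ipp.h> is on the include path and the program links against
    // the IPP core and signal-processing libraries (e.g., -lippcore -lipps).
    #include <ipp.h>
    #include <cstdio>

    int main() {
        const int len = 8;

        // ippsMalloc_32f returns memory aligned for wide SIMD registers.
        Ipp32f* src1 = ippsMalloc_32f(len);
        Ipp32f* src2 = ippsMalloc_32f(len);
        Ipp32f* dst  = ippsMalloc_32f(len);

        for (int i = 0; i < len; ++i) {
            src1[i] = static_cast<Ipp32f>(i);
            src2[i] = static_cast<Ipp32f>(10 * i);
        }

        // The library dispatches the SSE/AVX2/AVX-512 code path at run time.
        IppStatus status = ippsAdd_32f(src1, src2, dst, len);
        if (status == ippStsNoErr) {
            for (int i = 0; i < len; ++i) printf("%.1f ", dst[i]);
            printf("\n");
        }

        ippsFree(src1);
        ippsFree(src2);
        ippsFree(dst);
        return 0;
    }

Because the dispatching happens inside the library, the same binary takes advantage of whichever instruction set the host processor supports.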

The Intel® Math Kernel Library (Intel® MKL) provides highly optimized, threaded, and vectorized math functions that maximize performance on each processor family, with a choice of compilers, languages, operating systems, and linking and threading models. The library uses industry-standard C and Fortran APIs compatible with popular Basic Linear Algebra Subprograms (BLAS), Linear Algebra Package (LAPACK), and Fast Fourier Transform (FFT) functions, so no code changes are required. Intel MKL automatically dispatches code optimized for each processor, with no need to branch your code.
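As a concrete illustration, the sketch below multiplies two small matrices through the standard CBLAS interface that Intel MKL implements; the matrix sizes and values are arbitrary, and the code assumes the mkl.h header together with the usual MKL link line.

    // Sketch: C = alpha*A*B + beta*C via the standard CBLAS dgemm interface,
    // which Intel MKL implements. Assumes <mkl.h> and the usual MKL link line.
    #include <mkl.h>
    #include <cstdio>

    int main() {
        const int m = 2, n = 2, k = 3;
        double A[m * k] = {1, 2, 3,
                           4, 5, 6};          // 2x3, row-major
        double B[k * n] = {7,  8,
                           9, 10,
                           11, 12};           // 3x2, row-major
        double C[m * n] = {0, 0,
                           0, 0};             // 2x2 result

        // Standard BLAS call: MKL picks the optimized kernel at run time.
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    m, n, k,
                    1.0, A, k,   // alpha, A, lda
                         B, n,   // B, ldb
                    0.0, C, n);  // beta, C, ldc

        printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);
        return 0;
    }

Because the call sites are standard CBLAS, an application written against a reference BLAS can link against Intel MKL without source changes.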

Intel and Cloudera have collaborated to speed up Spark’s machine learning (ML) algorithms through integration with Intel MKL. Spark’s ML library, MLlib, is a leading solution for machine learning on large, distributed data sets.

The Intel® MPI Library is a multi-fabric message-passing library based on the open-source MPICH implementation of the MPI standard. The library is used to create, maintain, and test advanced, complex applications that perform well on HPC clusters based on Intel® processors. You can develop applications that run on multiple cluster interconnects chosen by the user at run time, and quickly deliver maximum end-user performance without changing the software or operating environment. The Intel MPI Library helps you achieve the best latency, bandwidth, and scalability through automatic tuning for the latest Intel® platforms. In addition, you can reduce time to market by linking to one library and deploying on the latest optimized fabrics.
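A minimal MPI program built with the Intel MPI compiler wrappers (for example, mpiicpc) looks like any other MPI code; the sketch below simply sums one value per rank with MPI_Allreduce, and the fabric is selected at launch time rather than in the source.

    // Sketch: a minimal MPI program; the interconnect is chosen at run time
    // (e.g., via I_MPI_FABRICS), not in the source. Compile with an MPI wrapper
    // such as mpiicpc and launch with mpirun/mpiexec.
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // Each rank contributes its rank number; the sum lands on every rank.
        int local = rank, total = 0;
        MPI_Allreduce(&local, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("ranks: %d, sum of ranks: %d\n", size, total);

        MPI_Finalize();
        return 0;
    }

Launched with, for example, mpirun -n 4 ./app, the same binary can target different interconnects by setting I_MPI_FABRICS at run time.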

The Intel® Threading Building Blocks (Intel® TBB) library provides advanced threading for fast, scalable parallel applications, letting you parallelize computationally intensive work with higher-level, simpler solutions in standard C++. It is feature-rich, highly portable, composable, and approachable, and it offers future-proof scalability. Intel TBB is a C++ library for shared-memory parallel programming and intra-node distributed-memory programming, with a wide range of features for parallel programming: generic parallel algorithms, concurrent containers, a scalable memory allocator, a work-stealing task scheduler, and low-level synchronization primitives.
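To give a flavor of the library, the sketch below sums a vector with tbb::parallel_reduce over a blocked_range; it assumes the TBB headers are available and the program links against the TBB runtime.

    // Sketch: summing a vector with tbb::parallel_reduce. Assumes the TBB headers
    // are on the include path and the program links against the tbb runtime (-ltbb).
    #include <tbb/parallel_reduce.h>
    #include <tbb/blocked_range.h>
    #include <vector>
    #include <cstdio>

    int main() {
        std::vector<double> values(1000000, 1.0);

        // The work-stealing scheduler splits the range across worker threads;
        // each subrange is summed locally and the partial sums are combined.
        double total = tbb::parallel_reduce(
            tbb::blocked_range<size_t>(0, values.size()),
            0.0,
            [&](const tbb::blocked_range<size_t>& r, double local) {
                for (size_t i = r.begin(); i != r.end(); ++i)
                    local += values[i];
                return local;
            },
            [](double a, double b) { return a + b; });

        printf("sum = %.1f\n", total);
        return 0;
    }

By default the library chooses the number of worker threads and splits the range adaptively, so the code needs no explicit thread management.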
