NXP Delivers Embedded AI Environment to Edge Processing


NXP Semiconductors N.V. (NASDAQ:NXPI) announced a comprehensive, easy-to-use machine learning (ML) environment for building innovative applications with cutting-edge capabilities. Customers can now easily implement ML functionality on NXP’s breadth of devices, from low-cost microcontrollers (MCUs) to breakthrough crossover i.MX RT processors and high-performance application processors. The ML environment provides turnkey enablement for choosing the optimal execution engine, from Arm Cortex cores to high-performance GPU/DSP (graphics processing unit/digital signal processor) complexes, along with tools for deploying machine learning models, including neural networks, on those engines.

Embedded artificial intelligence (AI) is quickly becoming an essential capability for edge processing, giving ‘smart’ devices the ability to become aware of their surroundings and make decisions on the input they receive with little or no human intervention. NXP’s ML environment enables fast-growing machine learning use cases in vision, voice, and anomaly detection. Vision-based ML applications use cameras as inputs to machine learning algorithms, of which neural networks are the most popular; these applications span most market segments and perform functions such as object recognition, identification, and people counting. Voice-activated devices (VADs) are driving the need for machine learning at the edge for wake-word detection, natural language processing, and ‘voice as the user interface’ applications. Machine learning-based anomaly detection (based on vibration/sound patterns) promises to revolutionize Industry 4.0 by recognizing imminent failures and dramatically reducing downtime.

NXP offers its customers several approaches for integrating machine learning into their applications. The NXP ML environment includes free software that allows customers to import their own trained TensorFlow or Caffe models, convert them to optimized inference engines, and deploy them on NXP’s breadth of scalable processing solutions, from MCUs to highly integrated i.MX and Layerscape processors.
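As an illustrative sketch of that import-and-convert flow, the snippet below uses TensorFlow’s own TensorFlow Lite converter with post-training quantization as a generic stand-in; NXP’s free conversion tools perform an analogous flow but with their own interface, and the model path, input shape, and calibration data here are hypothetical placeholders.

```python
# Illustrative only: generic TensorFlow Lite conversion as a stand-in for
# NXP's model-import tooling. Paths and shapes are hypothetical.
import numpy as np
import tensorflow as tf

# Load a trained model (hypothetical SavedModel directory).
converter = tf.lite.TFLiteConverter.from_saved_model("trained_model/")

# Post-training quantization shrinks the model and speeds up inference,
# which matters when the target is an MCU rather than an application processor.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_data():
    # Hypothetical calibration samples matching the model's input shape.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter.representative_dataset = representative_data

# Convert to a compact flat buffer, the artifact deployed to the device.
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```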

“When it comes to machine learning in embedded applications, it’s all about balancing cost and the end-user experience. For example, many people are still amazed that they can deploy inference engines with sufficient performance even on our cost-effective MCUs,” said Markus Levy, head of AI technologies at NXP. “At the other end of the spectrum are our high-performance crossover and applications processors, which have the processing resources for fast inference and training in many of our customers’ applications. As the use cases for AI expand, we will continue to power that growth with next-generation processors that have dedicated acceleration for machine learning.”

Another critical requirement in bringing AI/ML capability to the edge is easy, secure deployment and upgrades from the cloud to embedded devices. NXP’s EdgeScale platform enables secure provisioning and management of IoT and edge devices, and delivers an end-to-end continuous development and delivery experience: AI/ML learning and inference engines are containerized in the cloud, and the containers are then securely and automatically deployed to edge devices.
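To make the inference half of that pipeline concrete, here is a minimal sketch of the kind of engine such a container might wrap: it loads a converted model and runs a single inference. The TensorFlow Lite interpreter and the file name are assumptions for illustration; EdgeScale itself handles the secure provisioning and delivery, not the inference API.

```python
# Illustrative only: a minimal inference engine of the kind a deployed
# container might run. Model file and input data are hypothetical.
import numpy as np
import tensorflow as tf

# Load the converted model produced by the earlier conversion step.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Hypothetical single input frame shaped and typed to match the model.
frame = np.random.rand(*input_details[0]["shape"]).astype(
    input_details[0]["dtype"])

# Run one inference pass and read back the scores.
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])
print("class scores:", scores)
```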

To support a broad range of customer needs, NXP also created a machine learning partner ecosystem to connect customers with technology vendors that can accelerate time-to-revenue with proven ML tools, inference engines, solutions, and design services. Members of the ecosystem include Au-Zone Technologies and Pilot.AI. Au-Zone Technologies provides the industry’s first end-to-end embedded ML toolkit and RunTime inference engine, DeepView, which enables developers to deploy and profile convolutional neural networks (CNNs) across NXP’s entire SoC portfolio, including heterogeneous mixes of Arm Cortex-A and Cortex-M cores and GPUs. Pilot.AI has built a framework that enables a variety of perception tasks, including detection, classification, tracking, and identification, across customer platforms ranging from microcontrollers to GPUs, along with data collection/annotation tools and pre-trained models for drop-in model deployment.
