The NXP® eIQ® machine learning (ML) software development environment enables the use of ML algorithms on NXP EdgeVerse™ microcontrollers and microprocessors, including i.MX RT crossover MCUs and i.MX family applications processors. eIQ ML software includes an ML workflow tool called eIQ Toolkit, along with inference engines, neural network compilers and optimized libraries. This software leverages open-source and proprietary technologies and is fully integrated into our MCUXpresso SDK and Yocto development environments, allowing you to develop complete system-level applications with ease.
This white paper sets out to clarify the legal background and challenges surrounding the intellectual property (IP) aspects of ML data collection and model building, and highlights tools NXP offers to help protect ML IP investments.
A platform-optimized runtime inference engine that enables compact code size for resource-constrained devices.
eIQ-Enabled Devices:
i.MX RT1050, i.MX RT1060, i.MX RT1064, i.MX RT1160, i.MX RT1170, i.MX 8M Plus, i.MX 8M, i.MX 8M Nano, i.MX 8M Nano UL, i.MX 8M Mini.
Faster and smaller than TensorFlow, TensorFlow Lite uses open-source libraries to enable inference at the edge with lower latency and a smaller binary size; a minimal inference sketch follows the device list below.
eIQ-Enabled Devices:
i.MX 8M Plus, i.MX 8M, i.MX 8M Nano, i.MX 8M Nano UL, i.MX 8M Mini.
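To make the TensorFlow Lite entry concrete, here is a minimal C++ inference sketch of the kind that runs on the i.MX applications processors listed above; the model path, the single float32 input and output, and the overall structure are illustrative assumptions rather than an NXP-provided example.

```cpp
#include <cstdio>
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load a .tflite flatbuffer from disk ("model.tflite" is a placeholder path).
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  if (!model) return 1;

  // Build an interpreter backed by the built-in (CPU) operator kernels.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter || interpreter->AllocateTensors() != kTfLiteOk) return 1;

  // Fill the first input tensor (assumed float32), run, and read the output.
  float* input = interpreter->typed_input_tensor<float>(0);
  // ... copy preprocessed data into `input` here ...
  if (interpreter->Invoke() != kTfLiteOk) return 1;

  float* output = interpreter->typed_output_tensor<float>(0);
  std::printf("first output value: %f\n", output[0]);
  return 0;
}
```

On devices with an ML accelerator, such as the i.MX 8M Plus, the same interpreter can offload supported operators to the NPU through a delegate instead of running them on the CPU kernels.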
Faster and smaller than TensorFlow, TF Micro is optimized for running machine learning models on resource-constrained devices; a minimal sketch follows the device list below.
eIQ-Enabled Devices:
i.MX RT500, i.MX RT600, i.MX RT1050, i.MX RT1060, i.MX RT1064, i.MX RT1160, i.MX RT1170, i.MX 8M Plus.
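For the MCU targets above, TF Micro integration typically looks like the sketch below, written against the TensorFlow Lite for Microcontrollers C++ API; the model array name, the operator list, the arena size and the int8 quantization are assumptions, and exact details (such as whether an error reporter is required) vary between SDK releases.

```cpp
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Model flatbuffer compiled into flash as a C array (name is a placeholder).
extern const unsigned char g_model_data[];

// Scratch memory for all tensors; the size must be tuned per model.
constexpr int kTensorArenaSize = 64 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];

int RunInference() {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the operators the model uses to keep code size small.
  static tflite::MicroMutableOpResolver<3> resolver;
  resolver.AddConv2D();
  resolver.AddMaxPool2D();
  resolver.AddFullyConnected();

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kTensorArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

  // Fill the input tensor (assumed int8-quantized), then run the model.
  TfLiteTensor* input = interpreter.input(0);
  // ... copy sensor or camera data into input->data.int8 here ...
  if (interpreter.Invoke() != kTfLiteOk) return -1;

  // Return the first output value, e.g. the top class score.
  return interpreter.output(0)->data.int8[0];
}
```

Everything here is statically allocated, which is why TF Micro fits devices without an operating system or heap.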
A machine learning compiler that enables ahead-of-time compilation by converting neural networks into object files, which are then converted into binary images for increased performance and a smaller memory footprint; a bundle sketch follows the device list below.
eIQ-Enabled Devices:
i.MX RT600, i.MX RT1050, i.MX RT1060, i.MX RT1064, i.MX RT1160, i.MX RT1170.
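To illustrate how an ahead-of-time compiled bundle is consumed, the sketch below follows Glow's documented bundle convention; the bundle name ("lenet"), the generated macros (LENET_*), the placeholder-offset names and the weight-loading step are all hypothetical stand-ins, since the real symbol names depend on the model and on the model-compiler invocation used to produce the object file.

```cpp
// Hypothetical bundle produced with something like:
//   model-compiler -backend=CPU -model=lenet.onnx -emit-bundle=bundle_dir
// which emits an object file plus a header and a weights file (names vary).
#include <cstdint>

#include "lenet.h"  // generated header (hypothetical name)

// Memory regions sized by macros emitted into the generated header.
alignas(64) static uint8_t constant_weights[LENET_CONSTANT_MEM_SIZE];
alignas(64) static uint8_t mutable_weights[LENET_MUTABLE_MEM_SIZE];
alignas(64) static uint8_t activations[LENET_ACTIVATIONS_MEM_SIZE];

int main() {
  // In a real application the weights produced by model-compiler are loaded
  // (or linked) into `constant_weights` before the bundle is called.

  // Inputs and outputs live at fixed offsets inside the mutable region;
  // LENET_input and LENET_output are hypothetical placeholder-offset macros.
  float* input = reinterpret_cast<float*>(mutable_weights + LENET_input);
  // ... fill `input` with preprocessed data ...

  // Generated entry point: runs the whole ahead-of-time compiled network.
  lenet(constant_weights, mutable_weights, activations);

  float* output = reinterpret_cast<float*>(mutable_weights + LENET_output);
  return output[0] > 0.5f ? 0 : 1;
}
```

Because the graph is compiled ahead of time, there is no interpreter or operator registry on the device, which is where the performance and memory-footprint gains come from.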
Cross-platform inference and training ML accelerator; see the sketch after the device list below.
eIQ-Enabled Devices:
i.MX 8M Plus, i.MX 8M, i.MX 8M Nano, i.MX 8M Nano UL, i.MX 8M Mini.
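The one-line description above matches ONNX Runtime, which eIQ provides for the i.MX applications processors listed; assuming that engine, a minimal C++ inference sketch might look like the following, where the model path, the {1, 3, 224, 224} input shape and the tensor names are illustrative assumptions.

```cpp
#include <cstdint>
#include <vector>

#include <onnxruntime_cxx_api.h>

int main() {
  // Create the runtime environment and load an ONNX model (placeholder path).
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "eiq-demo");
  Ort::SessionOptions options;
  Ort::Session session(env, "model.onnx", options);

  // Wrap caller-owned input data in a tensor (assumed float32, NCHW image).
  std::vector<float> input_data(1 * 3 * 224 * 224, 0.0f);
  std::vector<int64_t> shape{1, 3, 224, 224};
  Ort::MemoryInfo mem_info =
      Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
  Ort::Value input_tensor = Ort::Value::CreateTensor<float>(
      mem_info, input_data.data(), input_data.size(), shape.data(), shape.size());

  // Tensor names must match the model; "input" and "output" are hypothetical.
  const char* input_names[] = {"input"};
  const char* output_names[] = {"output"};
  auto outputs = session.Run(Ort::RunOptions{nullptr}, input_names,
                             &input_tensor, 1, output_names, 1);

  // Read back the first value of the first output tensor.
  float* result = outputs.front().GetTensorMutableData<float>();
  return result[0] > 0.0f ? 0 : 1;
}
```

On i.MX, execution providers can route parts of the graph to an accelerator instead of the default CPU provider without changing the application code above.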
On-demand training modules to help you with your ML application development.
Watch our webinar recording with Arcturus, Arm, Siemens and Synopsys and learn how to build and secure an Edge AI solution from IP cores, to SoC, to OS, to algorithm, to software applications.
Image recognition for label identification using the i.MX RT1060 and a TensorFlow Lite model.
NXP’s eIQ machine learning software development platform makes machine learning at the edge possible for developers at all levels, from those just getting started to ML experts.
Object detection acceleration using Arm® NN.
The eIQ Auto deep learning (DL) toolkit enables developers to introduce DL algorithms into their applications while continuing to satisfy automotive standards.
Discover five key factors developers need to consider when choosing a processing solution for their edge ML projects in this whitepaper from ABI Research.
Sign in to read the whitepaper
This application note focuses on handwritten digit recognition on embedded systems through deep learning, using i.MX RT MCUs, MCUXpresso SDK and eIQ technology.
Read the application note
eIQ software leverages inference engines, neural network compilers, optimized libraries and open-source technology to enable AI and ML on edge nodes.
Read the factsheet
This application note describes the deployment of the eIQ LiteRT inference engine (formerly known as TensorFlow Lite) with NPU acceleration support on Android OS for i.MX applications processors.
Read the application note
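As background for the Android deployment described above: LiteRT/TensorFlow Lite typically reaches an NPU on Android through a delegate such as the NNAPI delegate. The short C++ sketch below shows only that delegation step, assuming an interpreter built as in the earlier TensorFlow Lite sketch; it is an illustrative pattern, not the application note's exact procedure, and the helper function name is hypothetical.

```cpp
#include "tensorflow/lite/delegates/nnapi/nnapi_delegate.h"
#include "tensorflow/lite/interpreter.h"

// Hand supported subgraphs to Android NNAPI so they can run on an NPU driver
// when one is present; unsupported operators stay on the CPU kernels.
// The delegate object must outlive the interpreter.
bool EnableNnapiAcceleration(tflite::Interpreter* interpreter,
                             tflite::StatefulNnApiDelegate* delegate) {
  return interpreter->ModifyGraphWithDelegate(delegate) == kTfLiteOk;
}
```

This step is typically applied after building the interpreter and before AllocateTensors(); if delegation fails, the interpreter still runs entirely on the CPU.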