eIQ Inference with DeepViewRT™ is a platform-optimized runtime inference engine that scales across a wide range of NXP devices and neural network compute engines. Provided free of charge, the engine maintains a compact code size suited to resource-constrained devices, including i.MX RT crossover MCUs (Arm® Cortex®-M cores) and i.MX applications processors (Cortex-A and Cortex-M cores, dedicated neural processing units (NPUs), and GPUs).
The inference engine is delivered through NXP's standard Yocto BSP releases for Linux® OS-based development, and through the MCUXpresso SDK releases for embedded, RTOS-based MCU development.
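For Linux OS-based targets, the runtime is typically pulled into the root filesystem through the Yocto build configuration. Below is a minimal sketch of how that might look in conf/local.conf; the recipe name deepview-rt is an assumption and may differ between BSP releases, so verify it against the machine-learning meta layer shipped with your NXP Yocto BSP.

    # Sketch: include the DeepViewRT runtime in the generated image.
    # NOTE: the recipe name "deepview-rt" is an assumption; check the
    # ML meta layer in your NXP Yocto BSP release for the exact name.
    IMAGE_INSTALL:append = " deepview-rt"

On older Yocto releases (before Honister), the equivalent line uses the legacy underscore syntax, IMAGE_INSTALL_append, instead of the colon-style override shown above.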