In this demo, NXP integrates a neural network accelerator with its next-generation i.MX 8M Plus applications processor.
This solution offers a smaller, less expensive, and less power-hungry applications processor, enabling multiple customer use cases.
NXP's i.MX 8M Plus processor provides an easy-to-use software infrastructure and lets customers add ML functionality by offloading the ML processing to the NPU accelerator to achieve higher performance.
In this video, NXP compares ML inference performance between the NPU (rated at 2.3 TOPS) and the CPU.
- First, it runs an object-classification model (Mobilenet_v1) on the NPU accelerator and then runs the exact same model on the CPU. The NPU completes each inference in under 3 milliseconds, while the CPU takes around 200 milliseconds.
- Second, the demo runs an object-detection model (Mobilenet_SSD) on the NPU accelerator and then runs the same model on the CPU. The NPU takes less than 15 milliseconds per inference, while the CPU takes around 280 milliseconds.
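A comparison like the one above can be reproduced with a simple timing harness. The sketch below is a minimal, hedged example: the `benchmark_ms` helper is generic Python, while the commented-out portion shows how the same TensorFlow Lite model might be routed to CPU or NPU on an i.MX 8M Plus board via an external delegate. The delegate library path, model filename, and availability of `tflite_runtime` are assumptions, not part of the demo itself.

```python
import time

def benchmark_ms(run_once, warmup=3, runs=50):
    """Average wall-clock latency of run_once() in milliseconds.

    A few warmup iterations are discarded so one-time setup costs
    (delegate compilation, cache fills) do not skew the average.
    """
    for _ in range(warmup):
        run_once()
    start = time.perf_counter()
    for _ in range(runs):
        run_once()
    return (time.perf_counter() - start) * 1000.0 / runs

# On the board, the same TFLite model runs on the CPU or the NPU
# depending on whether a delegate is loaded. The library path below
# is an assumption for illustration; consult the board's BSP docs.
#
#   import numpy as np
#   import tflite_runtime.interpreter as tflite
#
#   delegate = tflite.load_delegate("/usr/lib/libvx_delegate.so")  # NPU path (assumed)
#   interp = tflite.Interpreter(model_path="mobilenet_v1.tflite",   # hypothetical file
#                               experimental_delegates=[delegate])  # omit for CPU run
#   interp.allocate_tensors()
#   inp = interp.get_input_details()[0]
#   data = np.zeros(inp["shape"], dtype=inp["dtype"])
#
#   def run_once():
#       interp.set_tensor(inp["index"], data)
#       interp.invoke()
#
#   print(f"average latency: {benchmark_ms(run_once):.1f} ms")
```

Running the harness twice, once with the delegate and once without, yields the kind of NPU-vs-CPU latency comparison shown in the demo.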