We all like to talk about the super high-end of machine learning with computer
vision algorithms running on a turbo-charged, 10 tera-operations-per-second
accelerator, but the reality, especially for our embedded industry, is that
the majority of applications need a processing engine just capable enough to
get the job done and no more. This is our motivation for offering scalable machine
learning devices from MCUs (such as the Arm®
Cortex®-M7-based i.MX RT1050) to application processors (such
as the
i.MX 8QuadMax
and Layerscape®
LS1046)—and now you can see this range of performance in
action with no fewer than 12 machine learning demos at Arm TechCon in the
NXP booth (details below).
For example, stop by the booth and see a wide range of face recognition
solutions spanning low cost, low power, security, and high performance. How
about face recognition starting at $2 USD? Our design starts with an NXP
i.MX RT1020, a low-cost device sporting an Arm®
Cortex®-M7 core. NXP developed its own face recognition
algorithms and the ability to train for new faces directly on the RT1020
platform. The outcome is face detection and recognition in slightly more than
200 ms with accuracy of up to 95%—starting at $2 USD. Higher-performance
face recognition examples will also be on display using devices such as
i.MX 7ULP
(high-performance and ultra-low-power),
i.MX 8M Nano
(real-time face detection using an efficient Haar cascade of classifiers),
i.MX 8M Mini
(doing secure identification with anti-spoofing) and the i.MX 8M Quad-based
Google® Coral Dev Board with the Google Edge TPU (for super-fast facial
recognition in a sea of people).
Moving on
to image classification, the NXP booth will host an application using the i.MX
RT1060 and the eIQ® machine learning software development
environment. This example performs classification with a TensorFlow Lite model
trained to recognize different types of flowers (sunflower, tulip, rose,
dandelion and daisy). Specifically, we’re running a MobileNet model and
performing inference at 3 frames per second—on an MCU! This
demonstration also shows the flexibility of eIQ, providing support for a
variety of inference mechanisms (for example, TensorFlow Lite, CMSIS-NN, Glow) and
other types of machine learning models besides image classification (for example,
audio or anomaly detection).
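To give a feel for what the flower classifier does with each camera frame, here is a minimal sketch in plain Python. The logit values and the `classify` helper are made up for illustration (they are not eIQ or TensorFlow Lite APIs); the sketch only shows the standard post-processing step a MobileNet-style classifier performs, softmax over the model's output logits followed by a top-1 pick, plus the per-frame time budget implied by 3 frames per second.

```python
import math

# The five flower classes the TensorFlow Lite model was trained on.
CLASSES = ["sunflower", "tulip", "rose", "dandelion", "daisy"]

def softmax(logits):
    """Convert raw model outputs (logits) into probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Return the top class label and its probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return CLASSES[best], probs[best]

# Hypothetical logits for one camera frame (not real model output).
label, prob = classify([0.4, 1.1, 3.2, 0.2, 0.9])
print(label)  # rose

# 3 frames per second leaves a per-frame budget of roughly 333 ms
# for capture, preprocessing, and inference combined.
frame_budget_ms = 1000 / 3
print(round(frame_budget_ms))  # 333
```

On the real device this post-processing runs after the TensorFlow Lite interpreter produces the output tensor; the point is that the work outside the neural network itself is tiny, which is why the inference engine dominates the frame budget.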
Other Cool NXP Things at Arm TechCon
- I’ll be giving a talk, “Open Source ML is Rapidly
Advancing,” Tuesday, October 8th at 9:00 a.m.
- Donnie Garcia will talk about “Rightsizing Security for an MCU-based
Voice Assistant,” Tuesday, October 8th at 1:30 p.m.
- NXP will host a kegerator in the exhibit hall at 5:00 p.m. on October 9th and
10th.