NXP eIQ Neutron Backend#

This manual page introduces using ExecuTorch with the NXP eIQ Neutron Backend. NXP offers accelerated inference of machine learning models on edge devices. To learn more about NXP’s machine learning acceleration platform, please refer to the official NXP website.

For the up-to-date status of running ExecuTorch on the Neutron Backend, please visit the manual page.

Features#

ExecuTorch v1.0 supports running machine learning models on selected NXP chips (for now only the i.MXRT700). Currently supported machine learning models include:

  • Convolution-based neural networks

  • Full support for MobileNetv2 and CifarNet

Prerequisites (Hardware and Software)#

In order to successfully build the ExecuTorch project and convert models for the NXP eIQ Neutron Backend, you will need a computer running Windows or Linux.

If you want to test the runtime, you’ll also need:

Using NXP backend#

To test converting a neural network model for inference on the NXP eIQ Neutron Backend, you can use our example script:

# cd to the root of executorch repository
./examples/nxp/aot_neutron_compile.sh [model (cifar10 or mobilenetv2)]

For a quick overview of how to convert a custom PyTorch model, take a look at our example Python script.

Runtime Integration#

To learn how to run the converted model on NXP hardware, use one of our example projects on using the ExecuTorch runtime from the MCUXpresso IDE example projects list. For a more fine-grained tutorial, visit this manual page.