ExecuTorch - Home
  • Intro
    • ExecuTorch Overview
    • How ExecuTorch Works
    • Architecture and Components
    • Concepts
  • Quick Start
    • Getting Started with ExecuTorch
    • Model Export and Lowering
    • Building from Source
  • Edge
    • Android
    • iOS
    • Desktop & Laptop Platforms
    • Embedded Systems
    • Profiling and Debugging
  • Backends
    • XNNPACK Backend
    • CUDA Backend
    • Core ML Backend
    • MPS Backend
    • Vulkan Backend
    • Qualcomm AI Engine Backend
    • MediaTek Backend
    • Arm Ethos-U Backend
    • Arm Cortex-M Backend
    • Arm VGF Backend
    • Building and Running ExecuTorch with OpenVINO Backend
    • NXP eIQ Neutron Backend
    • Cadence Xtensa Backend
    • Samsung Exynos Backend
  • LLMs
    • Deploying LLMs to ExecuTorch
    • Exporting LLMs
    • Exporting LLMs with HuggingFace’s Optimum ExecuTorch
    • Exporting custom LLMs
    • Running LLMs with C++
    • Run Llama 3 3B Instruct on Android (with Qualcomm AI Engine Direct Backend)
    • Running LLMs on iOS
  • Advanced
    • Quantization & Optimization
    • Model Export and Lowering
    • Kernel Library Deep Dive
    • Backend & Delegates
    • Runtime & Integration
    • Compiler & IR
    • File Formats
  • Tools
    • Introduction to the ExecuTorch Developer Tools
    • Bundled Program – a Tool for ExecuTorch Model Validation
    • Prerequisite | ETRecord - ExecuTorch Record
    • Prerequisite | ETDump - ExecuTorch Dump
    • Profiling Models in ExecuTorch
    • Debugging Models in ExecuTorch
    • Inspector APIs
    • Memory Planning Inspection in ExecuTorch
    • Developer Tools Usage Tutorials
  • API
    • Export API Reference
    • Runtime API Reference
    • Runtime Python API Reference
    • API Life Cycle and Deprecation Policy
    • Running an ExecuTorch Model Using the Module Extension in C++
    • Managing Tensor Memory in C++
    • Detailed C++ Runtime APIs Tutorial
  • More
    • Support
      • Frequently Asked Questions
      • Profiling and Debugging
      • Contributing to ExecuTorch

Section Navigation

Backend Overview

  • XNNPACK Backend
    • Partitioner API
    • Quantization
    • Troubleshooting
    • Architecture and Internals
  • CUDA Backend
  • Core ML Backend
    • Troubleshooting
    • Partitioner API
    • Quantization
    • Op support
  • MPS Backend
  • Vulkan Backend
    • Partitioner API
    • Quantization
    • Operator Support
    • Troubleshooting
    • Vulkan Backend Tutorials
      • Executing and profiling an ExecuTorch Vulkan model on device
      • Exporting Llama 3.2 1B/3B Instruct to ExecuTorch Vulkan and running on device
  • Qualcomm AI Engine Backend
  • MediaTek Backend
  • Arm Ethos-U Backend
    • Partitioner API
    • Quantization
    • Arm Ethos-U Troubleshooting
    • Arm Ethos-U Backend Tutorials
      • Getting Started Tutorial
    • EdgeIR Operator support for the U55 backend
    • EdgeIR Operator support for the U85 backend
  • Arm Cortex-M Backend
  • Arm VGF Backend
    • Partitioner API
    • Quantization
    • Arm VGF Troubleshooting
    • Arm VGF Backend Tutorials
      • Getting Started Tutorial
    • EdgeIR Operator support for the VGF backend
  • Building and Running ExecuTorch with OpenVINO Backend
  • NXP eIQ Neutron Backend
    • Partitioner API
    • NXP eIQ Neutron Quantization
    • NXP Tutorials
      • Getting started with eIQ Neutron NPU ExecuTorch backend
    • NXP eIQ Dim Order Support
    • NXP eIQ Neutron Kernel Selective Kernel Registration
  • Cadence Xtensa Backend
  • Samsung Exynos Backend
    • Partitioner API
    • Quantization
    • Operator Support

NXP Tutorials

  • Getting started with eIQ Neutron NPU ExecuTorch backend — Lower and run a model on the NXP eIQ Neutron backend.


previous: NXP eIQ Neutron Quantization
next: Getting started with eIQ Neutron NPU ExecuTorch backend




© Copyright 2024, ExecuTorch.

Created using Sphinx 7.2.6.

Built with the PyData Sphinx Theme 0.15.4.