Quick Start Pathway#
This pathway is for engineers who want to get a model running on a device as quickly as possible. It assumes you are familiar with PyTorch model development and have some prior exposure to mobile or edge deployment concepts. Steps are kept concise and link directly to the most actionable documentation.
Estimated time to first inference: 15–30 minutes.
Choose Your Scenario#
Select the scenario that most closely matches what you are trying to accomplish right now.
Fastest path: Export → Run
Install:
pip install executorch
Export with Getting Started with ExecuTorch (Exporting section)
Time: ~15 min
Fastest path: Download → Run
Pre-exported .pte files for Llama 3.2, MobileNet, and other models are available on HuggingFace ExecuTorch Community.
Skip export entirely and go directly to the runtime section of Getting Started with ExecuTorch.
Time: ~10 min
Fastest path: Optimum ExecuTorch
Use the optimum-executorch CLI for a one-command export of HuggingFace models.
See Exporting LLMs with HuggingFace’s Optimum ExecuTorch for installation and usage.
Time: ~20 min
Fastest path: Llama on ExecuTorch
Follow the Llama on ExecuTorch guide for the complete Llama export and deployment workflow, including quantization and platform-specific setup.
Time: ~45 min (model download included)
The 5-Minute Setup#
If you have not yet installed ExecuTorch, run the following in a Python 3.10–3.13 virtual environment:
pip install executorch
Then verify the installation with a minimal export:
import torch
from executorch.exir import to_edge_transform_and_lower
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner

# Define a simple model
class Add(torch.nn.Module):
    def forward(self, x, y):
        return x + y

model = Add()
sample_inputs = (torch.ones(1), torch.ones(1))

et_program = to_edge_transform_and_lower(
    torch.export.export(model, sample_inputs),
    partitioner=[XnnpackPartitioner()],
).to_executorch()

with open("add.pte", "wb") as f:
    f.write(et_program.buffer)

print("Export successful: add.pte created")
If this runs without error, your environment is correctly configured.
Quick Reference: Export Cheat Sheet#
| Task | Code / Command |
|---|---|
| Install ExecuTorch | `pip install executorch` |
| Export with XNNPACK (mobile CPU) | Use `to_edge_transform_and_lower(..., partitioner=[XnnpackPartitioner()])` as in the minimal export in The 5-Minute Setup |
| Export with Core ML (iOS) | Replace the XNNPACK partitioner with the Core ML backend's partitioner |
| Export with Qualcomm (Android NPU) | See Qualcomm AI Engine Backend for QNN SDK setup and partitioner usage |
| Run from Python | Load the `.pte` file with the Python runtime (see the runtime section of Getting Started with ExecuTorch) |
| Run from C++ | See Running an ExecuTorch Model Using the Module Extension in C++ for the high-level `Module` API |
| Export an LLM | Use the `optimum-executorch` CLI (see Exporting LLMs with HuggingFace's Optimum ExecuTorch) |
Platform Quick Start Guides#
Jump directly to the platform-specific setup guide for your target.
Android: Gradle dependency, Java Module API, and XNNPACK / Vulkan / Qualcomm backend selection.
iOS: Swift Package Manager setup, Objective-C runtime API, and Core ML / MPS / XNNPACK backend selection.
Desktop: Python runtime, C++ CMake integration, and XNNPACK / Core ML / MPS backends.
Embedded: Bare-metal and RTOS deployment, Arm Ethos-U, Cadence, NXP, and other embedded backends.
Backend Selection Guide#
Choosing the right backend has the largest impact on performance. Use this table to select the appropriate backend for your hardware.
| Platform | Hardware Target | Backend |
|---|---|---|
| Android | CPU (Arm/x86) | XNNPACK |
| Android | GPU (Vulkan) | Vulkan |
| Android | Qualcomm NPU/DSP | QNN |
| Android | MediaTek APU | MediaTek |
| iOS / macOS | Neural Engine / GPU | Core ML |
| iOS / macOS | Metal GPU | MPS |
| iOS / macOS | CPU (Arm) | XNNPACK |
| Desktop | Intel CPU/GPU/NPU | OpenVINO |
| Desktop | Apple Silicon | Core ML / MPS |
| Embedded | Arm Cortex-M / Ethos-U | Arm Ethos-U |
| Embedded | Cadence DSP | Cadence |
| Embedded | NXP eIQ Neutron | NXP |
Troubleshooting Quick Fixes#
| Symptom | Quick Fix |
|---|---|
| … | Run … |
| Export fails with … | Ensure your model is … |
| … | Use Developer Tools Usage Tutorials to compare intermediate activations |
| Android Gradle sync fails | Check … |
| iOS build fails with missing xcframework | Verify the Swift PM branch name matches your ExecuTorch version (format: …) |
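Before working through the symptoms above, it is worth ruling out basic environment problems. This small check, a sketch using only the standard library (`env_report` is a hypothetical helper name), reports the Python version and whether the two packages the quick start relies on are importable:

```python
import importlib.util
import sys


def env_report():
    """Report the Python version and availability of torch and executorch."""
    report = {"python": f"{sys.version_info.major}.{sys.version_info.minor}"}
    for mod in ("torch", "executorch"):
        # find_spec returns None when the package is not installed.
        report[mod] = importlib.util.find_spec(mod) is not None
    return report


print(env_report())
```

The quick start assumes Python 3.10–3.13 with both `torch` and `executorch` reported as available; anything else points to an installation or virtual-environment issue rather than an export bug.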
Going Deeper#
Once your model is running, explore these topics to optimize performance and expand capabilities.
Quantization — Reduce model size and improve latency with INT8/INT4 quantization
Profiling and Debugging — Inspect latency, memory, and operator-level behavior with the developer tools
Model Export and Lowering — Advanced export options including dynamic shapes
Advanced Pathway — Full advanced user pathway for production deployments