Beginner Pathway#

Welcome to ExecuTorch. This pathway is designed for engineers who are comfortable with PyTorch but are new to on-device deployment. You will follow a structured, step-by-step sequence that builds foundational knowledge before introducing more complex topics.

Estimated time to complete: 2–4 hours for the core sequence. Individual steps can be done independently.


What You Will Learn#

By following this pathway, you will be able to:

  1. Understand what ExecuTorch is and why it exists

  2. Install ExecuTorch and set up your development environment

  3. Export a PyTorch model to the .pte format

  4. Run inference using the Python runtime

  5. Deploy a model to Android or iOS

  6. Decide where to go next based on your use case


Core Learning Sequence#

Work through these steps in order. Each step builds on the previous one.

Step 1 — Understand ExecuTorch (15 min)#

Before writing any code, read the conceptual overview to understand the ExecuTorch workflow and its key benefits.

Overview of ExecuTorch

High-level introduction to ExecuTorch’s purpose, design principles, and where it fits in the PyTorch ecosystem.

Difficulty: Beginner

ExecuTorch Overview
How ExecuTorch Works

A technical walkthrough of the three-stage pipeline: export, compilation, and runtime execution.

Difficulty: Beginner

How ExecuTorch Works

Step 2 — Set Up Your Environment (20 min)#

Install ExecuTorch and verify your setup before attempting to export a model.

Getting Started with ExecuTorch

Install the ExecuTorch Python package, export a MobileNet V2 model using XNNPACK, and run your first inference. This is the canonical entry point for all new users.

Difficulty: Beginner | Prerequisites: Python 3.10–3.13, PyTorch, g++ 7+ or clang 5+

Getting Started with ExecuTorch

Tip: If you encounter build errors or platform-specific issues during installation, consult the Frequently Asked Questions page before proceeding.


Step 3 — Understand Core Concepts (20 min)#

A brief review of the key concepts and terminology used throughout ExecuTorch documentation.

Core Concepts and Terminology

Definitions for Export IR, Edge Dialect, delegates, partitioners, .pte files, and other ExecuTorch-specific terms you will encounter throughout the documentation.

Difficulty: Beginner

Concepts

Step 4 — Export Your First Model (30 min)#

Learn the standard export workflow using torch.export and to_edge_transform_and_lower.

Model Export and Lowering

The complete guide to exporting a PyTorch model for ExecuTorch, including backend selection, quantization basics, and handling dynamic shapes.

Difficulty: Intermediate | Builds on: Step 2

Model Export and Lowering
Visualize Your Model

Use ModelExplorer to inspect your exported model graph and verify the export result before deployment.

Difficulty: Beginner

Visualize a Model using ModelExplorer

Step 5 — Deploy to Your Target Platform (30–60 min)#

Choose the platform you are targeting and follow the appropriate guide.

🤖 Android

Integrate ExecuTorch into an Android app using the Java/Kotlin bindings. Includes Gradle dependency setup and the Module API.

Difficulty: Intermediate

Android
🍎 iOS

Add ExecuTorch to an iOS or macOS project via Swift Package Manager. Covers Objective-C and Swift integration.

Difficulty: Intermediate

iOS
💻 Desktop / Python

Run inference directly from Python using the ExecuTorch runtime bindings — the fastest way to validate a model before mobile deployment.

Difficulty: Beginner

Getting Started with ExecuTorch

Step 6 — Explore a Complete Example (optional, 30 min)#

Seeing a complete end-to-end example reinforces the concepts from the previous steps.

Pico2: MNIST on a Microcontroller

A self-contained tutorial that exports an MNIST model and runs it on a Raspberry Pi Pico2. Excellent for understanding the full pipeline on constrained hardware.

Difficulty: Beginner (hardware required)

Pico2: A simple MNIST Tutorial
MobileNet V2 — Colab Notebook

An interactive Colab notebook covering the complete export, lowering, and verification workflow for MobileNet V2. No local setup required.

Difficulty: Beginner

https://colab.research.google.com/drive/1qpxrXC3YdJQzly3mRg-4ayYiOjC6rue3?usp=sharing

Frequently Encountered Issues#

New users commonly encounter the following issues. Consult these resources before opening a support request.

| Issue | Resource |
| --- | --- |
| Installation fails or package not found | Frequently Asked Questions — Installation section |
| Export fails with unsupported operator error | Model Export and Lowering — Operator support section |
| Model produces incorrect output after export | Developer Tools Usage Tutorials — Numerical debugging |
| Build errors on Windows | Getting Started with ExecuTorch — Windows prerequisites note |
| Backend not accelerating as expected | Backends — Backend selection guide |


Where to Go Next#

Once you have completed the core sequence, choose your next direction based on your use case.

Work with LLMs

Export and deploy Llama, Phi, Qwen, and other LLMs to mobile and edge devices.

LLMs
Hardware Acceleration

Use XNNPACK, Core ML, Qualcomm, Vulkan, and other backends for hardware-accelerated inference.

Backends
Advanced Topics

Quantization, memory planning, custom compiler passes, and backend development.

Advanced Pathway