ExecuTorch - Home
  • Intro
    • ExecuTorch Overview
    • How ExecuTorch Works
    • Architecture and Components
    • Concepts
  • Quick Start
    • Getting Started with ExecuTorch
    • Model Export and Lowering
    • Building from Source
  • Edge
    • Android
    • iOS
    • Desktop & Laptop Platforms
    • Embedded Systems
    • Profiling and Debugging
  • Backends
    • XNNPACK Backend
    • CUDA Backend
    • Core ML Backend
    • MPS Backend
    • Vulkan Backend
    • Qualcomm AI Engine Backend
    • MediaTek Backend
    • Arm Ethos-U Backend
    • Arm Cortex-M Backend
    • Arm VGF Backend
    • Building and Running ExecuTorch with OpenVINO Backend
    • NXP eIQ Neutron Backend
    • Cadence Xtensa Backend
    • Samsung Exynos Backend
  • LLMs
    • Deploying LLMs to ExecuTorch
    • Exporting LLMs
    • Exporting LLMs with HuggingFace’s Optimum ExecuTorch
    • Exporting custom LLMs
    • Running LLMs with C++
    • Run Llama 3 3B Instruct on Android (with Qualcomm AI Engine Direct Backend)
    • Running LLMs on iOS
  • Advanced
    • Quantization & Optimization
    • Model Export and Lowering
    • Kernel Library Deep Dive
    • Backend & Delegates
    • Runtime & Integration
    • Compiler & IR
    • File Formats
  • Tools
    • Introduction to the ExecuTorch Developer Tools
    • Bundled Program – a Tool for ExecuTorch Model Validation
    • Prerequisite | ETRecord - ExecuTorch Record
    • Prerequisite | ETDump - ExecuTorch Dump
    • Profiling Models in ExecuTorch
    • Debugging Models in ExecuTorch
    • Inspector APIs
    • Memory Planning Inspection in ExecuTorch
    • Developer Tools Usage Tutorials
  • API
    • Export API Reference
    • Runtime API Reference
    • Runtime Python API Reference
    • API Life Cycle and Deprecation Policy
    • Running an ExecuTorch Model Using the Module Extension in C++
    • Managing Tensor Memory in C++
    • Detailed C++ Runtime APIs Tutorial
  • More
    • Support
      • Frequently Asked Questions
      • Profiling and Debugging
      • Contributing to ExecuTorch

Section Navigation

Edge Platforms

  • Android
    • Using ExecuTorch on Android
    • Backends
      • XNNPACK Backend
      • Vulkan Backend
      • Qualcomm AI Engine Backend
      • MediaTek Backend
      • Arm VGF Backend
      • Samsung Exynos Backend
        • Partitioner API
        • Quantization
        • Operator Support
    • Examples & Demos
      • Arm VGF Backend Tutorials
        • Getting Started Tutorial
  • iOS
    • Using ExecuTorch on iOS
    • Backends
      • Core ML Backend
      • MPS Backend
      • XNNPACK Backend
    • Examples & Demos
  • Desktop & Laptop Platforms
    • Using ExecuTorch with C++
    • Building from Source
    • Backends
      • XNNPACK Backend
      • Building and Running ExecuTorch with OpenVINO Backend
      • Core ML Backend
        • Troubleshooting
        • Partitioner API
        • Quantization
        • Operator Support
      • MPS Backend
    • ExecuTorch on Raspberry Pi
  • Embedded Systems
    • Runtime API Reference
    • Detailed C++ Runtime APIs Tutorial
    • Running an ExecuTorch Model Using the Module Extension in C++
    • Managing Tensor Memory in C++
    • Using ExecuTorch with C++
    • Building from Source
    • Backends
      • Arm Cortex-M Backend
      • Cadence Xtensa Backend
      • NXP eIQ Neutron Backend
    • Getting Started Tutorial
    • ExecuTorch on Raspberry Pi
    • Pico2: A simple MNIST Tutorial
  • Profiling and Debugging

Examples & Demos

  • iOS LLM Examples Repository – sample apps demonstrating how to run LLMs on iOS with ExecuTorch

  • MobileViT Demo App – a demo app that runs the MobileViT vision model on-device

Rate this Page
★ ★ ★ ★ ★

previous: XNNPACK Backend
next: Desktop & Laptop Platforms



© Copyright 2024, ExecuTorch.

Created using Sphinx 7.2.6.

Built with the PyData Sphinx Theme 0.15.4.