Edge#
Deploy ExecuTorch on mobile, desktop, and embedded platforms with optimized backends for each.
ExecuTorch supports deployment across a wide variety of edge computing platforms, from high-end mobile devices to constrained embedded systems and microcontrollers.
Android#
Deploy ExecuTorch on Android devices with hardware acceleration support.
→ Android — Complete Android deployment guide
Key features:
Hardware acceleration support (CPU, GPU, NPU)
Multiple backend options (XNNPACK, Vulkan, Qualcomm, MediaTek, ARM, Samsung)
Comprehensive examples and demos
iOS#
Deploy ExecuTorch on iOS devices with Apple hardware acceleration.
→ iOS — Complete iOS deployment guide
Key features:
Apple hardware optimization (CoreML, MPS, XNNPACK)
Swift and Objective-C integration
LLM and computer vision examples
Desktop & Laptop Platforms#
Deploy ExecuTorch on Linux, macOS, and Windows with optimized backends.
→ Desktop & Laptop Platforms — Complete desktop deployment guide
Key features:
Cross-platform C++ runtime (see the sketch after this list)
Platform-specific optimization (OpenVINO, CoreML, MPS)
CPU and GPU acceleration options
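The cross-platform C++ runtime is most easily used through the Module extension. Below is a minimal sketch of loading a .pte program and running inference; the file name model.pte, the 1x3x224x224 input shape, and the exact header paths are assumptions based on current ExecuTorch releases, not a prescribed setup.

```cpp
// Minimal sketch: load a .pte program and run it with the ExecuTorch
// Module extension (C++). "model.pte" and the input shape are placeholders.
#include <executorch/extension/module/module.h>
#include <executorch/extension/tensor/tensor.h>

#include <array>
#include <iostream>

using executorch::extension::Module;
using executorch::extension::from_blob;

int main() {
  // Loads the exported program; any delegated (lowered) parts run on
  // whichever backend the model was exported for, e.g. XNNPACK.
  Module module("model.pte");

  // Wrap existing host memory as the input tensor.
  std::array<float, 1 * 3 * 224 * 224> input{};
  auto tensor = from_blob(input.data(), {1, 3, 224, 224});

  // Execute the default "forward" method and read the first output.
  const auto result = module.forward(tensor);
  if (result.ok()) {
    const auto output = result->at(0).toTensor();
    std::cout << "output elements: " << output.numel() << std::endl;
  }
  return 0;
}
```

The same runtime code builds on Linux, macOS, and Windows; which backend executes the graph is decided when the model is exported and lowered, not in this code.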
Embedded Systems#
Deploy ExecuTorch on constrained embedded systems and microcontrollers.
→ Embedded Systems — Complete embedded deployment guide
Key features:
Resource-constrained deployment (see the sketch after this list)
DSP and NPU acceleration (Cadence, ARM Ethos-U, NXP)
Custom backend development support
LLM and computer vision examples
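For microcontroller-class targets, the lower-level core runtime API gives explicit control over memory. The sketch below loads a program from a buffer linked into flash and runs it with statically allocated pools; the model_pte symbol, pool sizes, and exact namespaces and header paths vary by target and ExecuTorch release, so treat this as an outline under those assumptions rather than a drop-in implementation.

```cpp
// Minimal bare-metal-style sketch of the core ExecuTorch runtime API
// with statically allocated memory pools. The model_pte buffer and the
// pool sizes are placeholders for your target.
#include <executorch/extension/data_loader/buffer_data_loader.h>
#include <executorch/runtime/executor/program.h>
#include <executorch/runtime/platform/runtime.h>

#include <cstddef>
#include <cstdint>

using executorch::extension::BufferDataLoader;
using executorch::runtime::Error;
using executorch::runtime::HierarchicalAllocator;
using executorch::runtime::MemoryAllocator;
using executorch::runtime::MemoryManager;
using executorch::runtime::Program;
using executorch::runtime::Span;

extern const uint8_t model_pte[];  // .pte image linked into flash (assumed)
extern const size_t model_pte_len;

// Static pools: one for runtime bookkeeping, one for the activation
// memory that the export-time memory planner laid out.
static uint8_t method_pool[8 * 1024];
static uint8_t planned_pool[64 * 1024];

int run_model() {
  executorch::runtime::runtime_init();

  // Read the program directly from the in-memory buffer (no filesystem).
  BufferDataLoader loader(model_pte, model_pte_len);
  auto program = Program::load(&loader);
  if (!program.ok()) {
    return -1;
  }

  MemoryAllocator method_allocator(sizeof(method_pool), method_pool);
  Span<uint8_t> planned_spans[] = {{planned_pool, sizeof(planned_pool)}};
  HierarchicalAllocator planned_memory({planned_spans, 1});
  MemoryManager memory_manager(&method_allocator, &planned_memory);

  auto method = program->load_method("forward", &memory_manager);
  if (!method.ok()) {
    return -1;
  }

  // Inputs would be bound here with method->set_input(...) before running.
  return method->execute() == Error::Ok ? 0 : -1;
}
```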
Troubleshooting & Support#
Profiling and Debugging - Common issues and solutions across all platforms
Next Steps#
After choosing your platform:
Backends - Deep dive into backend selection and optimization
LLMs - Working with Large Language Models on edge devices