torch.utils.flop_counter.shape_wrapper

torch.utils.flop_counter.shape_wrapper(f)[source]
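The reference entry ships without a docstring. Based on its name and its home in `torch.utils.flop_counter`, `shape_wrapper` is a decorator that lets FLOP-counting formulas operate on tensor *shapes* rather than tensor values: it intercepts the call, replaces tensor arguments with their `.shape`, and forwards them to the wrapped counting function. The sketch below is an illustrative re-implementation under that assumption, not the actual PyTorch source; the `FakeTensor` class and `mm_flop` formula are hypothetical stand-ins so the example runs without torch installed.

```python
from functools import wraps

def get_shape(x):
    # Anything tensor-like (i.e. carrying a .shape) is replaced by its shape;
    # all other arguments pass through unchanged.
    return x.shape if hasattr(x, "shape") else x

def shape_wrapper(f):
    # Decorator: call f with shapes instead of tensor values, forwarding the
    # output value's shape as the keyword argument `out_shape`.
    @wraps(f)
    def wrapped(*args, out_val=None, **kwargs):
        args = tuple(get_shape(a) for a in args)
        kwargs = {k: get_shape(v) for k, v in kwargs.items()}
        return f(*args, out_shape=get_shape(out_val), **kwargs)
    return wrapped

# A toy FLOP formula for matrix multiply: 2*m*k*n multiply-adds.
@shape_wrapper
def mm_flop(a_shape, b_shape, out_shape=None):
    m, k = a_shape
    k2, n = b_shape
    assert k == k2, "inner dimensions must match"
    return 2 * m * k * n

class FakeTensor:
    # Minimal stand-in for torch.Tensor so the sketch runs without torch.
    def __init__(self, shape):
        self.shape = shape

print(mm_flop(FakeTensor((4, 8)), FakeTensor((8, 16))))  # 2 * 4 * 8 * 16 = 1024
```

Writing counting functions against shapes keeps them cheap to evaluate and usable even when the tensors are symbolic or on a meta device, since only metadata is consulted.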
© Copyright PyTorch Contributors.