Common Graph Breaks#

Created On: Jul 28, 2025 | Last Updated On: Mar 09, 2026

Below are some common graph breaks and some workarounds.

Incorrect Code#

Your code might contain errors (meaning it fails to execute even without torch.compile). In the example below, torch.sin is mistakenly given an extra argument. When you hit a confusing graph break, first check that your code runs correctly without torch.compile.

@torch.compile
def fn(x):
    y = torch.sin(x, x)
    return y

try:
    fn(torch.ones(3, 3))
except Exception as e:
    print(e)

Dynamo makes a best-effort attempt to hint when a graph break is caused by your own code, but it can still be difficult to tell from the logs whether a graph break stems from an error in your code, from a genuinely unsupported construct, or from a torch.compile bug. To differentiate, we recommend running your code without torch.compile and checking whether you still get the error reported in the graph break message.

You can also use torch.compiler.set_stance("force_eager") to quickly disable torch.compile without needing to modify the torch.compile call:

@torch.compile
def fn(x):
    y = torch.sin(x, x)
    return y

try:
    with torch.compiler.set_stance("force_eager"):
        fn(torch.ones(3, 3))
except Exception as e:
    print(e)

See https://docs.pytorch.org/tutorials/recipes/torch_compiler_set_stance_tutorial.html#crashing-sooner for more examples of set_stance usage for debugging.

Data-dependent operations#

torch.compile graph breaks on data-dependent operations, such as data-dependent control flow (if-statements or loops whose conditions depend on tensor values) and direct tensor data accesses (.item(), .data_ptr()).

@torch.compile
def fn(x):
    y = x.sum()
    if y > 0:
        return x + y.item()
    return x - y.item()

print(fn(torch.ones(3, 3)))

The general workaround for these graph breaks is to avoid doing data-dependent operations. Some specific workarounds are:

  • If your control flow doesn’t actually depend on data values, consider modifying your code to perform control flow on constants.

# old
x = torch.randn(3, 3)
@torch.compile
def fn(y):
    if x.sum() > 0:
        return y + x
    else:
        return y - x

print(fn(torch.ones(3, 3)))
# new
x = torch.randn(3, 3)
cond = (x.sum() > 0).item()
@torch.compile
def fn(y):
    if cond:
        return y + x
    else:
        return y - x

print(fn(torch.ones(3, 3)))
  • If your control flow genuinely depends on tensor values, consider rewriting the branch with the torch.cond higher-order operator, which traces both branches instead of graph breaking.

# old
@torch.compile
def fn(x):
    if x.sum() > 0:
        return x + 1
    return x - 1

print(fn(torch.ones(3, 3)))
# new
@torch.compile
def fn(x):
    return torch.cond(
        x.sum() > 0,
        lambda x: x + 1,
        lambda x: x - 1,
        (x,),
    )

print(fn(torch.ones(3, 3)))
  • If you have a .item() call, try torch._dynamo.config.capture_scalar_outputs = True or TORCHDYNAMO_CAPTURE_SCALAR_OUTPUTS=1.

  • Wrap problematic parts of the function in a custom operator, which torch.compile treats as an opaque call instead of tracing into it.

Printing and logging#

Printing, logging, or issuing warnings will result in a graph break. You can work around this with torch._dynamo.config.reorderable_logging_functions: registered logging functions are reordered so that they are called at the end of the traced function, avoiding a graph break. However, the logged contents may then differ from eager mode if, for example, a logged tensor is mutated later in the function.

Note: reorderable_logging_functions has restrictions: registered functions must return None, and their arguments are limited to tensors, constants, or format strings.

If you do not need to run the printing or logging function, then consider using torch.compiler.is_compiling() or torch._dynamo.config.ignore_logging_functions to skip the function entirely. See this page for more details.