torch.fx.experimental.optimization.gen_mkl_autotuner#

torch.fx.experimental.optimization.gen_mkl_autotuner(example_inputs, iters=10, warmup=1)[source]#

Generates a heuristic that can be passed into optimize_for_inference to determine whether a subgraph should be run in MKL, by benchmarking it with the given example_inputs.

Example usage:

heuristic = gen_mkl_autotuner(example_inputs, iters=10)
fast_model = optimization.optimize_for_inference(model, heuristic)

Return type:

Callable[[MklSubgraph], bool]
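A fuller sketch of the tuning flow follows. The `SmallNet` module, its input shape, and the `tune_for_mkl` helper are illustrative assumptions (not part of the API), and an MKL-enabled CPU build of PyTorch is assumed:

```python
import torch
import torch.nn as nn
from torch.fx.experimental import optimization

class SmallNet(nn.Module):
    # Hypothetical model used only to illustrate the tuning flow.
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3)

    def forward(self, x):
        return torch.relu(self.conv(x))

def tune_for_mkl(model, example_inputs):
    # Build a heuristic that benchmarks each candidate MKL subgraph
    # on the example inputs: 10 timed iterations after 1 warmup run.
    heuristic = optimization.gen_mkl_autotuner(example_inputs, iters=10, warmup=1)
    # Pass the heuristic to optimize_for_inference, which uses it to
    # decide per subgraph whether the MKL path is worthwhile.
    return optimization.optimize_for_inference(model, heuristic)

# Example call (assumes an MKL-enabled CPU build of PyTorch):
# fast_model = tune_for_mkl(SmallNet().eval(), [torch.randn(1, 3, 32, 32)])
```

Because the heuristic times the actual subgraphs on the supplied inputs, the example_inputs should match the shapes and dtypes the model will see at inference time.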