:github_url: https://github.com/pytorch/pytorch

.. _exhale_function_namespaceat_1_1symint_1ac7efefbe1fc97b0b1065e736ad71a043:

Template Function at::symint::_scaled_dot_product_fused_attention_overrideable_backward(const at::Tensor&, const at::Tensor&, const at::Tensor&, const at::Tensor&, const at::Tensor&, ::std::array, const at::Tensor&, const at::Tensor&, const at::Tensor&, const at::Tensor&, int64_t, int64_t, double, bool, const at::Tensor&, const at::Tensor&, ::std::optional)
========================================================================================================================================================================================================================================================================================================================================================================================

- Defined in :ref:`file_build_aten_src_ATen_Functions.h`

Function Documentation
----------------------

.. doxygenfunction:: at::symint::_scaled_dot_product_fused_attention_overrideable_backward(const at::Tensor&, const at::Tensor&, const at::Tensor&, const at::Tensor&, const at::Tensor&, ::std::array, const at::Tensor&, const at::Tensor&, const at::Tensor&, const at::Tensor&, int64_t, int64_t, double, bool, const at::Tensor&, const at::Tensor&, ::std::optional)
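The snippet below is a minimal call sketch, not the library's documented usage. The parameter names (``grad_out``, ``query``, ``key``, ``value``, ``attn_bias``, ``grad_input_mask``, ``out``, ``logsumexp``, ``cum_seq_q``, ``cum_seq_k``, ``max_q``, ``max_k``, ``dropout_p``, ``is_causal``, ``philox_seed``, ``philox_offset``, ``scale``), the size-4 gradient mask, and the tensor shapes are assumptions inferred from the argument types in the signature above; consult the declaration in :ref:`file_build_aten_src_ATen_Functions.h` for the authoritative form. Since this op is designed to be overridden by out-of-tree backends, the call only succeeds on a backend that registers a kernel for it; the sketch catches the dispatcher error otherwise.

.. code-block:: cpp

    #include <ATen/ATen.h>

    #include <array>
    #include <exception>
    #include <iostream>
    #include <optional>
    #include <tuple>

    int main() {
      // Illustrative dense shapes: [batch, num_heads, seq_len, head_dim] (assumed layout).
      const int64_t batch = 2, num_heads = 4, seq_len = 8, head_dim = 16;
      auto opts = at::TensorOptions().dtype(at::kFloat);

      at::Tensor query     = at::randn({batch, num_heads, seq_len, head_dim}, opts);
      at::Tensor key       = at::randn({batch, num_heads, seq_len, head_dim}, opts);
      at::Tensor value     = at::randn({batch, num_heads, seq_len, head_dim}, opts);
      at::Tensor attn_bias = at::zeros({batch, num_heads, seq_len, seq_len}, opts);

      // Stand-ins for what the matching forward op would have produced.
      at::Tensor out       = at::randn({batch, num_heads, seq_len, head_dim}, opts);
      at::Tensor logsumexp = at::randn({batch, num_heads, seq_len}, opts);
      at::Tensor grad_out  = at::ones_like(out);

      // Varlen bookkeeping and dropout RNG state; left as placeholders in this sketch.
      at::Tensor cum_seq_q, cum_seq_k;
      at::Tensor philox_seed   = at::empty({}, opts.dtype(at::kLong));
      at::Tensor philox_offset = at::empty({}, opts.dtype(at::kLong));

      // Assumed meaning: request gradients for query, key, value but not attn_bias.
      ::std::array<bool, 4> grad_input_mask = {true, true, true, false};

      try {
        // int64_t selects the concrete-integer specialization of the at::symint template;
        // the c10::SymInt specialization takes symbolic max_q / max_k instead.
        auto grads = at::symint::_scaled_dot_product_fused_attention_overrideable_backward<int64_t>(
            grad_out, query, key, value, attn_bias, grad_input_mask,
            out, logsumexp, cum_seq_q, cum_seq_k,
            /*max_q=*/seq_len, /*max_k=*/seq_len,
            /*dropout_p=*/0.0, /*is_causal=*/false,
            philox_seed, philox_offset,
            /*scale=*/::std::nullopt);
        // Assumed return: a tuple of gradients with grad_query first.
        std::cout << "grad_query sizes: " << std::get<0>(grads).sizes() << "\n";
      } catch (const std::exception& e) {
        // Expected on builds without a backend kernel for this overrideable op.
        std::cout << "no kernel registered for this backend: " << e.what() << "\n";
      }
      return 0;
    }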