torch.__future__
torch.__future__.set_overwrite_module_params_on_conversion(value)

Sets whether to assign new tensors to the parameters instead of changing the existing parameters in-place when converting an nn.Module.

When enabled, the following methods will assign new parameters to the module (see the sketch after this entry):

- module.{device}() (e.g. nn.Module.cuda()) for moving a module between devices
- module.{dtype}() (e.g. nn.Module.float()) for converting a module to a different dtype
- nn.Module.to()
- nn.Module.to_empty()

Parameters
- value (bool) – Whether to assign new tensors or not.
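A minimal sketch of the observable difference (the module and variable names are illustrative, not part of the API):

```python
import torch
import torch.nn as nn

torch.__future__.set_overwrite_module_params_on_conversion(True)

m = nn.Linear(2, 2)
before = m.weight  # keep a reference to the original Parameter

m.double()  # the conversion assigns a brand-new Parameter to the module

print(m.weight is before)  # False: the module now holds a new Parameter
print(before.dtype)        # torch.float32: the old tensor was left untouched

torch.__future__.set_overwrite_module_params_on_conversion(False)  # restore default
```

With the default in-place behavior, the existing Parameter's .data would be mutated instead, so `m.weight is before` would remain True.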
 
torch.__future__.get_overwrite_module_params_on_conversion()

Returns whether to assign new tensors to the parameters instead of changing the existing parameters in-place when converting a torch.nn.Module. Defaults to False.

See set_overwrite_module_params_on_conversion() for more information.

Return type
- bool
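Since the setting is global to the process, a common pattern is to read the current value and restore it after a temporary change; a minimal sketch:

```python
import torch

prev = torch.__future__.get_overwrite_module_params_on_conversion()
torch.__future__.set_overwrite_module_params_on_conversion(True)
try:
    # ... run conversions that should assign new parameters ...
    pass
finally:
    torch.__future__.set_overwrite_module_params_on_conversion(prev)  # restore
```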
 
torch.__future__.set_swap_module_params_on_conversion(value)

Sets whether to use swap_tensors() instead of setting .data to change the existing parameters in-place when converting an nn.Module, and instead of param.copy_(state_dict[key]) when loading a state dict into an nn.Module.

Note

This function takes precedence over get_overwrite_module_params_on_conversion().

When enabled, the following methods will swap the existing parameters in-place (see the sketch after this list):

- module.{device}() (e.g. nn.Module.cuda()) for moving a module between devices
- module.{dtype}() (e.g. nn.Module.float()) for converting a module to a different dtype
- nn.Module.to()
- nn.Module.to_empty()
- nn.Module.load_state_dict()
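A minimal sketch of the swapping behavior on the conversion path (names are illustrative; the flag is restored to its default afterwards):

```python
import torch
import torch.nn as nn

torch.__future__.set_swap_module_params_on_conversion(True)

m = nn.Linear(2, 2)
before = m.weight  # keep a reference to the Parameter

m.double()  # the payloads are exchanged via swap_tensors()

print(m.weight is before)  # True: the Parameter object keeps its identity
print(before.dtype)        # torch.float64: it now holds the converted data

torch.__future__.set_swap_module_params_on_conversion(False)  # restore default
```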
The semantics for load_state_dict() when this is set are as follows (see the sketch after this list):

1. For each parameter/buffer, its corresponding state_dict['key'] is transformed via module_load() (i.e. res = param.module_load(state_dict['key']))
2. If necessary, res will be wrapped in a Parameter
3. The parameter/buffer in the module will be swapped via swap_tensors() with res

Parameters
- value (bool) – Whether to use swap_tensors() or not.
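A minimal sketch of these load_state_dict() semantics (the shapes and names are illustrative):

```python
import torch
import torch.nn as nn

torch.__future__.set_swap_module_params_on_conversion(True)

m = nn.Linear(2, 2)
before = m.weight  # keep a reference to the Parameter

state_dict = {"weight": torch.ones(2, 2), "bias": torch.zeros(2)}
m.load_state_dict(state_dict)

# Each entry was transformed via module_load() and swapped in with
# swap_tensors(): the Parameter keeps its identity but now holds the
# loaded values, instead of being copy_()-ed into.
print(m.weight is before)                     # True
print(torch.equal(before, torch.ones(2, 2)))  # True

torch.__future__.set_swap_module_params_on_conversion(False)  # restore default
```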
 
torch.__future__.get_swap_module_params_on_conversion()

Returns whether to use swap_tensors() instead of setting .data to change the existing parameters in-place when converting an nn.Module. Defaults to False.

See set_swap_module_params_on_conversion() for more information.

Return type
- bool