PackedSequence#
- class torch.nn.utils.rnn.PackedSequence(data, batch_sizes=None, sorted_indices=None, unsorted_indices=None)[source]#
Holds the data and list of batch_sizes of a packed sequence.
All RNN modules accept packed sequences as inputs.
Note
Instances of this class should never be created manually. They are meant to be instantiated by functions like pack_padded_sequence().
Batch sizes represent the number of elements at each sequence step in the batch, not the varying sequence lengths passed to pack_padded_sequence().
For instance, given sequences abc and x, the PackedSequence would contain data axbc with batch_sizes=[2,1,1] (see the example below).
- Variables
data (Tensor) – Tensor containing packed sequence
batch_sizes (Tensor) – Tensor of integers holding information about the batch size at each sequence step
sorted_indices (Tensor, optional) – Tensor of integers holding how this PackedSequence is constructed from sequences.
unsorted_indices (Tensor, optional) – Tensor of integers holding how to recover the original sequences in their correct order.
- Return type
Self
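A minimal sketch of how a PackedSequence is typically obtained, mirroring the abc / x example above; the scalar values 1–4 are arbitrary stand-ins for the tokens a, b, c and x:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

# Two sequences of lengths 3 and 1; the shorter one is zero-padded.
padded = torch.tensor([[1., 2., 3.],   # sequence "abc"
                       [4., 0., 0.]])  # sequence "x" (padded)
lengths = torch.tensor([3, 1])

packed = pack_padded_sequence(padded, lengths, batch_first=True)
print(packed.data)         # tensor([1., 4., 2., 3.])  i.e. a, x, b, c
print(packed.batch_sizes)  # tensor([2, 1, 1])
```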
Note
data can be on an arbitrary device and of an arbitrary dtype. sorted_indices and unsorted_indices must be torch.int64 tensors on the same device as data.
However, batch_sizes should always be a CPU torch.int64 tensor.
This invariant is maintained throughout the PackedSequence class, and by all functions that construct a PackedSequence in PyTorch (i.e., they only pass in tensors conforming to this constraint).
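A short sketch checking this invariant; it assumes a CUDA device may be available and falls back to the CPU otherwise:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

device = "cuda" if torch.cuda.is_available() else "cpu"
padded = torch.randn(2, 3, 4, device=device)  # (batch, max_len, features)
lengths = torch.tensor([3, 1])                # lengths stay on the CPU

packed = pack_padded_sequence(padded, lengths, batch_first=True,
                              enforce_sorted=False)
print(packed.data.device)            # same device as the input
print(packed.batch_sizes.device)     # always cpu
print(packed.batch_sizes.dtype)      # torch.int64
print(packed.sorted_indices.device)  # same device as data
```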
- count(value, /)#
Return number of occurrences of value.
- index(value, start=0, stop=9223372036854775807, /)#
Return first index of value.
Raises ValueError if the value is not present.
- to(dtype: dtype, non_blocking: bool = ..., copy: bool = ...) → Self[source]#
- to(device: Optional[Union[str, device, int]] = ..., dtype: Optional[dtype] = ..., non_blocking: bool = ..., copy: bool = ...) → Self
- to(other: Tensor, non_blocking: bool = ..., copy: bool = ...) → Self
Perform dtype and/or device conversion on self.data.
It has a similar signature to torch.Tensor.to(), except that optional arguments like non_blocking and copy should be passed as kwargs, not args, or they will not apply to the index tensors.
Note
If the self.data Tensor already has the correct torch.dtype and torch.device, then self is returned.
Otherwise, returns a copy with the desired configuration.
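A minimal sketch of the conversions described above; the device move is guarded since it assumes a GPU is present, and non_blocking is passed as a keyword argument so it also applies to the index tensors:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

padded = torch.randn(2, 3, 4)
lengths = torch.tensor([3, 1])
packed = pack_padded_sequence(padded, lengths, batch_first=True)

packed_half = packed.to(torch.float16)       # dtype conversion of packed.data
print(packed_half.data.dtype)                # torch.float16

if torch.cuda.is_available():
    # Device move; pass non_blocking as a kwarg, not a positional arg.
    packed_cuda = packed.to("cuda", non_blocking=True)
    print(packed_cuda.data.device)           # cuda:0
    print(packed_cuda.batch_sizes.device)    # still cpu, per the invariant above
```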