Storage Backends¶
TorchRL provides various storage backends for replay buffers, each optimized for different use cases.
CompressedListStorage | A storage that compresses and decompresses data. |
CompressedStorageCheckpointer | A storage checkpointer for CompressedListStorage. |
FlatStorageCheckpointer | Saves the storage in a compact form, saving space on the TED format. |
H5StorageCheckpointer | Saves the storage in a compact form, saving space on the TED format and using H5 format to save the data. |
ImmutableDatasetWriter | A blocking writer for immutable datasets. |
LazyMemmapStorage | A memory-mapped storage for tensors and tensordicts. |
LazyTensorStorage | A pre-allocated tensor storage for tensors and tensordicts. |
ListStorage | A storage stored in a list. |
LazyStackStorage | A ListStorage that returns LazyStackedTensorDict instances. |
ListStorageCheckpointer | A storage checkpointer for ListStorage. |
NestedStorageCheckpointer | Saves the storage in a compact form, saving space on the TED format and using memory-mapped nested tensors. |
Storage | A Storage is the container of a replay buffer. |
StorageCheckpointerBase | Public base class for storage checkpointers. |
StorageEnsemble | An ensemble of storages. |
StorageEnsembleCheckpointer | Checkpointer for ensemble storages. |
TensorStorage | A storage for tensors and tensordicts. |
TensorStorageCheckpointer | A storage checkpointer for TensorStorages. |
Storage Performance¶
The choice of storage strongly affects replay buffer sampling latency, especially
in distributed reinforcement learning settings with large data volumes.
LazyMemmapStorage is highly recommended in distributed settings with shared
storage: MemoryMappedTensors have a low serialization cost, and the file storage
location can be specified explicitly, which improves recovery after node failures.