ThreadingTransport¶
- class torchrl.modules.inference_server.ThreadingTransport[source]¶
In-process transport for actors that are threads.
Uses a shared list protected by a threading.Condition as the request queue and Future objects for response routing.
This is the simplest backend and is appropriate when all actors live in the same process (e.g. running in a ThreadPoolExecutor).
- drain(max_items: int) → tuple[list[TensorDictBase], list[Future]][source]¶
Dequeue up to max_items pending requests.
- resolve(callback: Future, result: TensorDictBase) → None[source]¶
Set the result on the actor’s Future.
- resolve_exception(callback: Future, exc: BaseException) → None[source]¶
Set an exception on the actor’s Future.
- submit(td: TensorDictBase) → Future[TensorDictBase][source]¶
Enqueue a request and return a Future for the result.
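The pattern behind this transport can be sketched with the standard library alone. The class below is a hypothetical minimal reimplementation, not the actual torchrl code: it pairs a Condition-protected list with one Future per request, matching the documented submit/drain/resolve signatures, and uses plain dicts in place of TensorDictBase payloads.

```python
import threading
from concurrent.futures import Future


class MiniThreadingTransport:
    """Illustrative sketch of a Condition-protected request queue with
    Future-based response routing. Payloads are plain dicts here rather
    than TensorDictBase instances."""

    def __init__(self):
        self._cond = threading.Condition()
        self._queue = []  # list of (payload, Future) pairs

    def submit(self, td):
        # Enqueue a request and hand the actor a Future to wait on.
        fut = Future()
        with self._cond:
            self._queue.append((td, fut))
            self._cond.notify()  # wake a thread blocked in drain()
        return fut

    def drain(self, max_items):
        # Dequeue up to max_items pending requests, blocking until at
        # least one is available.
        with self._cond:
            while not self._queue:
                self._cond.wait()
            batch = self._queue[:max_items]
            del self._queue[:max_items]
        tds = [td for td, _ in batch]
        futs = [fut for _, fut in batch]
        return tds, futs

    def resolve(self, callback, result):
        # Set the result on the actor's Future.
        callback.set_result(result)

    def resolve_exception(self, callback, exc):
        # Set an exception on the actor's Future; fut.result() re-raises it.
        callback.set_exception(exc)


transport = MiniThreadingTransport()
fut = transport.submit({"obs": 1})          # actor side
tds, futs = transport.drain(max_items=8)    # server side
transport.resolve(futs[0], {"action": 0})
fut.result()                                # actor unblocks with the result
```

Because submit and drain run in the same process, no serialization is needed: the Future object created in submit is the same object the server resolves, which is what makes this the simplest backend.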