ThreadingTransport

class torchrl.modules.inference_server.ThreadingTransport[source]

In-process transport for actors that are threads.

Uses a shared list protected by a threading.Condition as the request queue and Future objects for response routing.

This is the simplest backend and is appropriate when all actors live in the same process (e.g. running in a ThreadPoolExecutor).

drain(max_items: int) → tuple[list[TensorDictBase], list[Future]][source]

Dequeue up to max_items pending requests.

resolve(callback: Future, result: TensorDictBase) → None[source]

Set the result on the actor’s Future.

resolve_exception(callback: Future, exc: BaseException) → None[source]

Set an exception on the actor’s Future.

submit(td: TensorDictBase) → Future[TensorDictBase][source]

Enqueue a request and return a Future for the result.

wait_for_work(timeout: float) → None[source]

Block until at least one request is enqueued or timeout elapses.
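The methods above form a simple request/response protocol: actor threads call submit() and block on the returned Future, while a server loop alternates wait_for_work(), drain(), and resolve(). The sketch below illustrates that pattern with a minimal stand-in transport built from a threading.Condition and concurrent.futures.Future; the class name, the plain-list queue, and the doubling "inference" step are illustrative assumptions, not torchrl's actual implementation.

```python
import threading
from concurrent.futures import Future

class MiniThreadingTransport:
    """Hypothetical sketch of a ThreadingTransport-style in-process backend."""

    def __init__(self):
        self._cond = threading.Condition()
        self._queue = []  # pending (request, Future) pairs

    def submit(self, request):
        """Enqueue a request and return a Future for its result."""
        fut = Future()
        with self._cond:
            self._queue.append((request, fut))
            self._cond.notify()
        return fut

    def wait_for_work(self, timeout):
        """Block until a request is enqueued or the timeout elapses."""
        with self._cond:
            if not self._queue:
                self._cond.wait(timeout)

    def drain(self, max_items):
        """Dequeue up to max_items requests; return (requests, futures)."""
        with self._cond:
            batch = self._queue[:max_items]
            self._queue = self._queue[max_items:]
        if not batch:
            return [], []
        reqs, futs = zip(*batch)
        return list(reqs), list(futs)

    def resolve(self, fut, result):
        """Deliver a result to the waiting actor."""
        fut.set_result(result)

    def resolve_exception(self, fut, exc):
        """Deliver an exception to the waiting actor."""
        fut.set_exception(exc)

# Server loop: batch pending requests and resolve their futures.
transport = MiniThreadingTransport()
stop = threading.Event()

def server():
    while not stop.is_set():
        transport.wait_for_work(timeout=0.1)
        reqs, futs = transport.drain(max_items=8)
        for req, fut in zip(reqs, futs):
            transport.resolve(fut, req * 2)  # stand-in for model inference

worker = threading.Thread(target=server, daemon=True)
worker.start()

# Actor side: submit work, then block on the returned Futures.
futures = [transport.submit(i) for i in range(5)]
results = [f.result(timeout=5) for f in futures]
stop.set()
worker.join()
print(results)
```

Because results are routed through per-request Future objects rather than a shared response queue, many actor threads can submit concurrently and each one wakes only when its own request is resolved.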
