RunningAverage
- class ignite.metrics.RunningAverage(src=None, alpha=0.98, output_transform=None, epoch_bound=None, device=None, skip_unrolling=False)[source]
Compute the running average of a metric or the output of the process function.
- Parameters
  - src (Optional[Metric]) – input source: an instance of Metric or None. The latter corresponds to engine.state.output, which holds the output of the process function.
  - alpha (float) – running average decay factor, default 0.98.
  - output_transform (Optional[Callable]) – a function used to transform the output if src is None, in which case the output corresponds to the output of the process function. Otherwise it should be None.
  - epoch_bound (Optional[bool]) – whether the running average should be reset after each epoch. It is deprecated in favor of the usage argument of the attach() method: setting epoch_bound to True is equivalent to usage=SingleEpochRunningBatchWise() and setting it to False is equivalent to usage=RunningBatchWise() in attach(). Default None.
  - device (Optional[Union[str, device]]) – specifies which device updates are accumulated on. Should be None when src is an instance of Metric, as the running average will use src's device. Otherwise, defaults to CPU. Only applicable when the computed value of the metric is a tensor.
  - skip_unrolling (bool) – specifies whether the output should be unrolled before being fed to the update method. Should be true for multi-output models, for example, if y_pred contains multi-output as (y_pred_a, y_pred_b). Alternatively, output_transform can be used to handle this.
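A minimal sketch of the migration implied by the epoch_bound deprecation note, assuming SingleEpochRunningBatchWise and RunningBatchWise are importable from ignite.metrics.metric (as the attach() signature below suggests) and that trainer is an existing Engine:

```python
from ignite.engine import Engine
from ignite.metrics import Accuracy, RunningAverage
from ignite.metrics.metric import RunningBatchWise, SingleEpochRunningBatchWise

trainer = Engine(lambda engine, batch: batch)  # placeholder engine for illustration
metric = RunningAverage(Accuracy())

# Instead of RunningAverage(Accuracy(), epoch_bound=True), reset each epoch via:
metric.attach(trainer, "running_avg_accuracy", usage=SingleEpochRunningBatchWise())

# Instead of RunningAverage(Accuracy(), epoch_bound=False), keep the average
# across epochs via:
# metric.attach(trainer, "running_avg_accuracy", usage=RunningBatchWise())
```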
Examples
For more information on how the metric works with Engine, visit Attach Engine API.

```python
from collections import OrderedDict

import torch
from torch import nn, optim

from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.metrics.regression import *
from ignite.utils import *

# create default evaluator for doctests
def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)

# create default optimizer for doctests
param_tensor = torch.zeros([1], requires_grad=True)
default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

# create default trainer for doctests
# as handlers could be attached to the trainer,
# each test must define its own trainer using `.. testsetup:`
def get_default_trainer():
    def train_step(engine, batch):
        return batch
    return Engine(train_step)

# create default model for doctests
default_model = nn.Sequential(OrderedDict([
    ('base', nn.Linear(4, 2)),
    ('fc', nn.Linear(2, 1))
]))

manual_seed(666)
```
```python
default_trainer = get_default_trainer()

accuracy = Accuracy()
metric = RunningAverage(accuracy)
metric.attach(default_trainer, 'running_avg_accuracy')

@default_trainer.on(Events.ITERATION_COMPLETED)
def log_running_avg_metrics():
    print(default_trainer.state.metrics['running_avg_accuracy'])

y_true = [torch.tensor(y) for y in [[0], [1], [0], [1], [0], [1]]]
y_pred = [torch.tensor(y) for y in [[0], [0], [0], [1], [1], [1]]]

state = default_trainer.run(zip(y_pred, y_true))
```

Output:

```
1.0
0.98
0.98039...
0.98079...
0.96117...
0.96195...
```
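The printed values follow the exponential moving average recurrence v_t = alpha * v_{t-1} + (1 - alpha) * m_t, seeded with the first computed value. A quick standalone check of that arithmetic, using the per-batch Accuracy values 1, 0, 1, 1, 0, 1 produced by the run above:

```python
alpha = 0.98
batch_accuracies = [1.0, 0.0, 1.0, 1.0, 0.0, 1.0]  # Accuracy of each batch above

v = None
for m in batch_accuracies:
    # the first value seeds the average; afterwards the EMA recurrence applies
    v = m if v is None else alpha * v + (1 - alpha) * m
    print(v)  # 1.0, 0.98, 0.98039..., 0.98079..., 0.96117..., 0.96195...
```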
```python
default_trainer = get_default_trainer()

metric = RunningAverage(output_transform=lambda x: x.item())
metric.attach(default_trainer, 'running_avg_accuracy')

@default_trainer.on(Events.ITERATION_COMPLETED)
def log_running_avg_metrics():
    print(default_trainer.state.metrics['running_avg_accuracy'])

y = [torch.tensor(y) for y in [[0], [1], [0], [1], [0], [1]]]

state = default_trainer.run(y)
```

Output:

```
0.0
0.020000...
0.019600...
0.039208...
0.038423...
0.057655...
```
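Here no src metric is wrapped: the running average is seeded with the first engine.state.output (0.0) and then follows the same recurrence, e.g. 0.98 * 0.0 + 0.02 * 1 = 0.02 at the second iteration.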
Changed in version 0.5.1: skip_unrolling argument is added.

Methods
- attach() – Attach the metric to the engine using the events determined by the usage.
- compute() – Computes the metric based on its accumulated state.
- detach() – Detaches the current metric from the engine so that no metric computation is done during the run.
- reset() – Resets the metric to its initial state.
- update() – Updates the metric's state using the passed batch output.
- attach(engine, name, usage=<ignite.metrics.metric.RunningBatchWise object>)[source]
Attach the metric to the engine using the events determined by the usage.

- Parameters
  - engine (Engine) – the engine to get attached to.
  - name (str) – the name by which the metric is inserted into the engine.state.metrics dictionary.
  - usage (Union[str, MetricUsage]) – the usage determining on which events the metric is reset, updated and computed. It should be an instance of one of the MetricUsages in the following table.

| usage class | Description |
| --- | --- |
| RunningBatchWise | Running average of the src metric or engine.state.output is computed across batches. In the former case, on each batch, src is reset, updated and computed, and then its value is retrieved. Default. |
| SingleEpochRunningBatchWise | Same as above, but the running average is computed across the batches of a single epoch, so it is reset at the end of each epoch. |
| RunningEpochWise | Running average of the src metric or engine.state.output is computed across epochs. In the former case, src works as if it was attached in an EpochWise manner and its computed value is retrieved at the end of the epoch. The latter case doesn't make much sense for this usage, as the engine.state.output of the last batch is retrieved then. |
- Return type
None
RunningAverage retrieves engine.state.output at usage.ITERATION_COMPLETED if src is not given, and it is computed and updated at the usage.COMPLETED event, using either src (by manually calling its compute method) or engine.state.output. Also, if src is given, it is updated at usage.ITERATION_COMPLETED, but its reset event is determined by the usage type: if isinstance(usage, BatchWise) holds true, src is reset on BatchWise().STARTED, otherwise on EpochWise().STARTED if isinstance(usage, EpochWise).

Changed in version 0.5.1: Added usage argument.
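For instance, to smooth a metric across epochs rather than batches, the table above points to RunningEpochWise (a sketch; RunningEpochWise is assumed to live in ignite.metrics.metric alongside the RunningBatchWise default shown in the signature above):

```python
from ignite.engine import Engine
from ignite.metrics import Accuracy, RunningAverage
from ignite.metrics.metric import RunningEpochWise

trainer = Engine(lambda engine, batch: batch)  # placeholder engine for illustration

metric = RunningAverage(Accuracy())
# Smooth the epoch-wise Accuracy value across epochs instead of across batches.
metric.attach(trainer, "running_epoch_accuracy", usage=RunningEpochWise())
```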
- compute()[source]
Computes the metric based on its accumulated state.
By default, this is called at the end of each epoch.
- Returns
- the actual quantity of interest. However, if a Mapping is returned, it will be (shallow) flattened into engine.state.metrics when completed() is called.
- Return type
Any
- Raises
NotComputableError – raised when the metric cannot be computed.
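A minimal standalone sketch of the reset/update/compute cycle for the output-based variant, with behavior inferred from the recurrence in the examples above (inside an Engine run, these calls are made automatically at the events determined by the usage):

```python
from ignite.metrics import RunningAverage

metric = RunningAverage(output_transform=lambda x: x)

metric.reset()
metric.update(1.0)
print(metric.compute())  # 1.0: the first value seeds the running average

metric.update(0.0)
print(metric.compute())  # 0.98 = 0.98 * 1.0 + (1 - 0.98) * 0.0
```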
- detach(engine, usage=<ignite.metrics.metric.RunningBatchWise object>)[source]
Detaches the current metric from the engine so that no metric computation is done during the run. This method, in conjunction with attach(), can be useful if several metrics need to be computed with different periods. For example, one metric is computed every training epoch and another metric (e.g., a more expensive one) is computed every n-th training epoch.

- Parameters
engine (Engine) – the engine from which the metric must be detached
usage (Union[str, MetricUsage]) – the usage of the metric. Valid string values should be ‘epoch_wise’ (default) or ‘batch_wise’.
- Return type
None
Examples
```python
metric = ...
engine = ...
metric.detach(engine)
assert "mymetric" not in engine.run(data).metrics
assert not metric.is_attached(engine)
```
Example with usage:
```python
metric = ...
engine = ...
metric.detach(engine, usage="batch_wise")
assert "mymetric" not in engine.run(data).metrics
assert not metric.is_attached(engine, usage="batch_wise")
```
- required_output_keys: Optional[Tuple] = None
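For context: in the base Metric API, required_output_keys names the keys to extract when engine.state.output is a dictionary. RunningAverage sets it to None, so dict-style outputs are not unpacked automatically; a hypothetical sketch of selecting the value explicitly via output_transform instead:

```python
# Hypothetical: the process function returns {"loss": ..., "y_pred": ..., "y": ...};
# pick the scalar to smooth explicitly rather than relying on required_output_keys.
metric = RunningAverage(output_transform=lambda out: out["loss"])
```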