Loss Functions#
Loss functions measure how well the model’s predictions match the targets. The choice of loss function depends on your task type and data characteristics.
Regression losses:
- L1Loss/MSELoss: basic regression losses (mean absolute error vs. mean squared error)
- SmoothL1Loss/HuberLoss: robust to outliers
Classification losses:
- CrossEntropyLoss: multi-class classification (combines LogSoftmax + NLLLoss)
- NLLLoss: negative log likelihood (use with LogSoftmax output)
- BCELoss/BCEWithLogitsLoss: binary classification
Specialized losses:
- CTCLoss: sequence-to-sequence without alignment (e.g. speech recognition)
- TripletMarginLoss: metric learning (similarity/embedding tasks)
- CosineEmbeddingLoss: similarity learning with cosine distance
L1Loss#
-
class L1Loss : public torch::nn::ModuleHolder<L1LossImpl>#
A ModuleHolder subclass for L1LossImpl. See the documentation for L1LossImpl to learn what methods it provides, and examples of how to use L1Loss with torch::nn::L1LossOptions. See the documentation for ModuleHolder to learn about PyTorch's module storage semantics.
Public Types
-
using Impl = L1LossImpl#
-
struct L1LossImpl : public torch::nn::Cloneable<L1LossImpl>#
Creates a criterion that measures the mean absolute error (MAE) between each element in the input x and the target y.
See https://pytorch.org/docs/main/nn.html#torch.nn.L1Loss to learn about the exact behavior of this module.
See the documentation for torch::nn::L1LossOptions to learn what constructor arguments are supported for this module.
Example:
L1Loss model(L1LossOptions(torch::kNone));
Public Functions
-
explicit L1LossImpl(L1LossOptions options_ = {})#
-
virtual void reset() override#
reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules.
MSELoss#
-
class MSELoss : public torch::nn::ModuleHolder<MSELossImpl>#
A ModuleHolder subclass for MSELossImpl. See the documentation for MSELossImpl to learn what methods it provides, and examples of how to use MSELoss with torch::nn::MSELossOptions. See the documentation for ModuleHolder to learn about PyTorch's module storage semantics.
Public Types
-
using Impl = MSELossImpl#
-
struct MSELossImpl : public torch::nn::Cloneable<MSELossImpl>#
Creates a criterion that measures the mean squared error (squared L2 norm) between each element in the input x and the target y.
See https://pytorch.org/docs/main/nn.html#torch.nn.MSELoss to learn about the exact behavior of this module.
See the documentation for torch::nn::MSELossOptions to learn what constructor arguments are supported for this module.
Example:
MSELoss model(MSELossOptions(torch::kNone));
Public Functions
-
explicit MSELossImpl(MSELossOptions options_ = {})#
-
virtual void reset() override#
reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules.
Example:
auto loss_fn = torch::nn::MSELoss();
auto loss = loss_fn->forward(predictions, targets);
CrossEntropyLoss#
-
class CrossEntropyLoss : public torch::nn::ModuleHolder<CrossEntropyLossImpl>#
A ModuleHolder subclass for CrossEntropyLossImpl. See the documentation for CrossEntropyLossImpl to learn what methods it provides, and examples of how to use CrossEntropyLoss with torch::nn::CrossEntropyLossOptions. See the documentation for ModuleHolder to learn about PyTorch's module storage semantics.
Public Types
-
using Impl = CrossEntropyLossImpl#
-
struct CrossEntropyLossImpl : public torch::nn::Cloneable<CrossEntropyLossImpl>#
Creates a criterion that computes cross entropy loss between input and target.
See https://pytorch.org/docs/main/nn.html#torch.nn.CrossEntropyLoss to learn about the exact behavior of this module.
See the documentation for torch::nn::CrossEntropyLossOptions to learn what constructor arguments are supported for this module.
Example:
CrossEntropyLoss model(CrossEntropyLossOptions().ignore_index(-100).reduction(torch::kMean));
Public Functions
-
explicit CrossEntropyLossImpl(CrossEntropyLossOptions options_ = {})#
-
virtual void reset() override#
reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules.
-
virtual void pretty_print(std::ostream &stream) const override#
Pretty prints the CrossEntropyLoss module into the given stream.
Example:
auto loss_fn = torch::nn::CrossEntropyLoss();
auto logits = torch::randn({32, 10}); // [batch, num_classes]
auto targets = torch::randint(0, 10, {32}); // [batch]
auto loss = loss_fn->forward(logits, targets);
NLLLoss#
-
class NLLLoss : public torch::nn::ModuleHolder<NLLLossImpl>#
A ModuleHolder subclass for NLLLossImpl. See the documentation for NLLLossImpl to learn what methods it provides, and examples of how to use NLLLoss with torch::nn::NLLLossOptions. See the documentation for ModuleHolder to learn about PyTorch's module storage semantics.
Public Types
-
using Impl = NLLLossImpl#
-
struct NLLLossImpl : public torch::nn::Cloneable<NLLLossImpl>#
The negative log likelihood loss.
It is useful to train a classification problem with C classes.
See https://pytorch.org/docs/main/nn.html#torch.nn.NLLLoss to learn about the exact behavior of this module.
See the documentation for torch::nn::NLLLossOptions to learn what constructor arguments are supported for this module.
Example:
NLLLoss model(NLLLossOptions().ignore_index(-100).reduction(torch::kMean));
Public Functions
-
explicit NLLLossImpl(NLLLossOptions options_ = {})#
-
virtual void pretty_print(std::ostream &stream) const override#
Pretty prints the NLLLoss module into the given stream.
BCELoss#
-
class BCELoss : public torch::nn::ModuleHolder<BCELossImpl>#
A ModuleHolder subclass for BCELossImpl. See the documentation for BCELossImpl to learn what methods it provides, and examples of how to use BCELoss with torch::nn::BCELossOptions. See the documentation for ModuleHolder to learn about PyTorch's module storage semantics.
Public Types
-
using Impl = BCELossImpl#
-
struct BCELossImpl : public torch::nn::Cloneable<BCELossImpl>#
Creates a criterion that measures the Binary Cross Entropy between the target and the output.
See https://pytorch.org/docs/main/nn.html#torch.nn.BCELoss to learn about the exact behavior of this module.
See the documentation for torch::nn::BCELossOptions to learn what constructor arguments are supported for this module.
Example:
BCELoss model(BCELossOptions().reduction(torch::kNone).weight(weight));
Public Functions
-
explicit BCELossImpl(BCELossOptions options_ = {})#
-
virtual void reset() override#
reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules.
BCEWithLogitsLoss#
-
class BCEWithLogitsLoss : public torch::nn::ModuleHolder<BCEWithLogitsLossImpl>#
A ModuleHolder subclass for BCEWithLogitsLossImpl. See the documentation for BCEWithLogitsLossImpl to learn what methods it provides, and examples of how to use BCEWithLogitsLoss with torch::nn::BCEWithLogitsLossOptions. See the documentation for ModuleHolder to learn about PyTorch's module storage semantics.
Public Types
-
using Impl = BCEWithLogitsLossImpl#
-
struct BCEWithLogitsLossImpl : public torch::nn::Cloneable<BCEWithLogitsLossImpl>#
This loss combines a Sigmoid layer and the BCELoss in one single class.
This version is more numerically stable than using a plain Sigmoid followed by a BCELoss because, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability.
See https://pytorch.org/docs/main/nn.html#torch.nn.BCEWithLogitsLoss to learn about the exact behavior of this module.
See the documentation for torch::nn::BCEWithLogitsLossOptions to learn what constructor arguments are supported for this module.
Example:
BCEWithLogitsLoss model(BCEWithLogitsLossOptions().reduction(torch::kNone).weight(weight));
Public Functions
-
explicit BCEWithLogitsLossImpl(BCEWithLogitsLossOptions options_ = {})#
-
virtual void reset() override#
reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules.
-
virtual void pretty_print(std::ostream &stream) const override#
Pretty prints the BCEWithLogitsLoss module into the given stream.
HuberLoss#
-
class HuberLoss : public torch::nn::ModuleHolder<HuberLossImpl>#
A ModuleHolder subclass for HuberLossImpl. See the documentation for HuberLossImpl to learn what methods it provides, and examples of how to use HuberLoss with torch::nn::HuberLossOptions. See the documentation for ModuleHolder to learn about PyTorch's module storage semantics.
Public Types
-
using Impl = HuberLossImpl#
-
struct HuberLossImpl : public torch::nn::Cloneable<HuberLossImpl>#
Creates a criterion that uses a squared term if the absolute element-wise error falls below delta and a delta-scaled L1 term otherwise.
See https://pytorch.org/docs/main/nn.html#torch.nn.HuberLoss to learn about the exact behavior of this module.
See the documentation for torch::nn::HuberLossOptions to learn what constructor arguments are supported for this module.
Example:
HuberLoss model(HuberLossOptions().reduction(torch::kNone).delta(0.5));
Public Functions
-
explicit HuberLossImpl(HuberLossOptions options_ = {})#
-
virtual void reset() override#
reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules.
SmoothL1Loss#
-
class SmoothL1Loss : public torch::nn::ModuleHolder<SmoothL1LossImpl>#
A ModuleHolder subclass for SmoothL1LossImpl. See the documentation for SmoothL1LossImpl to learn what methods it provides, and examples of how to use SmoothL1Loss with torch::nn::SmoothL1LossOptions. See the documentation for ModuleHolder to learn about PyTorch's module storage semantics.
Public Types
-
using Impl = SmoothL1LossImpl#
-
struct SmoothL1LossImpl : public torch::nn::Cloneable<SmoothL1LossImpl>#
Creates a criterion that uses a squared term if the absolute element-wise error falls below beta and an L1 term otherwise.
It is less sensitive to outliers than MSELoss and in some cases prevents exploding gradients (e.g. see the paper Fast R-CNN by Ross Girshick).
See https://pytorch.org/docs/main/nn.html#torch.nn.SmoothL1Loss to learn about the exact behavior of this module.
See the documentation for torch::nn::SmoothL1LossOptions to learn what constructor arguments are supported for this module.
Example:
SmoothL1Loss model(SmoothL1LossOptions().reduction(torch::kNone).beta(0.5));
Public Functions
-
explicit SmoothL1LossImpl(SmoothL1LossOptions options = {})#
-
virtual void reset() override#
reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules.
KLDivLoss#
-
class KLDivLoss : public torch::nn::ModuleHolder<KLDivLossImpl>#
A ModuleHolder subclass for KLDivLossImpl. See the documentation for KLDivLossImpl to learn what methods it provides, and examples of how to use KLDivLoss with torch::nn::KLDivLossOptions. See the documentation for ModuleHolder to learn about PyTorch's module storage semantics.
Public Types
-
using Impl = KLDivLossImpl#
-
struct KLDivLossImpl : public torch::nn::Cloneable<KLDivLossImpl>#
The Kullback-Leibler divergence loss.
See https://pytorch.org/docs/main/nn.html#torch.nn.KLDivLoss to learn about the exact behavior of this module.
See the documentation for torch::nn::KLDivLossOptions to learn what constructor arguments are supported for this module.
Example:
KLDivLoss model(KLDivLossOptions().reduction(torch::kNone));
Public Functions
-
explicit KLDivLossImpl(KLDivLossOptions options_ = {})#
-
virtual void reset() override#
reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules.
CTCLoss#
-
class CTCLoss : public torch::nn::ModuleHolder<CTCLossImpl>#
A ModuleHolder subclass for CTCLossImpl. See the documentation for CTCLossImpl to learn what methods it provides, and examples of how to use CTCLoss with torch::nn::CTCLossOptions. See the documentation for ModuleHolder to learn about PyTorch's module storage semantics.
Public Types
-
using Impl = CTCLossImpl#
-
struct CTCLossImpl : public torch::nn::Cloneable<CTCLossImpl>#
The Connectionist Temporal Classification loss.
See https://pytorch.org/docs/main/nn.html#torch.nn.CTCLoss to learn about the exact behavior of this module.
See the documentation for torch::nn::CTCLossOptions to learn what constructor arguments are supported for this module.
Example:
CTCLoss model(CTCLossOptions().blank(42).zero_infinity(false).reduction(torch::kSum));
Public Functions
-
explicit CTCLossImpl(CTCLossOptions options_ = {})#
-
virtual void reset() override#
reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules.
PoissonNLLLoss#
-
class PoissonNLLLoss : public torch::nn::ModuleHolder<PoissonNLLLossImpl>#
A ModuleHolder subclass for PoissonNLLLossImpl. See the documentation for PoissonNLLLossImpl to learn what methods it provides, and examples of how to use PoissonNLLLoss with torch::nn::PoissonNLLLossOptions. See the documentation for ModuleHolder to learn about PyTorch's module storage semantics.
Public Types
-
using Impl = PoissonNLLLossImpl#
-
struct PoissonNLLLossImpl : public torch::nn::Cloneable<PoissonNLLLossImpl>#
Negative log likelihood loss with Poisson distribution of target.
See https://pytorch.org/docs/main/nn.html#torch.nn.PoissonNLLLoss to learn about the exact behavior of this module.
See the documentation for torch::nn::PoissonNLLLossOptions to learn what constructor arguments are supported for this module.
Example:
PoissonNLLLoss model(PoissonNLLLossOptions().log_input(false).full(true).eps(0.42).reduction(torch::kSum));
Public Functions
-
explicit PoissonNLLLossImpl(PoissonNLLLossOptions options_ = {})#
-
virtual void reset() override#
reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules.
-
virtual void pretty_print(std::ostream &stream) const override#
Pretty prints the PoissonNLLLoss module into the given stream.
MarginRankingLoss#
-
class MarginRankingLoss : public torch::nn::ModuleHolder<MarginRankingLossImpl>#
A ModuleHolder subclass for MarginRankingLossImpl. See the documentation for MarginRankingLossImpl to learn what methods it provides, and examples of how to use MarginRankingLoss with torch::nn::MarginRankingLossOptions. See the documentation for ModuleHolder to learn about PyTorch's module storage semantics.
Public Types
-
using Impl = MarginRankingLossImpl#
-
struct MarginRankingLossImpl : public torch::nn::Cloneable<MarginRankingLossImpl>#
Creates a criterion that measures the loss given inputs x1 and x2 (two 1D mini-batch Tensors) and a 1D mini-batch label tensor y containing 1 or -1.
See https://pytorch.org/docs/main/nn.html#torch.nn.MarginRankingLoss to learn about the exact behavior of this module.
See the documentation for torch::nn::MarginRankingLossOptions to learn what constructor arguments are supported for this module.
Example:
MarginRankingLoss model(MarginRankingLossOptions().margin(0.5).reduction(torch::kSum));
Public Functions
-
explicit MarginRankingLossImpl(MarginRankingLossOptions options_ = {})#
-
virtual void reset() override#
reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules.
-
virtual void pretty_print(std::ostream &stream) const override#
Pretty prints the MarginRankingLoss module into the given stream.
HingeEmbeddingLoss#
-
class HingeEmbeddingLoss : public torch::nn::ModuleHolder<HingeEmbeddingLossImpl>#
A ModuleHolder subclass for HingeEmbeddingLossImpl. See the documentation for HingeEmbeddingLossImpl to learn what methods it provides, and examples of how to use HingeEmbeddingLoss with torch::nn::HingeEmbeddingLossOptions. See the documentation for ModuleHolder to learn about PyTorch's module storage semantics.
Public Types
-
using Impl = HingeEmbeddingLossImpl#
-
struct HingeEmbeddingLossImpl : public torch::nn::Cloneable<HingeEmbeddingLossImpl>#
Creates a criterion that measures the loss given an input tensor x and a labels tensor y containing 1 or -1.
See https://pytorch.org/docs/main/nn.html#torch.nn.HingeEmbeddingLoss to learn about the exact behavior of this module.
See the documentation for torch::nn::HingeEmbeddingLossOptions to learn what constructor arguments are supported for this module.
Example:
HingeEmbeddingLoss model(HingeEmbeddingLossOptions().margin(4).reduction(torch::kNone));
Public Functions
-
explicit HingeEmbeddingLossImpl(HingeEmbeddingLossOptions options_ = {})#
-
virtual void reset() override#
reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules.
-
virtual void pretty_print(std::ostream &stream) const override#
Pretty prints the HingeEmbeddingLoss module into the given stream.
CosineEmbeddingLoss#
-
class CosineEmbeddingLoss : public torch::nn::ModuleHolder<CosineEmbeddingLossImpl>#
A ModuleHolder subclass for CosineEmbeddingLossImpl. See the documentation for CosineEmbeddingLossImpl to learn what methods it provides, and examples of how to use CosineEmbeddingLoss with torch::nn::CosineEmbeddingLossOptions. See the documentation for ModuleHolder to learn about PyTorch's module storage semantics.
Public Types
-
using Impl = CosineEmbeddingLossImpl#
-
struct CosineEmbeddingLossImpl : public torch::nn::Cloneable<CosineEmbeddingLossImpl>#
Creates a criterion that measures the loss given input tensors input1 and input2, and a Tensor label target with values 1 or -1.
This is used for measuring whether two inputs are similar or dissimilar, using the cosine distance, and is typically used for learning nonlinear embeddings or semi-supervised learning.
See https://pytorch.org/docs/main/nn.html#torch.nn.CosineEmbeddingLoss to learn about the exact behavior of this module.
See the documentation for torch::nn::CosineEmbeddingLossOptions to learn what constructor arguments are supported for this module.
Example:
CosineEmbeddingLoss model(CosineEmbeddingLossOptions().margin(0.5));
Public Functions
-
explicit CosineEmbeddingLossImpl(CosineEmbeddingLossOptions options_ = {})#
-
virtual void reset() override#
reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules.
-
virtual void pretty_print(std::ostream &stream) const override#
Pretty prints the CosineEmbeddingLoss module into the given stream.
MultiMarginLoss#
-
class MultiMarginLoss : public torch::nn::ModuleHolder<MultiMarginLossImpl>#
A ModuleHolder subclass for MultiMarginLossImpl. See the documentation for MultiMarginLossImpl to learn what methods it provides, and examples of how to use MultiMarginLoss with torch::nn::MultiMarginLossOptions. See the documentation for ModuleHolder to learn about PyTorch's module storage semantics.
Public Types
-
using Impl = MultiMarginLossImpl#
-
struct MultiMarginLossImpl : public torch::nn::Cloneable<MultiMarginLossImpl>#
Creates a criterion that optimizes a multi-class classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (a 1D tensor of target class indices, 0 <= y <= x.size(1) - 1).
See https://pytorch.org/docs/main/nn.html#torch.nn.MultiMarginLoss to learn about the exact behavior of this module.
See the documentation for torch::nn::MultiMarginLossOptions to learn what constructor arguments are supported for this module.
Example:
MultiMarginLoss model(MultiMarginLossOptions().margin(2).weight(weight));
Public Functions
-
explicit MultiMarginLossImpl(MultiMarginLossOptions options_ = {})#
-
virtual void reset() override#
reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules.
-
virtual void pretty_print(std::ostream &stream) const override#
Pretty prints the MultiMarginLoss module into the given stream.
MultiLabelMarginLoss#
-
class MultiLabelMarginLoss : public torch::nn::ModuleHolder<MultiLabelMarginLossImpl>#
A ModuleHolder subclass for MultiLabelMarginLossImpl. See the documentation for MultiLabelMarginLossImpl to learn what methods it provides, and examples of how to use MultiLabelMarginLoss with torch::nn::MultiLabelMarginLossOptions. See the documentation for ModuleHolder to learn about PyTorch's module storage semantics.
Public Types
-
using Impl = MultiLabelMarginLossImpl#
-
struct MultiLabelMarginLossImpl : public torch::nn::Cloneable<MultiLabelMarginLossImpl>#
Creates a criterion that optimizes a multi-class multi-classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (a 2D Tensor of target class indices).
See https://pytorch.org/docs/main/nn.html#torch.nn.MultiLabelMarginLoss to learn about the exact behavior of this module.
See the documentation for torch::nn::MultiLabelMarginLossOptions to learn what constructor arguments are supported for this module.
Example:
MultiLabelMarginLoss model(MultiLabelMarginLossOptions(torch::kNone));
Public Functions
-
explicit MultiLabelMarginLossImpl(MultiLabelMarginLossOptions options_ = {})#
-
virtual void reset() override#
reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules.
MultiLabelSoftMarginLoss#
-
class MultiLabelSoftMarginLoss : public torch::nn::ModuleHolder<MultiLabelSoftMarginLossImpl>#
A ModuleHolder subclass for MultiLabelSoftMarginLossImpl. See the documentation for MultiLabelSoftMarginLossImpl to learn what methods it provides, and examples of how to use MultiLabelSoftMarginLoss with torch::nn::MultiLabelSoftMarginLossOptions. See the documentation for ModuleHolder to learn about PyTorch's module storage semantics.
Public Types
-
using Impl = MultiLabelSoftMarginLossImpl#
-
struct MultiLabelSoftMarginLossImpl : public torch::nn::Cloneable<MultiLabelSoftMarginLossImpl>#
Creates a criterion that optimizes a multi-label one-versus-all loss based on max-entropy, between input x and target y of size (N, C).
See https://pytorch.org/docs/main/nn.html#torch.nn.MultiLabelSoftMarginLoss to learn about the exact behavior of this module.
See the documentation for torch::nn::MultiLabelSoftMarginLossOptions to learn what constructor arguments are supported for this module.
Example:
MultiLabelSoftMarginLoss model(MultiLabelSoftMarginLossOptions().reduction(torch::kNone).weight(weight));
Public Functions
-
explicit MultiLabelSoftMarginLossImpl(MultiLabelSoftMarginLossOptions options_ = {})#
-
virtual void pretty_print(std::ostream &stream) const override#
Pretty prints the MultiLabelSoftMarginLoss module into the given stream.
SoftMarginLoss#
-
class SoftMarginLoss : public torch::nn::ModuleHolder<SoftMarginLossImpl>#
A ModuleHolder subclass for SoftMarginLossImpl. See the documentation for SoftMarginLossImpl to learn what methods it provides, and examples of how to use SoftMarginLoss with torch::nn::SoftMarginLossOptions. See the documentation for ModuleHolder to learn about PyTorch's module storage semantics.
Public Types
-
using Impl = SoftMarginLossImpl#
-
struct SoftMarginLossImpl : public torch::nn::Cloneable<SoftMarginLossImpl>#
Creates a criterion that optimizes a two-class classification logistic loss between input tensor x and target tensor y containing 1 or -1.
See https://pytorch.org/docs/main/nn.html#torch.nn.SoftMarginLoss to learn about the exact behavior of this module.
See the documentation for torch::nn::SoftMarginLossOptions to learn what constructor arguments are supported for this module.
Example:
SoftMarginLoss model(SoftMarginLossOptions(torch::kNone));
Public Functions
-
explicit SoftMarginLossImpl(SoftMarginLossOptions options_ = {})#
-
virtual void pretty_print(std::ostream &stream) const override#
Pretty prints the SoftMarginLoss module into the given stream.
TripletMarginLoss#
-
class TripletMarginLoss : public torch::nn::ModuleHolder<TripletMarginLossImpl>#
A ModuleHolder subclass for TripletMarginLossImpl. See the documentation for TripletMarginLossImpl to learn what methods it provides, and examples of how to use TripletMarginLoss with torch::nn::TripletMarginLossOptions. See the documentation for ModuleHolder to learn about PyTorch's module storage semantics.
Public Types
-
using Impl = TripletMarginLossImpl#
-
struct TripletMarginLossImpl : public torch::nn::Cloneable<TripletMarginLossImpl>#
Creates a criterion that measures the triplet loss given input tensors x1, x2, x3 and a margin with a value greater than 0.
This is used for measuring the relative similarity between samples. A triplet is composed of a, p and n (anchor, positive example and negative example, respectively). The shapes of all input tensors should be (N, D).
See https://pytorch.org/docs/main/nn.html#torch.nn.TripletMarginLoss to learn about the exact behavior of this module.
See the documentation for torch::nn::TripletMarginLossOptions to learn what constructor arguments are supported for this module.
Example:
TripletMarginLoss model(TripletMarginLossOptions().margin(3).p(2).eps(1e-06).swap(false));
Public Functions
-
explicit TripletMarginLossImpl(TripletMarginLossOptions options_ = {})#
-
virtual void reset() override#
reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules.
-
virtual void pretty_print(std::ostream &stream) const override#
Pretty prints the TripletMarginLoss module into the given stream.
TripletMarginWithDistanceLoss#
-
class TripletMarginWithDistanceLoss : public torch::nn::ModuleHolder<TripletMarginWithDistanceLossImpl>#
A ModuleHolder subclass for TripletMarginWithDistanceLossImpl. See the documentation for TripletMarginWithDistanceLossImpl to learn what methods it provides, and examples of how to use TripletMarginWithDistanceLoss with torch::nn::TripletMarginWithDistanceLossOptions. See the documentation for ModuleHolder to learn about PyTorch's module storage semantics.
Public Types
-
using Impl = TripletMarginWithDistanceLossImpl#
-
struct TripletMarginWithDistanceLossImpl : public torch::nn::Cloneable<TripletMarginWithDistanceLossImpl>#
Creates a criterion that measures the triplet loss given input tensors a, p, and n (representing the anchor, positive, and negative examples, respectively), and a nonnegative, real-valued distance function used to compute the relationship between the anchor and positive example ("positive distance") and the anchor and negative example ("negative distance").
See https://pytorch.org/docs/main/nn.html#torch.nn.TripletMarginWithDistanceLoss to learn about the exact behavior of this module.
See the documentation for torch::nn::TripletMarginWithDistanceLossOptions to learn what constructor arguments are supported for this module.
Example:
TripletMarginWithDistanceLoss model(TripletMarginWithDistanceLossOptions().margin(3).swap(false));
Public Functions
-
explicit TripletMarginWithDistanceLossImpl(TripletMarginWithDistanceLossOptions options_ = {})#
-
virtual void reset() override#
reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules.
-
virtual void pretty_print(std::ostream &stream) const override#
Pretty prints the TripletMarginWithDistanceLoss module into the given stream.