Embedding Layers#
Embedding layers map discrete tokens (words, categories, IDs) to dense vector representations. They are the foundation of NLP models and recommendation systems.
- Embedding: Standard lookup table that maps indices to dense vectors
- EmbeddingBag: Computes sums or means of embeddings (efficient for sparse features)
Key parameters:
- num_embeddings: Size of the vocabulary (number of unique tokens)
- embedding_dim: Dimension of each embedding vector
- padding_idx: Index whose embedding outputs zeros (useful for padding tokens)
Embedding#
- class Embedding : public torch::nn::ModuleHolder<EmbeddingImpl>#
  A ModuleHolder subclass for EmbeddingImpl.
  See the documentation for the EmbeddingImpl class to learn what methods it provides, and examples of how to use Embedding with torch::nn::EmbeddingOptions. See the documentation for ModuleHolder to learn about PyTorch's module storage semantics.
- class EmbeddingImpl : public torch::nn::Cloneable<EmbeddingImpl>#
  Performs a lookup in a fixed size embedding table.
  See https://pytorch.org/docs/main/nn.html#torch.nn.Embedding to learn about the exact behavior of this module.
  See the documentation for the torch::nn::EmbeddingOptions class to learn what constructor arguments are supported for this module.
  Example:
Embedding model(EmbeddingOptions(10, 2).padding_idx(3).max_norm(2).norm_type(2.5).scale_grad_by_freq(true).sparse(true));
Public Functions
- inline EmbeddingImpl(int64_t num_embeddings, int64_t embedding_dim)#
- explicit EmbeddingImpl(EmbeddingOptions options_)#
- virtual void reset() override#
  reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules.
- void reset_parameters()#
Example:
auto embedding = torch::nn::Embedding(
torch::nn::EmbeddingOptions(10000, 256) // num_embeddings, embedding_dim
.padding_idx(0));
auto indices = torch::tensor({1, 2, 3, 4});
auto embedded = embedding->forward(indices); // [4, 256]
EmbeddingBag#
- class EmbeddingBag : public torch::nn::ModuleHolder<EmbeddingBagImpl>#
  A ModuleHolder subclass for EmbeddingBagImpl.
  See the documentation for the EmbeddingBagImpl class to learn what methods it provides, and examples of how to use EmbeddingBag with torch::nn::EmbeddingBagOptions. See the documentation for ModuleHolder to learn about PyTorch's module storage semantics.
  Public Static Functions
- static inline EmbeddingBag from_pretrained(const torch::Tensor &embeddings, const EmbeddingBagFromPretrainedOptions &options = {})#
  See the documentation for the torch::nn::EmbeddingBagFromPretrainedOptions class to learn what optional arguments are supported for this function.
- class EmbeddingBagImpl : public torch::nn::Cloneable<EmbeddingBagImpl>#
  Computes sums or means of ‘bags’ of embeddings, without instantiating the intermediate embeddings.
  See https://pytorch.org/docs/main/nn.html#torch.nn.EmbeddingBag to learn about the exact behavior of this module.
  See the documentation for the torch::nn::EmbeddingBagOptions class to learn what constructor arguments are supported for this module.
  Example:
EmbeddingBag model(EmbeddingBagOptions(10, 2).max_norm(2).norm_type(2.5).scale_grad_by_freq(true).sparse(true).mode(torch::kSum).padding_idx(1));
Public Functions
- inline EmbeddingBagImpl(int64_t num_embeddings, int64_t embedding_dim)#
- explicit EmbeddingBagImpl(EmbeddingBagOptions options_)#
- virtual void reset() override#
  reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules.
- void reset_parameters()#
- virtual void pretty_print(std::ostream &stream) const override#
  Pretty prints the EmbeddingBag module into the given stream.
Public Members
- EmbeddingBagOptions options#
  The Options used to configure this EmbeddingBag module.
Friends
- friend struct torch::nn::AnyModuleHolder