lightly.loss
The lightly.loss package provides loss functions for self-supervised learning.
.barlow_twins_loss
- class lightly.loss.barlow_twins_loss.BarlowTwinsLoss(lambda_param: float = 0.005, gather_distributed: bool = False)
Implementation of the Barlow Twins Loss from the Barlow Twins[0] paper. This code specifically implements Algorithm 1 from [0].
[0] Zbontar, J. et al., 2021, Barlow Twins… https://arxiv.org/abs/2103.03230
Examples:
>>> # initialize loss function
>>> loss_fn = BarlowTwinsLoss()
>>>
>>> # generate two random transforms of images
>>> t0 = transforms(images)
>>> t1 = transforms(images)
>>>
>>> # feed through the Barlow Twins model
>>> out0, out1 = model(t0, t1)
>>>
>>> # calculate loss
>>> loss = loss_fn(out0, out1)
- forward(z_a: Tensor, z_b: Tensor) → Tensor
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
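For illustration, here is a minimal sketch of the cross-correlation computation from Algorithm 1 (an assumption for clarity, not the library's exact implementation; the helper name barlow_twins_loss_sketch is hypothetical):
>>> import torch
>>>
>>> def barlow_twins_loss_sketch(z_a, z_b, lambda_param=0.005):
>>>     # normalize each embedding dimension over the batch
>>>     n, d = z_a.shape
>>>     z_a_norm = (z_a - z_a.mean(dim=0)) / z_a.std(dim=0)
>>>     z_b_norm = (z_b - z_b.mean(dim=0)) / z_b.std(dim=0)
>>>     # cross-correlation matrix of shape (d, d)
>>>     c = z_a_norm.t() @ z_b_norm / n
>>>     # invariance term pulls the diagonal towards 1,
>>>     # redundancy reduction pulls the off-diagonal towards 0
>>>     c_diff = (c - torch.eye(d, device=c.device)).pow(2)
>>>     off_diag = ~torch.eye(d, dtype=torch.bool, device=c.device)
>>>     return c_diff.diagonal().sum() + lambda_param * c_diff[off_diag].sum()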
.dcl_loss
- class lightly.loss.dcl_loss.DCLLoss(temperature: float = 0.1, weight_fn: Optional[Callable[[Tensor, Tensor], Tensor]] = None, gather_distributed: bool = False)
Implementation of the Decoupled Contrastive Learning Loss from Decoupled Contrastive Learning [0].
This code implements Equation 6 in [0], including the sum over all images i and views k. The loss is reduced to a mean loss over the mini-batch. The implementation was inspired by [1].
[0] Chun-Hsiao Y. et al., 2021, Decoupled Contrastive Learning, https://arxiv.org/abs/2110.06848
[1] https://github.com/raminnakhli/Decoupled-Contrastive-Learning
- temperature
Similarities are scaled by inverse temperature.
- weight_fn
Weighting function w from the paper. Scales the loss between the positive views (views from the same image). No weighting is performed if weight_fn is None. The function must take the two input tensors passed to the forward call as input and return a weight tensor. The returned weight tensor must have the same length as the input tensors.
- gather_distributed
If True, negatives from all GPUs are gathered before the loss calculation.
Examples
>>> loss_fn = DCLLoss(temperature=0.07)
>>>
>>> # generate two random transforms of images
>>> t0 = transforms(images)
>>> t1 = transforms(images)
>>>
>>> # embed images using some model, for example SimCLR
>>> out0 = model(t0)
>>> out1 = model(t1)
>>>
>>> # calculate loss
>>> loss = loss_fn(out0, out1)
>>>
>>> # you can also add a custom weighting function
>>> weight_fn = lambda out0, out1: torch.sum((out0 - out1) ** 2, dim=1)
>>> loss_fn = DCLLoss(weight_fn=weight_fn)
- forward(out0: Tensor, out1: Tensor) → Tensor
Forward pass of the DCL loss.
- Parameters
out0 – Output projections of the first set of transformed images. Shape: (batch_size, embedding_size)
out1 – Output projections of the second set of transformed images. Shape: (batch_size, embedding_size)
- Returns
Mean loss over the mini-batch.
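To make the decoupling idea of Equation 6 concrete, here is a rough, one-directional sketch (an assumption for illustration; the actual implementation also sums over the second view and supports weighting and distributed gathering):
>>> import torch
>>> import torch.nn.functional as F
>>>
>>> def dcl_loss_sketch(out0, out1, temperature=0.1):
>>>     out0 = F.normalize(out0, dim=1)
>>>     out1 = F.normalize(out1, dim=1)
>>>     b = out0.shape[0]
>>>     # positive similarities: the two views of the same image
>>>     pos = torch.sum(out0 * out1, dim=1) / temperature
>>>     # negative similarities: all other images, from both views
>>>     sim00 = out0 @ out0.t() / temperature
>>>     sim01 = out0 @ out1.t() / temperature
>>>     mask = torch.eye(b, dtype=torch.bool)
>>>     neg = torch.cat(
>>>         [sim00[~mask].view(b, -1), sim01[~mask].view(b, -1)], dim=1)
>>>     # decoupled loss: the positive pair is excluded from the denominator
>>>     return (-pos + torch.logsumexp(neg, dim=1)).mean()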
- class lightly.loss.dcl_loss.DCLWLoss(temperature: float = 0.1, sigma: float = 0.5, gather_distributed: bool = False)
Implementation of the Weighted Decoupled Contrastive Learning Loss from Decoupled Contrastive Learning [0].
This code implements Equation 6 in [0] with a negative Mises-Fisher weighting function. The loss returns the mean over all images i and views k in the mini-batch. The implementation was inspired by [1].
[0] Chun-Hsiao Y. et al., 2021, Decoupled Contrastive Learning, https://arxiv.org/abs/2110.06848
[1] https://github.com/raminnakhli/Decoupled-Contrastive-Learning
- temperature
Similarities are scaled by inverse temperature.
- sigma
Similar to temperature but applies the inverse scaling in the weighting function.
- gather_distributed
If True, negatives from all GPUs are gathered before the loss calculation.
Examples
>>> loss_fn = DCLWLoss(temperature=0.07)
>>>
>>> # generate two random transforms of images
>>> t0 = transforms(images)
>>> t1 = transforms(images)
>>>
>>> # embed images using some model, for example SimCLR
>>> out0 = model(t0)
>>> out1 = model(t1)
>>>
>>> # calculate loss
>>> loss = loss_fn(out0, out1)
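A hedged sketch of the negative von Mises-Fisher weighting controlled by sigma (assuming l2-normalized embeddings; the helper name is hypothetical, not the library's API):
>>> import torch
>>> import torch.nn.functional as F
>>>
>>> def negative_mises_fisher_weights_sketch(out0, out1, sigma=0.5):
>>>     # positive pairs that are already very similar receive smaller weights
>>>     out0 = F.normalize(out0, dim=1)
>>>     out1 = F.normalize(out1, dim=1)
>>>     similarity = torch.einsum("nd,nd->n", out0, out1) / sigma
>>>     return 2 - out0.shape[0] * F.softmax(similarity, dim=0)
>>>
>>> # DCLWLoss then behaves like DCLLoss with this weighting applied to the positive term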
.dino_loss
- class lightly.loss.dino_loss.DINOLoss(output_dim: int, warmup_teacher_temp: float = 0.04, teacher_temp: float = 0.04, warmup_teacher_temp_epochs: int = 30, student_temp: float = 0.1, center_momentum: float = 0.9)
Implementation of the loss described in ‘Emerging Properties in Self-Supervised Vision Transformers’. [0]
This implementation follows the code published by the authors [1]. It supports global and local image crops. A linear warmup schedule for the teacher temperature is implemented to stabilize training at the beginning. Centering is applied to the teacher output to avoid model collapse.
[0]: DINO, 2021, https://arxiv.org/abs/2104.14294
[1]: https://github.com/facebookresearch/dino
- output_dim
Dimension of the model output.
- warmup_teacher_temp
Initial value of the teacher temperature. Should be decreased if the training loss does not decrease.
- teacher_temp
Final value of the teacher temperature after linear warmup. Values above 0.07 result in unstable behavior in most cases. Can be slightly increased to improve performance during finetuning.
- warmup_teacher_temp_epochs
Number of epochs for the teacher temperature warmup.
- student_temp
Temperature of the student.
- center_momentum
Momentum term for the center calculation.
Examples
>>> # initialize loss function
>>> loss_fn = DINOLoss(128)
>>>
>>> # generate a view of the images with a random transform
>>> view = transform(images)
>>>
>>> # embed the view with a student and teacher model
>>> teacher_out = teacher(view)
>>> student_out = student(view)
>>>
>>> # calculate loss
>>> loss = loss_fn([teacher_out], [student_out], epoch=0)
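A minimal sketch of the linear teacher temperature warmup controlled by warmup_teacher_temp, teacher_temp and warmup_teacher_temp_epochs (an illustrative assumption, not the library's exact schedule):
>>> import numpy as np
>>>
>>> def teacher_temperature(epoch, warmup_temp=0.04, temp=0.04, warmup_epochs=30):
>>>     # ramp linearly from warmup_temp to temp, then stay at temp
>>>     if epoch < warmup_epochs:
>>>         return np.linspace(warmup_temp, temp, warmup_epochs)[epoch]
>>>     return temp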
- forward(teacher_out: List[Tensor], student_out: List[Tensor], epoch: int) → Tensor
Cross-entropy between softmax outputs of the teacher and student networks.
- Parameters
teacher_out – List of view feature tensors from the teacher model. Each tensor is assumed to contain features from one view of the batch and have length batch_size.
student_out – List of view feature tensors from the student model. Each tensor is assumed to contain features from one view of the batch and have length batch_size.
epoch – The current training epoch.
- Returns
The average cross-entropy loss.
- update_center(teacher_out: Tensor) → None
Moving average update of the center used for the teacher output.
- Parameters
teacher_out – Stacked output from the teacher model.
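A hedged sketch of the moving average center update (the function name and exact reduction are assumptions for illustration):
>>> import torch
>>>
>>> def update_center_sketch(center, teacher_out, center_momentum=0.9):
>>>     # nudge the center towards the mean of the current teacher outputs
>>>     batch_center = teacher_out.mean(dim=0, keepdim=True)
>>>     return center * center_momentum + batch_center * (1 - center_momentum)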
.hypersphere_loss
- class lightly.loss.hypersphere_loss.HypersphereLoss(t=1.0, lam=1.0, alpha=2.0)
Implementation of the loss described in ‘Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere.’ [0]
[0] Tongzhou Wang et al., 2020, … https://arxiv.org/abs/2005.10242
Note
In order for this loss to function as advertised, an l2-normalization onto the hypersphere is required. This loss function applies the normalization internally in the loss layer. However, it is recommended to apply the same normalization in your architecture as well, since the normalized embeddings are also intended to be used during inference. There may be merit in leaving the normalization out of the inference pathway, but this use has not been tested.
Moreover, it is recommended that the layers preceding this loss function are a linear layer without activation, a batch-normalization layer, or both. The directly upstream architecture has a large influence on the ability of this loss to achieve its stated aim of promoting uniformity on the hypersphere. If, by contrast, the last layer feeding the embedding is a ReLU or similar nonlinearity, the embeddings are confined to the subspace of positive activations and uniformity on the hypersphere can never be closely approached. Similar architectural considerations apply to most contrastive loss functions, but we call them out here explicitly.
Examples
>>> # initialize loss function
>>> loss_fn = HypersphereLoss()
>>>
>>> # generate two random transforms of images
>>> t0 = transforms(images)
>>> t1 = transforms(images)
>>>
>>> # feed through SimSiam model
>>> out0, out1 = model(t0, t1)
>>>
>>> # calculate loss
>>> loss = loss_fn(out0, out1)
- forward(z_a: Tensor, z_b: Tensor) → Tensor
- Parameters
z_a (torch.Tensor, [b, d], float) – Embeddings of the first view.
z_b (torch.Tensor, [b, d], float) – Embeddings of the second view.
- Returns
Loss (torch.Tensor, [], float)
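To make the alignment and uniformity terms behind the t, lam and alpha parameters concrete, here is a simplified sketch following the formulation in [0] (an assumption for illustration, not the library's exact code):
>>> import torch
>>>
>>> def hypersphere_loss_sketch(x, y, t=1.0, lam=1.0, alpha=2.0):
>>>     # x, y: l2-normalized embeddings of the two views, shape (b, d)
>>>     align = (x - y).norm(p=2, dim=1).pow(alpha).mean()
>>>     def uniform(z):
>>>         # log of the mean pairwise Gaussian potential
>>>         return torch.pdist(z, p=2).pow(2).mul(-t).exp().mean().log()
>>>     return align + lam * (uniform(x) + uniform(y)) / 2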
.memory_bank
- class lightly.loss.memory_bank.MemoryBankModule(size: int = 65536)
Memory bank implementation.
This is a parent class to all loss functions implemented by the lightly Python package. This way, any loss can be used with a memory bank if desired.
- size
Number of keys the memory bank can store. If set to 0, the memory bank is not used.
Examples
>>> class MyLossFunction(MemoryBankModule):
>>>
>>>     def __init__(self, memory_bank_size: int = 2 ** 16):
>>>         super(MyLossFunction, self).__init__(memory_bank_size)
>>>
>>>     def forward(self, output: torch.Tensor,
>>>                 labels: torch.Tensor = None):
>>>
>>>         output, negatives = super(
>>>             MyLossFunction, self).forward(output)
>>>
>>>         if negatives is not None:
>>>             # evaluate loss with negative samples
>>>             ...
>>>         else:
>>>             # evaluate loss without negative samples
>>>             ...
- forward(output: Tensor, labels: Optional[Tensor] = None, update: bool = False)
Query memory bank for additional negative samples.
- Parameters
output – The output of the model.
labels – Should always be None, will be ignored.
update – Whether to update the memory bank with the current output.
- Returns
The output if the memory bank is of size 0, otherwise the output and the entries from the memory bank.
.msn_loss
- class lightly.loss.msn_loss.MSNLoss(temperature: float = 0.1, sinkhorn_iterations: int = 3, me_max_weight: float = 1.0, gather_distributed: bool = False)
Implementation of the loss function from MSN [0].
Code inspired by [1].
[0]: Masked Siamese Networks, 2022, https://arxiv.org/abs/2204.07141
[1]: https://github.com/facebookresearch/msn
- temperature
Similarities between anchors and targets are scaled by the inverse of the temperature. Must be in (0, 1].
- sinkhorn_iterations
Number of sinkhorn normalization iterations on the targets.
- me_max_weight
Weight factor lambda by which the mean entropy maximization regularization loss is scaled. Set to 0 to disable the regularization.
Examples:
>>> # initialize loss function
>>> loss_fn = MSNLoss()
>>>
>>> # generate anchors and targets of images
>>> anchors = transforms(images)
>>> targets = transforms(images)
>>>
>>> # feed through MSN model
>>> anchors_out = model(anchors)
>>> targets_out = model.target(targets)
>>>
>>> # calculate loss
>>> loss = loss_fn(anchors_out, targets_out, prototypes=model.prototypes)
- forward(anchors: Tensor, targets: Tensor, prototypes: Tensor, target_sharpen_temperature: float = 0.25) → Tensor
Computes the MSN loss for a set of anchors, targets and prototypes.
- Parameters
anchors – Tensor with shape (batch_size * anchor_views, dim).
targets – Tensor with shape (batch_size, dim).
prototypes – Tensor with shape (num_prototypes, dim).
target_sharpen_temperature – Temperature used to sharpen the target probabilities.
- Returns
Mean loss over all anchors.
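A simplified, hedged sketch of the computation (the Sinkhorn normalization of the targets and distributed gathering are omitted here; names and details are assumptions, not the library's implementation):
>>> import torch
>>> import torch.nn.functional as F
>>>
>>> def msn_loss_sketch(anchors, targets, prototypes, temperature=0.1,
>>>                     target_sharpen_temperature=0.25, me_max_weight=1.0):
>>>     anchors = F.normalize(anchors, dim=1)
>>>     targets = F.normalize(targets, dim=1)
>>>     prototypes = F.normalize(prototypes, dim=1)
>>>     # anchor predictions over the prototypes
>>>     probs = F.softmax(anchors @ prototypes.t() / temperature, dim=1)
>>>     with torch.no_grad():
>>>         # sharpened target assignments, repeated to match the anchor views
>>>         target_probs = F.softmax(
>>>             targets @ prototypes.t() / target_sharpen_temperature, dim=1)
>>>         target_probs = target_probs.repeat(
>>>             anchors.shape[0] // targets.shape[0], 1)
>>>     # cross-entropy between anchor predictions and sharpened targets
>>>     loss = -(target_probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
>>>     # mean entropy maximization regularizer: encourage uniform prototype usage
>>>     mean_probs = probs.mean(dim=0)
>>>     me_max = (mean_probs * torch.log(mean_probs + 1e-8)).sum()
>>>     return loss + me_max_weight * me_max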
.negative_cosine_similarity
- class lightly.loss.negative_cosine_similarity.NegativeCosineSimilarity(dim: int = 1, eps: float = 1e-08)
Implementation of the Negative Cosine Similarity used in the SimSiam[0] paper.
[0] SimSiam, 2020, https://arxiv.org/abs/2011.10566
Examples
>>> # initialize loss function
>>> loss_fn = NegativeCosineSimilarity()
>>>
>>> # generate two representation tensors
>>> # with batch size 10 and dimension 128
>>> x0 = torch.randn(10, 128)
>>> x1 = torch.randn(10, 128)
>>>
>>> # calculate loss
>>> loss = loss_fn(x0, x1)
- forward(x0: Tensor, x1: Tensor) → Tensor
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
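Functionally, the loss reduces to the mean cosine similarity between the two inputs, negated. A minimal sketch under that assumption (the helper name is hypothetical):
>>> import torch.nn.functional as F
>>>
>>> def negative_cosine_similarity_sketch(x0, x1, dim=1, eps=1e-8):
>>>     # mean cosine similarity between x0 and x1, negated
>>>     return -F.cosine_similarity(x0, x1, dim=dim, eps=eps).mean()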
.ntx_ent_loss
- class lightly.loss.ntx_ent_loss.NTXentLoss(temperature: float = 0.5, memory_bank_size: int = 0, gather_distributed: bool = False)
Implementation of the Contrastive Cross Entropy Loss.
This implementation follows the SimCLR[0] paper. If you enable the memory bank by setting memory_bank_size > 0, the loss behaves like the one described in the MoCo[1] paper.
[0] SimCLR, 2020, https://arxiv.org/abs/2002.05709
[1] MoCo, 2020, https://arxiv.org/abs/1911.05722
- temperature
Scale logits by the inverse of the temperature.
- memory_bank_size
Number of negative samples to store in the memory bank. Use 0 for SimCLR. For MoCo we typically use numbers like 4096 or 65536.
- gather_distributed
If True, negatives from all GPUs are gathered before the loss calculation. This flag has no effect if memory_bank_size > 0.
- Raises
ValueError – If abs(temperature) < 1e-8, to prevent division by zero.
Examples
>>> # initialize loss function without memory bank
>>> loss_fn = NTXentLoss(memory_bank_size=0)
>>>
>>> # generate two random transforms of images
>>> t0 = transforms(images)
>>> t1 = transforms(images)
>>>
>>> # feed through SimCLR or MoCo model
>>> batch = torch.cat((t0, t1), dim=0)
>>> output = model(batch)
>>>
>>> # split the output into the two views and calculate loss
>>> out0, out1 = output[:len(t0)], output[len(t0):]
>>> loss = loss_fn(out0, out1)
- forward(out0: Tensor, out1: Tensor)
Forward pass through Contrastive Cross-Entropy Loss.
If used with a memory bank, the samples from the memory bank are used as negative examples. Otherwise, within-batch samples are used as negative samples.
- Parameters
out0 – Output projections of the first set of transformed images. Shape: (batch_size, embedding_size)
out1 – Output projections of the second set of transformed images. Shape: (batch_size, embedding_size)
- Returns
Contrastive Cross Entropy Loss value.
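A hedged sketch of the in-batch (memory-bank-free) case: each sample's positive is its other view, and all remaining samples act as negatives (an illustration, not the library's exact implementation):
>>> import torch
>>> import torch.nn.functional as F
>>>
>>> def ntxent_loss_sketch(out0, out1, temperature=0.5):
>>>     out0 = F.normalize(out0, dim=1)
>>>     out1 = F.normalize(out1, dim=1)
>>>     n = out0.shape[0]
>>>     z = torch.cat([out0, out1], dim=0)
>>>     logits = z @ z.t() / temperature
>>>     # a sample must never be its own negative
>>>     logits.fill_diagonal_(float("-inf"))
>>>     # the positive of row i is its other view at index i + n (and vice versa)
>>>     targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
>>>     return F.cross_entropy(logits, targets)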
.regularizer.co2
- class lightly.loss.regularizer.co2.CO2Regularizer(alpha: float = 1, t_consistency: float = 0.05, memory_bank_size: int = 0)
Implementation of the CO2 regularizer [0] for self-supervised learning.
[0] CO2, 2021, https://arxiv.org/abs/2010.02217
- alpha
Weight of the regularization term.
- t_consistency
Temperature used during softmax calculations.
- memory_bank_size
Number of negative samples to store in the memory bank. Use 0 to use the second batch for negative samples.
Examples
>>> # initialize loss function for MoCo
>>> loss_fn = NTXentLoss(memory_bank_size=4096)
>>>
>>> # initialize CO2 regularizer
>>> co2 = CO2Regularizer(alpha=1.0, memory_bank_size=4096)
>>>
>>> # generate two random transforms of images
>>> t0 = transforms(images)
>>> t1 = transforms(images)
>>>
>>> # feed through the MoCo model
>>> out0, out1 = model(t0, t1)
>>>
>>> # calculate loss and apply regularizer
>>> loss = loss_fn(out0, out1) + co2(out0, out1)
- forward(out0: Tensor, out1: Tensor)
Computes the CO2 regularization term for two model outputs.
- Parameters
out0 – Output projections of the first set of transformed images.
out1 – Output projections of the second set of transformed images.
- Returns
The regularization term multiplied by the weight factor alpha.
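A rough, memory-bank-free sketch of the consistency idea: the regularizer penalizes disagreement (symmetric KL divergence) between the distributions of a query and its positive over shared negatives, scaled by alpha (an assumption for illustration; the library additionally handles the memory bank case):
>>> import torch
>>> import torch.nn.functional as F
>>>
>>> def co2_regularizer_sketch(out0, out1, alpha=1.0, t_consistency=0.05):
>>>     out0 = F.normalize(out0, dim=1)
>>>     out1 = F.normalize(out1, dim=1)
>>>     b = out0.shape[0]
>>>     mask = torch.eye(b, dtype=torch.bool)
>>>     # distributions of the query and its positive over the shared negatives
>>>     # (the other samples of the second view act as negatives)
>>>     sim_q = (out0 @ out1.t())[~mask].view(b, b - 1) / t_consistency
>>>     sim_k = (out1 @ out1.t())[~mask].view(b, b - 1) / t_consistency
>>>     log_p = F.log_softmax(sim_q, dim=1)
>>>     log_q = F.log_softmax(sim_k, dim=1)
>>>     # symmetric KL divergence, scaled by alpha
>>>     kl_pq = F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")
>>>     kl_qp = F.kl_div(log_p, log_q, log_target=True, reduction="batchmean")
>>>     return alpha * 0.5 * (kl_pq + kl_qp)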
.sym_neg_cos_sim_loss
- class lightly.loss.sym_neg_cos_sim_loss.SymNegCosineSimilarityLoss
Implementation of the Symmetrized Loss used in the SimSiam[0] paper.
[0] SimSiam, 2020, https://arxiv.org/abs/2011.10566
Examples
>>> # initialize loss function
>>> loss_fn = SymNegCosineSimilarityLoss()
>>>
>>> # generate two random transforms of images
>>> t0 = transforms(images)
>>> t1 = transforms(images)
>>>
>>> # feed through SimSiam model
>>> out0, out1 = model(t0, t1)
>>>
>>> # calculate loss
>>> loss = loss_fn(out0, out1)
- forward(out0: Tensor, out1: Tensor)
Forward pass through Symmetric Loss.
- Parameters
out0 – Output projections of the first set of transformed images. Expects the tuple to be of the form (z0, p0), where z0 is the output of the backbone and projection mlp, and p0 is the output of the prediction head.
out1 – Output projections of the second set of transformed images. Expects the tuple to be of the form (z1, p1), where z1 is the output of the backbone and projection mlp, and p1 is the output of the prediction head.
- Returns
The symmetrized negative cosine similarity loss value.
- Raises
ValueError – If the shape of the output is not a multiple of batch_size.
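For illustration, a hedged sketch of the symmetrized computation, assuming each output is the tuple (z, p) described above (not the library's exact code; the helper name is hypothetical):
>>> import torch.nn.functional as F
>>>
>>> def sym_neg_cosine_similarity_sketch(out0, out1):
>>>     z0, p0 = out0
>>>     z1, p1 = out1
>>>     # stop gradients on the projections, as in the SimSiam paper
>>>     loss0 = -F.cosine_similarity(p0, z1.detach(), dim=1).mean()
>>>     loss1 = -F.cosine_similarity(p1, z0.detach(), dim=1).mean()
>>>     return 0.5 * (loss0 + loss1)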