lightly.loss

The lightly.loss package provides loss functions for self-supervised learning.

.ntx_ent_loss

class lightly.loss.ntx_ent_loss.NTXentLoss(temperature: float = 0.5, use_cosine_similarity: bool = True, memory_bank_size: int = 0)

Implementation of the Contrastive Cross Entropy Loss (NT-Xent), the normalized temperature-scaled cross entropy loss used by SimCLR.

Attributes:
temperature:

Scale logits by the inverse of the temperature.

use_cosine_similarity:

Whether to use cosine similarity over L2 distance.

memory_bank_size:

Number of samples to store in the memory bank.

Raises:

ValueError: If abs(temperature) < 1e-8, to prevent division by zero.
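
For reference, this follows the NT-Xent objective from SimCLR. Assuming cosine similarity sim(·, ·) and temperature τ, the loss for a positive pair (i, j) in a batch of 2N transformed images can be written as

\ell_{i,j} = -\log \frac{\exp(\mathrm{sim}(z_i, z_j) / \tau)}{\sum_{k=1,\, k \neq i}^{2N} \exp(\mathrm{sim}(z_i, z_k) / \tau)}

This is the standard formulation, given here for orientation; the implementation may differ in details such as where the negatives come from (batch vs. memory bank).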

Examples:

>>> # initialize loss function without memory bank
>>> loss_fn = NTXentLoss(memory_bank_size=0)
>>>
>>> # generate two random transforms of images
>>> t0 = transforms(images)
>>> t1 = transforms(images)
>>>
>>> # feed through SimCLR or MoCo model
>>> out0 = model(t0)
>>> out1 = model(t1)
>>>
>>> # calculate loss
>>> loss = loss_fn(out0, out1)

forward(out0: torch.Tensor, out1: torch.Tensor)

Forward pass through Contrastive Cross Entropy Loss.

If used with a memory bank, the samples from the memory bank are used as negative examples. Otherwise, the other samples in the batch serve as negative examples.

Args:
out0:

Output projections of the first set of transformed images.

out1:

Output projections of the second set of transformed images.

Returns:

Contrastive Cross Entropy Loss value.
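
When a memory bank is used, the call looks the same; the following is a minimal sketch assuming a MoCo-style setup (query_encoder and key_encoder are illustrative placeholder names, not part of the lightly API):

>>> # initialize loss function with a memory bank
>>> loss_fn = NTXentLoss(memory_bank_size=4096)
>>>
>>> # feed each view through its encoder (MoCo-style)
>>> out0 = query_encoder(t0)
>>> out1 = key_encoder(t1)
>>>
>>> # negatives now come from the memory bank instead of the batch
>>> loss = loss_fn(out0, out1)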

.sym_neg_cos_sim_loss

class lightly.loss.sym_neg_cos_sim_loss.SymNegCosineSimilarityLoss

Implementation of the Symmetrized Negative Cosine Similarity Loss used by SimSiam.

Examples:

>>> # initialize loss function
>>> loss_fn = SymNegCosineSimilarityLoss()
>>>
>>> # generate two random transforms of images
>>> t0 = transforms(images)
>>> t1 = transforms(images)
>>>
>>> # feed through SimSiam model
>>> out0, out1 = model(t0, t1)
>>>
>>> # calculate loss
>>> loss = loss_fn(out0, out1)

forward(out0: torch.Tensor, out1: torch.Tensor)

Forward pass through Symmetric Loss.

Args:
out0:

Output of the model for the first set of transformed images, expected as a tuple of the form (z0, p0), where z0 is the output of the backbone and projection MLP, and p0 is the output of the prediction head.

out1:

Output of the model for the second set of transformed images, expected as a tuple of the form (z1, p1), where z1 is the output of the backbone and projection MLP, and p1 is the output of the prediction head.

Returns:

Symmetrized negative cosine similarity loss value.

Raises:

ValueError: If the shape of the output is not a multiple of the batch size.
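
For orientation, the loss follows the symmetrized objective from SimSiam:

D(p, z) = -\frac{p}{\|p\|_2} \cdot \frac{z}{\|z\|_2}, \qquad \mathcal{L} = \frac{1}{2} D(p_0, \mathrm{stopgrad}(z_1)) + \frac{1}{2} D(p_1, \mathrm{stopgrad}(z_0))

where D is the negative cosine similarity and no gradients are propagated through z_0 and z_1 (stop-gradient). This is the standard formulation, given here as a sketch rather than a transcription of the source.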

.memory_bank

class lightly.loss.memory_bank.MemoryBankModule(size: int = 65536)

Memory bank implementation.

This is a parent class to all loss functions implemented by the lightly Python package. This way, any loss can be used with a memory bank if desired.

Attributes:
size:

Number of keys the memory bank can store. If set to 0, memory bank is not used.

Examples:
>>> class MyLossFunction(MemoryBankModule):
>>>
>>>     def __init__(self, memory_bank_size: int = 2 ** 16):
>>>         super(MyLossFunction, self).__init__(memory_bank_size)
>>>
>>>     def forward(self, output: torch.Tensor,
>>>                 labels: torch.Tensor = None):
>>>         # query the memory bank for negative samples
>>>         output, negatives = super(
>>>             MyLossFunction, self).forward(output)
>>>
>>>         if negatives is not None:
>>>             # evaluate loss with negative samples
>>>             ...
>>>         else:
>>>             # evaluate loss without negative samples
>>>             ...

forward(output: torch.Tensor, labels: torch.Tensor = None, update: bool = False)

Query the memory bank for additional negative samples.

Args:
output:

The output of the model.

labels:

Should always be None and will be ignored.

update:

If True, the current output is added to the memory bank.

Returns:

The output and None if the memory bank is of size 0, otherwise the output and the entries from the memory bank.
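
As a rough usage sketch of the module on its own (model and batch are placeholders, and update=True is assumed to store the current output in the bank, per the update argument above):

>>> bank = MemoryBankModule(size=4096)
>>>
>>> # compute representations for the current batch
>>> output = model(batch)
>>>
>>> # query negatives and store the current output in the bank
>>> output, negatives = bank(output, update=True)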