lightly.utils

The lightly.utils package provides global utility methods.

The io module contains utilities to save and load embeddings in a format understood by the Lightly library. With the embeddings_2d module, embeddings can be transformed to a two-dimensional space for better visualization.

.io

I/O operations to save and load embeddings.

lightly.utils.io.load_embeddings(path: str)

Loads embeddings from a csv file in a Lightly compatible format.

Args:
    path:
        Path to the csv file.

Returns:
    The embeddings as a numpy array, labels as a list of integers, and
    filenames as a list of strings, in the order they were saved.
    The embeddings are always of dtype float32.

Examples:
>>> import lightly.utils.io as io
>>> embeddings, labels, filenames = io.load_embeddings(
>>>     'path/to/my/embeddings.csv')
lightly.utils.io.load_embeddings_as_dict(path: str, embedding_name: str = 'default', return_all: bool = False)

Loads embeddings from a csv file and stores them in a dictionary for transfer.

Loads embeddings into a dictionary which can be serialized and sent to the Lightly servers. It is recommended to always specify the embedding_name, because the Lightly web-app does not allow two embeddings with the same name.

Args:
    path:
        Path to the csv file.
    embedding_name:
        Name of the embedding for the platform.
    return_all:
        If True, also return embeddings, labels, and filenames.

Returns:
    A dictionary containing the embedding information (see load_embeddings).

Examples:
>>> import lightly.utils.io as io
>>> embedding_dict = io.load_embeddings_as_dict(
>>>     'path/to/my/embeddings.csv',
>>>     embedding_name='MyEmbeddings')
>>>
>>> result = io.load_embeddings_as_dict(
>>>     'path/to/my/embeddings.csv',
>>>     embedding_name='MyEmbeddings',
>>>     return_all=True)
>>> embedding_dict, embeddings, labels, filenames = result
lightly.utils.io.save_embeddings(path: str, embeddings: numpy.ndarray, labels: List[int], filenames: List[str])

Saves embeddings in a csv file in a Lightly compatible format.

Creates a csv file at the location specified by path and saves embeddings, labels, and filenames.

Args:
    path:
        Path to the csv file.
    embeddings:
        Embeddings of the images as a numpy array (n x d).
    labels:
        List of integer labels.
    filenames:
        List of filenames.

Raises:
    ValueError: If embeddings, labels, and filenames have different lengths.

Examples:
>>> import lightly.utils.io as io
>>> io.save_embeddings(
>>>     'path/to/my/embeddings.csv',
>>>     embeddings,
>>>     labels,
>>>     filenames)
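
The file written by save_embeddings is a plain csv. As a rough sketch of the round trip, assuming one filenames column, d embedding columns, and a trailing labels column (the exact layout is defined by lightly.utils.io, so treat this writer as illustrative only):

```python
import csv
import numpy as np

def save_embeddings_sketch(path, embeddings, labels, filenames):
    # Mirror the documented ValueError for mismatched lengths.
    if not (len(embeddings) == len(labels) == len(filenames)):
        raise ValueError(
            'embeddings, labels, and filenames must have the same length')
    n, d = embeddings.shape
    # Assumed column layout: filenames, embedding_0..embedding_{d-1}, labels.
    header = ['filenames'] + [f'embedding_{i}' for i in range(d)] + ['labels']
    with open(path, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(header)
        for fname, emb, label in zip(filenames, embeddings, labels):
            writer.writerow([fname] + [float(x) for x in emb] + [label])

embeddings = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
save_embeddings_sketch('embeddings.csv', embeddings, [0, 1], ['a.jpg', 'b.jpg'])
```

A file written this way can then be read back with the csv module (or, for the real format, with io.load_embeddings).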

.embeddings_2d

Transform embeddings to two-dimensional space for visualization.

class lightly.utils.embeddings_2d.PCA(n_components: int = 2, eps: float = 1e-10)

Handmade PCA implementation to bypass the sklearn dependency.

Attributes:
    n_components:
        Number of principal components to keep.
    eps:
        Epsilon for numerical stability.

fit(X: numpy.ndarray)

Fits PCA to data in X.

Args:
    X:
        Datapoints stored in numpy array of size n x d.

Returns:
    PCA object to transform datapoints.

transform(X: numpy.ndarray)

Uses PCA to transform data in X.

Args:
    X:
        Datapoints stored in numpy array of size n x d.

Returns:
    Numpy array of n x p datapoints where p <= d.
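
The fit/transform pair behaves like sklearn's PCA. As a minimal numpy sketch of the same computation (center the data, find the principal axes, project), with all names hypothetical rather than the library's actual implementation:

```python
import numpy as np

def pca_fit_transform(X, n_components=2):
    # Center the data so the principal axes pass through the mean.
    X_centered = X - X.mean(axis=0)
    # The right singular vectors of the centered data are the principal axes.
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    # Project onto the top n_components axes: n x d -> n x p.
    return X_centered @ Vt[:n_components].T

X = np.random.randn(100, 32).astype(np.float32)
X_2d = pca_fit_transform(X, n_components=2)
print(X_2d.shape)  # (100, 2)
```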

lightly.utils.embeddings_2d.fit_pca(embeddings: numpy.ndarray, n_components: int = 2, fraction: float = None)

Fits PCA to a randomly selected subset of embeddings.

For large datasets, it can be infeasible to perform PCA on the whole data. This method can fit PCA on a fraction of the embeddings in order to save computational resources.

Args:
    embeddings:
        Datapoints stored in numpy array of size n x d.
    n_components:
        Number of principal components to keep.
    fraction:
        Fraction of the dataset to fit PCA on.

Returns:
    A transformer which can be used to transform embeddings to lower dimensions.

Raises:
    ValueError: If fraction < 0 or fraction > 1.
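
The subsampling logic can be sketched as follows: draw a random fraction of the embeddings, fit the principal axes on that subset only, and keep what is needed to transform the full dataset later (names are hypothetical, not the library's actual implementation):

```python
import numpy as np

def fit_pca_sketch(embeddings, n_components=2, fraction=None):
    # Optionally fit on a random subset to save compute.
    if fraction is not None:
        if fraction < 0. or fraction > 1.:
            raise ValueError('fraction must be in [0, 1]')
        n = int(len(embeddings) * fraction)
        subset = embeddings[np.random.permutation(len(embeddings))[:n]]
    else:
        subset = embeddings
    # Fit the principal axes on the (sub)sampled data only.
    mean = subset.mean(axis=0)
    _, _, Vt = np.linalg.svd(subset - mean, full_matrices=False)
    # Return the statistics needed to project new embeddings later.
    return mean, Vt[:n_components]

embeddings = np.random.randn(1000, 64).astype(np.float32)
mean, components = fit_pca_sketch(embeddings, n_components=2, fraction=0.1)
print(components.shape)  # (2, 64)
```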

.benchmarking

Helper modules for benchmarking SSL models

class lightly.utils.benchmarking.BenchmarkModule(dataloader_kNN: torch.utils.data.dataloader.DataLoader, num_classes: int, knn_k: int = 200, knn_t: float = 0.1)

A PyTorch Lightning module with an automated kNN callback.

At the end of every training epoch, the dataloader_kNN passed to the module is fed through the backbone to create a feature bank. At every validation step, features are predicted on the validation data. Once all validation predictions are collected (validation_epoch_end), a kNN classifier is evaluated on the validation data using the feature bank built from the training data.

The highest kNN test accuracy achieved so far can be accessed via the max_accuracy attribute.

Attributes:
    backbone:
        The backbone model used for kNN validation. Make sure that you set
        the backbone when inheriting from BenchmarkModule.
    max_accuracy:
        Floating point number between 0.0 and 1.0 representing the maximum
        test accuracy the benchmarked model has achieved.
    dataloader_kNN:
        Dataloader to be used after each training epoch to create the feature bank.
    num_classes:
        Number of classes, e.g. 10 for CIFAR-10.
    knn_k:
        Number of nearest neighbors for kNN.
    knn_t:
        Temperature parameter for kNN.

Examples:
>>> class SimSiamModel(BenchmarkModule):
>>>     def __init__(self, dataloader_kNN, num_classes):
>>>         super().__init__(dataloader_kNN, num_classes)
>>>         resnet = lightly.models.ResNetGenerator('resnet-18')
>>>         self.backbone = nn.Sequential(
>>>             *list(resnet.children())[:-1],
>>>             nn.AdaptiveAvgPool2d(1),
>>>         )
>>>         self.resnet_simsiam = \
>>>             lightly.models.SimSiam(self.backbone, num_ftrs=512)
>>>         self.criterion = lightly.loss.SymNegCosineSimilarityLoss()
>>>
>>>     def forward(self, x):
>>>         return self.resnet_simsiam(x)
>>>
>>>     def training_step(self, batch, batch_idx):
>>>         (x0, x1), _, _ = batch
>>>         x0, x1 = self.resnet_simsiam(x0, x1)
>>>         loss = self.criterion(x0, x1)
>>>         return loss
>>>     def configure_optimizers(self):
>>>         optim = torch.optim.SGD(
>>>             self.resnet_simsiam.parameters(), lr=6e-2, momentum=0.9
>>>         )
>>>         return [optim]
>>>
>>> model = SimSiamModel(dataloader_train_kNN, num_classes=10)
>>> trainer = pl.Trainer()
>>> trainer.fit(
>>>     model,
>>>     train_dataloader=dataloader_train_ssl,
>>>     val_dataloaders=dataloader_test
>>> )
>>> # you can get the peak accuracy using
>>> print(model.max_accuracy)
training_epoch_end(outputs)

Called at the end of the training epoch with the outputs of all training steps. Use this in case you need to do something with all the outputs for every training_step.

# the pseudocode for these calls
train_outs = []
for train_batch in train_data:
    out = training_step(train_batch)
    train_outs.append(out)
training_epoch_end(train_outs)
Args:
    outputs: List of outputs you defined in training_step(), or if there are
        multiple dataloaders, a list containing a list of outputs for each dataloader.

Return:
    None

Note:

If this method is not overridden, this won’t be called.

Example:

def training_epoch_end(self, training_step_outputs):
    # do something with all training_step outputs
    return result

With multiple dataloaders, outputs will be a list of lists. The outer list contains one entry per dataloader, while the inner list contains the individual outputs of each training step for that dataloader.

def training_epoch_end(self, training_step_outputs):
    for out in training_step_outputs:
        # do something here
validation_epoch_end(outputs)

Called at the end of the validation epoch with the outputs of all validation steps.

# the pseudocode for these calls
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    val_outs.append(out)
validation_epoch_end(val_outs)
Args:
    outputs: List of outputs you defined in validation_step(), or if there
        are multiple dataloaders, a list containing a list of outputs for each dataloader.

Return:
    None

Note:

If you didn’t define a validation_step(), this won’t be called.

Examples:

With a single dataloader:

def validation_epoch_end(self, val_step_outputs):
    for out in val_step_outputs:
        # do something

With multiple dataloaders, outputs will be a list of lists. The outer list contains one entry per dataloader, while the inner list contains the individual outputs of each validation step for that dataloader.

def validation_epoch_end(self, outputs):
    for dataloader_output_result in outputs:
        dataloader_outs = dataloader_output_result.dataloader_i_outputs

    self.log('final_metric', final_value)
validation_step(batch, batch_idx)

Operates on a single batch of data from the validation set. In this step you might generate examples or calculate anything of interest, such as accuracy.

# the pseudocode for these calls
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    val_outs.append(out)
validation_epoch_end(val_outs)
Args:
    batch (Tensor | (Tensor, …) | [Tensor, …]): The output of your
        DataLoader. A tensor, tuple or list.
    batch_idx (int): The index of this batch.
    dataloader_idx (int): The index of the dataloader that produced this batch
        (only if multiple val datasets are used).

Return:
    Any of:

      • Any object or value

      • None - Validation will skip to the next batch

# pseudocode of order
out = validation_step()
if defined('validation_step_end'):
    out = validation_step_end(out)
out = validation_epoch_end(out)
# if you have one val dataloader:
def validation_step(self, batch, batch_idx)

# if you have multiple val dataloaders:
def validation_step(self, batch, batch_idx, dataloader_idx)
Examples:
# CASE 1: A single validation dataset
def validation_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'val_loss': loss, 'val_acc': val_acc})

If you pass in multiple val datasets, validation_step will have an additional argument.

# CASE 2: multiple validation datasets
def validation_step(self, batch, batch_idx, dataloader_idx):
    # dataloader_idx tells you which dataset this is.
Note:

If you don’t need to validate you don’t need to implement this method.

Note:

When the validation_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of validation, the model goes back to training mode and gradients are enabled.

lightly.utils.benchmarking.knn_predict(feature: torch.Tensor, feature_bank: torch.Tensor, feature_labels: torch.Tensor, num_classes: int, knn_k: int = 200, knn_t: float = 0.1) → torch.Tensor

Run kNN predictions on features based on a feature bank

This method is commonly used to monitor performance of self-supervised learning methods.

The default parameters are the ones used in https://arxiv.org/pdf/1805.01978v1.pdf.

Args:
    feature:
        Tensor of shape [N, D] for which you want predictions.
    feature_bank:
        Tensor of a database of features used for kNN.
    feature_labels:
        Labels for the features in our feature_bank.
    num_classes:
        Number of classes (e.g. 10 for CIFAR-10).
    knn_k:
        Number of nearest neighbors used for kNN.
    knn_t:
        Temperature parameter to reweight similarities for kNN.

Returns:
    A tensor containing the kNN predictions.

Examples:
>>> images, targets, _ = batch
>>> feature = backbone(images).squeeze()
>>> # we recommend to normalize the features
>>> feature = F.normalize(feature, dim=1)
>>> pred_labels = knn_predict(
>>>     feature,
>>>     feature_bank,
>>>     targets_bank,
>>>     num_classes=10,
>>> )
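
Internally, the prediction is a similarity-weighted vote over the k nearest neighbors, following the temperature reweighting from the paper above. A numpy sketch of that computation (hypothetical names; the real function operates on torch tensors and a transposed feature bank):

```python
import numpy as np

def knn_predict_sketch(feature, feature_bank, feature_labels,
                       num_classes, knn_k=3, knn_t=0.1):
    # feature: [N, D] queries, feature_bank: [B, D]; rows are L2-normalized,
    # so the dot product is the cosine similarity.
    sim = feature @ feature_bank.T                      # similarities [N, B]
    idx = np.argsort(-sim, axis=1)[:, :knn_k]           # top-k neighbors per query
    # Temperature reweighting: closer neighbors get exponentially more weight.
    weights = np.exp(np.take_along_axis(sim, idx, axis=1) / knn_t)
    labels = feature_labels[idx]                        # neighbor labels [N, k]
    # Accumulate the weighted votes per class.
    scores = np.zeros((feature.shape[0], num_classes))
    for c in range(num_classes):
        scores[:, c] = np.where(labels == c, weights, 0.0).sum(axis=1)
    return scores.argmax(axis=1)

# Tiny example: two classes along orthogonal directions.
bank = np.array([[1., 0.], [1., 0.], [0., 1.]])
bank_labels = np.array([0, 0, 1])
queries = np.array([[0.9, 0.1], [0.1, 0.9]])
queries = queries / np.linalg.norm(queries, axis=1, keepdims=True)
pred = knn_predict_sketch(queries, bank, bank_labels, num_classes=2, knn_k=2)
print(pred)  # [0 1]
```

The first query sits close to the class-0 cluster and the second close to the class-1 point, so the weighted vote recovers both labels.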