
Documentation
Lightly is a computer vision framework for self-supervised learning.
With Lightly you can train deep learning models using self-supervision. This means that you don't require any labels to train a model. Lightly has been built to help you understand and work with large unlabeled datasets. It is built on top of PyTorch and therefore fully compatible with other frameworks such as Fast.ai.
Overview
The figure below shows an overview of the different concepts used by the lightly pip package and a schema of how they interact. The expressions in bold are explained further below.

Overview of the different concepts used by the lightly pip package and how they interact.
- Dataset
In lightly, datasets are accessed through the lightly.data.dataset.LightlyDataset. You can create a LightlyDataset from a folder of images, videos, or simply from a torchvision dataset. You can learn more about this here: Tutorial 1: Structure Your Input.
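For example, a LightlyDataset can be built from a folder of images or wrapped around an existing torchvision dataset. A minimal sketch (the input path is a placeholder, and the from_torch_dataset helper is assumed from the lightly.data API):

```python
import torchvision
from lightly.data import LightlyDataset

# create a dataset from a folder of images or videos
dataset = LightlyDataset(input_dir='path/to/your/images')

# or wrap an existing torchvision dataset
cifar10 = torchvision.datasets.CIFAR10('datasets/cifar10', download=True)
dataset = LightlyDataset.from_torch_dataset(cifar10)
```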
- Collate Function
The collate function is the place where lightly applies augmentations, which are crucial for self-supervised learning. You can use our pre-defined augmentations or write your own. For more information, check out Advanced and lightly.data.collate.BaseCollateFunction.
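As an illustration, the sketch below builds a pre-defined collate function and a custom one (assuming the SimCLRCollateFunction provided by lightly.data.collate; the transform itself is just an example):

```python
import torchvision
from lightly.data.collate import BaseCollateFunction, SimCLRCollateFunction

# pre-defined SimCLR augmentations for small input images
collate_fn = SimCLRCollateFunction(input_size=32)

# or wrap your own torchvision augmentations
my_transform = torchvision.transforms.Compose([
    torchvision.transforms.RandomResizedCrop(32),
    torchvision.transforms.RandomHorizontalFlip(),
    torchvision.transforms.ToTensor(),
])
my_collate_fn = BaseCollateFunction(my_transform)
```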
- Dataloader
For the dataloader you can simply use the PyTorch dataloader. Be sure to pass it a LightlyDataset though!
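A typical setup passes the dataset and collate function from above to a standard PyTorch dataloader (batch size and worker count are placeholders):

```python
import torch

dataloader = torch.utils.data.DataLoader(
    dataset,                # a LightlyDataset
    batch_size=128,
    shuffle=True,
    collate_fn=collate_fn,  # augmentations are applied per batch
    num_workers=8,
)
```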
- Backbone Neural Network
One of the cool things about self-supervised learning is that you can pre-train your neural networks without the need for annotated data. You can plug in whatever backbone you want! If you don't know where to start, our tutorials show how you can get a backbone neural network from a lightly.models.resnet.ResNet.
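For instance, the tutorials build a backbone roughly as follows (a sketch assuming the lightly.models.ResNetGenerator helper; any torchvision or custom backbone works just as well):

```python
import torch.nn as nn
import lightly

# build a lightly ResNet and strip its classification head
resnet = lightly.models.ResNetGenerator('resnet-18')
backbone = nn.Sequential(
    *list(resnet.children())[:-1],
    nn.AdaptiveAvgPool2d(1),
)
```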
- Model
The model combines your backbone neural network with a projection head and, if required, a momentum encoder to provide an easy-to-use interface to the most popular self-supervised learning frameworks. Learn more in our tutorials.
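As a sketch, the backbone from above could be wrapped in a SimCLR model (assuming the lightly.models.SimCLR class and its num_ftrs argument; MoCo and SimSiam follow the same pattern):

```python
import lightly

# combine the backbone with a projection head
model = lightly.models.SimCLR(backbone, num_ftrs=512)
```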
- Loss
The loss function plays a crucial role in self-supervised learning. Currently, lightly supports a contrastive and a similarity-based loss function.
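For example (a sketch assuming NTXentLoss as the contrastive loss and SymNegCosineSimilarityLoss as the similarity-based one):

```python
import lightly

# contrastive loss, e.g. for SimCLR or MoCo
criterion = lightly.loss.NTXentLoss(temperature=0.5)

# or a similarity-based loss, e.g. for SimSiam
criterion = lightly.loss.SymNegCosineSimilarityLoss()
```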
- Optimizer
With lightly, you can use any PyTorch optimizer to train your model.
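For example, with plain SGD (the hyperparameters are placeholders):

```python
import torch

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.1,
    momentum=0.9,
    weight_decay=5e-4,
)
```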
- Self-supervised Embedding
The lightly.embedding.embedding.SelfSupervisedEmbedding connects the concepts from above in an easy-to-use PyTorch Lightning module. After creating a SelfSupervisedEmbedding, it can be trained with a single line:

```python
# build a self-supervised embedding and train it
encoder = lightly.embedding.SelfSupervisedEmbedding(model, criterion, optimizer, dataloader)
encoder.train_embedding(gpus=1, max_epochs=10)
```
However, you can still write the training loop in plain PyTorch code. See Tutorial 4: Train SimSiam on Satellite Images for an example.
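A bare-bones version of such a loop might look like this (a sketch assuming the ((x0, x1), labels, filenames) batch format produced by the collate functions above):

```python
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device)

for epoch in range(10):
    for (x0, x1), _, _ in dataloader:
        x0, x1 = x0.to(device), x1.to(device)
        z0, z1 = model(x0, x1)    # embed two augmented views
        loss = criterion(z0, z1)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```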
First Steps
Tutorials
Python API
On-Premise