Lightly is a computer vision framework for self-supervised learning.
With Lightly you can train deep learning models using self-supervision. This means that you don’t need any labels to train a model. Lightly has been built to help you understand and work with large unlabeled datasets. It is built on top of PyTorch and therefore fully compatible with other frameworks such as Fast.ai.
The figure below shows an overview of the different concepts used by the lightly pip package and a schema of how they interact. The expressions in bold are explained further below.
- Dataloader
For the dataloader you can simply use the standard PyTorch DataLoader. Be sure to pass it a LightlyDataset, though!
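As a minimal sketch (the in-memory tensor dataset here is only a stand-in for a LightlyDataset built from an image folder):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for a LightlyDataset: in practice you would build one from an
# image folder, e.g. lightly.data.LightlyDataset(input_dir="path/to/images"),
# and pass it to the same DataLoader.
dataset = TensorDataset(torch.randn(16, 3, 32, 32))

# A plain PyTorch DataLoader handles batching and shuffling.
dataloader = DataLoader(dataset, batch_size=4, shuffle=True)

batch, = next(iter(dataloader))
print(batch.shape)  # torch.Size([4, 3, 32, 32])
```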
- Backbone Neural Network
One of the cool things about self-supervised learning is that you can pre-train your neural networks without the need for annotated data. You can plug in whatever backbone you want! If you don’t know where to start, our tutorials show how you can get a backbone neural network.
- Model
The model combines your backbone neural network with a projection head and, if required, a momentum encoder to provide an easy-to-use interface to the most popular self-supervised learning frameworks. Learn more in our tutorials.
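The backbone-plus-projection-head pattern can be sketched as follows (an illustrative SimCLR-style module, not lightly's actual implementation; the class and parameter names here are hypothetical):

```python
import torch
import torch.nn as nn

class SimCLRStyleModel(nn.Module):
    """Illustrative sketch: a backbone followed by an MLP projection head."""

    def __init__(self, backbone: nn.Module, feature_dim: int, proj_dim: int = 128):
        super().__init__()
        self.backbone = backbone
        # The projection head maps backbone features into the space
        # where the self-supervised loss is computed.
        self.projection_head = nn.Sequential(
            nn.Linear(feature_dim, feature_dim),
            nn.ReLU(),
            nn.Linear(feature_dim, proj_dim),
        )

    def forward(self, x):
        features = self.backbone(x).flatten(start_dim=1)
        return self.projection_head(features)

# Toy backbone standing in for e.g. a ResNet without its classification head.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
model = SimCLRStyleModel(backbone, feature_dim=64)
out = model(torch.randn(4, 3, 32, 32))
print(out.shape)  # torch.Size([4, 128])
```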
- Loss
The loss function plays a crucial role in self-supervised learning. Currently, lightly supports a contrastive and a similarity-based loss function.
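To make the contrastive idea concrete, here is a NumPy sketch of an NT-Xent-style loss (the principle behind contrastive objectives like the one used in SimCLR; this is an illustration, not lightly's implementation):

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent-style contrastive loss sketch for two batches of embeddings.

    z1[i] and z2[i] are embeddings of two augmented views of the same image
    (a positive pair); all other pairs in the batch act as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # Index of the positive partner for each row: i <-> i + n.
    pos = np.concatenate([np.arange(n) + n, np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

z1 = np.random.randn(8, 16)
z2 = z1 + 0.01 * np.random.randn(8, 16)  # nearly identical second view
loss = nt_xent_loss(z1, z2)
```

The loss is low when the two views of each image map to similar embeddings and other images map elsewhere, which is exactly what the training pulls the network towards.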
- Optimizer
With lightly, you can use any PyTorch optimizer to train your model.
- Self-supervised Embedding
lightly.embedding.embedding.SelfSupervisedEmbedding connects the concepts from above in an easy-to-use PyTorch Lightning module. After creating a SelfSupervisedEmbedding, it can be trained with a single line:
```python
# build a self-supervised embedding and train it
encoder = lightly.embedding.SelfSupervisedEmbedding(model, loss, optimizer, dataloader)
encoder.train(gpus=1, max_epochs=10)
```
However, you can still write the training loop in plain PyTorch code. See Tutorial 4: Train SimSiam on Satellite Images for an example.
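Such a loop might look like the following sketch (toy model, toy data, and a simple cosine-similarity criterion standing in for the components from your own setup):

```python
import torch
import torch.nn as nn

# Illustrative stand-ins; in practice these are your backbone model,
# self-supervised loss, and DataLoader.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 32))
criterion = nn.CosineSimilarity(dim=1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(2):  # a couple of steps on random data as a stand-in
    x1 = torch.randn(4, 3, 32, 32)        # first augmented view
    x2 = x1 + 0.1 * torch.randn_like(x1)  # second augmented view
    z1, z2 = model(x1), model(x2)
    loss = -criterion(z1, z2).mean()      # pull the two views together
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```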