# LightlyTrain Documentation
```{eval-rst}
.. image:: _static/lightly_train_light.svg
:align: center
:class: only-light
.. image:: _static/lightly_train_dark.svg
:align: center
:class: only-dark
```
[Quick Start on Colab](https://colab.research.google.com/github/lightly-ai/lightly-train/blob/main/examples/notebooks/quick_start.ipynb)
[Installation](https://docs.lightly.ai/train/stable/installation.html)
[Docker](https://docs.lightly.ai/train/stable/docker.html#)
[Documentation](https://docs.lightly.ai/train/stable/)
[Discord](https://discord.gg/xvNJW94)
*Train Better Models, Faster - No Labels Needed*
LightlyTrain brings self-supervised pretraining to real-world computer vision pipelines, using
your unlabeled data to reduce labeling costs and speed up model deployment. Leveraging the
state-of-the-art from research, it pretrains your model on your unlabeled, domain-specific
data, significantly reducing the amount of labeling needed to reach high model performance.
This allows you to focus on new features and domains instead of managing your labeling cycles.
LightlyTrain is designed for simple integration into existing training pipelines and supports
a wide range of model architectures and use cases out of the box.
## Why Lightly**Train**?
- 💸 **No Labels Required**: Speed up development by pretraining models on your unlabeled image and video data.
- 🔄 **Domain Adaptation**: Improve models by pretraining on your domain-specific data (e.g. video analytics, agriculture, automotive, healthcare, manufacturing, retail, and more).
- 🏗️ **Model & Task Agnostic**: Compatible with any architecture and task, including detection, classification, and segmentation.
- 🚀 **Industrial-Scale Support**: LightlyTrain scales from thousands to millions of images. Supports on-prem, cloud, single, and multi-GPU setups.
## How It Works [Open in Colab](https://colab.research.google.com/github/lightly-ai/lightly-train/blob/main/examples/notebooks/quick_start.ipynb)
Install Lightly**Train**:
```bash
pip install lightly-train
```
Then start pretraining with:
```python
import lightly_train

if __name__ == "__main__":
    lightly_train.train(
        out="out/my_experiment",  # Output directory
        data="my_data_dir",  # Directory with images
        model="torchvision/resnet50",  # Model to train
    )
```
This will pretrain a Torchvision ResNet-50 model using unlabeled images from `my_data_dir`.
All training logs, model exports, and checkpoints are saved to the output directory
at `out/my_experiment`. The final model is exported to `out/my_experiment/exported_models/exported_last.pt`.
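The same run can also be launched from the command line. A minimal sketch, assuming the `lightly-train` CLI entry point installed by the package and its key=value argument style:

```bash
lightly-train train out="out/my_experiment" data="my_data_dir" model="torchvision/resnet50"
```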
Finally, load the pretrained model and fine-tune it using your existing training pipeline:
```python
import torch
from torchvision import models

# Load the pretrained model
model = models.resnet50()
model.load_state_dict(torch.load("out/my_experiment/exported_models/exported_last.pt"))

# Fine-tune the model with your existing training pipeline
...
```
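For illustration only, a minimal fine-tuning loop continuing from the `model` loaded above might look like the sketch below. The 10-class head and the `dataloader` are placeholder assumptions for a classification task; this is not LightlyTrain's own fine-tuning recipe:

```python
import torch
from torch import nn

# Replace the classification head for the downstream task
# (10 classes is purely illustrative).
model.fc = nn.Linear(model.fc.in_features, 10)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Standard supervised fine-tuning loop over your existing DataLoader.
model.train()
for images, labels in dataloader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```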
```{seealso}
Looking for a full fine-tuning example? Head over to the [Quick Start](quick_start.md#fine-tune)!
```
```{seealso}
Want to use your model to generate image embeddings instead? Check out the {ref}`embed` guide!
```
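Embedding generation follows the same pattern as training. A minimal sketch, assuming the `lightly_train.embed` function with `out`, `checkpoint`, and `data` arguments as described in the embed guide, and the checkpoint path from the pretraining run above:

```python
import lightly_train

if __name__ == "__main__":
    lightly_train.embed(
        out="my_embeddings.pth",  # Output file for the embeddings
        checkpoint="out/my_experiment/checkpoints/last.ckpt",  # Pretrained checkpoint
        data="my_data_dir",  # Directory with images to embed
    )
```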
## Features
- Train models on any image data without labels
- Train models from popular libraries such as [Torchvision](#models-torchvision),
[TIMM](#models-timm), [Ultralytics](#models-ultralytics), [SuperGradients](#models-supergradients),
[RT-DETR](#models-rtdetr), [RF-DETR](#models-rfdetr), and [YOLOv12](#models-yolov12)
- Train [custom models](#custom-models) with ease
- No self-supervised learning expertise required
- Automatic SSL method selection (coming soon!)
- Python, Command Line, and {ref}`docker` support
- Built for [high performance](#performance) including [multi-GPU](#multi-gpu) and [multi-node](#multi-node) support
- {ref}`Export models <export>` for fine-tuning or inference
- Generate and export {ref}`image embeddings <embed>`