# LightlyTrain Documentation
```{eval-rst}
.. image:: _static/lightly_train_light.svg
:align: center
:class: only-light
.. image:: _static/lightly_train_dark.svg
:align: center
:class: only-dark
```
[Open in Colab](https://colab.research.google.com/github/lightly-ai/lightly-train/blob/main/examples/notebooks/quick_start.ipynb)
[Installation](https://docs.lightly.ai/train/stable/installation.html)
[Docker](https://docs.lightly.ai/train/stable/docker.html#)
[Documentation](https://docs.lightly.ai/train/stable/)
[Discord](https://discord.gg/xvNJW94)
*Train Better Models, Faster*
LightlyTrain is the leading framework for transforming your data into state-of-the-art
computer vision models. It covers the entire model development lifecycle from pretraining
DINOv2/v3 vision foundation models on your unlabeled data to fine-tuning transformer and
YOLO models on detection and segmentation tasks for edge deployment.
[Contact us](https://www.lightly.ai/contact) to request a license for commercial use.
## News
- \[[0.12.0](https://docs.lightly.ai/train/stable/changelog.html#changelog-0-12-0)\] - 2025-11-06: 💡 **New DINOv3 Object Detection:** Run inference or fine-tune DINOv3 models for [object detection](https://docs.lightly.ai/train/stable/object_detection.html)! 💡
- \[[0.11.0](https://docs.lightly.ai/train/stable/changelog.html#changelog-0-11-0)\] - 2025-08-15: 🚀 **New DINOv3 Support:** Pretrain your own model with [distillation](https://docs.lightly.ai/train/stable/methods/distillation.html#methods-distillation-dinov3) from DINOv3 weights. Or fine-tune our SOTA [EoMT semantic segmentation model](https://docs.lightly.ai/train/stable/semantic_segmentation.html#semantic-segmentation-eomt-dinov3) with a DINOv3 backbone! 🚀
- \[[0.10.0](https://docs.lightly.ai/train/stable/changelog.html#changelog-0-10-0)\] - 2025-08-04:
🔥 **Train state-of-the-art semantic segmentation models** with our new
[**DINOv2 semantic segmentation**](https://docs.lightly.ai/train/stable/semantic_segmentation.html)
fine-tuning method! 🔥
- \[[0.9.0](https://docs.lightly.ai/train/stable/changelog.html#changelog-0-9-0)\] - 2025-07-21:
[**DINOv2 pretraining**](https://docs.lightly.ai/train/stable/methods/dinov2.html) is
now officially available!
## Workflows
````{grid} 1 1 2 3
```{grid-item-card} Object Detection
:link: object_detection.html

Train LTDETR detection models with DINOv2 or DINOv3 backbones.
```
```{grid-item-card} Instance Segmentation
:link: instance_segmentation.html

Train EoMT segmentation models with DINOv3 backbones.
```
```{grid-item-card} Semantic Segmentation
:link: semantic_segmentation.html

Train EoMT segmentation models with DINOv2 or DINOv3 backbones.
```
```{grid-item-card} Distillation
:link: methods/distillation.html

Distill knowledge from DINOv2 or DINOv3 into any model architecture.
```
```{grid-item-card} Pretraining
:link: methods/dinov2.html

Pretrain DINOv2 foundation models on your domain data.
```
```{grid-item-card} Autolabeling
:link: predict_autolabel.html

Generate high-quality pseudo labels for detection and segmentation tasks.
```
````
## How It Works [Open in Colab](https://colab.research.google.com/github/lightly-ai/lightly-train/blob/main/examples/notebooks/quick_start.ipynb)
Install Lightly**Train** on Python 3.8+ for Windows, Linux, or macOS.
```bash
pip install lightly-train
```
Then train an object detection model with:
```python
import lightly_train

if __name__ == "__main__":
    lightly_train.train_object_detection(
        out="out/my_experiment",
        model="dinov3/convnext-tiny-ltdetr-coco",
        data={
            # ... Data configuration
        },
    )
```
And run inference like this:
```python
import lightly_train
# Load the model from the best checkpoint
model = lightly_train.load_model("out/my_experiment/exported_models/exported_best.pt")
# Or load one of the models hosted by LightlyTrain
model = lightly_train.load_model("dinov3/convnext-tiny-ltdetr-coco")
results = model.predict("image.jpg")
```
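The same `predict` call can be reused for batch inference. The sketch below simply loops over the image files in a folder, assuming `predict` accepts a single image path exactly as in the snippet above; the folder name is a placeholder:
```python
from pathlib import Path

import lightly_train

# Load the fine-tuned model once and reuse it for every image.
model = lightly_train.load_model("out/my_experiment/exported_models/exported_best.pt")

# Run prediction on each JPEG in the folder and collect the results by file name.
image_dir = Path("images")  # placeholder folder with your images
results_per_image = {
    path.name: model.predict(str(path)) for path in sorted(image_dir.glob("*.jpg"))
}
```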
See the full [quick start guide](#quick_start) for more details.
## Features
- Python, Command Line, and [Docker](https://docs.lightly.ai/train/stable/docker.html) support
- Built for [high performance](https://docs.lightly.ai/train/stable/performance/index.html) including [multi-GPU](https://docs.lightly.ai/train/stable/performance/multi_gpu.html) and [multi-node](https://docs.lightly.ai/train/stable/performance/multi_node.html) support
- [Monitor training progress](https://docs.lightly.ai/train/stable/train.html#loggers) with MLflow, TensorBoard, Weights & Biases, and more
- Runs fully on-premises with no API authentication
- Export models in their native format for fine-tuning or inference
- Export models in ONNX or TensorRT format for edge deployment
## Models
LightlyTrain supports the following model and workflow combinations.
### Fine-tuning
| Model | Object Detection | Instance Segmentation | Semantic Segmentation |
| ------ | :----------------------------------------------------------------: | :---------------------------------------------------------------------: | :------------------------------------------------------------------------------------------: |
| DINOv3 | ✅ [🔗](https://docs.lightly.ai/train/stable/object_detection.html) | ✅ [🔗](https://docs.lightly.ai/train/stable/instance_segmentation.html) | ✅ [🔗](https://docs.lightly.ai/train/stable/semantic_segmentation.html#use-eomt-with-dinov3) |
| DINOv2 | ✅ [🔗](https://docs.lightly.ai/train/stable/object_detection.html) | | ✅ [🔗](https://docs.lightly.ai/train/stable/semantic_segmentation.html) |
### Distillation & Pretraining
| Model | Distillation | Pretraining |
| ------------------------------ | :----------------------------------------------------------------------------------------: | :--------------------------------------------------------------------: |
| DINOv3 | ✅ [🔗](https://docs.lightly.ai/train/stable/methods/distillation.html#distill-from-dinov3) | |
| DINOv2 | ✅ [🔗](https://docs.lightly.ai/train/stable/methods/distillation.html) | ✅ [🔗](https://docs.lightly.ai/train/stable/methods/dinov2.html) |
| Torchvision ResNet, ConvNeXt, ShuffleNetV2 | ✅ [🔗](https://docs.lightly.ai/train/stable/models/torchvision.html) | ✅ [🔗](https://docs.lightly.ai/train/stable/models/torchvision.html) |
| TIMM models | ✅ [🔗](https://docs.lightly.ai/train/stable/models/timm.html) | ✅ [🔗](https://docs.lightly.ai/train/stable/models/timm.html) |
| Ultralytics YOLOv5–YOLO12 | ✅ [🔗](https://docs.lightly.ai/train/stable/models/ultralytics.html) | ✅ [🔗](https://docs.lightly.ai/train/stable/models/ultralytics.html) |
| RT-DETR, RT-DETRv2 | ✅ [🔗](https://docs.lightly.ai/train/stable/models/rtdetr.html) | ✅ [🔗](https://docs.lightly.ai/train/stable/models/rtdetr.html) |
| RF-DETR | ✅ [🔗](https://docs.lightly.ai/train/stable/models/rfdetr.html) | ✅ [🔗](https://docs.lightly.ai/train/stable/models/rfdetr.html) |
| YOLOv12 | ✅ [🔗](https://docs.lightly.ai/train/stable/models/yolov12.html) | ✅ [🔗](https://docs.lightly.ai/train/stable/models/yolov12.html) |
| Custom PyTorch Model | ✅ [🔗](https://docs.lightly.ai/train/stable/models/custom_models.html) | ✅ [🔗](https://docs.lightly.ai/train/stable/models/custom_models.html) |
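
For example, distilling DINOv2 knowledge into a Torchvision ResNet-50 could look like the following minimal sketch. It assumes the `lightly_train.train` entry point with `method="distillation"` described in the distillation docs linked above; the output and data paths are placeholders:
```python
import lightly_train

if __name__ == "__main__":
    # Distill knowledge from a DINOv2 teacher into a ResNet-50 student
    # using only unlabeled images.
    lightly_train.train(
        out="out/distillation_experiment",  # placeholder output directory
        data="my_image_dir",  # placeholder folder with unlabeled images
        model="torchvision/resnet50",  # student model from the table above
        method="distillation",
    )
```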
[Contact us](https://www.lightly.ai/contact) if you need support for additional models.
## Usage Events
LightlyTrain collects anonymous usage events to help us improve the product. We only
track the training method, model architecture, and system information (OS, GPU). To opt
out, set the environment variable: `export LIGHTLY_TRAIN_EVENTS_DISABLED=1`
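If you prefer to opt out from Python instead of the shell, a minimal sketch is to set the variable before importing the package (this assumes the variable is read when LightlyTrain starts, as the shell example above implies):
```python
import os

# Equivalent to `export LIGHTLY_TRAIN_EVENTS_DISABLED=1` in the shell:
# disable anonymous usage events before importing lightly_train.
os.environ["LIGHTLY_TRAIN_EVENTS_DISABLED"] = "1"

import lightly_train  # noqa: E402
```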
## License
Lightly**Train** offers flexible licensing options to suit your specific needs:
- **AGPL-3.0 License**: Perfect for open-source projects, academic research, and community contributions.
Share your innovations with the world while benefiting from community improvements.
- **Commercial License**: Ideal for businesses and organizations that need proprietary development freedom.
Enjoy all the benefits of LightlyTrain while keeping your code and models private.
We're committed to supporting both open-source and commercial users.
Please [contact us](https://www.lightly.ai/contact) to discuss the best licensing option for your project!
## Contact
[Website](https://www.lightly.ai/lightly-train)
[Discord](https://discord.gg/xvNJW94)
[GitHub](https://github.com/lightly-ai/lightly-train)
[X](https://x.com/lightlyai)
[LinkedIn](https://www.linkedin.com/company/lightly-tech)
```{toctree}
---
hidden:
maxdepth: 2
---
quick_start
installation
train/index
object_detection
instance_segmentation
semantic_segmentation
predict_autolabel
export
embed
models/index
methods/index
data/index
performance/index
docker
tutorials/index
python_api/index
faq
changelog
```