Train Better Models, Faster
LightlyTrain is the leading framework for transforming your data into state-of-the-art computer vision models. It covers the entire model development lifecycle, from pretraining DINOv2/v3 vision foundation models on your unlabeled data, to fine-tuning transformer and YOLO models on detection and segmentation tasks for edge deployment.
Contact us to request a license for commercial use.
News
[0.12.0] - 2025-11-06: 💡 New DINOv3 Object Detection: Run inference or fine-tune DINOv3 models for object detection! 💡
[0.11.0] - 2025-08-15: 🚀 New DINOv3 Support: Pretrain your own model with distillation from DINOv3 weights, or fine-tune our SOTA EoMT semantic segmentation model with a DINOv3 backbone! 🚀
[0.10.0] - 2025-08-04: 🔥 Train state-of-the-art semantic segmentation models with our new DINOv2 semantic segmentation fine-tuning method! 🔥
[0.9.0] - 2025-07-21: DINOv2 pretraining is now officially available!
Workflows

Train LTDETR detection models with DINOv2 or DINOv3 backbones.

Train EoMT instance segmentation models with DINOv3 backbones.

Train EoMT semantic segmentation models with DINOv2 or DINOv3 backbones.

Distill knowledge from DINOv2 or DINOv3 into any model architecture.

Pretrain DINOv2 foundation models on your domain data.

Generate high-quality pseudo labels for detection and segmentation tasks.
How It Works
Install LightlyTrain on Python 3.8+ for Windows, Linux, or macOS.
pip install lightly-train
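Before installing, you can confirm that your interpreter meets the Python 3.8+ requirement. This is a minimal, standard-library-only sanity check (not part of the LightlyTrain API):

```python
import sys

# LightlyTrain supports Python 3.8 and newer
if sys.version_info < (3, 8):
    raise RuntimeError("LightlyTrain requires Python 3.8+")
print(f"Python {sys.version_info.major}.{sys.version_info.minor} is supported")
```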
Then train an object detection model with:
import lightly_train

if __name__ == "__main__":
    lightly_train.train_object_detection(
        out="out/my_experiment",
        model="dinov3/convnext-tiny-ltdetr-coco",
        data={
            # ... Data configuration
        },
    )
And run inference like this:
import lightly_train

# Load the model from the best checkpoint
model = lightly_train.load_model("out/my_experiment/exported_models/exported_best.pt")

# Or load one of the models hosted by LightlyTrain
model = lightly_train.load_model("dinov3/convnext-tiny-ltdetr-coco")

results = model.predict("image.jpg")
See the full quick start guide for more details.
Features
Python, Command Line, and Docker support
Built for high performance including multi-GPU and multi-node support
Monitor training progress with MLflow, TensorBoard, Weights & Biases, and more
Runs fully on-premises with no API authentication
Export models in their native format for fine-tuning or inference
Export models in ONNX or TensorRT format for edge deployment
Models
LightlyTrain supports the following model and workflow combinations.
Fine-tuning
Distillation & Pretraining
| Model | Distillation | Pretraining |
|---|---|---|
| DINOv3 | ✅ | |
| DINOv2 | ✅ | ✅ |
| Torchvision ResNet, ConvNext, ShuffleNetV2 | ✅ | ✅ |
| TIMM models | ✅ | ✅ |
| Ultralytics YOLOv5–YOLO12 | ✅ | ✅ |
| RT-DETR, RT-DETRv2 | ✅ | ✅ |
| RF-DETR | ✅ | ✅ |
| YOLOv12 | ✅ | ✅ |
| Custom PyTorch Model | ✅ | ✅ |
Contact us if you need support for additional models.
Usage Events
LightlyTrain collects anonymous usage events to help us improve the product. We only track the training method, model architecture, and system information (OS, GPU). To opt out, set the environment variable:
export LIGHTLY_TRAIN_EVENTS_DISABLED=1
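The same opt-out can also be applied from within Python. Note that this snippet only sets the documented environment variable; setting it before lightly_train is imported is an assumption, following the common pattern for import-time configuration:

```python
import os

# Disable anonymous usage events; set this before importing lightly_train
os.environ["LIGHTLY_TRAIN_EVENTS_DISABLED"] = "1"
```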
License
LightlyTrain offers flexible licensing options to suit your specific needs:
AGPL-3.0 License: Perfect for open-source projects, academic research, and community contributions. Share your innovations with the world while benefiting from community improvements.
Commercial License: Ideal for businesses and organizations that need proprietary development freedom. Enjoy all the benefits of LightlyTrain while keeping your code and models private.
We're committed to supporting both open-source and commercial users. Please contact us to discuss the best licensing option for your project!