LightlyTrain Documentation


Train Better Models, Faster

LightlyTrain is the leading framework for transforming your data into state-of-the-art computer vision models. It covers the entire model development lifecycle, from pretraining DINOv2/DINOv3 vision foundation models on your unlabeled data to fine-tuning transformer and YOLO models on detection and segmentation tasks for edge deployment.

Contact us to request a license for commercial use.

Workflows

  • Object Detection: Train LTDETR detection models with DINOv2 or DINOv3 backbones. (object_detection.html)

  • Instance Segmentation: Train EoMT segmentation models with DINOv3 backbones. (instance_segmentation.html)

  • Panoptic Segmentation: Train EoMT segmentation models with DINOv3 backbones. (panoptic_segmentation.html)

  • Semantic Segmentation: Train EoMT segmentation models with DINOv2 or DINOv3 backbones. (semantic_segmentation.html)

  • Distillation: Distill knowledge from DINOv2 or DINOv3 into any model architecture. (pretrain_distill/methods/distillation.html)

  • Pretraining: Pretrain DINOv2 foundation models on your domain data. (pretrain_distill/methods/dinov2.html)

  • Autolabeling: Generate high-quality pseudo labels for detection and segmentation tasks. (predict_autolabel.html)
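As a sketch of the distillation workflow above, training follows the same pattern as the object detection quick start below, with lightly_train.train and method="distillation". Note that "my_data_dir", the output path, and the ResNet-50 student are placeholder choices; see the linked distillation page for the exact arguments.

```python
import lightly_train

if __name__ == "__main__":
    lightly_train.train(
        out="out/distill_experiment",   # output directory for logs and checkpoints
        data="my_data_dir",             # placeholder: a folder of unlabeled images
        model="torchvision/resnet50",   # placeholder: the student model to distill into
        method="distillation",          # distill knowledge from a DINOv2/DINOv3 teacher
    )
```

The same call with a different method selects pretraining instead of distillation.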

How It Works

Install LightlyTrain on Python 3.8+ for Windows, Linux, or macOS.

pip install lightly-train

Then train an object detection model with:

import lightly_train

if __name__ == "__main__":
    lightly_train.train_object_detection(
        out="out/my_experiment",
        model="dinov3/vitt16-ltdetr-coco",
        data={
            # ... Data configuration
        },
    )

And run inference like this:

import lightly_train

# Load the model from the best checkpoint
model = lightly_train.load_model("out/my_experiment/exported_models/exported_best.pt")
# Or load one of the models hosted by LightlyTrain
model = lightly_train.load_model("dinov3/vitt16-ltdetr-coco")
results = model.predict("image.jpg")

See the full quick start guide for more details.

Features

  • Python, Command Line, and Docker support

  • Built for high performance including multi-GPU and multi-node support

  • Monitor training progress with MLflow, TensorBoard, Weights & Biases, and more

  • Runs fully on-premises with no API authentication

  • Export models in their native format for fine-tuning or inference

  • Export models in ONNX or TensorRT format for edge deployment

Models

LightlyTrain supports the following model and workflow combinations.

Fine-tuning

| Model  | Object Detection | Instance Segmentation | Panoptic Segmentation | Semantic Segmentation |
|--------|------------------|-----------------------|-----------------------|-----------------------|
| DINOv3 | 🔗               | 🔗                    | 🔗                    | 🔗                    |
| DINOv2 | 🔗               |                       |                       | 🔗                    |

Distillation & Pretraining

| Model                                      | Distillation | Pretraining |
|--------------------------------------------|--------------|-------------|
| DINOv3                                     | 🔗           |             |
| DINOv2                                     | 🔗           | 🔗          |
| Torchvision ResNet, ConvNeXt, ShuffleNetV2 | 🔗           | 🔗          |
| TIMM models                                | 🔗           | 🔗          |
| Ultralytics YOLOv5–YOLO12                  | 🔗           | 🔗          |
| RT-DETR, RT-DETRv2                         | 🔗           | 🔗          |
| RF-DETR                                    | 🔗           | 🔗          |
| YOLOv12                                    | 🔗           | 🔗          |
| Custom PyTorch Model                       | 🔗           | 🔗          |

Contact us if you need support for additional models.

Usage Events

LightlyTrain collects anonymous usage events to help us improve the product. We only track the training method, model architecture, and system information (OS, GPU). To opt out, set the environment variable: export LIGHTLY_TRAIN_EVENTS_DISABLED=1
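If you launch training from Python rather than a shell, the same opt-out can be applied programmatically. A minimal sketch, using the variable name from the note above; set it before lightly_train is imported so the setting takes effect:

```python
import os

# Disable anonymous usage events. This must run before `import lightly_train`
# so the variable is visible when the package initializes.
os.environ["LIGHTLY_TRAIN_EVENTS_DISABLED"] = "1"
```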

License

LightlyTrain offers flexible licensing options to suit your specific needs:

  • AGPL-3.0 License: Perfect for open-source projects, academic research, and community contributions. Share your innovations with the world while benefiting from community improvements.

  • Commercial License: Ideal for businesses and organizations that need proprietary development freedom. Enjoy all the benefits of LightlyTrain while keeping your code and models private.

We’re committed to supporting both open-source and commercial users. Please contact us to discuss the best licensing option for your project!

Contact

Website
Discord
GitHub
X
LinkedIn