LightlyTrain Documentation



Train Better Models, Faster

LightlyTrain is the leading framework for transforming your data into state-of-the-art computer vision models. It covers the entire model development lifecycle, from pretraining DINOv2/v3 vision foundation models on your unlabeled data to fine-tuning transformer and YOLO models on detection and segmentation tasks for edge deployment.

Contact us to request a license for commercial use.

News

Workflows

  • Object Detection: Train LTDETR detection models with DINOv2 or DINOv3 backbones. (object_detection.html)

  • Instance Segmentation: Train EoMT segmentation models with DINOv3 backbones. (instance_segmentation.html)

  • Semantic Segmentation: Train EoMT segmentation models with DINOv2 or DINOv3 backbones. (semantic_segmentation.html)

  • Distillation: Distill knowledge from DINOv2 or DINOv3 into any model architecture. (methods/distillation.html)

  • Pretraining: Pretrain DINOv2 foundation models on your domain data. (methods/dinov2.html)

  • Autolabeling: Generate high-quality pseudo labels for detection and segmentation tasks. (predict_autolabel.html)
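
The distillation workflow transfers knowledge from a large teacher (e.g. DINOv2/DINOv3) into a smaller student. As a conceptual illustration only, not LightlyTrain's actual objective, a common feature-distillation loss matches L2-normalized teacher and student embeddings:

```python
import numpy as np


def distillation_loss(teacher_feats: np.ndarray, student_feats: np.ndarray) -> float:
    """Mean-squared error between L2-normalized teacher and student features.

    Generic feature-distillation sketch for illustration; the loss used by
    LightlyTrain may differ.
    """
    t = teacher_feats / np.linalg.norm(teacher_feats, axis=-1, keepdims=True)
    s = student_feats / np.linalg.norm(student_feats, axis=-1, keepdims=True)
    return float(np.mean((t - s) ** 2))


# Toy check: a student that exactly matches the teacher has zero loss.
teacher = np.random.default_rng(0).standard_normal((4, 256))
print(distillation_loss(teacher, teacher))   # 0.0
print(distillation_loss(teacher, -teacher))  # > 0
```

In practice the student is trained by minimizing such a loss over batches of unlabeled images, so any architecture that produces a feature vector can act as the student.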

How It Works

Install LightlyTrain on Python 3.8+ for Windows, Linux, or macOS.

pip install lightly-train

Then train an object detection model with:

import lightly_train

if __name__ == "__main__":
    lightly_train.train_object_detection(
        out="out/my_experiment",
        model="dinov3/convnext-tiny-ltdetr-coco",
        data={
            # ... Data configuration
        },
    )

And run inference like this:

import lightly_train

# Load the model from the best checkpoint
model = lightly_train.load_model("out/my_experiment/exported_models/exported_best.pt")
# Or load one of the models hosted by LightlyTrain
model = lightly_train.load_model("dinov3/convnext-tiny-ltdetr-coco")
results = model.predict("image.jpg")

See the full quick start guide for more details.

Features

  • Python, Command Line, and Docker support

  • Built for high performance including multi-GPU and multi-node support

  • Monitor training progress with MLflow, TensorBoard, Weights & Biases, and more

  • Runs fully on-premises with no API authentication

  • Export models in their native format for fine-tuning or inference

  • Export models in ONNX or TensorRT format for edge deployment
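
Models exported to ONNX are typically fed a normalized NCHW float32 batch. The sketch below shows common ImageNet-style preprocessing with NumPy; the exact input size, layout, and normalization a given LightlyTrain export expects are assumptions here, so check the export's documentation before reusing it:

```python
import numpy as np

# ImageNet mean/std, a common convention for vision models (an assumption,
# not necessarily what a specific export expects).
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)


def to_onnx_input(image_hwc_uint8: np.ndarray) -> np.ndarray:
    """Convert an HWC uint8 image to a normalized NCHW float32 batch."""
    x = image_hwc_uint8.astype(np.float32) / 255.0  # scale to [0, 1]
    x = (x - MEAN) / STD                            # channel-wise normalize
    x = x.transpose(2, 0, 1)[None]                  # HWC -> NCHW, add batch dim
    return np.ascontiguousarray(x)


image = np.zeros((640, 640, 3), dtype=np.uint8)     # dummy image
batch = to_onnx_input(image)
print(batch.shape, batch.dtype)  # (1, 3, 640, 640) float32
```

The resulting array can be passed to an ONNX Runtime `InferenceSession` or a TensorRT engine as the model input.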

Models

LightlyTrain supports the following model and workflow combinations.

Fine-tuning

| Model  | Object Detection | Instance Segmentation | Semantic Segmentation |
|--------|------------------|-----------------------|-----------------------|
| DINOv3 | ✅               | ✅                    | ✅                    |
| DINOv2 | ✅               |                       | ✅                    |

Distillation & Pretraining

| Model                                        | Distillation | Pretraining |
|----------------------------------------------|--------------|-------------|
| DINOv3                                       | ✅           |             |
| DINOv2                                       | ✅           | ✅          |
| Torchvision ResNet, ConvNeXt, ShuffleNetV2   | ✅           | ✅          |
| TIMM models                                  | ✅           | ✅          |
| Ultralytics YOLOv5–YOLO12                    | ✅           | ✅          |
| RT-DETR, RT-DETRv2                           | ✅           | ✅          |
| RF-DETR                                      | ✅           | ✅          |
| YOLOv12                                      | ✅           | ✅          |
| Custom PyTorch Model                         | ✅           | ✅          |

Contact us if you need support for additional models.

Usage Events

LightlyTrain collects anonymous usage events to help us improve the product. We only track the training method, model architecture, and system information (OS, GPU). To opt out, set the environment variable: export LIGHTLY_TRAIN_EVENTS_DISABLED=1
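
If you prefer to opt out from within Python (for example in a notebook), you can set the same variable programmatically, assuming it is set before the library starts:

```python
import os

# Disable anonymous usage events. The variable name comes from the
# documentation above; set it before importing lightly_train.
os.environ["LIGHTLY_TRAIN_EVENTS_DISABLED"] = "1"
print(os.environ["LIGHTLY_TRAIN_EVENTS_DISABLED"])  # 1
```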

License

LightlyTrain offers flexible licensing options to suit your specific needs:

  • AGPL-3.0 License: Perfect for open-source projects, academic research, and community contributions. Share your innovations with the world while benefiting from community improvements.

  • Commercial License: Ideal for businesses and organizations that need proprietary development freedom. Enjoy all the benefits of LightlyTrain while keeping your code and models private.

We're committed to supporting both open-source and commercial users. Please contact us to discuss the best licensing option for your project!

Contact

Website
Discord
GitHub
X
LinkedIn