Changelog

All notable changes to LightlyTrain will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

[0.5.0] - 2025-03-04

Added

  • Add tutorial on how to use LightlyTrain with YOLO.

  • Show the data_wait percentage in the progress bar to better monitor performance bottlenecks.

  • Add auto format export with example logging, which automatically determines the best export option for your model based on the model library used.

  • Add support for configuring the random rotation transform via transform_args.random_rotation.

  • Add support for configuring the color jitter transform via transform_args.color_jitter.
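    As a sketch of how these dotted transform arguments might look on the CLI: the paths transform_args.random_rotation and transform_args.color_jitter come from the entries above, but the entry point name, the out/data/model values, and the sub-keys (prob, degrees) are illustrative assumptions, not confirmed names.

    ```bash
    # Hypothetical sketch - only the transform_args.random_rotation and
    # transform_args.color_jitter paths are taken from this changelog;
    # all other names and values are placeholders.
    lightly-train train out="out/my_experiment" data="my_data_dir" \
        model="torchvision/resnet50" \
        transform_args.random_rotation.prob=0.5 \
        transform_args.random_rotation.degrees=90 \
        transform_args.color_jitter.prob=0.8
    ```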

  • When using the DINO method and configuring the transforms: remove local_view_size, local_view_resize, and n_local_views from DINOTransformArgs in favor of local_view.view_size, local_view.random_resize, and local_view.num_views. When using the CLI, replace transform_args.local_view_size with transform_args.local_view.view_size, … respectively.
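    For CLI users, the renames in this entry map as follows (the argument paths are taken from the entry itself; the example value is a placeholder):

    ```bash
    # old (pre-0.5.0)                       # new (0.5.0)
    # transform_args.local_view_size    ->  transform_args.local_view.view_size
    # transform_args.local_view_resize  ->  transform_args.local_view.random_resize
    # transform_args.n_local_views      ->  transform_args.local_view.num_views
    #
    # e.g. (placeholder value):
    # transform_args.local_view.num_views=6
    ```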

  • Allow specifying the precision when using the embed command. The loaded checkpoint is cast to that precision if necessary.
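    A minimal sketch of the new precision argument. Only the existence of a precision option on the embed command comes from this entry; the entry point name, the paths, and the "bf16-mixed" value (a PyTorch Lightning-style precision string) are illustrative assumptions.

    ```bash
    # Hypothetical sketch - argument names other than "precision" and the
    # precision value itself are assumptions, not confirmed by this changelog.
    lightly-train embed \
        out="embeddings.pth" \
        data="my_data_dir" \
        checkpoint="out/my_experiment/checkpoints/last.ckpt" \
        precision="bf16-mixed"
    ```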

Changed

  • Increase default DenseCL SGD learning rate to 0.1.

  • Dataset initialization is now faster when using multiple GPUs.

  • Models are now automatically exported at the end of training.

  • Update the docker image to PyTorch 2.5.1, CUDA 11.8, and cuDNN 9.

  • Switch from PIL+torchvision to albumentations for the image transformations. This gives a performance boost and allows for more advanced augmentations.

  • The metrics batch_time and data_time are grouped under profiling in the logs.

Fixed

  • Fix Ultralytics model export for Ultralytics v8.1 and v8.2.

  • Fix a failure of the export command when it is called in the same script as a train command using DDP.

  • Fix the logging of the train_loss to report the batch_size correctly.

[0.4.0] - 2024-12-05

Added

  • Log system information during training

  • Add Performance Tuning guide with documentation for multi-GPU and multi-node training

  • Add Pillow-SIMD support for faster data processing

    • The docker image now has Pillow-SIMD installed by default

  • Add ultralytics export format

  • Add support for DINO weight decay schedule

  • Add support for SGD optimizer with optim="sgd"
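    A sketch of selecting the SGD optimizer. Only optim="sgd" comes from this entry; the entry point name and the remaining arguments are placeholders, not confirmed values.

    ```bash
    # Hypothetical sketch - everything except optim="sgd" is a placeholder.
    lightly-train train out="out/my_experiment" data="my_data_dir" \
        model="torchvision/resnet50" \
        optim="sgd"
    ```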

  • Report final accelerator, num_devices, and strategy in the resolved config

  • Add Changelog to the documentation

Changed

  • Various improvements for the DenseCL method

    • Increase default memory bank size

    • Update local loss calculation

  • Custom models have a new interface

  • The number of warmup epochs is now set to 10% of the training epochs for runs with fewer than 100 epochs

  • Update default optimizer settings

    • SGD is now the default optimizer

    • Improve default learning rate and weight decay values

  • Improve automatic num_workers calculation

  • The SPPF layer of Ultralytics YOLO models is no longer trained

Removed

  • Remove DenseCLDINO method

  • Remove DINO teacher_freeze_last_layer_epochs argument

[0.3.2] - 2024-11-06

Added

  • Log data loading and forward/backward pass time as data_time and batch_time

  • Batch size is now more uniformly handled

Changed

  • The custom model feature_dim property is now a method

  • Replace the FeatureExtractor base class with a set of Protocols

Fixed

  • Datasets support symlinks again

[0.3.1] - 2024-10-29

Added

  • The documentation is now available at https://docs.lightly.ai/train

  • Support loading checkpoint weights with the checkpoint argument
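    A sketch of loading pretrained weights via the checkpoint argument. Only the existence of a checkpoint argument comes from this entry; the entry point name, the paths, and the other arguments are illustrative assumptions.

    ```bash
    # Hypothetical sketch - the checkpoint path and all other values are
    # placeholders; only the "checkpoint" argument itself is from this entry.
    lightly-train train out="out/run2" data="my_data_dir" \
        model="torchvision/resnet50" \
        checkpoint="out/run1/checkpoints/last.ckpt"
    ```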

  • Log resolved training config to tensorboard and WandB

Fixed

  • Support single-channel images by converting them to RGB

  • Log config instead of locals

  • Skip pooling in DenseCLDINO

[0.3.0] - 2024-10-22

Added

  • Add Ultralytics model support

  • Add SuperGradients PP-LiteSeg model support

  • Save normalization transform arguments in checkpoints and automatically use them in the embed command

  • Better argument validation

  • Automatically configure num_workers based on available CPU cores

  • Add faster and more memory efficient image dataset

  • Log more image augmentations

  • Log resolved config for CallbackArgs, LoggerArgs, MethodArgs, MethodTransformArgs, and OptimizerArgs