# Changelog

All notable changes to LightlyTrain will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [0.6.1] - 2025-03-31

### Added

- Document platform compatibility.

### Changed

- TensorBoard is now installed automatically and is no longer an optional dependency.
- Update the Models documentation.
- Update the YOLO tutorial.

### Removed

- Remove DenseCL from the documentation.
## [0.6.0] - 2025-03-24

### Added

- Add support for DINOv2 distillation pretraining with the `"distillation"` method.
- Add support for YOLO11 and YOLO12 models.
- Add support for RT-DETR models.
- Add support for YOLOv12 models by the original authors.
- The Git info (branch name, commit, uncommitted changes) for the LightlyTrain package and for the directory from which the code runs is now logged in the `train.log` file.
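Collecting this kind of Git info typically amounts to a few best-effort `git` subprocess calls. The helper below is an illustrative sketch, not LightlyTrain's implementation; the function name and returned keys are hypothetical.

```python
import subprocess


def git_info(cwd="."):
    """Collect branch name, short commit hash, and an uncommitted-changes flag
    for the repository at ``cwd``. Returns None if ``cwd`` is not inside a Git
    repository or Git is unavailable."""

    def _git(*args):
        try:
            result = subprocess.run(
                ["git", *args], cwd=cwd, capture_output=True, text=True, timeout=5
            )
        except (OSError, subprocess.TimeoutExpired):
            return None
        return result.stdout.strip() if result.returncode == 0 else None

    branch = _git("rev-parse", "--abbrev-ref", "HEAD")
    if branch is None:
        return None
    return {
        "branch": branch,
        "commit": _git("rev-parse", "--short", "HEAD"),
        # Non-empty porcelain output means there are uncommitted changes.
        "uncommitted_changes": bool(_git("status", "--porcelain")),
    }
```

Failing soft (returning `None`) keeps logging from crashing a run that happens to start outside a repository.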
### Changed

- The default pretraining method is now `"distillation"`.
- The default embedding format is now `"torch"`.
- The log messages in the `train.log` file are now more concise.

### Fixed

- Ensure proper usage of the `blur_limit` parameter in the `GaussianBlur` transforms.
## [0.5.0] - 2025-03-04

### Added

- Add a tutorial on how to use LightlyTrain with YOLO.
- Show the `data_wait` percentage in the progress bar to better monitor performance bottlenecks.
- Add auto format export with example logging, which automatically determines the best export option for your model based on the model library used.
- Add support for configuring the random rotation transform via `transform_args.random_rotation`.
- Add support for configuring the color jitter transform via `transform_args.color_jitter`.
- When using the DINO method and configuring the transforms: remove `local_view_size`, `local_view_resize`, and `n_local_views` from `DINOTransformArgs` in favor of `local_view.view_size`, `local_view.random_resize`, and `local_view.num_views`. When using the CLI, replace `transform_args.local_view_size` with `transform_args.local_view.view_size`, …, respectively.
- Allow specifying the precision when using the `embed` command. The loaded checkpoint is cast to that precision if necessary.
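The `DINOTransformArgs` renames above follow a fixed old-to-new mapping, so existing flat CLI-style configs can be migrated mechanically. The helper below is a hypothetical sketch (not part of LightlyTrain) illustrating that mapping:

```python
# Old CLI keys (pre-0.5.0) mapped to their 0.5.0 replacements, as listed above.
_RENAMES = {
    "transform_args.local_view_size": "transform_args.local_view.view_size",
    "transform_args.local_view_resize": "transform_args.local_view.random_resize",
    "transform_args.n_local_views": "transform_args.local_view.num_views",
}


def migrate_dino_transform_args(args):
    """Return a copy of ``args`` with deprecated keys replaced by their new names;
    keys that were not renamed pass through unchanged."""
    return {_RENAMES.get(key, key): value for key, value in args.items()}
```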
### Changed

- Increase the default DenseCL SGD learning rate to 0.1.
- Dataset initialization is now faster when using multiple GPUs.
- Models are now automatically exported at the end of training.
- Update the Docker image to PyTorch 2.5.1, CUDA 11.8, and cuDNN 9.
- Switch from PIL and torchvision to Albumentations for the image transformations. This gives a performance boost and allows for more advanced augmentations.
- The metrics `batch_time` and `data_time` are grouped under `profiling` in the logs.
### Fixed

- Fix Ultralytics model export for Ultralytics v8.1 and v8.2.
- Fix that the `export` command could fail when called in the same script as a `train` command using DDP.
- Fix the logging of the `train_loss` to report the batch size correctly.
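The `train_loss` fix above amounts to weighting per-batch losses by their batch size instead of averaging them uniformly, which matters when batch sizes differ (e.g. a smaller final batch). A minimal illustrative sketch, not LightlyTrain's code:

```python
def uniform_mean_loss(losses, batch_sizes):
    """Naive mean over batches: biased when batch sizes differ."""
    return sum(losses) / len(losses)


def weighted_mean_loss(losses, batch_sizes):
    """Per-sample mean: each batch contributes proportionally to its size."""
    total = sum(batch_sizes)
    return sum(loss * size for loss, size in zip(losses, batch_sizes)) / total
```

For two batches with losses `[1.0, 2.0]` and sizes `[3, 1]`, the naive mean is 1.5 while the per-sample mean is 1.25.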
## [0.4.0] - 2024-12-05

### Added

- Log system information during training.
- Add a Performance Tuning guide with documentation for multi-GPU and multi-node training.
- Add Pillow-SIMD support for faster data processing.
  - The Docker image now has Pillow-SIMD installed by default.
- Add the `ultralytics` export format.
- Add support for a DINO weight decay schedule.
- Add support for the SGD optimizer with `optim="sgd"`.
- Report the final `accelerator`, `num_devices`, and `strategy` in the resolved config.
- Add the Changelog to the documentation.
### Changed

- Various improvements for the DenseCL method:
  - Increase the default memory bank size.
  - Update the local loss calculation.
- Custom models have a new interface.
- The number of warmup epochs is now set to 10% of the training epochs for runs with fewer than 100 epochs.
- Update the default optimizer settings:
  - SGD is now the default optimizer.
  - Improve the default learning rate and weight decay values.
- Improve the automatic `num_workers` calculation.
- The SPPF layer of Ultralytics YOLO models is no longer trained.
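The warmup rule above is easy to state as a function. Note the changelog only specifies the behavior for runs shorter than 100 epochs; the fixed fallback of 10 warmup epochs for longer runs is an assumption made here for illustration, and the function name is hypothetical.

```python
def warmup_epochs(total_epochs, default_warmup=10):
    """Warmup epochs per the rule above: 10% of the training epochs for runs
    with fewer than 100 epochs. The behavior for 100+ epochs is not spelled
    out in the changelog; a fixed ``default_warmup`` is ASSUMED here."""
    if total_epochs < 100:
        return max(1, round(0.1 * total_epochs))
    return default_warmup
```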
### Removed

- Remove the DenseCLDINO method.
- Remove the DINO `teacher_freeze_last_layer_epochs` argument.
## [0.3.2] - 2024-11-06

### Added

- Log data loading and forward/backward pass time as `data_time` and `batch_time`.
- Batch size is now handled more uniformly.

### Changed

- The custom model `feature_dim` property is now a method.
- Replace the `FeatureExtractor` base class with a set of `Protocol`s.

### Fixed

- Datasets support symlinks again.
## [0.3.1] - 2024-10-29

### Added

- The documentation is now available at https://docs.lightly.ai/train.
- Support loading checkpoint weights with the `checkpoint` argument.
- Log the resolved training config to TensorBoard and WandB.

### Fixed

- Support single-channel images by converting them to RGB.
- Log the config instead of locals.
- Skip pooling in DenseCLDINO.
## [0.3.0] - 2024-10-22

### Added

- Add Ultralytics model support.
- Add SuperGradients PP-LiteSeg model support.
- Save the normalization transform arguments in checkpoints and automatically use them in the `embed` command.
- Better argument validation.
- Automatically configure `num_workers` based on the available CPU cores.
- Add a faster and more memory-efficient image dataset.
- Log more image augmentations.
- Log the resolved config for `CallbackArgs`, `LoggerArgs`, `MethodArgs`, `MethodTransformArgs`, and `OptimizerArgs`.
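A common shape for this kind of `num_workers` auto-configuration is to divide the available CPU cores across devices while keeping a core free for the main process. The sketch below is a generic heuristic for illustration only; LightlyTrain's actual calculation may differ, and the function name and defaults are assumptions.

```python
import os


def suggest_num_workers(num_devices=1, reserved_cores=1, max_workers=8):
    """Heuristic dataloader worker count: split the CPU cores across devices,
    reserving some for the main process and capping the result. Illustrative
    only; not LightlyTrain's actual formula."""
    cores = os.cpu_count() or 1
    available = max(1, cores - reserved_cores)
    return max(1, min(max_workers, available // num_devices))
```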