We all know that working with ML data sometimes means dealing with really BIG datasets. The cloud solution is great for exploration, prototyping, and an easy way to get started with Lightly. But there is more!
With the introduction of our on-premise solution, you can process larger datasets entirely on your end, without any data leaving your infrastructure. We worked hard to make this happen and are very proud to present the following features:
NEW Automatically upload the sampled dataset to the Lightly Platform (see Upload Sampled Dataset To Lightly Platform)
NEW See your docker runs live in the Lightly Platform (see Live View of Docker Status)
Lightly Docker has built-in pretagging models (see Pretagging)
Use this feature to pre-label your dataset or to select only images that contain certain objects
Supported object categories are: bicycle, bus, car, motorcycle, person, train, truck
Sample more than 1 million images within a few hours!
Runs directly on videos without prior extraction of the frames!
Wrapped in a Docker container (no setup required if your system supports Docker)
Use stopping conditions for sampling, such as a minimum distance between two samples
Use various sampling methods
Check for corrupt files and report them
Check for exact duplicates and report them
We expose the full Lightly framework configuration
Automated reporting of the datasets for each run
PDF report with histograms, plots, statistics, and much more …
Hand-optimized code (down to the instruction level)
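To give an idea of what the exact-duplicate check does conceptually, here is a minimal illustrative sketch based on content hashing. This is an assumption about one common way to implement such a check, not the actual Lightly Docker implementation; the function name `find_exact_duplicates` is hypothetical.

```python
import hashlib
from collections import defaultdict
from pathlib import Path


def find_exact_duplicates(paths):
    """Group files by a hash of their raw bytes and report groups
    that contain more than one file (i.e. exact duplicates)."""
    groups = defaultdict(list)
    for path in paths:
        digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
        groups[digest].append(path)
    # Only groups with two or more identical files are duplicates.
    return [group for group in groups.values() if len(group) > 1]
```

Hashing the file contents (rather than comparing pixel values) makes the check fast and byte-exact, which is why only true exact duplicates are reported this way.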
Minimal hardware requirements:
1 CPU core
4 GB of free RAM

Recommended hardware:
8 CPU cores or more
16 GB of free RAM
1 Nvidia Tesla P100, T4, V100, or A100 GPU with CUDA 10.0+
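The minimum-distance stopping condition mentioned above can be pictured as a greedy selection loop over embeddings: keep picking the sample farthest from everything already selected, and stop once no remaining candidate is at least the minimum distance away. The sketch below is a hypothetical illustration of that idea, not Lightly's actual sampler; the function name and signature are assumptions.

```python
import numpy as np


def sample_with_min_distance(embeddings, min_distance):
    """Greedily select indices of embeddings, stopping when every
    remaining candidate lies closer than min_distance to an
    already selected sample."""
    selected = [0]  # seed the selection with the first embedding
    while True:
        chosen = embeddings[selected]
        # Distance of every embedding to its nearest selected sample.
        dists = np.linalg.norm(
            embeddings[:, None, :] - chosen[None, :, :], axis=-1
        ).min(axis=1)
        dists[selected] = -1.0  # never re-select a chosen sample
        best = int(dists.argmax())
        if dists[best] < min_distance:
            break  # stopping condition reached
        selected.append(best)
    return selected
```

The stopping condition doubles as a diversity guarantee: any two selected samples are at least `min_distance` apart in embedding space.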