Selection Input

Lightly supports several input types that can be fed to the selection strategies to achieve your objectives.

The input can be one of the following:

  • Embeddings
  • Scores
  • Predictions
  • Metadata
  • Random

Not all inputs can be combined with every selection strategy. Please see input and strategy combinations for detailed information.

Embeddings

The Lightly framework for self-supervised learning is used to compute the embeddings. Each embedding is a vector of numbers representing a sample.

You can access embeddings as input using:

"input": {
  "type": "EMBEDDINGS"
}
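
For context, here is a sketch of how this input could sit inside a full strategy entry of a selection configuration, paired with a diversity strategy (the n_samples budget and the DIVERSITY strategy are illustrative choices, not the only valid ones):

{
    "n_samples": 100,  # illustrative selection budget
    "strategies": [
        {
            "input": {
                "type": "EMBEDDINGS"
            },
            "strategy": {
                "type": "DIVERSITY"
            }
        }
    ]
}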

You can also use embeddings from other datasets for strategies such as similarity search:

"input": {
    "type": "EMBEDDINGS",
    "dataset_id": "DATASET_ID_OF_THE_QUERY_IMAGES",
    "tag_name": "TAG_NAME_OF_THE_QUERY_IMAGES" # e.g. "initial-tag"
},
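
Such a cross-dataset embeddings input is typically paired with a similarity strategy. A sketch, reusing the placeholders above:

{
    "input": {
        "type": "EMBEDDINGS",
        "dataset_id": "DATASET_ID_OF_THE_QUERY_IMAGES",
        "tag_name": "TAG_NAME_OF_THE_QUERY_IMAGES"
    },
    "strategy": {
        "type": "SIMILARITY"
    }
}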

Or object embeddings from an object or keypoint detection task to select images with diverse objects:

"input": {
    "type": "EMBEDDINGS",
    "task": "my_detection_task",   # or "lightly_pretagging"
},

Scores

Active learning scores estimate, for each sample, how much adding it to the training set would improve the model. Since the label of each sample is unknown, this improvement cannot be computed directly. Instead, a proxy score is derived for each sample from the predictions of the currently trained model. For example, samples on which the model is uncertain offer the greatest learning potential; this is captured by high uncertainty scores.
For more details on active learning scores and a list of all scores supported by Lightly, see Scorers for reference.
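
To illustrate the idea behind the uncertainty_entropy score, the following sketch computes a normalized entropy from a probability vector. It shows the underlying computation only and is not necessarily Lightly's exact implementation:

import numpy as np

def uncertainty_entropy(probs):
    # probs: predicted class probabilities for one sample, summing to 1
    probs = np.clip(probs, 1e-12, 1.0)
    entropy = -np.sum(probs * np.log(probs))
    # normalize by the maximum entropy (uniform prediction),
    # so the score lies in [0, 1]
    return float(entropy / np.log(len(probs)))

print(uncertainty_entropy(np.array([0.9, 0.05, 0.05])))  # confident -> low score
print(uncertainty_entropy(np.array([0.4, 0.3, 0.3])))    # uncertain -> high score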

To use scores as input, specify the prediction task and the score keys:

# using your own predictions
"input": {
    "type": "SCORES",
    "task": "YOUR_TASK_NAME",
    "score": "uncertainty_entropy"
}

# using the lightly pretagging model
"input": {
    "type": "SCORES",
    "task": "lightly_pretagging",
    "score": "uncertainty_entropy"
}

You can use one of the tasks you defined in your datasource, see Work with Predictions for reference. Alternatively, set the task to lightly_pretagging to use object detections created by the Lightly Worker itself. See Lightly Pretagging for reference.

We generally recommend using the uncertainty_entropy score.
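
Scores are commonly paired with a weights strategy, so that samples with higher uncertainty are preferred during selection. A sketch of such a strategy entry (the task name is a placeholder):

{
    "input": {
        "type": "SCORES",
        "task": "YOUR_TASK_NAME",
        "score": "uncertainty_entropy"
    },
    "strategy": {
        "type": "WEIGHTS"
    }
}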

Predictions

The class probability distribution of predictions can also be used as input. Two cases have to be distinguished:

  • Image Classification: The probability vector of each sample's prediction is used directly.
  • Object Detection: The probability vectors of the class predictions of all objects in an image are summed up (see the sketch below).
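
As a minimal sketch of the object detection case, with made-up probability vectors:

import numpy as np

# hypothetical per-object class probabilities for one image,
# over the classes ["car", "person", "dog"]
object_probs = [
    np.array([0.7, 0.2, 0.1]),  # first object, likely a car
    np.array([0.1, 0.8, 0.1]),  # second object, likely a person
]

# object detection: sum the per-object vectors into one class distribution
class_distribution = np.sum(object_probs, axis=0)  # -> [0.8, 1.0, 0.2]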

This input is specified using the prediction task. Remember the class names, as they are needed in later steps.

If you use your own predictions, the task name and class names are taken from the specification in the prediction schema.json.

Alternatively, set the task to lightly_pretagging to use object detections created by the Lightly Worker itself. Its class names are specified here: Lightly Pretagging.

# using your own predictions
"input": {
    "type": "PREDICTIONS",
    "task": "my_object_detection_task",
    "name": "CLASS_DISTRIBUTION"
}

# using the lightly pretagging model
"input": {
    "type": "PREDICTIONS",
    "task": "lightly_pretagging",
    "name": "CLASS_DISTRIBUTION"
}
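
The CLASS_DISTRIBUTION input is typically paired with a balance strategy, e.g. to steer the selection towards a desired class ratio. A sketch with hypothetical class names and target ratios:

{
    "input": {
        "type": "PREDICTIONS",
        "task": "my_object_detection_task",
        "name": "CLASS_DISTRIBUTION"
    },
    "strategy": {
        "type": "BALANCE",
        "target": {
            "car": 0.1,     # hypothetical target ratios
            "person": 0.9
        }
    }
}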

Alternatively, the number of detected objects in certain categories can be used as input by setting the input name to CATEGORY_COUNT. The following example takes the number of cars and people detected in an image as input.

"input": {
    "type": "PREDICTIONS",
    "task": "my_object_detection_task",
    "name": "CATEGORY_COUNT",
    "categories": ["car", "person"] # optional since worker v2.11 with defaults being all categories
}

Categories of interest should be specified in categories as a list of category names. Category names are case-sensitive and should be defined in the prediction schema.json.

Note that when categories is omitted, it defaults to counting all predictions of all categories specified in your schema.json.

The final input value of a prediction is an integer: the total number of detected objects that belong to the categories specified in categories. For instance, with the input defined above, if a prediction contains two cars, one person, and one dog, the final value of this prediction is three. CATEGORY_COUNT can then be paired with selection strategies such as weights and threshold, as shown below.
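
For example, a sketch that keeps only images containing at least one car or person, pairing CATEGORY_COUNT with a threshold strategy (the threshold value and operation are illustrative):

{
    "input": {
        "type": "PREDICTIONS",
        "task": "my_object_detection_task",
        "name": "CATEGORY_COUNT",
        "categories": ["car", "person"]
    },
    "strategy": {
        "type": "THRESHOLD",
        "threshold": 1,
        "operation": "BIGGER_EQUAL"
    }
}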

Metadata

Metadata is specified by the key field of the input. It can be divided along two dimensions:

  • Custom Metadata vs. Lightly Metadata
  • Numerical vs. Categorical values

Custom Metadata

Custom metadata can be uploaded to the datasource and accessed from there. See Metadata Format for reference. An example configuration:

"input": {
    "type": "METADATA",
    "key": "weather.temperature"
}

As key, use the “path” you specified when creating the metadata in the datasource. The key prefix lightly. is reserved for Lightly metadata and cannot be used for custom metadata.
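
Metadata inputs are typically combined with strategies such as threshold or weights. As a sketch, the following entry would keep only samples recorded above a certain temperature (the key and threshold are illustrative):

{
    "input": {
        "type": "METADATA",
        "key": "weather.temperature"
    },
    "strategy": {
        "type": "THRESHOLD",
        "threshold": 20.0,
        "operation": "BIGGER"
    }
}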

Lightly Metadata

Lightly Metadata is calculated by the Lightly Worker. It is specified by prepending lightly. to the key. We currently support these keys:

  • lightly.sharpness: Sharpness. Calculated as the standard deviation of the values after applying a 3x3 Laplacian kernel to the image (see the sketch after this list).
  • lightly.snr: Signal-to-noise ratio. Computed as the mean of color values divided by their standard deviation.
  • lightly.uniformRowRatio: (New in 2.7.0) Uniform row ratio. A row is considered uniform if its pixel color values differ only marginally. More precisely, we apply a Laplacian 3x3 kernel on a resized grayscale image and consider a row uniform if at least 97% of its pixels are below a threshold. The metadata value is between 0 and 1. Higher values typically indicate undesired artifacts from image decoding.
  • lightly.luminance: (New in 2.7.4) Luminance. Computed from the mean color value as perceived lightness L* in the CIELAB color space. The value ranges from 0 to 100.
  • lightly.width and lightly.height: (New in 2.11.1) Width and height of the image in pixels.
  • lightly.meanRed, lightly.meanGreen and lightly.meanBlue: (New in 2.11.1) Mean values of the respective color channel. The values range from 0 to 1 each.
  • lightly.contrast: (New in 2.11.1) Root Mean Squared (RMS) contrast. The value ranges from 0 to 1.
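
As an illustration of the lightly.sharpness definition above, a similar value can be computed with OpenCV. This is a sketch of the described computation, not necessarily Lightly's exact implementation (e.g. resizing and color handling may differ):

import cv2

image = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)
# apply a 3x3 Laplacian kernel and take the standard deviation
# of the filtered values as the sharpness estimate
sharpness = cv2.Laplacian(image, cv2.CV_64F, ksize=3).std()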

An example configuration:

"input": {
    "type": "METADATA",
    "key": "lightly.sharpness"
}

📘

Types of Metadata

Not all metadata types can be used in all selection strategies. Lightly differentiates between numerical and categorical metadata.

Numerical metadata refers to numbers (int, float), such as lightly.sharpness or weather.temperature.

Categorical metadata refers to discrete categories, for example: video.location_id or weather.description. It can be either an integer or a string.

Categorical boolean metadata cannot be used for selection at the moment.

📘

Crop Metadata

Metadata of object crops is not available for selection and cannot be specified as custom metadata either. Only metadata of full images or video frames can be used for selection.

Random

This selection input generates a random value for each sample, optionally controlled by a seed for reproducibility. See here how you can combine it with weights to perform a random selection; a sketch also follows the snippet below.

"input": {
	"type": "RANDOM", 
	"random_seed": 42	# Optional
}
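
A sketch of such a random selection, pairing the RANDOM input with a weights strategy:

{
    "input": {
        "type": "RANDOM",
        "random_seed": 42  # optional
    },
    "strategy": {
        "type": "WEIGHTS"
    }
}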