lightly_edge_sdk.lightly_edge_sdk#
Module Contents#
Classes#
LightlyEdge – Provides access to the main functionalities of the LightlyEdge SDK.
SelectInfo – Selection information about a processed frame.
DiversitySelectInfo – The result of a diversity strategy.
AdaptiveDiversitySelectInfo – The result of an adaptive diversity strategy.
SimilaritySelectInfo – The result of a similarity strategy.
DetectionSelectInfo – The result of a detection strategy.
ClassificationSelectInfo – The result of a classification strategy.
MetadataSelectInfo – The result of a metadata strategy.
InferenceDeviceType – Enumeration used to select the inference device.
LightlyEdgeDetectorConfig – Configuration options for the Object detector.
LightlyEdgeConfig – Configuration options for the LightlyEdge SDK.
ObjectDetection – Object detection result.
SelectionCondition – Defines the metadata-filtering conditions you can register.
- class lightly_edge_sdk.lightly_edge_sdk.LightlyEdge(path: str, config: LightlyEdgeConfig | None = None)#
Provides access to the main functionalities of the LightlyEdge SDK.
Offers methods to embed images and texts and make decisions about which frames to select.
- property num_diversity_strategies: int#
The number of registered diversity strategies.
- property num_adaptive_diversity_strategies: int#
The number of registered adaptive diversity strategies.
- property num_similarity_strategies: int#
The number of registered similarity strategies.
- property num_detection_strategies: int#
The number of registered detection strategies.
- property num_classification_strategies: int#
The number of registered classification strategies.
- property num_metadata_strategies: int#
The number of registered metadata strategies.
- embed_texts(texts: List[str]) List[List[float]]#
Embeds a list of text strings.
Processes a list of text strings, generating an embedding for each text. The embeddings are returned as a list of lists, where each inner list represents the embedding of a single text.
- Parameters:
texts – A list of text strings to embed.
- Returns:
A list of lists of floats, where each inner list represents the embedding of a single text.
- Raises:
LightlyEdgeError – If an error occurs during the embedding process.
Example
embeddings = lightly_edge.embed_texts(["cat", "dog"])
- embed_frame(frame: PIL.Image.Image) List[float]#
Embeds a PIL image.
- Parameters:
frame – A PIL image to embed.
- Returns:
An embedding (list of floats). The dimension of the embedding is determined by the model.
- Raises:
LightlyEdgeError – If an error occurs during the embedding process.
Example
from PIL import Image

with Image.open("cat.jpg") as frame:
    embedding = lightly_edge.embed_frame(frame=frame)
- embed_frame_rgb_bytes(rgb_bytes: bytes, width: int, height: int) List[float]#
Embeds an RGB image from a byte array.
- Parameters:
rgb_bytes – A byte array representing the image data. The data must be in row-major RGB format, with each pixel consisting of three 8-bit integer values (0-255). The length of the byte array must equal width * height * 3.
width – The width of the image.
height – The height of the image.
- Returns:
An embedding (list of floats). The dimension of the embedding is determined by the model.
- Raises:
LightlyEdgeError – If an error occurs during the embedding process.
Example
rgb_bytes = b"\xff" * 12  # 2x2 all-white image: width * height * 3 bytes
embedding = lightly_edge.embed_frame_rgb_bytes(rgb_bytes=rgb_bytes, width=2, height=2)
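The width * height * 3 length requirement can be verified before calling the SDK. The following standalone sketch (plain Python, no SDK needed) packs pixels into the row-major RGB layout described above:

```python
# Build a row-major RGB byte buffer for a 2x2 all-white image and check
# the length invariant that embed_frame_rgb_bytes expects.
width, height = 2, 2
pixels = [(255, 255, 255)] * (width * height)  # pixels in row-major order
rgb_bytes = bytes(channel for pixel in pixels for channel in pixel)

assert len(rgb_bytes) == width * height * 3  # 12 bytes for a 2x2 image
```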
- detect(frame: PIL.Image.Image) List[ObjectDetection]#
Detects objects in a PIL image.
Detection is disabled by default. It has to be enabled by setting the detector_config field of type LightlyEdgeDetectorConfig on the LightlyEdgeConfig object passed to the LightlyEdge constructor. If detection is disabled, this method returns an empty list.
If classifiers are enabled in the LightlyEdgeDetectorConfig, detections are classified for every detection class for which a classifier exists. To bound computation, the number of classifications per detection class is limited by the max_classifications field of the LightlyEdgeDetectorConfig. The most confident detections are classified first.
- Parameters:
frame – A PIL image to detect objects in.
- Returns:
A list of object detection results as List[ObjectDetection].
- Raises:
LightlyEdgeError – If an error occurs during object detection.
Example
from PIL import Image

from lightly_edge_sdk import LightlyEdge, LightlyEdgeConfig, LightlyEdgeDetectorConfig

# Initialize LightlyEdge with object detector enabled.
config = LightlyEdgeConfig.default()
config.detector_config = LightlyEdgeDetectorConfig(
    object_detector_enable=True,
    classifiers_enable=True,
    max_classifications=5,
)
lightly_edge = LightlyEdge(path="lightly_model.tar", config=config)

# Detect objects in the frame.
with Image.open("cat.jpg") as frame:
    detections = lightly_edge.detect(frame=frame)

# Print the detected objects.
class_labels = lightly_edge.detection_class_labels()
for det in detections:
    # Prints e.g. "cat 0.9 10 20 100 200"
    print(class_labels[det.class_id], det.confidence, det.x, det.y, det.w, det.h)
- detect_rgb_bytes(rgb_bytes: bytes, width: int, height: int) List[ObjectDetection]#
Detects objects in an RGB image.
Detection is disabled by default. It has to be enabled by setting the detector_config field of type LightlyEdgeDetectorConfig on the LightlyEdgeConfig object passed to the LightlyEdge constructor. If detection is disabled, this method returns an empty list.
If classifiers are enabled in the LightlyEdgeDetectorConfig, detections are classified for every detection class for which a classifier exists. To bound computation, the number of classifications per detection class is limited by the max_classifications field of the LightlyEdgeDetectorConfig. The most confident detections are classified first.
- Parameters:
rgb_bytes – A byte array representing the image data. The data must be in row-major RGB format, with each pixel consisting of three 8-bit integer values (0-255). The length of the byte array must equal width * height * 3.
width – The width of the image.
height – The height of the image.
- Returns:
A list of object detection results as List[ObjectDetection].
- Raises:
LightlyEdgeError – If an error occurs during object detection.
Example
from lightly_edge_sdk import LightlyEdge, LightlyEdgeConfig, LightlyEdgeDetectorConfig

# Initialize LightlyEdge with object detector enabled.
config = LightlyEdgeConfig.default()
config.detector_config = LightlyEdgeDetectorConfig(
    object_detector_enable=True,
    classifiers_enable=True,
    max_classifications=5,
)
lightly_edge = LightlyEdge(path="lightly_model.tar", config=config)

# Detect objects in the frame.
rgb_bytes = b"\xff" * 12  # 2x2 all-white image: width * height * 3 bytes
detections = lightly_edge.detect_rgb_bytes(rgb_bytes=rgb_bytes, width=2, height=2)

# Print the detected objects.
class_labels = lightly_edge.detection_class_labels()
for det in detections:
    # Prints e.g. "cat 0.9 10 20 100 200"
    print(class_labels[det.class_id], det.confidence, det.x, det.y, det.w, det.h)
- detection_class_labels() List[str]#
Returns the class labels of the object detector.
- Returns:
A list of class labels as strings. The zero-based index of the class label in the list corresponds to the class ID in the object detection results.
Example
class_labels = lightly_edge.detection_class_labels()
- detection_subclass_labels(class_id: int) List[str] | None#
Returns subclass labels for a given detection class.
Subclass labels are generated by a classifier specific to the class.
- Parameters:
class_id – The class ID of the object detector.
- Returns:
A list of subclass labels as strings. The zero-based index of the subclass label in the list corresponds to the subclass ID in the object detection results. Returns None if a classifier is not available for the given class.
Example
subclass_labels = lightly_edge.detection_subclass_labels(class_id=5)
- insert_into_embedding_database(embedding: List[float]) None#
Inserts an embedding into the embedding database.
The embedding database is used by the diversity strategy. Once an embedding is inserted into the database, the strategy will no longer select frames with embeddings similar to it. You should call insert_into_embedding_database for frames marked by should_select with a diversity strategy.
- Parameters:
embedding – Embedding of a frame. Can be obtained using embed_frame.
- Raises:
LightlyEdgeError – If the database is full or the embedding has a different dimension than other embeddings in the database.
- clear_embedding_database() None#
Clears the embedding database.
Removes all embeddings from the database.
- register_diversity_strategy(min_distance: float) None#
Registers a diversity strategy.
A diversity strategy selects frames that are different from previously selected frames. The min_distance parameter must be between 0 and 1 and controls how different the new frame must be from the previously selected frames: only frames with a distance greater than min_distance will be selected. To remember selected frames across multiple calls, the user must fill the database with embeddings by calling insert_into_embedding_database.
- Parameters:
min_distance – Only frame embeddings with a distance greater than min_distance to all embeddings in the database will be selected. Must be between 0 and 1. A higher value will result in fewer frames being selected.
- Raises:
LightlyEdgeError – If min_distance is not between 0 and 1.
Example
# Register a diversity strategy.
lightly_edge.register_diversity_strategy(min_distance=0.5)

# Embed a frame.
embedding = lightly_edge.embed_frame(frame=frame)

# Check if the frame should be selected.
select_info = lightly_edge.should_select(embedding)

# Add to the embedding database if the image is selected.
if select_info.diversity[0].should_select:
    lightly_edge.insert_into_embedding_database(embedding)
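To make the selection rule concrete, here is a standalone sketch of the greater-than-min_distance check. It assumes cosine distance between embeddings and a plain list as the database; both are illustrative assumptions, since the SDK's actual distance metric and database implementation are internal:

```python
import math

def cosine_distance(a, b):
    """Cosine distance between two embeddings (0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# Hypothetical in-memory stand-in for the SDK's embedding database.
database = []

def should_select_diverse(embedding, min_distance=0.5):
    # Select only if the embedding is farther than min_distance from
    # every embedding already in the database.
    return all(cosine_distance(embedding, e) > min_distance for e in database)

first = [1.0, 0.0]
if should_select_diverse(first):
    database.append(first)  # mimics insert_into_embedding_database

print(should_select_diverse([0.9, 0.1]))  # near-duplicate of first -> False
print(should_select_diverse([0.0, 1.0]))  # orthogonal, distance 1.0 -> True
```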
- register_adaptive_diversity_strategy(target_ratio: float, buffer_max_length: int | None = None) None#
Registers an adaptive diversity strategy.
An adaptive diversity strategy selects frames that are different from previously selected frames given a target_ratio. With every new frame, the strategy calculates the distance to the closest frame in the embedding database. It also keeps track of the past buffer_max_length observed distances.
Intuitively, the strategy selects a frame if its distance is in the top target_ratio percentile of the observed distances.
More precisely, the strategy computes an “adjusted target ratio” which is larger than the target_ratio if the overall selection ratio is too low, and vice versa. The “adjusted target ratio” is then used to compute the percentile of the observed distances. This ensures the strategy compensates for long-term under- or oversampling.
To initialize the algorithm, the first image is always selected, and the second image is never selected. Convergence to the target ratio is expected after a few thousand images.
- Parameters:
target_ratio – The target ratio of selected images. Must be between 0 and 1.
buffer_max_length – The maximum number of past distances to store. Must be at least 2. If set to None, a default value of 3000 is used for the buffer length. Lower values result in faster adaptation to changes in the selection ratio, while larger values select samples diverse over a longer time span. Recommended values are in the thousands.
- Raises:
OverflowError – If buffer_max_length is negative.
LightlyEdgeError – If target_ratio is not between 0 and 1 or buffer_max_length is less than 2.
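The percentile rule described above can be sketched in plain Python. This is an illustrative simplification, not the SDK algorithm: the buffer and the first/second-frame rules follow the description, but the “adjusted target ratio” feedback loop is deliberately omitted:

```python
from collections import deque

# Simplified percentile rule for an adaptive diversity strategy.
target_ratio = 0.4
distances = deque(maxlen=5)  # stands in for buffer_max_length

def should_select_adaptive(distance):
    selected = (
        len(distances) == 0  # the first frame is always selected
        or (
            len(distances) >= 2  # the second frame is never selected
            # Select if the distance is in the top target_ratio share
            # of the buffered distances.
            and sum(d < distance for d in distances) / len(distances)
            >= 1.0 - target_ratio
        )
    )
    distances.append(distance)
    return selected

results = [should_select_adaptive(d) for d in [0.1, 0.9, 0.5, 0.95]]
print(results)  # [True, False, False, True]
```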
- register_similarity_strategy(query_embedding: List[float], max_distance: float) None#
Registers a similarity strategy.
A similarity strategy selects frames that are similar to a query embedding. The max_distance parameter must be between 0 and 1 and controls how similar the embeddings must be for a frame to be selected: only frames with a distance less than max_distance will be selected.
- Parameters:
query_embedding – Query embedding. Can be obtained using embed_texts.
max_distance – Only frame embeddings with a distance less than max_distance to the query embedding will be selected. Must be between 0 and 1. A higher value will result in more frames being selected.
- Raises:
LightlyEdgeError – If max_distance is not between 0 and 1.
Example
# Embed query texts.
query_embeddings = lightly_edge.embed_texts(["cat", "dog"])
lightly_edge.register_similarity_strategy(query_embeddings[0], 0.3)
lightly_edge.register_similarity_strategy(query_embeddings[1], 0.4)

# Embed a frame.
embedding = lightly_edge.embed_frame(frame=frame)

# Check if the frame should be selected.
select_info = lightly_edge.should_select(embedding)
is_cat = select_info.similarity[0].should_select
is_dog = select_info.similarity[1].should_select
- register_detection_strategy(class_id: int, threshold: float, subclass_id: int | None = None) None#
Registers a detection strategy.
A detection strategy selects frames which contain an object detection of a given class. The class_id must match the ID in ObjectDetection returned by the detect function, and the detection confidence value must be larger than threshold.
If subclass_id is specified, the detection subclass ID must match as well.
- Parameters:
class_id – The class ID of the object detector.
threshold – The confidence threshold for the detection. Images with detections with higher confidence will be selected.
subclass_id – The subclass ID of the object class. If set to None, the subclass ID is not considered.
- Raises:
LightlyEdgeError – If threshold is not between 0 and 1.
OverflowError – If class_id or subclass_id is negative.
Example
# Register detection strategies.
lightly_edge.register_detection_strategy(class_id=0, threshold=0.5)
lightly_edge.register_detection_strategy(class_id=1, threshold=0.6, subclass_id=0)

# Embed and run detections on a frame.
embedding = lightly_edge.embed_frame(frame=frame)
detections = lightly_edge.detect(frame=frame)

# Check if the frame should be selected.
select_info = lightly_edge.should_select(embedding, detections)
has_class_0 = select_info.detection[0].should_select
has_class_1_subclass_0 = select_info.detection[1].should_select
- register_classification_strategy(classifier_path: str, class_id: int | None = None) None#
Registers a classification strategy.
A classification strategy selects frames which are classified as the class with ID class_id by the provided classifier loaded from disk.
- Parameters:
classifier_path – The path to the classifier pickle file.
class_id – Optional ID of the class used for selection. Defaults to 0 (first class).
- Raises:
LightlyEdgeError – If the classifier cannot be loaded or if the class ID is not available.
Example
# Register classification strategies.
lightly_edge.register_classification_strategy(classifier_path="path/to/file.pkl")
lightly_edge.register_classification_strategy(classifier_path="path/to/file.pkl", class_id=1)

# Embed a frame.
embedding = lightly_edge.embed_frame(frame=frame)

# Check if the frame should be selected.
select_info = lightly_edge.should_select(embedding)
is_class_0 = select_info.classification[0].should_select
is_class_1 = select_info.classification[1].should_select
- register_metadata_strategy(metadata_source_id: uuid.UUID, selection_url: str, selection_condition: SelectionCondition) None#
Registers a metadata strategy.
A metadata strategy selects frames based on metadata values associated with the frame. The strategy is configured by specifying the metadata source ID, selection URL and selection condition (with a condition value).
Metadata is ingested via the process_metadata method as a JSON string and stored in the metadata registry. If you call process_metadata multiple times before it’s used, only the last payload is considered.
Example metadata payload:
{
    "timestamp": 12345,
    "image_size": {
        "width": 640,
        "height": 480
    },
    "tags": ["sunny", "mountains"]
}
- Selection URL examples:
Simple member: “timestamp” -> 12345
Nested member: “image_size.height” -> 480
Array element: “tags.0” -> “sunny”
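The three selection-URL forms above can be resolved against the example payload with a few lines of plain Python. The helper below is purely illustrative and not part of the SDK API:

```python
import json

def resolve_selection_url(metadata_json, selection_url):
    """Walk a dot-separated selection URL through parsed JSON.

    Illustrative helper only; the SDK resolves URLs internally.
    """
    node = json.loads(metadata_json)
    for part in selection_url.split("."):
        if isinstance(node, list):
            node = node[int(part)]  # e.g. "tags.0" -> array index 0
        else:
            node = node[part]       # e.g. "image_size.height" -> dict member
    return node

payload = (
    '{"timestamp": 12345,'
    ' "image_size": {"width": 640, "height": 480},'
    ' "tags": ["sunny", "mountains"]}'
)
print(resolve_selection_url(payload, "timestamp"))          # 12345
print(resolve_selection_url(payload, "image_size.height"))  # 480
print(resolve_selection_url(payload, "tags.0"))             # sunny
```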
- Parameters:
metadata_source_id – The UUID of the metadata source.
selection_url – URL to select the metadata field.
selection_condition – The condition to apply for selection (see SelectionCondition).
LightlyEdgeError is raised when the provided metadata source ID does not exist, the metadata source has no registered data, the provided selection URL does not exist, or the JSON metadata has an unexpected data type for the requested URL. The error will be raised when process_metadata or should_select is called.
Example
# Register a metadata source.
metadata_source_id = lightly_edge.register_metadata_source("my_source")

# Register a new metadata strategy.
lightly_edge.register_metadata_strategy(
    metadata_source_id,
    selection_url="sample",
    selection_condition=SelectionCondition.IntEqual(123),
)

# Process input JSON.
lightly_edge.process_metadata(metadata_source_id, '{"sample": 123}')

# Embed a frame.
embedding = lightly_edge.embed_frame(frame=frame)

# Check if a frame should be selected.
select_info = lightly_edge.should_select(embedding)
result = select_info.metadata[0].should_select
- should_select(embedding: List[float], detections: List[ObjectDetection] | None = None) SelectInfo#
Checks if a frame should be selected.
Checks if a frame should be selected based on its embedding, detections, and registered strategies. In particular, it iterates over all registered strategies and creates a selection info object for each strategy. Each strategy selection info object contains the selection result (true or false) and metadata about the selection.
- Parameters:
embedding – Embedding of a frame. Can be obtained using embed_frame.
detections – A list of object detections. Can be obtained using detect. If set to None, the frame is considered to have no detections.
- Returns:
A selection info object containing the selection result and metadata for each registered strategy.
- Raises:
LightlyEdgeError – If an error occurs during the selection process.
Example
# Register a diversity strategy.
lightly_edge.register_diversity_strategy(min_distance=0.5)

# Embed a frame and run detections.
embedding = lightly_edge.embed_frame(frame=frame)
detections = lightly_edge.detect(frame=frame)

# Check if the frame should be selected.
select_info = lightly_edge.should_select(embedding, detections)
assert len(select_info.diversity) == 1  # One diversity strategy is registered.
assert len(select_info.similarity) == 0  # No similarity strategy is registered.
assert len(select_info.detection) == 0  # No detection strategy is registered.
is_selected = select_info.diversity[0].should_select
- register_metadata_source(name: str) uuid.UUID#
Registers a new metadata source.
- Parameters:
name – The name of the metadata source.
- Returns:
A UUID identifying the registered metadata source.
Example
source_id = lightly_edge.register_metadata_source("my_source")
- process_metadata(metadata_source_id: uuid.UUID, metadata_json: str) None#
Processes metadata for a registered metadata source.
- Parameters:
metadata_source_id – The UUID of the metadata source as returned by register_metadata_source.
metadata_json – The metadata string (must be valid JSON).
- Raises:
LightlyEdgeError – If the metadata source is not found or the metadata is invalid.
Example
source_id = lightly_edge.register_metadata_source("my_source")
lightly_edge.process_metadata(source_id, '{"foo": 123}')
- exception lightly_edge_sdk.lightly_edge_sdk.LightlyEdgeError(message: str)#
Custom error type raised by LightlyEdge SDK functions.
- __str__() str#
Return str(self).
- class lightly_edge_sdk.lightly_edge_sdk.SelectInfo#
Selection information about a processed frame.
- diversity: List[DiversitySelectInfo]#
A list of diversity strategy selection information.
- adaptive_diversity: List[AdaptiveDiversitySelectInfo]#
A list of adaptive diversity strategy selection information.
- similarity: List[SimilaritySelectInfo]#
A list of similarity strategy selection information.
- detection: List[DetectionSelectInfo]#
A list of detection strategy selection information.
- classification: List[ClassificationSelectInfo]#
A list of classification strategy selection information.
- metadata: List[MetadataSelectInfo]#
A list of metadata strategy selection information.
- class lightly_edge_sdk.lightly_edge_sdk.DiversitySelectInfo#
The result of a diversity strategy.
- min_distance: float#
The minimum distance to the embeddings in the database for the frame to be selected.
- should_select: bool#
A boolean indicating whether the frame should be selected.
- class lightly_edge_sdk.lightly_edge_sdk.AdaptiveDiversitySelectInfo#
The result of an adaptive diversity strategy.
- min_distance: float#
The minimum distance to the embeddings in the database for the frame to be selected.
- should_select: bool#
A boolean indicating whether the frame should be selected.
- class lightly_edge_sdk.lightly_edge_sdk.SimilaritySelectInfo#
The result of a similarity strategy.
- distance: float#
The distance to the query embedding for the frame to be selected.
- should_select: bool#
A boolean indicating whether the frame should be selected.
- class lightly_edge_sdk.lightly_edge_sdk.DetectionSelectInfo#
The result of a detection strategy.
- class_id: int#
The class ID set by the detection strategy.
- subclass_id: int | None#
The subclass ID set by the detection strategy.
- threshold: float#
The confidence threshold set by the detection strategy.
- should_select: bool#
A boolean indicating whether the frame should be selected.
- class lightly_edge_sdk.lightly_edge_sdk.ClassificationSelectInfo#
The result of a classification strategy.
- should_select: bool#
A boolean indicating whether the frame should be selected.
- class lightly_edge_sdk.lightly_edge_sdk.MetadataSelectInfo#
The result of a metadata strategy.
- should_select: bool#
A boolean indicating whether the frame should be selected.
- class lightly_edge_sdk.lightly_edge_sdk.InferenceDeviceType#
Enumeration used to select the inference device.
- Auto = 0#
Pick the most powerful available inference device. Devices are tried in the order TensorRT, CUDA, and finally CPU.
- CPU = 1#
Use CPU for inference.
- CUDA = 2#
Use an available CUDA device for inference. CUDA is available for NVIDIA GPUs.
- TensorRT = 3#
Use an available TensorRT device for inference. TensorRT is available for NVIDIA GPUs with TensorRT installed.
- class lightly_edge_sdk.lightly_edge_sdk.LightlyEdgeDetectorConfig(object_detector_enable: bool, classifiers_enable: bool, max_classifications: int)#
Configuration options for the Object detector.
- object_detector_enable: bool#
If set to true, the object detector is initialized and the LightlyEdge.detect method can be used.
- classifiers_enable: bool#
If set to true, classifiers are initialized and the LightlyEdge.detect method will also provide classifications where classifiers are available.
- max_classifications: int#
Sets the maximum number of classifications done per frame per detection class.
- static default() LightlyEdgeDetectorConfig#
Returns the default configuration with object_detector_enable=False, classifiers_enable=False and max_classifications=5.
- class lightly_edge_sdk.lightly_edge_sdk.LightlyEdgeConfig(detector_config: LightlyEdgeDetectorConfig, inference_device_type: InferenceDeviceType, tensor_rt_cache_path: str | None = None)#
Configuration options for the LightlyEdge SDK.
- detector_config: LightlyEdgeDetectorConfig#
Object detector options.
- inference_device_type: InferenceDeviceType#
Sets the device used for the inference.
- tensor_rt_cache_path: str | None#
Sets the cache path for Tensor RT.
- static default() LightlyEdgeConfig#
Returns the default configuration with detector_config=LightlyEdgeDetectorConfig.default(), inference_device_type=Auto and tensor_rt_cache_path=None.
- class lightly_edge_sdk.lightly_edge_sdk.ObjectDetection(class_id: int, subclass_id: int | None, confidence: float, x: int, y: int, w: int, h: int)#
Object detection result.
The bounding box is guaranteed to fit within the image. More precisely:
0 <= x <= x + w <= image_width
0 <= y <= y + h <= image_height
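The invariant can be checked on the consumer side with a one-line predicate (illustrative helper, not part of the SDK):

```python
def bbox_within_image(x, y, w, h, image_width, image_height):
    # Mirrors the bounding-box invariant stated above.
    return 0 <= x <= x + w <= image_width and 0 <= y <= y + h <= image_height

print(bbox_within_image(10, 20, 100, 200, 640, 480))   # True
print(bbox_within_image(600, 20, 100, 200, 640, 480))  # False: x + w > width
```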
- class_id: int#
The class ID of the detected object. Use LightlyEdge.detection_class_labels() to convert it to a human-readable label.
- subclass_id: int | None#
The subclass ID of the object class. Set to None if this object has not been classified. Use LightlyEdge.detection_subclass_labels(class_id) to convert it to a human-readable label.
- confidence: float#
The confidence score between 0.0 and 1.0 of the detected object.
- x: int#
The x-coordinate of the top-left corner of the bounding box in pixel coordinates.
- y: int#
The y-coordinate of the top-left corner of the bounding box in pixel coordinates.
- w: int#
The width of the bounding box in pixel coordinates.
- h: int#
The height of the bounding box in pixel coordinates.
- class lightly_edge_sdk.lightly_edge_sdk.SelectionCondition#
Defines one of the metadata-filtering conditions you can register:
Equality for int, float and string
GreaterThanOrEqual for int and float
LessThanOrEqual for int and float
- static StringEqual(value: str) SelectionCondition#
- static BoolEqual(value: bool) SelectionCondition#
- static IntEqual(value: int) SelectionCondition#
- static FloatEqual(value: float) SelectionCondition#
- static IntGreaterThanOrEqual(value: int) SelectionCondition#
- static FloatGreaterThanOrEqual(value: float) SelectionCondition#
- static IntLessThanOrEqual(value: int) SelectionCondition#
- static FloatLessThanOrEqual(value: float) SelectionCondition#
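Conceptually, each constructor produces a predicate that is evaluated against the value resolved from the metadata when should_select runs. A plain-Python sketch of a few of these predicates (illustrative only, not the SDK implementation):

```python
# Hypothetical predicate factories mirroring the SelectionCondition
# constructors above; the SDK evaluates registered conditions internally.
def int_equal(expected):
    return lambda value: value == expected

def float_greater_than_or_equal(expected):
    return lambda value: value >= expected

def int_less_than_or_equal(expected):
    return lambda value: value <= expected

is_123 = int_equal(123)  # corresponds to SelectionCondition.IntEqual(123)
print(is_123(123))  # True
print(is_123(124))  # False
print(float_greater_than_or_equal(0.5)(0.7))  # True
```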