LightlyEdge comes with a built-in object detection model. The model is tailored to the automotive industry and provides classes such as car, pedestrian, bicycle, truck, etc. Moreover, classification can be performed for the traffic sign class to identify the sign type. Other detection classes might be enabled for classification in the future.
In this guide, we detect objects in the image on the left. The image on the right shows the result. We also set up a detection strategy to identify images with objects of interest.

Project Setup
Code for this tutorial is provided in the examples/06_object_detection directory. Before starting this tutorial, copy the model file to examples/lightly_model.tar and verify that your project layout is as follows:
lightly_edge_sdk_cpp
├── ...
└── examples
    ├── ...
    ├── 06_object_detection
    │   ├── CMakeLists.txt
    │   ├── images
    │   │   └── swiss_town.jpg
    │   ├── main.cpp
    │   └── stb_image.h
    └── lightly_model.tar
Build and Run a Complete Example
The content of the main.cpp file is shown below. We will first run the example and explain it right after.
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

#include "lightly_edge_sdk.h"

#include <iomanip>
#include <iostream>
#include <stdexcept>
#include <string>
#include <vector>

// Load an RGB image from disk into a LightlyEdge Frame.
Frame load_image(std::string image_path) {
    std::cout << "Loading image: " << image_path << std::endl;
    int width, height, channels;
    unsigned char *data = stbi_load(image_path.c_str(), &width, &height, &channels, 0);
    if (data == nullptr) {
        throw std::runtime_error("Failed to load image.");
    }
    return Frame(width, height, data);
}

int main() {
    std::cout << "Initializing LightlyEdge..." << std::endl << std::endl;

    // Enable the object detector and the classifiers before loading the model.
    // NOTE: the construction of the default LightlyEdgeConfig is elided in this excerpt.
    config.detector_config.object_detector_enable = true;
    config.detector_config.classifiers_enable = true;
    LightlyEdge lightly_edge = LightlyEdge::new_from_tar("../lightly_model.tar", config);

    // Load the example image and run object detection.
    Frame frame = load_image("images/swiss_town.jpg");
    std::vector<ObjectDetection> detections = lightly_edge.detect(frame);

    // Fetch human-readable labels for the detection classes and the traffic sign subclasses.
    const int traffic_sign_class_id = 5;
    std::vector<std::string> class_labels = lightly_edge.detection_class_labels();
    std::vector<std::string> traffic_sign_labels =
        lightly_edge.detection_subclass_labels(traffic_sign_class_id);

    // Print every detection; traffic signs additionally carry a subclass label.
    std::cout << "Detected " << detections.size() << " objects:" << std::endl;
    for (auto &det : detections) {
        std::cout << std::fixed << std::setprecision(2)
                  << "\"" << class_labels[det.class_id] << "\""
                  << ", confidence=" << det.confidence
                  << ", x=" << det.x << ", y=" << det.y << ", w=" << det.w << ", h=" << det.h;
        if (det.class_id == traffic_sign_class_id && det.subclass_id >= 0) {
            std::cout << ", subclass: \"" << traffic_sign_labels[det.subclass_id] << "\"";
        }
        std::cout << std::endl;
    }

    // NOTE: registration of the detection strategy is elided in this excerpt;
    // see the Detection Strategy section below.

    // Embed the frame and check whether it matches the registered strategy.
    std::vector<float> image_embedding = lightly_edge.embed_frame(frame);
    SelectInfo select_info = lightly_edge.should_select(image_embedding, detections);
    std::cout << "Should select: " << select_info.detection[0].should_select << std::endl;

    std::cout << "Program successfully finished." << std::endl;
    return 0;
}
Build and run:
# Enter the project folder.
cd 06_object_detection
# Configure CMake. This will create a `build` subfolder.
cmake -B build
# Build using configuration from the `build` subfolder.
cmake --build build
# Run (Linux variant)
./build/main
# Or run (Windows variant)
.\build\main.exe
The output should be similar to the following; the exact numbers might differ slightly depending on your machine architecture:
Initializing LightlyEdge...
Loading image: images/swiss_town.jpg
Detected 14 objects:
"car", confidence=0.96, x=464, y=348, w=201, h=176
"car", confidence=0.95, x=0, y=352, w=262, h=188
"car", confidence=0.95, x=368, y=371, w=79, h=62
"car", confidence=0.94, x=219, y=358, w=150, h=118
"traffic light", confidence=0.88, x=493, y=141, w=31, h=74
"car", confidence=0.83, x=442, y=371, w=52, h=44
"traffic sign", confidence=0.82, x=784, y=203, w=44, h=70, subclass: "parking"
"traffic light", confidence=0.76, x=622, y=139, w=27, h=65
"traffic light", confidence=0.72, x=780, y=273, w=20, h=47
"person", confidence=0.70, x=752, y=370, w=23, h=63
"traffic light", confidence=0.64, x=798, y=273, w=21, h=46
"person", confidence=0.63, x=742, y=370, w=19, h=67
"traffic light", confidence=0.61, x=652, y=318, w=11, h=26
"traffic sign", confidence=0.59, x=799, y=328, w=29, h=32, subclass: "no_left_turn"
Should select: 1
Program successfully finished.
We see that 14 objects were successfully detected.
- Note: In this example we use the model version lightly_model_14.tar. You might need to adjust the thresholds in this tutorial if your model version differs.
Object Detection
By default, the object detection and object classification models are not loaded. To enable them, we pass a lightly_edge_rs::LightlyEdgeConfig when creating a LightlyEdge instance. We enable object detection and classification as follows:
config.detector_config.object_detector_enable = true;
config.detector_config.classifiers_enable = true;
LightlyEdge lightly_edge = LightlyEdge::new_from_tar("../lightly_model.tar", config);
Detection is performed with lightly_edge_sdk::LightlyEdge::detect. Moreover, if classifiers_enable was set to true, classification is performed for each class for which a classifier exists (currently only traffic_sign). To bound the compute time, only the 5 most confident detections per class are classified. This value can be customised by setting config.detector_config.max_classifications.
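For instance, to classify at most three detections per class, one could set the following before loading the model (the value 3 is illustrative):
config.detector_config.max_classifications = 3;  // illustrative value; set before LightlyEdge::new_from_tar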
Frame frame = load_image("images/swiss_town.jpg");
std::vector<ObjectDetection> detections = lightly_edge.detect(frame);
Next we print the detections. First we fetch the list of class labels with lightly_edge_sdk::LightlyEdge::detection_class_labels and lightly_edge_sdk::LightlyEdge::detection_subclass_labels. The models return zero-based class IDs; these lists provide the mapping from an ID to a human-readable label.
const int traffic_sign_class_id = 5;
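A minimal sketch of the two label-fetching calls, using the variable names expected by the printing loop in main.cpp:
std::vector<std::string> class_labels = lightly_edge.detection_class_labels();
std::vector<std::string> traffic_sign_labels = lightly_edge.detection_subclass_labels(traffic_sign_class_id);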
The result of detection is a list of lightly_edge_rs::ObjectDetection structures. The printing code showcases its members:
- class_id is the ID of the detected class.
- confidence is a float value between 0 and 1. All detections have a confidence of at least 0.5.
- The bounding box is given in pixel coordinates: x and y identify the top-left corner, w its width, and h its height.
- subclass_id is the subclass ID determined by a classifier. If classification was not performed for this detection, the value is set to -1.
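As a small illustration of these fields, here is a sketch that keeps only confident pedestrian detections (the 0.7 threshold is illustrative):
// Sketch: collect detections labelled "person" with confidence above 0.7.
std::vector<ObjectDetection> pedestrians;
for (const auto &det : detections) {
    if (class_labels[det.class_id] == "person" && det.confidence > 0.7f) {
        pedestrians.push_back(det);
    }
}
std::cout << "Confident pedestrian detections: " << pedestrians.size() << std::endl;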
When overlaid on the original image, the detections look like this:

Detection Strategy
Additionally, a detection strategy can be registered with LightlyEdge. The arguments of the lightly_edge_sdk::LightlyEdge::register_detection_strategy function are a class ID, a subclass ID, and a confidence threshold. A frame is considered to match the strategy if it contains a detection whose class ID and subclass ID match and whose confidence is higher than the threshold. When registering the strategy, the value -1 can be used for the subclass ID to ignore it.
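For illustration, a strategy matching confident "parking" traffic signs could be registered roughly as follows; the subclass lookup and the 0.5 threshold are illustrative and not taken from the example code:
// Illustrative: find the subclass ID of the "parking" label among the traffic sign subclasses.
int32_t parking_subclass_id = -1;
for (size_t i = 0; i < traffic_sign_labels.size(); ++i) {
    if (traffic_sign_labels[i] == "parking") {
        parking_subclass_id = static_cast<int32_t>(i);
    }
}
// Arguments: class ID, subclass ID, confidence threshold.
lightly_edge.register_detection_strategy(traffic_sign_class_id, parking_subclass_id, 0.5f);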
Finally, we check whether the frame matches the strategy by calling should_select. Note that the image_embedding parameter is irrelevant in this setup.
std::vector<float> image_embedding = lightly_edge.embed_frame(frame);
SelectInfo select_info = lightly_edge.should_select(image_embedding, detections);
std::cout << "Should select: " << select_info.detection[0].should_select << std::endl;
Because the example image contains a traffic sign of type "parking", the value of select_info.detection[0].should_select will be true.
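In a real pipeline one would typically act on this flag. A short sketch follows; the frame-keeping step is hypothetical and application specific:
// Check every registered strategy; keep the frame if any strategy selected it.
bool keep_frame = false;
for (const auto &info : select_info.detection) {
    keep_frame = keep_frame || info.should_select;
}
if (keep_frame) {
    // Hypothetical: persist or upload the frame here.
}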
Next Steps
Congratulations! You have completed the Getting Started guide. Next you can explore LightlyEdge topics in detail in our User Manual or check out the API reference.