
Video: Semantic Similarity Propagation compared to other methods for video segmentation, showing significantly higher temporal stability and quality of segmentation.
To enable autonomous flight, aerial systems must perceive their surroundings and continuously classify objects in real time. This requires various sensors, including RGB cameras, whose heterogeneous data must be reliably processed and made available to the autonomous flight controller. With AI-based semantic segmentation, the system can divide its environment into meaningful categories, such as roads, buildings, vegetation, or obstacles, and so make informed decisions during tasks like obstacle avoidance, emergency landings, or the inspection of critical infrastructure such as power lines. Under changing lighting conditions, during rapid motion, or with limited training data, however, the quality of this video analysis tends to fluctuate.
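To make the per-frame building block concrete, the following sketch runs an off-the-shelf semantic segmentation model on a single video frame. It is a minimal illustration only: the model choice (torchvision's pretrained DeepLabV3), the frame file name, and the class set are assumptions for the example, not the onboard system described in this article.

```python
# Minimal per-frame semantic segmentation sketch (illustrative only).
import torch
from torchvision.io import read_image
from torchvision.models.segmentation import (
    deeplabv3_resnet50,
    DeepLabV3_ResNet50_Weights,
)

# Generic pretrained model; an aerial system would use a model trained
# on its own sensor data and class set.
weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

frame = read_image("frame_0001.png")     # one RGB video frame, uint8, CxHxW (hypothetical file)
batch = preprocess(frame).unsqueeze(0)   # normalize and add a batch dimension

with torch.no_grad():
    logits = model(batch)["out"]         # shape: (1, num_classes, H, W)

labels = logits.argmax(dim=1)            # per-pixel class index map for this frame
```

Run independently on every frame, such a model gives exactly the fluctuating, frame-by-frame predictions the article describes: each frame is classified in isolation, so labels can flicker between frames.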
A new method called Semantic Similarity Propagation (SSP) now offers a solution: it ensures that the AI’s “understanding” of its surroundings remains consistent across multiple video frames, even when the camera is in motion. The method was developed by researchers at Fraunhofer IVI and the Institut Polytechnique de Paris and presented at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), held June 11–15, 2025, in Nashville, USA.
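The published paper defines SSP precisely; as a rough intuition only, the sketch below shows one simplified way to carry a prediction from one frame to the next by weighting it with per-pixel feature similarity. The function name, tensor shapes, and the linear blending rule are hypothetical simplifications for illustration, not the published algorithm.

```python
# Simplified illustration of similarity-weighted temporal propagation.
# NOT the published SSP method; only a sketch of the underlying idea:
# reuse the previous frame's prediction where the scene looks semantically
# similar, and trust the current frame's prediction where it does not.
import torch
import torch.nn.functional as F

def propagate(prev_logits: torch.Tensor,
              curr_logits: torch.Tensor,
              prev_feats: torch.Tensor,
              curr_feats: torch.Tensor) -> torch.Tensor:
    """Blend predictions of consecutive frames by per-pixel feature similarity.

    prev_logits, curr_logits: (C, H, W) class scores for the two frames.
    prev_feats, curr_feats:   (D, H, W) feature maps from the same backbone.
    """
    # Per-pixel cosine similarity between the two frames' features,
    # clamped to [0, 1] so it can act as a blending weight.
    sim = F.cosine_similarity(prev_feats, curr_feats, dim=0).clamp(min=0.0)
    # High similarity -> keep more of the propagated (previous) prediction;
    # low similarity (motion, new objects) -> rely on the current frame.
    return sim * prev_logits + (1.0 - sim) * curr_logits

# Example with dummy data: 19 classes, 256-dim features, 64x64 frames.
C, D, H, W = 19, 256, 64, 64
fused = propagate(torch.randn(C, H, W), torch.randn(C, H, W),
                  torch.randn(D, H, W), torch.randn(D, H, W))
```

The effect of such a blending step is that stable regions of the scene keep a consistent label from frame to frame, which is the temporal stability the video comparison above highlights.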