EVELOC: Event-based Visual Localization

EVELOC is a PRCI ANR/FWF project led by P. VASSEUR and F. FRAUNDORFER. The team consists of members from Université de Picardie Jules Verne (UPJV), Technische Universität Graz (TU Graz) and Université de Bourgogne (UB), including four principal investigators, two post-doctoral researchers, four Ph.D. researchers and three master's students. The duration of the project is 48 months (2024-2027).

The goal of this project is to develop new image processing and computer vision algorithms for event cameras. Event-based sensors differ from conventional frame-based cameras in that they respond to brightness changes in a scene asynchronously and independently for every pixel, instead of synchronously recording information from a light-sensitive pixel array. Event cameras offer four main advantages over frame cameras: (a) high temporal resolution (on the order of microseconds), (b) low latency, (c) low power consumption, and (d) high dynamic range. These unique properties make event cameras valuable in many computer vision and robotics applications. In particular, in mobile robotics, rapid motions and scenes with variable lighting conditions are very challenging for standard frame-based cameras.
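As a rough illustration (not part of the project itself), the asynchronous, per-pixel output described above is commonly modelled as a stream of (x, y, timestamp, polarity) events. The sketch below, with hypothetical names, shows this representation and one common way to turn a batch of events into an image-like frame by summing polarities per pixel:

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One event: a single pixel reporting a brightness change."""
    x: int          # pixel column
    y: int          # pixel row
    t: float        # timestamp in seconds (microsecond-order resolution)
    polarity: int   # +1 for a brightness increase, -1 for a decrease

def accumulate(events, width, height):
    """Sum event polarities per pixel into a 2D frame.

    This yields an image-like representation that frame-based
    algorithms can process, at the cost of discarding fine timing.
    """
    frame = [[0] * width for _ in range(height)]
    for e in events:
        frame[e.y][e.x] += e.polarity
    return frame

# Example: two brightness increases at pixel (1, 0), one decrease at (2, 1).
events = [
    Event(1, 0, 0.000001, +1),
    Event(1, 0, 0.000002, +1),
    Event(2, 1, 0.000003, -1),
]
frame = accumulate(events, width=4, height=2)
```

Accumulation over a time window is only one of several event representations (others keep the timestamps, e.g., time surfaces); it is shown here because it is the simplest bridge to conventional image processing.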

The output of event cameras, however, differs considerably from the images of standard cameras, so specialised algorithms have to be developed to process it. While several computer vision methods for standard cameras have been redesigned for event cameras, others remain to be developed. In particular, to this day, there are no event-based counterparts of visual localization algorithms.

Visual localization refers to the problem of determining the pose (position and orientation) of a camera from an image captured by that camera. Visual localization plays an important role in mobile robotics. Of particular interest is visual localization for flying robots (drones), since weight restrictions prohibit the use of heavy and cumbersome sensors, while cameras are typically small and lightweight. In a range of application scenarios, drones need to operate in indoor environments where GNSS-based localization is not possible. One such scenario is inspection flights for cultural heritage documentation and preservation, e.g., inside a church. Visual localization would allow a drone to safely and repeatably navigate such an environment, given, for instance, a previously created 3D LiDAR map. With this motivation in mind, we propose to investigate visual localization techniques for event cameras, with an eye towards applications in autonomous drones. The goal of the project is thus to enrich and extend the existing suite of algorithms for event cameras with visual localization methods.
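To make the pose-estimation problem concrete, the minimal pinhole-camera sketch below (an illustration under simplifying assumptions, not the project's method) shows the quantity visual localization typically minimizes: the reprojection error between observed pixels and known 3D map points projected under a candidate pose (R, t). All function names here are hypothetical.

```python
import math

def rot_z(theta):
    """Rotation matrix about the z-axis, as a candidate camera orientation."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def project(point_w, R, t, f=1.0):
    """Project a 3D world point into the image under pose (R, t), focal length f."""
    # Camera coordinates: p_c = R @ p_w + t
    p_c = [sum(R[i][j] * point_w[j] for j in range(3)) + t[i] for i in range(3)]
    # Pinhole projection onto the image plane.
    return (f * p_c[0] / p_c[2], f * p_c[1] / p_c[2])

def reprojection_error(points_w, pixels, R, t):
    """Mean distance between observed pixels and reprojected map points.

    Localization searches for the pose (R, t) that minimizes this value;
    at the true pose it is (up to noise) zero.
    """
    err = 0.0
    for p, uv in zip(points_w, pixels):
        u, v = project(p, R, t)
        err += math.hypot(u - uv[0], v - uv[1])
    return err / len(points_w)
```

In practice this minimization is solved robustly from 2D-3D correspondences (e.g., PnP inside RANSAC); for event cameras, establishing those correspondences from an asynchronous event stream is precisely the open problem the project targets.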

More information can be found on the EVELOC project page.

Project Team
Friedrich Fraundorfer
Univ.-Prof. Dipl.-Ing. Dr.techn.