Time-of-Flight Imaging

The ability to capture 3D data as easily as taking a photograph has the potential to revolutionize the way we perceive and interact with our technical world. In particular, the advent of Time-of-Flight camera technology has inspired a vision of new products: 3D body scanners in our living rooms, mobile phones that are aware of their indoor location, and augmented-reality glasses that make monitor and mouse obsolete.

3D camera modules based on the Time-of-Flight (ToF) sensing principle, as small as a two-Euro coin and inexpensive to produce, would be the ideal backend technology to make all of this possible. In a single shot, these cameras capture the geometry of a scene as roughly 50,000 depth measurements, which together form a depth image. These measurements can be used to measure human body shape and posture, to recognize and interpret hand gestures, and even to create 3D maps of buildings.

The algorithms we developed are based on the mathematical principle of energy minimization, in which one seeks to minimize the energy (i.e., cost) of a problem-specific functional. In this way we created novel formulations of the computer vision problems of scene-flow estimation, image super-resolution, and guided image denoising as global optimization problems, which we solve efficiently with a primal-dual approach. We further developed machine-learning techniques to increase image resolution from a single image and to recognize human head and hand poses, enabling novel applications for end users. --> Read More
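To illustrate the primal-dual machinery behind such energy-minimization formulations, here is a minimal sketch (not the group's actual implementation) of the classic Chambolle-Pock primal-dual scheme applied to total-variation (TV) denoising; all function names and parameter values are illustrative:

```python
import numpy as np

def grad(u):
    # forward differences with Neumann boundary conditions
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    # divergence, the negative adjoint of grad
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def tv_denoise(f, lam=10.0, n_iter=200):
    """Primal-dual iterations for min_u  lam/2 ||u - f||^2 + ||grad u||_1."""
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    tau, sigma = 0.25, 0.5  # step sizes satisfying tau * sigma * L^2 <= 1, L^2 = 8
    for _ in range(n_iter):
        # dual ascent step, then projection onto the unit ball
        gx, gy = grad(u_bar)
        px += sigma * gx; py += sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2))
        px /= norm; py /= norm
        # primal descent step: proximal map of the quadratic data term
        u_old = u
        u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        # over-relaxation
        u_bar = 2 * u - u_old
    return u
```

The same iteration template carries over to the more involved functionals used for scene-flow estimation and guided depth denoising; only the operators and proximal maps change.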


Single image super-resolution is an important task in the field of computer vision and finds many practical applications. Current state-of-the-art methods typically rely on machine learning algorithms to infer a mapping from low- to high-resolution images. These methods use a single fixed blur kernel during training and, consequently, assume the exact same kernel underlying the image formation process for all test images. However, this setting is not realistic for practical applications, because the blur is typically different for each test image. --> Read More
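The underlying image-formation model can be made concrete: a low-resolution observation is commonly modeled as the high-resolution image convolved with a blur kernel and then subsampled. The short NumPy sketch below (illustrative, not taken from the paper) shows that two different blur kernels produce different low-resolution images from the same source, which is why a model trained on one fixed kernel mismatches test images blurred differently:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # normalized 2D Gaussian blur kernel
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def degrade(x, kernel, scale=2):
    # image formation: blur with `kernel`, then subsample by `scale`
    ks = kernel.shape[0]
    pad = ks // 2
    xp = np.pad(x, pad, mode="edge")
    h, w = x.shape
    blurred = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            blurred[i, j] = np.sum(xp[i:i + ks, j:j + ks] * kernel)
    return blurred[::scale, ::scale]
```

Running `degrade` on the same image with a narrow and a wide Gaussian kernel yields visibly different low-resolution inputs, making the case for super-resolution methods conditioned on the kernel rather than assuming a single fixed one.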

Mobile Sensing & Mapping

The automated recognition and geo-referencing of objects in the vicinity of rail and road infrastructure. --> Read More

Robot-Vision Sensors and Systems

Robot-Vision Solutions and Applications --> Read More

Abnormal Event Detection in Repetitive Processes

The goal of this topic is to gather visual information from repetitive robotic processes and analyse it for abnormal appearance, which can indicate defects, obstacles, or malfunctions. --> Read More
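A simple baseline for this kind of anomaly detection, sketched below under the assumption that the frames of each cycle can be aligned phase-by-phase, is to build a per-phase statistical reference from normal cycles and flag frames that deviate strongly from it; all names and threshold values are illustrative:

```python
import numpy as np

def fit_reference(cycles):
    # cycles: array of shape (n_cycles, n_phases, H, W) from known-normal runs
    mean = cycles.mean(axis=0)
    std = cycles.std(axis=0) + 1e-6  # avoid division by zero
    return mean, std

def detect_abnormal(cycle, mean, std, z_thresh=4.0, frac=0.01):
    # flag a phase if more than `frac` of its pixels deviate
    # by more than `z_thresh` standard deviations from the reference
    z = np.abs(cycle - mean) / std
    return (z > z_thresh).mean(axis=(1, 2)) > frac
```

Flagged phases can then be handed to a human operator or a more specialized classifier; the per-pixel z-score map also localizes where in the frame the deviation occurred.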

2016/12/16: New Open Student Position: LIDAR Metrology Tooling

--> Learn More

2016/12/01: New Open Student Position: Robotic Charging of Electric Vehicles

--> Learn More

2016/07/15: Accepted to BMVC 2016

Our paper "A Deep Primal-Dual Network for Guided Depth Super-Resolution" has been accepted for oral presentation at the British Machine Vision Conference 2016 held at the University of York, United Kingdom.

2016/07/11: Accepted to ECCV 2016

Our paper "ATGV-Net: Accurate Depth Superresolution" has been accepted at the European Conference on Computer Vision 2016 in Amsterdam, The Netherlands.

2015/10/07: Accepted to ICCV 2015 Workshop: TASK-CV

Our paper "Anatomical landmark detection in medical applications driven by synthetic data" has been accepted at the IEEE International Conference on Computer Vision 2015 Workshop on Transferring and Adapting Source Knowledge in Computer Vision (TASK-CV).

2015/09/14: Camera calibration code online

The camera calibration toolbox accompanying our paper "Learning Depth Calibration of Time-of-Flight Cameras" is available here.

2015/09/07: Accepted to ICCV 2015

Our papers "Variational Depth Superresolution using Example-Based Edge Representations" and "Conditioned Regression Models for Non-Blind Single Image Super-Resolution" have been accepted at the IEEE International Conference on Computer Vision 2015, December 13-16, Santiago, Chile.

2015/07/03: Accepted to BMVC 2015

Our papers "Depth Restoration via Joint Training of a Global Regression Model and CNNs" and "Learning Depth Calibration of Time-of-Flight Cameras" have been accepted as a poster presentation at the 26th British Machine Vision Conference, September 7-10, Swansea, United Kingdom.