A Vision-Guided Wood Feeding Crane

The objective of BioChipFeeding is to develop a new wood chip feeding system for small-scale heating plants. A core component of the system is a gripper that enables feeding from above the pile of stored fuel. It will be equipped with sensors that screen the fuel for particle size and moisture content, allowing the system to maintain a fairly constant fuel quality by producing appropriate fuel blends. A core task in the screening process is the optical evaluation of fuel parameters such as particle size and ash content. Another vital aspect is the reliable localization of larger patches of fine-grained fuel, overly large objects, and foreign matter in order to maintain high reliability of the feeding process and the heating plant.

Particle Size Evaluation

Illuminating the scene from different angles and overexposing the images creates cast shadows at particle boundaries. Fusing these images allows for straightforward particle segmentation using a watershed approach. Particles spilling into one another along directions normal to the baseline of the two light sources can be mitigated by restricting the segmentation to star-convex shapes. Metric information on particle sizes is obtained either by triangulation using a stereo camera setup or, if a coarser estimate is sufficient, by leveraging the distance measurement of an ultrasonic rangefinder.
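A minimal sketch (Matlab, Image Processing Toolbox assumed) of the shadow-based segmentation idea; the image names and the minima-suppression depth are illustrative assumptions, and the star-convexity constraint is omitted:

  % Fuse two differently lit, overexposed captures and segment particles by watershed.
  % Single-channel (grayscale) captures are assumed.
  imgA = im2double(imread('chips_light_left.png'));     % hypothetical capture, light source A
  imgB = im2double(imread('chips_light_right.png'));    % hypothetical capture, light source B
  fused = min(imgA, imgB);               % cast shadows from either source survive as dark seams
  relief = imcomplement(fused);          % particle boundaries become bright ridges
  relief = imhmin(relief, 0.05);         % suppress shallow minima to limit over-segmentation
  labels = watershed(relief);            % label image: 0 = boundary, 1..N = particle regions
  stats = regionprops(labels, 'Area', 'EquivDiameter');   % per-particle size statistics in pixels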

Ash Content Estimation

Depending on the load of the heating plant, it may be feasible to avoid feeding fuel with high ash content. High ash content is usually associated with a large proportion of bark in the fuel and thus can be tied to the radiometric intensity. A reliable estimate of the scene radiance, independent of changes in ambient light, is obtained by acquiring an HDR sequence with illumination dominated by the LED light sources. Based on the HDR histogram of the scene, the fuel is classified into predefined fuel classes.
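As a rough illustration, a minimal Matlab sketch of histogram-based fuel class assignment; the HDR file, the reference histograms and the distance measure are illustrative assumptions, not the project's actual classifier:

  hdr = double(hdrread('fuel_pile.hdr'));              % hypothetical HDR image of the fuel surface
  v = hdr(:) / max(hdr(:));                            % normalise radiance for a comparable histogram
  h = histcounts(v, linspace(0, 1, 65), 'Normalization', 'probability');
  load('fuel_class_histograms.mat', 'refHists', 'classNames');   % hypothetical reference data, one row per class
  [~, idx] = min(sum((refHists - h).^2, 2));           % nearest reference histogram (squared L2 distance)
  fuelClass = classNames{idx};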

Detecting Sawdust and Overlarge Objects

Sawdust as well as overlarge particles may obstruct the screw feeder, leading to unexpected shutdowns of the heating plant. Both cases are detected by exploiting statistics of the image segmentation obtained for the particle size evaluation. The affected areas in the fuel depot can then be avoided when feeding the plant and removed later, at a time when the gripper would otherwise be idle.
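A minimal sketch of thresholding such segmentation statistics; the label image comes from the segmentation sketch above, and the thresholds are illustrative assumptions:

  stats = regionprops(labels, 'Area');                 % labels: watershed label image from above
  areas = [stats.Area];
  fineFraction = sum(areas < 50) / numel(areas);       % share of tiny fragments (sawdust indicator)
  hasOversize = any(areas > 2e4);                      % any suspiciously large blob
  avoidThisGripPosition = fineFraction > 0.8 || hasOversize;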

Foreign Object Detection

In the various steps from wood chip production to delivery into the fuel depot of the heating plant, unnoticed contamination with foreign particles may occur. Again, these particles pose a threat of obstructing the feeder screw or the grate in the furnace, leading to unexpected plant shutdowns. Employing an abnormal event detection framework based on sparse dictionary learning, foreign matter on the surface of the fuel pile is detected and can then be removed automatically from the fuel depot.

Contact: Ludwig Mohr, Matthias Rüther
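A minimal sketch of patch-based anomaly scoring with a precomputed dictionary; the dictionary file, patch size, pursuit depth and residual threshold are illustrative assumptions, and the greedy pursuit merely stands in for the sparse coding used in the project:

  load('chip_dictionary.mat', 'D');                    % hypothetical dictionary: columns = normal chip patch atoms
  img = im2double(rgb2gray(imread('pile_surface.png')));   % hypothetical image of the pile surface
  patches = im2col(img, [16 16], 'distinct');          % 16x16 patches as column vectors
  err = zeros(1, size(patches, 2));
  for k = 1:size(patches, 2)
      y = patches(:, k);  r = y;  idx = [];
      for t = 1:5                                      % greedy pursuit with at most 5 atoms
          [~, j] = max(abs(D' * r));
          idx = [idx, j];
          b = D(:, idx) \ y;                           % least-squares fit on the selected atoms
          r = y - D(:, idx) * b;
      end
      err(k) = norm(r);                                % residual: how well "normal" fuel explains the patch
  end
  isForeign = err > 0.5;                               % large residual = candidate foreign matter patch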


Automatic Camera Calibration - a Matlab Toolbox

This page accompanies our paper [1] on automatic calibration of depth cameras. The presented calibration target and automatic feature extraction are not limited to depth cameras but can also be used for conventional cameras. The provided code is designed to be used as an add-on to the widely used camera calibration toolbox of Jean-Yves Bouguet.

Calibration Target

The calibration target consists of a central marker surrounded by circular patterns. Our automatic feature detection starts by searching for the central marker and then iteratively refines the circular markers around it (depicted as a black dashed line). Compared to standard checkerboard targets, our method has the following advantages:
  • Target does not have to be visible as a whole
  • Detection of groups of circular patterns is more robust to perspective distortions than line crossings
  • Feature detection is more accurate for low-resolution cameras (like ToF or Event Cameras)

Example Detection Result

The following images show the result of the automatic feature detection on two exemplary images from the paper. The calibration target is detected in the gray value image and reprojected to the corresponding depth image from a Microsoft Kinect v2.0:

How to use the code

The provided source code should be used as an add-on to the Bouguet Camera Calibration Toolbox. Installation therefore amounts to:
  • Downloading the calibration toolbox from GitHub [GitLab Link]
  • Running autocalibration.m and selecting the images from testdata/image_xxx.jpg starts the mono calibration of the camera.
  • The calibration target can be created using the make_target.m function. Remember to measure it after printing!
  • To use it with the GUI of the Toolbox, simply start calib_gui_normal_auto.m which asks for the target parameters interactively.
  • Stereo calibration requires the use of calib_stereo_auto.m instead of calib_stereo.m because our method does not detect all grid points in all images!
The following parameters have to be set:
  • parameters.approx_marker_width_pixels: Approximate minimum size of the center marker in pixels
  • parameters.grid_width_mm: Grid width (distance between points) in millimeters
  • parameters.checker_aspect_ratio: Aspect ratio (= height/width)
  • parameters.grid_coordinates_h: Horizontal grid dimensions (e.g. -11:11)
  • parameters.grid_coordinates_v: Vertical grid dimensions (e.g. -18:16)
The target can be created using the function template = make_target(grid_width_pixels, grid_width_mm, grid_coordinates_h, grid_coordinates_v), e.g. target = make_target(240, 5, -18:18, -10:10);
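As an illustration, a hypothetical parameter set consistent with the example target above; the concrete values are assumptions, not shipped defaults:

  parameters.approx_marker_width_pixels = 40;     % expected size of the center marker in the images
  parameters.grid_width_mm              = 5;      % distance between points; re-measure after printing!
  parameters.checker_aspect_ratio       = 1;      % height/width
  parameters.grid_coordinates_h         = -18:18; % horizontal grid dimensions
  parameters.grid_coordinates_v         = -10:10; % vertical grid dimensions
  target = make_target(240, 5, -18:18, -10:10);   % matching target template for printing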

How to cite the materials on this website

We grant permission to use the code on this website. If you use the code in your own publication, we request that you cite our paper [1]. If you want to cite this website, please use the URL "http://rvlab.icg.tugraz.at/calibration/".

Software Download

Matlab code for automatic feature detection is now available on GitHub - camera_calibration. This includes the whole calibration toolbox and also the tool to make the calibration target.

Version History

  • v 0.1: Initial Release (2015-07-27)

[1] Learning Depth Calibration of Time-of-Flight Cameras [bib]

PROCAM - Active Illumination and Metrology

Optical sensing on reflective, transparent or untextured surfaces is difficult. Active illumination and computational photography may help in many aspects of industrial computer vision and 3D reconstruction. In the robot vision group we continuously evaluate and apply problem-specific sensing principles to difficult tasks.

Structured Light Sensing

Applying a time-multiplexed projection pattern allows for rapid and robust measurements on complex objects. With state-of-the-art camera and projector hardware, we achieve image acquisition at up to 150 frames per second and 4M 3D points per measurement. A flexible sensor carrier allows adapting to various baselines and measurement ranges from 50x50 mm up to 5000x5000 mm. GPU-based code extraction, matching and triangulation delivers rapid results.
The resolution achievable with this sensing principle allows for reliable measurements on a per-pixel level. Even thin and delicate structures are observed and reconstructed.
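A minimal Matlab sketch of decoding a time-multiplexed Gray-code sequence into projector column indices; file names, bit count and the per-pixel threshold are illustrative assumptions, standing in for the GPU-based code extraction mentioned above:

  nBits = 10;                                   % patterns in the sequence (assumption)
  white = im2double(imread('white.png'));       % fully lit reference frame, single-channel assumed
  black = im2double(imread('black.png'));       % unlit reference frame
  thresh = (white + black) / 2;                 % per-pixel decision threshold
  gray = zeros(size(white), 'uint16');
  for b = 1:nBits
      img = im2double(imread(sprintf('pattern_%02d.png', b)));
      gray = bitor(bitshift(gray, 1), uint16(img > thresh));   % accumulate Gray-code bits, MSB first
  end
  code = gray;  shift = bitshift(gray, -1);     % convert Gray code to binary column index
  while any(shift(:))
      code = bitxor(code, shift);
      shift = bitshift(shift, -1);
  end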

Time-Multiplexed HDR

A problem arising on most specular and white surfaces is over-exposure due to reflection. If a single scene contains both very dark and highly reflective structures, even a camera dynamic range of 12 bit may not capture the scene entirely. One may compensate for this by capturing exposure sequences and merging the images of varying exposure into a high-dynamic-range (HDR) image. This comes at the cost of more images to capture, but is still feasible with a fast camera system.
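A minimal Matlab sketch of such an exposure merge, assuming a linear camera response; file names, exposure times and the hat weighting are illustrative:

  files = {'exp_01.png', 'exp_02.png', 'exp_03.png'};
  exposures = [1/1000, 1/250, 1/60];                 % exposure times in seconds
  num = 0;  den = 0;
  for k = 1:numel(files)
      img = im2double(imread(files{k}));             % linear response assumed
      w = 1 - abs(2 * img - 1);                      % hat weight: trust mid-range pixels most
      num = num + w .* img / exposures(k);
      den = den + w;
  end
  radiance = num ./ max(den, eps);                   % relative scene radiance (HDR image)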

Space-Multiplexed HDR

Instead of gaining dynamic range in the temporal domain, one may do so in the spatial domain. By applying intensity filters with varying absorption rate on a per-pixel level, we capture radiometrically complex scenes with a considerably lower number of images, hence increasing the measurement rate for slightly moving objects like faces.
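To illustrate the principle, a minimal sketch that undoes a known per-pixel attenuation pattern in a single capture; the mask layout and validity thresholds are assumptions, not the actual filter design:

  img = im2double(imread('filtered_capture.png'));   % single-channel capture through the filter array
  pattern = [1.00 0.25; 0.06 0.015];                 % assumed 2x2 transmission pattern
  mask = repmat(pattern, size(img, 1) / 2, size(img, 2) / 2);
  radiance = img ./ mask;                            % undo the per-pixel attenuation
  radiance(img > 0.98 | img < 0.02) = NaN;           % discard saturated/underexposed pixels, to be filled by interpolation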

Data Fusion

A single scan delivers measurements only from a given viewpoint. To create 360-degree measurements, and measurements of large objects like an engine block, we estimate the relative sensor motion through structure-and-motion techniques and finally fuse the individual measurements into a complete object model.
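A minimal sketch of the final fusion step, assuming the relative pose of the second view is already estimated; the point data and pose values are illustrative:

  scanA = rand(1000, 3);                      % Nx3 points, sensor frame of view A
  scanB = rand(1000, 3);                      % Nx3 points, sensor frame of view B
  R = eye(3);  t = [0.5; 0; 0];               % relative pose of view B, e.g. from structure and motion
  scanB_inA = (R * scanB')' + t';             % transform view B into the frame of view A
  fused = [scanA; scanB_inA];                 % combined point cloud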

Robot Vision

By mounting the sensor on a robot arm and calibrating the kinematic chain, we create a measurement system with several meters of measurement range. This allows us to fully automatically scan large objects.
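A minimal sketch of chaining the calibrated transforms so that a camera measurement is expressed in the robot base frame; the numeric transforms are illustrative, not calibration results:

  T_base_tool = [eye(3), [0.4; 0.0; 0.6]; 0 0 0 1];   % tool pose reported by the robot controller
  T_tool_cam  = [eye(3), [0.0; 0.05; 0.1]; 0 0 0 1];  % sensor mounting from hand-eye calibration
  p_cam  = [0.1; 0.0; 0.8; 1];                        % measured 3D point, camera frame (homogeneous)
  p_base = T_base_tool * T_tool_cam * p_cam;          % same point in robot base coordinates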

MedEyeTrack: Fully Automated Eye-Tracking System for Medical Eye Tumor Treatment

The treatment of eye cancer is one of the most complex treatments in the field of radiation therapy. Apart from the high costs of planning and execution, the treatment itself is highly invasive for the patient.

In this broad cooperation with the Medical University of Graz, we develop a revolutionary system for diagnostic imaging and radiation therapy of eye tumors. The therapy works in multiple steps. First, the tumor is precisely localized through magnetic resonance imaging. During this imaging process, the eye position is recorded to define the relative position of the tumor and the eye's viewing direction. Second, the tumor is treated in a radiation therapy system. During this treatment, the misalignment between the actual viewing direction and the recorded viewing direction (and therefore also the relative tumor position) is measured at ten frames per second. If the deviation in position and time exceeds a predefined threshold, the radiation is paused until the correct viewing direction is reached again. Through this system, the highly invasive rigid fixation of the eye is replaced by completely non-invasive tracking and triggering. With this non-invasive eye tracking, multiple treatment sessions per patient become possible for the first time. The camera system itself is designed to work in a magnetic resonance tomograph, a computed tomograph and a radiation therapy system, so it can follow the patient through the whole treatment process. To achieve a rigid fixation of the patient in the system without using any metal materials, we develop a hardware prototype together with M&R Automation GmbH.
Setup of the MedEyeTrack system: the head is rigidly fixed to the system through a head mask, and each eye is observed by its own camera to track movement during the treatment.
Active infrared illumination makes the tracking independent of ambient lighting conditions. The MedEyeTrack software calculates the pupil misalignment and triggers the radiation system in case of a wrong viewing direction.

Eye tracker enables novel treatment methods for eye tumors [bib], Der Grazer 10/2014 (German)

Contact: Matthias Rüther
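A minimal sketch of the gating logic described above; the angular tolerance, the allowed number of out-of-tolerance frames and the simulated input are illustrative assumptions:

  deviation_deg = 3 * abs(randn(1, 50));     % simulated per-frame deviation between actual and recorded viewing direction
  max_dev_deg = 2.0;                         % allowed angular deviation (assumption)
  max_bad_frames = 5;                        % consecutive out-of-tolerance frames before pausing (assumption)
  badCount = 0;  beamOn = true(1, numel(deviation_deg));
  for k = 1:numel(deviation_deg)             % one iteration per tracking frame (10 Hz)
      if deviation_deg(k) > max_dev_deg
          badCount = badCount + 1;
      else
          badCount = 0;                      % correct viewing direction reached again
      end
      beamOn(k) = badCount < max_bad_frames; % pause the radiation while out of tolerance
  end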
News

2016/12/16: New Open Student Position: LIDAR Metrology Tooling


2016/12/01: New Open Student Position: Robotic Charging of Electric Vehicles


2016/07/15: Accepted to BMVC 2016

Our paper "A Deep Primal-Dual Network for Guided Depth Super-Resolution" has been accepted for oral presentation at the British Machine Vision Conference 2016 held at the University of York, United Kingdom.

2016/07/11: Accepted to ECCV 2016

Our paper "ATGV-Net: Accurate Depth Superresolution" has been accepted at the European Conference on Computer Vision 2016 in Amsterdam, The Netherlands.

2015/10/07: Accepted to ICCV 2015 Workshop: TASK-CV

Our paper "Anatomical landmark detection in medical applications driven by synthetic data" has been accepted at the IEEE International Conference on Computer Vision 2015 workshop on transferring and adapting source knowledge in computer vision.

2015/09/14: Camera calibration code online

The camera calibration toolbox accompanying our paper "Learning Depth Calibration of Time-of-Flight Cameras" is available here.

2015/09/07: Accepted to ICCV 2015

Our papers "Variational Depth Superresolution using Example-Based Edge Representations" and "Conditioned Regression Models for Non-Blind Single Image Super-Resolution" have been accepted at the IEEE International Conference on Computer Vision 2015, December 13-16, Santiago, Chile.

2015/07/03: Accepted to BMVC 2015

Our papers "Depth Restoration via Joint Training of a Global Regression Model and CNNs" and "Learning Depth Calibration of Time-of-Flight Cameras" have been accepted as a poster presentation at the 26th British Machine Vision Conference, September 7-10, Swansea, United Kingdom.