The objective of BioChipFeeding is to develop a novel wood chip feeding system for small-scale heating plants. A core component of the system is a gripper which enables feeding from above the pile of stored fuel. It will be equipped with sensors that screen the fuel quality regarding particle size and moisture content, thereby making it possible to maintain a largely constant fuel quality by producing appropriate fuel blends. A core task in the screening process is the optical evaluation of fuel parameters such as particle size and ash content. Another vital aspect is the reliable localization of larger patches of fine-grained fuel, overly large objects, and foreign matter in order to maintain a high reliability of the feeding process and the heating plant.
Illuminating the scene from different angles and overexposing the images creates cast shadows at particle boundaries. Fusing these images allows for straightforward particle segmentation using a watershed approach. Spilling of particles into one another along directions normal to the baseline of two light sources can be mitigated by restricting the segmentation to star-convex shapes. Metric information on particle sizes is obtained either by triangulation using a stereo camera setup or, if a coarser estimate is sufficient, by leveraging the distance measurement of an ultrasonic rangefinder.
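To illustrate the idea, the following minimal MATLAB sketch (using the Image Processing Toolbox) fuses differently illuminated images and applies a watershed transform. File names and parameters are placeholders, and the star-convexity constraint would be a separate post-processing step not shown here.

```matlab
% Sketch: fuse differently illuminated, overexposed images and segment
% particles with a watershed transform. File names and the smoothing /
% minima-suppression parameters are placeholders.
files = {'light_left.png', 'light_right.png', 'light_top.png'};
stack = [];
for i = 1:numel(files)
    img = im2double(imread(files{i}));
    if size(img, 3) == 3, img = rgb2gray(img); end
    stack = cat(3, stack, img);
end

% Cast shadows at particle boundaries are dark in at least one image;
% the pixel-wise minimum therefore accentuates the boundaries.
fused = min(stack, [], 3);

% Invert so boundaries become ridges, suppress shallow minima to avoid
% oversegmentation, then flood.
relief = imcomplement(imgaussfilt(fused, 2));
marked = imhmin(relief, 0.05);    % suppression depth is an assumption
labels = watershed(marked);       % 0 = watershed ridge lines

numParticles = max(labels(:));
```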
Depending on the load of the heating plant, it may be preferable to avoid feeding fuel with a high ash content. A high ash content is usually associated with a large proportion of bark in the fuel and can thus be tied to the radiometric intensity. A reliable estimate of the scene radiance, independent of changes in ambient light, is obtained by acquiring an HDR sequence with illumination dominated by the LED light sources. Based on the HDR histogram of the scene, the fuel is classified into predefined fuel classes.
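A minimal sketch of how such a classification could look in MATLAB, assuming a linear camera response, known exposure times, and precomputed per-class reference histograms (`classHistograms`, one row per fuel class):

```matlab
% Sketch: merge an exposure bracket into a relative radiance map and
% match its histogram against per-class prototypes. Exposure times and
% file names are placeholders.
exposures = [1/1000, 1/250, 1/60];           % seconds
files     = {'e1.png', 'e2.png', 'e3.png'};

num = 0; den = 0;
for i = 1:numel(exposures)
    img = im2double(imread(files{i}));
    if size(img, 3) == 3, img = rgb2gray(img); end
    w   = exp(-4 * (img - 0.5).^2);          % trust mid-range pixels most
    num = num + w .* img / exposures(i);     % per-pixel irradiance estimate
    den = den + w;
end
radiance = num ./ max(den, eps);

% Classify via histogram distance to the stored class prototypes.
edges = linspace(0, max(radiance(:)), 65);
h = histcounts(radiance(:), edges, 'Normalization', 'probability');
dists = sum((classHistograms - h).^2, 2);    % classHistograms: K x 64, assumed given
[~, fuelClass] = min(dists);
```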
Sawdust as well as overlarge particles may obstruct the screw feeder, leading to unexpected shutdowns of the heating plant. Both cases are detected by exploiting statistics of the image segmentation obtained for the particle size evaluation. The respective areas in the fuel depot can then be avoided when feeding the plant and removed later, at a time when the gripper would otherwise be idle.
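Building on the label image from the segmentation above, a minimal sketch of such a statistics-based check could look as follows; the pixel-area thresholds are assumptions:

```matlab
% Sketch: flag depot regions dominated by fines or containing oversize
% particles, based on the watershed label image from above.
stats = regionprops(labels, 'Area', 'Centroid');
areas = [stats.Area];

finesMask    = areas < 50;      % many tiny segments -> sawdust patch
oversizeMask = areas > 20000;   % single huge segment -> overlong particle

if mean(finesMask) > 0.5
    warning('Region dominated by fine-grained material - avoid feeding.');
end
oversizeCentroids = vertcat(stats(oversizeMask).Centroid);
```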
In the various steps from wood chip production until delivery to the fuel depot of the heating plant, unnoticed contamination with foreign particles may occur. Again, such particles pose a threat of obstructing the feeder screw or the grate in the furnace, leading to unexpected plant shutdowns. Employing an abnormal event detection framework based on sparse dictionary learning, foreign matter on the surface of the fuel pile is detected, allowing for its automatic removal from the fuel depot.
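The project-specific dictionary learning is not reproduced here; as an illustration, the following sketch scores an image patch by its sparse reconstruction error against a dictionary `D` (unit-norm columns, e.g. learned offline with K-SVD from patches of normal fuel), using plain orthogonal matching pursuit:

```matlab
% Sketch: sparse reconstruction error of a patch y (column vector)
% against dictionary D via orthogonal matching pursuit with k nonzero
% coefficients. D is assumed to be learned offline from normal fuel.
function err = reconstructionError(D, y, k)
    r = y; support = [];
    for it = 1:k
        [~, idx] = max(abs(D' * r));      % most correlated atom
        support  = [support, idx];        %#ok<AGROW>
        x = D(:, support) \ y;            % least-squares fit on support
        r = y - D(:, support) * x;        % updated residual
    end
    err = norm(r);
end
```

Patches whose residual exceeds a threshold calibrated on uncontaminated fuel would then be flagged as foreign matter.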
Contact: Ludwig Mohr, Matthias Rüther
This page accompanies our paper [1] on automatic calibration of depth cameras. The presented calibration target and automatic feature extraction are not limited to depth cameras but can also be used for conventional cameras. The provided code is designed to be used as an add-on to the widely known camera calibration toolbox of Jean-Yves Bouguet.
The calibration target consists of a central marker and circular patterns:
Our automatic feature detection starts by searching for the central marker and then iteratively refines the circular markers around it (depicted as a black dashed line). Compared to standard checkerboard targets, our method has the following advantages:
The following images show the result of the automatic feature detection on two exemplary images from the paper. The calibration target is detected in the gray value image and reprojected to the corresponding depth image from a Microsoft Kinect v2.0:
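The reprojection step amounts to a standard pinhole projection; a minimal sketch, assuming the gray-to-depth extrinsics `R`, `t` and the depth camera intrinsics `K` are available from the calibration:

```matlab
% Sketch: reproject the detected target points into the depth image.
% worldPoints (N x 3, target points in the gray value camera frame),
% R, t and K are assumed to come from the calibration.
Xd = R * worldPoints' + t;        % transform into the depth camera frame
uv = K * Xd;                      % pinhole projection
uv = uv(1:2, :) ./ uv(3, :);      % normalize homogeneous coordinates
```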
The provided source code should be used as an add-on to the Bouguet Camera Calibration Toolbox. Installation therefore amounts to:
The following parameters have to be set:
The target can be created using the function `make_target(grid_width_pixels, grid_width_mm, grid_coordinates_h, grid_coordinates_v)`, e.g.: `target = make_target(240, 5, -18:18, -10:10);`
Optical sensing on reflective, transparent or untextured surfaces is difficult. Active illumination and computational photography may help in many aspects of industrial computer vision and 3D reconstruction. In the robot vision group we continuously evaluate and apply problem-specific sensing principles to difficult tasks.
Applying a time-multiplexed projection pattern allows for rapid and robust measurements on complex objects. With state-of-the-art camera and projector hardware we achieve image acquisition at up to 150 frames per second and 4M 3D points per measurement. A flexible sensor carrier allows adapting to various baselines and measurement ranges from 50x50 mm up to 5000x5000 mm. GPU-based code extraction, matching, and triangulation allow for rapid results.
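The exact pattern codec is not specified here; assuming Gray-coded patterns captured together with their inverses, a minimal CPU-side sketch of the code extraction looks as follows (`pat` and `patInv` are H x W x B image stacks):

```matlab
% Sketch: decode a time-multiplexed, Gray-coded pattern sequence into
% per-pixel projector column indices.
B = size(pat, 3);
grayBits = pat > patInv;                  % robust per-pixel bit decisions

% Gray code to binary: b1 = g1, bi = xor(b(i-1), gi).
binBits = grayBits;
for i = 2:B
    binBits(:, :, i) = xor(binBits(:, :, i-1), grayBits(:, :, i));
end

% Accumulate bits (MSB first) into a column index per pixel.
column = zeros(size(pat, 1), size(pat, 2));
for i = 1:B
    column = column * 2 + double(binBits(:, :, i));
end
```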
The resolution reachable by this sensing principle allows for reliable measurements on a per-pixel level. Even thin and delicate structures are observed and reconstructed.
A problem arising on most specular and white surfaces is over-exposure due to reflection. If the same scene contains both very dark and highly reflective structures, even a camera dynamic range of 12 bits may not capture the scene's dynamic range entirely. One may compensate for this effect by capturing exposure sequences and merging images of varying exposure into a high-dynamic-range (HDR) image. This comes at the cost of more images to capture, but is still feasible with a fast camera system.
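A minimal sketch of one possible fusion strategy, picking per pixel the longest unsaturated exposure (the image stack `seq` and the exposure `times` are assumed given):

```matlab
% Sketch: fuse an exposure sequence by keeping, per pixel, the longest
% exposure that is not saturated, normalized by exposure time. seq is
% H x W x E, sorted by increasing exposure; 0.95 marks saturation.
[H, W, E] = size(seq);
fused = seq(:, :, 1) / times(1);          % fall back to shortest exposure
for e = 2:E
    ok = seq(:, :, e) < 0.95;             % not saturated at this exposure
    layer = seq(:, :, e) / times(e);
    fused(ok) = layer(ok);
end
```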
Instead of gaining dynamic range in the temporal domain, one may do so in the spatial domain. By applying intensity filters with varying absorption rates on a per-pixel level, we capture radiometrically complex scenes with a considerably lower number of images, thus increasing the measurement rate for slightly moving objects such as faces.
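A minimal sketch of the single-shot recovery, assuming the per-pixel attenuation `mask` is known from a radiometric calibration:

```matlab
% Sketch: recover radiance from a single image taken through a per-pixel
% intensity filter. img and the attenuation mask (values in (0,1]) are
% assumed given.
valid = img > 0.05 & img < 0.95;          % keep well-exposed pixels only
radiance = nan(size(img));
radiance(valid) = img(valid) ./ mask(valid);
% Remaining NaN pixels could be filled by interpolation from neighbors.
```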
A single scan delivers measurements only from a given viewpoint. To create 360-degree measurements, and measurements of large objects such as an engine block, we estimate the relative sensor motion through structure-and-motion techniques and finally fuse the individual measurements into a complete object.
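With the Computer Vision Toolbox, a pairwise fusion step could be sketched as follows; the structure-and-motion estimate would serve as the initial alignment:

```matlab
% Sketch: fuse two individual scans. ICP refines the relative sensor
% motion before the aligned clouds are merged on a voxel grid.
tform   = pcregistericp(scanMoving, scanFixed);  % refine relative motion
aligned = pctransform(scanMoving, tform);
merged  = pcmerge(scanFixed, aligned, 0.5);      % 0.5 mm grid step (assumed)
```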
By mounting the sensor on a robot arm and calibrating the kinematic chain, we create a measurement system with a range of several meters. This allows us to scan large objects fully automatically.
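Conceptually, the calibrated kinematic chain maps every sensor measurement into the robot base frame; a minimal sketch with placeholder transforms:

```matlab
% Sketch: chain the calibrated transforms. Both 4x4 homogeneous matrices
% would come from hand-eye calibration and the robot controller; identity
% matrices serve as placeholders here.
T_flange_sensor = eye(4);                  % hand-eye calibration (placeholder)
T_base_flange   = eye(4);                  % current robot pose (placeholder)
p_sensor = [0.10; 0.02; 0.35; 1];          % measured point, homogeneous
p_base   = T_base_flange * T_flange_sensor * p_sensor;
```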
The treatment of eye cancer is one of the most complex treatments in the field of radiation therapy. Apart from the high costs for planning and execution, the treatment itself is highly invasive for the patient.
In this broad cooperation with the Medical University of Graz, we develop a revolutionary system for diagnostic imaging and radiation therapy of eye tumors. The therapy works in multiple steps. First, the tumor is precisely localized through magnetic resonance imaging. During this imaging process, the eye position is recorded to define the relative position of the tumor with respect to the eye's viewing direction. Second, the cancer is treated in a radiation therapy system. During this treatment, the misalignment between the actual viewing direction and the recorded viewing direction (and therefore also the relative tumor position) is measured at ten frames per second. If the deviation in position and time exceeds a predefined threshold, the radiation is paused until the correct viewing direction is reached again. Through this system, the highly invasive rigid fixation of the eye is replaced by completely non-invasive tracking and triggering. With this non-invasive eye tracking, multiple treatment sessions per patient become possible for the first time.
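A minimal sketch of the gating logic; `measureGaze`, `pauseBeam`, and `resumeBeam` are hypothetical placeholders for the tracker and machine interfaces, and the thresholds are assumptions:

```matlab
% Sketch: beam gating at 10 fps. The gaze measured in each frame is
% compared with the direction recorded during imaging; the beam is
% paused once the misalignment persists for too long.
gRef         = [0; 0; 1];        % viewing direction recorded during MRI
maxAngleDeg  = 2.0;              % tolerated misalignment
maxBadFrames = 3;                % 0.3 s at 10 fps
badFrames    = 0;

while true
    g = measureGaze();           % current unit gaze vector (hypothetical)
    angleDeg = acosd(max(min(dot(g, gRef), 1), -1));
    if angleDeg > maxAngleDeg
        badFrames = badFrames + 1;
        if badFrames >= maxBadFrames
            pauseBeam();         % hypothetical trigger interface
        end
    else
        badFrames = 0;
        resumeBeam();            % resume once alignment is restored
    end
end
```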
The camera system itself is designed to work in a magnetic resonance tomograph, a computed tomography scanner, and a radiation therapy system, following the patient through the whole treatment process. To achieve a rigid fixation of the patient in the system without using any metal materials, we develop a hardware prototype together with M&R Automation GmbH.
Setup of the MedEyeTrack System. The head is rigidly fixed to the system through a head mask.
The eyes are observed by two cameras (one for each eye) to track the movement during the treatment.
Active infrared illumination is used to be independent of ambient lighting conditions.
The MedEyeTrack software calculates the pupil misalignment and triggers the radiation system in case of a wrong viewing direction.
Eye tracker enables novel treatment procedures for eye tumors
Contact: Matthias Rüther