ALCN is a novel illumination normalization method. It lets us learn to detect objects and estimate their 3D poses under challenging illumination conditions from very few training samples. This work was supported by the Christian Doppler Laboratory for Semantic 3D Computer Vision, funded in part by Qualcomm Inc.
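ALCN learns its normalization from data; for readers unfamiliar with illumination normalization, a minimal sketch of a classical, non-learned local contrast normalization baseline (not the ALCN method itself -- the function name and window size here are illustrative assumptions) may help clarify what such a step does to an image:

```python
import numpy as np

def local_contrast_normalize(img, k=3, eps=1e-6):
    """Classical local contrast normalization (a fixed baseline, NOT ALCN):
    subtract the local mean and divide by the local standard deviation,
    both computed over a k x k window around each pixel."""
    img = img.astype(np.float64)
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    # Gather the k*k shifted views of the image to form each pixel's window.
    windows = np.stack([padded[i:i + img.shape[0], j:j + img.shape[1]]
                        for i in range(k) for j in range(k)])
    mean = windows.mean(axis=0)
    std = windows.std(axis=0)
    return (img - mean) / (std + eps)

# An affine illumination change (gain and offset) is largely removed:
img = (np.arange(100, dtype=np.float64).reshape(10, 10) * 7) % 50
out_dark = local_contrast_normalize(img)
out_bright = local_contrast_normalize(2.0 * img + 30.0)
```

Such fixed normalizations only handle simple (roughly affine) illumination changes; ALCN's contribution is to learn the normalization so that detection remains reliable under the much harder lighting variations in this dataset.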
The ALCN dataset consists of two parts, ALCN-2D and ALCN-Duck:
ALCN-2D for benchmarking object detection under challenging lighting conditions and cluttered backgrounds. We select three objects spanning different material properties: plastic, velvet, and metal (velvet has a BRDF that is neither Lambertian nor specular, and the metallic object -- the watch -- is highly specular). For each object, we provide 10 real 300x300 grayscale training images and 1200 1280x800 grayscale test images showing the objects under different illuminations, different lighting colors, and with distractors in the background.
ALCN-Duck for benchmarking 3D pose estimation. This dataset consists of a training sequence of 1000 registered frames of a 3D-printed replica of the Duck from the Hinterstoisser dataset, captured under a single illumination, and 8 test sequences captured under various illuminations.