We propose a method for annotating images of a hand manipulating an object with the 3D poses of both the hand and the object, together with a dataset created using this method. Annotated real images are currently scarce for this problem, as estimating the 3D poses is challenging, mostly because of the mutual occlusions between the hand and the object. To tackle this challenge, we capture sequences with one or several RGB-D cameras and jointly optimize the 3D hand and object poses over all the frames simultaneously. This method allows us to automatically annotate each frame with accurate estimates of the poses, despite large mutual occlusions. With this method, we created HO-3D, the first markerless dataset of color images with 3D annotations of both hand and object. The dataset currently comprises 80,000 frames from 65 sequences, covering 10 subjects and 10 objects. We also use it to train a deep network for RGB-based single-frame hand pose estimation and provide a baseline on our dataset.
We provide baseline results for hand pose estimation from a single RGB image on our HO-3D dataset. Please refer to the paper for more details.
| Method | Mesh error (Procrustes alignment, cm) | F@5mm | F@15mm | Joint error (scale-trans. alignment, cm) |
|---|---|---|---|---|
| Hasson et al., Learning Joint Reconstruction of Hands and Manipulated Objects, CVPR'19 | | | | |
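For reference, the metrics in the table can be computed from predicted and ground-truth hand meshes as follows. This is a sketch of the standard definitions (Procrustes-aligned mean vertex error, and F-score at a distance threshold), not the official evaluation script; function names are our own and inputs are assumed to be (N, 3) arrays in metres:

```python
import numpy as np

def procrustes_align(pred, gt):
    """Rigidly align pred to gt (rotation, translation, scale) by
    orthogonal Procrustes analysis; both are (N, 3) point sets."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g
    # Optimal rotation from the SVD of the cross-covariance matrix
    U, S, Vt = np.linalg.svd(p.T @ g)
    R = U @ Vt
    if np.linalg.det(R) < 0:      # fix a possible reflection
        Vt[-1] *= -1
        S = S.copy(); S[-1] *= -1
        R = U @ Vt
    scale = S.sum() / (p ** 2).sum()
    return scale * p @ R + mu_g

def mesh_error_cm(pred, gt):
    """Mean per-vertex Euclidean error in cm after Procrustes alignment."""
    aligned = procrustes_align(pred, gt)
    return 100.0 * np.linalg.norm(aligned - gt, axis=1).mean()

def f_score(pred, gt, thresh_m):
    """F-score at a distance threshold: harmonic mean of precision
    (predicted points within thresh of GT) and recall (vice versa)."""
    d = np.linalg.norm(pred[:, None] - gt[None], axis=-1)
    precision = (d.min(axis=1) < thresh_m).mean()
    recall = (d.min(axis=0) < thresh_m).mean()
    return 2 * precision * recall / max(precision + recall, 1e-8)
```

For example, `f_score(pred, gt, 0.005)` gives F@5mm and `f_score(pred, gt, 0.015)` gives F@15mm when the meshes are in metres.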
This work was supported by the Christian Doppler Laboratory for Semantic 3D Computer Vision, funded in part by Qualcomm Inc.