We explore how a general AI algorithm can be used for 3D scene understanding to reduce the need for training data. More precisely, we propose a modification of the Monte Carlo Tree Search (MCTS) algorithm to retrieve objects and room layouts from noisy RGB-D scans. While MCTS was developed as a game-playing algorithm, we show it can also be applied to complex perception problems. Our adapted MCTS algorithm has few easy-to-tune hyperparameters and can optimise general losses. We use it to optimise the posterior probability of object and room layout hypotheses given the RGB-D data. This results in an analysis-by-synthesis approach that explores the solution space by rendering the current solution and comparing it to the RGB-D observations. To make this exploration even more efficient, we propose simple changes to the standard MCTS tree construction and exploration policy. We demonstrate our approach on the ScanNet dataset. Our method often retrieves configurations that are better than some manual annotations, especially for layouts.
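To give a concrete flavour of the analysis-by-synthesis idea, the sketch below compares a depth map rendered from a candidate scene hypothesis against the observed depth. This is a toy stand-in, not the paper's actual loss: the function name, the threshold `tau`, and the robust per-pixel agreement measure are our assumptions.

```python
import numpy as np

def depth_agreement(rendered, observed, tau=0.05):
    """Fraction of valid pixels where rendered and observed depth agree
    within tau metres (a simple robust fit measure, not the paper's exact loss)."""
    valid = (observed > 0) & (rendered > 0)   # ignore pixels with missing depth
    if not valid.any():
        return 0.0
    close = np.abs(rendered[valid] - observed[valid]) < tau
    return float(close.mean())

# Toy example: a 4x4 observed depth map and a hypothesis that explains
# every pixel except one.
observed = np.full((4, 4), 2.0)
rendered = observed.copy()
rendered[0, 0] = 2.5   # one badly explained pixel
print(depth_agreement(rendered, observed))  # 15/16 pixels agree -> 0.9375
```

A search over scene hypotheses can then maximise such a score, re-rendering and re-scoring each candidate configuration.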
Given a scene consisting of registered RGB-D images, we first extract proposals that potentially correspond to the furniture and structural components of the scene. Our MCTS-based approach, which we call Monte Carlo Scene Search (MCSS), efficiently finds the subset of proposals that best fits the given scene. For this, we rely on a scoring function that enforces semantic and geometric consistency.
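The selection step above can be sketched as a tree search over keep/drop decisions, one per proposal. The following is a minimal, self-contained illustration only: the proposal names, fit scores, conflict set, UCB constant, and max-style backup are all our assumptions, not the paper's exact algorithm.

```python
import math

# Hypothetical proposals with per-proposal fit scores and one
# geometrically incompatible pair (all values are made up).
PROPOSALS = ["wall_a", "wall_b", "chair_1", "chair_2", "sofa_1"]
FIT = {"wall_a": 0.9, "wall_b": 0.8, "chair_1": 0.7, "chair_2": 0.3, "sofa_1": 0.6}
CONFLICTS = {("chair_1", "chair_2")}

def score(selection):
    """Toy stand-in for the scoring function: summed fit minus a penalty
    for selecting geometrically inconsistent pairs."""
    s = sum(FIT[p] for p in selection)
    s -= sum(1.0 for a, b in CONFLICTS if a in selection and b in selection)
    return s

class Node:
    def __init__(self, depth, selection):
        self.depth = depth          # index of the next proposal to decide on
        self.selection = selection  # proposals accepted so far
        self.children = None
        self.visits = 0
        self.value = -math.inf      # best score seen in this subtree (max-backup)

def ucb(parent, child, c=0.5):
    if child.visits == 0:
        return math.inf             # always try unvisited children first
    return child.value + c * math.sqrt(math.log(parent.visits) / child.visits)

def mcss(iterations=1000):
    root = Node(0, frozenset())
    best_score, best_sel = -math.inf, frozenset()
    for _ in range(iterations):
        # Descend to a leaf, deciding to keep or drop each proposal in turn.
        node, path = root, [root]
        while node.depth < len(PROPOSALS):
            if node.children is None:
                p = PROPOSALS[node.depth]
                node.children = [Node(node.depth + 1, node.selection),
                                 Node(node.depth + 1, node.selection | {p})]
            node = max(node.children, key=lambda ch: ucb(node, ch))
            path.append(node)
        # Evaluate the complete selection and back up the best score.
        r = score(node.selection)
        if r > best_score:
            best_score, best_sel = r, node.selection
        for n in path:
            n.visits += 1
            n.value = max(n.value, r)
    return best_sel, best_score
```

On this toy problem the search settles on the conflict-free subset with the highest summed fit; the real method replaces the toy score with the rendering-based posterior described above.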
| Method | Obj-Obj relationship modelling | Chair | Sofa | Bed |
|---|---|---|---|---|