Computer Science – Robotics
Scientific paper
2011-11-22
arXiv admin note: substantial text overlap with arXiv:1106.5551
RGB-D cameras, which give an RGB image together with depths, are becoming increasingly popular for robotic perception. In this paper, we address the task of detecting commonly found objects in the 3D point cloud of indoor scenes obtained from such cameras. Our method uses a graphical model that captures various features and contextual relations, including local visual appearance and shape cues, object co-occurrence relationships, and geometric relationships. With a large number of object classes and relations, the model's parsimony becomes important, and we address this by using multiple types of edge potentials. We train the model using a maximum-margin learning approach. In our experiments over a total of 52 3D scenes of homes and offices (composed from about 550 views), we achieve labeling accuracies of 84.06% and 73.38% on office and home scenes, respectively, for 17 object classes each. We also present a method for a robot to search for an object using the learned model and the contextual information available from the current labeling of the scene. We applied this algorithm successfully on a mobile robot for the task of finding 12 object classes in 10 different offices and achieved a precision of 97.56% with 78.43% recall.
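To make the modeling idea concrete, the following is a minimal illustrative sketch (not the authors' code) of scoring a labeling of scene segments in a pairwise graphical model: each segment carries node potentials from appearance/shape cues, and adjacent segments carry edge potentials encoding contextual relations such as co-occurrence. All labels, potentials, and numbers below are hypothetical toy values.

```python
# Toy pairwise-model sketch: node potentials + contextual edge potentials.
# All values are hypothetical; real models learn them (e.g. max-margin).
from itertools import product

LABELS = ["wall", "table", "monitor"]

# Hypothetical node potentials: segment index -> {label: appearance score}
node_pot = {
    0: {"wall": 2.0, "table": 0.1, "monitor": 0.2},
    1: {"wall": 0.3, "table": 1.5, "monitor": 0.4},
    2: {"wall": 0.2, "table": 0.3, "monitor": 1.2},
}

# Hypothetical edge potentials rewarding plausible context, e.g.
# a monitor next to a table; keys are (label_i, label_j) pairs.
edge_pot = {("table", "monitor"): 1.0, ("wall", "table"): 0.5}

edges = [(0, 1), (1, 2)]  # adjacency between segments in the scene graph

def score(labeling):
    """Total score = sum of node potentials + sum of edge potentials."""
    s = sum(node_pot[i][lab] for i, lab in enumerate(labeling))
    s += sum(edge_pot.get((labeling[i], labeling[j]), 0.0) for i, j in edges)
    return s

def map_labeling():
    """Exhaustive MAP inference; fine for a toy graph of 3 segments."""
    return max(product(LABELS, repeat=len(node_pot)), key=score)

print(map_labeling())  # -> ('wall', 'table', 'monitor')
```

Here the context term tips the inference: even with weak node evidence, the ("table", "monitor") edge potential rewards labelings that respect typical geometric and co-occurrence relations, which is the intuition behind both the labeling and the context-guided object search described above.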
Abhishek Anand
Thorsten Joachims
Hema Swetha Koppula
Ashutosh Saxena
Contextually Guided Semantic Labeling and Search for 3D Point Clouds