Featureless 2D-3D Pose Estimation by Minimising an Illumination-Invariant Loss

Computer Science – Computer Vision and Pattern Recognition

Scientific paper


Details

18 LaTeX pages, 7 figures

The problem of identifying the 3D pose of a known object from a given 2D image has important applications in Computer Vision, ranging from robotic vision to image analysis. Our proposed method of registering a 3D model of a known object onto a given 2D photo of the object has numerous advantages over existing methods: it requires neither prior training nor learning, nor knowledge of the camera parameters, nor explicit point correspondences or matching features between image and model. Unlike techniques that estimate only a partial 3D pose (as in an overhead view of traffic or of machine parts on a conveyor belt), our method estimates the complete 3D pose of the object and works on a single static image from a given view, under varying and unknown lighting conditions. For this purpose we derive a novel illumination-invariant distance measure between the 2D photo and the projected 3D model, which is then minimised to find the best pose parameters. Results for vehicle pose detection are presented.
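
The abstract above only outlines the approach, so the sketch below is a rough, hypothetical illustration of such a featureless registration loop: render the known 3D model at a candidate pose, compare the rendering with the photo using an illumination-invariant distance, and minimise that distance over the six pose parameters (rotation and translation). The particular distance used here (one minus the absolute normalised cross-correlation, which is unaffected by affine brightness changes) and the render_gray callable are illustrative assumptions, not the measure or renderer derived in the paper.

    import numpy as np
    from scipy.optimize import minimize

    def illumination_invariant_loss(photo, rendering, eps=1e-9):
        """1 - |normalised cross-correlation| between two grayscale images.
        NCC is invariant to affine brightness changes a*I + b, so the loss
        depends on image structure rather than absolute illumination."""
        p = photo.astype(np.float64).ravel()
        r = rendering.astype(np.float64).ravel()
        p = (p - p.mean()) / (p.std() + eps)
        r = (r - r.mean()) / (r.std() + eps)
        return 1.0 - abs(np.dot(p, r) / p.size)

    def estimate_pose(photo, render_gray, pose0):
        """Search for the 6-DoF pose (3 rotation and 3 translation parameters)
        whose projection best matches the photo. render_gray(pose) is a
        placeholder for a rasteriser of the known 3D model; a derivative-free
        optimiser is used because rendering is not assumed differentiable."""
        objective = lambda pose: illumination_invariant_loss(photo, render_gray(pose))
        result = minimize(objective, pose0, method="Nelder-Mead",
                          options={"xatol": 1e-4, "fatol": 1e-6, "maxiter": 2000})
        return result.x, result.fun

In practice such a loss is highly non-convex in the pose parameters, so a reasonable initial pose or a multi-start search would be needed; the paper itself should be consulted for the actual distance measure and optimisation strategy.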


Profile ID: LFWR-SCP-O-145171
