Robust Head Pose Estimation Using Contourlet Transform

Computer Science – Computer Vision and Pattern Recognition

Scientific paper

Details

5 pages, conference paper

Estimating the pose of the head is an important preprocessing step in many pattern recognition and computer vision systems, such as face recognition. Since the performance of face recognition systems is strongly affected by the pose of the face, accurately estimating the pose of the face in a human face image remains a challenging problem. In this paper, we present a novel method for head pose estimation. To improve the efficiency of the estimation, we use the contourlet transform for feature extraction; the contourlet transform is a multi-resolution, multi-directional transform. To reduce the dimensionality of the feature space and obtain appropriate features, we use LDA (Linear Discriminant Analysis) and PCA (Principal Component Analysis) to remove inefficient features. We then apply different classifiers, such as k-nearest neighbor (k-NN) and minimum distance. We use the publicly available FERET database to evaluate the performance of the proposed method. Simulation results indicate the superior robustness of the proposed method.
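The pipeline described in the abstract (contourlet feature extraction, PCA/LDA dimensionality reduction, then k-NN or minimum-distance classification) can be sketched as follows. This is a minimal sketch, not the authors' implementation: no widely used Python library provides the contourlet transform, so extract_contourlet_features below is a hypothetical placeholder, and the PCA-then-LDA ordering, component counts, and classifier settings are assumptions. PCA, LDA, k-NN, and the minimum-distance (nearest-centroid) classifier come from scikit-learn.

    # Hedged sketch of a contourlet + PCA/LDA + classifier pose-estimation pipeline.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neighbors import KNeighborsClassifier, NearestCentroid
    from sklearn.pipeline import make_pipeline

    def extract_contourlet_features(image: np.ndarray) -> np.ndarray:
        """Hypothetical placeholder for contourlet feature extraction.

        A real implementation would apply a Laplacian-pyramid plus
        directional-filter-bank decomposition and concatenate the subband
        coefficients (or their statistics) into one feature vector.
        Here the image is simply flattened so the rest of the pipeline runs.
        """
        return image.astype(np.float64).ravel()

    def build_pose_classifier(n_pca: int = 50, n_neighbors: int = 3, use_knn: bool = True):
        """PCA followed by LDA, then a classifier.

        The PCA-then-LDA ordering and the parameter values are assumptions;
        the paper's exact configuration may differ. NearestCentroid serves as
        the minimum-distance (distance-to-class-mean) classifier.
        """
        clf = KNeighborsClassifier(n_neighbors=n_neighbors) if use_knn else NearestCentroid()
        return make_pipeline(PCA(n_components=n_pca),
                             LinearDiscriminantAnalysis(),
                             clf)

    # Usage (assumes `images` is a list of grayscale face crops and
    # `pose_labels` holds the discrete pose class of each image, e.g. yaw bins):
    # X = np.array([extract_contourlet_features(im) for im in images])
    # model = build_pose_classifier().fit(X, pose_labels)
    # predicted_poses = model.predict(X_test)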
