Multi-Layer Local Graph Words for Object Recognition

Computer Science – Multimedia

Scientific paper

Details

International Conference on MultiMedia Modeling, Klagenfurt, Austria (2012)

In this paper, we propose a new multi-layer structural approach to object-based image retrieval, addressing the problem of the structural organization of local features. The proposed structural features are nested multi-layer local graphs built on sets of SURF feature points with Delaunay triangulation. A Bag-of-Visual-Words (BoVW) framework is applied to these graphs, yielding a Bag-of-Graph-Words representation. The multi-layer nature of the descriptors comes from scaling up from trivial Delaunay graphs (isolated feature points) by increasing the number of nodes layer by layer, up to graphs with a maximal number of nodes. A separate visual dictionary is built for each layer of graphs. Experiments conducted on the SIVAL and Caltech-101 data sets show that graph features at different layers exhibit complementary performance on the same content and outperform the baseline BoVW approach. Combining all layers yields a significant further improvement in object recognition performance over single-layer approaches.
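The abstract outlines a pipeline: local feature points, small Delaunay-based local graphs of growing size, and one visual dictionary per layer. The Python sketch below illustrates that general idea only; it is not the authors' implementation. Random 2D points stand in for SURF keypoints, the edge-length descriptor and the layer sizes are arbitrary assumptions, and the paper's actual graph signature and matching scheme may differ.

```python
# Minimal sketch of a multi-layer Bag-of-Graph-Words pipeline (assumptions noted below).
import numpy as np
from scipy.spatial import Delaunay
from sklearn.cluster import KMeans

def local_graphs(points, layer_size):
    """For each seed point, take its `layer_size` nearest neighbours and triangulate
    them, yielding one small local Delaunay graph per seed. The trivial layer
    (isolated feature points) would use the raw point descriptors directly."""
    graphs = []
    for seed in points:
        idx = np.argsort(np.linalg.norm(points - seed, axis=1))[:layer_size]
        nodes = points[idx]
        edges = set()
        if layer_size >= 3:  # Delaunay needs at least 3 non-collinear points
            tri = Delaunay(nodes)
            for simplex in tri.simplices:
                for a in range(3):
                    for b in range(a + 1, 3):
                        edges.add(tuple(sorted((simplex[a], simplex[b]))))
        graphs.append((nodes, edges))
    return graphs

def graph_descriptor(nodes, edges):
    """Toy structural descriptor (assumption): sorted edge lengths, zero-padded.
    The paper would build a richer signature from the SURF descriptors."""
    lengths = sorted(np.linalg.norm(nodes[a] - nodes[b]) for a, b in edges)
    vec = np.zeros(16)
    vec[:min(len(lengths), 16)] = lengths[:16]
    return vec

def build_layer_dictionary(descriptors, n_words):
    """One visual dictionary (k-means codebook) per layer, as in the abstract."""
    return KMeans(n_clusters=n_words, n_init=10).fit(np.vstack(descriptors))

# Usage: in practice the points would come from a SURF detector; random points stand in here.
points = np.random.rand(200, 2) * 640
for layer_size in (4, 6, 8):  # node count increases layer by layer
    descs = [graph_descriptor(n, e) for n, e in local_graphs(points, layer_size)]
    codebook = build_layer_dictionary(descs, n_words=20)
    hist = np.bincount(codebook.predict(np.vstack(descs)), minlength=20)
    print(f"layer with {layer_size} nodes -> BoGW histogram of {hist.sum()} graphs")
```

In this reading, each layer produces its own histogram over its own codebook, and the per-layer histograms can then be concatenated or fused, which is one plausible way to realize the layer combination the abstract reports as outperforming single-layer approaches.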
