GPU acceleration of the particle filter: the Metropolis resampler
Scientific paper
Statistics – Computation
2012-02-28
Originally presented at Distributed Machine Learning and Sparse Representation with Massive Data Sets (DMMD 2011)
We consider deployment of the particle filter on modern massively parallel hardware architectures, such as Graphics Processing Units (GPUs), with a focus on the resampling stage. While standard multinomial and stratified resamplers require a sum of importance weights computed collectively across threads, the Metropolis resampler favourably requires only pair-wise ratios between weights, computed independently by threads, and can be further tuned for performance by adjusting its number of iterations. While the stratified and multinomial resamplers achieve respectable results, we demonstrate that the Metropolis resampler can be faster when the variance of the importance weights is modest, and so is worth considering in performance-critical contexts such as particle Markov chain Monte Carlo and real-time applications.
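As a concrete illustration of the idea described in the abstract (not code from the paper itself), the following is a minimal CUDA sketch of a Metropolis resampler: each thread runs an independent Metropolis chain over particle indices using only pair-wise weight ratios, and the number of iterations B trades accuracy for speed. The kernel name, weight values, block size, and choice of B here are illustrative assumptions.

// Minimal Metropolis resampler sketch: N particles with unnormalised
// weights w[0..N-1] on the device; ancestor[i] receives the index of the
// particle that thread i resamples. No collective sum of weights is needed.
#include <cuda_runtime.h>
#include <curand_kernel.h>
#include <cstdio>

__global__ void metropolis_resample(const float* w, int* ancestor,
                                    int N, int B, unsigned long long seed) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= N) return;

    curandState state;
    curand_init(seed, i, 0, &state);

    // Independent Metropolis chain over indices, started at this thread's own index.
    int k = i;
    for (int b = 0; b < B; ++b) {
        int j = curand(&state) % N;             // uniform proposal over particle indices
        float alpha = w[j] / w[k];              // pair-wise weight ratio only
        if (curand_uniform(&state) <= alpha) {  // accept with probability min(1, alpha)
            k = j;
        }
    }
    ancestor[i] = k;
}

int main() {
    const int N = 1024;  // number of particles (illustrative)
    const int B = 32;    // Metropolis iterations: tunes bias vs. run time
    float h_w[N];
    for (int i = 0; i < N; ++i) h_w[i] = 1.0f + 0.1f * (i % 7);  // dummy weights

    float* d_w; int* d_a;
    cudaMalloc(&d_w, N * sizeof(float));
    cudaMalloc(&d_a, N * sizeof(int));
    cudaMemcpy(d_w, h_w, N * sizeof(float), cudaMemcpyHostToDevice);

    metropolis_resample<<<(N + 255) / 256, 256>>>(d_w, d_a, N, B, 1234ULL);

    int h_a[N];
    cudaMemcpy(h_a, d_a, N * sizeof(int), cudaMemcpyDeviceToHost);
    printf("first ancestors: %d %d %d %d\n", h_a[0], h_a[1], h_a[2], h_a[3]);

    cudaFree(d_w);
    cudaFree(d_a);
    return 0;
}

Because each thread only reads two weights per iteration and never synchronises with other threads, the kernel avoids the collective prefix-sum step that multinomial and stratified resamplers require, which is the performance advantage the paper examines.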