Rollout Sampling Policy Iteration for Decentralized POMDPs

Computer Science – Artificial Intelligence

Scientific paper

Details

Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI 2010)

We present decentralized rollout sampling policy iteration (DecRSPI), a new algorithm for multi-agent decision problems formalized as DEC-POMDPs. DecRSPI is designed to improve scalability and to tackle problems that lack an explicit model. The algorithm uses Monte Carlo methods to generate a sample of reachable belief states, and then computes a joint policy for each sampled belief state based on rollout estimations. A new policy representation allows solutions to be stored compactly. The key benefits of the algorithm are its time complexity, which is linear in the number of agents, its bounded memory usage, and its good solution quality. It can solve larger problems that are intractable for existing planning algorithms. Experimental results confirm the effectiveness and scalability of the approach.
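
The abstract gives no pseudocode, so the following is only a minimal Python sketch of the generic idea it describes: simulate forward to collect a sample of reachable situations, then choose a joint action at each sample by averaging Monte Carlo rollout returns. The two-agent "tiger-style" toy model, the uniform default policy, and all constants (ACTIONS, HORIZON, NUM_ROLLOUTS) are illustrative assumptions, not DecRSPI itself or the paper's experimental domains.

```python
# Illustrative sketch of rollout-sampling action selection for a two-agent
# decision problem. The toy model and all parameters below are assumptions
# made for illustration; this is not the DecRSPI algorithm from the paper,
# only the generic Monte Carlo rollout-estimation idea it builds on.
import random
import itertools

ACTIONS = ["listen", "open-left", "open-right"]   # per-agent actions (assumed)
STATES = ["tiger-left", "tiger-right"]            # hidden states (assumed)
HORIZON = 5
NUM_ROLLOUTS = 200


def step(state, joint_action):
    """Assumed transition/reward model: both agents must open the safe door."""
    a1, a2 = joint_action
    if a1 == "listen" and a2 == "listen":
        return state, -2.0                         # small cost for listening
    safe = "open-right" if state == "tiger-left" else "open-left"
    reward = 20.0 if (a1 == safe and a2 == safe) else -50.0
    return random.choice(STATES), reward           # problem resets after a door opens


def rollout(state, first_joint_action, horizon):
    """Return of one Monte Carlo rollout: take the given joint action, then
    follow a fixed default policy (here: uniform random) to the horizon."""
    state, total = step(state, first_joint_action)[0], 0.0
    state, r = step(state, first_joint_action)
    total += r
    for _ in range(horizon - 1):
        ja = (random.choice(ACTIONS), random.choice(ACTIONS))
        state, r = step(state, ja)
        total += r
    return total


def sample_reachable_states(num_samples):
    """Stand-in for belief sampling: simulate the default policy forward and
    record the hidden states it reaches."""
    samples = []
    for _ in range(num_samples):
        state = random.choice(STATES)
        for _ in range(random.randint(0, HORIZON)):
            ja = (random.choice(ACTIONS), random.choice(ACTIONS))
            state, _ = step(state, ja)
        samples.append(state)
    return samples


def best_joint_action(state):
    """Estimate each joint action's value by averaging rollout returns and
    keep the best one (the rollout-estimation step described in the abstract)."""
    best, best_value = None, float("-inf")
    for ja in itertools.product(ACTIONS, repeat=2):
        value = sum(rollout(state, ja, HORIZON) for _ in range(NUM_ROLLOUTS)) / NUM_ROLLOUTS
        if value > best_value:
            best, best_value = ja, value
    return best, best_value


if __name__ == "__main__":
    for s in sample_reachable_states(3):
        ja, v = best_joint_action(s)
        print(f"state={s}: best joint action {ja}, estimated value {v:.1f}")
```

Note that this sketch plans from the true hidden state for simplicity; in a DEC-POMDP the agents would act from local observation histories or sampled beliefs, which is the setting the paper addresses.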
