Computer Science – Artificial Intelligence
Scientific paper
2012-01-16
Large-scale parallel clusters composed of commodity processors are increasingly available, offering vast processing capability and distributed RAM for solving hard search problems. We investigate Hash-Distributed A* (HDA*), a simple approach to parallel best-first search that asynchronously distributes and schedules work among processors based on a hash function of the search state. We use this approach to parallelize the A* algorithm in an optimal sequential version of the Fast Downward planner, as well as a 24-puzzle solver. The scaling behavior of HDA* is evaluated experimentally on a shared-memory multicore machine with 8 cores, a cluster of commodity machines using up to 64 cores, and a large-scale high-performance cluster using up to 1024 processors. We show that this approach scales well, allowing the effective utilization of large amounts of distributed memory to optimally solve problems that require more than a terabyte of RAM. We also compare HDA* to Transposition-table Driven Scheduling (TDS), a hash-based parallelization of IDA*, and show that, in planning, HDA* significantly outperforms TDS. Finally, we propose a simple hybrid that combines HDA* and TDS to exploit the strengths of both.
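The core idea in the abstract, assigning each search state to a single owning process via a hash function so that duplicate detection for a state always happens in one place, can be sketched as follows. This is an assumption-based illustration, not the authors' implementation; the process count, the `owner` helper, and the use of MD5 as the hash are all hypothetical choices.

```python
# Sketch of HDA*-style work distribution: every generated state is hashed
# to the rank of its "owner" process, which holds the open/closed lists
# for that state. (Illustrative only; real HDA* sends nodes asynchronously
# over MPI rather than appending to local lists.)
import hashlib

NUM_PROCS = 8  # hypothetical number of processes


def owner(state: tuple, num_procs: int = NUM_PROCS) -> int:
    """Map a search state to the rank of the process that owns it."""
    digest = hashlib.md5(repr(state).encode()).digest()
    return int.from_bytes(digest, "big") % num_procs


# When any process generates a successor, it does not expand the node
# itself; it routes the node to the owner's work queue.
queues = [[] for _ in range(NUM_PROCS)]
successor = (1, 2, 3, 0)  # e.g. a small sliding-tile state
queues[owner(successor)].append(successor)
```

Because the mapping is deterministic, two processes that generate the same state send it to the same owner, which can then detect and discard the duplicate locally.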
Adi Botea
Alex Fukunaga
Akihiro Kishimoto
Evaluation of a Simple, Scalable, Parallel Best-First Search Strategy