Why Global Performance is a Poor Metric for Verifying Convergence of Multi-agent Learning
Computer Science – Multiagent Systems
Scientific paper
2009-04-15
11 pages
Experimental verification has been the method of choice for assessing the stability of a multi-agent reinforcement learning (MARL) algorithm as the number of agents grows and theoretical analysis becomes prohibitively complex. For cooperative agents, where the ultimate goal is to optimize some global metric, stability is usually verified by observing the evolution of the global performance metric over time. If the global metric improves and eventually stabilizes, this is taken as reasonable evidence of the system's stability. The main contribution of this note is establishing the need for better experimental frameworks and measures to assess the stability of large-scale adaptive cooperative systems. We present an experimental case study where the stability of the global performance metric can be deceiving, hiding an underlying instability in the system that later leads to a significant drop in performance. We then propose an alternative metric that relies on agents' local policies and show, experimentally, that our proposed metric is more effective than the traditional global performance metric at exposing the instability of MARL algorithms.
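To illustrate the distinction the abstract draws, the sketch below contrasts a global performance measure with a simple local-policy stability measure: the fraction of agents whose greedy action changed between two checkpoints of their value tables. This is a minimal sketch under assumed conventions; the function names, the tabular Q-value representation, and the specific metric definitions are illustrative and are not taken from the paper.

```python
import numpy as np


def global_performance(rewards_per_agent):
    """Hypothetical global metric: mean reward across all agents."""
    return float(np.mean(rewards_per_agent))


def policy_change_fraction(q_prev, q_curr):
    """Hypothetical local-policy metric: fraction of agents whose greedy
    action (argmax over Q-values) changed in at least one state since the
    previous checkpoint.

    q_prev, q_curr: arrays of shape (num_agents, num_states, num_actions).
    """
    prev_greedy = q_prev.argmax(axis=-1)          # (num_agents, num_states)
    curr_greedy = q_curr.argmax(axis=-1)
    changed = (prev_greedy != curr_greedy).any(axis=-1)  # per-agent flag
    return float(changed.mean())


# Illustrative use: a flat global curve can coexist with churning policies.
rng = np.random.default_rng(0)
num_agents, num_states, num_actions = 100, 10, 4
q_prev = rng.normal(size=(num_agents, num_states, num_actions))
q_curr = q_prev + rng.normal(scale=0.5, size=q_prev.shape)  # noisy updates
rewards = rng.normal(loc=1.0, scale=0.1, size=num_agents)

print("global performance:", global_performance(rewards))
print("policy change fraction:", policy_change_fraction(q_prev, q_curr))
```

In this toy setup the global performance stays near a constant value while a large fraction of agents keep switching greedy actions, which is the kind of hidden instability the abstract argues a purely global metric can mask.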