Competitive Safety Analysis: Robust Decision-Making in Multi-Agent Systems
Computer Science – Computer Science and Game Theory
Scientific paper
2011-06-22
Journal of Artificial Intelligence Research, Volume 17, pages 363-378, 2002
DOI: 10.1613/jair.1065
Much work in AI deals with selecting proper actions in a given (known or unknown) environment. However, how to select a proper action when facing other agents is far less clear. Most work in AI adopts classical game-theoretic equilibrium analysis to predict agent behavior in such settings. This approach, however, does not provide the agent with any guarantee. In this paper we introduce competitive safety analysis, which bridges the gap between the desired normative AI approach, where a strategy is selected in order to guarantee a desired payoff, and equilibrium analysis. We show that in several classical computer science settings, a safety-level strategy guarantees the value obtained in a Nash equilibrium. We then discuss the concept of competitive safety strategies and illustrate its use in a decentralized load-balancing setting typical of network problems. In particular, we show that with many agents, it is possible to guarantee an expected payoff within a factor of 8/9 of the payoff obtained in a Nash equilibrium. Our discussion of competitive safety analysis for decentralized load balancing is further developed to deal with many communication links and arbitrary speeds. Finally, we discuss the extension of these concepts to Bayesian games and illustrate their use in a basic auction setup.
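The safety-level (maxmin) idea underlying the analysis can be sketched as follows: the agent picks the action that maximizes its worst-case payoff over the other agents' possible actions. This is a minimal illustration restricted to pure strategies over a small invented payoff matrix; it is not one of the paper's actual settings.

```python
# Safety-level (maxmin) choice for the row player of a matrix game.
# The payoff matrix A is an invented example, not taken from the paper.

def safety_level(payoffs):
    """Return (best_row, guaranteed_payoff): the pure action whose
    worst-case payoff across all column responses is largest."""
    worst = [min(row) for row in payoffs]          # worst case per action
    best = max(range(len(worst)), key=worst.__getitem__)
    return best, worst[best]

A = [[3, 1],
     [2, 2]]
action, value = safety_level(A)
# In this matrix, entry (row 1, col 1) is a saddle point, so the
# safety level (2) coincides with the zero-sum equilibrium value,
# mirroring the paper's claim that a safety-level strategy can
# guarantee the Nash payoff in some classical settings.
```

With mixed strategies the safety level is computed by a linear program instead of a simple max-min scan, but the guarantee has the same worst-case character.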