Computer Science – Artificial Intelligence
Scientific paper
2003-10-23
DOI: 10.1063/1.1751362
Recent work presented at the last Maximum Entropy workshop introduced an analogy between cumulative probability distributions and normalized utility functions. Based on this analogy, a utility density function can be defined as the derivative of a normalized utility function. A utility density function is non-negative and integrates to unity. These two properties form the basis of a correspondence between utility and probability. A natural application of this analogy is a maximum entropy principle for assigning utility values. Maximum entropy utility interprets many common utility functions in terms of the preference information needed for their assignment, and helps assign utility values from partial preference information. This paper reviews maximum entropy utility and presents further results that stem from the duality between probability and utility.
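As a minimal illustration of the idea (a sketch, not the paper's own derivation): when a decision maker assesses a normalized utility only at a few outcome points, treating the utility density like a probability density and maximizing its entropy subject to those point constraints yields a piecewise-constant density, i.e., a piecewise-linear utility function between the assessed points. The outcome points and utility values below are made-up assumptions for the example.

```python
import numpy as np

# Partial preference information: assessed utility points U(x_i) = u_i,
# with the boundary conditions U(0) = 0 and U(1) = 1 (normalized utility).
# These numbers are hypothetical, chosen only to illustrate the method.
xs = np.array([0.0, 0.3, 0.7, 1.0])
us = np.array([0.0, 0.5, 0.8, 1.0])

def maxent_utility(x):
    """Maximum entropy utility under point constraints: piecewise linear."""
    return np.interp(x, xs, us)

# The utility density is the derivative of the utility function;
# for a piecewise-linear utility it is piecewise constant.
grid = np.linspace(0.0, 1.0, 1001)
u_vals = maxent_utility(grid)
density = np.diff(u_vals) / np.diff(grid)   # slope on each grid segment

# Check the two probability-like properties the abstract relies on.
nonnegative = bool(np.all(density >= -1e-12))   # density >= 0
total = float(np.sum(density * np.diff(grid)))  # integrates to U(1) - U(0) = 1
print(nonnegative, round(total, 6))
```

Because the integral of the density telescopes to U(1) − U(0) = 1, the density satisfies exactly the normalization property that underpins the utility–probability correspondence described above.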
Title: An information theory for preferences