An information theory for preferences

Computer Science – Artificial Intelligence

Scientific paper


Details


DOI: 10.1063/1.1751362

Recent literature from the last Maximum Entropy workshop introduced an analogy between cumulative probability distributions and normalized utility functions. Based on this analogy, a utility density function can be defined as the derivative of a normalized utility function. A utility density function is non-negative and integrates to unity; these two properties form the basis of a correspondence between utility and probability. A natural application of this analogy is a maximum entropy principle for assigning utility values. Maximum entropy utility interprets many common utility functions in terms of the preference information needed for their assignment, and helps assign utility values when only partial preference information is available. This paper reviews maximum entropy utility and presents further results that stem from the duality between probability and utility.
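
As a concrete illustration of the analogy, the sketch below treats a normalized utility function like a cumulative distribution function and constructs a utility curve consistent with a few assessed utility points. Under such fractile-style constraints, the maximum entropy utility density is piecewise constant, so the utility function is the piecewise-linear interpolation of the assessed points. This is a minimal numerical sketch under those assumptions; the function names and the assessed points are illustrative and are not taken from the paper.

import numpy as np

# A normalized utility function U plays the role of a CDF on [x_min, x_max]:
# it rises from 0 to 1, and its derivative u = dU/dx (the "utility density")
# is non-negative and integrates to one.

def max_entropy_utility(assessed_x, assessed_u):
    """Return U(x) and u(x) interpolating the assessed points (illustrative)."""
    assessed_x = np.asarray(assessed_x, dtype=float)
    assessed_u = np.asarray(assessed_u, dtype=float)

    def U(x):
        # Piecewise-linear normalized utility (the "CDF" of the analogy).
        return np.interp(x, assessed_x, assessed_u)

    def density(x):
        # Piecewise-constant utility density (derivative of U between points).
        slopes = np.diff(assessed_u) / np.diff(assessed_x)
        idx = np.clip(np.searchsorted(assessed_x, x, side="right") - 1,
                      0, len(slopes) - 1)
        return slopes[idx]

    return U, density

# Hypothetical example: utility of an outcome on [0, 100] with one interior
# assessment U(30) = 0.6.
U, u = max_entropy_utility([0.0, 30.0, 100.0], [0.0, 0.6, 1.0])
xs = np.linspace(0.0, 100.0, 5)
print(U(xs))            # rises from 0 to 1, like a cumulative distribution
print(u([10.0, 50.0]))  # piecewise-constant density; integrates to unity

With no assessed points other than the endpoints, the same construction returns the uniform density and a linear utility function, which matches the intuition that maximum entropy imposes no preference structure beyond what is assessed.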
