Ontological Crises in Artificial Agents' Value Systems

Computer Science – Artificial Intelligence

Scientific paper


Decision-theoretic agents predict and evaluate the results of their actions using a model, or ontology, of their environment. An agent's goal, or utility function, may also be specified in terms of the states of, or entities within, its ontology. If the agent may upgrade or replace its ontology, it faces a crisis: the agent's original goal may not be well-defined with respect to its new ontology. This crisis must be resolved before the agent can make plans towards achieving its goals. We discuss in this paper which sorts of agents will undergo ontological crises and why we may want to create such agents. We present some concrete examples, and argue that a well-defined procedure for resolving ontological crises is needed. We point to some possible approaches to solving this problem, and evaluate these methods on our examples.
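The crisis described above can be made concrete with a minimal sketch (illustrative only, not from the paper): a utility function is defined over the states of a coarse ontology, the agent adopts a finer-grained ontology whose states the utility function does not cover, and the gap is bridged by a hypothetical correspondence map from new states to old ones. The state names and the `bridge` map are invented for illustration.

```python
# Old ontology: the world is one of two coarse states,
# and the agent's utility is defined directly over them.
utility_old = {"diamond_present": 1.0, "diamond_absent": 0.0}

# New ontology: a finer-grained physical description.
new_states = {"carbon_lattice", "carbon_gas", "vacuum"}

# The ontological crisis: the original utility function assigns
# no value to any state of the new ontology.
assert all(s not in utility_old for s in new_states)

# One candidate resolution: a correspondence map (assumed, not derived)
# from new states to old states, used to induce a utility function
# on the new ontology.
bridge = {
    "carbon_lattice": "diamond_present",
    "carbon_gas": "diamond_absent",
    "vacuum": "diamond_absent",
}

def induced_utility(new_state):
    """Utility over the new ontology, induced via the bridge map."""
    return utility_old[bridge[new_state]]
```

The hard part, which this sketch assumes away, is where `bridge` comes from: specifying a principled procedure for constructing such a correspondence is the open problem the paper addresses.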
