Learning from Humans as an I-POMDP

Computer Science – Robotics

Scientific paper

The interactive partially observable Markov decision process (I-POMDP) is a recently developed framework which extends the POMDP to the multi-agent setting by including agent models in the state space. This paper argues for formulating the problem of an agent learning interactively from a human teacher as an I-POMDP, where the agent "programming" to be learned is captured by random variables in the agent's state space, all "signals" from the human teacher are treated as observed random variables, and the human teacher, modeled as a distinct agent, is explicitly represented in the agent's state space. The main benefits of this approach are: i. a principled action selection mechanism, ii. a principled belief update mechanism, iii. support for the most common teacher "signals", and iv. the anticipated production of complex beneficial interactions. The proposed formulation, its benefits, and several open questions are presented.
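To make the formulation concrete, the following is a minimal sketch (not the paper's implementation) of the idea described above: the "programming" to be learned and a coarse model of the human teacher together form an interactive state, teacher "signals" are treated as observations, and the belief over interactive states is updated in Bayesian fashion after each signal. All class names, fields, and the toy likelihoods below are illustrative assumptions, and the interactive state is held static for simplicity.

    # Sketch of a belief over interactive states and its update from teacher signals.
    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass(frozen=True)
    class InteractiveState:
        program: str   # candidate agent programming (e.g., which task policy is intended)
        teacher: str   # coarse model of the teacher (e.g., how reliable their feedback is)

    Belief = Dict[InteractiveState, float]  # probability distribution over interactive states

    def belief_update(belief: Belief,
                      action: str,
                      signal: str,
                      signal_likelihood: Callable[[InteractiveState, str, str], float]) -> Belief:
        """Bayesian update of the belief after taking `action` and observing a teacher `signal`."""
        unnormalized = {s: p * signal_likelihood(s, action, signal) for s, p in belief.items()}
        z = sum(unnormalized.values())
        return {s: p / z for s, p in unnormalized.items()} if z > 0 else belief

    # Toy usage: two candidate programs, two teacher models, uniform prior.
    states = [InteractiveState(prog, teach)
              for prog in ("fetch", "stack") for teach in ("reliable", "noisy")]
    prior: Belief = {s: 1.0 / len(states) for s in states}

    def likelihood(s: InteractiveState, action: str, signal: str) -> float:
        # A reliable teacher says "good" mostly when the action matches the intended program.
        match = (action == s.program)
        p_good = (0.9 if match else 0.1) if s.teacher == "reliable" else (0.6 if match else 0.4)
        return p_good if signal == "good" else 1.0 - p_good

    posterior = belief_update(prior, action="fetch", signal="good", signal_likelihood=likelihood)
    for s, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
        print(s, round(p, 3))

In this toy version the same belief serves both advertised roles: it is the quantity a principled action selection mechanism would score candidate actions against, and it is the object the belief update mechanism revises as teacher signals arrive.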
