Toward Self‐Motivated, Cognitive, Continually Planning Agents
Author(s) - Daphne Liu, Lenhart Schubert
Publication year - 2015
Publication title - Computational Intelligence
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.353
H-Index - 52
eISSN - 1467-8640
pISSN - 0824-7935
DOI - 10.1111/coin.12029
Subject(s) - introspection , computer science , planning , cognition , artificial intelligence , cognitive psychology , psychology , epistemology , philosophy
We present a flexible initial framework for defining self‐motivated, self‐aware agents in simulated worlds, planning continuously so as to maximize long‐term rewards. While such agents employ reasoned exploration of feasible sequences of actions and corresponding states, they also behave opportunistically and recover from failure, thanks to their continual plan updates and quest for rewards. Our framework allows for both specific and general (quantified) knowledge and for epistemic predicates such as knowing‐that and knowing‐whether. Because realistic agents have only partial knowledge of their world, the reasoning of the proposed agents uses a weakened closed‐world assumption; this has consequences for epistemic reasoning, in particular introspection. The planning operators allow for quantitative, gradual change and side effects such as the passage of time, changes in distances and rewards, and language production, using a uniform procedural attachment method. Question answering (involving introspection) and experimental runs are shown for our particular agent ME in a simple world, demonstrating the value of continual, deliberate, reward‐driven planning. Though the primary merit of agents definable in our framework is that they combine all of the aforementioned features, they can also be configured as single‐ or multiple‐goal‐seeking agents and as such perform comparably to some recent experimental agents.
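To make the abstract's main ideas concrete, the following Python sketch illustrates a continual, reward‐driven planning loop with epistemic predicates (knowing‐that, knowing‐whether) evaluated under a weakened closed‐world assumption. This is a minimal illustration under our own assumptions, not the authors' implementation: the Action and Agent classes, the exhaustive lookahead, and the toy fountain world are all hypothetical.

import itertools
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Action:
    name: str
    precond: frozenset   # facts that must be known true to act
    adds: frozenset      # facts made true by acting
    dels: frozenset      # facts made false by acting
    reward: float        # immediate reward for acting

@dataclass
class Agent:
    known_true: set = field(default_factory=set)
    known_false: set = field(default_factory=set)

    def knows_that(self, fact):
        # Epistemic predicate: explicit positive knowledge.
        return fact in self.known_true

    def knows_whether(self, fact):
        # Weakened closed-world assumption: a fact in neither set is
        # *unknown*, rather than assumed false as under the full CWA.
        return fact in self.known_true or fact in self.known_false

    def best_plan(self, actions, horizon=3):
        # Reasoned exploration of feasible action sequences up to
        # `horizon`, keeping the sequence with the greatest total reward.
        best, best_reward = (), float("-inf")
        seqs = itertools.chain.from_iterable(
            itertools.product(actions, repeat=k)
            for k in range(1, horizon + 1))
        for seq in seqs:
            state, total, feasible = set(self.known_true), 0.0, True
            for a in seq:
                if not a.precond <= state:
                    feasible = False
                    break
                state = (state - a.dels) | a.adds
                total += a.reward
            if feasible and total > best_reward:
                best, best_reward = seq, total
        return best

    def run(self, actions, steps=10):
        # Continual planning: commit only to the first step of the best
        # plan, then replan; this is what lets the agent act
        # opportunistically and recover when the world changes.
        for _ in range(steps):
            plan = self.best_plan(actions)
            if not plan:
                break
            a = plan[0]
            self.known_true = (self.known_true - a.dels) | a.adds
            self.known_false = (self.known_false | a.dels) - a.adds
            print("did", a.name)

# Hypothetical toy world: the agent starts out thirsty near a fountain.
walk = Action("walk-to-fountain", frozenset({"thirsty"}),
              frozenset({"at-fountain"}), frozenset(), -1.0)
drink = Action("drink", frozenset({"thirsty", "at-fountain"}),
               frozenset(), frozenset({"thirsty"}), 5.0)
me = Agent(known_true={"thirsty"})
me.run([walk, drink])
print(me.knows_that("thirsty"), me.knows_whether("thirsty"))  # False True

After the run, knows_that("thirsty") is False while knows_whether("thirsty") is True, because "thirsty" has moved to the known‐false set rather than merely dropping out of view; under a full closed‐world assumption this distinction would collapse, since every unlisted fact would be assumed false.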
