Opening the black box of trust: reasoning about trust models in a BDI agent
Publication Type: Journal Article
Source: Journal of Logic and Computation, Oxford University Press (In Press)
Abstract:
Trust models as thus far described in the literature can be seen as monolithic structures: a trust model is provided with a variety of inputs, performs its calculations, and produces a trust evaluation as output. The agent has no direct means of adapting its trust model to its needs in a given context. In this article, we propose a first step towards allowing an agent to reason about its trust model, by providing a method for incorporating a computational trust model into the agent's cognitive architecture. By reasoning about the factors that influence the trust calculation, the agent can effect changes in the computational process, thus proactively adapting its trust model. We give a declarative formalization of this system using a multi-context system, and we show that three contemporary trust models (BRS, ReGReT and ForTrust) can be incorporated into a BDI reasoning system using our framework.
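To make the idea concrete, here is a minimal Python sketch of the architecture the abstract describes: the trust model lives as one component (a "context") that the BDI layer can inspect and tune at run time. All names (TrustContext, BDIAgent, forgetting) are illustrative assumptions, and the beta-reputation-style calculation merely stands in for a BRS-like model; this is not the paper's formalization.

```python
# Illustrative sketch only: hypothetical names, not the article's formalization.
# Idea: the trust model is one "context" among the agent's mental contexts,
# so the BDI layer can inspect and adjust its parameters instead of treating
# the model as a monolithic black box.

from dataclasses import dataclass, field


@dataclass
class TrustContext:
    """A beta-reputation-style trust model (BRS-like) with a tunable
    forgetting factor the agent can adapt."""
    forgetting: float = 1.0                        # 1.0 = keep all old evidence
    evidence: dict = field(default_factory=dict)   # target -> [pos, neg]

    def observe(self, target: str, positive: bool) -> None:
        pos, neg = self.evidence.get(target, [0.0, 0.0])
        # Discount old evidence before recording the new observation.
        pos, neg = pos * self.forgetting, neg * self.forgetting
        if positive:
            pos += 1.0
        else:
            neg += 1.0
        self.evidence[target] = [pos, neg]

    def trust(self, target: str) -> float:
        pos, neg = self.evidence.get(target, [0.0, 0.0])
        # Expected value of Beta(pos + 1, neg + 1).
        return (pos + 1.0) / (pos + neg + 2.0)


@dataclass
class BDIAgent:
    trust_ctx: TrustContext

    def adapt_trust_model(self, environment_volatile: bool) -> None:
        # The agent reasons about a factor influencing the trust calculation
        # (how fast old evidence should fade) and changes it proactively.
        self.trust_ctx.forgetting = 0.8 if environment_volatile else 1.0

    def should_delegate(self, target: str, threshold: float = 0.6) -> bool:
        return self.trust_ctx.trust(target) >= threshold


agent = BDIAgent(TrustContext())
agent.adapt_trust_model(environment_volatile=True)   # fade stale evidence faster
agent.trust_ctx.observe("supplier_a", positive=True)
print(agent.should_delegate("supplier_a"))           # True: trust = 2/3 >= 0.6
```

An agent that believes its environment has become volatile can lower the forgetting factor so that stale evidence fades faster; this is the kind of proactive adaptation the article argues a monolithic trust model cannot support.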
A graded BDI agent model to represent and reason about preferences
Publication Type: Journal Article
Source: Artificial Intelligence, Elsevier, Volume 175, pp. 1468-1478 (2011)
Keywords: BDI agents; multi-context systems; uncertainty; bipolar preferences; fuzzy logic
Abstract:
In this research note, we introduce a graded BDI agent development framework, g-BDI for short, that allows agents to be built as multi-context systems reasoning about three fundamental and graded mental attitudes (i.e. beliefs, desires and intentions). We propose a sound and complete logical framework for these attitudes, together with some logical extensions that accommodate slightly different views on desires.
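As a rough illustration of what graded attitudes mean in practice, the sketch below derives a graded intention from a graded belief and a graded desire. The names and the choice of the product t-norm are illustrative assumptions; this is not the g-BDI logic itself.

```python
# Illustrative sketch only: hypothetical names, not the g-BDI formalization.
# Idea: beliefs, desires and intentions carry degrees in [0, 1]; the degree
# of intending an action combines the degree of desiring a goal with the
# degree of believing the action achieves it (here via the product t-norm).

beliefs = {("send_report", "boss_pleased"): 0.9}   # B(action achieves goal)
desires = {"boss_pleased": 0.7}                    # D(goal)


def intention_degree(action: str, goal: str) -> float:
    b = beliefs.get((action, goal), 0.0)
    d = desires.get(goal, 0.0)
    return b * d  # product t-norm; min() is another common choice


print(round(intention_degree("send_report", "boss_pleased"), 2))  # 0.63
```

The particular combination function could be swapped for min() or another t-norm; the point is that intentions inherit degrees from the beliefs and desires that support them, rather than being all-or-nothing.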
