Opening the black box of trust: reasoning about trust models in a BDI agent
Publication Type: Journal Article
Source: Journal of Logic and Computation, Oxford University Press (In Press)
Abstract:
Trust models as thus far described in the literature can be seen as monolithic structures: a trust model is provided with a variety of inputs, the model performs calculations, and a trust evaluation results as output. The agent has no direct method of adapting its trust model to its needs in a given context. In this article, we propose a first step in allowing an agent to reason about its trust model, by providing a method for incorporating a computational trust model into the cognitive architecture of the agent. By reasoning about the factors that influence the trust calculation, the agent can effect changes in the computational process, thus proactively adapting its trust model. We give a declarative formalization of this system using a multi-context system, and we show that three contemporary trust models, BRS, ReGReT and ForTrust, can be incorporated into a BDI reasoning system using our framework.
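The article's formalization is declarative, via a multi-context system; the sketch below is only a loose imperative illustration of the core idea it describes, namely exposing the factors of a trust calculation so that a BDI-style deliberation step can adjust them per context instead of treating the model as a black box. Every class, function and weight here is a hypothetical assumption, not the paper's model.

# Minimal sketch (not the article's formalization): a trust model decomposed
# into named factors so a BDI-style deliberation layer can inspect and
# re-weight them per context. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TrustModel:
    factors: dict   # factor name -> function scoring one interaction in [0, 1]
    weights: dict   # factor name -> weight the agent may reason about

    def evaluate(self, interactions):
        # Weighted average of per-factor average scores.
        total = sum(self.weights.values())
        return sum(
            self.weights[name] * sum(f(i) for i in interactions) / len(interactions)
            for name, f in self.factors.items()
        ) / total

def deliberate(model, context):
    """Meta-reasoning step: adapt the model rather than just querying it."""
    if context == "no_witnesses_available":
        # Proactively shift weight from reputation to direct experience.
        model.weights["reputation"] = 0.1
        model.weights["direct_experience"] = 0.9

# Toy usage: interactions are (own outcome, witness rating) pairs.
model = TrustModel(
    factors={"direct_experience": lambda i: i[0], "reputation": lambda i: i[1]},
    weights={"direct_experience": 0.5, "reputation": 0.5},
)
history = [(1.0, 0.2), (0.8, 0.3)]
print(model.evaluate(history))            # monolithic use of the model
deliberate(model, "no_witnesses_available")
print(model.evaluate(history))            # evaluation after contextual adaptation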
Self-Configuring Sensors for Uncharted Environments
Publication Type: Conference Paper
Source: The Second Workshop on Cooperative Games in Multiagent Systems (CoopMAS-2011). Workshop co-located with AAMAS-2011, Taipei, Taiwan (2011)
Keywords: agents; self-organisation
Reward-based region optimal quality guarantees
Weaving a Fabric of Socially Aware Agents
Sequential mixed auctions
Solving Sequential Mixed Auctions with Integer Programming
Trust Alignment: a Sine Qua Non of Open Multi-Agent Systems
Publication Type: Conference Paper
Source: On the Move to Meaningful Internet Systems: OTM 2011, Springer, Volume 7044, Hersonissos, Greece, p. 182-199 (2011)
Abstract:
In open multi-agent systems trust is necessary to improve cooperation by enabling agents to choose good partners. Most trust models work by taking other agents' communicated evaluations into account in addition to direct experiences. However, in an open multi-agent system other agents may use different trust models, and as such the evaluations they communicate are based on different principles. This article shows that trust alignment is a crucial tool in this communication. Furthermore, we show that trust alignment improves significantly if the description of the evidence, upon which a trust evaluation is based, is taken into account.
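As a loose illustration of the alignment idea (not the article's method), the sketch below has agent A learn, from shared past interactions, a mapping from agent B's communicated evaluations onto A's own scale, once ignoring and once using a description of the underlying evidence. All data, names and the "stakes" descriptor are made up for the example.

# Minimal trust-alignment sketch (illustrative only): translate B's
# communicated evaluations into A's scale using shared interactions.
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b, in pure Python."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Shared interactions: (B's evaluation, evidence descriptor, A's own evaluation).
shared = [(0.9, "low_stakes", 0.6), (0.9, "high_stakes", 0.9),
          (0.5, "low_stakes", 0.3), (0.5, "high_stakes", 0.5)]

# Alignment without evidence: one mapping for all of B's messages.
a, b = fit_linear([s[0] for s in shared], [s[2] for s in shared])

# Alignment with evidence: a separate mapping per evidence description.
per_evidence = {}
for desc in ("low_stakes", "high_stakes"):
    xs = [s[0] for s in shared if s[1] == desc]
    ys = [s[2] for s in shared if s[1] == desc]
    per_evidence[desc] = fit_linear(xs, ys)

# Translate a new message from B under both alignments.
msg, desc = 0.7, "low_stakes"
print("without evidence:", a * msg + b)
ae, be = per_evidence[desc]
print("with evidence:   ", ae * msg + be)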
Improving function filtering for computationally demanding DCOPs
Publication Type: Conference Paper
Source: Workshop on Distributed Constraint Reasoning at IJCAI 2011, Barcelona, p. 99-111 (2011)
Abstract:
In this paper we focus on solving DCOPs in computationally demanding scenarios. GDL (the Generalized Distributive Law algorithm) solves DCOPs optimally, but requires exponentially large cost functions, which makes it impractical in such settings. Function filtering is a technique that reduces the size of cost functions. We improve the effectiveness of function filtering so as to reduce the resources required to solve DCOPs optimally. As a result, we enlarge the range of problems solvable by algorithms that employ function filtering.
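The following sketch illustrates the general idea of function filtering in a minimizing DCOP; it is not the paper's improved algorithm. A tuple of a cost function can be discarded when its cost plus an optimistic bound on completing the assignment already exceeds the cost of a known complete solution, since no optimal solution can extend it; smaller functions mean smaller messages in GDL-style algorithms. The function names and bound values are illustrative assumptions.

# Minimal sketch of function filtering for a minimizing DCOP (illustrative).
def filter_function(func, lower_bound, upper_bound):
    """Keep only tuples that might still belong to an optimal solution.

    func:        dict mapping assignment tuples to costs
    lower_bound: function giving an optimistic completion cost for a tuple
    upper_bound: cost of the best complete solution known so far
    """
    return {t: c for t, c in func.items() if c + lower_bound(t) <= upper_bound}

# Toy cost function over two binary variables.
f = {(0, 0): 3, (0, 1): 7, (1, 0): 5, (1, 1): 9}

# Suppose the rest of the problem contributes at least 2 to any completion,
# and we already hold a complete solution of total cost 8.
filtered = filter_function(f, lambda t: 2, upper_bound=8)
print(filtered)   # {(0, 0): 3, (1, 0): 5} -- the tuples costing 7 and 9 are pruned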
