
Engineering trust alignment: Theory, method and experimentation

Publication Type:

Journal Article

Source:

International Journal of Human-Computer Studies, Elsevier, Volume 70, Issue 6, p.450-473 (2012)

URL:

http://www.sciencedirect.com/science/article/pii/S1071581912000353

Keywords:

channel theory; inductive logic programming

Abstract:

In open multi-agent systems trust models are an important tool for agents to achieve effective interactions. However, in these kinds of open systems, the agents do not necessarily use the same, or even similar, trust models, leading to semantic differences between trust evaluations in the different agents. Hence, to successfully use communicated trust evaluations, the agents need to align their trust models. We explicate that currently proposed solutions, such as common ontologies or ontology alignment methods, lead to additional problems and propose a novel approach. We show how the trust alignment can be formed by considering the interactions that agents share and describe a mathematical framework to formulate precisely how the interactions support trust evaluations for both agents. We
show how this framework can be used in the alignment process and explain how an alignment should be learned. Finally, we demonstrate this alignment process in practice, using a first-order regression algorithm to learn an alignment and test it in an example scenario.
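
As a loose illustration of the idea in this abstract (and not the paper's actual method), the fragment below replaces the first-order regression step with an ordinary propositional regressor over invented interaction features, purely to show the shape of the learning problem: mapping descriptions of shared interactions to the other agent's communicated trust values.

    # Illustration only: a propositional stand-in for the paper's first-order
    # regression step. Feature names and values are hypothetical.
    from sklearn.tree import DecisionTreeRegressor

    # Each shared interaction is described by features both agents can observe
    # (here, hypothetically: delay, price deviation, quality).
    shared_interactions = [
        [0.1, 0.0, 0.9],
        [0.8, 0.2, 0.4],
        [0.3, 0.1, 0.7],
        [0.9, 0.5, 0.2],
    ]

    # Trust evaluations the other agent communicated for those same interactions.
    their_trust = [0.85, 0.30, 0.70, 0.10]

    # Learn a mapping ("alignment") from interaction descriptions to the other
    # agent's trust values, so future communicated evaluations can be interpreted.
    alignment = DecisionTreeRegressor(max_depth=2).fit(shared_interactions, their_trust)

    # Interpret a newly communicated evaluation in terms of the shared interaction.
    print(alignment.predict([[0.2, 0.0, 0.8]]))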

Personalizing Communication about Trust

Publication Type:

Conference Paper

Source:

AAMAS '12: Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems, IFAAMAS, Valencia, Spain (2012)

Opening the black box of trust: reasoning about trust models in a BDI agent

Publication Type:

Journal Article

Source:

Journal of Logic and Computation, Oxford University Press (In Press)

Abstract:

Trust models as thus far described in the literature can be seen as a monolithic structure: a trust model is provided with a variety of inputs and the model performs calculations, resulting in a trust evaluation as output. The agent has no direct method of adapting its trust model to its needs in a given context. In this article, we propose a first step in allowing an agent to reason about its trust model, by providing a method for incorporating a computational trust model into the cognitive architecture of the agent. By reasoning about the factors that influence the trust calculation, the agent can effect changes in the computational process, thus proactively adapting its trust model. We give a declarative formalization of this system using a multi-context system and we show that three contemporary trust models, BRS, ReGReT and ForTrust, can be incorporated into a BDI reasoning system using our framework.
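
As a rough sketch of the idea in this abstract (and not the article's multi-context formalization), the toy class below shows a trust model whose internal factors are exposed so an agent can inspect and adjust them instead of treating the model as a black box. All factor names and weights are invented.

    # Toy sketch only; the article gives a declarative multi-context formalization.
    class TransparentTrustModel:
        def __init__(self):
            # Weights over the factors that influence the trust calculation
            # (hypothetical factor names).
            self.weights = {"direct_experience": 0.6, "reputation": 0.4}

        def evaluate(self, evidence):
            # Weighted aggregation of the available evidence per factor.
            return sum(self.weights[f] * evidence.get(f, 0.0) for f in self.weights)

        def beliefs(self):
            # Expose the model's internals so a BDI agent can reason about them.
            return dict(self.weights)

        def adapt(self, factor, weight):
            # The agent proactively changes how trust is computed in this context.
            self.weights[factor] = weight

    model = TransparentTrustModel()
    print(model.evaluate({"direct_experience": 0.9, "reputation": 0.2}))
    model.adapt("reputation", 0.1)  # e.g. discount gossip in this context
    print(model.evaluate({"direct_experience": 0.9, "reputation": 0.2}))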

Talking about Trust in Heterogeneous Multi-Agent Systems

Publication Type:

Conference Paper

Source:

IJCAI 2011, AAAI Press, Barcelona, Spain, p.2820-2821 (2011)

Abstract:

In heterogeneous multi-agent systems trust is necessary to improve interactions by enabling agents to choose good partners. Most trust models work by taking, in addition to direct experiences, other agents’ communicated evaluations into account. However, in an open MAS other agents may use different trust models and the evaluations they communicate are based on different principles: as such they are meaningless without some form of alignment. My doctoral research gives a formal definition of this problem and proposes two methods of achieving an alignment.

Why does trust need aligning?

Publication Type:

Conference Paper

Authors:

Andrew Koster

Source:

13th Workshop on Trust in Agents Societies at AAMAS 2010, Toronto, Canada, p.125-136 (2010)

A complete fuzzy logical system to deal with trust management systems

Publication Type:

Journal Article

Source:

Fuzzy Sets and Systems, Volume 159, Issue 10, p.1191-1207 (2008)

Keywords:

Modal Fuzzy Logic

An Interaction-oriented Model of Trust Alignment

Publication Type:

Conference Paper

Source:

Seventh European Workshop on Multi-Agent Systems (EUMAS09), Cyprus (2009)

Abstract:

We present a mathematical framework and an implementation of a proof of concept for communicating about trust in terms of interactions. We argue that sharing an ontology about trust is not enough and that interactions are the building blocks that all trust and reputation models use to form their evaluations. Thus, a way of talking about these interactions is essential to gossiping in open heterogeneous environments. We give an overview of the formal framework we propose for aligning trust and discuss an example implementation, which uses inductive learning methods to form a trust alignment. We highlight the strengths and weaknesses of this approach.

Towards an inductive algorithm for learning trust alignment

Publication Type:

Conference Paper

Source:

Student Session - European Agent Systems Summer School, Universität Bayreuth, Volume 47, Torino, Italy, p.5-11 (2009)

Abstract:

Knowing which agents to trust is an important problem in open multi-agent systems. A way to help resolve this problem is by allowing agents to relay information about trust to each other. We argue that trust is a subjective phenomenon and therefore needs aligning. We present a mathematical framework for communicating about trust in terms of interactions. Based on this framework, we present an algorithm that uses clustering and inductive logic programming techniques to align agents' trust models.
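
As a rough sketch of the clustering step mentioned in this abstract (the inductive logic programming part is not reproduced here), the fragment below groups shared interactions and summarizes the other agent's communicated trust per group; all feature values are invented.

    # Illustration only: KMeans plus a per-cluster average stands in for the
    # clustering and the induced alignment rules described in the paper.
    import numpy as np
    from sklearn.cluster import KMeans

    interactions = np.array([
        [0.1, 0.9], [0.2, 0.8],   # e.g. fast, high-quality interactions
        [0.9, 0.1], [0.8, 0.2],   # e.g. slow, low-quality interactions
    ])
    their_trust = np.array([0.9, 0.8, 0.2, 0.1])

    # Group similar interactions, then summarize the other agent's communicated
    # trust per group, giving a coarse alignment between the two perspectives.
    clusters = KMeans(n_clusters=2, n_init=10).fit(interactions)
    for c in range(2):
        mask = clusters.labels_ == c
        print(f"cluster {c}: mean communicated trust = {their_trust[mask].mean():.2f}")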

An Interaction-oriented Model of Trust Alignment

Publication Type:

Conference Paper

Source:

13th Conference of the Spanish Association for Artificial Intelligence, CAEPIA 2009, Sevilla, Spain, p.655-664 (2009)

ISBN:

978-84-692-6424-9

Abstract:

We present a mathematical framework and an implementation of a proof of concept for communicating about trust in terms of interactions. We argue that sharing an ontology about trust is not enough and that interactions are the building blocks that all trust and reputation models use to form their evaluations. Thus, a way of talking about these interactions is essential to gossiping in open heterogeneous environments. We give a brief overview of the formal framework we propose for aligning trust and discuss an example implementation, which uses inductive learning methods to form a trust alignment. We highlight the strengths and weaknesses of this approach.
