Engineering trust alignment: Theory, method and experimentation
Publication Type:
Journal Article
Source:
International Journal of Human-Computer Studies, Elsevier, Volume 70, Issue 6, p.450-473 (2012)
URL:
http://www.sciencedirect.com/science/article/pii/S1071581912000353
Keywords:
channel theory; inductive logic programming
Abstract:
In open multi-agent systems trust models are an important tool for agents to achieve effective interactions. However, in these kinds of open systems, the agents do not necessarily use the same, or even similar, trust models, leading to semantic differences between trust evaluations in the different agents. Hence, to successfully use communicated trust evaluations, the agents need to align their trust models. We explicate that currently proposed solutions, such as common ontologies or ontology alignment methods, lead to additional problems and propose a novel approach. We show how the trust alignment can be formed by considering the interactions that agents share and describe a mathematical framework to formulate precisely how the interactions support trust evaluations for both agents. We
show how this framework can be used in the alignment process and explain how an alignment should be learned. Finally, we demonstrate this alignment process in practice, using a first-order regression algorithm, to learn an alignment and test it in an example scenario.
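To make the idea concrete, here is a minimal, purely illustrative sketch (not the paper's actual framework or algorithm): two agents score the same shared interactions with different trust models, and the receiving agent learns a least-squares alignment from the sender's communicated scale to its own. All names, models, and numbers below are hypothetical.

```python
# Hypothetical illustration of trust alignment over shared interactions.
# Each shared interaction is described by an outcome pair (quality, timeliness).
interactions = [(0.9, 0.8), (0.4, 0.6), (0.7, 0.2), (0.2, 0.1), (0.8, 0.9)]

def trust_a(q, t):
    # Receiver's trust model: weights quality heavily, scores in [0, 1].
    return 0.8 * q + 0.2 * t

def trust_b(q, t):
    # Sender's trust model: averages the outcomes, scores on a 0-10 scale.
    return 5.0 * (q + t)

# Evaluations of the SAME interactions under the two different models.
xs = [trust_b(q, t) for q, t in interactions]  # sender's communicated scores
ys = [trust_a(q, t) for q, t in interactions]  # receiver's own scores

# Ordinary least squares for y = a*x + b, fitted on the shared interactions.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
    / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

def align(communicated_score):
    # Translate a newly communicated score into the receiver's own scale.
    return a * communicated_score + b
```

Because the two models weight the underlying interaction outcomes differently, the learned mapping is only approximate; the paper's point is that grounding the alignment in shared interactions (rather than in trust vocabulary alone) is what makes such a translation learnable at all.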
Trust Alignment: a Sine Qua Non of Open Multi-Agent Systems
Publication Type:
Conference Paper
Source:
On the Move to Meaningful Internet Systems: OTM 2011, Springer, Volume 7044, Hersonissos, Greece, p.182-199 (2011)
Abstract:
In open multi-agent systems trust is necessary to improve cooperation by enabling agents to choose good partners. Most trust models work by taking, in addition to direct experiences, other agents’ communicated evaluations into account. However, in an open multi-agent system other agents may use different trust models and as such the evaluations they communicate are based on different principles. This article shows that trust alignment is a crucial tool in this communication. Furthermore, we show that trust alignment improves significantly if the description of
the evidence, upon which a trust evaluation is based, is taken into account.
Talking about Trust in Heterogeneous Multi-Agent Systems
Publication Type:
Conference Paper
Source:
IJCAI 2011, AAAI Press, Barcelona, Spain, p.2820-2821 (2011)
Abstract:
In heterogeneous multi-agent systems trust is necessary to improve interactions by enabling agents to choose good partners. Most trust models work by taking, in addition to direct experiences, other agents’ communicated evaluations into account. However, in an open MAS other agents may use different trust models and the evaluations they communicate are based on different principles: as such they are meaningless without some form of alignment. My doctoral research gives a formal definition of this problem and proposes two methods of achieving an alignment.
An Interaction-oriented Model of Trust Alignment
Publication Type:
Conference Paper
Source:
Seventh European Workshop on Multi-Agent Systems (EUMAS09), Cyprus (2009)
Abstract:
We present a mathematical framework and an implementation of a proof of concept for communicating about trust in terms of interactions. We argue that sharing an ontology about trust is not enough and that interactions are the building blocks that all trust- and reputation models use to form their evaluations. Thus, a way of talking about these interactions is essential to gossiping in open heterogeneous environments. We give an overview of the formal framework we propose for aligning trust and discuss an example implementation, which uses inductive learning methods to form a trust alignment. We highlight the strengths and weaknesses of this approach.
Towards an inductive algorithm for learning trust alignment
Publication Type:
Conference Paper
Source:
Student Session - European Agent Systems Summer School, Universität Bayreuth, Volume 47, Torino, Italy, p.5-11 (2009)
Abstract:
Knowing which agents to trust is an important problem in open multi-agent systems. A way to help resolve this problem is by allowing agents to relay information about trust to each other. We argue trust is a subjective phenomenon and therefore needs aligning. We present a mathematical framework for communicating about trust in terms of interactions. Based on this framework we present an algorithm based on clustering and inductive logic programming techniques to align agents' trust models.
An Interaction-oriented Model of Trust Alignment
Publication Type:
Conference Paper
Source:
13th Conference of the Spanish Association for Artificial Intelligence, CAEPIA 2009, Sevilla, Spain, p.655-664 (2009)
ISBN:
978-84-692-6424-9
Abstract:
We present a mathematical framework and an implementation of a proof of concept for communicating about trust in terms of interactions. We argue that sharing an ontology about trust is not enough and that interactions are the building blocks that all trust- and reputation models use to form their evaluations. Thus, a way of talking about these interactions is essential to gossiping in open heterogeneous environments. We give a brief overview of the formal framework we propose for aligning trust and discuss an example implementation, which uses inductive learning methods to form a trust alignment. We highlight the strengths and weaknesses of this approach.
Evaluation of the SIFT Object Recognition Method in Mobile Robots
Publication Type:
Conference Paper
Source:
12th International Conference of the ACIA, IOS Press, Volume 202, Cardona, Spain, p.9-18 (2009)
Keywords:
Computer Vision; Object Recognition; Mobile Robots
Abstract:
General object recognition in mobile robots is of primary importance for enhancing the representation of the environment that robots use in their reasoning processes. We therefore contribute to reducing this gap by evaluating the SIFT Object Recognition method on a challenging dataset, focusing on issues relevant to mobile robotics. The method proved resistant to the working conditions of robotics, but mainly for well-textured objects.
Reaching semantic agreements through interaction
Publication Type:
Conference Paper
Source:
4th AIS SigPrag Int. Pragmatic Web Conference Track, ICPW'09, at the 5th Int. Conference on Semantic Systems, i-Semantics'09, Verlag der Technischen Universität Graz, Graz, Austria, p.726-737 (2009)
Keywords:
interaction model; alignment protocol; alignment mechanism
Abstract:
We address the complex problem of semantic heterogeneity in multiagent communication by looking at semantics related to interaction. Our approach takes the state of the interaction in which agents are engaged as the basis on which the semantic alignment rests. In this paper we describe an implementation of this technique and provide experimental results on interactions of varying complexity.
