Title: Engineering trust alignment: Theory, method and experimentation
Publication Type: Journal Article
Year of Publication: 2012
Authors: Koster A, Schorlemmer M, Sabater-Mir J.
Journal: International Journal of Human-Computer Studies
Volume: 70
Pages: 450-473
Publisher: Elsevier
Keywords: channel theory, inductive logic programming
Abstract:

In open multi-agent systems, trust models are an important tool for agents to achieve effective interactions. However, in these kinds of open systems, the agents do not necessarily use the same, or even similar, trust models, leading to semantic differences between the trust evaluations of different agents. Hence, to successfully use communicated trust evaluations, the agents need to align their trust models. We show that currently proposed solutions, such as common ontologies or ontology alignment methods, introduce additional problems, and we propose a novel approach. We show how a trust alignment can be formed by considering the interactions the agents share, and describe a mathematical framework that formulates precisely how these interactions support the trust evaluations of both agents. We then explain how this framework is used in the alignment process and how an alignment should be learned. Finally, we demonstrate this alignment process in practice, using a first-order regression algorithm to learn an alignment, and test it in an example scenario.

URL: http://www.sciencedirect.com/science/article/pii/S1071581912000353