In open multi-agent systems, trust models are an important tool for agents to achieve effective interactions. However, in such open systems the agents do not necessarily use the same, or even similar, trust models, which leads to semantic differences between the trust evaluations of different agents. Hence, to successfully use communicated trust evaluations, agents need to align their trust models. In this seminar I will argue that currently proposed solutions, such as common ontologies or ontology alignment methods, lead to additional problems, and that a novel approach is required. I will first briefly present a formal framework for aligning trust on the basis of the interactions agents share. I will then describe an algorithm, based on this framework, that uses inductive learning to accomplish this alignment. Finally, I will present some preliminary results from the implementation.
