Enabling collaboration between agents with different backgrounds is one of the objectives of open and heterogeneous multiagent systems. Such systems can bring together participants with different knowledge, abilities, and access to resources, creating a rich, open environment. For this collaboration to succeed, it needs to deal with the different kinds of heterogeneity that can exist between agents. An important aspect of this heterogeneity is the linguistic one. To coordinate their collaborative actions, agents need to communicate with each other, and to ensure meaningful communication it is essential that they use the same vocabulary (and understand it in the same way).
The problem of achieving common understanding between agents that use different vocabularies has mainly been addressed by techniques that assume the existence of shared external elements, such as a meta-language, a physical environment, or semantic resources. These elements are not always available and, even when they are, they may yield alignments that are not useful for the particular types of interaction agents need to perform, as they are not contextualized.
In this dissertation we investigate a different approach to vocabulary alignment. We consider agents that only share knowledge of how to perform a task, given by the specification of an interaction protocol. We study the idea of interaction-based vocabulary alignment, a framework that lets agents learn a vocabulary alignment from the experience of interacting, by observing what works and what does not in a conversation. To give an intuition, consider someone trying to order a coffee in a foreign country. Even if there is no common language, the interaction is likely to succeed, since it consists of simple, well-understood steps that the interlocutors agree on. Moreover, if our subject repeats the coffee-ordering interaction many times, she is likely to end up learning how it is performed in the foreign language. While humans are very good at adapting in this way, this idea has not been explored in depth for the case of artificial agents.
Throughout this dissertation we study how agents can learn a new vocabulary when they follow specifications that use different formalizations. Concretely, we consider interaction-based vocabulary alignment for protocols specified with finite state machines, with logical constraints, and with a social semantics based on commitments. For each case, we provide techniques to infer semantic information from interacting, or from observing interactions between other agents. We also analyze how these techniques can be combined with external alignments obtained in a different way. When these external alignments are not guaranteed to be correct, our techniques provide ways of repairing them.
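To illustrate the finite-state-machine case, the following is a minimal sketch (not the dissertation's actual algorithm) of how an agent might accumulate alignment evidence from successful conversations. All names (`PROTOCOL_A`, `B_VOCAB`, `Aligner`) are hypothetical: agent A knows its own FSM protocol, observes which foreign word was uttered at each state and which transition the conversation then took, and counts the local labels compatible with that transition as candidate meanings.

```python
from collections import defaultdict

# Agent A's toy coffee-ordering protocol: state -> {local label: next state}.
PROTOCOL_A = {
    "start":   {"greet": "greeted"},
    "greeted": {"order": "ordered", "leave": "end"},
    "ordered": {"pay": "end"},
}

# Agent B's private vocabulary for the same protocol (unknown to A).
B_VOCAB = {"greet": "hola", "order": "cafe", "leave": "adios", "pay": "pagar"}

class Aligner:
    """Accumulates co-occurrence evidence between foreign and local words."""

    def __init__(self, protocol):
        self.protocol = protocol
        # counts[foreign_word][local_word] = number of supporting observations
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, state, foreign_word, next_state):
        # Any local label whose transition matches the observed state change
        # is a candidate meaning for the foreign word.
        for local_word, dest in self.protocol.get(state, {}).items():
            if dest == next_state:
                self.counts[foreign_word][local_word] += 1

    def best_alignment(self, foreign_word):
        candidates = self.counts[foreign_word]
        return max(candidates, key=candidates.get) if candidates else None

def run_conversation(aligner, intended_path):
    """Simulate B performing a successful run; A observes B's foreign words."""
    state = "start"
    for local_word in intended_path:
        next_state = PROTOCOL_A[state][local_word]
        aligner.observe(state, B_VOCAB[local_word], next_state)
        state = next_state

aligner = Aligner(PROTOCOL_A)
run_conversation(aligner, ["greet", "order", "pay"])
run_conversation(aligner, ["greet", "leave"])
print(aligner.best_alignment("cafe"))  # -> order
```

In this toy protocol each observed transition uniquely identifies the intended label, so the alignment emerges after a couple of conversations; in general, ambiguous states require evidence from many interactions before the counts single out a mapping, which is why learning through interaction alone is comparatively slow.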
For each type of specification we evaluate the proposed methods by simulating their use on a set of artificial, randomly generated protocols. This provides a general evaluation that does not suffer from the biases of particular datasets. We then study how to apply our methods to an empirical dataset of human-crafted instructional protocols, obtained from the WikiHow website. We discuss the challenges of using our methods on protocols with natural-language labels, and we show how the resulting method improves on the performance obtained with a well-known dictionary.
In summary, we present a vocabulary alignment method that is context-specific, lightweight, cheap, and independent of external resources. Agents can use it as a low-profile way of learning the vocabulary used in particular situations. We show that our method alone allows agents to find a useful alignment, although slowly. In combination with other resources, our technique provides not only a way of learning alignments faster, but also a way of obtaining information that may be difficult to find otherwise (about the use of words in context) and of repairing external alignments.