Multiagent Learning

Cooperation and Learning among Case-based Reasoning Agents

"Communication and Learning in a Wide World"

  Currently, multiagent systems are implemented using the Noos Agent Platform, an agent programming environment that enables agents written in the Noos representation language to communicate, cooperate, and negotiate across the network in a FIPA-compliant way.

Article on case bartering for multiagent learning

A Bartering Approach to Improve Multiagent Learning
(Available online [PDF file])
Santiago Ontañon and Enric Plaza.
Multiagent systems offer a new paradigm to organize AI applications. We focus on the application of Case-Based Reasoning to multiagent systems. CBR offers the individual agents the capability of learning autonomously from experience. In this paper we present a framework for collaboration among agents that use CBR. We present explicit strategies for case bartering that address the issue of agents having a biased view of the data. The outcome of bartering is an improvement of both individual agent performance and overall multiagent system performance that equals the ideal situation in which all agents have an unbiased view of the data. We also present empirical results illustrating the robustness of the case bartering process for several configurations of the multiagent system and for three different CBR techniques.

To be published in Int. Conf. Autonomous Agents and Multiagent Systems (AAMAS'02). ACM Press.
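The bartering idea described in the abstract can be illustrated with a toy exchange. This is a minimal sketch under stated assumptions: the function names, the bias measure (distance to the overall class distribution), and the "swap one surplus case each" offer are illustrative choices, not the paper's actual protocol.

```python
from collections import Counter

def distribution(case_base):
    """Class distribution of a case base of (problem, label) pairs."""
    counts = Counter(label for _, label in case_base)
    n = len(case_base)
    return {c: k / n for c, k in counts.items()}

def bias(case_base, target):
    """How far an agent's local class distribution is from the overall,
    unbiased one: sum of absolute per-class differences."""
    local = distribution(case_base)
    return sum(abs(local.get(c, 0.0) - target.get(c, 0.0))
               for c in set(local) | set(target))

def barter_round(base_a, base_b, target):
    """One bartering step: each agent offers one case of its most
    over-represented class; the swap goes through only if it reduces the
    combined bias, mirroring the idea that both sides gain from a trade."""
    def surplus_case(base):
        local = distribution(base)
        # class most over-represented relative to the target distribution
        cls = max(local, key=lambda c: local[c] - target.get(c, 0.0))
        return next(case for case in base if case[1] == cls)

    offer_a, offer_b = surplus_case(base_a), surplus_case(base_b)
    new_a = [c for c in base_a if c is not offer_a] + [offer_b]
    new_b = [c for c in base_b if c is not offer_b] + [offer_a]
    if (bias(new_a, target) + bias(new_b, target)
            < bias(base_a, target) + bias(base_b, target)):
        return new_a, new_b
    return base_a, base_b
```

For example, two agents each holding mostly one class trade one case and both move toward the unbiased distribution; an unprofitable trade is simply rejected.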

Article on case retention policies for multiagent learning

Collaboration Strategies to Improve Multiagent Learning
(Available online [PDF file])
Santiago Ontañon and Enric Plaza.
In this paper we present a framework for collaboration among agents that use CBR. We present explicit strategies for case retention in which the agents take into consideration that they are not learning in isolation but in a multiagent system. We also present case bartering as an effective strategy when the agents have a biased view of the data. The outcome of both case retention and bartering is an improvement of individual agent performance and of overall multiagent system performance. We also present empirical results comparing all the proposed strategies.

To be published in Machine Learning: ECML 2002 (to appear in Lecture Notes in Artificial Intelligence, Springer-Verlag).
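One way to picture retention that is aware of the multiagent setting is the sketch below. It is illustrative only: the 1-NN classifier and the "first agent that misclassifies keeps the case" rule are assumptions, not the paper's actual retention strategies.

```python
def nearest_label(case_base, problem, dist):
    """1-NN classification over a private case base of (problem, solution) pairs."""
    return min(case_base, key=lambda case: dist(case[0], problem))[1]

def multiagent_retain(agents, new_case, dist):
    """Retention sketch: the new case is kept by the first agent whose own
    case base would misclassify it; if every agent already solves it
    correctly, nobody retains it, avoiding redundant copies system-wide.
    This is the sense in which agents do not retain 'in isolation'."""
    problem, solution = new_case
    for case_base in agents:
        if nearest_label(case_base, problem, dist) != solution:
            case_base.append(new_case)
            return True
    return False
```

Under an isolated-retention rule every agent would test only its own competence; here a case that the system as a whole already covers is discarded.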

Article on proactive learning

Learning When to Collaborate among Learning Agents.
(Available online [PDF file])
Santiago Ontañon and Enric Plaza.
Multiagent systems offer a new paradigm where learning techniques can be useful. We focus on the application of lazy learning to multiagent systems in which each agent learns individually and also learns when to cooperate in order to improve its performance. We show some experiments in which CBR agents use an adapted version of LID (Lazy Induction of Descriptions), a CBR method for classification. We show that a collaboration policy among agents (called Bounded Counsel) improves the agents' performance with respect to their isolated performance. Later, we use decision tree induction and discretization techniques to learn how to tune the Bounded Counsel policy to a specific multiagent system---always preserving the individual autonomy of agents and the privacy of their case bases. Empirical results concerning accuracy, cost, and robustness with respect to the number of agents and case base size are presented. Moreover, comparisons with the Committee collaboration policy (where all agents always collaborate) are also presented.

Published in L. De Raedt, P. Flach (Eds.) Machine Learning: ECML 2001. Lecture Notes in Artificial Intelligence 2167, p. 394-405. Springer-Verlag.
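A minimal sketch of the Bounded Counsel idea follows. It is illustrative only: here the decision of when to ask for counsel is a fixed endorsement threshold, whereas the paper learns that decision by decision-tree induction, and the agent signatures are assumptions.

```python
from collections import Counter

def bounded_counsel(own, peers, problem, threshold=0.8):
    """The agent first solves the problem alone; only if its endorsement of
    that solution is weak does it ask its peers and take the
    endorsement-weighted majority.  Agents exchange only solutions and
    endorsements, never cases, so each case base stays private."""
    label, endorsement = own(problem)
    if endorsement >= threshold:
        return label                      # confident: answer in isolation
    votes = Counter({label: endorsement})
    for peer in peers:
        peer_label, peer_endorsement = peer(problem)
        votes[peer_label] += peer_endorsement
    return votes.most_common(1)[0][0]
```

The bound is what distinguishes this from a committee: counsel is requested only when the agent's own answer is insufficiently supported.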

Article on Cooperation Policies for Case-Based Reasoning Agents

Ensemble Case-based Reasoning: Collaboration Policies for Multiagent Cooperative CBR.
(Available online [PDF file])
Enric Plaza and Santiago Ontañon.
Multiagent systems offer a new paradigm to organize AI applications. Our goal is to develop techniques to integrate CBR into applications that are developed as multiagent systems. CBR offers the multiagent systems paradigm the capability of autonomously learning from experience. In this paper we present a framework for collaboration among agents that use CBR and some experiments illustrating the framework. We focus on three collaboration policies for CBR agents: Peer Counsel, Bounded Counsel, and Committee. The experiments show that the CBR agents improve their individual performance by collaborating with other agents without compromising the privacy of their own cases. We analyze the three policies concerning accuracy, cost, and robustness with respect to the number of agents and case base size.

Published in Case-Based Reasoning Research and Development: ICCBR 2001. Lecture Notes in Artificial Intelligence 2080, p. 437-451. Springer-Verlag.
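The Committee policy can be sketched as plurality voting over agents that each answer from a private case base. The 1-nearest-neighbour agent over numeric problems below is an assumed stand-in for the LID method the papers actually use.

```python
from collections import Counter

def make_cbr_agent(case_base):
    """A toy CBR agent: 1-nearest-neighbour over a private case base of
    (problem, solution) pairs with numeric problems (an assumption for
    illustration)."""
    def agent(problem):
        nearest = min(case_base, key=lambda case: abs(case[0] - problem))
        return nearest[1]
    return agent

def committee(agents, problem):
    """Committee collaboration policy: every agent always votes with the
    solution retrieved from its own case base; the ensemble answer is the
    plurality vote, so no agent ever reveals its cases."""
    votes = Counter(agent(problem) for agent in agents)
    return votes.most_common(1)[0][0]
```

Because only solutions cross agent boundaries, the ensemble effect is obtained while each case base remains private, as the abstract emphasizes.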


noos@iiia.csic.es

http://www.iiia.csic.es/Projects/FedLearn/MAL.html