Foundations of Analogical Inference and their Applications to
Symbolic Reasoning and Learning
Eva Armengol, Beatriz López, Ramon López de Màntaras, Enric Plaza i Cervera
IIIA - Artificial Intelligence Research Institute (CSIC)
Keywords: Case-based reasoning, Reflective languages, Analogical reasoning,
Introspective learning, Multistrategy learning, Cognitive architectures, Artificial Intelligence.
SUMMARY: A theoretical study and implementation of an integrated cognitive
architecture (ICA) based on analogical inference is proposed. The theoretical
study will improve the understanding, comparison and use of ICAs, especially
those integrating reasoning with learning. It will be based on the analysis of
learning as a kind of introspective reasoning in the ICA. Different learning
methods (inductive, chunking, and case-based) will be analysed with respect to
the self-model the ICA needs in order to integrate them. This approach allows
us to give a clear semantics to the integration of different reasoning modules
in the ICA and to improve the understanding of ICA foundations. The approach
also supports an implementation using introspective languages. The practical
proposal is to implement an analogy-based ICA by extending the Noos
introspective language developed at IIIA.
- Contents of Progress Report
- 1. The reflective representation language Noos
- 1.1 Inference and reflection
- 1.2 Feature terms
- 1.3 The Noos WWW interface
- 2. New learning techniques
- 2.1 Case-based reasoning
- 2.2 Inductive learning
- 2.3 Compilative learning
- 3. Applications
The summary above is the original one in the proposal of the Analog
project. The main goal of the Analog Project was to study and implement a
computational framework for integration of different machine learning (ML)
methods into a problem solving system. We called this computational framework
an ICA, integrated cognitive architecture (related architectures are described
in the Survey of Cognitive
and Agent Architectures). The Analog project
hypothesis was that ML methods could be integrated if they were viewed as
introspective methods in a representation language. That is to say, a learning
method is one that knows, based on its past behavior, how to modify the
current contents of a problem solving system in order to improve its future behavior.
In order to do this the system needs to be able to inspect its own past
behavior (introspection) and create new improved knowledge structures in the
language of the problem solving system. The approach taken was to start with
the representation language Noos, developed at our Institute, which integrates
analogical reasoning (also called CBR --case-based reasoning) and learning,
and then to extend Noos as needed to integrate other machine learning
methods. Summarizing, the target was a problem solving architecture capable of
integrating multistrategy learning. In this progress report we will discuss the
current definition and implementation of the Noos language, the integration of
different ML methods, the development of new learning techniques and their
implementation, and some application systems developed within the project.
1. The reflective representation language Noos
During the project, a new version of Noos was implemented as a result of the
study of requirements for new inductive and compilative (EBL) learning
techniques, the experience gained in implementing application systems, and
formal studies about Noos as a representation language and about reflection in
AI systems --this last aspect was carried out in cooperation with the ARREL
project, one of whose partners was our Institute. We have developed a formal
model of the reflective architecture of Noos using Descriptive Dynamic Logic
(DDL). DDL is a framework for describing reflective systems that was started
in cooperation with the ARREL project (TIC 92-579-C02-01) and later continued.
1.1 Inference and reflection
The main extension of Noos's expressive power has been the provision of three
new sorts of metalevel inference. It is now possible to ask whether there is
any reachable solution for a task (the P provability operator), whether a
given solution is currently known (the K epistemic operator), and for all the
possible solutions to a task that satisfy the current constraints. These types
of inference are used in default reasoning and in reasoning about preferences.
For instance, default reasoning is modelled as follows: if an instance cannot
be proven to be an exception (using the P operator), the default applies.
Reasoning about preferences provides a declarative way to introduce control
constructs into Noos programs. Preferences are modelled as partial orders
over the sets of alternatives upon which a Noos program has to make a
decision. Reasoning about preferences allows a declarative specification of
case retrieval and selection using domain knowledge in case-based reasoning
(CBR). Moreover, higher-level preferences allow meta-level reasoning upon
conflicting criteria and have been used to model legal problems that involve
non-monotonic reasoning.
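To illustrate the idea of preferences as partial orders, here is a minimal
Python sketch (this is not the Noos language; the function names and the toy
case base are invented for illustration): a preference relation over a set of
alternatives is used to select the maximal, i.e. undominated, candidates.

```python
# Hypothetical sketch (not the Noos API): a preference modelled as a
# strict partial order over alternatives, used to pick maximal candidates.

def maximal(alternatives, prefers):
    """Return the alternatives not dominated under the partial order.

    `prefers(a, b)` is True when `a` is strictly preferred to `b`.
    """
    return [a for a in alternatives
            if not any(prefers(b, a) for b in alternatives if b is not a)]

# Toy domain: prefer more recent cases; ties are left unresolved.
cases = [{"id": "c1", "year": 1993}, {"id": "c2", "year": 1995},
         {"id": "c3", "year": 1995}]
best = maximal(cases, lambda a, b: a["year"] > b["year"])
# Both 1995 cases are maximal; a higher-level preference could break the tie.
```

Since a partial order need not single out one winner, several maximal
alternatives may remain, which is where higher-level preferences over
conflicting criteria come into play.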
1.2 Feature terms
The introduction of ML methods based on induction shed new light on the
object-centered basis of Noos. It turned out that the subsumption relation
among Noos object descriptions that we had already defined was very akin to
those defined for psi-terms (developed by Aït-Kaci to model structured data
in Prolog-like languages) and for feature structures (Carpenter). This
led to the modelling of Noos object descriptions as feature terms.
Consequently, Noos descriptions form a lattice with respect to the subsumption
relation (the general-to-specific ordering among descriptions). Since
induction methods can be viewed as a search process among descriptions that
generalize (subsume) a set of training examples, it was clear that inductive
methods in Noos could be modelled and implemented as methods that follow
different strategies in searching the space of possible Noos descriptions in
the subsumption lattice. In fact, the subsumption relation was the main
construct used in the first version of Noos to retrieve cases for case-based
reasoning (CBR). Because of this, new inductive learning methods could be
integrated seamlessly into the Noos framework alongside the existing
case-based learning.
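The subsumption ordering and its companion operation, antiunification, can be
sketched in Python if we approximate feature terms as nested dictionaries
(leaves are atomic values). This is an illustration only, not the actual Noos
representation, and the sponge attributes are invented:

```python
# Illustrative sketch: feature terms approximated as nested dicts.

def subsumes(general, specific):
    """True when `general` subsumes `specific` (general-to-specific order)."""
    if isinstance(general, dict):
        return (isinstance(specific, dict) and
                all(k in specific and subsumes(v, specific[k])
                    for k, v in general.items()))
    return general == specific

def antiunify(t1, t2):
    """Most specific generalization (AU): keep shared features that agree."""
    if isinstance(t1, dict) and isinstance(t2, dict):
        au = {k: antiunify(t1[k], t2[k]) for k in t1.keys() & t2.keys()}
        return {k: v for k, v in au.items() if v is not None}
    return t1 if t1 == t2 else None  # disagreeing leaves generalize away

sponge1 = {"form": "tube", "color": "orange", "habitat": {"depth": "deep"}}
sponge2 = {"form": "tube", "color": "white", "habitat": {"depth": "deep"}}
common = antiunify(sponge1, sponge2)
# common == {"form": "tube", "habitat": {"depth": "deep"}}
assert subsumes(common, sponge1) and subsumes(common, sponge2)
```

The key property an inductive method relies on is visible here: the AU of a
set of examples subsumes every example, so generalization is a walk upward in
the subsumption lattice.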
1.3 The Noos WWW interface
The current implementation of Noos is an interpreter written in Common Lisp
that runs on Apple platforms (using MCL) and Unix platforms (using CMU Lisp).
The Apple version has a GUI that the Unix version lacks. However, we have
developed WebNoos, a World-Wide-Web interface to Noos. WebNoos allows any user
with a web browser (like Netscape or Mosaic) to interact with a Noos system
from anywhere on the Internet. WebNoos is a new concept of GUI that uses
hypertext instead of windows. WebNoos supports the same functionality as the
Apple windowing interface, with the advantage of being platform-independent
and using an already familiar visualization system. Since the HTTP protocol
is stateless and Noos interaction is session-oriented, WebNoos keeps track of
user interactions to provide stateful computation without any need to modify
the HTTP protocol, the server, or the client browser.
2. New learning techniques
Although the focus of the project was on a framework for integrating multiple
learning methods, there have also been results in developing new learning
techniques. In particular, the Noos framework is currently able to integrate
learning methods based on analogy, induction and compilation. New ML
techniques have been developed in these three areas --we will presently
explain them, and later we will describe some application systems that
integrate implemented methods based on them.
2.1 Case-based reasoning
Similitude terms are a new method for case retrieval in CBR based on a
symbolic description of similitude --instead of the classical numerical
measure. The basic notion here is antiunification (AU). The AU of two Noos
descriptions is the MSG (most specific generalization): the description that
subsumes both. Intuitively, it captures all that is common between two
descriptions --all the aspects in which they are similar. Thus the AU of the
current problem and a case in CBR defines their similitude term. Using the
subsumption ordering over similitude terms, we can induce a partial order
over the cases in memory, from more to less similar cases. Consequently, a
CBR system can choose the most similar cases and also explain in what sense
they are similar --since it has a symbolic account of similarity. The
importance of similitudes is a second CBR technique we have developed, in
which an entropy-based measure is used to estimate the discrimination power
of a similitude term. The cases in memory that are subsumed by a similitude
term are distributed along a partition of solution classes, and by measuring
the entropy of this set we can assess whether the aspects involved in the
similitude are also discriminant with respect to the solution classes the
system is dealing with. The main novelty of both techniques is that they
support working with structured representations --while classical
distance-based similarities can only be defined upon "flat" representations
like attribute-value vectors. These two techniques are currently used in the
CBR component of the SPIN application.
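The entropy-based assessment can be sketched concretely. In this hedged
Python illustration (the class names are invented), we compute the Shannon
entropy of the solution-class distribution among the cases subsumed by a
similitude term: a lower entropy means the term discriminates better.

```python
# Sketch: entropy of the solution-class distribution among the cases
# subsumed by a similitude term; lower entropy = more discriminant term.
import math
from collections import Counter

def entropy(class_labels):
    """Shannon entropy (in bits) of the class distribution."""
    counts = Counter(class_labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Classes of the cases subsumed by two candidate similitude terms
# (invented sponge orders, for illustration).
term_a = ["axinellida"] * 4 + ["hadromerida"] * 1   # nearly pure
term_b = ["axinellida"] * 2 + ["hadromerida"] * 3   # mixed

# term_a concentrates its cases in one class, so it discriminates better.
assert entropy(term_a) < entropy(term_b)
```

A similitude term whose subsumed cases all fall in one solution class has
entropy zero, i.e. knowing that a new problem shares that similitude already
determines the class.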
2.2 Inductive learning
INDIE is an inductive technique also based on AU. In the SPIN application,
INDIE has been used to build up a hierarchy of concepts to identify marine
sponges. INDIE performs a heuristic search for MSGs in a series of languages
L1, L2, ..., Ln. First, INDIE computes from the examples of a class
the MSG description D1 by AU in L1 (which allows only one
disjunct). If the description is discriminant (does not subsume any
counterexample), the method has reached a solution. Otherwise, INDIE uses a
heuristic based on the "López de Màntaras distance" to select the most
discriminant attribute in D1. The examples are then divided into k
subsets according to which of the attribute's k values each example
has. The method has now advanced to the Lk language, where a
disjunctive description of k disjuncts is allowed. INDIE then computes
the MSG for each subset, forming a disjunctive generalization of the concept.
The process of checking whether these descriptions are discriminant (and
moving, if need be, to a more expressive language Lk+j) is applied
recursively until a discriminant description is found. Optionally, a
post-processing method that generalizes the final MSG description to most
discriminant generalizations can be applied. DISC is a second, related
inductive technique. DISC computes an MGG (most general generalization)
description that is discriminant. When an MGG is not discriminant, DISC uses
the same heuristic as INDIE to select the most discriminant attribute to be
included in a specialized description. DISC works in a constrained language
LAU such that only the attributes appearing in the AU of the examples of a
concept are considered. When examples have different values for the selected
attribute, they are split into subsets accordingly and a disjunction of MGG
descriptions is computed. When the disjuncts in the description are
discriminant the method stops; otherwise it recursively specializes the
non-discriminant ones. The novelty of both techniques lies in the way they
exploit the properties of the structured representation of feature terms
--while rule-based learning exploits the properties of clausal-form
representations. The structure in feature terms helps in pruning the search
space: when an attribute is elided by the technique, all derived attributes
can also be elided automatically (and vice versa: until an attribute is
considered, derived attributes need not be considered).
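The INDIE loop can be sketched in Python on a deliberately simplified,
flat attribute-value representation (real INDIE works on structured feature
terms and uses the López de Màntaras distance as its splitting heuristic;
here we just split on the first attribute where the positives disagree, and
all example data are invented):

```python
# Rough sketch of the INDIE loop: antiunify the positives, and if the
# generalization covers a counterexample, split on one attribute and
# recurse, yielding a disjunctive description.

def au_flat(examples):
    """MSG of flat examples: keep the attribute-value pairs they all share."""
    shared = dict(examples[0])
    for ex in examples[1:]:
        shared = {k: v for k, v in shared.items() if ex.get(k) == v}
    return shared

def covers(description, example):
    return all(example.get(k) == v for k, v in description.items())

def indie(positives, negatives):
    msg = au_flat(positives)
    if not any(covers(msg, n) for n in negatives):
        return [msg]                    # one discriminant disjunct suffices
    # Split on an attribute where the positives disagree, then recurse.
    attrs = [k for k in positives[0]
             if len({p.get(k) for p in positives}) > 1]
    if not attrs:
        return [msg]                    # cannot discriminate further
    disjuncts = []
    for value in {p.get(attrs[0]) for p in positives}:
        subset = [p for p in positives if p.get(attrs[0]) == value]
        disjuncts += indie(subset, negatives)
    return disjuncts

pos = [{"form": "tube", "color": "orange"}, {"form": "cup", "color": "orange"}]
neg = [{"form": "tube", "color": "white"}]
description = indie(pos, neg)
# description == [{"color": "orange"}]: being orange discriminates here.
```

Each recursion step corresponds to moving to a more expressive language with
more disjuncts, exactly as in the prose description above.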
2.3 Compilative learning
Finally, we have developed PLEC, a new analytical learning technique. PLEC is
a form of compilative learning (like EBL, explanation-based learning) for
learning to specialize and speed up Noos methods. Because of the reflective
capabilities of Noos, methods are also descriptions. Noos methods are
composed of subtasks and submethods. PLEC can be seen as a process of
generating new methods by unfolding existing Noos methods. The unfolding is
biased to consider only those methods used in the subtasks of a particular
problem --introspection is used to analyze the proof tree of that problem.
The result of PLEC is a new method that is a specialization of the existing
one and is assured to solve that problem (and similar ones) more efficiently.
The main process of PLEC is subtask elision: the methods used in the proof
tree for every subtask are unfolded into the main body of the new method. The
speed-up is achieved by pruning the search space: the original method allows
alternative methods in each subtask, while the PLEC-generated method cuts off
unused methods in the elided subtasks --only the successful method in each
subtask is unfolded. The novelty of this technique lies in applying an
EBL-like technique --originally developed to learn rules-- to methods in an
object-centered representation language.
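A toy Python sketch can make subtask elision concrete. All names here are
invented for illustration (this is not the Noos method representation): a
method lists several alternative submethods per subtask, the proof tree of a
solved problem records which alternative actually succeeded, and the
specialized method keeps only that one.

```python
# Toy sketch of PLEC-style subtask elision on an invented method structure.

# A generic CBR-like method: each subtask has alternative submethods,
# so solving a problem involves search among the alternatives.
generic_method = {
    "retrieve": ["by-subsumption", "by-preference", "exhaustive-scan"],
    "select":   ["entropy-ranking", "random-pick"],
    "adapt":    ["copy-solution", "substitute-features"],
}

# The proof tree of one solved problem: the submethod that succeeded
# for each subtask (introspection would recover this in Noos).
proof_tree = {"retrieve": "by-preference", "select": "entropy-ranking",
              "adapt": "copy-solution"}

def specialize(method, proof):
    """Unfold only the successful submethod of each subtask,
    cutting off the unused alternatives."""
    return {task: [proof[task]] for task in method}

fast_method = specialize(generic_method, proof_tree)
# Every subtask now has a single alternative, so no search remains.
```

The speed-up comes precisely from this pruning: the specialized method solves
the original problem, and structurally similar ones, without exploring the
discarded alternatives.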
3. Applications
Several applications have been developed during the project that integrate
multistrategy learning and problem solving. CHROMA is an application that
recommends how to use chromatographic techniques to purify proteins from
tissues and cultures. CHROMA can learn to solve protein purification problems
by induction and CBR. These two learning methods, and their corresponding
problem solving methods, are integrated by the reflective capabilities of
Noos. A novelty of the system is the use of meta-level reasoning to decide,
on a case-by-case basis, whether a problem has to be attacked by one problem
solving method or another. It is shown empirically that this approach is
better than the usual fixed sequencing of methods until one succeeds. SPIN is
an application for identifying the species, genus, and family of marine
sponges. SPIN also integrates the INDIE inductive method and the similitude
term importance method for CBR. We have also implemented a non-linear
planning system that learns from experience using CBR, and during this year
the PLEC method, so far tested on classical EBL problems, will also be
integrated.
Publications of the Analog Project
Most of these publications are available online in the IIIA publications
catalog.
Enric Plaza, Ramon López de Mántaras, and Eva Armengol
On the Importance of Similitude: An Entropy-based Assessment.
EWCBR-96 European Workshop on Case-Based Reasoning. Lecture Notes in
Artificial Intelligence (to be published), Springer-Verlag.
Enric Plaza, Josep Lluís Arcos, and Francisco Martín.
Cooperation Modes among Case-Based Reasoning Agents.
ECAI-96 Workshop on Learning in DAI Systems.
J. L. Arcos, E. Plaza; Inference and reflection in the object-centered
representation language Noos. Journal of Future Generation Computer
Systems. Accepted for publication. Elsevier Science Publ.
J. L. Arcos, E. Plaza; Reasoning about preferences in a reflective
framework. Research Report.
F. Esteva, L. Godo, R López de Màntaras, E. Plaza,
Precedent-based Plausible Reasoning: A similarity-based model of case-based reasoning.
B. López, E. Plaza (1995); Case-based learning of plans and goal
states in medical diagnosis. Artificial Intelligence in Medicine
Journal, Accepted for publication.
C. Sierra, L. Godo, R. López de Màntaras, M. Manzano,
Descriptive Dynamic Logic and its application to reflective architectures.
Journal of Future Generation Computer Systems. Accepted for
publication. Elsevier Science Publ.
E. Armengol, E. Plaza; Explanation-based Learning: A Knowledge Level
Analysis. AI Review. Vol. 9, pp. 19-35. 1995 Kluwer Academic
J. L. Arcos, E. Plaza (1995); Reflection in Noos: An object-centered
representation language for knowledge modelling. IJCAI'95 Workshop: On
Reflection and Meta-Level Architecture and their Applications in AI.
Montréal, Canada, August 21, 1995, pp. 1-10.
E. Plaza (1995); Cases as terms: A feature term approach to the
structured representation of cases. Lecture Notes in Artificial
Intelligence, n. 1010, pp. 265-276. Springer-Verlag.
E. Plaza (1995); Aprender con Inteligencia Artificial. Arbor,
595, pp. 119-154. CSIC, July 1995.
E. Armengol, E. Plaza (1995); Integrating induction in a Case-based
Reasoner. Lecture Notes in Artificial Intelligence. n.
984, pp. 3-17, Springer-Verlag.
A. Aamodt, E. Plaza (1994), Case-Based Reasoning: Foundational Issues,
Methodological Variations, and System Approaches. AI Communications,
Vol. 7, n. 1, pp. 39-59. IOS Press, March 1994.
R. López de Mántaras (1994), Reasoning under Uncertainty
and Learning in Knowledge Based Systems: Imitating Human Problem Solving
Behavior. In: J.M. Zurada, R.J. Marks II, Ch. J. Robinson (eds.)
Computational Intelligence Imitating Life. IEEE Press, New York 1994.
E. Plaza, J.L. Arcos (1994) Flexible Integration of Multiple Learning
Methods into a Problem Solving Architecture. Lecture Notes in Artificial
Intelligence, n. 784. Springer-Verlag 1994.
E. Armengol, E. Plaza (1994), A Knowledge Level Model of
Knowledge-Based Reasoning. Lecture Notes in Artificial Intelligence,
n. 837. Springer-Verlag 1994, pp. 53-64.
J.L. Arcos, E. Plaza (1994), A Reflective Architecture for Integrated
Memory-Based Learning and Reasoning. Lecture Notes in Artificial
Intelligence, n. 837. Springer-Verlag 1994, pp. 289-300.
J.L. Arcos, E. Plaza (1994), Integration of Learning into a Knowledge
modelling framework. Lecture Notes in Artificial Intelligence,
n. 867. Springer-Verlag 1994, pp. 355-373.
E. Armengol, E. Plaza (1994); Integrating induction in a case-based
reasoner. Second European Workshop on Case-Based Reasoning. EWCBR-94.
Chantilly, France, 7-10 Nov. 1994, pp. 243-251.
E. Plaza, A Aamodt, A. Ram, W. van de Velde, M. van Someren (1993),
Integrated Learning Architectures. In: Machine Learning: ECML-93.
P.B. Brazdil, Ed. Lecture Notes in Artificial Intelligence
n. 667. Springer-Verlag, pp. 429-441.
E. Plaza, J.L. Arcos (1993), Using Reflection Principles in the
Integration of Learning and Problem Solving. Proceedings of the ECML-93
Workshop on Integrated Learning Architectures. ILA-93. E. Plaza, Ed. Vienna,
Austria, April 1993.
B. López, E. Plaza (1993), Case-based planning for medical
diagnosis. In: Methodologies for Intelligent Systems ( J. Komorowski
& Z.W. Ras, eds.). Lecture Notes in Artificial Intelligence.
n. 689. Springer-Verlag, pp. 96-105.
B. López (1993), Reactive Planning through the integration of a
case-base system and a rule-based system. Prospects for Artificial
Intelligence. (A. Sloman et al., eds.), IOS Press, 1993, pp. 189-198.
E. Plaza, J.L. Arcos (1993), Reflection and Analogy in Memory-based
Learning. MSL-93 Second International Workshop on Multistrategy
Learning. Harpers Ferry, USA, May 1993, pp. 42-49.