The Massive Memory Project

GrantReference: CICYT TIC90-0801-C02-01
Title: Project AMP: Development of a learning system based on a massive 
memory architecture
Project Responsible: Enric Plaza i Cervera
Institution: Consejo Superior de Investigaciones Científicas
Institute: Centre d'Estudis Avançats de Blanes (former name of IIIA)
Period: from 90.12.12 to 93.12.12
Keywords:  Artificial Intelligence, Machine Learning, Analogy, 
           Case-based Reasoning, Representation Languages

This is a finished project (1990-1993), but a follow-up project, the ANALOG Project, is currently building upon it.

Project Team

Project Goals

There is a white paper on the Massive Memory Project. The goal of the Massive Memory Project (MMP) is to develop an experimental framework for building systems that use different problem-solving methods and different ways of learning from experience. More specifically, the goal is to show that the right level for integrating problem-solving methods and learning from experience is an architecture based on an active memory. All represented structures are memory structures: cases, generalizations, domain models, problem-solving methods, learning methods, instances of problems solved by one method, etc. We can summarize the project in six working hypotheses that direct the research around the Massive Memory Architecture (MMA):

Project Results

The main results of the project are theoretical and technical.

Theoretical results

New learning methods

Integration of learning methods
Integration of learning methods with problem-solving systems is crucial (a) for supporting learning from past problem-solving experiences and (b) for improving problem solving on future problems. Below is information about a workshop and a position paper on the topic.

Development of a Memory Architecture

Technical results

The main technical result is the specification and implementation of NOOS, an object-centered representation language with reflective capabilities. NOOS allows reasoning and learning methods to be represented uniformly. We have shown that it is easy to program and represent in NOOS inheritance methods (in several variations), derivational analogy, and other case-based methods (with different case retrieval and selection methods and criteria). These methods are explicit and programmable, instead of being built-in and fixed (as in the usual approach, sometimes for each application domain) [Plaza 92c]. NOOS allows a clear separation and distinction between different types of knowledge: levels and meta-levels and their interaction are clearly established, and the programmer can describe the place each piece of knowledge occupies and its relation to the others. Different learning methods can then be described in NOOS (see below and E. Plaza, J. L. Arcos (1993b), Reflection and Analogy in Memory-based Learning. MSL-93 Second International Workshop on Multistrategy Learning, Harpers Ferry, USA).
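The central idea that methods are explicit, inspectable structures rather than opaque built-in procedures can be illustrated with a small sketch. This is plain Python, not NOOS syntax, and all names in it (Method, Memory, and the toy task format) are hypothetical illustrations of the idea, not part of the actual system:

```python
# Illustrative sketch (NOT NOOS code): problem-solving methods and solved
# episodes stored in memory as first-class, inspectable objects.

class Method:
    """A method represented declaratively as a memory structure."""
    def __init__(self, name, applicable, solve):
        self.name = name
        self.applicable = applicable   # predicate: does the method fit the task?
        self.solve = solve             # how the method solves a task

class Memory:
    """Memory holds both methods and records of past problem solving."""
    def __init__(self):
        self.methods = []
        self.episodes = []             # reified records of solved problems

    def add_method(self, m):
        self.methods.append(m)

    def solve(self, task):
        # Meta-level step: select an applicable method by inspecting memory.
        for m in self.methods:
            if m.applicable(task):
                result = m.solve(task)
                # The solving episode itself becomes a memory structure,
                # available for later inspection and reuse.
                self.episodes.append((task, m.name, result))
                return result
        return None

mem = Memory()
mem.add_method(Method("double",
                      lambda t: t["op"] == "double",
                      lambda t: 2 * t["x"]))
print(mem.solve({"op": "double", "x": 21}))   # -> 42
print(mem.episodes)                           # the episode is inspectable data
```

Because both the methods and the episodes are ordinary data in memory, a program can examine which method solved which task and with what result, which is the property the paragraph above attributes to NOOS.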
  • The second version of the NOOS language is currently being developed in the ANALOG project.

The memory architecture implemented is integrated and flexible: (1) methods themselves are represented declaratively as memory structures (clichés), and (2) memory processes are reified in the NOOS language, allowing their explicit modification by programming. The memory architecture consists of the NOOS language, whose reflective constructs support access to and manipulation of all language objects, plus a set of memory retrieval methods that search for and collect problems solved in the past as memory structures, making them available as NOOS objects usable in solving new problems (thereby achieving learning). Thus the "system behavior" achieved by methods is not procedural and opaque but is stored in memory as declarative structures that can be retrieved and inspected, so that the system can reason about them and their results, and reuse them when appropriate in future situations. This notion of memory is what distinguishes NOOS from other languages and what permits the integration of learning methods (as methods with specific inference strategies based on searching the memory of past experiences) [Plaza and Arcos 93c].

Regarding methods, we have developed reasoning and learning methods described as high-level specifications (clichés) using knowledge-level analysis. Among these methods are: goal-driven analogy, derivational analogy, analogy by determinations, case-based planning, and variants of generate-and-test methods (case-based, and using induction to acquire the knowledge needed to generate and test hypotheses) [Plaza and Arcos 94c].

  • The BOLERO system was applied to the domain of pneumonia diagnosis.
  • CHROMA, an application for refining proteins using case-based and inductive methods, was developed in NOOS. A new version of CHROMA (being developed within the ANALOG project) features a meta-level method capable of deciding which method is best in each situation.
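The retrieve-and-reuse cycle underlying the case-based methods above can be sketched as follows. This is a generic toy illustration in Python, not NOOS code; the similarity metric and the adaptation step are hypothetical placeholders:

```python
# Illustrative sketch (NOT NOOS code) of case-based reuse: retrieve the most
# similar past solved problem from memory and reuse its solution.

def similarity(p, q):
    """Toy metric: count shared feature-value pairs."""
    return sum(1 for k in p if k in q and p[k] == q[k])

def retrieve(cases, problem):
    """Return the stored case most similar to the new problem."""
    return max(cases, key=lambda c: similarity(c["problem"], problem))

def reuse(case, problem):
    """Toy adaptation: start from the retrieved solution, record provenance."""
    return {"solution": case["solution"],
            "adapted_from": case["problem"]}

# A memory of past solved problems, stored as declarative structures.
cases = [
    {"problem": {"shape": "round", "size": "big"}, "solution": "method-A"},
    {"problem": {"shape": "square", "size": "big"}, "solution": "method-B"},
]

new_problem = {"shape": "round", "size": "small"}
best = retrieve(cases, new_problem)
print(reuse(best, new_problem)["solution"])   # -> method-A
```

The point of the sketch is that retrieval and selection are themselves ordinary, replaceable functions, mirroring the claim above that case retrieval and selection criteria in NOOS are programmable rather than built-in.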


Here is a complete list of publications and reports of the AMP project (35 items).

Some publications about this project are available on the web:

For any question or further information, just ask.