Contributions to Artificial Intelligence: 1986-1995


INTRODUCTION

This text summarizes the first ten years of AI research activities of a group of people associated with the CSIC (Spanish Scientific Research Council), first at the AI department of the CEAB (Centre of Advanced Studies of Blanes) and presently at the IIIA (Artificial Intelligence Research Institute). It all started in 1985 when the president of the CSIC, Prof. Trillas, commissioned Ramon López de Mántaras to start an AI group at the CEAB with a few graduate students and with the help of Jaume Agustí and Josep Aguilar-Martín (on leave from the CNRS). What follows is an assessment of our contributions.

During the period October 1985 - October 1995 the IIIA has had, on average, about 20 members; in all, around 50 people, including visiting researchers, have been members of the IIIA during this period. Most of these researchers have a prior background in computer science, electrical engineering, physics or mathematics. Many graduate students in computer science have followed AI courses and carried out projects in the laboratory. Fourteen of these students have completed their PhD work in our Institute. The IIIA has also organized many workshops and conferences and has contributed to the creation and establishment of new scientific journals, in particular the European AI journal AI Communications.

During this 10-year period the members of the IIIA have published more than 300 papers. This figure represents about 50% of the total production of AI papers in Spain. The research work has been supported by 20 research grants (9 CEC grants, 1 UNESCO grant and 10 grants from the Spanish Ministry of Education and Science) and several research contracts with industry. The total funding obtained has been around 350 million pesetas (almost 3 million dollars).

A total of 14 PhD theses have been supervised during this period by IIIA scientists and 10 more are in progress. IIIA researchers have been awarded the two most prestigious European awards for published papers (the "Digital European AI Research Award" in 1987 and the "ECAI Programme Committee Best Paper Award" in 1992). The IIIA is the coordinating node of the European Community funded Network of Centres of Excellence in Machine Learning.

On average this funding represents about 50% of the total funding of the Institute, including salaries. A balance between fundamental research and applications has always been our concern. Various theoretical foundations at the leading edge of the field have been developed, including approximate reasoning models based on fuzzy and multi-valued logics, formal specification and refinement theories and languages, similarity logics, and a dynamic logic approach to reflective architectures. This fundamental research has always been guided by concrete challenging applications. Also, many working systems and tools have been built: knowledge-based systems, machine learning systems, case-based systems, and autonomous robots. Several of them have been distributed outside the Institute and some have been commercialised.

Intensive collaborations, mostly within the framework of European Community programmes, have taken place with industries and academic institutions of many countries, particularly France, Belgium, the Netherlands, Italy, Germany, the United Kingdom, Denmark, Slovenia, the United States, Mexico, and Argentina.

The rest of this text describes in some detail the major contributions and results obtained during the October 1985 - October 1995 period. Depending on the interests of the reader, we propose the following itineraries through the list of contributions:

* Knowledge-Based Systems itinerary:

comprising the contributions to: KBS Architectures, Modular Expert Systems, KBS Validation, Temporal Reasoning, Knowledge Acquisition and Machine Learning, Reflective Systems and Applications of KBS.

* Machine Learning itinerary:

comprising contributions to: Knowledge Acquisition and Machine Learning, Reflective Systems and Applications of Machine Learning.

* Fuzzy and Multiple-valued Logics itinerary:

comprising contributions to: KBS Architectures, Modular Expert Systems, Fuzzy and Multiple-valued Logics, Similarity Logic and Applications of Fuzzy Logic to Autonomous Mini-Robots.

* Automated Deduction and Algorithmic Optimisation itinerary:

comprising contributions to: Constraint Satisfaction, Temporal Reasoning, Efficient Automatic Deduction and Algorithmic Optimisation.

Besides these four itineraries, it is worth noting the contributions to the Incremental Design of Formal Specifications, which have allowed us to incorporate software engineering techniques in the design of Knowledge-Based Architectures.

We have recently started research activities in Multi-Agent Systems (Communication, Cooperation, Negotiation and Federated Learning), in Applications of AI to Music, and in WWW interfaces to AI Systems.

 


Contributions to KBS ARCHITECTURES

Research on Knowledge-Based Systems has been one of the group's interests from the start and has continued throughout this ten-year period. Motivated by several real applications, we created, formalized and implemented languages, based on fuzzy and multi-valued logics, to better represent uncertainty and imprecision (see Contributions to Fuzzy and Multiple-valued Logics). These languages have been integrated in a two-generation tool (MILORD and MILORD II) on top of which most of the applications to real domains have been built.

MILORD is an expert system building tool developed between 1985 and 1989 within the framework of Carles Sierra's PhD thesis. It allows different calculi of uncertainty to be performed on an expert-defined set of linguistic terms expressing truth degrees. Each calculus corresponds to specific conjunction, disjunction and implication operators. The internal representation of each linguistic truth value is a fuzzy subset of the interval [0,1]. A calculus of uncertainty applied to the set of linguistic terms yields a fuzzy subset that is approximated by a linguistic truth value belonging to the term set; this linguistic approximation keeps the calculus closed. The advantage is that, once the linguistic truth values have been defined, the system computes the conjunction, disjunction and implication operations for all pairs of linguistic truth values in the term set off-line and stores the results in matrices. Therefore, when MILORD is run, the propagation and combination of uncertainty is performed by simply accessing these precomputed matrices. The tool also uses a meta-level language to represent the strategies of execution of modules containing domain rules. This meta-control language has also inspired some of the work done in Case-Based Reasoning (see Contributions to Knowledge Acquisition and Machine Learning). MILORD has been used in the development of several real applications (see Applications of Knowledge-Based Systems).
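As an illustration of this off-line precomputation idea, the following sketch (with an invented five-term set, operator choice and discretisation; it is not MILORD's actual code or syntax) builds linguistic terms as fuzzy subsets of [0,1], computes their conjunction by the extension principle, approximates the result back to a term, and stores the whole table so that run-time combination reduces to a lookup.

# A minimal sketch of the precomputed truth-table idea (illustrative
# term set and operators, not MILORD's own definitions).
import numpy as np

GRID = np.linspace(0.0, 1.0, 51)             # discretisation of [0,1]

def trapezoid(a, b, c, d):
    """Membership values of a trapezoidal fuzzy set on the grid."""
    x = GRID
    up = np.where(x < a, 0.0, np.where(x < b, (x - a) / (b - a + 1e-12), 1.0))
    down = np.where(x > d, 0.0, np.where(x > c, (d - x) / (d - c + 1e-12), 1.0))
    return np.minimum(up, down)

# An illustrative five-term set (the expert would define these).
TERMS = {
    "false":         trapezoid(0.0, 0.0, 0.0, 0.1),
    "slightly_true": trapezoid(0.0, 0.1, 0.3, 0.4),
    "quite_true":    trapezoid(0.3, 0.4, 0.6, 0.7),
    "very_true":     trapezoid(0.6, 0.7, 0.9, 1.0),
    "true":          trapezoid(0.9, 1.0, 1.0, 1.0),
}

def extend(mu_a, mu_b, op):
    """Extension principle: fuzzy set of op(x, y) for x ~ mu_a, y ~ mu_b."""
    out = np.zeros_like(GRID)
    for i, x in enumerate(GRID):
        for j, y in enumerate(GRID):
            k = np.argmin(np.abs(GRID - op(x, y)))   # nearest grid point
            out[k] = max(out[k], min(mu_a[i], mu_b[j]))
    return out

def approximate(mu):
    """Linguistic approximation: closest term keeps the calculus closed."""
    return min(TERMS, key=lambda t: np.abs(TERMS[t] - mu).sum())

# Off-line: precompute the conjunction matrix for the whole term set
# (here with the min t-norm as the conjunction operator).
CONJ = {(s, t): approximate(extend(TERMS[s], TERMS[t], min))
        for s in TERMS for t in TERMS}

# Run time: uncertainty propagation is a table lookup.
print(CONJ[("very_true", "quite_true")])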

MILORD II is an architecture for Knowledge-Based Systems (KBS) that combines reflection and modularization techniques, together with an approximate reasoning component based on many-valued logics, in order to define complex reasoning patterns in the large. Its development started in 1989 and constitutes the main component of Josep Puyol's PhD thesis. A Knowledge Base (KB) in MILORD II consists of a set of hierarchically interconnected modules. Each module contains an Object Level Theory (OLT) and a Meta-Level Theory (MLT) interacting through a reflective mechanism. From the logical point of view, MILORD II makes use of both many-valued logic and epistemic meta-predicates to express the truth status of propositions.

A paper describing MILORD was awarded the "1987 Digital European AI Research Paper Award", which is the most prestigious AI Prize given in Europe.

 

Selected publications

L. Godo, R. López de Mántaras, C. Sierra, A. Verdaguer (1987); Managing Linguistically expressed Uncertainty in MILORD: Application to Medical Diagnosis. 7th International Symposium on Expert Systems and Applications. Avignon'87. Avignon, France. pp. 571-596.

L. Godo, R. López de Mántaras, C. Sierra, A. Verdaguer (1989); MILORD: The Architecture and the Management of Linguistically Expressed Uncertainty. International Journal of Intelligent Systems. Vol. 4, num. 4, pp. 471-501.

C. Sierra (1989); MILORD: Arquitectura multinivell per a sistemes experts en classificació. PhD thesis, Universitat Politècnica de Catalunya.

C. Sierra, L. Godo (1993); Specifying Simple Scheduling Tasks in a Reflective and Modular Architecture. (Treur, J. & Wetter, Th. Eds) Formal Specification of Complex Reasoning Systems. Ellis Horwood, pp. 199-232.

J. Agustí, F. Esteva, P. García, L. Godo, R. López de Mántaras, C. Sierra (1994); Local Multi-valued Logics in Modular Expert Systems. Journal of Experimental and Theoretical Artificial Intelligence. Taylor & Francis Pub., Vol. 6 num. 3, pp. 303-321.

J. Puyol (1994); Modularization, Uncertainty, Reflective Control and Deduction by Specification in MILORD II, a Language for Knowledge-Based Systems. PhD thesis, Universitat Autònoma de Barcelona.

L. Godo, W. van der Hoek, J.J. Ch. Meyer and C. Sierra (1995); Many-valued Epistemic States. Application to a Reflective Architecture: Milord-II. Lecture Notes in Computer Science. Springer-Verlag, num. 945, pp. 440-452.

 


Contributions to MODULAR EXPERT SYSTEMS

The use of modularization techniques in expert system design is needed to adapt the general knowledge spread throughout a knowledge base (KB) to specific subtasks. Specific subtasks generally make use of only a subset of the whole KB and use a specific inference engine. Thus, the modularity of the system allows us to address two main characteristics of human problem-solving: the adaptation of general knowledge to particular problems, and the dependency of the management of uncertainty on the different subtasks implemented in the modules of the system.

If the KB is allowed to deal with uncertainty and imprecision, deduction inside a module must be based on an uncertainty calculus. Moreover, if a modular expert system uses different uncertainty calculi for different modules, and these modules need to communicate, a correspondence between their uncertainty calculi must be established. Different types of communication will need different types of correspondence between uncertainty calculi. One of the most interesting from the practical point of view is inference-preserving communication, which ensures the correctness of the communication in terms of consistency preservation.

Our Institute has made several contributions to this topic. First, we analysed the inference-preserving communication problem assuming that:

* Each uncertainty calculus is an inference mechanism defining an entailment relation.

* We restrict ourselves to the case in which the different uncertainty calculi are given by a class of truth-functional Multiple-Valued Logics.

In this framework, inference-preserving communication was given by means of two maps of entailment systems, the conservative and the weak conservative, which were characterized for the case in which the uncertainty calculi are given by a class of truth-functional Multiple-Valued Logics. Finally, these results were generalized to deal with both uncertainty and imprecision using bilattice structures, and this approach was successfully tested in the design of the modular language used to build the expert system MILORD II (see Contributions to KBS Architectures).
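The following toy sketch illustrates the communication problem only; the rename policy shown (rounding an exported value down to the strongest term of the importing module that is not more certain) is an assumption made for the example and is not the formal conservative or weak conservative mapping defined in the publications below.

# A toy illustration (assumed names and term sets): two modules use
# different linearly ordered truth-value sets, and values exported from
# one are renamed into the other without ever strengthening a conclusion.

# Each term is identified with the lower end of the certainty interval
# it covers in [0,1]; terms are listed from weakest to strongest.
MODULE_A = {"impossible": 0.0, "unlikely": 0.2, "possible": 0.4,
            "likely": 0.7, "sure": 1.0}
MODULE_B = {"false": 0.0, "maybe": 0.5, "true": 1.0}

def rename(value, source, target):
    """Map a truth value of `source` to the strongest term of `target`
    whose lower bound does not exceed it (round downwards), so that the
    importing module never infers more than the exporting one stated."""
    lower = source[value]
    candidates = [t for t, lo in target.items() if lo <= lower]
    return max(candidates, key=lambda t: target[t])

print(rename("likely", MODULE_A, MODULE_B))   # -> 'maybe'
print(rename("sure",   MODULE_A, MODULE_B))   # -> 'true'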

 

Selected Publications

J. Agustí, F. Esteva, P. García, L. Godo, C. Sierra (1991); Combining Multiple-valued Logics in Modular Expert Systems. (Bruce d'Ambrosio et al., Eds.) Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence . Morgan Kaufmann, pp. 17-25.

F. Esteva, P. García, L. Godo (1993); Enriched Interval Bilattices and Partial Many-Value Logics: An Approach to deal with Graded Truth and Imprecision. Int. J. of Uncertainty, Fuzziness and Knowledge-Based Systems. Vol. 2, num. 1, pp 37-54.

J. Agustí, F. Esteva, P. García, L. Godo, R. López de Mántaras, C. Sierra (1994); Local Multi-valued Logics in Modular Expert Systems. Journal of Experimental and Theoretical Artificial Intelligence. Taylor & Francis Pub. Vol. 6, num. 3, pp. 303-321.

 


Contributions to KBS VALIDATION

As a step forward on our path in Knowledge Engineering, we focused on the problem of Validation of Knowledge-Based Systems. Starting on this topic in 1988, we were involved in the ESPRIT II VALID project from 1989 to 1991, first with Ramon López de Mántaras and later Enric Plaza as project leaders of our team. This project produced a validation platform composed of several tools capable of working on KBSs based on different shells. The sophisticated architectures developed in our group --in particular the MILORD shell (see Contributions to KBS Architectures)-- generated new and interesting validation problems, which were studied by Pedro Meseguer in his PhD thesis in 1992. New verification issues appeared, caused by interactions among rules, metarules and modules, that had not previously been considered in the literature. On the other hand, the set of applications developed with MILORD allowed us to face the practical side of validation, considering KBS debugging, refinement and final acceptance. Our research produced not only methods but also tools, which were used successfully to validate different KBSs. The new verification issues were satisfactorily solved using ATMS techniques embodied in the IN-DEPH II verifier. An incremental version of this verifier could cope with the high cost incurred by verifiers when applied to large KBSs. However important, verification presented some limitations, because no precise specifications were available for every aspect to be validated. Using machine learning techniques, we developed refinement strategies which could identify erroneous knowledge elements and suggest possible repairs from a set of cases with known solutions. These refinement strategies were implemented in the tool IMPROVER, which was successfully used.

The practical use of validation tools on real and large KBSs --as, for instance, PNEUMON-IA (see Applications of Knowledge-Based Systems)-- gave us first-hand experience of how and when these tools should be used. We learned to differentiate error messages from real flaws in the knowledge and, for the latter, how often they were solved by adding more knowledge. For this reason, we always involved human experts and end-users in the validation process, being aware that, without them, this process would not have been satisfactorily accomplished.

Some theoretical aspects were also studied. We performed a terminological investigation of the meaning of validation terms when applied to KBSs and to more conventional software, coming up with a proposal that keeps the meaning of traditional terms while also accommodating KBS characteristics. In the context of formal models for knowledge engineering, we have investigated the potential benefits of formal specifications in the validation activity.

The Institute has been very involved in the development of KBS Validation in Europe. Starting with the VALID project, funded by the ESPRIT II initiative, members of the Institute went on to act as co-chairs of the first and second editions of the European Symposium on Verification and Validation, EUROVAV (Enric Plaza in 1991 and Pedro Meseguer in 1993). Furthermore, Pedro Meseguer was awarded the ECAI-92 Programme Committee prize for his paper on incremental verification presented at the ECAI-92 conference. Together with Alun Preece, Pedro Meseguer gave tutorials on KBS Validation at the IJCAI-93 and ECAI-94 conferences.

 

Selected Publications

P. Meseguer (1992); Validation of Multi-Level Rule-Based Expert Systems. PhD thesis, Universitat Politècnica de Catalunya.

P. Meseguer (1992); Incremental Verification of Rule-Based Expert Systems. Proceedings of the 10th European Conference on Artificial Intelligence, ECAI-92, Vienna, Austria, pp. 840-844.

Th. Hoppe, P. Meseguer (1993); VVT Terminology: A Proposal. IEEE Expert Intelligent Systems & their applications. Vol. 8, num. 3, pp. 48-55.

P. Meseguer, A. Verdaguer (1993). Verification of Multi-Level Rule-Based Expert Systems: Theory and Practice. The International Journal of Expert Systems: Research & Applications, 3(2), pp. 163-192.

P. Meseguer (1993). Expert System Validation through Knowledge Base Refinement. Proceedings of the 13th International Joint Conference on Artificial Intelligence, IJCAI-93, Chambery, France, pp. 477-482.

P. Meseguer, E. Plaza (1994); The VALID Project: Goals, Development, and Results. International Journal of Intelligent Systems, (Dan E. O'Leary guest editor), John Wiley & Sons, Vol. 9, num. 9, pp. 867-892.

P. Meseguer, A. Preece (1996); Verification and Validation of Knowledge-Based Systems with Formal Specifications. Knowledge Engineering Review. Cambridge Univ. Press, Vol. 10, num. 4, pp. 331-344.

P. Meseguer, A. Verdaguer (1996); Expert System Validation through Knowledge Base Refinement. International Journal of Intelligent Systems. John Wiley &Sons. Vol. 11:7, pp. 429-462.

 


Contributions to FUZZY AND MULTIPLE-VALUED LOGICS

According to Zadeh, the term Fuzzy Logic is used with at least two different meanings. Fuzzy Logic in the broad sense refers to methodologies involving Fuzzy Set and Possibility theories, whereas in the narrow sense it refers to the various formal logical calculi underlying Fuzzy Set Theory. The theoretical research done in our group on Fuzzy Logic has covered both. The main contributions have been in the following subjects:

Fuzzy Truth Values

The work on modelling inference in Fuzzy Logic using the Fuzzy Truth Values formalism started quite early in the IIIA with the PhD thesis of Lluís Godo. This formalism allows (some) inference patterns to be expressed without the need to specify particular possibility distributions to represent the fuzzy statements involved in such patterns. It has been shown that Fuzzy Truth Values play the same role that classical truth values do in classical or many-valued logic. In this direction, we have also studied the closure system of inference operators in the above formalism, as well as a semantical formalization of fuzzy logic as a logic with fuzzy truth values.
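As a small illustration of the notion underlying fuzzy truth values, the sketch below computes Zadeh-style compatibility: given the knowledge "X is B", the truth of the statement "X is A" is itself a fuzzy subset tau of [0,1], with tau(t) = sup{ mu_B(u) : mu_A(u) = t }. The universe, the fuzzy sets and the discretisation are invented for the example; the formalism studied at the IIIA is richer than this.

# Illustrative computation of a fuzzy truth value as the compatibility
# of "X is A" with the knowledge "X is B" (invented universe and sets).
import numpy as np

U = np.linspace(0.0, 40.0, 401)          # universe, e.g. temperature in C

def around(c, w):
    """Triangular fuzzy set centred at c with half-width w."""
    return np.clip(1.0 - np.abs(U - c) / w, 0.0, 1.0)

warm     = around(25.0, 10.0)            # A: "warm"
about_20 = around(20.0, 3.0)             # B: what is actually known

def fuzzy_truth_value(mu_a, mu_b, resolution=101):
    """tau(t) = sup{ mu_b(u) : mu_a(u) = t } over a grid of truth degrees."""
    ts = np.linspace(0.0, 1.0, resolution)
    tau = np.zeros_like(ts)
    for i, t in enumerate(ts):
        mask = np.isclose(mu_a, t, atol=0.5 / (resolution - 1))
        tau[i] = mu_b[mask].max() if mask.any() else 0.0
    return ts, tau

ts, tau = fuzzy_truth_value(warm, about_20)
# Truth degrees that are highly possible for "the temperature is warm",
# given only that it is "about 20 degrees":
print([round(t, 2) for t, v in zip(ts, tau) if v > 0.8])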

Multiple-valued Logic

The investigation of different fuzzy (or many-valued) logics, in the narrow sense, has been motivated by a fruitful cooperation, since 1993, with the Institute of Computer Science of the Czech Academy of Sciences, led by Prof. Petr Hájek. This collaboration has resulted in a number of interesting publications on different systems of many-valued logic and their relation to the main uncertainty calculi, such as probability theory and possibilistic logic.

 

Selected publications

L. Godo (1990); Contribució a l'estudi de models d'inferència en els sistemes possibilistics. PhD thesis. Universitat Politècnica de Catalunya.

L. Godo, J. Jacas, L. Valverde (1991); Fuzzy Values in Fuzzy Logic. International Journal of Intelligent Systems. Vol. 6, num. 2, pp. 199-212.

L. Godo, F. Esteva, P. García, J. Agustí (1991); A Formal Semantical Approach to Fuzzy Logic. Proceedings of the 21st International Symposium on Multiple-Valued Logic. ISMVL'91, pp. 72-79.

P. Hájek, D. Harmancova, F. Esteva, P. García and L. Godo, (1994); On Modal Logics for Qualitative Possibility in a Fuzzy Setting. (R. López de Mántaras & D. Poole, eds.) Uncertainty in Artificial Intelligence. Morgan Kaufmann Pub., pp. 278-285.

P. Hájek, L. Godo, F. Esteva (1995); Fuzzy logic and Probability. (P. Besnard & S. Hanks, Eds.) Uncertainty in Artificial Intelligence Conference. Morgan Kaufmann Pub., pp. 237-244.

P. Hájek, L. Godo, F. Esteva (1996); A complete many-valued logic with product conjunction. Archive for Mathematical Logic. Vol. 35, num. 3, pp. 191-208.

 


Contributions to SIMILARITY LOGIC

Similarity relations, as generalizations of equivalence relations, were defined by Zadeh in the late sixties. Most of the early work dealt with the application of these relations to cluster analysis. In the eighties Enric Trillas introduced a generalization of Zadeh's definition, and Trillas and Valverde related similarity relations to equivalence connectives in fuzzy logic. In the nineties this type of fuzzy relation started to be used to obtain a semantics for fuzzy logic and to build a logical setting for dealing with sentences like "close to p", "not far from p" or "similar to p", where p is a proposition. Our research group has made relevant contributions to both issues.

In 1991 Ruspini published his studies on a semantics for fuzzy logic based on similarity relations. Building on this, Esteva, Godo and García proposed in 1994 a definition of a similarity logic as a propositional logic based on similarity relations. A complete analysis of the relations between this logic, the necessity-valued fragment of possibilistic logic, and fuzzy-truth-valued logic was also achieved.

On the other hand, the concept of similarity has also been used by researchers of the Institute, in cooperation with D. Dubois and H. Prade, to define graded consequence relations corresponding to different levels of approximation. The main idea underlying this approach is to approximate every classical proposition p by a fuzzy set of interpretations in such a way that the α-cuts of this fuzzy set provide a set of approximations of p. As expected, the approximation at degree 1 coincides with p and the approximation at degree 0 coincides with the classical set of all interpretations. In this setting, p entails q to degree α if p classically entails the α-approximation of q. The results of the work done along this research line are both theoretical and practical. From the theoretical point of view, we have studied the properties of these graded entailment relations, together with a syntactical characterization and formalizations in multi-modal and multi-valued settings. From the practical point of view our results are also of interest: a nice framework for interpolative reasoning based on graded entailment has been developed, and applications to case-based reasoning (see Contributions to Knowledge Acquisition and Machine Learning) as well as to analogical reasoning are currently being studied.
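The graded entailment just described can be made concrete with a small propositional sketch: interpretations are truth assignments, a similarity S is defined on them, and the degree to which p entails q is the minimum, over the models w of p, of the maximum over the models w' of q of S(w, w'). The three-variable language and the particular similarity (fraction of agreeing variables) are chosen only for illustration.

# Similarity-based graded entailment over a tiny propositional language.
from itertools import product

VARS = ["a", "b", "c"]

def interpretations():
    for bits in product([0, 1], repeat=len(VARS)):
        yield dict(zip(VARS, bits))

def similarity(w1, w2):
    """A simple similarity: fraction of variables on which w1, w2 agree."""
    agree = sum(w1[v] == w2[v] for v in VARS)
    return agree / len(VARS)

def models(formula):
    """`formula` is any predicate over an interpretation (a dict)."""
    return [w for w in interpretations() if formula(w)]

def entailment_degree(p, q):
    mp, mq = models(p), models(q)
    if not mp:                      # p unsatisfiable: entails everything
        return 1.0
    if not mq:
        return 0.0
    return min(max(similarity(w, w2) for w2 in mq) for w in mp)

p = lambda w: w["a"] and w["b"]             # a & b
q = lambda w: w["a"] and w["b"] and w["c"]  # a & b & c
print(entailment_degree(p, q))              # 2/3: one variable may differ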

F. Esteva and L. Godo have been invited to be the Guest Editors of a future volume on Similarity Logic in the Studies in Vagueness series (Springer-Verlag).

 

Selected publications

F. Esteva, P. García, L. Godo (1994); Relating and Extending Semantical Approaches to Possibilistic Reasoning. International Journal of Approximate Reasoning. Elsevier Science Inc, Vol. 10 num. 4, pp. 311-344.

D. Dubois, F. Esteva, P. García, L. Godo, H. Prade (1995); Similarity-Based Consequence Relations. Lecture Notes in Artificial Intelligence. Springer-Verlag. num. 946, pp. 171-179.

 


Contributions to TEMPORAL REASONING

Any intelligent activity related to the world can be strongly conditioned by the fact that the world's state changes over time. Thus, the ability to represent and reason about temporal information is a fundamental issue in many AI areas. In medical reasoning, for instance, the order and duration of symptoms and exploratory analyses can be essential to identify the correct diagnosis and design the appropriate treatment. Another example is process supervision: the temporal evolution of the various parameters of the process is key information to foresee, and avoid, process failures and target deviations. Other areas where temporal information is clearly relevant are task scheduling, activity planning and natural language understanding.

In the early eighties, two major works, namely McDermott's temporal theory and Allen's interval reasoning system, started a prolific period of research on temporal knowledge representation.

At the IIIA, the experience in building expert systems using the MILORD-II shell (see Contributions to KBS Architectures), especially in the above-mentioned areas (see Applications of Knowledge Based Systems), led us to realize the importance of giving an adequate treatment to time and gave rise to our interest in the area, with the ultimate goal of providing MILORD-II with a temporal reasoning component.

The first step, in 1990, was to organize a seminar series to discuss the major proposals in the literature. The comparative analysis of the existing results was continued, completed and reported in a state-of-the-art paper by Lluís Vila. It was the starting point for the work on diverse aspects of temporal reasoning during the period 1991-94. Lluís Vila, in cooperation with different specialists, contributed to the following topics: ontology and theory of time, first-order languages for temporal knowledge representation, temporal constraint processing, constraint-based temporal representation languages and approximate temporal reasoning.

Some of these results have been applied in designing MILORD-II's temporal component in Lluís Vila's PhD thesis. Its representation language now includes a set of temporal predicates that allow one to talk explicitly about the temporal occurrence of events and the holding of facts, both at a given moment and throughout periods. Time is represented by temporal tokens, and temporal information can be expressed as constraints of the following types: (i) between instants (qualitative, such as "i1 before i2", and metric, such as "i2 - i1 <= 5"); (ii) between periods (qualitative only, such as "P1 before P2" or "P1 meets P2"); and (iii) between durations (both qualitative and metric, e.g. "duration(P2) - duration(P1) <= 3"). A temporal manager supports efficient management of temporal tokens and efficient query answering over temporal constraints by applying temporal constraint satisfaction techniques. MILORD-II's temporal features have been used in the development of two expert system prototypes: (i) diagnosis of lumbalgia pathologies (developed in cooperation with the physician Antoni Llovera) and (ii) monitoring of pigs for breeding, in cooperation with IRTA (Institute for Research on Agronomic Techniques, Lleida, Catalonia, Spain).
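The metric part of such a temporal manager can be conveyed by a short sketch of a simple temporal constraint network, where constraints of the form "time(j) - time(i) <= d" are propagated by all-pairs shortest paths and inconsistency shows up as a negative cycle. The class and token names are illustrative, and the qualitative period relations are not covered here.

# A compact Simple Temporal Problem: metric constraints only.
import math

class STP:
    def __init__(self, instants):
        self.idx = {name: k for k, name in enumerate(instants)}
        n = len(instants)
        self.d = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]

    def add(self, i, j, bound):
        """Constraint: time(j) - time(i) <= bound."""
        a, b = self.idx[i], self.idx[j]
        self.d[a][b] = min(self.d[a][b], bound)

    def propagate(self):
        """Floyd-Warshall; returns False iff the network is inconsistent."""
        n = len(self.d)
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    via = self.d[i][k] + self.d[k][j]
                    if via < self.d[i][j]:
                        self.d[i][j] = via
        return all(self.d[i][i] >= 0 for i in range(n))

    def query(self, i, j, bound):
        """Is 'time(j) - time(i) <= bound' entailed by the network?"""
        return self.d[self.idx[i]][self.idx[j]] <= bound

# "i1 before i2" (strict, by at least one time unit) and "i2 - i1 <= 5":
net = STP(["i1", "i2", "i3"])
net.add("i2", "i1", -1)     # i1 - i2 <= -1, i.e. i1 < i2
net.add("i1", "i2", 5)      # i2 - i1 <= 5
net.add("i2", "i3", 3)      # i3 - i2 <= 3
assert net.propagate()
print(net.query("i1", "i3", 8))   # True: i3 - i1 <= 5 + 3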

 

Selected publications

L. Vila (1994); A Survey on Temporal Reasoning in Artificial Intelligence. AI Communications, IOS Press, Vol. 7 num. 1, pp. 4-28.

L. Vila (1994); On Temporal Representation and Reasoning in Knowledge-Based Systems. PhD thesis, Dept. of Computer Languages and Systems, Univ. Politècnica de Catalunya.

L. Vila, L. Godo (1994); On Fuzzy Temporal Constraint Networks. Mathware & Soft computing. Univ. Politècnica de Catalunya. Vol. 1, num. 3, pp. 315-334.

L. Godo, L. Vila (1995); Possibilistic Temporal Reasoning based on Fuzzy Temporal Constraints. Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence. IJCAI'95. Montreal, Canada. Vol. 2, pp. 1916-1922.

L. Vila, H. Reichgelt (1996); The token reification approach to temporal reasoning. Artificial Intelligence. Elsevier, Vol. 83, pp. 59-74.

 


Contributions to CONSTRAINT SATISFACTION

Since 1989, Constraint Satisfaction has been a research topic at the Institute, mainly carried out by Pedro Meseguer. Our work has focused on two main points. Regarding constraint satisfaction algorithms, we have tried to combine systematic methods with local search strategies, aiming at developing procedures that are both complete and efficient. For this purpose, we have formulated constraint satisfaction as the global optimization of an analytical function. Using the information contained in the local gradient of this function, we have generated new heuristics which have been shown to be very effective when used inside standard constraint algorithms for total and partial constraint satisfaction.

Regarding the phase transition in maximal constraint satisfaction, our research was motivated by the need to identify hard problem classes on which to test the effectiveness of the above-mentioned heuristics. We developed a new branch-and-bound-based algorithm with a sophisticated lookahead strategy. This algorithm proved to be more efficient than previous ones, and in addition we obtained empirical evidence of the existence of a phase transition in the average difficulty of MAX-CSP instances.
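To fix ideas, here is a bare-bones branch and bound for MAX-CSP that minimises the number of violated binary constraints, pruning a branch as soon as the violations already incurred reach the best solution found so far. The lower bound used is the trivial one; the algorithms referred to above rely on much stronger lookahead and directed arc-consistency bounds. The colouring instance is invented for the example.

# Minimal branch and bound for MAX-CSP (minimise violated constraints).
import math

def max_csp(variables, domains, constraints):
    """constraints: dict {(x, y): predicate(vx, vy)} with x before y
    in `variables`.  Returns (best_violations, best_assignment)."""
    best = [math.inf, None]

    def violations(assign):
        return sum(1 for (x, y), ok in constraints.items()
                   if x in assign and y in assign and not ok(assign[x], assign[y]))

    def branch(i, assign):
        if violations(assign) >= best[0]:        # lower bound = current distance
            return                               # prune this subtree
        if i == len(variables):
            best[0], best[1] = violations(assign), dict(assign)
            return
        x = variables[i]
        for v in domains[x]:
            assign[x] = v
            branch(i + 1, assign)
            del assign[x]

    branch(0, {})
    return best[0], best[1]

# A tiny over-constrained colouring instance (illustrative data):
variables = ["x1", "x2", "x3"]
domains = {v: ["red", "blue"] for v in variables}
constraints = {("x1", "x2"): lambda a, b: a != b,
               ("x2", "x3"): lambda a, b: a != b,
               ("x1", "x3"): lambda a, b: a != b}
print(max_csp(variables, domains, constraints))   # best has 1 violation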

Institute researchers were the first to observe that the typical easy-hard-easy pattern of the phase transition in constraint satisfaction also appears in MAX-CSP problems, using a special version of branch and bound with directed arc-consistency among future variables.

 

Selected Publications

P. Meseguer (1989); Constraint Satisfaction Problems: An Overview. Artificial Intelligence Communications. IOS Press. Vol. 2: 1, pp. 3-17.

J. Larrosa, P. Meseguer (1995); Optimization-based Heuristics for Maximal Constraint Satisfaction. Proceedings of the First International Conference on Principles and Practice of Constraint Programming, CP-95, Cassis, France, pp. 103-120.

P. Meseguer, J. Larrosa (1995); Constraint Satisfaction as Global Optimization. Proceedings of the 14th International Joint Conference on Artificial Intelligence, IJCAI-95 Montreal, Canada.

J. Larrosa, P. Meseguer (1996); Phase Transition in MAX-CSP. Accepted in 12th European Conference on Artificial Intelligence, ECAI-96. John Wiley & Sons. Budapest, pp. 190-194.

J. Larrosa, P. Meseguer (1996); Exploiting the use of DAC in MAX-CSP. Lecture Notes in Computer Science. Springer, num. 1118, pp. 308-322.

 


Contributions to INCREMENTAL DESIGN OF FORMAL SPECIFICATIONS

In this research we address the problem of narrowing the gap between the language in which a problem may be described by users in a given domain (the modelling or knowledge-level language, which is usually informal) and the language in which the problem must be given substance as a formal specification (preferably executable, for rapid prototyping purposes). We concentrate on an incremental approach to executable formal specification in which incomplete preliminary specifications can be expressed in notations accessible to non-specialists, whilst providing clear points of refinement towards complete specifications in languages targeted at more specialised developers.

The Calculus of Refinements (COR)

Our first attempt in this direction was the Calculus of Refinements (COR), which sets a formal basis for incremental specification. COR is a simple functional specification language in which an expression denotes a set of values and a COR specification is a set of inclusions between expressions, instead of expressions denoting single values and specifications being sets of equations, as is usual in functional languages. Intuitively, COR expressions can be interpreted as approximations related by their information content, so they can be successively refined when more information is available.

COR can also be seen more abstractly as a pre-order specification language and as a logical framework in which the transitivity of many logical consequence relations can be codified.

COR proved to have theoretical interest in itself. Studying its models, we found a new type of Lambda-Calculus model. The automation of deduction with inclusions gave rise to a new rewriting technique we called bi-rewriting. This technique has recently been applied to Automated Theorem Proving with transitive relations. The completion of bi-rewrite systems has led to the proposal of a new complete unification procedure for a variant of linear second-order lambda calculus. The formal theory of COR has been the subject of Jordi Levy's PhD thesis, and its development into a practical specification language is currently the PhD research topic of Marco Schorlemmer.

A first order version of COR has been used in requirements capture for Prolog programs. The meaning of a Prolog predicate is often characterised according to the set of bindings which can be obtained for its arguments. It is therefore possible to develop a hierarchical arrangement of predicates by comparing the sets of results obtained for stipulated variables. Using this hierarchical structure, we provide proof rules which may be used to support part of the requirements capture process for logic programming.
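A toy version of this ordering idea, over a finite universe and with invented predicates, is sketched below: each predicate is identified with its set of answers, and one specification refines another when its answer set is included in the other's.

# Ordering predicates by inclusion of their answer sets (illustrative).
def answers(predicate, universe):
    """Answer set of a unary predicate over a finite universe."""
    return frozenset(x for x in universe if predicate(x))

def refines(p, q, universe):
    """p refines q iff every answer of p is also an answer of q."""
    return answers(p, universe) <= answers(q, universe)

universe = range(20)
number     = lambda x: True                 # most general specification
even       = lambda x: x % 2 == 0           # a refinement of `number`
small_even = lambda x: x % 2 == 0 and x < 10

print(refines(small_even, even, universe))  # True
print(refines(even, small_even, universe))  # False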

Graphical Specification(GRASP)

The set-based semantics of COR allows a homomorphic graphical representation based on higraphs, a combination of Venn diagrams and graphs. A graphical version of first-order COR has been designed and is being tested as an alternative graphical syntax for Prolog. This led to the study of visual declarative languages, which is the topic of Jordi Puigsegur's PhD research.

Part of this research has been embedded in the Lightweight Specification System (LSS) developed by Dave Robertson at the Department of Artificial Intelligence of the University of Edinburgh. The LSS toolkit assists in the development of logic programs using a variety of high-level specification methods, GRASP being one of them.

 

Selected Publications

J. Levy, J. Agustí (1994); Implementing Inequality and Nondeterministic Specifications with Bi-rewriting Systems. Lecture Notes in Computer Science. Springer-Verlag num. 785, pp. 252-267.

D. Robertson, J. Agustí, J. Levy, J. Hasketh. (1994); Expressing program requirements using Refinement Lattices. Fundamenta Informaticae. IOS Press Vol. 21, num. 3, pp. 163-182.

J. Puigsegur, J. Agustí, D. Robertson (1996); A Visual Logic Programming Language. Proceedings IEEE Symposium on Visual Languages. IEEE Computer Society Press, Colorado, pp. 214-221.

J. Levy, J. Agustí (1996); Bi-rewrite Systems. Journal of Symbolic Computation. Vol. 22, pp. 1-36.

 


Contributions to KNOWLEDGE ACQUISITION & MACHINE LEARNING

Since the very beginning in 1985, we have been working on knowledge acquisition as well as on learning. Early work by Ramon López de Mántaras focused on clustering algorithms based on probabilistic and fuzzy techniques. Enric Plaza addressed the problem of knowledge elicitation for expert systems, using the psychological theory of personal constructs to elicit, analyse and refine knowledge in heuristic classification. The result of this work was the EAR knowledge acquisition tool, which was the core of his PhD thesis in 1987.

In 1991, Ramon López de Mántaras introduced a new attribute selection measure for ID3-like inductive learning algorithms. The measure, based on a distance between partitions, generates trees that are smaller than those generated using the original measures introduced by Quinlan without losing prediction accuracy. Several experimental studies performed inside and outside our Institute have confirmed the advantages of this new measure.
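As a sketch of how such a criterion can be computed, assuming the normalised form d(C,A) = (H(C|A) + H(A|C)) / H(C,A) between the class partition C and the partition induced by a candidate attribute A (the attribute with the smallest distance being selected), one can proceed as follows; the toy data set is invented for the example.

# Distance-between-partitions attribute selection (assumed normalised form).
import math
from collections import Counter

def entropy(counts, total):
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def partition_distance(examples, attribute, class_label):
    """examples: list of dicts; returns the normalised distance."""
    n = len(examples)
    joint = Counter((e[attribute], e[class_label]) for e in examples)
    attr = Counter(e[attribute] for e in examples)
    cls = Counter(e[class_label] for e in examples)
    h_joint = entropy(joint.values(), n)
    h_attr = entropy(attr.values(), n)
    h_cls = entropy(cls.values(), n)
    if h_joint == 0:
        return 0.0
    # H(C|A) + H(A|C) = 2*H(A,C) - H(A) - H(C)
    return (2 * h_joint - h_attr - h_cls) / h_joint

def best_attribute(examples, attributes, class_label="class"):
    return min(attributes, key=lambda a: partition_distance(examples, a, class_label))

data = [{"outlook": "sunny", "windy": "no",  "class": "play"},
        {"outlook": "sunny", "windy": "yes", "class": "stay"},
        {"outlook": "rain",  "windy": "no",  "class": "play"},
        {"outlook": "rain",  "windy": "yes", "class": "stay"}]
print(best_attribute(data, ["outlook", "windy"]))   # -> 'windy'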

Enric Plaza and Ramon López de Mántaras are among the European pioneers in Case-Based Reasoning/Learning. Starting as early as 1989, they were the first to incorporate fuzzy techniques within a Case-Based system: a case-based apprentice able to learn from fuzzy examples. Another result of our activities in case-based research is the BOLERO system, developed in 1990-1993 by Beatriz López within her PhD. It is an important contribution to both case-based and rule-based expert systems. The object-level knowledge of BOLERO is represented by rules, and the meta-knowledge consists of solved problem instances conveniently organized in a memory of cases. The added value of such a hybrid system is the capability of learning meta-knowledge from experience. BOLERO has been integrated within the MILORD system (see Contributions to KBS Architectures) and has been successfully applied to a complex medical diagnosis problem, using as object-level knowledge the rules for diagnosing pneumonia of the PNEUMON-IA expert system previously developed in our Institute (see Applications of Knowledge-Based Systems). This research yielded important insights into the integration of learning and problem solving.

The integration of learning and problem solving as well as the integration of different learning methods within the same architecture are the core of our present activities in machine learning. The PhD thesis work of Josep Lluís Arcos addresses the flexible integration of learning and reasoning within the NOOS language and the PhD thesis work of Eva Armengol tackles the problem of the integration of learning methods based on knowledge modelling.

The Institute has been very involved in the consolidation of the European machine learning and case-based reasoning communities. Ramon López de Mántaras has served on the programme committee of the European Conferences on Machine Learning (ECML) since the beginning, and Enric Plaza has served on the programme committee of the International Conferences on Case-Based Reasoning (ICCBR) and is the programme committee co-chair of ICCBR'97. The Institute is a main node of MLnet, the European Network of Centres of Excellence in Machine Learning, and Ramon López de Mántaras is, since October 1995, the academic coordinator of MLnet.

It is also worth mentioning that AC2, one of the newest commercial inductive learning software products of the French software house Isoft, incorporates the distance measure for attribute selection proposed by Ramon López de Mántaras.

 

Selected publications

R. López de Mántaras, J. Aguilar-Martín (1985); Self-learning Pattern Classification using a Sequential Clustering Technique. Pattern Recognition Journal, Vol. 18, num.3/4, pp. 271-277.

R. López de Mántaras, L. Valverde (1988); New Results in Fuzzy Clustering based on the concept of Indistinguishability Relation. IEEE Transactions on Pattern Analysis and Machine Intelligence. Vol. 10, num. 5, pp. 754-757.

E. Plaza, R. López de Mántaras (1988); Model-based Knowledge Acquisition for Heuristic Classification Systems. European Conference on Artificial Intelligence ECAI'88. Munich, Pitman Pub., pp. 61-66

E. Plaza, R. López de Mántaras (1990); A Case-Based Apprentice that Learns from Fuzzy Examples. (Z. Ras et al., eds.) Methodologies for Intelligent Systems-5. North-Holland, pp. 420-427.

R. López de Mántaras (1991); A Distance-based Attribute Selection Measure for Decision Tree Induction. Machine Learning Journal. Vol. 6, num. 1, pp. 81-92.

B. López (1993); Reactive Planning through the integration of a case-base system and a rule-based system. (A. Sloman et al., eds.) Prospects for Artificial Intelligence. IOS Press, pp. 189-198.

B. López, E. Plaza (1993); Case-based planning for medical diagnosis. Lecture Notes in Artificial Intelligence. Springer-Verlag. num. 689, pp. 96-105.

E. Plaza, J.L. Arcos (1994); Flexible Integration of Multiple Learning Methods into a Problem Solving Architecture. Lecture Notes in Artificial Intelligence. Springer-Verlag. num. 784, pp. 403-406.

E. Armengol, E. Plaza (1995); Integrating induction in a Case-based Reasoner. Lecture Notes in Artificial Intelligence. Springer-Verlag. num. 984, pp. 3-17.

 


Contributions to REFLECTIVE SYSTEMS

A computational system is reflective when it has an internal representation of itself, that is, when it is part of its own domain. This allows the system to "reflect" computations about itself, i.e. to actually modify itself as a result of its own computation, besides its normal computation about an external domain. Around 1990, members of the Institute started to develop MILORD II, an extension of the multi-level architecture MILORD (see Contributions to KBS Architectures), with the new paradigm of "Reflection" in mind. MILORD II combines reflective capabilities with enhanced modularity, making the system more manageable and easier to design, understand and modify. It also offers the possibility of working with different ways of representing uncertainty in different modules (see Contributions to Modular Expert Systems). The actual development and implementation of MILORD II was part of the PhD thesis of Josep Puyol in 1994. Since then, several complex applications have been successfully carried out (see Applications of Knowledge-Based Systems).

Reflection continues to be one of the main research activities of the Institute. We are presently developing a particular propositional dynamic logic called Descriptive Dynamic Logic (DDL), in such a way that a multi-language knowledge base, built within a concrete architecture, can be transformed into a theory in DDL. The aim of DDL is to be able to describe and compare different reflective architectures within the same framework. Dynamic logic has traditionally been used to describe and compare computational systems, but ours is the first attempt to formalize the dynamics involved in deductive reflective systems by means of a dynamic logic. We have already formally described several reflective architectures, including MILORD II, using DDL.

Our present work on Reflection is also related to Learning. The importance of reflection for learning has been established by a long line of psychological research. The main result of this research is that reflection upon instances of failed problem solving enables the problem solver to reformulate the course of its own "thinking" so that, under similar future circumstances, it does not fail in a similar way; that is, failures are good opportunities for learning, and an intelligent system can improve its problem-solving capabilities by reflecting upon such failures. The NOOS system (see Contributions to Knowledge Acquisition and Machine Learning) incorporates reflective capabilities to learn from failures. NOOS has also been formally described using DDL.

One of the ways in which the Institute has pushed forward the state of the art in Reflective AI Architectures has been through the organization of a special issue of the journal Future Generation Computer Systems (FGCS) on "Meta-level and Reflective AI Architectures". Ramon López de Mántaras was invited to be the Guest Editor of this special issue.

 

Selected publications

J. Agustí, F. Esteva, P. García, L. Godo, R. López de Mántaras, C. Sierra (1994); Local Multi-valued Logics in Modular Expert Systems. Journal of Experimental and Theoretical Artificial Intelligence. Taylor & Francis Pub., Vol. 6 num. 3, pp. 303-321.

J. L. Arcos, E. Plaza (1996); Inference and reflection in the object-centered representation language Noos. Future Generation Computer Systems Journal. Elsevier. Special issue on Reflection and Meta-level AI Architectures. Vol. 12, pp. 173-188.

C. Sierra, L. Godo, R. López de Mántaras, M. Manzano (1996); Descriptive Dynamic Logic and its Applications to Reflective Architectures. Future Generation Computer Systems Journal. Elsevier. Special issue on Reflection and Meta-level AI Architectures. Vol. 12, pp. 157-171.

 


Contributions to EFFICIENT AUTOMATED DEDUCTION AND ALGORITHMIC OPTIMISATION

At the beginning of 1993, we started our activities in this area. The research on Automated Deduction has focused mainly on theorem provers for Many-Valued Logics and on Parallel Logic Programming.

Many-Valued Logics Theorem provers

We focus on the design of efficient automated deduction mechanisms for several kinds of Many-Valued Logics. The Many-Valued Logics considered in our research have been shown to be useful in practice. Their main and common feature is that their (finite or infinite) set of truth values is linearly ordered. We have presented pioneering results on the design of efficient inference engines and logic programming interpreters for some Many-Valued Logics. Our ongoing research extends this work towards efficient automated theorem provers for more general many-valued formulas.
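The flavour of such interpreters can be conveyed by a naive fixpoint evaluator for propositional many-valued Horn programs over the linearly ordered truth set [0,1]: a rule contributes the minimum of its own degree and its body values to its head, and atoms keep the maximum value derived so far. This conveys the semantics only; the provers mentioned above use dedicated data structures to achieve much better time bounds, and the tiny rule base below is invented.

# Naive fixpoint interpreter for propositional many-valued Horn programs.
def mv_horn_fixpoint(facts, rules):
    """facts: {atom: degree}; rules: list of (body_atoms, head, degree)."""
    value = dict(facts)
    changed = True
    while changed:
        changed = False
        for body, head, alpha in rules:
            fired = min([alpha] + [value.get(a, 0.0) for a in body])
            if fired > value.get(head, 0.0):
                value[head] = fired
                changed = True
    return value

facts = {"fever": 0.8, "cough": 0.6}
rules = [(["fever", "cough"], "flu", 0.9),     # fever & cough ->(0.9) flu
         (["flu"], "stay_home", 1.0)]
print(mv_horn_fixpoint(facts, rules))
# {'fever': 0.8, 'cough': 0.6, 'flu': 0.6, 'stay_home': 0.6}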

The first PhD thesis on Automated Deduction in Many-Valued Logics, being carried out by F. Manyà, is expected to be completed during 1996.

Parallel Logic Programming

Given a logic program as input, our aim is to interpret it, i.e. to resolve it and obtain its solution(s) (if they exist), by adequately controlling the chains of inferences performed on a von Neumann parallel architecture. The essential foundations of our approach lie in two general notions: first, a hybrid depth-first search of the state space modelling the logic program; and second, the introduction of a new object called a multisystem, together with two multi-inference operations defined over multisystems. This approach has been analytically proved to significantly outperform the best-known classical approaches. Its implementation on a parallel computer is being planned.

Algorithmic Optimisation

Some computational problems call for solutions that mainly consist in designing efficient algorithms according to computation and algorithmic theory. Our contributions have addressed the following problems:

Algorithm for the Scheduling Problem

This problem is formed by a set of tasks, resources and restrictions expressing incompatibilities among task execution times and resource-sharing tasks. In a few words, we introduced an incomplete algorithm that proposes a partial task order in which, in general, only a few restrictions remain unsatisfied, independently of whether there exists a task order satisfying all the specified restrictions. This price is paid in order to ensure the fundamental property of having a polynomial worst-case complexity; this matters because such an NP-hard problem can become intractable when it is formed by many tasks, resources and restrictions, which is indeed what happens in real applications. Searching for a solution close to the optimal one (if such exists), the method can run in semi-automatic mode: every time the algorithm cannot satisfy a set of restrictions, it asks the expert which is the "least important" restriction of the set. Hence, the proposed order satisfies the most important restrictions according to the expert's knowledge.
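The following is a toy reconstruction of this idea, not the Institute's algorithm: tasks are ordered greedily from precedence restrictions and, whenever the remaining restrictions cannot all be met (a cycle is detected), the least important one is discarded; here a numeric priority stands in for asking the expert. The procedure always terminates in polynomial time but may leave some restrictions unsatisfied.

# Greedy, incomplete scheduling with expert-guided relaxation (toy version).
def schedule(tasks, restrictions):
    """restrictions: list of (before, after, priority)."""
    active = list(restrictions)
    order, remaining, dropped = [], set(tasks), []
    while remaining:
        blocked = {b for (a, b, _) in active if a in remaining and b in remaining}
        ready = sorted(remaining - blocked)
        if not ready:                                # cycle among remaining tasks
            worst = min((r for r in active
                         if r[0] in remaining and r[1] in remaining),
                        key=lambda r: r[2])
            active.remove(worst)                     # "ask the expert", drop it
            dropped.append(worst)
            continue
        order.append(ready[0])
        remaining.remove(ready[0])
    return order, dropped

tasks = ["cut", "weld", "paint"]
restrictions = [("cut", "weld", 9), ("weld", "paint", 8), ("paint", "cut", 1)]
order, dropped = schedule(tasks, restrictions)
print(order)     # ['cut', 'weld', 'paint']
print(dropped)   # [('paint', 'cut', 1)]  -- the least important restriction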

Algorithms for Non-Supervised Learning Problems

A second algorithmic optimisation contribution was made in the setting of non-supervised learning. In short, this research area tries to determine general concepts from a set of known elementary pieces of knowledge; it is, therefore, an abstraction process. The proposed learning system involves two polynomial algorithms. The first is a classification algorithm that takes the whole set of elementary knowledge pieces as input and produces a partition of them as output; elementary pieces are put into classes according to a certain criterion of similarity among them. A second algorithm then takes this partition and tries to discover more abstract, higher-level knowledge by transforming each class into a hierarchy of subclasses using new, class-local similarity criteria. The complexity of this second algorithm is also polynomial, and its power to determine fine and abstract concepts has been satisfactorily demonstrated in practical experiments.
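A toy reconstruction of the two-stage scheme (with an assumed attribute-value representation and similarity thresholds, not the published algorithms) could look as follows: a first pass partitions the elementary pieces under a global similarity criterion, and a second pass refines each class with a stricter, class-local criterion.

# Two-stage non-supervised abstraction (illustrative reconstruction).
def similarity(x, y):
    """Fraction of attribute values two pieces of knowledge share."""
    keys = set(x) | set(y)
    return sum(x.get(k) == y.get(k) for k in keys) / len(keys)

def partition(items, threshold):
    classes = []
    for it in items:
        for cl in classes:                       # join the first class it fits
            if all(similarity(it, other) >= threshold for other in cl):
                cl.append(it)
                break
        else:
            classes.append([it])
    return classes

def two_stage(items, global_t=0.5, local_t=0.8):
    return [partition(cl, local_t) for cl in partition(items, global_t)]

items = [{"shape": "round", "colour": "red"},
         {"shape": "round", "colour": "green"},
         {"shape": "square", "colour": "red"},
         {"shape": "square", "colour": "blue"}]
print(two_stage(items))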

We are involved in a European COST Action called "Many-valued Logics for Computer Science", which assembles highly reputed European research groups conducting research on several computational aspects of Many-Valued Logics. We have also had an active and close collaboration with the Computer Science Faculty of the University of Karlsruhe, sponsored by the German and Spanish Ministries of Education and Science. Finally, we have been invited to prepare a monographic special issue on Automated Deduction in Many-Valued Logics for the international journal Mathware and Soft Computing.

 

Selected publications

G. Escalada-Imaz (1992); A Parallel Interpretation Model of Logic Programs Based on Multisystems. (M. Valero et al., Eds.) Parallel Computing and Transputer Applications IOS Press/CIMNE, Part I, pp.745-754.

J.M. Lázaro, J. Maseda, F. Díaz, H. Stureson, G. Escalada-Imaz (1993); INTESIMPRO-Simple Dynamic Scheduling for Discrete Manufacturing. (J. Dorn & K. Froeschl, Eds.) Scheduling of Production Processes. Ellis Horwood: Ellis Horwood Series in Artificial Intelligence. pp 130-138.

G. Escalada-Imaz, F. Manyà (1994); The Satisfiability Problem in Multiple-valued Horn Formulae. IEEE International Symposium on Multiple-Valued Logic. Boston, USA. pp 250-256.

A. M. Martínez-Enríquez, G. Escalada-Imaz, C. Villegas-Santoyo (1994); Integration of different Heuristics to learn concepts. 1994 IEEE International Conference on Systems, Man, and Cybernetics. IEEE Systems, Man and Cybernetics Society. Vol. 1, pp. 425-430.

G. Escalada-Imaz, F. Manyà (1995); Efficient Interpretation of Propositional Multiple-valued Logic Programs. Lecture Notes in Computer Science. Springer, Vol. 945, pp 428-439.

G. Escalada-Imaz, F. Manyà; (1995); Reducing Time by avoiding Process Serialisations in Or-Parallel Interpretation of Logic Programs. 1995 IEEE International Conference on Systems, Man and Cybernetics. IEEE Systems, Man and Cybernetics Society. Vancouver, Vol. 5, pp. 4539-4544.

 


Applications of KNOWLEDGE-BASED SYSTEMS

In 1986 we started working on the solution of real problems using our MILORD tool (see Contributions to KBS Architectures). The first application was the expert system PNEUMON-IA, for the diagnosis of community-acquired pneumonia. This problem required extensive research in the area of uncertainty in order to satisfactorily represent the lack of precise diagnostic procedures in the domain, and it took two years to complete. In 1988 it was validated, and the results were presented in 1989 in Albert Verdaguer's M.D. thesis. In 1987 we started another application in the area of rheumatological diseases and collagenosis. The more heterogeneous nature of the set of diseases included in this application forced us to develop more complex and declarative control structures to represent the dynamics of the reasoning that the expert needed to model the diagnostic processes. These problems partly motivated the group's current interest in the study of computational reflection. The application was validated in 1989 and the results were published in Miquel Belmonte's M.D. thesis in 1990.

We have also designed a Real-Time Expert System to monitor all the signals coming from the sensors that measure the variables relevant to assessing the state of a newborn child during a post-surgical period. As soon as the state of a patient becomes irregular, it must be immediately detected and evaluated and, in the case of a serious condition, the Expert System has to quickly activate the calls to nurses and/or physicians. Thus, our Expert System works under very strong time restrictions. In order to fulfil these real-time restrictions, we have proposed a simple representation language for the Knowledge Base and an off-line compiler.

We have also been involved in applications to industrial problems. In particular, from 1988 to 1992, we developed a successful diagnostic system for defects in TV screens manufactured by PHILIPS. This research was done in the framework of ESPRIT I and ESPRIT II projects. The diagnostic system was connected to a vision system capable of detecting different categories of defects, and to an information system that provided data from the process plant. Using this combination of information, a ranking of the most plausible causes of the defect was generated as output. Another industrial problem we have been working on using MILORD II (see Contributions to KBS Architectures) is the supervision of production in pig farms. The results of this work are going to be transferred to several farms in the near future thanks to a grant from the Spanish Ministry of Industry.

Another application that has been finished and validated using MILORD II is Spong-IA, an automatic classification tool for marine sponges. It covers all the Atlanto-Mediterranean taxonomy up to the level of family, and part of it up to the level of species. It passed an international expert validation process with great success. The main results of this work were presented in 1995 in Marta Domingo's PhD thesis in Biology.

Other applications are currently under development in the areas of Medicine and Agriculture, all of them following the Institute's now long tradition of interdisciplinary work, as the already completed applications show.

 

Selected publications

R. Felix, L. Godo, A. Hoffmann, C. Moraga, C. Sierra (1989); VLSI Chip-Architecture Selection using Reasoning Based on Fuzzy Logic. Proceedings of the 19th IEEE International Symposium on Multiple-valued Logic. ESMVL'89. China. 165-171.

A. Verdaguer (1989); PNEUMON-IA: desenvolupament i validació d'un sistema expert d'ajuda al diagnòstic mèdic. PhD thesis, Universitat Autònoma de Barcelona.

M.A. Belmonte (1990); RENOIR: un sistema experto para la ayuda en el diagnóstico de colagenosis y artropatías inflamatorias. PhD thesis, Universitat Autònoma de Barcelona.

A. Verdaguer, A. Patak, J.J. Sancho, C. Sierra, F. Sanz (1992); Validation of the Medical Expert System PNEUMON-IA. Computers and Biomedical Research. vol. 25, num. 6, pp. 511-526.

L. Vila, C. Sierra, A.B. Martínez, J. Climent (1992); Intelligent Process Control by means of Expert Systems and Machine Vision. Lecture Notes in Computer Science. Springer-Verlag. num. 604. , pp. 185-194.

G. Escalada-Imaz, J. Jaureguizar, X. Pastor and G. Fita (1993); Interpreting Physiopathological States under Environment Temporal Restrictions. (S. Andreassen et al., Eds.) Artificial Intelligence in Medicine. IOS Press, Vol. 10, pp. 241-244.

M. Belmonte, C. Sierra, R. López de Mántaras (1994); RENOIR An Expert System Using Fuzzy Logic for Rheumatology Diagnosis. International Journal of Intelligent Systems. John Wiley & Sons, Inc. Vol. 9, num. 11, pp. 985-1000.

Ch. Hernández, J.J. Sancho, M.A. Belmonte, C. Sierra, F. Sanz (1994); Validation of the Medical Expert System RENOIR. Computers and Biomedical Research Academic Press, Vol. 27, num. 6, pp. 456-471.

M. Domingo (1995); An Expert System Architecture for Taxonomic Domains. An Application in Porifera: The development of Spongia. PhD thesis, University of Barcelona.

J. Jaureguizar-Núñez, X. Pastor-Durán, G. Escalada-Imaz, A. Palomeque-Rico, F. Fita-Rodríguez, R. Lozano-Rubí; A Knowledge-Based System for Real-Time Physiopathological Diagnosis States in a Critical Care Setting. New Review of Expert Systems Journal. (in press)

 


Applications of MACHINE LEARNING

Application to Medical Diagnosis

A time-consuming problem in building expert systems is the fine-tuning of rule interactions needed to obtain valid behavior. An empirical study made at Carnegie Mellon University points out that the knowledge engineer spends about 30% to 60% of the time performing such "tuning". Beatriz López and Enric Plaza tackled this important practical issue with the development of the BOLERO system (see Contributions to Knowledge Acquisition and Machine Learning) and its application to the domain of pneumonia diagnosis. The goal was to learn the control knowledge needed to steer the rule base of medical knowledge so as to automatically "tune" the medical expert system.

The approach taken by BOLERO is that of a meta-level system that uses the reflection mechanism involved in object-level/meta-level interaction to integrate learning and problem solving. BOLERO consists of:

* the rule-based object-level of MILORD (see Contributions to KBS Architectures) capable of approximate reasoning for diagnosis tasks, and

* a meta-level case-based planner that learns how to decide, at each instant, which object-level diagnoses are considered worthwhile.

The object level contains domain knowledge for problem solving (how to deduce plausible diagnoses from the patient data), while the meta-level contains (in fact, learns) strategic knowledge (planning which of all the possible goals are likely to be useful).

The case-based planner is used to control the search space of the object level, improving the efficiency of the whole system. The control regime is reactive: every time there is new information at the object level, control is passed to the meta-level, ensuring that the system is dynamically capable of generating new strategic plans for the currently available information. As a result, BOLERO is capable of reactive behavior: any change in the situation can provoke a change in the strategic plan and thus in the goals to be pursued by the system. BOLERO is the first planning system we are aware of that combines case-based reasoning and learning with reactive capabilities organized by a reflective architecture. BOLERO was developed by Beatriz López within her PhD "Learning strategic knowledge for diagnostic systems".
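The reactive object-level/meta-level loop can be caricatured as follows; the data structures, the similarity measure and the tiny rule base are assumptions made for the example and do not reproduce BOLERO's actual implementation.

# Caricature of a reactive meta-level / object-level control loop.
def similarity(findings, case_findings):
    return len(findings & case_findings) / max(len(findings | case_findings), 1)

def meta_level_plan(findings, case_memory):
    """Case-based planner: goals of the most similar solved case."""
    best = max(case_memory, key=lambda c: similarity(findings, c["findings"]))
    return best["goals"]

def object_level_step(goal, findings, rules):
    """Fire the rules relevant to one goal; may produce new findings."""
    new = {concl for cond, concl in rules.get(goal, []) if cond <= findings}
    return new - findings

def control_loop(stream, rules, case_memory):
    findings = set()
    for finding in stream:                              # reactive: new data arrives
        findings.add(finding)
        plan = meta_level_plan(findings, case_memory)   # reflect up: re-plan goals
        for goal in plan:                               # reflect down: pursue them
            findings |= object_level_step(goal, findings, rules)
    return findings

rules = {"pneumonia?": [({"fever", "cough"}, "suspect_pneumonia")]}
case_memory = [{"findings": {"fever", "cough"}, "goals": ["pneumonia?"]},
               {"findings": {"rash"}, "goals": ["allergy?"]}]
print(control_loop(["fever", "cough"], rules, case_memory))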

Application to Protein Purification

The integration of different learning methods and different problem-solving methods is an important topic for improving the use of Machine Learning in industrial and service settings. Two systems have been built to investigate the combination of case-based learning and inductive learning. The first one is CROMA, a system to support decision making in the domain of protein purification, developed by Eva Armengol and Enric Plaza using the NOOS language (see Contributions to Knowledge Acquisition and Machine Learning). Given a specific protein, and a tissue or a culture from which to purify it, CROMA recommends a purification plan formed by a sequence of lab operations called chromatographic techniques. The system learns from past protein purification cases in two ways: with a case-based learning method and with an inductive learning method. Moreover, the case-based reasoning method and the classification method solve new problems using the knowledge that the respective learning method has acquired. Empirical results show that the different strategies for combining these methods make CROMA more or less efficient; the experiments show that a meta-level method that adaptively selects, on a case-by-case basis, the method to use is better than a pre-fixed application strategy of methods, as is usual in some multistrategy learning systems.

Application to Marine Sponge Identification

A second learning system developed in NOOS by Eva Armengol and Enric Plaza is SPIN, a system for the identification of marine sponges. The main goal here has been to investigate learning methods in a domain where instance descriptions (sponges) have incomplete data and non-applicable predicates. The structured representation supported by the NOOS language has proved to be adequate to capture the non-applicability of predicates. A new inductive method that generalizes structured instances into structured descriptions, based on the notion of antiunification of feature terms, has been developed. Moreover, the structured representation of instances in the domain of sponges yields generalizations that --being structured-- are very intuitive to the domain experts.
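A toy antiunification of feature terms, represented here as nested dictionaries with a sort and some features, gives the flavour of this generalisation step; the sort taxonomy and the sponge attributes are invented, and NOOS feature terms are considerably richer than this.

# Toy antiunification (least general generalisation) of feature terms.
TAXONOMY = {"smooth": "surface", "rough": "surface",        # child -> parent
            "surface": "any", "tube": "shape", "sphere": "shape",
            "shape": "any"}

def ancestors(sort):
    chain = [sort]
    while sort in TAXONOMY:
        sort = TAXONOMY[sort]
        chain.append(sort)
    return chain

def lub(s1, s2):
    """Least common supersort of two sorts (defaults to 'any')."""
    return next((s for s in ancestors(s1) if s in ancestors(s2)), "any")

def antiunify(t1, t2):
    """Most specific feature term subsuming both t1 and t2."""
    if isinstance(t1, dict) and isinstance(t2, dict):
        out = {"sort": lub(t1.get("sort", "any"), t2.get("sort", "any"))}
        for f in (set(t1) & set(t2)) - {"sort"}:     # keep only shared features
            out[f] = antiunify(t1[f], t2[f])
        return out
    return t1 if t1 == t2 else "any"                 # atomic values

sponge1 = {"sort": "sponge", "body": {"sort": "tube"},
           "surface": {"sort": "smooth"}, "colour": "orange"}
sponge2 = {"sort": "sponge", "body": {"sort": "sphere"},
           "surface": {"sort": "rough"}}
print(antiunify(sponge1, sponge2))
# e.g. {'sort': 'sponge', 'body': {'sort': 'shape'}, 'surface': {'sort': 'surface'}}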

 

Selected publications

B. López (1993); Aprenentatge i generació de plans per a sistemes experts. PhD thesis, Universitat Politècnica de Catalunya.

B. López (1993); Reactive Planning through the integration of a case-base system and a rule-based system. (A. Sloman et al., eds.) Prospects for Artificial Intelligence. IOS Press, pp. 189-198.

B. López, E. Plaza (1993); Case-based planning for medical diagnosis. Lecture Notes in Artificial Intelligence. Springer-Verlag. num. 689, pp. 96-105.

E. Armengol, E. Plaza (1995); Integrating induction in a Case-based Reasoner. Lecture Notes in Artificial Intelligence. Springer-Verlag. num. 984, pp. 3-17.

Continuing work after 1995 can be found on the Learning Systems webpage.

 


Applications of FUZZY LOGIC TO AUTONOMOUS MINI-ROBOTS

A very recent application of Fuzzy Logic, undertaken in our Institute within the framework of Maite López's PhD work, addresses the problem of the acquisition of maps of unknown environments by means of a troop of autonomous mini-robots (the ANTs project). The goal of map generation is to obtain the most plausible position of walls and obstacles based on the infrared perception of several mini-robots. The mini-robots detect portions of walls or obstacles with different degrees of precision, depending on the length of the run and the number of turns they have made. The main problem is to decide whether several detected portions, represented by imprecise segments, come from the same wall or obstacle. If two segments come from the same wall or obstacle, a segment fusion procedure is applied to produce a single segment. This process of segment fusion is followed by a completion process in which hypotheses are made with respect to non-observed regions. The completion process is achieved by means of hypothetical reasoning based on declarative heuristic knowledge about the orthogonal environments in which the mini-robots move. Finally, an alignment process also takes place so that, for example, two walls separated by a doorway are properly aligned. All these operations are based on modelling the imprecise segments by means of fuzzy sets; more concretely, the position of a wall segment is a fuzzy number and its length a fuzzy interval. The main advantage of using fuzzy techniques is that the position and imprecision of the resulting fused segments can be computed very easily. Furthermore, using fuzzy sets to model the imprecision of the position of obstacles is very appropriate. The results obtained are extremely encouraging.
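A simplified sketch of the fusion step, under assumptions made only for the example (walls parallel to the x axis, the perpendicular position of a segment modelled as a symmetric triangular fuzzy number, and the extent along the wall kept as a crisp interval rather than a fuzzy one), could be the following.

# Toy fusion of two imprecise wall-segment observations.
def possibility(p1, p2):
    """Possibility that two triangular fuzzy positions describe the
    same wall: height of the intersection of the two triangles."""
    (c1, s1), (c2, s2) = p1, p2
    return max(0.0, 1.0 - abs(c1 - c2) / (s1 + s2))

def fuse(seg1, seg2):
    """Precision-weighted fusion of positions; union of extents."""
    (c1, s1), (x1a, x1b) = seg1
    (c2, s2), (x2a, x2b) = seg2
    w1, w2 = 1.0 / s1, 1.0 / s2                 # sharper segment weighs more
    centre = (w1 * c1 + w2 * c2) / (w1 + w2)
    spread = min(s1, s2)                        # fused estimate is no vaguer
    return (centre, spread), (min(x1a, x2a), max(x1b, x2b))

# Two observations of (possibly) the same wall by different mini-robots:
seg_a = ((2.00, 0.30), (0.0, 1.5))     # position ~2.0 m, seen from x=0..1.5
seg_b = ((2.15, 0.10), (1.2, 3.0))     # sharper estimate, seen further right
if possibility(seg_a[0], seg_b[0]) > 0.5:
    print(fuse(seg_a, seg_b))          # roughly ((2.11, 0.1), (0.0, 3.0))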

This application is being carried out in close collaboration with Josep Amat's team from the Technical University of Barcelona. Robotics is a rich and challenging field for fuzzy set applications as well as for learning, and we plan to continue our activities in this field.

 

Selected publications

J. Amat, F. Esteva and R. López de Mántaras (1995); Autonomous navigation troup for cooperative modelling of unknown environments. Proceeding of the 7th International Conference on Advanced Robotics, ICAR'95. San Feliu de Guíxols, Spain, Vol. I, pp. 383-389.

J. Amat, R. López de Mántaras, C. Sierra; Cooperative Autonomous Low-cost Robots for Exploring Unknown Environments. Lecture Notes in Control and Information Sciences, Springer-Verlag (in press).