The Noos Representation Language
The aim of my PhD was the design and implementation of a representation language for developing knowledge systems that integrate problem solving and learning. Our proposal was that this goal could be achieved with a representation language whose constructs are close to knowledge modeling frameworks and that provides episodic memory and reflective capabilities.
We developed a reflective object-centered representation language close to knowledge modeling frameworks. The language was based on the task/method decomposition principle and on the analysis of the models required and constructed by problem solving methods. The language was formalized using feature terms, a formal approach to object-centered representation, which gave us a formalism for integrating different learning techniques.
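As an illustration, feature terms can be thought of as nested feature-value structures ordered by subsumption (one term being more general than another). The following minimal Python sketch is illustrative only, not the Noos implementation; the data and the `subsumes` function are assumptions made for the example:

```python
# A minimal sketch of feature terms as nested feature-value maps
# (names and structure are illustrative, not the Noos formalism itself).

def subsumes(general, specific):
    """True if `general` subsumes `specific`: every feature of the
    general term must appear in the specific term with a value that
    is itself subsumed (atomic values must match exactly)."""
    if not isinstance(general, dict):
        return general == specific      # atomic values: exact match
    if not isinstance(specific, dict):
        return False                    # a structured term cannot match an atom
    return all(
        feat in specific and subsumes(val, specific[feat])
        for feat, val in general.items()
    )

# A (hypothetical) episode describing a solved problem, and a query term:
episode = {"task": "classify", "domain": "marine-sponges",
           "solution": {"class": "axinellida"}}
query = {"task": "classify", "domain": "marine-sponges"}

print(subsumes(query, episode))  # True: the query is more general
print(subsumes(episode, query))  # False: the episode has extra features
```

Subsumption of this kind is what lets a retrieval method match a new problem against stored episodes by structure rather than by exact equality.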
Integrating machine learning tasks implies that the knowledge modeling of the implemented knowledge system has to include the modeling of learning goals. Moreover, machine learning techniques have to be modeled inside the knowledge modeling framework, and their knowledge requirements have to be addressed. The integration of learning requires the capability of accessing (introspecting on) solved problems, which we call episodes, and of modifying the knowledge of the system.
The second proposal was that learning methods are methods (in the sense of problem solving methods in knowledge modeling) with introspection capabilities, and that they can be analyzed through the same task/method decomposition. Thus, learning methods can be uniformly represented as methods and integrated into our framework.
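The task/method decomposition principle can be sketched in a few lines. In this hypothetical Python sketch (all class, task, and method names are assumptions made for the example), a task is solved by a method, which either computes a result directly or decomposes the task into subtasks; learning methods would be plugged in the same way, since they share the uniform method representation:

```python
# Illustrative sketch of task/method decomposition (not the Noos language).

class Method:
    def __init__(self, name, subtasks=(), solve=None):
        self.name = name
        self.subtasks = list(subtasks)   # decomposition into subtasks
        self.solve = solve               # leaf-level computation, if any

class Task:
    def __init__(self, name, method):
        self.name = name
        self.method = method

    def run(self, data):
        if self.method.solve is not None:     # leaf method: compute directly
            return self.method.solve(data)
        for sub in self.method.subtasks:      # otherwise run the subtasks
            data = sub.run(data)
        return data

# CBR decomposed as retrieve -> adapt (stubbed computations):
retrieve = Task("retrieve", Method("nearest", solve=lambda d: {**d, "case": "c1"}))
adapt = Task("adapt", Method("copy", solve=lambda d: {**d, "solution": d["case"]}))
cbr = Task("solve-problem", Method("retrieve-and-adapt", subtasks=[retrieve, adapt]))

print(cbr.run({"problem": "p"}))
# {'problem': 'p', 'case': 'c1', 'solution': 'c1'}
```

A learning method would simply be another `Method` whose subtasks inspect episodes and modify system knowledge, which is the uniformity the proposal argues for.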
The third proposal was that whenever some knowledge is required by a problem solving method, and that knowledge is not directly available, there is an opportunity for learning. We called those opportunities impasses, following SOAR terminology, and the integration of learning was realized by learning methods capable of solving these impasses.
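The impasse mechanism can be illustrated with a small sketch: when a problem solving method requires knowledge that is not directly available, it signals an impasse, and a registered learning method tries to resolve it. Everything here (the `Impasse` class, the knowledge items, the stubbed learning method) is a hypothetical example, not the actual Noos machinery:

```python
# Illustrative sketch of impasse-driven learning integration.

class Impasse(Exception):
    """Signals that a method required knowledge that is not available."""
    def __init__(self, missing):
        super().__init__(missing)
        self.missing = missing

knowledge = {"similarity-measure": "euclidean"}

# Learning methods that can resolve impasses for particular knowledge
# items; the body is stubbed, but in practice it would e.g. induce the
# missing knowledge from stored episodes.
learning_methods = {
    "adaptation-rule": lambda: "copy-and-substitute",
}

def require(feature):
    if feature not in knowledge:
        raise Impasse(feature)          # opportunity for learning
    return knowledge[feature]

def solve(feature):
    try:
        return require(feature)
    except Impasse as imp:
        learn = learning_methods.get(imp.missing)
        if learn is None:
            raise                       # no learning method for this impasse
        knowledge[imp.missing] = learn()  # store the learned knowledge
        return knowledge[imp.missing]

print(solve("adaptation-rule"))  # learned on demand: copy-and-substitute
```

The key point is that learning is triggered exactly where problem solving fails for lack of knowledge, rather than as a separate offline phase.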
Examples of applications developed with Noos were presented.
cF: yet another Case-Based Reasoning Framework
The cF Framework is a child of the Noos platform. The goal was to concentrate our efforts on providing a simple framework for developing prototypes of CBR applications.
When AI technologies are applied to real-world problems, it is often difficult for developers to anticipate all possible eventualities. Especially in long-lived systems, changing circumstances may require changes not only to domain knowledge but also to the reasoning process that brings it to bear.
This research investigates applying introspective reasoning to improve the performance of a case-based reasoning (CBR) system by guiding learning that improves how the system applies its cases. The success of a CBR system depends not only on its cases but also on its ability to use those cases appropriately in new situations, which in turn depends on factors such as the system's similarity measure and its case adaptation mechanism. Consequently, it is desirable to enable CBR systems to improve the knowledge and processes by which they bring their cases to bear.
The goal of our introspective reasoning system is to detect reasoning failures and to refine the functioning of the reasoning mechanisms, in order to improve system performance on future problems. To achieve this goal, the introspective reasoner monitors the reasoning process, determines the possible causes of its failures, and performs actions that will affect future reasoning processes.
Our model is domain independent: it focuses on the general case-based reasoning process of retrieval and adaptation rather than on the specific details of those processes in any particular domain. The model deals with three types of knowledge: indexing knowledge, ranking knowledge, and adaptation knowledge. To apply the model to a concrete application, domain-specific retrieval and adaptation mechanisms must be linked to it.
Introspective reasoning is organized into five tasks:
(1) the monitoring task, in charge of maintaining a trace of the CBR process;
(2) the quality assessment task, which analyzes the quality of the solutions proposed by the system;
(3) the blame assessment task, responsible for identifying the reasoning failures;
(4) the hypothesis generation task, in charge of proposing learning goals; and
(5) the hypothesis evaluation task, which assesses the impact of proposed improvements on solution generation (see Figure below).
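The five tasks above can be sketched as a pipeline. In this schematic Python sketch every function body is an illustrative stub (the trace structure, scores, and learning-goal names are assumptions made for the example, not the actual model):

```python
# Schematic sketch of the five introspective tasks as a pipeline.

def monitor(cbr_run):
    """(1) Monitoring: record a trace of the CBR process."""
    return {"trace": cbr_run, "solution": cbr_run.get("solution")}

def assess_quality(trace):
    """(2) Quality assessment: judge the proposed solution (stub)."""
    return trace["solution"] is not None and trace["solution"]["score"] >= 0.8

def assess_blame(trace):
    """(3) Blame assessment: identify the failed reasoning steps."""
    return [step for step in trace["trace"]["steps"] if step["failed"]]

def generate_hypotheses(failures):
    """(4) Hypothesis generation: propose learning goals."""
    return [f"refine-{f['knowledge']}" for f in failures]

def evaluate_hypotheses(goals):
    """(5) Hypothesis evaluation: keep goals expected to improve
    solution generation (stub: accepts every refinement goal)."""
    return [g for g in goals if g.startswith("refine-")]

run = {"solution": {"score": 0.5},
       "steps": [{"failed": True, "knowledge": "indexing"},
                 {"failed": False, "knowledge": "adaptation"}]}

trace = monitor(run)
if not assess_quality(trace):
    goals = evaluate_hypotheses(generate_hypotheses(assess_blame(trace)))
    print(goals)  # ['refine-indexing']
```

The sketch shows only the control flow between the five tasks; in the model each stage would consult the indexing, ranking, or adaptation knowledge linked to the concrete application.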