A Project coordinated by IIIA.
Principal investigator:
Collaborating organisations:
Universitat de Lleida (UdL)
As automated decision-making systems become ubiquitous thanks to advances in Artificial Intelligence, there are pressing requirements, both ethical and legal, to build such systems in a way that makes them trustworthy. Data-driven AI systems have been applied successfully in diverse domains, including the modeling of social platforms and cybersecurity analysis in general; however, many of these systems can neither explain the rationale behind their decisions nor do they possess the specific knowledge to do so, which is a major drawback, especially in critical domains. It is now clear to the scientific community that the effectiveness of these systems is limited by their inability to explain their decisions and actions to human users. In line with this, the main goal of this project is to work towards the formalization and development of explainable automated reasoning tasks. In particular, we focus on the role of conflicting information in automated decision-making within explainable intelligent systems. The notion of inconsistency, or conflict, arises from the over-specification of information and has been studied extensively in many contexts over recent decades. Our hypothesis is that the use of formal logic-based languages for knowledge representation and reasoning is key to the construction of explainable systems that handle inconsistencies. Given the varied skill set that this proposal brings together, we aim to develop both theoretical foundations and methods and tools to represent and handle conflicting information within explainable automated decision-making systems.
First, since argumentation-based reasoning has demonstrated potential for inconsistency handling, we propose to investigate several extensions that significantly increase the expressive power of such frameworks. In particular, the use of conditionals as the representation language and the incorporation of domain-specific knowledge in the form of weighted arguments have the potential to handle conflicting information better, and will likely pose new challenges for the definition of the semantics of the argumentation process, as well as for its computational complexity and algorithmic properties.
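To illustrate the kind of machinery involved, the following is a minimal sketch, in Python, of a Dung-style argumentation framework extended with argument weights. The rule that an attack succeeds only when the attacker is at least as strong as its target, and all names in the code, are illustrative assumptions rather than the project's actual formalism; the grounded extension is computed as the least fixed point of the characteristic function.

    # Illustrative sketch (assumed semantics, not the project's formalism):
    # arguments carry weights, and an attack is effective only if the
    # attacker's weight is at least that of its target.
    def grounded_extension(arguments, attacks, weights):
        # Keep only attacks whose attacker is at least as strong as its target.
        effective = {(a, b) for (a, b) in attacks if weights[a] >= weights[b]}

        def defended_by(s, arg):
            # 's' defends 'arg' if every effective attacker of 'arg' is
            # itself attacked by some member of 's'.
            return all(any((d, att) in effective for d in s)
                       for (att, tgt) in effective if tgt == arg)

        # Least fixed point: repeatedly collect all arguments defended so far.
        extension = set()
        while True:
            new = {a for a in arguments if defended_by(extension, a)}
            if new == extension:
                return extension
            extension = new

    # Toy example: b attacks a, c attacks b; c is strong enough to reinstate a.
    args = {"a", "b", "c"}
    atts = {("b", "a"), ("c", "b")}
    w = {"a": 1.0, "b": 2.0, "c": 3.0}
    print(grounded_extension(args, atts, w))  # {'a', 'c'} under these assumptions

Under the richer languages envisaged in the project (conditionals, graded weights), both the notion of an effective attack and the fixed-point computation above would become substantially more involved, which is precisely where the semantic and complexity challenges mentioned above arise.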
Second, we plan to work on SAT and its optimization extensions MaxSAT and MinSAT. MaxSAT and MinSAT take as input possibly inconsistent formulas and, respectively, minimize and maximize the number of falsified constraints. While SAT is competitive for decision problems, MaxSAT and MinSAT provide a competitive approach to optimization problems. We propose to devise new in-processing and pre-processing techniques that improve the performance of SAT and MaxSAT solvers, to develop new solvers for MinSAT, to define and analyze proof systems for MaxSAT/MinSAT, and to apply such solvers to challenging argumentation and explainability problems.
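As a toy illustration of the difference between the two optimization problems, the following brute-force sketch (a hypothetical example, unrelated to the solvers the project will develop) counts falsified clauses for every assignment; the MaxSAT optimum corresponds to the minimum of these counts and the MinSAT optimum to the maximum.

    # Toy brute-force illustration (hypothetical clauses, not project code).
    # DIMACS-style literals: a positive int is a variable, a negative int its negation.
    from itertools import product

    clauses = [[1, 2], [-1, 3], [-2, -3], [1, -3], [-1, -2]]
    n_vars = 3

    def falsified(assignment, clauses):
        # Count clauses with no satisfied literal under a {var: bool} assignment.
        def sat(clause):
            return any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        return sum(not sat(c) for c in clauses)

    costs = []
    for values in product([False, True], repeat=n_vars):
        assignment = {i + 1: v for i, v in enumerate(values)}
        costs.append(falsified(assignment, clauses))

    print("MaxSAT optimum (fewest falsified clauses):", min(costs))
    print("MinSAT optimum (most falsified clauses):", max(costs))

Real solvers avoid this exponential enumeration through branch-and-bound, resolution-style inference, and the pre- and in-processing techniques the project aims to develop.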
Finally, we want to apply the theoretical and computational results of the project to the analysis of discussions in social networks and to threat detection in cybersecurity. On the one hand, we aim to use argumentation-based reasoning to understand how users interact with one another and to characterize their behavior in a social network. On the other hand, we propose to explore explanations for intelligent cybersecurity systems based on an argumentative interpretation of the predictions made by cybersecurity classifiers.
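As a hypothetical sketch of how a discussion could feed the argumentation machinery above, the snippet below treats posts as arguments and disagreeing replies as attacks; the post identifiers and the disagreement labels (which could come, for instance, from a stance classifier) are assumed inputs, not project results.

    # Hypothetical sketch: turning a discussion thread into an attack graph.
    posts = {"p1", "p2", "p3", "p4"}
    # (reply, replied_to, is_disagreement) triples -- toy data for illustration.
    replies = [("p2", "p1", True), ("p3", "p2", True), ("p4", "p1", False)]

    # A reply attacks the post it disagrees with.
    attacks = {(src, dst) for (src, dst, disagrees) in replies if disagrees}
    print(attacks)  # {('p2', 'p1'), ('p3', 'p2')}

Such a graph could then be evaluated with semantics like the weighted grounded extension sketched earlier, giving an argumentative reading of which contributions to a discussion survive the exchange.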