LINEXSYS: Logic-based Methods for Inconsistency Management in Explainable Intelligent Systems

A Project coordinated by IIIA.

Web page:

Principal investigator: 

Collaborating organisations:

Universitat de Lleida (UdL)

Funding entity:

Ministerio de Ciencia e Innovación

Funding call:

Funding call URL:

Project #:

PID2022-139835NB-C21

Total funding amount:

185.125,00€

IIIA funding amount:

Duration:

01/Sep/2023 – 31/Aug/2026

Extension date:

As automated decision-making systems become ubiquitous thanks to advances in Artificial Intelligence, there are also pressing requirements, both ethical and legal, to build such systems so that they are trustworthy. Data-driven AI systems have been applied successfully in diverse domains, including the modeling of social platforms and cybersecurity analysis; however, many of these systems can explain neither the rationale behind their decisions nor do they possess the specific knowledge to do so, which is a major drawback, especially in critical domains. It is now clear to the scientific community that the effectiveness of these systems is limited by their inability to explain their decisions and actions to human users. In line with this issue, the main goal of this project is to work towards the formalization and development of explainable automated reasoning tasks. In particular, we focus on the role of conflicting information in automated decision-making within explainable intelligent systems. The notion of inconsistency, or conflict, arises from the over-specification of information and has been studied extensively in many contexts over recent decades. Our hypothesis is that the use of formal logic-based languages as a means for knowledge representation and reasoning is key to the construction of explainable systems that handle inconsistency. Given the varied skill set that this proposal brings together, we aim to develop both theoretical foundations and methods and tools to represent and handle conflicting information within explainable automated decision-making systems.
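To make the notion of inconsistency concrete, the following minimal sketch, which assumes the open-source python-sat package and a toy encoding invented purely for illustration, shows how an over-specified propositional knowledge base is detected as unsatisfiable by an off-the-shelf SAT solver.

    # A toy illustration of inconsistency arising from over-specification.
    # Assumes the python-sat package (pip install python-sat); the encoding
    # below is purely illustrative and not taken from the project itself.
    from pysat.solvers import Glucose3

    # Propositional variables: 1 = "alert", 2 = "trusted host", 3 = "threat"
    clauses = [
        [-1, 3],        # rule: alert -> threat
        [-1, -2, -3],   # rule: alert & trusted host -> not threat
        [1],            # fact: there is an alert
        [2],            # fact: the host is trusted
    ]

    with Glucose3(bootstrap_with=clauses) as solver:
        if not solver.solve():
            print("The knowledge base is inconsistent (over-specified).")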

First, since argumentation-based reasoning has shown potential for handling inconsistency, we propose to investigate several extensions that significantly increase the expressive power of such frameworks. In particular, using conditionals as the representation language and incorporating domain-specific knowledge in the form of weighted arguments have the potential to handle conflicting information better, and will likely pose new challenges for the definition of the semantics of the argumentation process as well as for its computational complexity and algorithmic properties.
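For reference, the sketch below illustrates the basic, unweighted setting that such extensions build on: it computes the grounded extension of a Dung-style abstract argumentation framework as the least fixed point of the characteristic function. The arguments and attack relation are a made-up example, not the project's formalism.

    # Grounded extension of a Dung abstract argumentation framework,
    # computed as the least fixed point of the characteristic function.
    # The arguments and attacks below are a made-up example.

    def grounded_extension(args, attacks):
        """args: set of arguments; attacks: set of (attacker, target) pairs."""
        attackers_of = {a: {x for (x, y) in attacks if y == a} for a in args}
        extension = set()
        while True:
            # An argument is acceptable w.r.t. `extension` if every one of
            # its attackers is itself attacked by some member of `extension`.
            acceptable = {
                a for a in args
                if all(any((d, b) in attacks for d in extension)
                       for b in attackers_of[a])
            }
            if acceptable == extension:
                return extension
            extension = acceptable

    # Example: a attacks b, b attacks c; the grounded extension is {a, c}.
    print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))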

Second, we plan to work on SAT and extensions such as MaxSAT and MinSAT. MaxSAT and MinSAT take an inconsistent formula as input and, respectively, minimize or maximize the number of falsified constraints. While SAT is competitive for decision problems, MaxSAT and MinSAT provide a competitive approach to optimization problems. We propose to devise new pre-processing and in-processing techniques to improve the performance of SAT and MaxSAT solvers, develop new solvers for MinSAT, define and analyze proof systems for MaxSAT and MinSAT, and apply these solvers to challenging argumentation and explainability problems.
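As a concrete illustration of this recovery-from-inconsistency view, the sketch below, again assuming the python-sat package (whose RC2 engine is one existing MaxSAT solver, not necessarily the one this project will extend), keeps the facts of the earlier example as hard constraints, marks the two conflicting rules as soft, and asks for a model that falsifies as few soft clauses as possible.

    # MaxSAT over the inconsistent knowledge base from the earlier sketch:
    # the facts are hard constraints, the two conflicting rules are soft,
    # and RC2 finds a model falsifying the fewest soft clauses.
    from pysat.formula import WCNF
    from pysat.examples.rc2 import RC2

    wcnf = WCNF()
    wcnf.append([1])                      # hard fact: alert
    wcnf.append([2])                      # hard fact: trusted host
    wcnf.append([-1, 3], weight=1)        # soft rule: alert -> threat
    wcnf.append([-1, -2, -3], weight=1)   # soft rule: alert & trusted -> not threat

    with RC2(wcnf) as rc2:
        model = rc2.compute()             # an optimal model, e.g. [1, 2, 3]
        print("model:", model, "falsified soft clauses:", rc2.cost)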

Finally, we want to apply the theoretical and computational results of the project to the analysis of discussions in social networks and to threat detection in cybersecurity. On the one hand, we aim to use argumentation-based reasoning to understand how users interact with each other and to characterize their behavior in a social network. On the other hand, we propose to explore explanations for intelligent systems in cybersecurity based on an argumentative interpretation of the predictions of cybersecurity classifiers.

2024
Yamil Soto, Cristhian Ariel D. Deagustini, Maria Vanina Martinez, & Gerardo I. Simari (2024). A Mathematical Conceptualization of Bundle Sets in Defeasible Logic. Proceedings of ACM SAC.
Daniel Grimaldi, Maria Vanina Martinez, & Ricardo O. Rodriguez (2024). Moderated revision. International Journal of Approximate Reasoning, 166, 109126. https://doi.org/10.1016/j.ijar.2024.109126.
2023
Damián Ariel Furman, Pablo Torres, José A. Rodríguez, Diego Letzen, Maria Vanina Martinez, & Laura Alonso Alemany (2023). High-quality argumentative information in low resources approaches improve counter-narrative generation. In Houda Bouamor, Juan Pino, & Kalika Bali (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023 (pp. 2942–2956). Association for Computational Linguistics. https://doi.org/10.18653/V1/2023.FINDINGS-EMNLP.194.
Alger Sans Pinillos, & Vicent Costa (2023). Más allá de los datos: la transformación digital del museo tradicional. Daimon, 90, 81–94. https://doi.org/10.6018/daimon.563231.
Eva Armengol
Tenured Scientist
Phone Ext. 431851

Vicent Costa
Tenured Scientist
Phone Ext. 431850

Pilar Dellunde
Adjunct Scientist

Francesc Esteva
Adjunct Professor Ad Honorem

Lluís Godo
Research Professor
Phone Ext. 431857

Felip Manyà
Scientific Researcher
Phone Ext. 431854

María Vanina Martinez
Tenured Scientist
Phone Ext. 431817

Pedro Meseguer
Scientific Researcher
Phone Ext. 431862