iTRUST: Interventions against Polarisation in Society for Trustworthy Social Media: From Diagnosis to Therapy

A project coordinated by IIIA.

Web page:

Principal investigator: 

Collaborating organisations:

Funding entity:

Ministerio de Ciencia e Innovación

Funding call:

Proyectos de Colaboración Internacional 2022-2

Funding call URL:

Project #:

PCI2022-135010-2

Total funding amount:

207.870,00€

IIIA funding amount:

Duration:

01/Nov/2022 – 31/Oct/2025

Extension date:

Digitalisation is rapidly transforming our societies, transforming the dynamics of our interactions, transforming the culture of our debates. Trust plays a critical role in establishing intellectual humility and interpersonal civility in argumentation and discourse: without it, credibility is doomed, reputation is endangered, and cooperation is compromised. The major threats associated with digitalisation – hate speech and fake news – are violations of the basic conditions for trusting and being trustworthy, which are key for constructive, reasonable and responsible communication as well as for the collaborative and ethical organisation of societies. These behaviours eventually lead to polarisation, in which users repeatedly attack each other in highly emotional terms, focusing on what divides people rather than on what unites them.

Focusing on three timely domains of interest – public health, gender equality and global warming – iTRUST will deliver (i) the largest-ever dataset of online text annotated with features relevant to ethos, pathos and reframing; (ii) a new methodology of large-scale comparative trust analytics to detect implicit patterns and trends in hate speech and fake news; (iii) a novel empirical account of how these patterns affect polarisation in online communication and in society at large; and (iv) AI-based applications that will transfer these insights into interventions against hate speech, fake news and polarisation. Given its relevance to the knowledge-based society, the project places great emphasis on outreach activities and user awareness, in collaboration with media, museums and other partners.

The consortium consists of five experienced PIs with expertise in rhetoric, comparative political science, corpus linguistics, natural language processing, multi-agent systems and computational argumentation. The group is complemented by senior experts (ACP and KEP) in fields that provide valuable extensions, such as media studies and AI-based technologies. Our long-term ambition is to establish a pan-European network and foundations for trustworthy AI in response to the EC priority of “Europe fit for the Digital Age”.

2024
Yamil Soto, Cristhian Ariel D. Deagustini, Maria Vanina Martinez, & Gerardo I. Simari (2024). A Mathematical Conceptualization of Bundle Sets in Defeasible Logic. Proc. of ACM SAC.
Lissette Lemus del Cueto
Contract Engineer
Stephanie Malvicini
PhD Student
María Vanina Martinez
Tenured Scientist
Phone Ext. 431817

Pablo Noriega
Ad Honorem Scientist
Carles Sierra
Research Professor
Phone Ext. 431801