EVASAI: Value Awareness in Social AI

A Project coordinated by IIIA.

Web page:

Principal investigator: 

Collaborating organisations:

Universidad Rey Juan Carlos (URJC)

Universitat Politècnica de València (UPV)

Funding entity:

Ministerio de Ciencia, Innovación y Universidades

Funding call:

Proyectos de Generación de Conocimiento y Formación de Personal Investigador Predoctoral, Convocatoria 2024

Funding call URL:

Project #:

PID2024-158227NB-C31 / C32 / C33

Total funding amount:

432.000,00€

IIIA funding amount:

153.750,00€

Duration:

01/Sep/2025 to 31/Aug/2028

Extension date:

The value-alignment problem has emerged as one of the main challenges concerning the risks of AI and the development of ethical AI. As AI becomes more powerful and autonomous, there is concern that its actions might not fit with our values. The value-alignment problem is defined as the problem of ensuring that an AI system's behaviour is aligned with human values. To address this new challenge of engineering values into AI, a range of research questions has emerged: how human values can be learnt; how individual values can be aggregated to the level of groups; how arguments that explicitly reference values can be made; and how decision making can be value-driven. Nevertheless, this rapidly growing field of research is still in its infancy, and significant work remains to be done.

EVASAI extends value awareness to 'reasoning about values', in contrast to merely 'reasoning with values', and introduces the social dimension of values. This enables agents, both human and software, to critically examine, interpret, and adapt values in context. It allows agents to reflect on each other's values, helping them understand and predict the value-driven motivations of others and ultimately enhancing interactions and collaborations. When assessing the values of a collective, evaluating the implications and trade-offs of different values can help facilitate consensus-building and ultimately steer interactions across diverse value systems. Since values are widely recognised as a primary motivator of behaviour, value awareness at a social level becomes a cornerstone of social AI.

The main novelty of our proposal lies in laying the groundwork for developing ethical social AI through next-level value awareness. In this regard, the three main areas in which this project will expand knowledge and foster research are: i) reasoning (at the individual level) about the values of others in social interactions, ii) inferring/learning the meaning of values as well as the value-based preferences of agents (human or artificial) or agent societies, and iii) reasoning (at the collective level) to reach consensus or agreements on the values that should underlie the social interactions in a group or society.

The second novelty of this proposal is applying our models and mechanisms to examples from real-life applications, from the health and firefighting domains to school place allocation systems. Although we believe that our research is rather fundamental and that the investigation of systems that can reason with and about human values will continue in the coming years, the evaluation on real-world problems will allow us not only to validate theoretical concepts, but also to ensure that they address practical challenges and can have a tangible impact on society.

Dave de Jonge
Research Fellow
Phone Ext. 431825

Ramon Lopez de Mantaras
Adjunct Professor Ad Honorem
Phone Ext. 431828

Pablo Noriega
Ad Honorem Scientist

Nardine Osman Alameh
Tenured Scientist
Phone Ext. 431826

Manel Rodríguez Soto
Research Fellow
Phone Ext. 431832

Jordi Sabater-Mir
Tenured Scientist
Phone Ext. 431856

Carles Sierra García
Research Professor
Phone Ext. 431801