Some shallow remarks on the use of values in artificial autonomous systems

I propose to peek into the possibility of using moral values as a device to harness the autonomy of artificial systems. The talk should outline the challenge of developing a theory of values that has a distinctive AI bias: its motivation, the foundational questions, the distinctive features, the potential artefacts, the methodological challenges, and the practical consequences of such a theory. Fortunately for everyone, it will not. The talk will only look into a restricted understanding of the problem of embedding values into the governance of autonomous systems. In fact, I will only pay attention to some of the obvious practical problems one needs to overcome if one intends to claim that an autonomous system is aligned with a particular set of values. Hopefully, this timid approach will reveal enough of the breadth and beauty of an artificial axiology to justify taking a closer look at it.


Pablo Noriega is a tenured scientist at the IIIA. His main research interest is the governance of open multiagent systems. This talk reflects recent collaboration with Mark d'Inverno (Goldsmiths, U. of London), Julian Padget (U. of Bath), Enric Plaza (IIIA), Harko Verhagen (Stockholm U.) and Toni Perello-Moragues.