Seminar

UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence

27/Apr/2021

Speaker:

Natalia Criado Pacheco

Institution:

UKRI Centre - King's College London

Language:

EN

Type:

Webinar

Description:

The overarching aim of the UKRI Centre for Doctoral Training (CDT) in Safe and Trusted Artificial Intelligence (STAI) is to train the first generation of AI scientists and engineers in methods of safe and trusted AI. An AI system is considered safe when we can provide assurances about the correctness of its behaviour, and it is considered trusted if the average user can have confidence in the system and its decision making. The CDT focuses particularly on the use of model-based AI techniques for ensuring the safety and trustworthiness of AI systems. Model-based AI techniques provide an explicit language for representing, analysing and reasoning about systems and their behaviours. Models can be verified and solutions based on them can be guaranteed as safe and correct, and models can provide human-understandable explanations and support user collaboration and interaction with AI – key for developing trust in a system. In this talk, we will present the central vision, programme, and core research areas.

Dr Natalia Criado is a Senior Lecturer in Computer Science at King's College London and a member of the UKRI Centre for Doctoral Training (CDT) in Safe and Trusted Artificial Intelligence (STAI).
