Seminar

Challenges and Opportunities of Asynchronous Multi-Agent Reinforcement Learning

23/May/2023

Speaker:

Enrico Marchesini

Institution:

Northeastern University

Language:

EN

Type:

Hybrid

Description:

Real-world setups pose significant challenges for modern Deep Reinforcement Learning algorithms: agents struggle to explore high-dimensional environments and must provably guarantee safe behavior under partial information to operate in our society. In addition, multiple agents (or humans) must learn to interact while acting asynchronously through temporally extended actions. I will present our work on fostering diversified exploration and safety in real domains of interest. We tackle these problems from different angles: (i) using Evolutionary Algorithms as a natural way to foster diversity; (ii) leveraging Formal Verification to characterize policies' decision-making and designing novel safety metrics to optimize; and (iii) designing macro-action-based algorithms to learn coordination among asynchronous agents.

Enrico Marchesini is a postdoctoral research associate in the Khoury College of Computer Sciences at Northeastern University, advised by Christopher Amato. He completed his Ph.D. in Computer Science at the University of Verona (Italy), advised by Alessandro Farinelli. His research interests lie in topics that can foster real-world applications of Deep Reinforcement Learning. To this end, he designs novel algorithms for multi-agent systems while promoting efficient exploration and safety in asynchronous setups.
