With my supervisor, Amanda Prorok, I study resilience and heterogeneity in multi-agent and multi-robot systems. For my research, I employ techniques from the fields of Multi-Agent Reinforcement Learning and Graph Neural Networks.
Prior to my PhD, I investigated the problem of transport network design for multi-agent routing.
Download my CV.
PhD in Computer Science, Present
University of Cambridge
MPhil in Advanced Computer Science, 2021
University of Cambridge
BEng in Computer Engineering, 2020
Politecnico di Milano
BenchMARL is a library for benchmarking Multi-Agent Reinforcement Learning (MARL) using TorchRL. It enables quick comparison of different MARL algorithms, tasks, and models, and is systematically grounded in its two core tenets: reproducibility and standardization.
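As a rough illustration, running a single experiment might look like the following minimal sketch, based on the library's quickstart (assumes BenchMARL is installed; check the documentation for the current names of classes such as `VmasTask`, `MappoConfig`, and `Experiment`):

```python
# Minimal BenchMARL experiment sketch: train MAPPO on a VMAS task with MLP models.
from benchmarl.algorithms import MappoConfig
from benchmarl.environments import VmasTask
from benchmarl.experiment import Experiment, ExperimentConfig
from benchmarl.models.mlp import MlpConfig

experiment = Experiment(
    task=VmasTask.BALANCE.get_from_yaml(),          # task loaded from its default YAML config
    algorithm_config=MappoConfig.get_from_yaml(),   # algorithm hyperparameters
    model_config=MlpConfig.get_from_yaml(),         # actor model
    critic_model_config=MlpConfig.get_from_yaml(),  # critic model
    seed=0,
    config=ExperimentConfig.get_from_yaml(),        # training/evaluation/logging settings
)
experiment.run()
```

Swapping the algorithm, task, or model config is enough to define a new run, which is what makes systematic comparisons straightforward.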
In this paper, we crystallize the role of heterogeneity in MARL policies. We introduce Heterogeneous Graph Neural Network Proximal Policy Optimization (HetGPPO), a paradigm for training heterogeneous MARL policies that leverages a Graph Neural Network for differentiable inter-agent communication. HetGPPO allows communicating agents to learn heterogeneous behaviors while enabling fully decentralized training in partially observable environments. Through simulations and real-world experiments, we show that: (i) when homogeneous methods fail due to strong heterogeneous requirements, HetGPPO succeeds, and (ii) when homogeneous methods are able to learn apparently heterogeneous behaviors, HetGPPO achieves higher resilience to both training and deployment noise.
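To give a flavor of the general idea (this is an illustrative sketch, not the HetGPPO implementation; the class and variable names below are hypothetical, and it assumes PyTorch and PyTorch Geometric are installed), a heterogeneous GNN policy keeps per-agent parameters while routing messages through a shared, differentiable graph layer:

```python
# Illustrative sketch: per-agent encoders and action heads (heterogeneous, unshared
# parameters) with a graph convolution acting as differentiable inter-agent communication.
import torch
import torch.nn as nn
from torch_geometric.nn import GraphConv


class HeterogeneousGNNPolicy(nn.Module):
    def __init__(self, n_agents: int, obs_dim: int, action_dim: int, hidden_dim: int = 64):
        super().__init__()
        # One encoder and one action head per agent: parameters are not shared.
        self.encoders = nn.ModuleList(nn.Linear(obs_dim, hidden_dim) for _ in range(n_agents))
        self.heads = nn.ModuleList(nn.Linear(hidden_dim, action_dim) for _ in range(n_agents))
        # Shared message-passing layer over the communication graph.
        self.comm = GraphConv(hidden_dim, hidden_dim)

    def forward(self, obs: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # obs: [n_agents, obs_dim]; edge_index: [2, n_edges] communication links.
        h = torch.stack([enc(o) for enc, o in zip(self.encoders, obs)])
        h = torch.relu(self.comm(h, edge_index))  # one round of inter-agent message passing
        return torch.stack([head(x) for head, x in zip(self.heads, h)])


# Example: 3 agents on a line graph (0-1, 1-2), 8-dim observations, 2-dim actions.
policy = HeterogeneousGNNPolicy(n_agents=3, obs_dim=8, action_dim=2)
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
actions = policy(torch.randn(3, 8), edge_index)
```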