📚 Publications
Relational Conformal Prediction for Correlated Time Series

We propose a novel conformal prediction method based on graph deep learning. Our method can be applied on top of any time series predictor, learns the relationships across the time series and, thanks to an adaptive component, handles non-exchangeable data and nonstationarity.
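For intuition on the adaptive component, here is a minimal sketch of online-adaptive conformal intervals wrapped around a generic point forecaster (an ACI-style update of the miscoverage level; the relational, graph-based part of the method is not shown, and all names are illustrative):

```python
# Minimal sketch: adaptive conformal intervals on top of a point forecaster.
import numpy as np

def adaptive_conformal_intervals(y_true, y_pred, alpha=0.1, gamma=0.01):
    """Sequential intervals with an ACI-style update of the miscoverage level."""
    alpha_t = alpha
    scores, lo, hi = [], [], []
    for t in range(len(y_true)):
        if scores:
            q = np.quantile(scores, min(1.0, max(0.0, 1 - alpha_t)))
        else:
            q = np.inf                            # no calibration scores yet
        lo.append(y_pred[t] - q)
        hi.append(y_pred[t] + q)
        miss = not (lo[-1] <= y_true[t] <= hi[-1])
        alpha_t += gamma * (alpha - float(miss))  # widen after misses, shrink after hits
        scores.append(abs(y_true[t] - y_pred[t]))
    return np.array(lo), np.array(hi)

y_true = np.cumsum(np.random.randn(200))
y_pred = np.concatenate([[0.0], y_true[:-1]])     # naive one-step forecaster
lo, hi = adaptive_conformal_intervals(y_true, y_pred)
print(np.mean((lo <= y_true) & (y_true <= hi)))   # empirical coverage ≈ 0.9
```

After a miss the effective quantile level rises and subsequent intervals widen; after hits it shrinks, which is what lets the intervals track nonstationary data.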
BN-Pool: a Bayesian Nonparametric Approach to Graph Pooling

We introduce BN-Pool, the first clustering-based pooling method for GNNs that adaptively determines the number of supernodes in the pooled graph. This is done by partitioning the graph nodes into an unbounded number of clusters using a generative model based on a Bayesian nonparametric framework.
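For a feel of how a nonparametric prior avoids fixing the number of clusters, here is a minimal stick-breaking sketch (a standard construction in Bayesian nonparametrics; it is only illustrative and not the BN-Pool model):

```python
# Minimal sketch: truncated stick-breaking for soft cluster assignments.
import torch

def stick_breaking_assignments(logits):
    """Map per-node logits (N, K-1) to assignment probabilities (N, K).

    Each node breaks a unit-length stick: cluster k receives a fraction
    v_k of whatever stick is left, so later clusters can shrink to ~0 and
    the effective number of clusters is driven by the data.
    """
    v = torch.sigmoid(logits)                      # break proportions in (0, 1)
    remaining = torch.cumprod(1 - v, dim=-1)       # stick left after each break
    first = v[..., :1]
    middle = v[..., 1:] * remaining[..., :-1]
    last = remaining[..., -1:]                     # mass left for the final cluster
    return torch.cat([first, middle, last], dim=-1)

S = stick_breaking_assignments(torch.randn(6, 4))  # 6 nodes, up to 5 clusters
print(S.sum(dim=-1))                               # each row sums to 1
```

Because each cluster takes a fraction of the stick left over by the previous ones, later clusters can end up effectively empty, so the number of supernodes is determined by the data rather than fixed a priori.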
Interpreting Temporal Graph Neural Networks with Koopman Theory

We propose an XAI technique based on Koopman theory to interpret temporal graphs and the spatio-temporal Graph Neural Networks used to process them. The proposed approach makes it possible to identify the nodes and time steps at which relevant events occur.
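As a rough illustration of the underlying idea, the sketch below fits a linear (Koopman-style) operator to a trajectory of hidden states via plain dynamic mode decomposition; the paper's technique and its node/time attributions are more involved, and all names here are made up:

```python
# Minimal sketch: fit a linear Koopman-style operator to hidden states.
import numpy as np

def fit_koopman_operator(H):
    """H: (T, d) array of hidden states h_t; returns K with h_{t+1} ≈ K h_t."""
    X, Y = H[:-1].T, H[1:].T                 # (d, T-1) snapshot pairs
    return Y @ np.linalg.pinv(X)             # least-squares fit

H = np.cumsum(np.random.randn(100, 8), axis=0)   # toy hidden-state trajectory
K = fit_koopman_operator(H)
eigvals = np.linalg.eigvals(K)
# |eigval| near 1: persistent dynamics; |eigval| << 1: fast-decaying events.
print(np.abs(eigvals))
```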
MaxCutPool: differentiable feature-aware Maxcut for pooling in graph neural networks
ICLR 2025

We propose a novel approach to compute the MAXCUT in attributed graphs, i.e., graphs with features associated with nodes and edges. Our approach is robust to the underlying graph topology and is fully differentiable, making it possible to find solutions that jointly optimize the MAXCUT along with other objectives. Based on the obtained MAXCUT partition, we implement a hierarchical graph pooling layer for Graph Neural Networks, which is sparse, differentiable, and particularly suitable for downstream tasks on heterophilic graphs.
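For a feel of what a differentiable MAXCUT means in practice, here is a minimal sketch that relaxes the binary side assignment to continuous scores and minimizes an agreement penalty over edges by gradient descent (illustrative only; MaxCutPool's objective and pooling layer differ):

```python
# Minimal sketch: differentiable MaxCut relaxation via tanh scores.
import torch

def maxcut_loss(scores, edge_index):
    """scores: (N,) raw node scores; edge_index: (2, E) long tensor."""
    x = torch.tanh(scores)                   # relaxed side assignment in (-1, 1)
    src, dst = edge_index
    # The cut grows as endpoints disagree, so we minimize their product.
    return (x[src] * x[dst]).mean()

edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])  # a 4-cycle
scores = torch.randn(4, requires_grad=True)
opt = torch.optim.Adam([scores], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = maxcut_loss(scores, edge_index)
    loss.backward()
    opt.step()
print(torch.sign(scores.detach()))           # alternating signs cut all 4 edges
```

On the 4-cycle, gradient descent typically recovers the alternating partition, which cuts all four edges.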
Graph-based Forecasting with Missing Data through Spatiotemporal Downsampling
ICML 2024

Spatiotemporal graph neural networks achieve striking results by representing the relationships across time series as a graph. Nonetheless, most existing methods rely on the often unrealistic assumption that inputs are always available and fail to capture hidden spatiotemporal dynamics when part of the data is missing. In this work, we tackle this problem through hierarchical spatiotemporal downsampling. The input time series are progressively coarsened over time and space, obtaining a pool of representations that capture heterogeneous temporal and spatial dynamics. Conditioned on observations and missing data patterns, such representations are combined by an interpretable attention mechanism to generate the forecasts.
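A minimal sketch of the fusion step, assuming a pool of representations at different scales and an attention score conditioned on an observed/missing flag (shapes and module names are illustrative, not the paper's architecture):

```python
# Minimal sketch: mask-conditioned attention over multi-scale representations.
import torch
import torch.nn as nn

class ScaleAttention(nn.Module):
    """Fuse a pool of multi-scale representations with attention weights."""
    def __init__(self, d_model):
        super().__init__()
        self.score = nn.Linear(d_model + 1, 1)   # +1 for the observed/missing flag

    def forward(self, reps, mask):
        # reps: (n_scales, N, d) pool of representations; mask: (N, 1) flags.
        m = mask.unsqueeze(0).expand(reps.size(0), -1, -1)
        logits = self.score(torch.cat([reps, m], dim=-1))  # (n_scales, N, 1)
        w = torch.softmax(logits, dim=0)          # attention over scales, per node
        return (w * reps).sum(dim=0)              # (N, d) fused representation

fuse = ScaleAttention(d_model=16)
out = fuse(torch.randn(3, 10, 16), torch.ones(10, 1))  # 3 scales, 10 nodes
print(out.shape)                                  # torch.Size([10, 16])
```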
The expressive power of pooling in Graph Neural Networks
NeurIPS 2023

A graph pooling operator can be expressed as the composition of three functions: SEL defines how to form the vertices of the coarsened graph; RED computes the vertex features of the coarsened graph; CON computes the edges of the coarsened graph. In this work, we show that if certain conditions are met by the GNN layers before pooling and by the SEL and RED functions, then enough information is preserved in the coarsened graph. In particular, if two graphs are WL-distinguishable, their coarsened versions will also be WL-distinguishable.
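The decomposition is easy to make concrete. Below is a minimal sketch with a dense cluster assignment matrix S, where RED averages member-node features and CON sets A' = SᵀAS (one common instantiation; concrete pooling operators define the three functions differently):

```python
# Minimal sketch: the SEL/RED/CON decomposition of graph pooling.
import torch

def sel(X):
    """SEL: assign the N nodes to K supernodes (here: a fixed toy assignment)."""
    return torch.tensor([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])  # (N=4, K=2)

def red(S, X):
    """RED: supernode features, e.g. the mean of member-node features."""
    return (S.T @ X) / S.sum(dim=0, keepdim=True).T

def con(S, A):
    """CON: supernode adjacency, e.g. A' = S^T A S."""
    return S.T @ A @ S

X = torch.randn(4, 3)                          # node features
A = torch.tensor([[0., 1., 1., 0.],            # toy adjacency matrix
                  [1., 0., 1., 0.],
                  [1., 1., 0., 1.],
                  [0., 0., 1., 0.]])
S = sel(X)
print(red(S, X).shape, con(S, A).shape)        # (2, 3) and (2, 2)
```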
Total Variation Graph Neural Networks
ICML 2023

We propose the Total Variation GNN (TVGNN) model, which can be used to cluster the vertices of an annotated graph by accounting for both the graph topology and the vertex features. Compared to other GNNs for clustering, TVGNN produces sharp cluster assignments that better approximate the optimal (in the minimum cut sense) partition. The TVGNN model can also be used to implement graph pooling in a deep GNN architecture for tasks such as graph classification.
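The total variation idea can be sketched as an edge-wise penalty on soft cluster assignments, which is minimized when connected vertices share the same sharp assignment (illustrative; the full TVGNN loss contains additional terms):

```python
# Minimal sketch: graph total variation of soft cluster assignments.
import torch

def total_variation(S, edge_index):
    """S: (N, K) soft cluster assignments; edge_index: (2, E) long tensor."""
    src, dst = edge_index
    # Zero when every edge connects nodes with identical assignments.
    return (S[src] - S[dst]).abs().sum(dim=-1).mean()

S = torch.softmax(torch.randn(5, 3), dim=-1)         # 5 nodes, 3 clusters
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
print(total_variation(S, edge_index))
```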
Scalable Spatiotemporal Graph Neural Networks
AAAI 2023

SGP is a novel approach based on an encoder-decoder architecture with a training-free spatiotemporal encoding scheme, where the only learned parameters are in the node-level trainable decoder (an MLP). Representations for each point in time and space can be precomputed, and the decoder can be trained by sampling uniformly over time and space, thus removing the dependency of the training complexity on sequence length and graph size. The spatiotemporal encoder relies on two modules: 1) a randomized recurrent neural network for encoding sequences and 2) a propagation process through the graph structure that exploits powers of a graph shift operator.
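A minimal sketch of such a training-free encoder, combining an echo-state (randomized) recurrence with propagation through powers of a graph shift operator (toy sizes and a random graph; not the actual SGP implementation):

```python
# Minimal sketch: training-free spatiotemporal encoding.
import numpy as np

rng = np.random.default_rng(0)
N, T, d_in, d_h, K = 10, 50, 2, 16, 3        # nodes, time steps, sizes, hops

# 1) Randomized RNN (reservoir): fixed random weights, never trained.
W_in = rng.normal(size=(d_h, d_in)) * 0.5
W_h = rng.normal(size=(d_h, d_h))
W_h *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_h)))   # echo-state stability

X = rng.normal(size=(T, N, d_in))            # one input time series per node
h = np.zeros((N, d_h))
for t in range(T):
    h = np.tanh(X[t] @ W_in.T + h @ W_h.T)   # (N, d_h) hidden state

# 2) Propagation: stack powers of a row-normalized graph shift operator.
A = (rng.random((N, N)) < 0.3).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)
S_op = A / np.maximum(A.sum(1, keepdims=True), 1)     # graph shift operator
reps, z = [h], h
for _ in range(K):
    z = S_op @ z                             # k-hop smoothed states
    reps.append(z)
emb = np.concatenate(reps, axis=-1)          # (N, d_h * (K + 1)), precomputable
print(emb.shape)
```

Because nothing in the encoder is trained, these embeddings can be computed once and stored; only the downstream MLP decoder needs gradient updates.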