Department Faculty and Students Share Work at ICLR 2025

Faculty members and graduate students from our department recently presented five research papers at the International Conference on Learning Representations (ICLR) 2025, one of the premier conferences in machine learning, where the overall acceptance rate is below 30%.

Michael Chen

The work of Professor Pavan Aduri and graduate student Michael Chen, along with collaborators N. V. Vinodchandran, Ruosong Wang, and Lin Yang, “Regret-Optimal List Replicable Bandit Learning: Matching Upper and Lower Bounds,” introduces and studies the notion of list replicability in the context of multi-armed bandit (MAB) problems, providing matching upper and lower bounds on list replicability.

The paper by Associate Professor Wensheng Zhang and graduate student Ming Liu, “Is Your Video Language Model a Reliable Judge?”, examines the reliability of using a video language model (VLM) to evaluate the performance of other VLMs. It highlights the limitations of current models in serving this role and explores collaborative approaches to enhance reliability. The paper also emphasizes the need for robust evaluation frameworks that account for both the capabilities and inherent biases of individual models.

Zhaoning Yu

The paper “MAGE: Model-Level Graph Neural Networks Explanations via Motif-based Graph Generation,” by Assistant Professor Hongyang Gao and graduate student Zhaoning Yu, presents MAGE, a motif-based explainer for GNNs on molecular data. Unlike prior methods, MAGE identifies meaningful substructures, such as rings, by learning class-specific motifs and generating valid molecular graphs as explanations. This leads to more interpretable and chemically valid insights, validated across six molecular datasets.

Two additional papers from our department's faculty and students were accepted, one as an oral presentation and one as a spotlight. The oral presentation paper “Global Convergence in Neural ODEs: Impact of Activation Functions,” by Assistant Professor Hongyang Gao, graduate student Siyuan Sun, former graduate student Tianxiang Gao, and collaborator Hailiang Liu, studies how activation functions affect the training of Neural ODEs. It shows that smoothness of the activation ensures unique solutions, while its nonlinearity preserves the learning dynamics; together, these properties enable global convergence under gradient descent, with experiments supporting both the theory and practical scalability. Oral presentations are a notable recognition, as fewer than 2% of submissions are accepted for oral presentation.

The spotlight paper “Computational Explorations of Total Variation Distance” by Professor Pavan Aduri and his collaborators Arnab Bhattacharyya, Sutanu Gayen, Dimitrios Myrisiotis, and N. V. Vinodchandran explores computational aspects of total variation (TV) distance, providing a polynomial-time algorithm for checking equivalence between mixtures of product distributions. It also establishes a hardness result for estimating the TV distance between Ising model distributions. Only 3.5% of all submissions are accepted as spotlight papers, making this a remarkable accomplishment.