Colloquia: Sihong Xie, Transparent and fair machine learning on graphs for humans

Date: Wednesday, October 20, 2021, 4:25pm to 5:25pm
Location: Zoom or 1230 Communications Bldg

Speaker: Sihong Xie, Lehigh University

Title: Transparent and fair machine learning on graphs for humans

Abstract: Graph data are found in many domains, including biochemistry, neuroscience, computer science, civil engineering, and recommender systems. Graphical models are machine learning techniques for inferring hidden knowledge embedded in graphs. Because ML can aid decision-making and affect many people's quality of life in both the short and long term, humans must be kept in the loop when graphical models are designed and applied. I will touch on the transparency and fairness of ML on graphs, which can directly or indirectly improve how humans use the models. First, we conduct a study to understand the relationship between human perception of and trust in graphical models. The study shows that there are two perception modes corresponding to two explanation desiderata: simulatability and counterfactual relevance. I propose a multi-objective optimization method to search for Pareto-optimal explanations that balance the two desiderata and maximize human trust in the models. Second, a stable cause makes more sense as an explanation, while an explanation insensitive to important changes is misleading. We study both the robustness and the sensitivity of explanations of contrastive learning for graph comparison. Without domain knowledge, we extract generic self-explanations to stabilize input-specific explanations and propose a self-adaptive constrained optimization to balance robustness and sensitivity in the explanations. Lastly, unfair predictions can harm end users, but fairness can be measured in many, possibly conflicting, ways. I investigate the satisfiability of multiple fairness criteria on large graphs with skewed degree distributions by formulating linear systems as certificates. When trade-offs are necessary, I propose a multi-gradient descent algorithm to find Pareto fronts from which human users can select a solution with the desired trade-off. I conclude with several future directions for making graphical models more usable for humans.
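
Note: the multi-gradient descent idea mentioned in the abstract can be illustrated with a minimal sketch for the two-objective case. This is not the speaker's implementation; the toy quadratic objectives f1 and f2, the closed-form minimum-norm weight, and all names below are assumptions made only to keep the example self-contained (Python with NumPy).

import numpy as np

# Toy objectives: two quadratics whose Pareto set is the segment between their minima.
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
f1 = lambda x: float(np.sum((x - a) ** 2))
f2 = lambda x: float(np.sum((x - b) ** 2))
grad_f1 = lambda x: 2.0 * (x - a)
grad_f2 = lambda x: 2.0 * (x - b)

def min_norm_weight(g1, g2):
    # Weight w in [0, 1] minimizing ||w*g1 + (1-w)*g2||; closed form for two gradients.
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom == 0.0:
        return 0.5
    return float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))

def mgda(x0, lr=0.1, steps=1000, tol=1e-8):
    # Follow the common descent direction until none exists (Pareto-stationary point).
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        g1, g2 = grad_f1(x), grad_f2(x)
        w = min_norm_weight(g1, g2)
        d = w * g1 + (1.0 - w) * g2
        if np.linalg.norm(d) < tol:
            break
        x -= lr * d
    return x

# Different starting points land on different points of the Pareto front.
for x0 in ([2.0, -1.0], [-1.0, 2.0], [0.5, 0.5]):
    x_star = mgda(np.array(x0))
    print(np.round(x_star, 3), round(f1(x_star), 3), round(f2(x_star), 3))

Running the sketch from different starting points returns different Pareto-stationary solutions, which is the sense in which such a procedure can expose a menu of trade-offs for a human user to choose from.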

Bio: Sihong Xie is an assistant professor in the Department of Computer Science and Engineering at Lehigh University. He received his Ph.D. in 2016 from the Department of Computer Science at the University of Illinois at Chicago. His research interests include misinformation detection in adversarial environments, interpretable and fair graphical models, and human-ML collaboration in data annotation. Dr. Xie has published over 60 papers in major data mining venues, such as KDD, ICDM, WWW, AAAI, IJCAI, WSDM, SDM, and TKDE, with over 1,900 citations and an h-index of 17. Dr. Xie serves as a senior PC member for AAAI and as a PC member for other ML and AI conferences, including KDD, ICLR, ICDM, and SIGIR.

Zoom: https://iastate.zoom.us/j/97708759001, or go to https://iastate.zoom.us/join and enter meeting ID 977 0875 9001.
