M.S. Final Oral Exam: Trent Muhr

Trent Muhr
Tuesday, July 2, 2024 - 8:00am

Privacy-Preserving Detection of Poisoning Attacks in Federated Learning

In federated learning, local learners train a shared global model using their own data and report model updates to a server, which aggregates them to update the global model. This learning paradigm is vulnerable to two types of attacks: privacy attacks by an untrusted server, and adversarial attacks (e.g., poisoning attacks) by malicious learners. There is extensive research on addressing each of these attacks separately, but no existing scheme addresses both. In this work, we propose a scheme that enables both privacy-preserving aggregation and poisoning attack detection at the server, by utilizing additive homomorphic encryption and a trusted execution environment (TEE). Our evaluation, based on an implemented prototype system, demonstrates that our scheme attains detection accuracy comparable to the state-of-the-art poisoning detection scheme, and that the increased computational workload can be parallelized and mostly executed outside of the TEE. A privacy analysis shows that the proposed scheme protects individual learners' model updates from exposure.
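To illustrate the privacy-preserving aggregation idea mentioned in the abstract, the sketch below shows how additive homomorphic encryption lets a server sum encrypted model updates without seeing any individual update. This is a minimal toy Paillier cryptosystem for illustration only (with tiny fixed primes that are insecure in practice); it is not the authors' actual scheme, which additionally involves a TEE and poisoning detection.

```python
import math
import random

def lcm(a, b):
    return a * b // math.gcd(a, b)

def keygen(p=1117, q=1123):
    # Toy Paillier key generation; p and q must be large random primes in practice.
    n = p * q
    lam = lcm(p - 1, q - 1)
    g = n + 1
    # With g = n + 1, L(g^lam mod n^2) = lam mod n, so mu = (lam mod n)^-1 mod n.
    mu = pow(lam % n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    # Enc(m) = g^m * r^n mod n^2, for random r coprime to n.
    n, g = pk
    n2 = n * n
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pk, sk, c):
    # Dec(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n.
    n, _ = pk
    lam, mu = sk
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

def aggregate(pk, ciphertexts):
    # Homomorphic addition: the product of ciphertexts decrypts
    # to the sum of the underlying plaintext updates.
    n2 = pk[0] * pk[0]
    acc = 1
    for c in ciphertexts:
        acc = (acc * c) % n2
    return acc

if __name__ == "__main__":
    pk, sk = keygen()
    # Three learners submit encrypted (integer-quantized) updates.
    updates = [3, 5, 7]
    cts = [encrypt(pk, m) for m in updates]
    # The server aggregates ciphertexts without learning any individual update.
    print(decrypt(pk, sk, aggregate(pk, cts)))  # prints 15
```

In a federated setting, each learner's model update would be quantized to integers and encrypted coordinate-wise before being sent, so the server only ever decrypts (or releases for decryption) the aggregate.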

Committee: Wensheng Zhang (major professor), Ying Cai, and Simanta Mitra

Join on Zoom: https://iastate.zoom.us/j/98410972209