Ph.D. Research Proficiency Exam: Trenton Muhr
Dec 13, 2022 - 1:00 PM

Speaker: Trenton Muhr

Privacy-Preserving Detection of Poisoning Attacks in Federated Learning

With federated learning, local learners train a shared global model using their own data and report model updates to a server, which aggregates them and updates the global model. This learning paradigm is vulnerable to two types of attacks: privacy attacks by an untrusted server, and adversarial attacks (e.g., poisoning attacks) by malicious learners. There is extensive research on addressing each of these attacks separately, but no existing scheme addresses both. In this paper, we propose a scheme that enables both privacy-preserving aggregation and poisoning-attack detection at the server by utilizing additive homomorphic encryption and a trusted execution environment (TEE). Our evaluation, based on an implemented prototype system, demonstrates that our scheme attains a level of detection accuracy similar to the state-of-the-art poisoning detection scheme, and that the added computational workload can be parallelized and mostly executed outside the TEE. A privacy analysis shows that the proposed scheme protects individual learners' model updates from exposure.
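The privacy-preserving aggregation idea rests on the additive-homomorphic property: the server can combine encrypted model updates so that decryption yields only their sum, never an individual update. The sketch below is not the talk's implementation; it uses a toy Paillier cryptosystem (small demo primes, integer-quantized updates, no TEE) purely to illustrate how ciphertext multiplication aggregates plaintext updates.

```python
import math
import random

# Toy Paillier setup with g = n + 1. The primes here are far too small
# for real security; production systems use >=2048-bit moduli and
# vetted cryptographic libraries.
p, q = 1_000_003, 1_000_033
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)      # Carmichael function lambda(n)
mu = pow(lam, -1, n)              # valid because g = n + 1

def encrypt(m):
    """Encrypt an integer update m (0 <= m < n)."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Decrypt: L(c^lam mod n^2) * mu mod n, with L(x) = (x - 1) // n."""
    return (pow(c, lam, n2) - 1) // n * mu % n

# Three learners' quantized model updates. The server multiplies the
# ciphertexts, which adds the underlying plaintexts without revealing
# any single learner's update.
updates = [42, 17, 99]
agg = 1
for u in updates:
    agg = (agg * encrypt(u)) % n2

assert decrypt(agg) == sum(updates)  # server learns only the sum, 158
```

In a real federated setting, each coordinate of the (quantized) update vector would be encrypted this way, and the detection logic described in the talk would run inside the TEE rather than on plaintext.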

Committee: Wensheng Zhang (major professor), Samik Basu, Ying Cai, Soma Chaudhuri, and Forrest Bao

Join on Zoom: Please click this URL to start or join: https://iastate.zoom.us/j/94254617672 or go to https://iastate.zoom.us/join and enter meeting ID: 942 5461 7672