Title: Differential Privacy, Adaptive Data Analysis, and Free Speedups via Sampling
Abstract: In this talk, I will tell you about a recently discovered, beautiful connection between differentially private mechanisms and preventing overfitting in machine learning. Then I will discuss my work on how to use sampling to speed up such differentially private mechanisms for machine learning without reducing the resulting accuracy. In particular, I will describe a mechanism that provides a significant speed-up over previous mechanisms, without needing to increase the total amount of data required to maintain the same generalization error as before. I will also show how these general results yield a simple, fast, and unified approach for adaptively optimizing convex and strongly convex functions over a dataset. This work is joint with Benjamin Fish and Benjamin I. P. Rubinstein.
Bio: Lev Reyzin is an Associate Professor in UIC's Mathematics Department and Director of the UIC Foundations of Data Science Institute. His research spans the theory and practice of machine learning. Prior to coming to UIC, Lev was a Simons Postdoctoral Fellow at Georgia Tech, and before that, an NSF CI-Fellow at Yahoo! Research, where he tackled problems in computational advertising. Lev received his Ph.D. from Yale under Dana Angluin, supported by an NSF Graduate Research Fellowship, and his undergraduate degree from Princeton. He serves on the Steering Committee of the Association for Algorithmic Learning Theory, as an Associate Editor of Ann. Math. Artif. Intell., and as an Editorial Board Reviewer for J. Mach. Learn. Res. Recently, he was program chair for ALT 2017 and ISAIM 2020. His work has earned awards at ICML, COLT, and AISTATS and has been funded by the National Science Foundation and the Department of Defense.
After the presentation, there will be a short time for discussion and questions.
Please contact Bridgette Hare for virtual conference instructions.