Algorithm designers often introduce random choices into their algorithms in an effort to improve efficiency. However, random bits are not necessarily free to produce, so all else being equal, deterministic algorithms are preferable to randomized algorithms. Is randomness ever truly necessary for efficient computation?
What, ultimately, is the role of randomness in computing?
In this talk, I will discuss the "L = BPL" conjecture, which says that for every clever randomized algorithm, there is an even cleverer deterministic algorithm that does the same job with roughly the same *space complexity.* The most traditional approach to proving this conjecture is based on pseudorandom generators (PRGs), which have applications beyond derandomizing algorithms. There are also other approaches based on variants of the PRG concept, most notably "weighted PRGs" and "hitting set generators." I will give an overview of my contributions in this area (with collaborators), consisting of new constructions and applications of these three types of generators.
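For readers unfamiliar with the PRG framework, the following is a sketch of the standard textbook definition (provided for context; it is not part of the talk abstract itself):

```latex
% Standard definition (background, not from the abstract): a function
% $G \colon \{0,1\}^s \to \{0,1\}^n$ is an $\varepsilon$-PRG for a class
% $\mathcal{C}$ of algorithms if, for every $A \in \mathcal{C}$,
\[
  \Bigl| \Pr_{x \sim \{0,1\}^s}\bigl[A(G(x)) = 1\bigr]
       - \Pr_{y \sim \{0,1\}^n}\bigl[A(y) = 1\bigr] \Bigr| \le \varepsilon,
\]
% where the seed length $s$ is much smaller than $n$, so $A$ cannot
% distinguish the $n$ pseudorandom bits $G(x)$ from $n$ truly random bits.
% Derandomization then follows by enumerating all $2^s$ seeds, which is
% feasible in small space when $s$ is logarithmic in $n$.
```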
William Hoza is a postdoctoral fellow at the Simons Institute for the Theory of Computing at the University of California, Berkeley. He received his PhD from the University of Texas at Austin, where he was advised by David Zuckerman. He studies topics in complexity theory, especially pseudorandomness and derandomization.