Computer vision systems seek to recover properties of the physical world from measurements of reflected light. To do so, they must solve ill-posed estimation problems by leveraging the statistical structure present in natural scenes. In the first part of the talk, I will introduce a new inference framework that can efficiently reason with different notions of spatial structure, at different scales and over different regions across the visual field. This allows the accurate recovery of continuous-valued maps of scene properties---depth, surface orientation, reflectance, motion, etc.---from image data. Specifically, I will describe a method that uses this framework to estimate scene depth from a single image, by training a neural network to produce dense probabilistic estimates of different elements of local geometric structure, and harmonizing these estimates to produce consistent depth maps.
Ayan Chakrabarti is currently a Research Assistant Professor at the Toyota Technological Institute at Chicago. He completed his PhD in Engineering Sciences at Harvard University in 2011, advised by Prof. Todd Zickler, and was a post-doctoral fellow at Harvard from 2011 to 2014. Dr. Chakrabarti works on applying tools from machine learning to problems in computer vision and computational photography---on the design of accurate and efficient algorithms for visual inference, and of new kinds of high-capability sensors and cameras. His research seeks solutions to these problems by considering both the physics of image formation and the statistics of natural images and scenes.