PhD Preliminary Oral Exam: Weisi Fan
Safety-Driven Learning for Perception and Decision-Making in Autonomous Systems
Perception components in autonomous systems are often developed and optimized independently of downstream decision-making and control components, relying on established performance metrics like accuracy, precision, and recall. Traditional loss functions, such as cross-entropy loss and negative log-likelihood, focus on reducing misclassification errors but fail to consider their impact on system-level safety, overlooking the varying severities of system-level failures caused by these errors. To address this limitation, we propose a novel training paradigm that augments the perception component with an understanding of system-level safety objectives. Central to our approach is the translation of system-level safety requirements, formally specified using the rulebook formalism, into safety scores. These scores are then incorporated into the reward function of a reinforcement learning framework for fine-tuning perception models with system-level safety objectives. Simulation results demonstrate that models trained with this approach outperform baseline perception models in terms of system-level safety. In addition, we introduce new performance metrics for evaluating the accuracy of predicted class probabilities: the absolute class error (ACE), the expectation of absolute class error (EACE), and the variance of absolute class error (VACE). We evaluate these metrics across different neural network architectures and datasets. Furthermore, we present a new task-based neural network for object classification and compare its performance with a typical probabilistic classification model to demonstrate the improvement achieved by threshold-based probabilistic decision-making.
Degree: PhD Co-Major in Computer Science and Statistics
Committee: Tichakorn Wongpiromsarn (co-major professor), Ulrike Genschel (co-major professor), Soumik Sarkar, Chunlin Li, Yan-Bin Jia, Ali Jannesari, and Hongyang Gao
Join on Zoom: https://iastate.zoom.us/j/99006931958