Ph.D. Research Proficiency Exam: Olukorede Fakorede

Speaker: Olukorede Fakorede
Friday, June 3, 2022 - 3:00pm

Improving Adversarial Training by Regularizing with Loss Ratio and Logit Loss

The vulnerability of deep learning classifiers to adversarial examples has received much attention lately, prompting researchers to propose various methods for defending neural networks against adversarial examples. Adversarial training and its variants, which utilize adversarial examples in the training process, have been the most successful defense methods. Observing that generating adversarial examples (for untargeted attacks) involves minimizing the model’s confidence in the correct prediction by increasing its loss, we propose minimizing the ratio of the model’s loss on adversarial examples to its loss on the corresponding benign examples. In addition, we hypothesize that robustness against targeted attacks could be improved when information about both correct and incorrect classes is included in the adversarial training process, and we study minimizing the loss between adversarial and natural logits as a way to improve robustness against targeted attacks. We therefore propose a new training objective for adversarial training, called the loss ratio and logits loss adversarial training (LRLLAT) objective. Experimental results show that the proposed LRLLAT method improves on the state of the art in robustness against strong adversarial attacks in both white-box and black-box settings.
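The abstract describes the objective only at a high level; the following is a minimal PyTorch sketch of how a combined loss-ratio-plus-logit-loss objective might look. The function name lrllat_loss, the logit_weight hyperparameter, and the choice of squared error for the logit term are illustrative assumptions, not the speaker's implementation.

```python
# A minimal sketch (not the authors' released code) of an LRLLAT-style
# objective as described in the abstract: the ratio of the adversarial loss
# to the benign loss, plus a term matching adversarial logits to natural ones.
import torch
import torch.nn.functional as F

def lrllat_loss(model, x_nat, x_adv, y, logit_weight=1.0, eps=1e-8):
    """Loss ratio + logit loss, following the abstract's description."""
    logits_nat = model(x_nat)  # logits on benign (natural) inputs
    logits_adv = model(x_adv)  # logits on adversarial inputs

    loss_nat = F.cross_entropy(logits_nat, y)
    loss_adv = F.cross_entropy(logits_adv, y)

    # Ratio of adversarial to benign loss; minimizing it pushes the model
    # toward equal confidence on adversarial and benign examples.
    loss_ratio = loss_adv / (loss_nat + eps)

    # Distance between adversarial and natural logits, so the training signal
    # carries information about incorrect classes as well, which the abstract
    # hypothesizes helps robustness against targeted attacks. Squared error
    # is an assumption; any logit-distance measure could be substituted.
    logit_loss = F.mse_loss(logits_adv, logits_nat)

    return loss_ratio + logit_weight * logit_loss
```

In a standard adversarial-training loop, x_adv would be generated from x_nat by an attack such as PGD at each step, and lrllat_loss would replace the usual cross-entropy on adversarial examples.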

Committee: Jin Tian (major professor), Ali Jannesari, Qi Li, Chris Quinn, and Robyn Lutz

Join on Zoom: Please click this URL to start or join: https://iastate.zoom.us/j/2238898166?pwd=ckJHTDdRTlV0MjFoWllUdS9hRlBNZz09. Alternatively, go to https://iastate.zoom.us/join and enter meeting ID 223 889 8166 and password 781219.