Ph.D. Final Oral Exam: Mohammed Khaleel

Nov 13, 2023 - 8:30 AM

Speaker: Mohammed Khaleel

On Interpretation Methods for Deep Neural Networks

Deep neural networks have achieved remarkable success in various fields, ranging from natural language processing to image recognition. However, the inherent opacity of deep neural networks has raised concerns about their explainability and interpretability. Explainability and interpretability are critical issues for the future deployment of artificial intelligence in several domains, including healthcare. They improve model transparency, which increases the trustworthiness of the machine learning models and their outputs. In this talk, we will explore critical challenges surrounding the explanation and interpretation of deep neural networks across various domains, such as natural language processing and image recognition.

The first challenge we will address focuses on the evaluation of interpretation methods for text classification. Existing evaluation approaches rely heavily on classification accuracy or prediction confidence and lack a quantifiable measure of alignment with human interpretation, largely because no large, publicly available interpretation ground truth exists. Such a ground truth would help advance interpretation methods by enabling better quantitative evaluation, but manually labeling the important words in each document to build one is very time-consuming and prone to significant disagreement among human annotators. To tackle this issue, we introduce the Interpretation methods for Deep text Classification (IDC) benchmark, comprising three pseudo-interpretation ground truth datasets and three performance metrics. The pseudo-interpretation ground truth generated by IDC agrees with human annotators on sampled movie reviews, and the benchmark facilitates the quantitative evaluation of six recent interpretation methods.
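To make this kind of evaluation concrete, here is a minimal sketch assuming a simple top-k word-overlap score; it is not one of IDC's actual metrics, and the function name, example word lists, and value of k are illustrative only.

```python
# Illustrative only: score one interpretation method's top-k important words
# against a pseudo ground-truth word set for a single document.

def topk_agreement(ranked_words, ground_truth_words, k=5):
    """Precision, recall, and F1 of the method's top-k words vs. the ground truth."""
    top_k = set(ranked_words[:k])            # words the method ranks as most important
    truth = set(ground_truth_words)          # pseudo ground-truth important words
    overlap = top_k & truth
    precision = len(overlap) / max(len(top_k), 1)
    recall = len(overlap) / max(len(truth), 1)
    f1 = 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical example: words a saliency method ranked for one movie review.
method_ranking = ["terrible", "boring", "plot", "the", "acting"]
pseudo_truth = {"terrible", "boring", "acting", "waste"}
print(topk_agreement(method_ranking, pseudo_truth, k=5))  # (0.6, 0.75, ~0.67)
```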

The second challenge delves into the limitations of existing pixel-based interpretation methods in providing meaningful insights into complex concepts, particularly in medical image recognition. Concept-based interpretation methods, on the other hand, require adding self-explainable layers to the neural network, which can degrade prediction accuracy. Furthermore, interpretation methods that provide multiple levels of concepts require manual labeling of those concepts. We propose the Hierarchical Visual Concept (HVC) interpretation framework, which hierarchically presents the most relevant visual concepts at multiple semantic levels to enhance model interpretability without sacrificing accuracy. This approach learns concepts automatically during training, eliminating the need for costly manual labeling.

Additionally, we adapt the HVC framework to introduce VisActive, a visual concept-based active learning method for image classification under class imbalance. VisActive recommends the most informative images from a large, unlabeled, and highly imbalanced dataset for manual labeling, with the goal of improving the performance of an image classifier while minimizing manual labeling effort. By leveraging visual concepts to expand the labeled dataset while promoting diversity and balance in class representations, VisActive achieved significant performance improvements over state-of-the-art deep active learning methods.
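As a rough, hypothetical illustration of concept-driven sample selection (not the actual VisActive algorithm, which is not detailed here), the sketch below assumes each image has already been mapped to a binary visual-concept vector and simply prioritizes unlabeled images carrying concepts that are still rare in the labeled set; all names and numbers are invented.

```python
# Hypothetical sketch: pick unlabeled images whose visual concepts are
# under-represented in the current labeled set, to promote diversity and balance.
import numpy as np

def select_for_labeling(unlabeled_concepts, labeled_concepts, budget=10):
    """Return indices of unlabeled images to send for manual labeling."""
    concept_counts = labeled_concepts.sum(axis=0) + 1.0   # how often each concept is already labeled
    rarity = 1.0 / concept_counts                         # rare concepts get higher weight
    scores = unlabeled_concepts @ rarity                  # images carrying rare concepts score higher
    return np.argsort(-scores)[:budget]                   # top-scoring images within the budget

rng = np.random.default_rng(0)
labeled = rng.integers(0, 2, size=(50, 8))     # 50 labeled images, 8 binary concepts
unlabeled = rng.integers(0, 2, size=(500, 8))  # large unlabeled, imbalanced pool
print(select_for_labeling(unlabeled, labeled, budget=5))
```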

The third challenge focuses on explaining deep learning model failures. We introduce ClarifyConfusion, an explanation method that learns visual prototypes of each class to identify the causes of model confusion. By visualizing the image features contributing to incorrect classifications, ClarifyConfusion provides insights into model failure cases, which is essential for enhancing the safety and reliability of deep learning systems.
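The general idea of prototype-based failure analysis can be sketched as follows; this is not the ClarifyConfusion method itself, and the embeddings, prototypes, and class names are invented for illustration. A misclassified image's embedding is compared against each class's learned prototype to show which classes it most resembles.

```python
# Invented example: rank class prototypes by distance to a misclassified
# image's embedding to see which classes the model is confusing.
import numpy as np

def nearest_prototypes(embedding, prototypes):
    """Return (class, distance) pairs sorted from most to least similar."""
    distances = {cls: float(np.linalg.norm(embedding - proto)) for cls, proto in prototypes.items()}
    return sorted(distances.items(), key=lambda kv: kv[1])

prototypes = {                                  # hypothetical learned class prototypes
    "cat": np.array([0.9, 0.1, 0.2]),
    "dog": np.array([0.8, 0.2, 0.3]),
    "car": np.array([0.1, 0.9, 0.8]),
}
misclassified = np.array([0.86, 0.14, 0.26])    # image labeled "cat" but predicted "dog"
print(nearest_prototypes(misclassified, prototypes))  # near-equal cat/dog distances reveal the confusion
```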

Overall, this dissertation offers novel solutions to enhance the explanation and interpretation of deep neural networks, contributing to the advancement of transparent and trustworthy artificial intelligence systems across various application domains.

Committee: Ying Cai (major professor), Adisak Sukul, Wallapak Tavanapong, Jin Tian, Johnny Wong, and David Peterson

Join on Zoom: https://iastate.zoom.us/j/91953413768