Ph.D. Preliminary Oral Exam: Mohammed Khaleel

Friday, November 19, 2021 - 3:00pm
Atanasoff B0029

On Interpretation Methods for Convolutional Neural Networks

In recent years, Convolutional Neural Networks (CNNs) have been intensively explored for image classification in several application domains, including medicine. Due to the success and the black-box nature of CNN-based models, the interpretability of these models has received a surge of research attention. Machine learning interpretation is the process of revealing the behavior of a complex machine learning model in its decision making. Interpretation techniques provide insight into the internal workings of the overall model (global interpretation) and/or how the model arrives at the class prediction for a specific input image (local interpretation). Most existing local interpretation methods produce interpretations that highlight regions within the input image that support the predicted class. These interpretations cannot provide further detail to aid understanding of a complex concept in a domain such as medicine. Recently, interpretation methods that provide multiple visual concept levels (object, shape, texture, and color) have begun to emerge. However, they require manually labeled concepts at various semantic levels. In medicine, manual labeling of medical images with multiple semantic levels by domain experts is neither time- nor cost-feasible. We propose a novel Hierarchical Visual Concept (HVC) interpretation framework to explain predictions made by CNN-based image classification models. Given an input image and the corresponding predicted class, HVC explains the prediction with a concept hierarchy of the most relevant visual concepts at multiple levels, together with measurements of the extent to which these concepts are present in the training data. The concepts are learned automatically during training such that the lower-level concepts in the hierarchy support the corresponding higher-level concepts. The hierarchical explanation aims to aid understanding of a complex concept in a domain such as medicine.
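To make the notion of "local interpretation" concrete, the following is a minimal, self-contained sketch of occlusion sensitivity, one classic way to highlight image regions that support a class prediction. It uses a hypothetical toy scoring function in place of a real CNN and is not the HVC framework itself; all names here are illustrative.

```python
# Toy illustration of occlusion-based local interpretation (not HVC):
# occlude patches of the image, re-score, and treat the score drop as
# that region's importance for the prediction.

def score(image):
    # Hypothetical "classifier": responds to bright pixels in the
    # top-left 3x3 region (the stand-in for a class-relevant object).
    return sum(image[r][c] for r in range(3) for c in range(3))

def occlusion_map(image, patch=2):
    """Importance of each patch = drop in score when it is zeroed out."""
    n = len(image)
    base = score(image)
    heat = [[0.0] * n for _ in range(n)]
    for r0 in range(0, n, patch):
        for c0 in range(0, n, patch):
            occluded = [row[:] for row in image]
            for r in range(r0, min(r0 + patch, n)):
                for c in range(c0, min(c0 + patch, n)):
                    occluded[r][c] = 0.0
            drop = base - score(occluded)
            for r in range(r0, min(r0 + patch, n)):
                for c in range(c0, min(c0 + patch, n)):
                    heat[r][c] = drop
    return heat

# A 6x6 "image" with a bright object in the top-left corner.
image = [[1.0 if r < 3 and c < 3 else 0.2 for c in range(6)]
         for r in range(6)]
heat = occlusion_map(image)
# Patches overlapping the top-left object receive high importance;
# patches elsewhere receive zero, mimicking a saliency-style heat map.
```

Methods of this kind answer only "where did the model look?"; the abstract's point is that a hierarchy of learned concepts (object, shape, texture, color) can additionally answer "what did it see there?".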
We evaluated HVC interpretation of classification results produced by VGG16 and ResNet50 classifiers on public and private colonoscopy image datasets. An evaluation by a gastroenterologist on the colonoscopy dataset shows that the concepts learned by HVC are highly relevant to the correct classification. At the object level of the hierarchy, the expert found the explanation convincing in 87%-100% of the cases. The interpretation at this level was rated easy to understand in 100% of the cases, dropping to 84% at the color concept level.

Committee: Ying Cai (major professor), Adisak Sukul, Wallapak Tavanapong, Jin Tian, Johnny Wong, and David Peterson

Join on Zoom: Meeting ID 935 2562 9083