On Decomposing a Deep Learning Model into Modules
Deep learning is being incorporated into many modern software systems. Deep learning approaches train a deep neural network (DNN) model on training examples and then use the model for prediction. Although the layered structure of a DNN model is observable, the model is treated in its entirety as a monolithic component. To change the logic implemented by the model, e.g., to add or remove the logic that recognizes inputs belonging to a certain class, or to replace that logic with an alternative, the training examples must be changed and the DNN retrained on the new set of examples. We argue that decomposing a DNN into DNN modules, akin to decomposing monolithic software code into modules, can bring the benefits of modularity to deep learning. First, we show how fully connected deep neural networks can be decomposed into modules, where each module is responsible for recognizing a single output class; these modules can then be reused and replaced to form a new DNN model. Second, we propose extending this output class-based decomposition approach to other deep learning models, e.g., convolutional neural networks and models for natural language processing. Third, we propose enabling structured programming to document the interactions between modules and increase explainability. Fourth, we propose to study how deep learning models change over their life span, in order to identify different decomposition criteria.
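To make the output class-based decomposition concrete, here is a minimal sketch of one way it could work for a fully connected network: each module keeps the shared hidden layers but only the output weights for its own class, and a new model is recomposed by stacking module scores. The network, weights, and function names are illustrative assumptions, not the authors' actual technique.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" fully connected network: 4 inputs -> 8 hidden -> 3 classes.
# Random weights stand in for a trained model (assumption for illustration).
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def monolithic_predict(x):
    h = np.maximum(x @ W1 + b1, 0)   # ReLU hidden layer
    return int(np.argmax(h @ W2 + b2))

def make_module(class_idx):
    """Slice out a module that scores a single output class."""
    w, b = W2[:, class_idx], b2[class_idx]
    def module(x):
        h = np.maximum(x @ W1 + b1, 0)
        return h @ w + b             # score for this one class only
    return module

modules = [make_module(c) for c in range(3)]

def composed_predict(x):
    # Recompose a model from per-class modules; a module could be
    # replaced here with an alternative trained separately.
    return int(np.argmax([m(x) for m in modules]))

x = rng.normal(size=4)
assert composed_predict(x) == monolithic_predict(x)
```

In this toy setting the recomposed model agrees with the monolithic one by construction; the interesting engineering questions, which the talk addresses, arise when modules are reused or replaced rather than merely sliced out.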
Committee: Hridesh Rajan (major professor), Gianfranco Ciardo, Myra Cohen, Wei Le, and Jin Tian
Join on Zoom: Please click this URL to start or join. https://iastate.zoom.us/j/96860135061?pwd=MXErU3BkNTZTWmhJWWlFaDNVSUxaUT09
Or, go to https://iastate.zoom.us/join and enter meeting ID: 968 6013 5061 and password: 989246