PhD Preliminary Oral Exam: Quazi Ishtiaque Mahmud
AutoParLLM: GNN-guided Context Generation for Zero-Shot Code Parallelization using LLMs
In-Context Learning (ICL) has been shown to be a powerful technique for augmenting the capabilities of LLMs across a diverse range of tasks. This work proposes AUTOPARLLM, a novel way of generating context with guidance from graph neural networks (GNNs) to produce efficient parallel code. We evaluate AUTOPARLLM on 12 applications from two well-known benchmark suites of parallel codes: the NAS Parallel Benchmarks and the Rodinia Benchmark. Our results show that AUTOPARLLM improves state-of-the-art LLMs (e.g., GPT-4) by 19.9% on the NAS benchmarks and by 6.48% on the Rodinia benchmarks in terms of CodeBERTScore for the task of parallel code generation. Moreover, AUTOPARLLM improves the most powerful LLM to date, GPT-4, achieving ≈17% (on the NAS benchmarks) and ≈16% (on the Rodinia benchmarks) better speedup. In addition, we propose OMPSCORE for evaluating the quality of generated parallel code and show its effectiveness on this task.
Committee: Dr. Ali Jannesari (Major Professor), Dr. Myra B. Cohen, Dr. Liyi Li, Dr. Qi Li, and Dr. Samik Basu.