Ph.D. Final Oral Exam: Rohit Dureja

Speaker: Rohit Dureja
Date: Wednesday, September 9, 2020 - 1:00pm
Location: Virtual

Model checking large design spaces: theory, tools, and experiments

In the early stages of design, there are frequently many different models of the system under development, which together constitute a design space. The different models arise from the need to weigh different design choices, to check core capabilities of system versions with varying features, or to analyze a future version against previous ones in the product line. Every unique combination of choices yields a competing system model; the models differ in their assumptions, implementations, and configurations. Formal verification techniques, like model checking, can aid system development by systematically comparing the different models in terms of functional correctness. However, applying model checking off-the-shelf may not scale, due to the large size of the design space for today's complex systems. We present scalable algorithms for design-space exploration using model checking that enable an exhaustive comparison of all competing models in large design spaces.
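To make the combinatorial nature of a design space concrete, here is a minimal sketch in Python (with hypothetical feature names chosen for illustration, not taken from the thesis) that enumerates every model arising from independent design choices; this multiplicative growth is what makes off-the-shelf model checking impractical.

```python
from itertools import product

# Hypothetical design choices; each option set is one dimension of the design space.
choices = {
    "arbiter": ["round_robin", "priority"],
    "buffer_depth": [2, 4, 8],
    "error_handling": ["retry", "abort"],
}

# Every unique combination of choices yields one competing system model.
design_space = [dict(zip(choices, combo)) for combo in product(*choices.values())]

print(len(design_space))  # 2 * 3 * 2 = 12 models, each a separate verification task
for model in design_space[:3]:
    print(model)
```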

Model checking a design space entails checking multiple models and properties. We present algorithms that automatically prune the design space by finding inter-model relationships and property dependencies. We observe that sequential enumeration of the design space generates models with small incremental differences. Typical model-checking algorithms do not take advantage of this information and end up re-verifying "already-explored" state spaces across models. We present algorithms that learn and reuse information from solving related models in sequential model-checking runs. Similarly, design-space model-checking tasks often mandate checking several properties. State-of-the-art tools do not optimally exploit subproblem sharing between "nearly identical" properties. We present a near-linear-runtime algorithm for partitioning properties into provably high-affinity groups for individual model-checking tasks; the verification effort expended on one property in a group can be reused directly to accelerate the verification of the others. Building on these ideas, we optimize parallel verification to maximize the benefits of the proposed techniques: we propose methods to minimize redundant computation and to dynamically optimize work distribution when checking multiple properties of individual models.
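As one illustration of the property-grouping idea, the sketch below buckets properties by the set of state variables they mention, a stand-in affinity measure; the provably high-affinity grouping is the subject of the thesis, and the property names and variable sets here are hypothetical. Hashing each property's support set gives a single near-linear pass, and properties landing in the same bucket can be checked together so their verification effort is shared.

```python
from collections import defaultdict

# Hypothetical properties, each with the set of state variables it refers to.
# (Exact support-set equality is a crude affinity proxy, used only for illustration.)
properties = {
    "p1": {"req", "grant"},
    "p2": {"req", "grant", "busy"},
    "p3": {"fifo_full", "fifo_empty"},
    "p4": {"req", "grant"},
}

def group_by_affinity(props):
    """Bucket properties by a hash of their support set in one near-linear pass.
    Properties in the same bucket share subproblems and form one checking task."""
    groups = defaultdict(list)
    for name, support in props.items():
        groups[frozenset(support)].append(name)
    return list(groups.values())

for group in group_by_affinity(properties):
    print(group)  # e.g. ['p1', 'p4'] share an identical support set
```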

Committee: Kristin Yvonne Rozier (major professor), Gianfranco Ciardo, Robyn Lutz, Samik Basu, and Hridesh Rajan

Email nlewis1@iastate.edu for the videoconference link if you are interested in viewing this exam.