Davis Automated Reasoning Group

The Davis Automated Reasoning Group (DARG), led by Prof. Aditya Thakur, is a team of researchers who develop tools and techniques for reasoning about software. DARG's research lies at the intersection of programming languages, formal methods, machine learning, and software engineering.

Our Projects

Efficient Abstract Interpretation

Abstract interpretation is a general framework for expressing static program analyses. It reduces the problem of extracting properties of a program to computing an approximation of the least fixpoint of a system of equations. This project aims to improve the time and memory performance of abstract interpretation, specifically the fixpoint computation central to all abstract interpreters. Our research is used in two open-source abstract interpreters: NASA IKOS and Facebook SPARTA. This research is partially supported by a 2018 Facebook Testing and Verification (TAV) Research Award.
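
To make this concrete, the following is a minimal sketch, in Python, of the kind of fixpoint computation an abstract interpreter performs: Kleene iteration with widening over the interval domain, applied to the loop head of a hypothetical program x = 0; while (x < 10) x = x + 1. The program, abstract transformer, and domain encoding are illustrative assumptions, not code from IKOS or SPARTA.

    NEG_INF, POS_INF = float("-inf"), float("inf")
    BOTTOM = None  # bottom element: no reachable values

    def join(a, b):
        if a is BOTTOM: return b
        if b is BOTTOM: return a
        return (min(a[0], b[0]), max(a[1], b[1]))

    def widen(old, new):
        # Widening guarantees termination: a bound that keeps growing is
        # pushed straight to infinity instead of creeping up forever.
        if old is BOTTOM: return new
        if new is BOTTOM: return old
        return (old[0] if new[0] >= old[0] else NEG_INF,
                old[1] if new[1] <= old[1] else POS_INF)

    def f(X):
        # Equation for the loop head of: x = 0; while (x < 10) x = x + 1;
        #   X = [0, 0]  join  ((X meet (-inf, 9]) + 1)
        if X is BOTTOM or X[0] > 9:
            body = BOTTOM
        else:
            body = (X[0] + 1, min(X[1], 9) + 1)
        return join((0, 0), body)

    X = BOTTOM                       # increasing iteration with widening
    while True:
        new = widen(X, f(X))
        if new == X:
            break
        X = new
    print("after widening: ", X)     # (0, inf)

    X = f(X)                         # one narrowing pass restores precision
    print("after narrowing:", X)     # (0, 10): the invariant 0 <= x <= 10

Widening alone overshoots to (0, inf); the narrowing pass recovers the precise loop invariant, illustrating the precision-versus-termination trade-off at the heart of fixpoint computation.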

  1. Memory-efficient Fixpoint Computation
    27th Static Analysis Symposium (SAS), 2020
    arXiv Code Details
  2. Deterministic Parallel Fixpoint Computation
    Proc. ACM Program. Lang. (POPL), 2020
    PDF ACM© Code Video Details

Neuro-Symbolic Program Analysis

Neuro-symbolic program analysis augments static program analysis with machine learning. This research is partially funded by NSF grant CCF-1464439 and a 2019 Facebook Probability and Programming Research Award.
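
As a toy illustration of one ingredient used in this line of work, the sketch below treats each control-flow path through a function as a "sentence" of called-function tokens and embeds functions so that those with similar paths receive nearby vectors. The corpus is invented, and the count-matrix-plus-SVD embedding is a simplified stand-in for the learned embeddings in the papers below; it is not the published algorithm.

    import numpy as np

    # Hypothetical corpus: function name -> list of call paths.
    paths = {
        "read_config": [["open", "read", "close"], ["open", "log_err"]],
        "load_file":   [["open", "read", "close"], ["open", "read", "log_err"]],
        "send_packet": [["socket", "send", "close"], ["socket", "log_err"]],
    }

    vocab = sorted({tok for ps in paths.values() for p in ps for tok in p})
    col = {tok: i for i, tok in enumerate(vocab)}
    names = sorted(paths)

    # One row per function: counts of tokens over all of its paths.
    M = np.zeros((len(names), len(vocab)))
    for r, fn in enumerate(names):
        for p in paths[fn]:
            for tok in p:
                M[r, col[tok]] += 1

    # Low-rank function embeddings via truncated SVD.
    U, S, _ = np.linalg.svd(M, full_matrices=False)
    emb = U[:, :2] * S[:2]

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            print(f"{names[i]:12s} ~ {names[j]:12s} {cosine(emb[i], emb[j]):+.2f}")
    # read_config and load_file share call paths, so they come out most similar.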

  1. Effective Error-Specification Inference via Domain-Knowledge Expansion
    Proceedings of the 27th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE’19), 2019
    PDF ACM© Code Details
  2. Path-Based Function Embedding and Its Application to Error-Handling Specification Mining
    Proceedings of the 26th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE’18), 2018
    PDF ACM© Code Details
  3. Path-Based Function Embeddings
    Proceedings of the 40th International Conference on Software Engineering: Companion Proceedings (ICSE), 2018
    PDF ACM© Details

Static Analysis and Modification of Deep Neural Networks

Deep neural networks (DNNs) have become the state of the art in a variety of applications, including image recognition and natural language processing. Moreover, they are increasingly used in safety- and security-critical applications such as autonomous vehicles. This project aims to improve our understanding of DNNs by developing techniques to analyze, verify, and modify them. We have introduced a symbolic representation for deep neural networks that partitions the network's input domain such that, within each partition, the output of the network is affine with respect to the input. This symbolic representation enabled the first approach for exactly computing Integrated Gradients. We have also defined a general notion of Abstract Neural Networks to speed up analyzers. This research is partially funded by a 2020 Facebook Probability and Programming Research Award.
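
The sketch below shows, under simplified assumptions, why such a piecewise-affine (linear-region) decomposition yields exact Integrated Gradients: along the straight line from a baseline to an input, a ReLU network's gradient is constant within each region, so the IG integral collapses to a finite sum with one term per region. The two-neuron network and its weights are hypothetical; this is not the implementation accompanying the papers below.

    import numpy as np

    # Tiny hypothetical ReLU network: R^2 -> R with one hidden layer.
    W1 = np.array([[1.0, -1.0], [2.0, 1.0]]); b1 = np.array([-0.5, -1.0])
    w2 = np.array([1.0, -2.0]); b2 = 0.3

    def f(x):
        return w2 @ np.maximum(W1 @ x + b1, 0.0) + b2

    def grad(x):
        # Gradient of f, constant on the interior of each linear region.
        active = (W1 @ x + b1) > 0.0
        return (w2 * active) @ W1

    x0 = np.zeros(2)              # baseline
    x = np.array([1.5, 2.0])      # input to explain
    d = x - x0

    # Region boundaries on the segment x0 + t*d, t in [0, 1]: points
    # where some hidden pre-activation crosses zero.
    ts = {0.0, 1.0}
    for i in range(len(b1)):
        slope, offset = W1[i] @ d, W1[i] @ x0 + b1[i]
        if slope != 0.0:
            t = -offset / slope
            if 0.0 < t < 1.0:
                ts.add(t)
    ts = sorted(ts)

    # The gradient is constant on each segment, so the IG line integral
    # is an exact finite sum: one (length * gradient) term per segment.
    integral = np.zeros(2)
    for t0, t1 in zip(ts, ts[1:]):
        mid = x0 + 0.5 * (t0 + t1) * d    # any interior point works
        integral += (t1 - t0) * grad(mid)
    ig = d * integral                      # IG_i = (x_i - x0_i) * integral_i

    print("exact IG:", ig)                             # approx. [-4.8 -3.2]
    print("sums to f(x) - f(x0):", ig.sum(), f(x) - f(x0))  # both -8.0

The final print checks IG's completeness axiom: the attributions sum to f(x) - f(x0) exactly (up to float rounding), which a sampled Riemann-sum approximation achieves only in the limit.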

  1. Abstract Neural Networks
    27th Static Analysis Symposium (SAS), 2020
    arXiv Code Details
  2. Correcting Deep Neural Networks with Small, Generalizing Patches
    NeurIPS 2019 Workshop on Safety and Robustness in Decision Making, 2019
    PDF Details
  3. Computing Linear Restrictions of Neural Networks
    Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems (NeurIPS), 2019
    Proceedings PDF Poster Slides Code Details

People

Current members of DARG include Prof. Aditya Thakur, Sung Kook Kim, Zhe Tao, Matthew Sotoudeh, Jack Abukhovski, and Eric Li.
Alumni include Daniel DeFreez.