Abstract: Artificial intelligence (AI) plays an increasingly prominent role in society as decisions once made by humans are delegated to automated systems. These systems are expected to be efficient, robust, explainable, and generalizable, and to lead to outcomes agreed upon by society. There is a growing understanding that robust decision-making relies on some knowledge of the causal mechanisms underlying the environment. For instance, an intelligent robot has to know the cause-and-effect relationships in its environment to plan its course of action robustly; a physician needs to understand the effects of the available drugs to design an effective treatment strategy for her patients. The current generation of AI systems responsible for decision-making does not explicitly represent the underlying causal model. This project will build the foundations of a general framework (i.e., a set of principles, algorithms, and tools) for decision-making systems by enriching the traditional AI formalism with causal ingredients, enabling more efficient, robust, and explainable decision-making. The research will plant the seed for a transformation in the decision-making field and inform the development of the next generation of AI systems. The results are expected to have a significant impact on the foundations of AI and broad implications for society as more and more decisions are delegated to AI systems. The researchers will develop new educational materials and course curricula in causal inference, provide research training for graduate students, and continue to recruit from underrepresented groups. The research team will continue supporting the “Causality in Statistics Education Award” to improve the teaching and learning of modern causal inference tools in statistics and the data sciences.
This project takes the first step toward integrating causal inference (CI) and reinforcement learning (RL) into a single discipline, causal reinforcement learning (CRL). The idea is to endow an RL agent with an explicit causal model of its environment and with new capabilities for interventional and counterfactual reasoning. CRL opens up a family of learning opportunities and challenges that were neither acknowledged nor understood before. The research tasks include combining offline and online learning methods when agents have different perceptual and actuation capabilities, and developing general machinery for counterfactual decision-making, which is more powerful than its standard, interventional counterpart.
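To give a concrete sense of why counterfactual decision-making can beat purely interventional decision-making, the following is an illustrative sketch (not the project's actual machinery) in the spirit of the multi-armed bandit with unobserved confounders. The payout table and the intent mechanism below are hypothetical: two hidden binary factors jointly determine both the agent's "natural" arm choice and the payout, so both arms look identical under interventions, yet conditioning on the agent's own intent reveals a better policy.

```python
import random

# Hypothetical "confounded bandit": PAYOUT[(d, b)][arm] is the win
# probability given two unobserved binary factors d and b.
PAYOUT = {
    (0, 0): (0.1, 0.5),
    (0, 1): (0.5, 0.1),
    (1, 0): (0.4, 0.2),
    (1, 1): (0.2, 0.4),
}

def pull(d, b, arm, rng):
    """Sample a 0/1 reward for playing `arm` in hidden context (d, b)."""
    return 1 if rng.random() < PAYOUT[(d, b)][arm] else 0

def simulate(policy, n=100_000, seed=0):
    """Average reward of `policy`, a function from the agent's intent to an arm."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        d, b = rng.randint(0, 1), rng.randint(0, 1)
        intent = d ^ b  # the arm the confounders nudge the agent toward
        total += pull(d, b, policy(intent), rng)
    return total / n

# Interventional policy: fix an arm regardless of intent.
# Here E[Y | do(X=0)] = E[Y | do(X=1)] = 0.3, so interventions cannot
# distinguish the arms.
interventional = simulate(lambda intent: 0)

# Counterfactual policy: observe one's own intent, then deviate from it.
# Intent carries information about the hidden (d, b), lifting the
# expected reward to 0.45 in this construction.
counterfactual = simulate(lambda intent: 1 - intent)

print(f"interventional: {interventional:.3f}, counterfactual: {counterfactual:.3f}")
```

The design point is that the two policies act on the same observable system; the counterfactual one simply exploits an extra conditioning variable, the agent's own intended action, that no intervention-only learner uses.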