Leilani H. Gilpin
Assistant Professor, UC Santa Cruz
UC CalCompute Coalition · UC Santa Cruz
AI Safety Researcher & Developer
Leilani H. Gilpin is an Assistant Professor in the Department of Computer Science and Engineering at UC Santa Cruz, where she leads the AI Explainability and Accountability (AIEA) Lab and holds a faculty affiliate appointment in the Science and Justice Research Center. She holds a Ph.D. in Electrical Engineering and Computer Science from MIT (2020), an M.S. in Computational and Mathematical Engineering from Stanford University (2013), and dual B.S. degrees in Computer Science (Highest Honors) and Mathematics (Honors) from UC San Diego (2011). Her research develops the theories and methods needed for autonomous AI systems to explain their own decisions, detect their own errors, and be held accountable when they fail — a set of capabilities that are foundational to any public or government deployment of AI at scale.
Gilpin’s central research contribution is the design of self-explaining AI architectures: systems that are transparent by construction rather than explained after the fact. Her doctoral dissertation introduced Anomaly Detection through Explanations (ADE), a full-system monitoring framework that enables autonomous vehicles to detect inconsistencies in their own behavior and narrate them in human-legible terms. This work seeded two major active research lines at UCSC. The first is neuro-symbolic (NeSy) AI: hybrid architectures that pair the perceptual power of neural networks with the auditability of symbolic logic, applied to domains including traffic reasoning, legal rule synthesis, and domain-specific question answering (e.g., her ProSLM framework, published at the International Conference on Neural-Symbolic Learning and Reasoning, 2024). The second is stress testing and robustness evaluation: her DANGER framework generates synthetic dangerous scenarios specifically to probe AI systems under conditions that curated benchmark datasets such as KITTI and Waymo largely lack. Both lines are supported by an Air Force Office of Scientific Research Young Investigator Program (YIP) award, Frame-Based Monitoring to Detect and Explain Multimodal Autonomous System Errors ($450,000, March 2024–February 2027), as well as prior Underwriters Laboratories subawards through Northwestern University totaling over $275,000 for work on explanation robustness and LLM hallucination monitoring.
Beyond this technical core, Gilpin has developed an influential framework for institutional accountability in complex AI systems. Her “Accountability Layers” paper (AAAI 2023) addresses a problem central to AI governance: when a deployed system fails, the cause is rarely a single model error but rather an interaction among data quality, model assumptions, distributional shift, and system integration. Her layered decomposition provides a structured vocabulary for assigning responsibility across those contributing factors, exactly the kind of tool that regulators, procurement officers, and legislative staff need when auditing AI systems used in public services. She has extended this line of reasoning to autonomous vehicles through co-authored work on semi-automated synthesis of driving rules (VehicleSec 2023) and on explaining multimodal sensor errors (IEEE DSAA 2021), and she has engaged directly with policymakers through a 2023 UC congressional briefing on artificial intelligence, alongside faculty from UC Berkeley, UCSB, and UCSD.
Gilpin’s commitment to equitable access to AI research is embedded in both her lab and her institution. UC Santa Cruz is a Hispanic-Serving Institution, and the AIEA Lab has mentored over 100 students, from undergraduates through postdoctoral researchers, with sustained investment in research experiences for students from underrepresented groups. She co-leads a $345,000 Learning Lab Grand Challenge grant (Building Data Science Communities for Improving Student Success, 2023–2026) and serves as a Women in ML (WiML) PhD mentor. She also leads a UC Interdisciplinary Innovation Program grant ($40,000, 2025–2026) studying how large language models can support computational thinking and skill development across disciplines, reflecting her conviction that the benefits of AI research infrastructure should extend well beyond computer science departments. Her open publication practice, including the widely cited survey “Explaining Explanations: An Overview of Interpretability of Machine Learning” (IEEE DSAA 2018, 4,800+ citations), is explicitly designed to make XAI methods legible and usable by UC campuses and practitioners who lack access to frontier compute resources or large interpretability teams.
Gilpin serves on the Governance Advisory Committee of the Center for Advancing Safety of Machine Intelligence (CASMI) at Northwestern University, on the EU Bias Advisory Board, and as General Chair of the NeuroSymbolic AI Conference (NeSy 2025). She is a 2024 AFOSR Young Investigator, a 2022 AAAI New Faculty Highlight, and a 2020 Rising Star in EECS. Her research on explainable, accountable, and robustly tested AI systems positions her as a key voice in California’s effort to build AI infrastructure that is safe, transparent, and equitably accessible.