Pranav Anand
Professor of Linguistics, UC Santa Cruz
UC CalCompute Coalition · UC Santa Cruz
Linguist and NLP researcher
Pranav Anand is Professor of Linguistics and Faculty Director of the Humanities Institute at the University of California, Santa Cruz, where he has taught and conducted research since 2006. Trained in formal semantics at MIT (Ph.D., 2006) and mathematics at Harvard (B.A., 2001), Anand works at the intersection of linguistic theory and computational language research — asking how context, perspective, and social purpose shape the meaning of language, and what it would take for AI systems to understand language in those richer terms. At a moment when large language models are being trained on the full breadth of human discourse and deployed to generate, moderate, and summarize content at scale, his scholarship offers some of the most rigorous available grounding for understanding where those systems succeed and where they are liable to fail.
Anand’s core theoretical research concerns how perspective is grammatically encoded in language — why expressions such as find and must, along with evaluative predicates, are true or false only relative to a point of view, and what it means for a sentence to report someone else’s experience rather than assert a speaker’s own judgment. His work in this vein has appeared in Linguistics and Philosophy, Semantics and Pragmatics, and Language, often in collaboration with Natasha Korotkova, Valentine Hacquard, and Dan Hardt. This research bears directly on one of the most consequential failure modes in deployed AI: the conflation of subjective opinion with objective fact. Formal accounts of how perspective is grammatically represented provide principled tools for evaluating — and improving — how AI systems handle contested, evaluative, and ideologically charged content.
Alongside his theoretical work, Anand has led a sustained program of computational research on persuasion, stance, and argumentation in online discourse. He has co-developed and analyzed several influential corpora, including the Internet Argument Corpus 2.0, a large-scale SQL-structured database of dialogic social media, and the Santa Cruz Sluicing Data Set (published in Language, 2021). His published work in this area spans stance classification in political debate, the detection of hyperpartisan versus non-hyperpartisan speech in online commentary, argument strength and audience effects in persuasion, and the use of summarization to extract argument facets from social media. This research was conducted with collaborators including Marilyn Walker, Jean Fox Tree, and Steve Whittaker, and supported in part by NSF and IARPA funding. These datasets and methods constitute open research infrastructure of exactly the kind that a publicly accessible AI compute initiative is designed to support and democratize.
As Faculty Director of the Humanities Institute, Anand has worked institutionally to ensure that humanistic expertise shapes how AI is developed and understood. His Spencer Foundation–funded project Writing with ChatGPT: The Learning Promises and Perils of Co-Writing with Generative Models (2023–2025, co-PI Hannah Hausman, ~$250,000) investigates the educational equity implications of generative AI tools in academic settings — who benefits, under what conditions, and at what cost to learning. His NEH-supported project Humanizing Technology (2022–2023, co-PI Jasmine Alinder, $150,000) developed cross-disciplinary frameworks for keeping AI development accountable to human values, histories, and cultures. An earlier IARPA-funded grant on Language Evidence for Social Goals: A Linguistic Approach to Persuasion Moves in Discourse ($1,053,841) further demonstrates the policy-relevant reach of his research program.
Pranav Anand holds a Ph.D. in Linguistics from MIT and a B.A. in Mathematics from Harvard University. He is based in Santa Cruz, California, and his work connecting formal linguistic theory to computational modeling, online discourse, and the social dimensions of AI makes him a distinctive voice at the intersection of language science and the public interest case for responsible AI infrastructure.