Mary-Anne Williams
Professor, Business School
University of New South Wales
research.unsw.edu.au/people/professor-mary-anne-williams
Short bio
Mary-Anne Williams is a leading AI scholar and innovator. She is a Professor at the Business School, University of New South Wales (UNSW Sydney), Australia, and founder of the UNSW Business AI Lab and the UTS Magic Lab. Her research spans AI, human–AI interaction, AI agents, social robotics, decision theory, machine reasoning, and the foundations of rational choice, with a long-standing commitment to building transparent, trustworthy, and human-centred AI.

Mary-Anne completed her PhD at the University of Sydney, where she began a lifelong intellectual partnership with the late Pavlos Peppas of the University of Patras. Their collaboration, spanning more than 30 years, was marked by deep scientific insight, enduring friendship, and strong family ties linking Australia and Greece. They completed their doctorates together, received their first major Australian Research Council grant together, celebrated their first rejected paper together, and went on to publish more than 30 papers across logic, belief revision, rational choice, AI theory, and robotics. Their joint work has advanced formal models of reasoning and decision-making under uncertainty, helping shape foundational advances in AI.
Talk Title: “Rationalizing the Observable Risky Choices of Agents”
with the late Pavlos Peppas, University of Patras, Greece
A central challenge in the age of agentic AI is enabling humans to reliably manage, govern, and collaborate with autonomous systems whose behavior must be anticipated despite only limited observability of their choices under risk and uncertainty. Addressing this challenge requires a robust framework for inferring and predicting an AI agent's behavior from its observed choices in risky situations. In this paper, we extend the theory of rational choice under risk to predict whether an agent's risky choices are rational; many mission-critical AI applications require agents to be predictably rational. Building on the von Neumann–Morgenstern Utility Theorem, we distinguish between an agent's underlying internal preferences and those visible to external observers, treating partial preferences as a constraint on the observer rather than as indecision on the agent's part. Restricting attention to binary choices, we derive necessary and sufficient conditions under which an observer can rationalize an agent's preferences from missing and/or purely qualitative information, thereby enabling principled prediction of behavior under uncertainty. We provide separate characterizations for environments in which probabilities are known and for those in which only ordinal comparisons are available. A central component of our analysis is a novel result on product inequalities, which is of independent mathematical interest and underpins our characterization of rationalizability with qualitative preference information. Our results resolve a long-standing open question and substantially extend the theory of rational choice under risk to settings with partially observable agent preferences, providing a rigorous foundation for anticipating, governing, and coordinating with agentic AI systems.
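To give a flavour of rationalizability from observed choices, here is a minimal illustrative sketch. It is not the paper's formal characterization: it covers only the simplest qualitative case, where an observer sees strict binary choices between sure outcomes (degenerate lotteries). In that restricted setting, the observed choices can be rationalized by some utility ordering exactly when the revealed-preference graph (an edge from each chosen outcome to the rejected one) is acyclic. The function name and data shapes are hypothetical.

```python
def rationalizable(choices):
    """Check whether observed strict binary choices between sure outcomes
    admit a rationalizing utility ordering.

    choices: iterable of (chosen, rejected) pairs of outcome labels.
    Returns True iff some strict total order on outcomes is consistent
    with every observed pair, i.e. the directed revealed-preference
    graph chosen -> rejected contains no cycle.
    """
    # Build the revealed-preference graph.
    graph = {}
    for better, worse in choices:
        graph.setdefault(better, set()).add(worse)
        graph.setdefault(worse, set())

    # Detect cycles with a three-colour depth-first search.
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {v: WHITE for v in graph}

    def has_cycle(v):
        colour[v] = GREY            # v is on the current DFS path
        for w in graph[v]:
            if colour[w] == GREY:   # back edge: a preference cycle
                return True
            if colour[w] == WHITE and has_cycle(w):
                return True
        colour[v] = BLACK           # fully explored, no cycle through v
        return False

    return not any(colour[v] == WHITE and has_cycle(v) for v in graph)


# Transitive chain a > b > c is rationalizable; adding c > a creates
# a cycle, so no utility ordering can explain all three choices.
print(rationalizable([("a", "b"), ("b", "c")]))             # True
print(rationalizable([("a", "b"), ("b", "c"), ("c", "a")])) # False
```

The paper's setting is considerably richer, since the agent chooses among genuine lotteries and the observer may have only partial or ordinal information; there, acyclicity alone is not sufficient and the characterization rests on the product-inequality result mentioned above.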
