CS Colloquium | September 16, 2024

Generative AI and Mental Imagery


Ron Chrisley
University of Sussex, Director of the Centre for Cognitive Science

Stevenson 1400
12:00 PM

To what extent can artificial cognitive systems model, or even truly be said to possess, mental imagery?

One study (Cabrita 2024), drawing on (Yampolskiy 2017), proposes susceptibility to optical illusions as a sufficient condition for experiencing mental imagery, and evaluates GPT-4o accordingly.

I argue that this criterion is too weak and instead propose that, for an agent to be credited with mental imagery, it must demonstrate the capacity to perform not just image processing (i.e., pattern sensitivity) but imagistic reasoning. Accordingly, I propose two necessary criteria for the ascription of mental imagery: explanatory need, and (human-level) reasoning performance.

To assess the extent to which a given AI system meets these criteria, I use the paradigm from the classic study of imagistic reasoning by Finke, Pinker and Farah (1989). This paradigm asks human subjects to imagine familiar objects and shapes, to transform these shapes “internally” or “mentally”, and then to answer questions about the resulting imagined shapes. Other paradigms (e.g., the Mental Rotation Test of Shepard and Metzler (1971)) are also considered.

Two kinds of AI systems, multimodal models and chained models, are assessed with respect to the imagistic reasoning tasks, and proposals for how their limitations might be overcome are offered. The talk closes by considering a chess problem that Penrose (2017) puts forward as requiring imagistic reasoning that, he claims, AI cannot perform.

Note: This is a joint event with the Forum in Ethics, Law, and Society (PHIL 205) and the CS Colloquium series (CS 390).