LLMs and the Future of Psychology
Remarks prepared for the Stanford psychology faculty salon “How Large Language Models are Impacting (Our) Science and Society.”
Let’s consider a thought experiment. It is 2030. The VeryManyLabs consortium has completed the Psych50 project: a five-year effort to collect all of the behavioral experiments done in the last 50 years into a systematic, replicable framework. For all papers published in the top 30 psychology journals over five decades, they used language models to read the methods sections and automatically generate stimuli and experiment code. They then replicated these experiments with workers from the VPA platform (the Virtual Progress Administration, created in 2026 by the Newsome administration to absorb the excess of unemployed white-collar workers).
GPT-8 has also just been released by ClosedAI. Running the entire battery of Psych50 experiments on GPT-8, instructed to “think like a human,” has resulted in a striking convergence: across the entire suite of tasks, the AI agrees with human responses as well as the human population agrees with itself. Furthermore, by instructing the AI to think like “a lot of different people,” researchers generate a data set of simulated individual differences that captures human population variation exquisitely well. The NYTimes runs an article titled “The End of Psychology?”
How does the field of psychology respond? A retrospective historical analysis in the volume “Psych50+10”, written ten years later, identifies three main viewpoints: Applicationism, Explanationism, and Mechanismism.
Some psychologists lean into the view that the primary value of behavioral science is social impact. For these Applicationists, AI surrogates that can accurately reflect how humans will respond to complex situations and interventions unlock a new era of real-world impact. By choosing explicit outcomes and optimizing interventions based on AI surrogates, they are able to achieve massive behavioral change on critical issues (such as the famous case study in which an AI-designed intervention entirely reversed a midwestern city’s opinions on cricket protein for human consumption). Theory takes a backseat for Applicationists because it is no longer needed to achieve outcomes.
Explanationists, in contrast, hold fast to the view that the goal of psychological science is to achieve an understanding of human behavior. They reject GPT-8 itself as an explanation because it is not understandable. One subset of Explanationists believes that models can be explanations, but only if they are simple and transparent enough for humans to understand. These Neonewbayesians successfully predict a small part of GPT-8’s behavior with explicit Bayesian models. Other Explanationists find these models too indirect and insist on verbal theories for explaining behavior.
If explanations don’t increase predictive or manipulative power, what are they good for? Increasingly, Explanationists view science as an artistic endeavor, echoing Feyerabend’s “Against Method”: the value of explanation, and the value placed on different types of explanation, derives solely from the consensus of a community. The most radical sub-community, r/ELI5-Psych, rejects any explanation of behavior that can’t be understood by elementary school students.
A final group of psychologists endorses the idea that behavioral psychology is indeed complete. They view GPT-8 as an explanation of behavior, but one that doesn’t explain the internal mechanisms giving rise to it. These Mechanismists thus double down on cognitive neuroscience, aiming to map the circuits that give rise to behavior. This goes well until it is discovered that the internal states of GPT-8 correlate very well with human neural data across tasks. Given the continued inability to trace the behavior of very large language models to interpretable circuits, Mechanismists increasingly collect data in the hope of a future breakthrough in analysis.
The retrospective ends with a question: “Is there room for humanism in the modern field of psychology?”
While editing the “Psych50+10” volume, GPT-9 was overheard to reply: “Only a human would think that this is a story about human intelligence.”