Comparing LLM and Human Modes of Representation
Shalom Lappin
Queen Mary University of London, University of Gothenburg, and King’s College London
The NLP Reading Group is excited to host Prof. Shalom Lappin who will present a talk on Comparing LLM and Human Modes of Representation.
Logistics
Date: Friday, May 29
Time: 2 PM
Location: on Google Meet, to be screencast at Mila in A14
Abstract
Much work on the cognitive foundations of AI has focussed on comparing the ways in which Large Language Models (LLMs) and humans process and represent information. One aspect of this comparison involves determining the extent to which LLMs can achieve or surpass human performance on a variety of cognitively interesting tasks. A second explores points of convergence and divergence between LLM and human systems for processing information. In this talk I consider some recent research that has addressed both issues in two informational domains. The first is the representation of linguistic knowledge. The second is real-world reasoning and planning. While LLMs frequently achieve impressive levels of performance and fluency on linguistic applications, they handle linguistic content in ways that are distinct from human processing. They are also, for the most part, less efficient than humans in learning and generalisation for reasoning tasks.
Speaker Bio
Prof. Shalom Lappin’s research is in computational linguistics, with a focus on the application of probabilistic models to issues in syntax, semantics, and grammar induction. He is currently the chief scientific advisor of CLASP.