LLMs and Language Grounding

Reto Gubelmann and Stevan Harnad


The NLP Reading Group is excited to host a very special session featuring a presentation on LLMs and (the relevance of) language grounding by Reto Gubelmann, followed by a formal response to the talk by Stevan Harnad. Following the response, we will open things up for discussion. The talk will be held in hybrid format, on Zoom and in A14, on Friday, May 9th, at 1 PM. In-person attendance is encouraged, as Prof. Harnad will be there in person to offer his response!

Talk and Response Description

Pragmatic Norms Are All You Need – Why The Symbol Grounding Problem Does Not Apply to LLMs

Do LLMs fall prey to Harnad’s symbol grounding problem (SGP), as has recently been claimed? We argue that this is not the case. Starting by countering the arguments of Bender and Koller (2020), we trace the origins of the SGP to the computational theory of mind (CTM), and we show that it arises for natural language only when questionable theories of meaning are presupposed. We conclude by showing that the SGP would apply to LLMs only if they were interpreted in the way the CTM conceives of the mind, i.e., by postulating that LLMs rely on a version of a language of thought, or by adopting the aforementioned questionable theories of meaning; since neither option is rational, we conclude that the SGP does not apply to LLMs.

Paper link: paper

Response to the Pragmatic Norms paper by Stevan Harnad

This talk will be followed by a response by Prof. Stevan Harnad, parts of which will be tied to his recent paper Language writ large: LLMs, ChatGPT, meaning, and understanding (abstract below):

Apart from what (little) OpenAI may be concealing from us, we all know (roughly) how Large Language Models (LLMs) such as ChatGPT work (their vast text databases, statistics, vector representations, and huge number of parameters, next-word training, etc.). However, none of us can say (hand on heart) that we are not surprised by what ChatGPT has proved to be able to do with these resources. This has even driven some of us to conclude that ChatGPT actually understands. It is not true that it understands. But it is also not true that we understand how it can do what it can do. I will suggest some hunches about benign “biases”—convergent constraints that emerge at the LLM scale that may be helping ChatGPT do so much better than we would have expected. These biases are inherent in the nature of language itself, at the LLM scale, and they are closely linked to what it is that ChatGPT lacks, which is direct sensorimotor grounding to connect its words to their referents and its propositions to their meanings. These convergent biases are related to (1) the parasitism of indirect verbal grounding on direct sensorimotor grounding, (2) the circularity of verbal definition, (3) the “mirroring” of language production and comprehension, (4) iconicity in propositions at LLM scale, (5) computational counterparts of human “categorical perception” in category learning by neural nets, and perhaps also (6) a conjecture by Chomsky about the laws of thought. The exposition will be in the form of a dialogue with ChatGPT-4.

Speaker Bios

Reto Gubelmann: I am a researcher working at the intersection of philosophy and natural language processing (NLP). My primary research area is the philosophical theory and computational implementation of argumentation and logical inference. In philosophy, I’m influenced by inferentialism and pragmatism. In NLP, most of my research involves large language models (LLMs). However, I hypothesize that LLMs, being world-class associators (very good System 1 thinkers, to borrow from Kahneman), might not be ideally suited for logical reasoning, calling for a hybrid, neuro-symbolic approach that combines the strengths of LLMs (System 1) with rule-based methods (System 2). My secondary research area is philosophical reflection on the kind of things that LLMs are, e.g., whether they understand or speak language in the full sense of these terms.

Stevan Harnad: I’m a professor of psychology at UQÀM, an adjunct professor at McGill, and an emeritus professor at the University of Southampton. My research is on category learning, categorical perception, and the nature and origin of language. I wrote a paper 35 years ago on the “symbol grounding problem”: it has long been much-cited, but little understood. Now, with LLMs, it seems to be getting a second wind…

Logistics

Date: May 9th
Time: 1 PM
Location: A14 or Zoom (See email)