
The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con

Out of the Software Crisis – Baldur Bjarnason: “For the past year or so I’ve been spending most of my time researching the use of language and diffusion models in software businesses. One of the issues during this research—one that has perplexed me—has been that many people are convinced that language models, or specifically chat-based language models, are intelligent. But there isn’t any mechanism inherent in large language models (LLMs) that would seem to enable this and, if real, it would be completely unexplained. LLMs are not brains and do not meaningfully share any of the mechanisms that animals or people use to reason or think. LLMs are a mathematical model of language tokens. You give an LLM text, and it will give you a mathematically plausible response to that text. There is no reason to believe that it thinks or reasons—indeed, every AI researcher and vendor to date has repeatedly emphasised that these models don’t think. There are two possible explanations for this effect:

  1. The tech industry has accidentally invented the initial stages of a completely new kind of mind, based on completely unknown principles, using completely unknown processes that have no parallel in the biological world.
  2. The intelligence illusion is in the mind of the user and not in the LLM itself.

Many AI critics, including myself, are firmly in the second camp. It’s why I titled my book on the risks of generative “AI”, The Intelligence Illusion. For the past couple of months, I’ve been working on an idea that I think explains the mechanism of this intelligence illusion. I now believe that there is even less intelligence and reasoning in these LLMs than I thought before. Many of the proposed use cases now look like borderline fraudulent pseudoscience to me…”
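As a rough illustration of the quoted claim that an LLM simply returns a “mathematically plausible response,” the sketch below samples a next token from a probability distribution over a tiny vocabulary. The prompt, vocabulary, and scores are invented for this example; a real model computes such scores with a large neural network over tens of thousands of tokens, but the generation step is still selection from a distribution, not reasoning.

```python
import math
import random

def softmax(scores):
    """Turn raw scores into a probability distribution over tokens."""
    m = max(scores.values())
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores a model might assign to the next token after the
# prompt "The cat sat on the". The numbers are made up for illustration.
next_token_scores = {"mat": 4.2, "sofa": 3.1, "moon": 0.5, "idea": -1.0}
probs = softmax(next_token_scores)

# "Generate" a continuation by sampling from that distribution: the output
# is statistically plausible given the preceding text, nothing more.
continuation = random.choices(list(probs), weights=list(probs.values()), k=1)[0]

print(probs)
print("continuation:", continuation)
```

Nothing in this loop resembles thought: the output is whichever continuation happens to be probable given what came before, which is the point Bjarnason is making about where the impression of intelligence actually comes from.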
