The Chinese Room Argument

03 Jun 2023 - 28 Sep 2025
    • I always thought that Searle's Chinese Room Argument was remarkably stupid. It was a magic trick of sorts. The scenario: you have a vast, complex system capable of understanding (an enormous set of encoded rules for manipulating Chinese characters) and a simple agent (a non-Chinese-speaking person) who animates them into a computational process. The simple engine does not understand Chinese; therefore, the argument goes, no understanding is going on.
    • I trust the problems with this are obvious (if not, Dennett and Hofstadter somewhere have a detailed takedown, or a set of them; I echo their Systems Reply). When it comes to computational models of mind, I was always a Minskyian pluralist, which means I did not believe thinking was done by a single simple agent – the mind is a tangle of interrelated machines and agencies, and intelligence is the product of their complex, interrelated activity.
    • And yet – look at the insane ferment around LLMs. These look suspiciously like Chinese Rooms! A big, mostly static structure of uninterpretable symbols being constantly evaluated and updated by a single, simple, purely mechanical engine. The simple engine obviously doesn't understand anything; the system as a whole might give the appearance of understanding, but doesn't really have it. Chinese Rooms are now a trillion-dollar industry, and everyone is falling over themselves to make them or use them.
    • Searle died recently, and the AI thinkers who did battle with him (Minsky and Papert) have also passed on, but I really wish I could hear them continue their argument in this new age of very capable but ontologically dubious machine intelligence.