The Chinese Room Argument

03 Jun 2023 08:39 - 17 Jun 2023 08:29
    • I always thought that Searle's Chinese Room Argument was remarkably stupid. It was a magic trick of sorts:
    • The scenario: you have a vast, complex system capable of understanding (an enormous network of encoded rules for manipulating Chinese characters) and a simple agent (a non-Chinese-speaking person) who animates those rules into a computational process. The simple agent does not understand Chinese; therefore, Searle concludes, no understanding is going on anywhere.
    • I trust the problems with this are obvious (if not, Dennett and Hofstadter somewhere have a detailed takedown, or a set of them; I echo their "Systems Reply"). When it comes to computational models of mind, I was always a Minsky pluralist, which means I never believed that thinking is done by a single simple agent; the mind is a tangle of interrelated machines, and hence agencies.
    • And yet – look at the insane ferment around LLMs. These look suspiciously like Chinese Rooms! A big, mostly static structure (the learned weights) being evaluated over and over by a really simple engine (in ML terms, little more than basic linear algebra operations).
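    • As a back-of-the-envelope sketch of that point (toy dimensions, random weights standing in for a trained model, all names made up here), the "simple engine" on each step is just matrix multiplies and a softmax applied to a frozen pile of parameters – the person in the room mechanically following the rulebook:

```python
import numpy as np

rng = np.random.default_rng(0)

# The "room": a big, static pile of parameters (random here, standing in for a trained model).
d_model, d_ff, vocab = 64, 256, 1000
W_embed = rng.normal(size=(vocab, d_model))
W_q = rng.normal(size=(d_model, d_model))
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))
W_ff1 = rng.normal(size=(d_model, d_ff))
W_ff2 = rng.normal(size=(d_ff, d_model))
W_out = rng.normal(size=(d_model, vocab))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def forward(token_ids):
    """The "simple engine": nothing but matrix multiplies, a max, and a softmax,
    mechanically applied to the frozen weights above."""
    x = W_embed[token_ids]                       # look up the symbols
    q, k, v = x @ W_q, x @ W_k, x @ W_v          # attention: more matrix multiplies
    attn = softmax(q @ k.T / np.sqrt(d_model)) @ v
    h = np.maximum(attn @ W_ff1, 0) @ W_ff2      # feed-forward layer
    return softmax((x + h) @ W_out)              # distribution over next tokens

probs = forward(np.array([3, 14, 159]))          # an arbitrary toy "prompt"
print(probs.shape, int(probs[-1].argmax()))      # (3, 1000) and some predicted token id
```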