Searle's challenge to AI was something like this: the mind couldn't be a symbol system, because mind requires the "causal powers" of a nervous system. This was dumb, because computational systems, when embodied, also have causal powers, but he was on to something all the same.
AI has a long history of ignoring its philosophical critics, thinking that technical skill trumps their tired and ancient abstractions. This is sometimes valid, but two of the classic philosophical critics, Hubert Dreyfus and John Searle, turn out to have been worth paying attention to. The same might be true of Hart, although I can't quite see it.
Something about AI inspires a huge variety of terrible arguments against it. Penrose's had something to do with Gödel. Another famous one, Searle's Chinese Room, had flaws that anybody with any programming experience at all could spot in an instant.
Some stuff on color vision that had me gritting my teeth and wanting to go science-nerd on them. Not all that wrong, but still. Led to a discussion of qualities inherent in things (as opposed to the modern materialist view, where they are mostly held to be products of brain processes). Brought up what sounds like a really dumb Searle theory, Seeing Things as They Are: A Theory of Perception.
A book by John Searle. He's really the anti-Minsky, isn't he? Just wrong-headed about everything, in a useless way (unlike Dreyfus, whose critique of AI is actually good for something). But I should stretch myself! What if Searle's POV is not valueless just because it goes against the grain of my naturalist training?
I believe the worst [philosophical] mistake of all is the cluster of views known as Dualism, Materialism, Monism, Functionalism, Behaviorism, Idealism, the Identity Theory, etc. The idea these theories all have in common is that there is some special problem about the relation of the mind to the body, consciousness to the brain, and in their fixation on the illusion that there is a problem, philosophers have fastened onto different solutions to the problem
That actually sounds perfectly sensible? I am happy to consider those all as different aspects of the same bad idea.
This mistake goes back to the Ancients, but it has received its most famous exposition by Descartes in the seventeenth century, and has continued right through to the present mistakes such as the contemporary Computational Theory of Mind, Functionalism, Property Dualism, Behaviorism, etc.
OK, well, here is the difference: computationalists think they have overcome dualism; Searle sees computationalism as just another manifestation of it. He has a bit of a point.
A mistake of nearly as great a magnitude overwhelmed our tradition in the seventeenth century and after, and it is the mistake of supposing that we never directly perceive objects and states of affairs in the world, but directly perceive only our subjective experiences.
This, OTOH, seems utterly confused, not even wrong, senseless. What could it even mean?
He advocates something he calls Direct Realism – and I assume he doesn't deny the reality of, say, the neurobiology of vision, so he must mean that all that machinery doesn't matter somehow? I don't get it (to be fair, I have only very lightly skimmed the book, which seems pretty tedious).
There are hints of antirepresentationalism, which I will give a bit more credit to (although antirepresentationalism seems just as confused as representationalism). But I suspect that Searle's picture of what it is to be an intelligent agent is just completely different from mine.
Here's a take: To the extent the self is solid and real, direct perception is a thing. A normal self perceives a tree "directly" without being aware of the underlying machinery. A cognitive scientist can call attention to it, but to what end? Describing the signals and sensors and filters and computations might be very interesting, but doesn't address actual experience, which is what he and most people care about.
I always thought that Searle's Chinese Room Argument was remarkably stupid. It was a magic trick of sorts. The scenario: you have a vast complex system capable of understanding (a vast network of encoded rules, written in Chinese characters) and a simple agent (a non-Chinese-speaking person) who animates them into a computational process. The simple engine does not understand Chinese; therefore, the argument goes, no understanding is going on.
I trust the problems with this are obvious (if not, Dennett and Hofstadter have a detailed takedown somewhere, or a set of them; I echo their Systems Reply). When it comes to computational models of mind, I was always a Minskyian pluralist, which means I never believed that thinking was done by a single simple agent – the mind is a tangle of interrelated machines and agencies, and intelligence is the product of their complex interrelated activity.
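To make the shape of the trick concrete, here's a toy sketch (entirely my own illustration, with made-up rules): all the apparent competence lives in a big inert table, and the "agent" is a trivial lookup loop. Searle asks you to inspect the loop for understanding and, finding none, conclude there is none anywhere; the Systems Reply says look at the table and the loop together.

```python
# Toy sketch of the Chinese Room's structure (my framing, not Searle's text):
# a large, inscrutable rule table plus a trivially simple engine that applies it.
# The rules here are made up; the point is only where the complexity lives.

RULES = {
    # In the thought experiment this table would be astronomically large
    # and encode genuine conversational competence in Chinese.
    "你好": "你好！有什么可以帮你？",
    "你会说中文吗": "会一点。",
}

def simple_engine(symbols: str) -> str:
    """The 'person in the room': mechanical lookup, no comprehension required."""
    return RULES.get(symbols, "请再说一遍。")

# The engine understands nothing; whatever understanding there is (if any)
# would be a property of RULES and the engine operating together.
print(simple_engine("你好"))
```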
And yet – look at the insane ferment around LLMs. These look suspiciously like Chinese Rooms! A big, mostly static structure of uninterpretable symbols being constantly evaluated and updated by a single, simple, purely mechanical engine. The simple engine obviously doesn't understand anything; the system as a whole might give the appearance of understanding, but doesn't really have it. Chinese Rooms are now a trillion-dollar industry, and everyone is falling over themselves to make them or use them.
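Squint and the decode loop has the same shape. This is just an illustrative stub, not any particular model or library's API: the weights are a frozen blob, and the engine that drives them is a few lines of mechanical evaluate-and-append.

```python
# Loose structural parallel: an LLM at inference time is a big frozen parameter
# blob (the "rule book") driven by a small, purely mechanical decode loop
# (the "person in the room"). Names here are illustrative stubs.

import random

FROZEN_WEIGHTS = {"parameters": "billions of uninterpretable numbers"}  # static structure

def next_token_distribution(weights, context):
    """Stub for one forward pass: in a real model, a fixed numerical function of context."""
    return {" blah": 0.9, ".": 0.1}

def decode(prompt: str, max_tokens: int = 5) -> str:
    """The simple engine: a mechanical evaluate-sample-append loop over frozen weights."""
    context = prompt
    for _ in range(max_tokens):
        dist = next_token_distribution(FROZEN_WEIGHTS, context)
        tokens, probs = zip(*dist.items())
        context += random.choices(tokens, weights=probs)[0]
    return context

print(decode("The room emits:"))
```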
Searle died recently, and the AI thinkers who did battle with him (Minsky and Papert) have also passed on, but I really wish I could hear them continue their argument in this new age of very capable but ontologically dubious machine intelligence.