Searle's challenge to AI was something like: the mind couldn't be a symbol system because it requires the "causal powers" of a nervous system. This was dumb, because computational systems, when embodied, can also have causal powers, but he was also on to something.
Some stuff on color vision that had me gritting my teeth and wanting to go science-nerd on them. Not all that wrong, but still. Led to a discussion of qualities inherent in things (as opposed to the modern materialist view, where they are mostly held to be products of brain processes). Brought up what sounds like a really dumb theory from Seeing Things as They Are: A Theory of Perception, a book by John Searle.
He's really the anti-Minsky, isn't he? Just wrong-headed about everything, in a useless way (unlike Dreyfus, whose critique of AI is actually good for something). But I should stretch myself! What if Searle's POV is not valueless, just because it goes against the grain of my naturalist training?
I believe the worst [philosophical] mistake of all is the cluster of views known as Dualism, Materialism, Monism, Functionalism, Behaviorism, Idealism, the Identity Theory, etc. The idea these theories all have in common is that there is some special problem about the relation of the mind to the body, consciousness to the brain, and in their fixation on the illusion that there is a problem, philosophers have fastened onto different solutions to the problem.
That actually sounds perfectly sensible? I am happy to consider those all as different aspects of the same bad idea.
This mistake goes back to the Ancients, but it has received its most famous exposition by Descartes in the seventeenth century, and has continued right through to the present mistakes such as the contemporary Computational Theory of Mind, Functionalism, Property Dualism, Behaviorism, etc.
OK, well, here is the difference: computationalists think they have overcome dualism, while Searle sees computationalism as just another manifestation of dualism. He has a bit of a point.
A mistake of nearly as great a magnitude overwhelmed our tradition in the seventeenth century and after, and it is the mistake of supposing that we never directly perceive objects and states of affairs in the world, but directly perceive only our subjective experiences.
This, OTOH, seems utterly confused, not even wrong, senseless. What could it even mean?
He advocates something he calls Direct Realism – and I assume he doesn't deny the reality of, say, the neurobiology of vision, so he must mean that all that machinery doesn't matter somehow? I don't get it (to be fair, I have only very lightly skimmed the book, which seems pretty tedious).
There are hints of antirepresentationalism, which I will give a bit more credit to (although antirepresentationalism seems just as confused as representationalism). But I suspect that Searle's picture of what it is to be an intelligent agent is just completely different from mine.
Here's a take: To the extent the self is solid and real, direct perception is a thing. A normal self perceives a tree "directly" without being aware of the underlying machinery. A cognitive scientist can call attention to it, but to what end? Describing the signals and sensors and filters and computations might be very interesting, but doesn't address actual experience, which is what he and most people care about.
Something about AI inspires a huge variety of terrible arguments against it. Penrose's had something to do with Gödel's incompleteness theorems. Another famous one, Searle's Chinese Room, had flaws that anybody with any programming experience at all could see through in an instant.
AI has a long history of ignoring its philosophical critics, thinking that technical skill trumps their tired and ancient abstractions. This is sometimes valid, but two of the classic philosophical critics, Hubert Dreyfus and John Searle, actually were worth paying attention to, it turns out. The same might be true of Hart, although I can't quite see it.
I always thought that Searle's Chinese Room Argument was remarkably stupid. It was a magic trick of sorts –
The scenario: you have a vast complex system capable of understanding (a vast network of encoded rules, using Chinese characters) and a simple agent (a non-Chinese-speaking person) that animates them into a computational process. The simple agent does not understand Chinese; therefore, Searle concludes, no understanding is going on.
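The division of labor in that scenario can be sketched as a toy program (my own illustration, not Searle's formulation): all the apparent competence lives in a big lookup structure, while the interpreter that animates it is trivially dumb.

```python
# Toy Chinese Room: the "room" is a big rule structure; the "person" is a
# trivial loop that mechanically applies rules with zero grasp of what the
# symbols mean. (In the thought experiment the rulebook would be
# astronomically large; here it's two entries.)

RULEBOOK = {
    "你好": "你好！",             # "hello" -> "hello!"
    "你会说中文吗？": "会。",      # "do you speak Chinese?" -> "yes."
}

def room_operator(message: str) -> str:
    """The 'person in the room': pure symbol lookup, no understanding."""
    return RULEBOOK.get(message, "请再说一遍。")  # default: "please say that again"

print(room_operator("你好"))  # the system's reply, not the operator's
```

Whatever understanding there is, on the Systems Reply, belongs to the rulebook-plus-operator whole, not to the loop that shuffles symbols.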
I trust the problems with this are obvious (if not, Dennett and Hofstadter somewhere have a detailed takedown, or a set of them; I echo their "System Reply"). When it comes to computational models of mind, I was always a Minsky pluralist, which means I never believed that thinking was done by a single simple agent – the mind is a tangle of interrelated machines and hence agencies.
And yet – look at the insane ferment around LLMs. These look suspiciously like Chinese Rooms! A big, mostly static structure being constantly evaluated and updated by a really simple engine (in ML terms, basically simple linear algebra operations).
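That picture (a huge frozen parameter blob animated by a tiny evaluation loop) can be made concrete in a few lines. This is a toy sketch with made-up sizes and random weights, nothing like a real transformer, but the shape of the thing is right: the "room" is the weights, the "person" is a couple of matrix multiplies.

```python
import math
import random

random.seed(0)
VOCAB, DIM = 50, 8  # toy sizes; a real LLM has billions of parameters

def rand_matrix(rows, cols):
    return [[random.gauss(0.0, 1.0) for _ in range(cols)] for _ in range(rows)]

# The big static "structure": parameters that would come from training.
embed = rand_matrix(VOCAB, DIM)    # token id -> vector
W = rand_matrix(DIM, DIM)          # one "layer" of weights
unembed = rand_matrix(DIM, VOCAB)  # vector -> scores over the vocabulary

def vecmat(v, M):
    """Row vector v (len r) times matrix M (r x c) -> vector of len c."""
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

def next_token(token_id):
    """The 'simple engine': two matrix multiplies and a nonlinearity."""
    h = [math.tanh(x) for x in vecmat(embed[token_id], W)]  # hidden state
    logits = vecmat(h, unembed)                             # vocabulary scores
    return max(range(VOCAB), key=lambda j: logits[j])       # greedy pick
```

The engine here knows nothing about language; whatever competence a trained model has is encoded in the frozen matrices it mechanically grinds through.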