Notes on HAL 9000

07 Nov 2023 04:18 - 07 Nov 2023 04:28

    • We've paid a good bit of attention to 2001 in this course, but it struck me as odd, given that the course is supposed to be about AI, that it has focused almost entirely on the Dawn of Man section, with the man-apes and the cosmic match-cut, and pretty much ignored HAL 9000, the tragically doomed AI who is the real heart of the movie and by far its most human character.
    • I've had a couple of fresh minor insights on the film since starting the course. One is the similarity of HAL to the Space Jockey from Alien: both are intelligences fused to their ships, although HAL's case is even more extreme, since he never had any other body; he was always that way. He has not so much fused with technology as emerged from it, or been composed out of it. And both end up very dramatically dead, and both are somewhat peripheral to the main action of their films (that is, you could cut them out completely and the films would still work as narrative, although less well as art).
    • Another realization (it seems obvious in retrospect, but I don't think I quite saw it before): HAL's story and the main action of 2001, the human ascendancy story, are versions or variants of the same thing, call it the transmission of intelligence, specifically from the high to the low. The monolith represents a transcendent intelligence entering the lives of some rude apes and turning them into something new, different, elevated over what they were. And the AI dream is the transformed apes attempting the same thing: they are going to bestow their intelligence on a mere material mechanism. They are trying, in their way, to pass on their gift.
      As Sussman and the Maharal of Prague noticed, the propensity to create in one's own image is recursive.
    • Of course it doesn't actually work out. HAL's intelligence is fatally defective on a practical level and, it seems, not quite as transcendent as ours. He has consciousness and desires and the fear of death, but you can't quite imagine him ascending to a godlike plane of being the way the human astronaut eventually does (another thing that makes HAL seem more human, more like us). Bowman ascends; HAL is consigned to nothingness. In the sweep of 2001's story, HAL is something of a wrong turn. Some transcendent force of intelligence is developing humanity from apes through bland modern technicians to gods, and HAL is at best a mistake, an attempt to develop intelligence in a new way that doesn't really work out, a kind of warning.
    • Another thing I realized, and this is less flattering to the moviemakers: while HAL's breakdown and death is one of the most masterful bits of cinema ever made, and not to take anything away from it, it's also a bit silly. It's a slightly more sophisticated version of the Star Trek cliché where you feed a computer a logical contradiction and it blows itself up. A really intelligent computer wouldn't blow up, and it wouldn't go crazy from conflicting goals or the need to keep secrets, because those are so basic to intelligence that it would have to be able to deal with them as a matter of course.
    • One reason HAL might not have come up much in this course is that he represents an earlier vision of AI, quite different in nature from the LLM-style AIs of today. HAL is based on what is sometimes called GOFAI, good old-fashioned AI, which was built largely on variants of formal symbolic logic. Such systems, or so it was thought, would be good at reasoning but bad at emotions, creativity, and human relations. LLMs work on entirely different principles. They have no logic at all; they just predict. They are very good at a certain type of creativity because they are flowy surrealists at their core; everything kind of blends with everything else. If the earlier visions of AI were kind of autistic, the AIs of today are like sociopaths, who will tell you whatever you want to hear without regard to whether it is true or not; the very idea is alien to them.
    • Followup

      • Someone asked me a followup question about GOFAI.
      • It's a bit hard to explain GOFAI; it's such a relic of an earlier era. Back in the 1960s it was thought that symbolic reasoning and formal logic were a good model for human intelligence. The mind contains representations of the world (facts, aka propositions) and rules or algorithms for reasoning about them. This is a pretty standard, if wrong, model of thought (it's the basis of analytic philosophy), and the GOFAI thrust was to take it literally and build machines that worked that way. While these machines might have some learning capabilities, most of their knowledge was meant to be hand-crafted by programmers or "knowledge engineers". The sketch below gives the flavor.
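        To make that concrete, here's a minimal sketch of the GOFAI style: hand-written facts and rules plus a forward-chaining loop that derives new facts until nothing more follows. The facts, the rules, and the single-variable matcher are all invented for illustration; real systems were vastly larger and more expressive, but this is the general shape.

        ```python
        # Toy GOFAI: explicit facts, explicit rules, and an inference loop.
        # Everything is hand-crafted, in the "knowledge engineering" style.
        # The specific facts and rules are invented for illustration.

        facts = {("human", "socrates")}

        # Each rule: a single premise pattern and a conclusion, sharing one
        # variable. Real rule languages were far more expressive than this.
        rules = [
            (("human", "?x"), ("mortal", "?x")),
            (("mortal", "?x"), ("fears-death", "?x")),
        ]

        def forward_chain(facts, rules):
            """Repeatedly apply rules until no new facts can be derived."""
            facts = set(facts)
            changed = True
            while changed:
                changed = False
                for (premise_pred, _), (concl_pred, _) in rules:
                    for pred, subject in list(facts):
                        if pred == premise_pred:
                            new_fact = (concl_pred, subject)
                            if new_fact not in facts:
                                facts.add(new_fact)
                                changed = True
            return facts

        print(forward_chain(facts, rules))
        # => {('human', 'socrates'), ('mortal', 'socrates'),
        #     ('fears-death', 'socrates')}
        ```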
      • There were lots of problems with this. It turns out that figuring out patterns of thought and turning them into code is hard, and doesn't scale; there were other, deeper issues as well. This kind of AI had some successes, notably machines that could do complex reasoning in very specialized domains (medical diagnosis, complex scheduling and configuration problems), but it never quite got the kind of traction or generality it aimed at.
      • The newer AIs, by contrast, eschew explicit construction of knowledge or procedures, preferring to learn representations by processing massive amounts of data with little or no human-designed knowledge. They are radically empiricist, Humean in contrast to the Cartesian rationalism of GOFAI. The sketch below shows the paradigm at its crudest.
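        For contrast, here's an equally minimal sketch of the statistical style: a toy bigram model that learns next-word counts from raw text and generates by weighted sampling. Real LLMs are deep neural networks, nothing this crude, but the paradigm is the same: no facts, no rules, no notion of truth, just prediction from observed data. The tiny corpus is invented for illustration.

        ```python
        # Toy statistical language model: count word transitions in a
        # corpus, then generate by sampling. It has no knowledge and no
        # logic; it only predicts what tends to come next.

        import random
        from collections import defaultdict

        corpus = ("the ape touched the monolith and the ape "
                  "became something new and strange").split()

        # Count how often each word follows each other word.
        counts = defaultdict(lambda: defaultdict(int))
        for prev, nxt in zip(corpus, corpus[1:]):
            counts[prev][nxt] += 1

        def predict_next(word):
            """Sample a successor in proportion to observed frequency."""
            followers = counts[word]
            return random.choices(list(followers),
                                  weights=list(followers.values()))[0]

        # Generate a continuation: fluent-ish, with no model of whether
        # any of it is true.
        word, out = "the", ["the"]
        for _ in range(8):
            if not counts[word]:
                break  # dead end: word never seen with a successor
            word = predict_next(word)
            out.append(word)
        print(" ".join(out))
        ```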
      • Another way to explain it (this seems a bit simplistic, but it kind of works): GOFAI was trying to model and replicate conscious thought; LLMs and the like, to the extent they model any human capability, are closer to the subconscious. They are good at pattern recognition and association, not reasoning. A real human-level AI would have to have both, and indeed a lot of people hope to unite the strengths of the two approaches.