Jill Nephew

03 Jun 2023 08:39 - 17 Jun 2023 08:29
    • A pointer to podcasts by Jill Nephew on WS, here's what I wrote before listening:
      • Haven't listened yet but I can anticipate; I've felt the same thing, a kind of existential nausea at the ability of these systems to fake a kind of presence. They bind to our need for interaction without providing the authentic relationship that we seek. There is nothing there, and we know it, but it doesn't matter. Our selves may be just as plastic and ephemeral (that is the Buddhist view as I understand it), but we don't like to confront that. There is something very nihilistic about LLMs; it's like industrial-strength postmodernism. You thought meaningfulness was difficult to achieve under late capitalism? You haven't seen anything yet, once we get these semantic mulching machines hooked up to everything.
      • Sent that, now listening. She has an impressive background, knows the tech, but her framing in terms of "magic trick" is faulty, because many of the people building the tricks don't think they are magic.
    • Now actually listening
      • Thought I had written more but maybe not.
      • Really interesting person, I'm kind of surprised I hadn't heard her name before. Obviously knows what she is talking about when it comes to software. But I didn't care for her basic framing, that AI is magic (in the stage-magic sense), a trick being foisted by unscrupulous technological conmen on an unqualified and gullible public. I mean, it may be that, but it is also the case that many of the magicians have fooled themselves, or are in the process of doing so. That makes it something different. (She addresses this later on, saying basically that money can produce a lot of self-delusion.)
      • On a deeper level, I think her definitiveness and certainty is unwarranted. We know how the AI algorithms work (to some extent), but we don't know how minds work, making it difficult to say for certain that what the AIs are doing isn't real thinking but just an imitation. I actually tend to agree with her that it isn't, but the point is that the boundary between real and fake is not so distinct; it has become blurred, and that is a cultural fact, something we are all grappling with out of necessity.
      • Now that I've written it out, I think that aside from some quibbles we pretty much see eye-to-eye.
      • Her riff on how we don't have proper global climate monitoring was good (scary but good)
      • Host says "guns don't have agency", needs to read Latour
      • She has an elaborate theory of how AI is cognitive poison.
      • "Statistics should not be used for decision support" WHAT? Around 34:30...and AIs are statistical monstrosities. Also doesn't seem to think insurance companies should pay attention to statistics...instead use something like "natural intelligence" with autobiographical memory...I mean that sounds good (although I don't understand it).
      • Grounding problem. Somehow related to Friston free-energy theory. But also the same critique as situated action of course.
      • What is insanity? Our brain being unable to ground or situate its content. Er, um, overgeneralization. Fits in with her theory of cognitive degradation. A cult machine.
      • "engineering fiction" 100% not the way it works. Michael Levin, Lee Cronin (never heard of them). All human qualities (natural intelligence, consciousness, rationality, intelligence) arise from same source, a living source, that has a history. That is not the architecture of LLMs (true enough).
      • She's very confused about what algorithms can do vs natural systems. False dichotomy, and ahistorical.
      • She advocates abandoning them altogether; anybody who adopts them is part of a suicide cult.
      • "alignment problem" is a made up thing (kind of agree)
      • randomness as a simulation of life (disagree). She doesn't understand or rejects the concept of noise. Very confused.
    • To watch