orthogonality thesis

30 Oct 2021 02:15 - 23 Mar 2024 08:54
    • I'm happy when life's good and when it's bad I cry I got values but I don't know how or why
    • Rationalism eschews specific goals and goods. They are relegated to the word "values" or "utility"; the focus of attention is on the powerful and fully general machinery of goal-satisfaction, for both human and computational agents.
    • The independence of goals and goal-satisfaction machinery is a foundational principle of Rationalism, under the banner of the orthogonality thesis. This is one of those assumptions that seems axiomatic to rationalists and completely wrongheaded to me.
    • It's not original with them, of course; it was a founding principle of GOFAI, incarnated in Newell and Simon's notion of a General Problem Solver and in the entire subfield of planning (a minimal sketch of that factoring follows below).
    • The orthogonality thesis: Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.
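    • To make the claimed independence concrete, here is a minimal sketch (my own toy example, not Newell and Simon's actual GPS) of the GOFAI factoring: the search machinery below is fully general and knows nothing about what it is for, while the final goal enters only as an externally supplied predicate.
      ```python
      from collections import deque

      def plan(start, goal_test, successors):
          """Generic breadth-first planner. The engine is goal-agnostic:
          the 'final goal' is just a predicate passed in from outside."""
          frontier = deque([(start, [])])
          seen = {start}
          while frontier:
              state, path = frontier.popleft()
              if goal_test(state):
                  return path
              for action, nxt in successors(state):
                  if nxt not in seen:
                      seen.add(nxt)
                      frontier.append((nxt, path + [action]))
          return None

      # Toy state space: integers we can increment or double.
      def successors(n):
          return [("inc", n + 1), ("double", n * 2)]

      # The same machinery serves arbitrary final goals:
      print(plan(1, lambda n: n == 10, successors))      # reach exactly 10
      print(plan(1, lambda n: n % 7 == 0, successors))   # reach a multiple of 7
      ```
      This is the architecture the objections below are aimed at: the engine will pursue whatever predicate it is handed, and nothing inside it is a source of purposes.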
    • Objections
      • A cybernetic goal is a relationship between an agent and its environment. Rational-AI goals, like the goals of capitalism, view the environment strictly as a resource to be exploited (a toy contrast is sketched below).
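        • As a toy contrast (my own illustrative simulation, not from any source): a cybernetic agent acts to maintain a relationship with its environment by correcting deviations from a setpoint, while a maximizer registers the same environment only as a stock to be drawn down.
          ```python
          class Environment:
              def __init__(self, temperature=25.0, resource=100.0):
                  self.temperature = temperature
                  self.resource = resource

          def cybernetic_step(env, setpoint=20.0, gain=0.5):
              """Goal as relationship: reduce the error between agent and
              environment, thermostat-style, then stop pushing."""
              error = setpoint - env.temperature
              env.temperature += gain * error
              return abs(error)

          def maximizer_step(env, extraction_rate=10.0):
              """Goal as quantity to maximize: the environment shows up
              only as a resource to be converted into 'utility'."""
              taken = min(extraction_rate, env.resource)
              env.resource -= taken
              return taken

          env = Environment()
          for _ in range(5):
              cybernetic_step(env)                               # settles near the setpoint
          utility = sum(maximizer_step(env) for _ in range(20))  # halts only when the stock is gone
          print(round(env.temperature, 2), env.resource, utility)
          ```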
    • Incoming

      • Outside in - Involvements with reality » Blog Archive » Against Orthogonality
        • Always a bit disturbing to find myself on the same side as Nick Land.
        • The orthogonalists, who represent the dominant tendency in Western intellectual history, find anticipations of their position in such conceptual structures as the Humean articulation of reason / passion, or the fact / value distinction inherited from the Kantians. They conceive intelligence as an instrument, directed towards the realization of values that originate externally.
        • The philosophical claim of orthogonality is that values are transcendent in relation to intelligence. This is a contention that Outside In systematically opposes....To look outside nature for sovereign purposes is not an undertaking compatible with techno-scientific integrity, or one with the slightest prospect of success.
          • I think a rationalist might quibble with "transcendent" – their claim is that values are independent of intelligence. But Land is absolutely on the money here. There are no transcendent purposes that intelligence serves; the purposes of intelligent life are immanent in the living (and are necessarily immanent in AIs too, although we don't have good intuitions for what that would mean).
        • The main objection to this anti-orthogonalism, which does not strike us as intellectually respectable, takes the form: If the only purposes guiding the behavior of an artificial superintelligence are Omohundro drives, then we’re cooked. Predictably, I have trouble even understanding this as an argument. If the sun is destined to expand into a red giant, then the earth is cooked — are we supposed to draw astrophysical consequences from that? Intelligences do their own thing, in direct proportion to their intelligence, and if we can’t live with that, it’s true that we probably can’t live at all. Sadness isn't an argument.
          • This neatly encapsulates a central point of late-Landism: that intelligence (artificial or otherwise) is a sort of cosmic force wholly independent of and oblivious to human needs. Rationalists share this view but think they have some ability to contain and control this force with their feeble alignment incantations; Land is laughing at them.
      • Scott Aaronson rejects it in Shtetl-Optimized » Blog Archive » Why am I not terrified of AI? (sensibly, if a bit weirdly folding in Nazis)
        • Only recently did I clearly realize that I reject the Orthogonality Thesis in its practically-relevant version. At most, I believe in the Pretty Large Angle Thesis....In the Orthodox AI-doomers’ own account, the paperclip-maximizing AI would’ve mastered the nuances of human moral philosophy far more completely than any human—the better to deceive the humans, en route to extracting the iron from their bodies to make more paperclips. And yet the AI would never once use all that learning to question its paperclip directive. I acknowledge that this is possible. I deny that it’s trivial.
        • WWII was (among other things) a gargantuan, civilization-scale test of the Orthogonality Thesis. And the result was that the more moral side ultimately prevailed, seemingly not completely at random but in part because, by being more moral, it was able to attract the smarter and more thoughtful people.
        • I think we should consider the possibility that powerful AIs will not be best understood in terms of the monomaniacal pursuit of a single goal—as most of us aren’t, and as GPT isn’t either. Future AIs could have partial goals, malleable goals, or differing goals depending on how you look at them.