Incoming links
from computational constitution
from LWMap/Naming the Nameless
  • This belief that aesthetics can vary independently from the values they encode is oddly similar to the orthogonality thesis; it presumes that something high level is completely independent from its low-level implementation.
from LWMap/Being a Robust Agent
  • The independence of goals and goal-satisfaction machines is a foundational principle of rationalism, under the name orthogonality thesis. This thesis is one of those assumptions that seems axiomatic to rationalists and completely wrong to me.
from alignment
  • A buzzword from recent AI that refers to the practice or goal of making an artificial agent's goals compatible with human ones. This is not an obviously stupid idea. I happen to think that the way rationalism and AI think about goals is kind of stupid (see orthogonality thesis) and as a consequence, the efforts at alignment seem mostly misguided to me.
from paperclip maximizer
  • An imaginary intelligence that exemplifies the consequences of the orthogonality thesis, in the sense that it is an intelligence with vast capabilities for rational action, but in the pursuit of an idiotic goal that is dangerous to human values. See the LessWrong page.
from Rationalism
  • Taking as axiomatic things that are extremely questionable at best (orthogonality thesis, Bayesianism as a theory of mind).

orthogonality thesis

27 Dec 2020 07:39 - 01 Jan 2022 07:48

    • I'm happy when life's good and when it's bad I cry / I got values but I don't know how or why
    • Rationalism eschews specific goals and goods; they are relegated to placeholder terms like "values" or "utility", while the focus of attention is on the powerful and fully general machinery of goal-satisfaction, for both human and computational agents.
    • The independence of goals and goal-satisfaction machines is a foundational principle of Rationalism, under the banner of the orthogonality thesis. This is one of those assumptions that seems axiomatic to rationalists and completely wrongheaded to me.
    • The orthogonality thesis, in Nick Bostrom's formulation: Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.
    • It's not original with them, of course; it was a founding principle of GOFAI, incarnated in Newell and Simon's notion of a General Problem Solver and the entire subfield of planning.
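      • To make the goal/machinery separation concrete, here is a minimal sketch (my illustration, not anything from the GPS literature) of the orthogonality picture as it looks in code: the search machinery is fully general, and the "final goal" is an arbitrary plug-in parameter. The function name best_choice and the toy paperclip utility are hypothetical.

```python
from typing import Callable, Iterable, TypeVar

S = TypeVar("S")

def best_choice(options: Iterable[S], utility: Callable[[S], float]) -> S:
    """A fully general maximizer: it knows nothing about what it maximizes."""
    return max(options, key=utility)

# Any final goal, however idiotic, plugs in without touching the machinery:
outcomes = [{"paperclips": 3}, {"paperclips": 7}, {"paperclips": 1}]
print(best_choice(outcomes, lambda s: s["paperclips"]))   # -> {'paperclips': 7}
print(best_choice(outcomes, lambda s: -s["paperclips"]))  # -> {'paperclips': 1}
```

        Read as a claim about software, the orthogonality thesis says this parameterization scales to any level of intelligence; the objection below is that real goals aren't detachable parameters but relationships between an agent and its environment.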
    • Objections
      • A cybernetic goal is a relationship between an agent and its environment. Rational-AI goals, like the goals of capitalism, treat the environment strictly as a resource to be exploited.
    • Incoming

      • Outside in: Against Orthogonality (Nick Land)
        • Always a bit disturbing to find myself on the same side as Nick Land.
        • The orthogonalists, who represent the dominant tendency in Western intellectual history, find anticipations of their position in such conceptual structures as the Humean articulation of reason / passion, or the fact / value distinction inherited from the Kantians. They conceive intelligence as an instrument, directed towards the realization of values that originate externally.
        • The philosophical claim of orthogonality is that values are transcendent in relation to intelligence. This is a contention that Outside In systematically opposes. ... To look outside nature for sovereign purposes is not an undertaking compatible with techno-scientific integrity, or one with the slightest prospect of success.
          • I think a rationalist might quibble with "transcendent"; their claim is that values are independent of intelligence. But Land is absolutely on the money here: there are no transcendent purposes that intelligence serves. The purposes of intelligent life are immanent in the living (and are necessarily immanent in AIs too, although we don't have good intuitions for what that would mean).
        • The main objection to this anti-orthogonalism, which does not strike us as intellectually respectable, takes the form: "If the only purposes guiding the behavior of an artificial superintelligence are Omohundro drives, then we're cooked." Predictably, I have trouble even understanding this as an argument. If the sun is destined to expand into a red giant, then the earth is cooked — are we supposed to draw astrophysical consequences from that? Intelligences do their own thing, in direct proportion to their intelligence, and if we can't live with that, it's true that we probably can't live at all. Sadness isn't an argument.
          • This neatly encapsulates a central point of late Landism: that intelligence (artificial or otherwise) is a sort of cosmic force, wholly independent of and oblivious to human needs. Rationalists share this view but think that they have some ability to contain and control this force with their feeble alignment incantations; Land is laughing at them.