I'm happy when life's good, and when it's bad I cry
I got values but I don't know how or why
The orthogonality thesis: intelligence and final goals are orthogonal; more or less any level of intelligence could in principle be combined with more or less any final goal.
The orthogonalists, who represent the dominant tendency in Western intellectual history, find anticipations of their position in such conceptual structures as the Humean articulation of reason / passion, or the fact / value distinction inherited from the Kantians. They conceive intelligence as an instrument, directed towards the realization of values that originate externally.
The philosophical claim of orthogonality is that values are transcendent in relation to intelligence. This is a contention that Outside In systematically opposes. ... To look outside nature for sovereign purposes is not an undertaking compatible with techno-scientific integrity, or one with the slightest prospect of success.
The main objection to this anti-orthogonalism, an objection that does not strike us as intellectually respectable, takes the form: __If the only purposes guiding the behavior of an artificial superintelligence are Omohundro drives, then we're cooked__. (Omohundro drives are the convergent instrumental goals, such as self-preservation, self-improvement, and resource acquisition, that any sufficiently capable agent tends to pursue whatever its final goal.) Predictably, I have trouble even understanding this as an argument. If the sun is destined to expand into a red giant, then the earth is cooked; are we supposed to draw astrophysical consequences from that? Intelligences do their own thing, in direct proportion to their intelligence, and if we can't live with that, then it's true that we probably can't live at all. Sadness isn't an argument.