This neatly encapsulates a central point of late-Landism: that intelligence (artificial or otherwise) is a sort of cosmic force that is wholly independent of and oblivious to human needs. Rationalists share this view but think they have some ability to contain and control this force with their feeble alignment incantations; Land is laughing at them.
A buzzword from recent AI that refers to the practice or goal of making an artificial agent's goals compatible with human ones. This is not an obviously stupid idea. I happen to think that the way rationalism and AI think about goals is kind of stupid (see orthogonality thesis), and as a consequence the efforts at alignment seem mostly misguided to me.