Rationalists are not really Gradgrinds; they are in fact a pretty playful and imaginative bunch in their way. But their ideology is grim, and their nightmares of the paperclip maximizer have a Gradgrindian aspect to them.
Interesting. Was Eichmann not intelligent? He may have been a moral idiot, but he seemed to function OK (for some definitions of OK) without a well-developed moral sense. Still, there is some connection between intelligence and morality, and the idea of an amoral intelligence divorced from or independent of morality (as in the paperclip maximizer) has something wrong with it.
AI people take this phrase overly literally, or perhaps not literally enough. Although arrogant by nature, most are humble enough to know that if there is such a thing as a soul, our current techniques don't really approach it. Instead the aim is to construct "intelligence" which is a somewhat de-souled way to talk about minds. But intelligence without soul is a recipe for horror (see paperclip maximizer).
The need to find some kind of value or purpose in a meaningless universe is an unacknowledged undercurrent throughout Rationalist discourse, emerging in its nightmares, like the paperclip maximizer or Roko's Basilisk, and in its attempts to wax poetic about utilitarianism. This is not really meant as a criticism. I see Rationalism as a sincere attempt to build something necessary – a religion, a shared way of making meaning – on top of the unpromising nihilist foundations of the materialist worldview. I'm sympathetic to their goals and efforts but dubious about their solution.
While this imaginary intelligence is only implied here, the rationalists have a worked-out explicit image for it: the paperclip maximizer. Despite this model exerting a powerful gravitational pull on rationalist thought, it is wholly imaginary. There are no pitiless maximizer engines, unless you count capitalism.
An imaginary intelligence that exemplifies the consequences of the orthogonality thesis: an intelligence with vast capabilities for rational action, deployed in pursuit of an idiotic goal that is dangerous to human values. See the LessWrong page.
This seems entirely implausible to me. Part of this exercise is to investigate and defend that intuition and related doubts about the ironclad mathematical certainties that Rationalism produces so effortlessly.
My most popular blog post of all time (see AI risk ≡ capitalism) made the observation that the real human-hostile stupid-genius optimization engine is capitalism. Apparently what I thought of as a stunning insight is actually something of a cliché, but I'm still convinced of its importance.