The chairman of the Central Military Committee of the Illuminated People’s Republic of Discordia today issued a statement affirming that the Less Wrong conceptual imperialists and their lapdog “Scott Alexander” will bitterly rue the day they set out to challenge the might of the IPRD. Their foolish effrontery knows no limits, but the Discordian masses, steadfastly holding to the line of Timothylearyist-Antonwilsonist thought, will deliver a merciless crushing response.
Conflict vs Mistake is one of those SlateStarCodex pieces that, regardless of whatever problems I have with it, has become foundational to my own discourse as well as that of the Rationalist community. (His Meditations on Moloch is another, and there are probably others.) It inspired me to write a couple of long blog posts. Scott's a powerful and subtle thinker, and it was an interesting exercise trying to figure out just where he goes wrong; where I find myself pulling in a different direction.
Social shaming also isn’t an argument. It’s a demand for listeners to place someone outside the boundary of people who deserve to be heard; to classify them as so repugnant that arguing with them is only dignifying them.
Rationalists have founded MIRI (the Machine Intelligence Research Institute) to deal with this problem, and CFAR (the Center for Applied Rationality) to promulgate rationalist self-improvement techniques. They are also tightly connected to the Effective Altruism movement. They've attracted funding from shady Silicon Valley billionaires and allies from within the respectable parts of academia. And they constitute a significant subculture within the world of technology and science, which makes them important. They are starting to penetrate the mainstream, as evidenced by this New Yorker article about some drama on the most popular rationalist blog, SlateStarCodex.
I'm writing this in the wake of the blowup between SlateStarCodex and the NYT that is rocking the internet; and I'm doing it to remind myself that even if SSC has dubious politics and his arguments can be bad in sneaky ways, he's an excellent writer who has a way of bringing the most abstruse of concepts to life; and so I'm going to engage with him on that level if possible. And it turns out that this essay revolves around questions of agency.
SlateStarCodex was a window into the Silicon Valley psyche. There are good reasons to try and understand that psyche, because the decisions made by tech companies and the people who run them eventually affect millions.
One of SlateStarCodex's most famous posts is his take on Allen Ginsberg's Howl, Meditations on Moloch. At least, it takes off from Ginsberg's portrait of industrial civilization being animated by Moloch, a brutal god who demands sacrifice. SlateStarCodex's take on Moloch is interesting; he identifies it not with an actual deity or with some personified attributes of the human mind, but with certain natural characteristics of the overall dynamics of life, unfortunate competitive dynamics that lead to bad results for everyone, such as the ruthlessness required by evolution and Malthusian economics. Moloch is not an agent, it's just the brutal way things are.
We're in a situation where neutrality is complicity, and I feel like an asshole for even saying that, but I think it reflects something real. Refusal to recognize this may be why rationalists like SlateStarCodex keep finding themselves in hot water and making lame or cute defenses for themselves.
To the point where SlateStarCodex had a whole post sneering at it. He seems to miss what I think is the real point, which is not that capitalism is more dangerous than AI, or the inverse. It's that AI (especially in its current form) is made in the image of capitalist rationality; it is in some respects a fever dream of capitalist rationality. The same things that are wrong with capitalism are wrong with rationalism and rationalist AI, because they are themselves aspects of some more general tendency.
Saying that today in most places will get you canceled, although Charles Murray is still going strong and there is some tolerance for such chin-stroking racism in certain quarters, including SlateStarCodex.
Slate Star Codex is the former blog of Scott Alexander (aka Scott Siskind, aka SSC), the most widely read person in the Rationalist sphere these days (2020 or so).
Due to a recent dustup between him and the New York Times (widely publicized and discussed all over the place – I'm not going to go over that story here, but I'm pretty onboard with Will Wilkinson's take) he now writes at a Substack, Astral Codex Ten.
My general opinion: he's an amazingly prolific and clever writer, but there's something off about his viewpoint. This is part of what has gotten him into trouble with the mainstream, and it's quite related to my general objections to Rationalism. I've written quite a bit trying to pick apart some of his posts, and I freely admit that has generally been a very rewarding intellectual experience, even if I don't end up vibing with him.