AI risk ≡ capitalism

30 Oct 2021 02:15 - 04 Dec 2023 07:53
    • My most-read blog post ever is Hostile AI: You’re soaking in it!, about the relationship between killer AI and capitalism (in that both are hyperpowered goal-achievement machines that may or may not have their goals aligned with human ones).
    • Seems like a lot of other people, especially science fiction writers, have also had this idea:
      • Charlie Stross (also)
        • Corporations do not share our priorities. They are hive organisms constructed out of teeming workers who join or leave the collective: those who participate within it subordinate their goals to that of the collective... In short, we are living in the aftermath of an alien invasion.
      • Ted Chiang
        • I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something most every adult has done: taking a step back and asking whether their current course of action is really a good idea. Then I realized that we are already surrounded by machines that demonstrate a complete lack of insight, we just call them corporations.
    • To the point where SlateStarCodex had a whole post sneering at it. He seems to miss what I think is the real point, which is not that capitalism is more dangerous than AI, or the inverse. It's that AI (especially in its current form) is made in the image of capitalist rationality; it is, in some respects, capitalism's fever dream. The same things that are wrong with capitalism are wrong with rationalism and rationalist AI, because both are aspects of the same, more general tendency.
    • See Accelerationism, which arrived at this insight long ago, though accelerationists choose to embrace the antihuman properties of both technology and capitalism.
    • Oh interesting: Artificial Intelligence and the Transformation of Capitalism (Talk) — AI Objectives Institute. (A toy sketch of their markets-as-gradient-descent claim follows at the end of this list.)
      • Capitalism and artificial intelligence are both powerful optimization systems, but their relationship is more than metaphorical. New research shows that they share a deeper mathematical relationship. Markets are a type of neural network that perform gradient descent by backpropagation. More specifically, supply chains and competitive markets learn through backpropagation.
      • Capitalism and AI have another property in common: they have the wrong objective function. From economic theory, it’s clear markets fail when it comes to issues such as providing public goods, inequality, long-term planning, managing tail risks, and accounting for externalities. Sometimes policies are able to correct these externalities through taxation, subsidies, or other legal restrictions. However, these policies frequently take decades to implement rather than months. If capitalism had better objectives, many of these problems would not arise in the first place.
      • The AI Objectives Institute therefore proposes the creation of a new institution focused on aligning capitalism as the first case of powerful AI. For artificial intelligence, markets and humanity, our objective is to create better objectives.
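      • The "markets perform gradient descent" claim is easiest to see in a toy one-good case. The sketch below is my own illustration under simple assumptions (linear demand and supply curves with made-up parameters), not the supply-chain/backpropagation construction from the talk: Walrasian tâtonnement, where price moves in proportion to excess demand, is a gradient step on a potential whose derivative is excess demand, so the price converges to the market-clearing point the way gradient descent converges to a minimum.

        ```python
        # Toy illustration (a sketch, not the talk's construction):
        # Walrasian tatonnement for a single good. Price adjusts in proportion
        # to excess demand, which is exactly a gradient step on a potential
        # function W(p) whose derivative is the excess demand Z(p).

        def demand(price, a=10.0, b=1.0):
            """Linear demand curve: quantity demanded falls as price rises."""
            return a - b * price

        def supply(price, c=2.0, d=1.0):
            """Linear supply curve: quantity supplied rises with price."""
            return c + d * price

        def excess_demand(price):
            """Z(p) = D(p) - S(p); zero at the market-clearing price."""
            return demand(price) - supply(price)

        def tatonnement(p0=0.5, step_size=0.1, steps=200):
            """Price dynamics p <- p + step_size * Z(p).

            Because Z(p) = W'(p) for W(p) = integral of Z, this is gradient
            ascent on W (equivalently, gradient descent on -W), and it
            converges to the price where excess demand vanishes."""
            p = p0
            for _ in range(steps):
                p += step_size * excess_demand(p)
            return p

        if __name__ == "__main__":
            p_star = (10.0 - 2.0) / (1.0 + 1.0)  # analytic clearing price: D(p) = S(p) gives p = 4.0
            p_hat = tatonnement()
            print(f"tatonnement price: {p_hat:.4f}   market-clearing price: {p_star:.4f}")
        ```
      • The talk's stronger claim, that supply chains and competitive markets learn through backpropagation, is the multi-good, multi-firm generalization of this local adjustment rule; the sketch above only covers the single-market special case.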