AI Risk

30 Oct 2021 02:15 - 17 Jun 2023 08:29
    • There are two quite different versions of possible harms caused by AI in the air these days:
      • The long-term superintelligence risk that is an obsession of Rationalism.
      • The short-term danger that real-world AI systems like image recognition, predictive policing programs, etc., will perpetuate existing structural inequalities or otherwise cause social harms.
    • As for the first, see AI risk ≡ capitalism. I really think there is something deep going on here in terms of different theories of social agency.
      • The Rationalist types who worry about AI Risk tend to be libertarian; that is, they generally view capitalism favorably, while capitalism's monstrous tendencies are displaced onto a mythical superintelligence. Capitalism and technology are powerful but can be tamed through reason.
      • The left-wing, SJW types who worry about AI bias and its potential to massively increase the surveillance powers of the state and private interests, whether explicitly socialist or not, view capitalism and technology as extremely powerful forces that require checking through political opposition.
      • Accelerationism acknowledges capitalism/technology as an insanely powerful force, but holds that there is no hope in opposing it or trying to tame it. Not sure what their program is – surfing the wave until it crashes? They are tightly linked with neoreaction, which advocates "monarchy", by which they mean a single, all-powerful nexus of power. I guess that is another approach to social agency, albeit a ridiculous one.
    • Further reading
      • Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. OUP Oxford. Kindle Edition.