The first risk is that it is always dangerous to try to relieve ourselves of the responsibility of understanding exactly how our wishes will be realized. Whenever we leave the choice of means to any servants we may choose then the greater the range of possible methods we leave to those servants, the more we expose ourselves to accidents and incidents.
The ultimate risk comes when our greedy, lazy, master-minds attempt to take that final step––of designing goal-achieving programs that are programmed to make themselves grow increasingly powerful, by self-evolving methods that augment and enhance their own capabilities. ...
As for the first, see AI risk ≡ capitalism. I really think there is something deep going on here in terms of different theories of social agency.
The Rationalist types who worry about AI Risk tend to be libertarian; that is, they generally view capitalism favorably, while capitalism's monstrous tendencies are displaced onto a mythical superintelligence. In this view, capitalism and technology are powerful but can be tamed through reason.
The left-wing, SJW types worry about AI bias and its potential to massively increase the surveillance powers of the state and private interests. Whether or not they are explicitly socialist, they view capitalism and technology as extremely powerful forces that require checking through political opposition.
Accelerationism acknowledges capitalism/technology as an insanely powerful force, but holds that there is no hope in opposing it or trying to tame it. I'm not sure what their program is – surfing the wave until it crashes? They are tightly linked with neoreaction, which advocates "monarchy", by which they mean a single, all-powerful nexus of power. I guess that is another approach to social agency, albeit a ridiculous one.