The argument is that human beings, and social systems composed of human beings, are, in Birhane's terms, inherently and absolutely unpredictable. So any attempt by a technology to accurately predict human behaviour, or to accurately assess or represent human personality, is essentially a doomed prospect: it just won't work, and it never will.
Predictive models, due to their reliance on historical data, are inherently conservative: they reproduce and reinforce the norms, practices, and traditions of the past.
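This conservatism can be seen even in the simplest possible predictor. The sketch below is a hypothetical illustration (the data and the majority-vote "model" are invented for the example, not drawn from any real system): a model fit on historical hiring records where one group was favoured simply replays that past disparity as its "prediction" about the future.

```python
from collections import Counter, defaultdict

# Hypothetical historical hiring records: (group, hired) pairs.
# Past practice favoured group "A" over group "B".
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 20 + [("B", False)] * 80)

def fit_majority(records):
    """A minimal 'predictive model': for each group, predict
    whatever outcome was most common in the historical data."""
    counts = defaultdict(Counter)
    for group, outcome in records:
        counts[group][outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = fit_majority(history)
print(model)  # {'A': True, 'B': False} — the past disparity, replayed
```

However sophisticated the model class, anything optimized to fit historical outcomes faces the same structural pull: it scores well precisely by resembling the past.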
Given the massive power disparity, those engaged in the practices of designing, developing, and deploying ML systems effectively shoehorn individual people and their behaviours into predefined stereotypical categories.
Ubiquitous deployment of ML models in high-stakes situations creates a political and economic world that benefits the most privileged and harms the most vulnerable.
They do not invent the future. Doing that, O'Neil (2016, p. 204) emphasizes, "requires moral imagination, and that's something only humans can provide."
We have so far looked at how individual people and social systems, as complex adaptive systems, are active, dynamic, and necessarily historical phenomena whose precise pathways are unpredictable. In contrast, we find much of current applied ML classifying, sorting, and predicting these fluctuating and contingent agents and systems, in effect creating a trajectory that resembles the past.
Technology that envisages a radical shift in power (from the most to the least powerful) stands in stark opposition to current technology that maximizes profit and efficiency. It is an illusion to expect technology giants to develop AI that centres on the interests of the marginalized. Strict regulations, social pressure through organized movements, strong reward systems for technology that empowers the least privileged, and a completely new application of technologies (which require vision, imagination, and creativity) all pave the way to a technologically just future.