Weird Studies/Mechanical Dollhouse

06 Jul 2022 09:34 - 08 Dec 2022 06:36
    • I'm having trouble formulating my opinion about this episode and the underlying paper. I'd say I agree with its main point and purpose but disagree on a lot of the details. Whether these details are interesting or not I can't be sure; they are significant to me, but they don't really affect the main point.
    • The agreement: yes, the digital (as practiced under capitalism, at least) leaves out important things in its attempt to be a representation of reality, and as it takes over more of our lives those things can become endangered, and so the takeover should be resisted.
    • The disagreements: well, there are many, but here are a couple:
      • (1) This sort of discourse has a tendency to conflate things that are related but not exactly the same: digital technology, computational theories of mind, the deployment of machine learning under capitalism, and the underlying mathematics. These are all obviously related, and they all no doubt have things wrong with them – but not exactly the same thing. Perhaps their problems are all traceable to a single source such as "Cartesianism" or Technic, but maybe not.
      • (2) The constant allusions to computation's lack of fluidity and ambiguity are really driving me crazy, for the specific reason that all of modern AI is based on fluidity and ambiguity: that's how Machine Learning works, in contrast to earlier generations of AI, which did have serious rigidity problems and didn't work as well. If you've seen pictures from DALL-E or other AI art generators, you know that they don't lack for fluidity; shapes freely morph into other ones in a way that mimics surrealism.
      • Internally, all the recent successes of AI are due to adopting continuous mathematics over the discrete mathematics used by your father's IBM machine. That is, they do not work with binary on/off variables or rigidly bounded categories, but with emergent, fluid, shades-of-grey representations. That is central to their operation, it is why they have been successful, and it distinguishes them from older approaches to AI that rested more on binary logic (a toy illustration follows this list).
      • What DALL-E lacks compared to the original Dalí is not fluidity but any sense of semantics or meaningfulness. Dalí's work connects to the sexual unconscious; DALL-E's work connects to nothing, or maybe to the preconscious layers of the brain – not the repressed, but the basic physiological mechanisms of vision, shape detectors and the like.
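      • A toy illustration of that continuity point, strictly a sketch (plain numpy, made-up labels and scores): a modern classifier doesn't flip an on/off bit per category, it spreads graded, continuous weight across all of them.

        ```python
        import numpy as np

        def softmax(logits):
            """Turn raw scores into a smooth probability distribution."""
            exps = np.exp(logits - np.max(logits))  # shift for numerical stability
            return exps / exps.sum()

        # Hypothetical raw scores an image model might assign to three labels.
        labels = ["cat", "dog", "lobster telephone"]
        logits = np.array([2.1, 1.9, 0.3])

        for label, p in zip(labels, softmax(logits)):
            print(f"{label}: {p:.2f}")
        # cat: 0.50, dog: 0.41, lobster telephone: 0.08
        # Nothing is absolutely in or out of a category; membership comes in
        # shades of grey, the opposite of the old binary-logic AI.
        ```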
    • Notes

      • the argument is that human beings and social systems composed of human beings are inherently, in Birhane's terms, absolutely unpredictable. And so any attempt by a technology to accurately predict human behavior, to accurately assess or represent human personality is essentially a doomed prospect that just won't work. And it will never.
        • JFM, podcast around 6:45
        • OK, a couple of problems here. First, modeling is inherently a project of approximation; it's never 100% accurate. So representations of the human can be good or bad but cannot be expected to be perfect (a toy demonstration follows). Second, AI claims in the limit not to "represent" the human but to simulate and re-create it – which is even crazier than claiming to model it, of course, but raises different issues than accuracy.
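        • To make the approximation point concrete, a sketch with fabricated data (numpy only): even when the model family exactly matches the rule that generated the data, prediction error bottoms out at the noise floor, never at zero.

          ```python
          import numpy as np

          rng = np.random.default_rng(0)

          # Fabricated "behavior": a clean linear tendency plus irreducible noise.
          x = rng.uniform(0, 10, 200)
          y = 2.0 * x + 1.0 + rng.normal(0, 3.0, 200)  # noise sigma = 3

          # Fit the best-case linear model on half the data, test on the rest.
          slope, intercept = np.polyfit(x[:100], y[:100], 1)
          pred = slope * x[100:] + intercept
          rmse = np.sqrt(np.mean((y[100:] - pred) ** 2))
          print(f"test RMSE ~ {rmse:.2f}")  # hovers around 3, the noise level
          # This is as good as any model of this process can do, and it is
          # still only an approximation: error never drops below the noise.
          ```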
      • No, you don't have to fully understand something to automate it; that's practically the whole point of AI, especially the ML variety.
      • Discreteness vs continuity – but again, ML is all about the continuous.
      • analog vs digital photography – they get this kind of wrong as well. Analog photography relies just as much on the discrete as digital does, and you can see this in the phenomenon of "graininess", which was one of the major considerations for pre-digital photographers.
      • atomism blamed on Descartes or Newton... OK, but I thought it went back to Lucretius.
      • "you predict in order to control" – well sort of. The question is, is this something human minds do (predict) and is it for reasons of control?
      • Control must assume categories are absolute – no, not really. (23:30)
      • Creativity (24:30) – there's a long tradition of computational creativity (e.g. AARON). Lots of ink has been spilled on whether this is real creativity or "the creativity of the serpent", but it ought to be acknowledged rather than just assuming computer systems can't be creative. There's been a lot of theorizing in the art world about how to understand generative art (see Generative art - Wikipedia).
      • AI Art systems seem to be missing something (true enough)
      • Cartesian model still haunts science – now that's true (they blame it on money, not sure that is right, but what do I know?)
      • John Cage didn't like improvisation? Who knew.
    • More
      • Predictive models, due to their use of historical data, are inherently conservative. They reproduce and reinforce norms, practices, and traditions of the past
      • Argh, yes, that is exactly what predictive models do (a sketch of the mechanism below).
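      • A sketch of why that conservatism is baked in, using fabricated data (numpy only): fit anything to biased historical records and it converges on the historical rates, because the past is the only signal it has.

        ```python
        import numpy as np

        rng = np.random.default_rng(1)

        # Fabricated "historical" hiring records: group A was hired at 60%,
        # group B at 20%, for reasons unrelated to merit.
        group = rng.integers(0, 2, 1000)  # 0 = A, 1 = B
        hired = np.where(group == 0,
                         rng.random(1000) < 0.6,
                         rng.random(1000) < 0.2)

        # The "predictive model": score each group at its historical hire rate.
        # (Any classifier fit to these records converges on the same thing.)
        rates = {g: hired[group == g].mean() for g in (0, 1)}
        print(rates)  # roughly {0: 0.6, 1: 0.2} -- the past, reproduced

        # A new applicant from group B gets a low score purely because
        # people like them got low scores before: the norm, reinforced.
        ```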
      • Given massive power disparity, those engaged in the practices of designing, developing, and deploying ML systems—effectively shoehorning individual people and their behaviours into predefined stereotypical categories
      • Argh, again, a bad description of ML; as above, its categories are graded and emergent, not rigid pigeonholes.
      • Ubiquitous deployment of ML models to high-stake situations creates a political and economic world that benefits the most privileged and harms the vulnerable.
      • I mean, probably true, but it's capitalism that does this; ML just accelerates the process.
      • They do not invent the future. Doing that, O'Neil (2016, p. 204) emphasizes, “requires moral imagination, and that's something only humans can provide.”
        • – Yet (OK, probably don't want to take that position)
      • We have so far looked at how individual people and social systems, as complex adaptive systems, are active, dynamic, and necessarily a historical phenomenon whose precise pathway is unpredictable. Contrary to this, we find much of current applied ML classifying, sorting, and predicting these fluctuating and contingent agents and systems, in effect, creating a certain trajectory that resembles the past
      • I have news: ML systems are nothing if not complex adaptive systems (minimal sketch below). And Artificial Life is not distinct from computation (as far as I know; I'm way behind in the field).
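      • A maximally stripped-down sketch of the "adaptive" part (numpy, fabricated data stream): an online learner is a system that continuously reshapes its internal state to track a changing environment.

        ```python
        import numpy as np

        rng = np.random.default_rng(2)
        w = 0.0  # the model's single adaptive parameter

        # A nonstationary environment: the true relationship flips at t=1000.
        for t in range(2000):
            true_w = 1.0 if t < 1000 else -1.0
            x = rng.normal()
            y = true_w * x + rng.normal(0.0, 0.1)
            w += 0.05 * (y - w * x) * x  # gradient step: adapt to the error
            if t in (999, 1999):
                print(f"t={t}: learned w = {w:.2f}")
        # t=999:  learned w is roughly  1.00
        # t=1999: learned w is roughly -1.00 -- the system tracked its world
        ```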
      • Technology that envisages a radical shift in power (from the most to the least powerful) stands in stark opposition to current technology that maximizes profit and efficiency. It is an illusion to expect technology giants to develop AI that centres on the interests of the marginalized. Strict regulations, social pressure through organized movements, strong reward systems for technology that empowers the least privileged, and a completely new application of technologies (which require vision, imagination, and creativity) all pave the way to a technologically just future.
      • Maybe I'm picking on the language, but it's interesting that the solution isn't "use technology for better purposes" or "train people who are marginal and anti-capitalist to be technologists". It's "a strong reward system". Guess we know who is in charge!
        • OK, really don't want to post that one!
      • better!

      • Truthfully I am having a mixed reaction to this episode and paper, but that's maybe because I work professionally with the digital and only do philosophy and critique in my spare time, strictly as an amateur. BUT – my one recentish attempt to write a somewhat academic paper is relevant to the subject of taxonomies,
      • Interesting episode and paper! I'm a computer guy, and only dabble in philosophy and cultural criticism, but my one sort-of academic publication in recent years is relevant to the subject of typologies which came up (they are called ontologies or object models in the trade): AMMDI: Politics and Pragmatism in Scientific Ontology Construction
      • more better ?