optimizing

30 Oct 2021 02:15 - 15 Sep 2023 11:05
    • One of my gripes with Rationalism is the unquestioned assumption that intelligence is about optimizing some quantity. Closely related is a similar gripe about winning. I find this an impoverished way to think about the mind.
    • That equation of rationality with maximizing some quantity is often grounded out in the Von Neumann–Morgenstern utility theorem, which shows that under certain axioms of rationality, agents will behave as if they are maximizing some function. This is unconvincing because the axioms involved do not correspond to how real minds work.
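      • For reference, a rough sketch of what the theorem says (my paraphrase, using standard notation $\succeq$, $u$, $L$, $M$ that isn't in the original): if an agent's preference relation $\succeq$ over lotteries satisfies completeness, transitivity, continuity, and independence, then there exists a utility function $u$ over outcomes, unique up to positive affine transformation, such that
        $$L \succeq M \iff \mathbb{E}_{L}[u] \ge \mathbb{E}_{M}[u]$$
        That is, the agent behaves as if it were maximizing expected utility. The objection is that real preferences routinely violate these axioms (they can be incomplete, intransitive, or context-dependent), so the "as if" never gets off the ground.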
    • The informal intuition is that the logics of evolution and economics seem to inexorably impose competition on us, so there is always going to be something corresponding to winning and losing, and thus to scoring high or low, even when that is not an explicit way of thinking.
    • I'm a conflict theory type of guy, so I would not want to argue against the idea that competition is fundamental. Something about the optimization framework doesn't sit right with me, though. Humans have the unique ability to redefine their own competitive landscapes, and reducing all human intelligent action to an optimization problem seems to miss what thinking is really about.
    • So it's not that optimization isn't real, it's that it's not a good way for humans to think about themselves. I guess this is most evident in the area of reproduction, the most basic form of optimization, the one implemented by evolution. With humans, reproductive success is obviously important, but people who pursue it directly via some kind of rational process are considered kind of creepy, and often don't do very well at it. It's the kind of goal that can only be achieved indirectly (see anti-purpose).
    • Even evolution isn't quite the grim Molochian engine it is sometimes portrayed as. If it were just optimizing something, you'd think it would produce oceans of E. coli or something similar, rather than the wild profusion of forms it actually comes up with. Which again is not to say that optimization theory doesn't have a place, but it doesn't seem to have quite as fundamental a place as Rationalists seem to assume.
    • ‘[Definitions] of intelligence used throughout the cognitive sciences converge towards the idea that “Intelligence measures an agent’s ability to achieve goals in a wide range of environments”,’ write Anna Salamon and Luke Muehlhauser in Intelligence Explosion: Evidence and Import, a research paper published by the Machine Intelligence Research Institute. ‘We might call this the “optimisation power” concept of intelligence, for it measures an agent’s power to optimise the world according to its preferences across many domains.’
      • Tom Chivers. The AI Does Not Hate You: Superintelligence, Rationality and the Race to Save the World
    • I Prefer World Optimisation