• AMMDI is an open-notebook hypertext writing experiment, authored by Mike Travers aka @mtraven. It's a work in progress and some parts are more polished than others. Comments welcome!
Incoming links
from Deleuze
  • Sounds William Blake-ish to me. Also resonates somehow with the situated action critique of planning and AI goals.
    • The AI/rationalist model is: you have a representation of some goals and a representation of reality (via the senses); your executive machine diffs these and uses the differences to drive actions intended to reduce the differences to zero.
    • The real problem with this is that it's entirely representational and ignores embodiment.
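That diff-and-reduce picture can be made concrete with a small sketch. Everything here is hypothetical and illustrative (the function names, the thermostat-like toy world, the 0.5 gain): it just shows a loop that senses, diffs against a goal, and acts to shrink the difference.

```python
# Hypothetical sketch of the diff-and-reduce executive loop described above:
# sense the world, diff the sensed state against the goal representation,
# and act so as to drive the difference toward zero.

def diff_and_reduce(goal, sense, act, tolerance=0.01, max_steps=100):
    """Drive actions until the sensed state matches the goal representation."""
    for _ in range(max_steps):
        state = sense()          # build a representation of reality
        error = goal - state     # the executive's "diff"
        if abs(error) <= tolerance:
            return state         # differences reduced to (near) zero
        act(error)               # action chosen purely to shrink the diff
    return sense()

# Toy usage: a thermostat-like world where each action nudges the state.
world = {"temp": 15.0}
final = diff_and_reduce(
    goal=20.0,
    sense=lambda: world["temp"],
    act=lambda err: world.__setitem__("temp", world["temp"] + 0.5 * err),
)
```

The point of the critique is that everything interesting happens inside `sense` and `act`, which this model treats as trivial plumbing.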
from LWMap/Embedded Agency
  • It's instructive to compare this treatment of embeddedness with the similar work done under the rubric of situated action. One of the points of situated action was that traditional AI approaches to planning and action were too complex (in both the colloquial sense of "too complicated" and the computer-science sense of "intractably unscalable"). The solution was to envision a radically simplified model of the mechanics of mind that relied on a closer interaction with the actual external world, rather than a laborious effort of generating a representation and then doing optimization and planning with it.
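A minimal sketch of what that radically simplified model might look like, loosely in the spirit of Brooks-style reactive architectures. All names and rules are hypothetical: the agent keeps no world model and makes no plans, just maps the currently sensed situation directly to an action, with higher-priority behaviors preempting lower ones.

```python
# Hypothetical sketch of a "situated" agent: no stored representation,
# no optimization or planning, just condition -> action rules evaluated
# against the actual world at every step.

def situated_step(percept: dict) -> str:
    """Map the currently sensed situation directly to an action."""
    if percept.get("obstacle_ahead"):   # highest-priority behavior preempts
        return "turn"
    if percept.get("goal_visible"):
        return "approach"
    return "wander"                     # default behavior

action = situated_step({"obstacle_ahead": True, "goal_visible": True})
```

Note how "complexity" disappears: there is nothing to keep consistent with the world, because the world itself is consulted on every step.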
from Technic and Magic
  • This recalls the situated action critique of AI, unsurprisingly; I think both it and Campagna are drinking from the Heideggerian well.
from métis
from Christopher Alexander
  • The above is very interesting and resonates with the situated action distinction between plans as programs and plans as recipes that guide improvised conduct. Alexander is advocating jazz architecture!
Twin Pages

situated action

16 Jan 2021 09:15 - 01 Jan 2022 07:48

    • The situated action approach to artificial intelligence involved a radical rethinking of all aspects of its problem domain of intelligent action. It was deeply influenced by Heidegger, ethnomethodology, phenomenology, and a few other strains of thought that are not very common in technical discourse, or at least were not at that place and time (the 1980s MIT AI lab).
    • The old-fashioned model of mind which they were critiquing was something like this:
      • The central job of minds is making representations of the world, and using these (together with some goals) to derive plans and take actions.
      • This means that intelligent beings can be modularized into
        • a representational part, which encodes the world and the goals,
        • a sensing part, which builds and updates the representation,
        • a reasoning part, which computes plans of action intended to get closer to the goals, and
        • an execution part, which converts the plans into actual actions or behaviors.
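The four-part decomposition above can be sketched as a small class. This is a hypothetical illustration, not anyone's actual architecture: the class, its methods, and the toy "door" domain are all invented to show the modular structure (representation, sensing, reasoning, execution) and how it chains into a pipeline.

```python
# Hypothetical sketch of the old-fashioned modular model of mind:
# sensing updates a representation; reasoning derives a plan from the
# representation plus the goals; execution turns the plan into behavior.

class ClassicalAgent:
    def __init__(self, goals: dict):
        self.goals = goals        # representational part: encoded goals
        self.world_model = {}     # representational part: encoded world

    def sense(self, observations: dict) -> None:
        """Sensing part: build and update the representation."""
        self.world_model.update(observations)

    def reason(self) -> list:
        """Reasoning part: compute a plan intended to reach the goals."""
        return [("set", key, value)
                for key, value in self.goals.items()
                if self.world_model.get(key) != value]

    def execute(self, plan: list) -> list:
        """Execution part: convert the plan into actions/behaviors."""
        return [f"do {op} {key} -> {value}" for op, key, value in plan]

# Toy usage: the whole pipeline runs sense -> reason -> execute.
agent = ClassicalAgent(goals={"door": "open"})
agent.sense({"door": "closed"})
actions = agent.execute(agent.reason())
```

The situated action critique targets exactly this tidy pipeline: the modules only work to the extent that `world_model` stays faithful to a world that is changing underneath it.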