Marvin kind of hated the trend towards embodiment and situated action that was on the rise at the AI lab in the 80s. He didn't think building robots was the way to AI; he thought it was a distraction. This was a bit curious, since he did robotics in his early days, including a famous robotic arm. But by then he was focused on common sense and the Society of Mind, and dealing with embodiment seemed to him beside the point. He also very much did not like philosophers, and the emphasis on Heidegger probably triggered a lot of hostility (with roots dating back way before my time, to his interactions with Hubert Dreyfus).
The above is very interesting, and resonates with the situated action distinction between plans as programs and plans as recipes that guide improvised conduct. Alexander is advocating jazz architecture!
Rod Brooks around 38:00: the problem with HAL is that he didn't have a body (see situated action). Dennett shows up, with a good line about Cog: there's nobody home, but some of the furniture is being moved in.
It's instructive to compare this treatment of embeddedness with the similar work done under the rubric of situated action. One of the points of situated action was that traditional AI approaches to planning and action were too complex (in both the colloquial sense of "too complicated" and the computer-science sense of "intractably unscalable"). The solution was to envision a radically simplified model of the mechanics of mind, one that relied on a closer interaction with the actual external world rather than on a laborious process of generating a representation and then doing optimization and planning over it.
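Here's a minimal sketch of what that radically simplified model can look like in code. I'm making up the sensor keys and action names; this isn't from any actual situated-action system, but it shows the flavor: no stored representation, just a direct mapping from what the sensors report to what the robot does.

```python
# Illustrative only: hypothetical sensor keys and action names.
def reactive_step(sensors):
    """Map the current sensor reading directly to an action; no stored world model."""
    if sensors.get("obstacle_ahead"):
        return "turn_left"
    if sensors.get("target_visible"):
        return "move_toward_target"
    return "wander"

def run_reactive(read_sensors, act, steps=100):
    """Each tick, consult the world itself rather than an internal representation."""
    for _ in range(steps):
        act(reactive_step(read_sensors()))
```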
Sounds William Blake-ish to me. It also resonates, somehow, with the situated action critique of planning and of AI goals.
The AI/rationalist model is: you have a representation of some goals and a representation of reality (via the senses); your executive machinery diffs these and uses the differences to drive actions intended to reduce the differences to zero.
The real problem with this is that it's entirely representational and ignores embodiment.
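To make the model concrete, here's a minimal sketch of that diff-and-reduce loop. The names (goal_state, sense, choose_action, act) are my own invention, not anybody's actual architecture; it just shows the executive comparing its goal representation against the sensed world and acting to shrink the gap.

```python
def executive_loop(goal_state, sense, choose_action, act, max_steps=1000):
    """Diff the goal representation against the sensed world; act to reduce the gap."""
    for _ in range(max_steps):
        world_state = sense()                        # representation of reality, via the senses
        diff = {k: goal_state[k] - world_state.get(k, 0)
                for k in goal_state}                 # the executive "diffs" goals against reality
        if all(v == 0 for v in diff.values()):
            return world_state                       # differences reduced to zero: done
        act(choose_action(diff))                     # action chosen to reduce the differences
```

Everything here happens inside the representations; the world only enters through sense(), which is exactly the point of the embodiment critique.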
To give you one idea of some of the dumb things that have diverted the field, you need only consider situated action theory. This is an incredibly stupid idea that somehow swept the world of AI. The basic notion in situated action theory is that you don't use internal representations of knowledge. (But I feel they're crucial for any intelligent system.) Each moment the system—typically a robot—acts as if it had just been born and bases its decisions almost solely on information taken from the world—not on structure or knowledge inside. For instance, if you ask a simple robot to find a soda can, it gets all its information from the world via optical scanners and moves around. But it can get stuck cycling around the can if things go wrong. It is true that 95 percent or even 98 percent of the time it can make progress, but for other times you're stuck. Situated action theory never gives the system any clues such as how to switch to a new representation.
When the first such systems failed, the AI community missed the important implication: that situated action theory was doomed to fail. I and my colleagues realized this in 1970, but most of the rest of the community was too preoccupied with building real robots to realize this point.
The situated action approach to artificial intelligence involved a radical rethinking of all aspects of its problem domain, intelligent action. It was deeply influenced by Heidegger, ethnomethodology, phenomenology, and a few other strains of thought that are not very common in technical discourse, or at least were not common at that place and time (the MIT AI Lab in the 1980s).
The old-fashioned model of mind which they were critiquing was something like this:
The central job of minds is making representations of the world, and using these (together with some goals) to derive plans and take actions.
This means that intelligent beings can be modularized into the following parts (sketched in code after the list):
a representational part, which encodes the world and the goals;
a sensing part, which builds and updates the representation;
a reasoning part, which computes plans of action intended to move closer to the goals; and
an execution part, which converts those plans into actual actions or behaviors.
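Here's a rough sketch of that modularization in code. The class names and toy logic are hypothetical, just to show how cleanly the four parts are supposed to separate in the old-fashioned model, which is exactly the decomposition situated action disputed.

```python
class Representation:
    """Encodes the world and the goals."""
    def __init__(self, goals):
        self.world = {}
        self.goals = goals

class Sensor:
    """Builds and updates the representation from observations."""
    def update(self, rep, observation):
        rep.world.update(observation)

class Reasoner:
    """Computes a plan of actions intended to move closer to the goals."""
    def plan(self, rep):
        return [("achieve", goal) for goal in rep.goals if not rep.world.get(goal)]

class Executor:
    """Converts the plan into actual actions or behaviors."""
    def execute(self, plan, act):
        for step in plan:
            act(step)
```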