It's instructive to compare this treatment of embeddedness with the similar work done under the rubric of situated action. One of the points of situated action was that traditional AI approaches to planning and action were too complex (in both the colloquial sense of "too complicated" and the computer-science sense of "intractably unscalable"). The solution was to envision a radically simplified model of the mechanics of mind that relied on a closer interaction with the actual external world, rather than a laborious effort of generating a representation and then doing optimization and planning with it.
The AI/rationalist model is: you have a representation of some goals and a representation of reality (via the senses); your executive machinery diffs these and uses the differences to drive actions that are intended to reduce the differences to zero.
The real problem with this is that it's entirely representational and ignores embodiment.
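The diff-and-act loop just described can be sketched in a few lines. This is a minimal illustration under assumed names (`sense`, `act`, `done` and the toy 1-D world are all my own inventions, not drawn from any actual AI system):

```python
def rationalist_loop(goal, sense, act, done):
    """Diff the goal against the sensed world; act until the gap closes."""
    while True:
        world = sense()            # sensing builds the representation of reality
        difference = goal - world  # executive machinery "diffs" goal vs. world
        if done(difference):
            return world
        act(difference)            # action intended to shrink the difference

# Toy instance: a 1-D "world" the agent nudges toward a goal value.
state = 0.0

def sense():
    return state

def act(difference):
    global state
    state += 0.5 * difference      # each action closes half the remaining gap

final = rationalist_loop(goal=10.0, sense=sense, act=act,
                         done=lambda d: abs(d) < 1e-3)
```

Even this toy version shows the structure being critiqued: all the intelligence lives in the representation and the diff, and the world enters only through `sense()`.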
The situated action approach to artificial intelligence involved a radical rethinking of all aspects of its problem domain of intelligent action. It was deeply influenced by Heidegger, ethnomethodology, phenomenology, and a few other strains of thought that are not very common in technical discourse, or at least were not at that place and time (the MIT AI Lab of the 1980s).
The old-fashioned model of mind which they were critiquing was something like this:
The central job of minds is making representations of the world, and using these (together with some goals) to derive plans and take actions.
This means that intelligent beings can be modularized into:
a representational part, which encodes the world and the goals;
a sensing part, which builds and updates the representation;
a reasoning part, which computes plans of action intended to get closer to the goals; and
an execution part, which converts the plans into actual actions or behaviors.
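The four-part modularization above can be sketched as a pipeline. The function names and the toy blocks-world facts are illustrative assumptions of mine, not any actual planner's design:

```python
def sensing(raw_percepts):
    """Sensing part: builds and updates the representation from raw input."""
    return set(raw_percepts)

def reasoning(representation, goals):
    """Reasoning part: computes a plan intended to close the gap
    between the represented world and the goals."""
    return [("achieve", fact) for fact in sorted(goals - representation)]

def execution(plan):
    """Execution part: converts the plan into actions/behaviors."""
    return ["do:" + fact for (_, fact) in plan]

# Representational part: the encoded world plus the goals.
representation = sensing(["block_a_on_table"])
goals = {"block_a_on_table", "block_b_on_a"}

actions = execution(reasoning(representation, goals))
# actions == ["do:block_b_on_a"]
```

The point of the sketch is the strict separation: each module hands a finished artifact (representation, plan, actions) to the next, with no feedback from the world until the whole cycle completes — exactly the architecture the situated action critique targeted.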