In AI, this manifested as the planning problem. This is a standard AI problem, in which an agent has a desired goal state and some available actions, as well as access to a representation of the current state of the world. The planning problem is defined as coming up with a sequence of actions that, when executed, will achieve the goal. Chapman proved that this problem was computationally intractable, which was pretty significant, but for me the more interesting part of his work was how it led to a critique of the very basic foundations of the cognitive model, including representationalism and the default modularization of mind into perception, cognition, and motor activity.
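To make the definition concrete, here is a minimal sketch of a classical planner in the STRIPS style (my illustration, not Chapman's formalism): states are sets of true propositions, each action has preconditions plus add/delete effects, and planning is search for an action sequence that reaches a state satisfying the goal. The toy door domain is invented for the example.

```python
from collections import deque

def applicable(state, action):
    """An action can fire when its preconditions all hold in the state."""
    _, preconds, _, _ = action
    return preconds <= state

def apply_action(state, action):
    """Successor state: remove the delete-effects, add the add-effects."""
    _, _, adds, dels = action
    return (state - dels) | adds

def plan(initial, goal, actions):
    """Breadth-first search for a sequence of action names that reaches
    any state where all goal propositions hold. Worst-case exponential,
    which is exactly where the intractability results bite."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for action in actions:
            if applicable(state, action):
                nxt = apply_action(state, action)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [action[0]]))
    return None  # no plan exists

# Toy domain: a door that must be unlocked before it can be opened.
# Each action is (name, preconditions, add-effects, delete-effects).
actions = [
    ("unlock", frozenset({"locked"}),
               frozenset({"unlocked"}), frozenset({"locked"})),
    ("open",   frozenset({"unlocked", "closed"}),
               frozenset({"open"}), frozenset({"closed"})),
]
result = plan(frozenset({"locked", "closed"}), frozenset({"open"}), actions)
print(result)  # -> ['unlock', 'open']
```

Even this tiny formulation shows the representationalist commitments Chapman and Agre went on to question: the world is assumed to be fully captured in a symbolic state description that the agent manipulates internally before acting.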
Representationalism is the idea that minds are made of "representations", internal symbolic structures which in some way model or represent the external world, and that thought is a matter of doing computations with those representations.
This is an almost unquestioned assumption behind computationalism and cognitive science. People like Dreyfus and Agre and Chapman were able to question it, drawing on Heidegger, while more down-to-earth types like Rod Brooks questioned it from a more pragmatic engineering standpoint.
Orality and Literacy is another questioning of representationalism (via Agre's @Writing and Representation). The argument, boiled down, is that representationalism implicitly assumes a model of brain contents that is like writing (e.g., symbols are stable, passive things operated on by external processes). The realization that orality is much more fundamental to mental operations completely changes how you think about minds.