There are many reasons to pursue the dream of artificial intelligence, whether they be scientific, economic, or simply deriving from whatever force causes life and intelligence to try to replicate themselves in new forms. But there is a subtler reason that I think is implicit in these essays, which is that almost all of our preconceived ideas about the mind are severely wrong and broken (including such ancient concepts as willpower, freedom, consciousness, and innate ability), and this limits us and causes us needless suffering. Computational ideas give us a radically new way to see ourselves, and their power as a tool for human self-reflection has barely been touched. Getting these ideas into the hands of more people, especially children and other learners, may be one of the most important things we can do for human progress.
One rather immediate practical application of refactoring agency is that it can give you a better relationship with your distractors. All of us are fighting distractions – websites, noises, snacking, minor tasks, watching episodes of Bad Lip Reading – anything that momentarily seems more attractive than the task we are supposed to be working on. It is slightly paradoxical, but I have found that endowing these distractors with agency helps me to politely but firmly dismiss their attempts to grab my attention. Maybe it’s not so paradoxical – if resisting distractors requires willpower, it is not so fanciful to think that it is easier to resist an agent than an inanimate attractor, if only because we have lots of practice and techniques for opposing other agents.