MINSKY: ... Computer science is not really about computers at all, but about ways to describe processes. As soon as those computers appeared, this became an urgent need. Soon after that we recognized that this was also what we'd need to describe the processes that might be involved in human thinking, reasoning, memory, and pattern recognition, etc.
JB: You say 1950, but wouldn't this be preceded by the ideas floating around the Macy Conferences in the '40s?
MINSKY: Yes, indeed. Those new ideas were already starting to grow before computers created a more urgent need. Before programming languages, mathematicians such as Emil Post, Kurt Gödel, Alonzo Church, and Alan Turing already had many related ideas. In the 1940s these ideas began to spread, and the Macy Conference publications were the first to reach more of the technical public. In the same period, there were similar movements in psychology, as Sigmund Freud, Konrad Lorenz, Nikolaas Tinbergen, and Jean Piaget also tried to imagine advanced architectures for "mental computation." In the same period, in neurology, there were my own early mentors—Nicholas Rashevsky, Warren McCulloch and Walter Pitts, Norbert Wiener, and their followers—and all those new ideas began to coalesce under the name "cybernetics." Unfortunately, that new domain was mainly dominated by continuous mathematics and feedback theory. This made cybernetics slow to evolve more symbolic computational viewpoints, and the new field of Artificial Intelligence headed off to develop distinctly different kinds of psychological models.
JB: Gregory Bateson once said to me that the cybernetic idea was the most important idea since Jesus Christ.
MINSKY: Well, surely it was extremely important in an evolutionary way. Cybernetics developed many ideas that were powerful enough to challenge the religious and vitalistic traditions that had for so long protected us from changing how we viewed ourselves. These changes were so radical as to undermine cybernetics itself. So much so that the next generation of computational pioneers—the ones who aimed more purposefully toward Artificial Intelligence—set much of cybernetics aside.
The practicality of the alternative—finding useful mechanistic interpretations of those mentalistic notions that have real value—is associated in its most elementary forms with what we call cybernetics, and in its advanced forms with what we call artificial intelligence.
To be sure, [early cybernetics] had had their own intellectual ancestors, but here for the first time we see a sufficiently concrete (i.e., technical) foundation for the use of mentalistic language as a constructive and powerful tool for describing machines. It is ironic that these ideas descend more from the "idealistic" rather than from the "mechanistic" lines in metaphysical and psychological thought! For the mechanistic tradition was fatally dominated by the tightly limited stock of kinematic images that were available, and did not lead to models capable of adequate information processing. The idealists were better equipped (and more boldly prepared) to consider more sophisticated abstract structures and interactions, though they had no mechanical floor upon which to set them.
Every now and then there is a centennial and I think the reason that there is this fake Freud title on my talk is that 1995 is the centennial of the first good book on structural psychology, namely Freud’s Interpretation of Dreams, which came out in 1895. It’s the first major work which discusses the mind as though there weren’t a single little homunculus or mastermind in the middle of it pulling the strings.
Freud’s book is the first one that I know of that says that in the mind there isn’t a single thing there, there are at least three or four major systems. There is the common-sense reasoning system – later he wrote a book on the psychopathology of everyday life in which he tries and fails to account for how people do ordinary common-sense reasoning, but he recognizes that as a very hard problem. If you look at the later Freud a lot of people think that Freud must have studied emotions and emotional illnesses and disorders because that was the really hard and important problem, but Freud himself said the really hard thing to understand is how people do ordinary everyday non-neurotic reasoning.
The reason I suggested that he was a great AI researcher is that in trying to make this theory of how people’s motivations worked in this period 1895 and a couple years thereafter, he wrote this long essay, some of which is lost, called the Project for a Scientific Psychology. The Project wasn’t published until around 1950, by Karl Pribram, and it has little diagrams which are sort of neural-network-like, but more like the hydraulic motivation theory that Nikolaas Tinbergen and Konrad Lorenz later won the Nobel Prize for, the diagram that is the foundation of modern ethological theory.
But the point is that Freud is an important figure because he’s the first one to consider that the mind is a big complicated kludge of different types of machinery which are specialized for different functions. If you look at the brain you find 400-odd different pieces of computer architecture with their own busses and somewhat different architecture. Nobody knows much about how any of those work yet, but they’re on the track. The idea that the brain is made of neurons and works largely by sending signals along the nerve fibers which are the connections between them is only from about 1890. The neuron doctrine, that the nerve cells are not a single organism, not a single thing but there are a lot of independent gadgets which do separate things and send signals that are informational rather than chemical and so forth, is only a hundred years old.
The mind is a community of “agents”. Each has limited powers and can communicate only with certain others. The powers of mind emerge from their interactions for none of the agents, by itself, has significant intelligence. [...] Everyone knows what it feels like to be engaged in a conversation with oneself. In this book, we will develop the idea that these discussions really happen, and that the participants really “exist”. In our picture of the mind we will imagine many “sub-persons”, or “internal agents”, interacting with one another. Solving the simplest problem – seeing a picture – or remembering the experience of seeing it – might involve a dozen or more – perhaps very many more – of these agents playing different roles. Some of them bear useful knowledge, some of them bear strategies for dealing with other agents, some of them carry warnings or encouragements about how the work of others is proceeding. And some of them are concerned with discipline, prohibiting or “censoring” others from thinking forbidden thoughts.
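Minsky never gave this architecture as code, but a toy sketch (in Python, with invented names throughout) can make the picture concrete: each agent has one small power and an inbox, can signal only the agents it is wired to, and the "conclusion" belongs to the wiring rather than to any single agent.

```python
# A toy "society of agents" (all names hypothetical): each agent has one
# small power and an inbox, and may signal only the agents it is wired to.
# No single agent is intelligent; the conclusion lives in the wiring.

class Agent:
    def __init__(self, name):
        self.name = name
        self.links = []    # the only agents this one may talk to
        self.inbox = []

    def connect(self, other):
        self.links.append(other)

    def send(self, message):
        for other in self.links:
            other.inbox.append((self.name, message))

    def step(self):
        """One tiny unit of work per cycle; subclasses override this."""
        self.inbox.clear()

class EdgeFinder(Agent):
    def step(self):
        self.send("edge")          # pretends to detect one low-level feature
        self.inbox.clear()

class ShapeGuesser(Agent):
    def step(self):
        # Knows nothing about images; it only counts reports from neighbors.
        if sum(1 for _, m in self.inbox if m == "edge") >= 2:
            self.send("looks-like-a-box")
        self.inbox.clear()

class Reporter(Agent):
    def step(self):
        for sender, m in self.inbox:
            print(f"{self.name}: heard '{m}' from {sender}")
        self.inbox.clear()

# Two feature agents feed a guesser, which feeds a reporter.
e1, e2 = EdgeFinder("edge-1"), EdgeFinder("edge-2")
guess, report = ShapeGuesser("guesser"), Reporter("reporter")
e1.connect(guess); e2.connect(guess); guess.connect(report)
for agent in (e1, e2, guess, report):   # one synchronous cycle
    agent.step()
```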
Note: This book uses the term "resource" where my earlier book, The Society of Mind, used "agent." I made this change because too many readers assumed that an "agent" is a personlike thing (like a travel agent) that could operate independently, or cooperate with others in much the same ways that people do.
To understand what we call the Self we first must see what Selves are for. One function of the Self is to keep us from changing too rapidly. Each person must make some long-range plans in order to balance single-purposeness against attempts to do everything at once. But it is not enough simply to instruct an agency to start to carry out our plans. We also have to find some ways to constrain the changes we might later make, to prevent ourselves from turning those plan-agents off again! If we changed our minds too recklessly, we could never know what we might want next. We'd never get much done because we could never depend on ourselves.
According to the modern scientific view, there is simply no room at all for "freedom of the human will". Everything that happens in our universe is either completely determined by what's already happened in the past or else depends, in part, on random chance... whatever actions we may "choose", they cannot make the slightest change in what might otherwise have been...
Whenever we find some scrap of order in the world, we have to attribute it to Cause—and whenever things seem to obey no laws at all, we attribute that to Chance. This means that the dominion controlled by Will can only hold what, up to now, we don't understand.
Does this mean that we must embrace the modern scientific view and put aside the ancient myth of voluntary choice? No. We can't do that: too much of what we think and do revolves around those old beliefs. Consider how our social lives depend upon the notion of responsibility and how little that idea would mean without our belief that personal actions are voluntary.
If one has an inappropriate vision in the imagination, one generates an inappropriate “phase-portrait for the geometry of behavior” of the self. Our culture, lacking a vision of a multidimensional model of consciousness, simply oscillates back and forth between an excessively reified materialism and a compensatorily hysterical nihilism. This Nietzschean nihilism, in all its deconstructionist variants, has pretty much taken over the way literature is studied in the universities, and it also rules the cognitive science of Marvin Minsky, Dan Dennett, and Patricia and Paul Churchland, in which the self is looked upon as a superstition that arose from a naive folk psychology that existed before the age of enlightenment brought about by computers and artificial intelligence. This materialist/nihilist mind-set controls the universities.
To give you one idea of some of the dumb things that have diverted the field, you need only consider situated action theory. This is an incredibly stupid idea that somehow swept the world of AI. The basic notion in situated action theory is that you don't use internal representations of knowledge. (But I feel they're crucial for any intelligent system.) Each moment the system—typically a robot—acts as if it had just been born and bases its decisions almost solely on information taken from the world—not on structure or knowledge inside. For instance, if you ask a simple robot to find a soda can, it gets all its information from the world via optical scanners and moves around. But if things go wrong, it can get stuck cycling around the can. It is true that 95 percent or even 98 percent of the time it can make progress, but the rest of the time you're stuck. Situated action theory never gives the system any clues such as how to switch to a new representation.
When the first such systems failed, the AI community missed the important implication: that situated action theory was doomed to fail. My colleagues and I realized this in 1970, but most of the rest of the community was too preoccupied with building real robots to realize this point.
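A toy illustration of the failure mode Minsky describes, under assumptions of my own (the grid, walls, and greedy policy are all invented): a purely reactive robot chooses each move from its current percept alone and, with an unlucky wall, oscillates forever; both noticing the loop and planning around the wall require a little internal structure.

```python
# A purely "situated" policy: each move is chosen from the current percept
# alone, with no memory and no map. With this wall layout the greedy robot
# oscillates between two cells forever; escaping requires exactly the thing
# the theory forbids, an internal representation.
from collections import deque

GRID = 5                                # a 5x5 world, coordinates 0..4
WALLS = {(2, 1), (2, 2), (2, 3)}        # a barrier between robot and can
CAN = (0, 2)

def neighbors(pos):
    x, y = pos
    for n in ((x, y - 1), (x, y + 1), (x - 1, y), (x + 1, y)):
        if 0 <= n[0] < GRID and 0 <= n[1] < GRID and n not in WALLS:
            yield n

def dist(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def reactive_step(pos):
    # Look at the world right now; step toward whatever cell looks closest.
    return min(neighbors(pos), key=lambda n: dist(n, CAN))

def plan_with_map(pos):
    # The alternative: breadth-first search over an internal map of the world.
    frontier, parent = deque([pos]), {pos: None}
    while frontier:
        cur = frontier.popleft()
        if cur == CAN:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for n in neighbors(cur):
            if n not in parent:
                parent[n] = cur
                frontier.append(n)

pos, seen = (4, 2), set()               # even noticing a loop needs memory,
for step in range(20):                  # i.e. a little internal state
    if pos in seen:
        print(f"step {step}: revisiting {pos}; reactive control is stuck")
        print("replanned path:", plan_with_map(pos))
        break
    seen.add(pos)
    pos = reactive_step(pos)
```

Note that even the minimal escape, checking whether a position has been seen before, already smuggles in state beyond the current percept.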
It's hard to know what to do with the word "exist".
You can't ever know that you exist; you might be a simulation.
It's the [computational] process itself that is the real thing; it doesn't have to exist in any ordinary sense.
I wouldn't use the word real at all; I think it's obsolete and unnecessary... to say that this button is real in this universe makes sense, to ask if the universe itself is real makes no sense at all.
I also think I am a bit of an anti-realist – not that I don't believe in reality, but I think that the concept of "the Real" is useless, especially if anything you can think of is real. If that's the case, the word draws no useful distinctions and should be avoided. Unless you split it up, maybe: my desk, Beauty, and my imaginary dragon friend are all real, but they are not real in the same ways, and we need to understand how those different modes work, and how they relate to each other. Perhaps they are all aspects of the one real Real, which we cannot grasp directly.
What is understanding? It seems to me that the problem that has happened in psychology on the whole and even in artificial intelligence, is that they’ve missed the lesson that Freud taught us, and I’m not saying that I’m a great admirer of the next 40 years of his research on psychosexual theories and so forth. I read a lot of that once, but it seems to me that by 1905 Freud had produced a whole bunch of AI-like ideas that should have been followed up, but people got more interested in his theories of emotions and neuroses.
What is understanding? What I claim is that there isn’t any such thing. There is no such thing as consciousness... So what is there, if there is no such thing as understanding? Here is a little paradox; it’s the reason that everybody in the philosophy business has gotten trapped into hopeless nonsense.
The same thing with Penrose who says, “I don’t see any method by which a computer could be conscious, and so it must be...” – it’s like the real reason why people think that Edward de Vere wrote Shakespeare is that they look at these plays and they say that nobody could write well enough to write these plays, so it must have been someone else!
Computer science was the first way of talking that let you think about how complicated processes could go on. So, that’s what’s missing in these traditional philosophers like Penrose and Searle, they’re still using the ordinary, naive ideas like understanding and consciousness.
I am inclined to doubt that anything very resembling formal logic could be a good model for human reasoning... In particular, I doubt that any logic that prohibits self-reference can be adequate for psychology: no mind can have enough power without the power to think about Thinking itself. Without Self-Reference it would seem immeasurably harder to achieve Self-Consciousness, which, so far as I can see, requires at least some capacity to reflect on what it does.
Since we have no systematic way to avoid all the inconsistencies of commonsense logic, each person must find his own way by building a private collection of "cognitive censors" to suppress the kinds of mistakes he has discovered in the past.
It's no secret that Marvin was a strong atheist, and irrespective of one's beliefs, one can appreciate — dare I say "enjoy" — his bluntness. "If you want real meaning, and you can't find one, it's all very well to make one up," he told me. "But I don't see how that [God] solves any problems. Unless you say how God works, saying that God exists doesn't explain anything."
People ask if machines can have souls – And I ask back whether souls can learn. It does not seem a fair exchange — if souls can live for endless time and yet not use that time to learn — to trade all change for changelessness. And that's exactly what we get with inborn souls that cannot grow: a destiny the same as death, an ending in a permanence incapable of any change and hence, devoid of intellect.
What are those old and fierce beliefs in spirits, souls, and essences? They're all insinuations that we're helpless to improve ourselves.
What are our slowest-changing agencies of all? Later we'll see that these must include the silent, hidden agencies that shape what we call character. These are the systems that are concerned not merely with the things we want, but with what we want ourselves to be — that is, the ideals we set for ourselves.
When victims of these incidents [mystical experiences] become compelled to recapture them, their lives and personalities are sometimes permanently changed [but I thought change was good]. The others seeing the radiance in their eyes...are drawn to follow them. But to offer hospitality to paradox is like leaning towards a precipice. You can find out what it is like by falling in, but you may not be able to fall out again. Once contradiction finds a home, few minds can spurn the sense-destroying force of slogans such as "all is one".
Artificial realms like mathematics and theology are built from the start to be devoid of interesting inconsistency. But we must be careful not to mistake our own inventions for natural phenomena we have discovered.
On the contrary, people often insist that determinism would indeed make choice futile even in such clear-cut situations. Accordingly, many reject determinism and invent an incoherent “free will” to preserve a sense of efficacy of their actions. Even those who explicitly disavow free will may still need to pretend otherwise in order to salvage the feeling that choices matter (for example, Minsky advocates such a subterfuge in The Society of Mind, p. 307). When I reflect that the future and past alike sit immutably in spacetime, I do feel an uncomfortable challenge to the notion that my choices make a difference, even in the most clear-cut instances.
It would be wonderful never to make mistakes. One way would be to always have such perfect thoughts that none of them is ever wrong. But such perfection can't be reached. Instead we try, as best we can, to recognize our bad ideas before they do much harm. We can thus imagine two poles of self-improvement. On one side we try to stretch the range of the ideas we generate: this leads to more ideas, but also to more mistakes. On the other side, we try to learn not to repeat mistakes we've made before. All communities evolve some prohibitions and taboos to tell their members what they shouldn't do. That, too, must happen in our minds: we accumulate memories to tell ourselves what we shouldn't think.
Suppressor-agents wait until you get a certain "bad idea." Then they prevent you from taking the corresponding action, and make you wait until you think of some alternative. If a suppressor could speak, it would say, "Stop thinking that!"
Censor-agents need not wait until a certain bad idea occurs; instead, they intercept the states of mind that usually precede that thought. If a censor could speak, it would say, "Don't even begin to think that!"
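Read as mechanism, the two guards differ only in where they intervene along a chain from precursor states to ideas to actions. Here is a minimal sketch under that reading; the chain and all its names are invented for illustration.

```python
# A toy chain of mental states (all names invented): precursors lead to
# ideas, ideas lead to actions. Both guards target the same bad idea, but
# they intervene at different points along the chain.

LEADS_TO_IDEA = {"hunger-while-broke": "shoplift-snack",
                 "hunger-with-cash": "buy-snack"}
LEADS_TO_ACTION = {"shoplift-snack": "take-without-paying",
                   "buy-snack": "pay-at-counter"}
BAD_IDEAS = {"shoplift-snack"}

def suppressor(idea):
    # Acts late: the bad idea has already formed; only its action is blocked.
    if idea in BAD_IDEAS:
        print(f"  suppressor: 'Stop thinking that!' (blocking '{idea}')")
        return None
    return LEADS_TO_ACTION[idea]

# A censor instead learns which precursor states usually precede the bad
# idea, and intercepts those states themselves.
CENSORED = {p for p, i in LEADS_TO_IDEA.items() if i in BAD_IDEAS}

def censor(precursor):
    # Acts early: the precursor is intercepted, so the idea never forms.
    if precursor in CENSORED:
        print(f"  censor: 'Don't even begin to think that!' ('{precursor}')")
        return None
    return LEADS_TO_IDEA[precursor]

print("suppressor only:")
for state in ("hunger-while-broke", "hunger-with-cash"):
    action = suppressor(LEADS_TO_IDEA[state])
    if action:
        print(f"  {state} -> {action}")

print("censor in front:")
for state in ("hunger-while-broke", "hunger-with-cash"):
    idea = censor(state)
    if idea:
        print(f"  {state} -> {idea} -> {suppressor(idea)}")
```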
This is footage from a bus ride when Tim was visiting Japan for a conference. Zack Leary remembers watching the fall of the Soviet Union on TV during the trip, so we guess it's probably 1991. This is also the first time I met Marvin Minsky and his wife Gloria. I remember translating a "debate" between Marvin and Tim where they were arguing about whether humans had a soul. Tim said yes and Marvin said no. "The Society of Mind" had just come out in Japanese. To Marvin's dismay, it turned out that in Japan, the words for "mind" and "soul" were the same and were closer to the definition of "soul." The Japanese publishers had translated the title of his book "The Society of Mind" to "Society of the Soul", and Timothy poked Marvin with glee. Tim and Marvin had a very playful and fun relationship with clashing world views - but their interaction was always fun and enlightening to listen to.
“For maybe 30 or 40 years, support for semantics and how language actually works virtually vanished from the planet earth. Most linguistics departments turned into grammar departments, with theories of formal syntax. It is a phenomenon I hope never to see again, where an important field is replaced by a relatively unimportant one.”
In general I think if you put emphasis on believing a set of rules that comes from someone you view as an authority figure then there are terrible dangers... most of the cultures exist because they've taught their people to reject new ideas. It's not human nature, it's culture nature. I regard cultures as huge parasites.
There really isn't anything called rational; everything depends on what goals you have and how you got them...
I too am convinced that the days of programming as we know it are numbered, and that eventually we will construct large computer systems not by anything resembling today's meticulous but conceptually impoverished procedural specifications. Instead, we'll express our intentions about what should be done in terms of gestures and examples that will be better designed for expressing our wishes and convictions. Then these expressions will be submitted to immense, intelligent, intention-understanding programs that will themselves construct the actual new programs.
In order for that to happen, though, we will have to invent and learn to use new technologies for "expressing intentions". To do this, we will have to break away from our old, though still evolving, programming languages, which are useful only for describing processes. But this brings with it some serious risks!
The first risk is that it is always dangerous to try to relieve ourselves of the responsibility of understanding exactly how our wishes will be realized. Whenever we leave the choice of means to our servants, the greater the range of possible methods we leave to them, the more we expose ourselves to accidents and incidents.
The ultimate risk comes when our greedy, lazy master-minds attempt to take that final step: designing goal-achieving programs that are programmed to make themselves grow increasingly powerful, by self-evolving methods that augment and enhance their own capabilities. ...
Consider how one can scarcely see a hammer except as something to hammer with.
An icon's job is not to represent the truth about how an object (or program) works. An icon's purpose is, instead, to represent how that thing can be used! And since the idea of a use is in the user's mind, and not inside the thing itself, the form and figure of the icon must be suited to the symbols that have accumulated in the user's own development.
Now it is easy enough to say that the mind is a society, but that idea by itself is useless unless we can say more about how it is organized. If all those specialized parts were equally competitive, there would be only anarchy, and the more we learned, the less we'd be able to do. So there must be some kind of administration, perhaps organized roughly in hierarchies, like the divisions and subdivisions of an industry or of a human political society.
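As a rough sketch of that administrative picture (hypothetical names, not anything from the book): a supervising agency might collect bids from its specialized subdivisions and let only the strongest proposal act, instead of letting them all compete at once.

```python
# A sketch of the "administration" idea (hypothetical names): specialists
# don't all act at once; a supervising agency collects their bids and lets
# only the strongest proposal through.

class Agency:
    def __init__(self, name, subagencies=()):
        self.name = name
        self.subagencies = list(subagencies)

    def propose(self, goal):
        # A leaf specialist returns a (priority, plan) bid, or None to abstain.
        return None

class Grasp(Agency):
    def propose(self, goal):
        if goal == "pick-up-cup":
            return (0.9, "close fingers around the handle")

class Push(Agency):
    def propose(self, goal):
        if goal == "pick-up-cup":
            return (0.2, "slide the cup toward the edge")  # a poor fit here

class Administrator(Agency):
    def propose(self, goal):
        # Arbitrate among subdivisions instead of letting them all fire.
        bids = [b for a in self.subagencies if (b := a.propose(goal))]
        return max(bids) if bids else None

hand = Administrator("hand", [Grasp("grasp"), Push("push")])
print(hand.propose("pick-up-cup"))  # -> (0.9, 'close fingers around the handle')
```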