Marvin Minsky

03 Jan 2021 12:19 - 10 Jul 2021 10:23

    • A major influence and my advisor at MIT. I contributed an introduction to Inventive Minds, a collection of his essays on education.
    • Father of artificial intelligence, Marvin Minsky died on Sunday aged 88 - Market Business News
    • This clip really resonated with Nietzsche's Notes on Daybreak:
      • In general I think if you put emphasis on believing a set of rules that comes from someone you view as an authority figure then there are terrible dangers...most of the cultures exist because they've taught their people to reject new ideas. It's not human nature, it's culture nature. I regard cultures as huge parasites.
      • Also at 6:10, a bit more on culturally-induced cognitive blindness
      • At 7:40, in the midst of a discussion on how emotions like anger are not separate from rationality but are more like modes of thought:
      • There really isn't anything called rational, everything depends on what goals you have and how you got them...
    • This essay Minsky True Names Afterword seemed particularly rich in nuggets relevant to AMMDI; I extracted a few below.
      • On intentional programming (see the first sketch after this list)
        • I too am convinced that the days of programming as we know it are numbered, and that eventually we will construct large computer systems not by anything resembling today's meticulous but conceptually impoverished procedural specifications. Instead, we'll express our intentions about what should be done in terms of gestures and examples that will be better designed for expressing our wishes and convictions. Then these expressions will be submitted to immense, intelligent, intention-understanding programs that then will themselves construct the actual, new programs
        • In order for that to happen, though, we will have to invent and learn to use new technologies for "expressing intentions". To do this, we will have to break away from our old, though still evolving, programming languages, which are useful only for describing processes. But this brings with it some serious risks!
      • On AI Risk
        • The first risk is that it is always dangerous to try to relieve ourselves of the responsibility of understanding exactly how our wishes will be realized. Whenever we leave the choice of means to any servants we may choose then the greater the range of possible methods we leave to those servants, the more we expose ourselves to accidents and incidents.
        • The ultimate risk comes when our greedy, lazy, master-minds attempt to take that final step––of designing goal-achieving programs that are programmed to make themselves grow increasingly powerful, by self-evolving methods that augment and enhance their own capabilities. ...
      • Marvin goes Heideggerian
        • Consider how one can scarcely but see a hammer except as something to hammer with
      • On functional representation
        • An icon's job is not to represent the truth about how an object (or program) works. An icon's purpose is, instead, to represent how that thing can be used! And since the idea of a use is in the user's mind––and not inside the thing itself––the form and figure of the icon must be suited to the symbols that have accumulated in the user’s own development
      • The government of the Society of Mind (see the second sketch after this list)
        • Now it is easy enough to say that the mind is a society, but that idea by itself is useless unless we can say more about how it is organized. If all those specialized parts were equally competitive, there would be only anarchy, and the more we learned, the less we'd be able to do. So there must be some kind of administration, perhaps organized roughly in hierarchies, like the divisions and subdivisions of an industry or of a human political society.
    • "I didn't hire people to do jobs; I hired people who had goals"

Incoming links
from Introduction to Inventive Minds
from Inventive Minds
from Minsky vs Chomsky
  • Marvin Minsky had a strong disdain for Noam Chomsky's linguistics. I don't know if there were personal or political elements involved; I understood it as a very deep intellectual disagreement, one that involved a radical difference in the goals and methods appropriate to understanding the mind.
from nihilism
  • I come from a background of fairly radical materialism – my late advisor Marvin Minsky delighted in calling humans "meat machines". I think this was mostly to deliberately needle his humanist enemies, who were incapable of appreciating that machines can be wonderfully intricate embodiments of intelligence. He was not a nihilist, but the materialist concept of mind that he advocated could seem that way from the outside.
from Society of Mind
from Play as a Cognitive Primitive
  • Is a form of quotation a cognitive primitive? In a sense it must be, because any kind of mental representation has to recall a past state of affairs, but not reproduce it entirely. (Note: this is almost exactly Marvin Minsky's K-lines).
from Infrastructure of intention
  • How do humans and animals manage their various divergent intentions? (Freud, Tinbergen, Marvin Minsky)
from Agency at the Media Lab
  • This was all lots of fun, and the systems were successful as academic projects go. But it wasn't leading me to the Grand Insights I thought I should be having. The implicit vision behind these efforts was something that could scale up to something more like Marvin Minsky's Society of Mind, which was a mechanical model not just of animal behavior but of human thought. I don't think that ever happened, and while I might blame my own inadequacies, it might also be that Minsky's theories were not very language-like. A good language like Lisp is built around basically a single idea, or maybe two. Minsky's theory was a suite of many dozens of ideas, each of which was at least in theory mechanizable, but they didn't necessarily slot together cleanly as concepts do in a pristine language design.
from Minsky on Philosophers
from relationship between cybernetics and psychoanalysis
  • Oh my: The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future has a chapter by Marvin Minsky entitled "Why Freud Was the First Good AI Theorist"
    • Pretty typical Marvin, not much new, but some good lines
    • This is the kind of AI shit that I find embarrassing, or worse, makes me want to run and be Joe Weizenbaum.
    • 1995 is the centennial of the first good book on structural psychology, namely Freud’s Interpretation of Dreams, which came out in 1895. It’s the first major work which discusses the mind as though there weren’t a single little homunculus or mastermind in the middle of it pulling the strings.
    • If you look at the later Freud a lot of people think that Freud must have studied emotions and emotional illnesses and disorders because that was the really hard and important problem, but Freud himself said the really hard thing to understand is how people do ordinary everyday non-neurotic reasoning.
    • The reason I suggested that he was a great AI researcher is that in trying to make this theory of how people’s motivations worked in this period 1895 and a couple years thereafter, he wrote this long essay, some of which is lost, called the Project for a Scientific Psychology. The Project wasn’t published until around 1950 by Carl Pribram, and it has little diagrams which are sort of neural-network-like, but more like the hydraulic motivation theory that later Nikolaas Tinbergen and Konrad Lorenz won the Nobel Prize for, this diagram that is the foundation of modern ethological theory.
    • But the point is that Freud is an important figure because he’s the first one to consider that the mind is a big complicated kludge of different types of machinery which are specialized for different functions.
    • What is understanding? It seems to me that the problem that has happened in psychology on the whole and even in artificial intelligence, is that they’ve missed the lesson that Freud taught us, and I’m not saying that I’m a great admirer about the next 40 years of his research on psychosexual theories and so forth. I read a lot of that once but it seems to me that by 1905 Freud had produced a whole bunch of AI-like ideas that should have been followed up, but people got more interested in his theories of emotions and neuroses.
    • A lot of trashing of Penrose, Searle, Brooks.
    • What is understanding? What I claim is that there isn’t any such thing. There is no such thing as consciousness...So what is there if there is no such thing as understanding? Here is a little paradox, it’s the reason that everybody in the philosophy business has gotten trapped into hopeless nonsense
    • The same thing with Penrose who says, “I don’t see any method by which a computer could be conscious, and so it must be...” – it’s like the real reason why people think that Edward de Vere wrote Shakespeare is that they look at these plays and they say that nobody could write well enough to write these plays, so it must have been someone else!
      • – Good one!
    • Computer science was the first way of talking that let you think about how complicated processes could go on. So, that’s what’s missing in these traditional philosophers like Penrose and Searle, they’re still using the ordinary, naive ideas like understanding and consciousness.
from Lisp
from dumbbell theory