About
    • AMMDI is an open-notebook hypertext writing experiment, authored by Mike Travers aka @mtraven. It's a work in progress and some parts are more polished than others. Comments welcome! More.
Incoming links
from Minsky on Philosophers
from Marvin Minsky/Society of Mind
from On Purpose
  • Marvin Minsky believed something like this, and would say so when prodded, e.g. "they misunderstand, and should be ignored." (Crevier)
from Marvin Minsky/Inventive Minds
    • I didn't come up with the title but I recognized it as the right one as soon as it was proposed. It perfectly captures an important point that is sometimes lost in talk of constructionism – that any kind of learning is a profoundly creative process and should be recognized as such. You don't learn by passively storing up information in your head; you learn by constructing a personal mental model of a domain, and given the uniqueness of your own situation, that is a necessarily creative and inventive act.
    • Marvin himself was astonishingly inventive; from the confocal microscope to his improvised fugues. In these essays he examines the roots of his own inventiveness and speculates on how education could be redesigned to encourage it in others.
    • Inventive Minds
from William Irwin Thompson
  • If one has an inappropriate vision in the imagination, one generates an inappropriate “phase-portrait for the geometry of behavior” of the self. Our culture, lacking a vision of a multidimensional model of consciousness, simply oscillates back and forth between an excessively reified materialism and a compensatorily hysterical nihilism. This Nietzschean nihilism, in all its deconstructionist variants, has pretty much taken over the way literature is studied in the universities, and it also rules the cognitive science of Marvin Minsky, Dan Dennett, and Patricia and Paul Churchland, in which the self is looked upon as a superstition that arose from a naive folk psychology that existed before the age of enlightenment brought about by computers and artificial intelligence. This materialist/nihilist mind-set controls the universities.
    • Well that's a pretty standard take, can't say that I'm interested. More interesting is that he talks about The Embodied Mind.
from nihilism
  • I come from a background of fairly radical materialism – my late advisor Marvin Minsky delighted in calling humans "meat machines". I think this was mostly to deliberately needle his humanist enemies, who were incapable of appreciating that machines can be wonderfully intricate embodiments of intelligence. He was not a nihilist, but the materialist concept of mind that he advocated could seem that way from the outside.
from 2001: A Space Odyssey
  • I learned from reading Michael Benson's book (https://amzn.to/3ogBXzx) that Marvin Minsky, who served as a consultant on AI and other matters, was almost beaned by a falling wrench when Kubrick gave him a demo of the huge rotating set for the Discovery centrifuge.
from Minsky vs Chomsky
  • Marvin Minsky had a strong disdain for Noam Chomsky's linguistics. I don't know if there were personal or political elements involved; I understood it as a very deep intellectual disagreement, one that involved a radical difference in the goals and methods appropriate to understanding the mind.
from Introduction to Inventive Minds
from Lisp
from goddinpotty/devlog
  • Created some new hierarchies, for LWMap and Marvin Minsky. Seems like an awkward process and the resulting links don't flow. Oh well. Still a useful kind of structure to have.
from Marvin Minsky/Society of Mind
    • The original Society of Mind formulation was very agentic, to the point where parts of the mind were conceived of as sub-selves engaging in conversation with each other:
    • The mind is a community of “agents”. Each has limited powers and can communicate only with certain others. The powers of mind emerge from their interactions for none of the agents, by itself, has significant intelligence. [...] Everyone knows what it feels like to be engaged in a conversation with oneself. In this book, we will develop the idea that these discussions really happen, and that the participants really “exist”. In our picture of the mind we will imagine many “sub-persons”, or “internal agents”, interacting with one another. Solving the simplest problem – seeing a picture – or remembering the experience of seeing it – might involve a dozen or more – perhaps very many more – of these agents playing different roles. Some of them bear useful knowledge, some of them bear strategies for dealing with other agents, some of them carry warnings or encouragements about how the work of others is proceeding. And some of them are concerned with discipline, prohibiting or “censoring” others from thinking forbidden thoughts.
    • In later days, presumably in attempts to make it seem more like an implementable theory, the agentic language was toned down, and in Marvin Minsky/The Emotion Machine, Minsky's follow-up to The Society of Mind, he explicitly disavows such talk:
    • Note: This book uses the term "resource" where my earlier book, The Society of Mind, used "agent." I made this change because too many readers assumed that an "agent" is a personlike thing (like a travel agent) that could operate independently, or cooperate with others in much the same ways that people do.
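To make the agent/resource picture concrete, here is a toy sketch of my own (all names hypothetical, not Minsky's design): each agent has one narrow competence and can consult only the few agents it is linked to, so any overall ability emerges from the interactions rather than from any single agent.

```python
# Toy illustration of the Society of Mind idea: each "agent" has a narrow
# competence and may communicate only with certain designated others.
# This is a conceptual sketch, not an implementation of Minsky's theory.

class Agent:
    def __init__(self, name, competence):
        self.name = name
        self.competence = competence   # tiny function: message -> result or None
        self.links = []                # the only agents this one may consult

    def handle(self, message):
        # Try this agent's own limited competence first, then ask linked agents.
        result = self.competence(message)
        if result is not None:
            return result
        for other in self.links:
            result = other.handle(message)
            if result is not None:
                return result
        return None

# Two specialists and a "manager" with no competence of its own:
adder = Agent("adder", lambda m: m[1] + m[2] if m[0] == "add" else None)
doubler = Agent("doubler", lambda m: 2 * m[1] if m[0] == "double" else None)
manager = Agent("manager", lambda m: None)
manager.links = [adder, doubler]

print(manager.handle(("add", 2, 3)))    # 5 -- solved via the adder agent
print(manager.handle(("double", 4)))    # 8 -- solved via the doubler agent
```

The "manager" has no significant intelligence of its own, which is the point of the quoted passage: competence lives in the society's wiring, not in any central homunculus.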
from Play as a Cognitive Primitive
  • Is a form of quotation a cognitive primitive? In a sense it must be, because any kind of mental representation has to re-evoke a past state of affairs, but only partially. (Note: this is almost exactly Marvin Minsky's K-lines).
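A rough sketch of the K-line idea (my own toy code, with hypothetical names): a K-line records which mental units were active during an experience, and reactivating it later reinstates that state only partially.

```python
# Toy K-line sketch (hypothetical names, not Minsky's formulation): a K-line
# remembers which mental units were active during an experience, and can
# later re-activate a partial copy of that state -- recall, but not entirely.

def make_kline(active_units):
    """Record a snapshot of the currently active units."""
    return frozenset(active_units)

def reactivate(kline, suppressed):
    """Re-activate the remembered units, minus whatever the current
    context suppresses -- the recall is deliberately partial."""
    return set(kline) - set(suppressed)

seeing_a_dog = make_kline({"fur", "barking", "wagging-tail", "fear"})
# Later, remembering the dog while calm: the "fear" unit stays off.
memory = reactivate(seeing_a_dog, {"fear"})
print(sorted(memory))   # ['barking', 'fur', 'wagging-tail']
```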
from Marvin Minsky/The Emotion Machine
from Meditations on the Tarot/1 The Magician
  • I recall how Marvin Minsky would regularly cite "all as one" as a prime example of a brain-damaging idea. But OK, he's getting at something specific, and I have to agree with the intent: there is only one unified reality, and if we perceive it as split into separate realms (e.g. human, divine) it's due to flaws in our perception.
from logseq/issues
  • Minor interaction bug. Type ([[Marvin Minsky]]) (or whatever) – after typing the closing ]], you wind up with the cursor beyond the ). Annoying and weird.
from relationship between cybernetics and psychoanalysis
  • Oh my: The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future has a chapter by Marvin Minsky entitled "Why Freud Was the First Good AI Theorist"
    • Pretty typical Marvin, not much new, but some good lines
    • This is the kind of AI shit that I find embarrassing, or worse, makes me want to run and be Joe Weizenbaum.
    • 1995 is the centennial of the first good book on structural psychology, namely Freud’s Interpretation of Dreams, which came out in 1895. It’s the first major work which discusses the mind as though there weren’t a single little homunculus or mastermind in the middle of it pulling the strings.
    • If you look at the later Freud a lot of people think that Freud must have studied emotions and emotional illnesses and disorders because that was the really hard and important problem, but Freud himself said the really hard thing to understand is how people do ordinary everyday non-neurotic reasoning.
    • The reason I suggested that he was a great AI researcher is that in trying to make this theory of how people’s motivations worked in this period 1895 and a couple years thereafter, he wrote this long essay, some of which is lost, for the Project for Scientific Psychology. The project wasn’t published until around 1950 by Carl Pribram, and it has little diagrams which are sort of neural network like, but more like the hydraulic motivation theory that later Nikolaas Tinbergen and Konrad Lorenz won the Nobel Prize for, this diagram that is the foundation of modern ecological theory.
    • But the point is that Freud is an important figure because he’s the first one to consider that the mind is a big complicated kludge of different types of machinery which are specialized for different functions.
    • What is understanding? It seems to me that the problem that has happened in psychology on the whole and even in artificial intelligence, is that they’ve missed the lesson that Freud taught us, and I’m not saying that I’m a great admirer about the next 40 years of his research on psychosexual theories and so forth. I read a lot of that once but it seems to me that by 1905 Freud had produced a whole bunch of AI-like ideas that should have been followed up, but people got more interested in his theories of emotions and neuroses.
    • A lot of trashing of Penrose, Searle, Brooks.
    • What is understanding? What I claim is that there isn’t any such thing. There is no such thing as consciousness...So what is there if there is no such thing as understanding? Here is a little paradox, it’s the reason that everybody in the philosophy business has gotten trapped into hopeless nonsense
    • The same thing with Penrose who says, “I don’t see any method by which a computer could be conscious, and so it must be...” – it’s like the real reason why people think that Edward de Vere wrote Shakespeare is that they look at these plays and they say that nobody could write well enough to write these plays, so it must have been someone else!
      • – Good one!
    • Computer science was the first way of talking that let you think about how complicated processes could go on. So, that’s what’s missing in these traditional philosophers like Penrose and Searle, they’re still using the ordinary, naive ideas like understanding and consciousness.
from dumbbell theory
from Marvin Minsky/Inventive Minds
from Infrastructure of intention
  • How do humans and animals manage their various divergent intentions? (Freud, Tinbergen, Marvin Minsky)
from imprimers
  • Marvin Minsky's term for something roughly like Freud's ego-ideal: someone from whom you learn not just facts or skills, but goals and values. People in this role are also those we feel emotional attachment to.
from Agency at the Media Lab
  • This was all lots of fun, and the systems were successful as academic projects go. But it wasn't leading me to the Grand Insights I thought I should be having. The implicit vision behind these efforts was something that could scale up to something more like Marvin Minsky's Society of Mind, which was a mechanical model not just of animal behavior but of human thought. I don't think that ever happened, and while I might blame my own inadequacies, it might also be that Minsky's theories were not very language-like. A good language like Lisp is built around basically a single idea, or maybe two. Minsky's theory was a suite of many dozens of ideas, each of which was at least in theory mechanizable, but they didn't necessarily slot together cleanly as concepts do in a pristine language design.
from Firing up the Emotion Machine

Marvin Minsky

23 Jan 2021 08:52 - 06 Jan 2022 08:17

    • Father of artificial intelligence, Marvin Minsky died on Sunday aged 88 - Market Business News
    • This clip really echoed with Nietzsche's Notes on Daybreak:
      • In general I think if you put emphasis on believing a set of rules that comes from someone you view as an authority figure then there are terrible dangers...most of the cultures exist because they've taught their people to reject new ideas. It's not human nature, it's culture nature. I regard cultures as huge parasites.
        • Gotta say that while I admire the wit of this, I disagree...it's got this underlying individual-vs-culture stance which is kind of adolescent and philistine (and he probably doesn't really believe it; it's probably just random sniping in the ongoing low-level conflict between science and the academic humanities that Marvin was always willing to stoke.)
      • Also at 6:10, a bit more on culturally-induced cognitive blindness
      • At 7:40, in the midst of a discussion on how emotions like anger are not separate from rationality but are more like modes of thought:
      • There really isn't anything called rational, everything depends on what goals you have and how you got them...
    • This essay Minsky True Names Afterword seemed particularly rich in nuggets relevant to AMMDI, I extracted a few below.
      • On intentional programming.
        • I too am convinced that the days of programming as we know it are numbered, and that eventually we will construct large computer systems not by anything resembling today's meticulous but conceptually impoverished procedural specifications. Instead, we'll express our intentions about what should be done in terms of gestures and examples that will be better designed for expressing our wishes and convictions. Then these expressions will be submitted to immense, intelligent, intention-understanding programs that then will themselves construct the actual, new programs
        • In order for that to happen, though, we will have to invent and learn to use new technologies for "expressing intentions". To do this, we will have to break away from our old, though still evolving, programming languages, which are useful only for describing processes. But this brings with it some serious risks!
      • On AI Risk
        • The first risk is that it is always dangerous to try to relieve ourselves of the responsibility of understanding exactly how our wishes will be realized. Whenever we leave the choice of means to any servants we may choose then the greater the range of possible methods we leave to those servants, the more we expose ourselves to accidents and incidents.
        • The ultimate risk comes when our greedy, lazy, master-minds attempt to take that final step––of designing goal-achieving programs that are programmed to make themselves grow increasingly powerful, by self-evolving methods that augment and enhance their own capabilities. ...
      • Marvin goes Heideggerian
        • Consider how one can scarcely see a hammer except as something to hammer with
      • On functional representation
        • An icon's job is not to represent the truth about how an object (or program) works. An icon's purpose is, instead, to represent how that thing can be used! And since the idea of a use is in the user's mind––and not inside the thing itself––the form and figure of the icon must be suited to the symbols that have accumulated in the user’s own development
      • The government of the Society of Mind
        • Now it is easy enough to say that the mind is a society, but that idea by itself is useless unless we can say more about how it is organized. If all those specialized parts were equally competitive, there would be only anarchy, and the more we learned, the less we'd be able to do. So there must be some kind of administration, perhaps organized roughly in hierarchies, like the divisions and subdivisions of an industry or of a human political society.
    • "I didn't hire people to do jobs; I hired people who had goals"