Early efforts to model human-like thinking with machine rules were interesting, but they failed to capture even simple aspects of how humans actually think. Marvin Minsky, the AI pioneer at MIT, insists that if we understand those mistakes, we can begin to appreciate how the mind actually works in functional terms. Learning from our past mistakes: what a novel idea.
To put this into perspective, the question of whether a machine model can adequately describe a brain has long been considered in terms of either strong AI or weak AI. Most people find weak AI plausible: computers can solve certain kinds of problems better than humans. We mostly balk at strong AI however: machines can literally think like humans and solve the same kinds of problems just as well.
In The Emotion Machine, Marvin Minsky presents a very machine-like architecture that he claims actually represents the way real minds probably work in fundamental respects. That sounds pretty much like strong AI. So a lot of people will reject the concept of this book out of hand. I think that would be a mistake. Minsky has done a very good job identifying plausible specifics of what AI programs have failed to deliver, where they have actually managed to deliver, and how we might fill in the gaps.
No, he doesn't spend time arguing against Searle's Chinese Room or other conundrums of AI; he just presents his case and gives examples in a clear, simple, accessible way. And I am persuaded that he probably gets a lot right, probably more than he gets wrong. That is far more than many critics will give him credit for, because his project goes against both the mainstream disdain for strong AI and the mainstream love of flashy neuroscience images.
Minsky skips right on past the issue of connectionist networks vs. semantic networks and simply posits that we had to evolve semantic representations at some point. How is left as an exercise for neuroscientists. There is a lot of "details to be filled in later" sort of thinking here, so don't look to this book as a detailed physical model of the brain. This is a high level functional model of the mind and I like it.
So I claim that this is an important book, one that seems to promise a 21st-century reboot of scientific naturalism as our guiding philosophy for the future. Minsky takes on nothing less than an overall architectural model of the mind in natural terms. It is brilliant: perhaps too brilliant to be appreciated in its time, because Minsky makes complex ideas so accessible that readers may underestimate its power. It reads like a simple AI model of a mind, but it is much deeper than that, both for the depth of thought that has gone into it and for its consideration of the weaknesses as well as the strengths of previous AI programs.
We are currently in the grip of a widespread fascination with poorly understood pop neuroscience, and most readers will be deeply disappointed that this book does not attempt to wrestle with brain science at all. I think that's a strength because it means Minsky is not falling into the weird metaphysical spins that we too often see in pop neuroscience books, especially those by non-researchers and over-enthusiastic under-trained journalists.
What Minsky is doing here is simply coming up with a logical model of what a mind has to be able to do to provide the capabilities that we observe real human minds to possess. Sounds simple, right? No, not at all. The reason Minsky has accomplished something special here is that he recognizes many of the powerful fallacies we usually fall into when we introspect about thinking and rely on traditional models. We tend to think of emotions and reasoning as separate kinds of things, and then we talk about how they are both needed and how they interact. But as Minsky points out, both neuroscience and psychology seem to provide us evidence that these are points on a continuum, not different kinds of things. Minsky takes that seriously and builds on it.
The result is something amazing that looks like a simplistic mechanical model of the mind but captures some deep insights into how minds really work.
The central implication of Minsky's model is an epistemological stance: resourcefulness in human thinking is a matter of switching between different kinds of representations, each used in a different way of thinking, each of which captures something essential about specific things in our world while necessarily leaving out other details. A mind can't comprehend everything at once. Some decisions simply don't have an optimal answer because they look different from different angles.
The key concept underlying Minsky's model is that minds as we think of them had to start with simple rules for recognizing and responding to cues, had to be able to incorporate goals in some form into those rules as well, and then eventually had to be able to recognize kinds of problems and activate appropriate ways of thinking. It makes sense to think of this in terms of logical levels of recognizers and responders, and importantly, what Minsky calls "critics" and "selectors," where each new level provides some way to resolve conflicts that arise in the level below it.
So conflicts among our instincts can be resolved by learned rules, conflicts among learned rules can be resolved by deliberation strategies, and so on up through levels with different kinds of representations of the problem, and eventually of the problem solver and their own ways of thinking. Once problem solvers can represent themselves and their own thinking, they gain the power to shape that thinking in meaningful ways.
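To make the layered picture concrete, here is a toy sketch of that critic/selector hierarchy. The layer names, rules, and situations are my own illustrative inventions, not Minsky's formalism: each layer watches the one below, a "critic" fires when it recognizes a conflict, and a "selector" activates a different way of resolving it.

```python
def instinct_layer(situation):
    """Lowest level: simple cue -> response rules; may return conflicting responses."""
    responses = []
    if "threat" in situation:
        responses.append("flee")
    if "food" in situation:
        responses.append("approach")
    return responses

def learned_rule_layer(responses):
    """Critic: detect conflict among instincts. Selector: apply a learned rule."""
    if len(responses) > 1:  # critic fires: the instincts below are in conflict
        # learned rule resolves it: safety outranks appetite
        return "flee" if "flee" in responses else responses[0]
    return responses[0] if responses else "explore"

def deliberation_layer(choice, situation):
    """Higher critic: re-represent the problem and possibly override the level below."""
    if choice == "flee" and "threat-is-minor" in situation:
        return "approach"  # deliberation reconsiders with a richer representation
    return choice

def decide(situation):
    # Each level resolves conflicts that arise in the level beneath it.
    responses = instinct_layer(situation)
    choice = learned_rule_layer(responses)
    return deliberation_layer(choice, situation)

print(decide({"threat", "food"}))                     # learned rule settles the instinct conflict
print(decide({"threat", "food", "threat-is-minor"}))  # deliberation overrides the learned rule
```

The point of the sketch is only the shape: no single layer is "the" decision-maker; each level exists to resolve disagreements the level below cannot see.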
I'm really not doing justice to this book in this review, because its power is in the details of his examples and how they illustrate the architecture at work. Suffice it to say that if a functional architecture of the mind interests you, I highly recommend this book. I think it gives a much more fundamental understanding of how minds most probably work than any amount of flashy recent brain scans, and certainly more than untestable holistic and quantum-mechanical theories will ever tell us until we better understand the functional design. Neuroscience in the future will, I believe, be filling in the details of a framework very much like this one.