Monday, August 29, 2011
The venerable Yerkes-Dodson curve (sometimes referred to as “The Inverted-U”) describes performance as a function of arousal: performance degrades at either extreme of arousal, and this relationship does indeed seem to apply fairly widely.
That’s the high school textbook picture, and it’s accurate for the most part. Still, in real life, upon closer inspection, arousal is a very complex phenomenon involving a number of different neurotransmitter systems in the brain and affecting different kinds of skills in different ways. For one thing, it doesn’t seem to apply equally to different activities. For another thing, it doesn’t seem to apply equally to different people. But on average, it holds up fairly well.
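As a rough illustration, the inverted-U can be sketched as a bell-shaped curve of performance against arousal. Yerkes and Dodson gave no equation for it, so the Gaussian shape and all parameter values below are my own assumptions, chosen only to make the qualitative shape concrete. One commonly cited wrinkle, that harder tasks peak at lower arousal, is modeled by shifting the optimum:

```python
import math

def yerkes_dodson(arousal, optimum=0.5, width=0.2):
    """Toy inverted-U: performance (0 to 1) peaks at `optimum` arousal
    and falls off on either side. All constants are illustrative."""
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

# Harder tasks are often said to peak at lower arousal than simple ones;
# model that with a lower optimum and a narrower curve.
simple_task = [yerkes_dodson(a, optimum=0.6, width=0.25) for a in (0.1, 0.6, 0.9)]
hard_task   = [yerkes_dodson(a, optimum=0.4, width=0.15) for a in (0.1, 0.6, 0.9)]
```

At high arousal (0.9), the hard task's sketched performance falls well below the simple task's, matching the qualitative claim that demanding tasks suffer more from over-arousal.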
During the American Civil War, it has been estimated that only about 25% of soldiers remembered to fire their muskets in combat. Many muskets were recovered with up to five charges in the barrel, indicating that soldiers kept reloading without ever firing. Most of us don’t think clearly in a crisis; we fall back on simple, well-learned habits that may not be what the situation at hand requires. A more expert marksman who doesn’t fire their weapon isn’t performing in proportion to their skill. Clearly, extreme arousal can degrade our performance as Yerkes-Dodson predicts.
What’s harder to tell from this picture is what was different about the soldiers who did fire their muskets. Were they the more expert soldiers in some sense? Or the braver ones? Or were they different in some other way? In other words, does extreme arousal really degrade performance in general and negate differences in skill, or does it actually throw differences in skill into greater relief, while demonstrating the importance of a different kind of skill, one less vulnerable to degradation?
A dilemma arises with the Yerkes-Dodson curve if we assume that skills break down under pressure. The skills that are preserved under high arousal seem to be the ones we have overlearned through long practice. Yet these kinds of overlearned skills are also among the hallmarks of the expert. So it isn’t obvious that expert performance should degrade under high arousal, at least not more than less expert performance does. Just because experts rely on more finely honed skills doesn’t mean they should be more subject to losing those skills under stress; they may actually be less vulnerable. How do we resolve this dilemma?
Dissecting the Inverted-U: What are the Real Effects?
One possibly relevant finding is that intellectually demanding tasks seem to be more degraded by arousal and that tasks requiring persistence are less degraded. The Easterbrook Cue-Utilization theory says that this is in part because an increase in arousal leads to a decrease in number of cues that can be utilized, an effect that has been reinforced by other research and has been called perceptual narrowing.
The significance of perceptual narrowing is that it does not seem to be affected by skill level and so it may represent a way of distinguishing the more specific effect of arousal on expert performance. Experts seem to experience this kind of narrowing of the spotlight of their attention under high arousal the same as others do. The question is how it affects their performance.
The most robust effect of high arousal is that our ability to deal with surprises is significantly compromised. High arousal focuses our attention so that we are only aware of a narrow range of predictable central events, and we tend to completely ignore unlikely events that would normally get some of our attention. Think about it: this could be good or bad, depending on the role of surprises in the environment. Being unable to respond effectively to a soldier sneaking up on you would be a bad thing in combat. Failing to be distracted by things that don’t affect you would be a positive result.
If my expertise depends on being able to scan the environment widely and respond to novelty, then it seems it will probably be significantly compromised by high arousal. If my expertise depends on being able to focus on a narrow range of stimuli and execute well-learned skills in response to them for an extended period, then high arousal will probably enhance my performance.
Interestingly, the effects of low arousal seem roughly consistent with this model as well. Rather than being blind to things happening at the periphery, at low arousal we seem to be overly distracted by things happening outside the center of our attention; our attention tends to wander.
Another effect of high arousal is one seen especially when we feel we are in danger. We tend not only to narrow the spotlight of our attention, but also to rely more on immediate subjective experience and to reject other sources of information that we might ordinarily consider more objective. Under high arousal and threat we resort to our own immediate sensory experience and mistrust all other sources. Again, this seems fairly robust and happens to experts as much as to non-experts. Various military programs have discovered this effect to their dismay when highly trained personnel abandoned their elaborate electronic information systems under combat conditions to depend on their own senses.
One more robust effect of high arousal is variability in some kinds of performance, a phenomenon originally called “blocking” when it was discovered. “Blocking” refers to the appearance of occasional “blocks” where information processing for the task at hand is apparently momentarily interrupted, and decision responses are markedly slower during extended cognitive work. Since this only happens after extended work, it has been interpreted as a kind of “mental fatigue.” Some theorists have interpreted this as an indication that our attention is involuntarily shifting to sources irrelevant to the task at hand.
Beyond the Inverted-U: The Role of Interpretation
One way to make sense of varying performance under high arousal is to take our interpretation of the situation into account. Previous research supporting the Yerkes-Dodson law dealt with situations where the range of interpretations was probably relatively narrow. This leaves margin for us to hypothesize that our interpretation of the situation might play an additional role, even one that challenges the very shape of the Yerkes-Dodson curve.
Some recent theorists have indeed suggested that the Yerkes-Dodson curve only applies under certain conditions and that high arousal consistently improves our performance under other conditions, particularly those where we interpret the situation as an exciting challenge rather than a threat and where we perceive that we have the skills to thrive in it.
This potentially changes the relationship between expertise, arousal, and performance in a fundamental way.
According to these theories of positive psychology, depending on the degree of challenge we perceive and our skills for the situation, a high arousal situation can either facilitate or degrade our performance. We might experience the same situation and the same arousal level negatively as anxiety or anger on the one hand or positively as challenge and excitement on the other hand. This would determine whether the high arousal makes us perform worse or better.
For example, a more or less neutral interpretation might have an effect on performance resembling the Yerkes-Dodson law. A very negative interpretation of the situation might have a catastrophic effect on performance even worse than the Yerkes-Dodson law predicts. A very positive interpretation of the situation would have a more uniformly positive relationship of arousal and performance. In this way, the positive psychology theory of arousal and performance is thought by its proponents to explain a wide range of results.
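These three hypothesized curve shapes can be sketched numerically. To be clear, the function forms and constants below are my own assumptions chosen to match the qualitative claims above, not fitted to any data:

```python
import math

def performance(arousal, appraisal):
    """Illustrative mapping from arousal (0 to 1) to performance (0 to 1)
    under three appraisals of the situation. Shapes follow the text's
    qualitative claims only; all constants are arbitrary."""
    if appraisal == "neutral":
        # Classic Yerkes-Dodson inverted-U, peaking at moderate arousal.
        return math.exp(-((arousal - 0.5) ** 2) / (2 * 0.2 ** 2))
    if appraisal == "positive":
        # Challenge/excitement: performance keeps rising with arousal.
        return 1 / (1 + math.exp(-10 * (arousal - 0.5)))
    if appraisal == "negative":
        # Threat/anxiety: an earlier, steeper collapse than the neutral curve.
        return math.exp(-((arousal - 0.3) ** 2) / (2 * 0.12 ** 2))
    raise ValueError(f"unknown appraisal: {appraisal}")
```

At high arousal the three curves diverge sharply: the positive appraisal stays near its maximum while the negative appraisal has already collapsed, which is the pattern the positive-psychology account uses to explain results the plain inverted-U cannot.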
Conclusion: Arousal and Expertise
Is arousal a serious challenge to the power of expertise?
From research consistent with Yerkes-Dodson we know that …
Low arousal can degrade performance because our body is inadequately prepared for rigorous demands:
■ Insufficient oxygenation of working muscles,
■ Cooling is not functioning optimally,
■ Digestion and excretion are using energy,
■ Available glucose in the liver hasn’t been released,
■ Alertness and readiness to respond quickly are compromised.
High arousal can degrade performance because our body is prepared for rapid, strenuous response but not for finely controlled motor skills, reasoning, strategic planning, or flexible response to changes in the situation:
■ Excess muscle tension for fine control
■ Some fine coordination impaired
■ Spontaneous attention shifts prevented
■ Intermittent blocking of verbal behavior and decision making with extended effort due to “mental fatigue”
The data we’ve examined so far imply that arousal can very well negate the value of expertise under some conditions. If we’re doing surgery in a combat zone we might well have our skills compromised and a good corpsman with adequate basic skills might be as valuable as or more so than a master surgeon under those conditions. A weaker chess player might well consistently defeat much stronger players in high pressure speed matches if they have less of a tendency to “choke” under the pressure. Objective reasoning and strategic planning are significantly compromised by high arousal, especially if the arousal is negative. Extended performance of some kinds is hampered by “mental fatigue.” In even the best cases, high arousal reduces our ability to respond spontaneously and adaptively to surprises at the periphery of our activity.
This is far from a completely negative assessment of the effect of arousal on expert performance however. Experts can learn to interpret a wider range of situations as positive, possibly preventing the downside of the Yerkes-Dodson curve, can learn to rely on skills that do not require the kind of fine coordination that degrades with high arousal, can learn skills and habits that don’t require planning and reasoning, and can learn skills for managing their own arousal level. In short, in addition to their domain expertise, experts can learn to:
1. Make better use of high arousal
2. Rely on skills that don’t degrade with high arousal
3. Better manage their own arousal level
With this flexibility, arousal is a far less serious challenge to the power of expertise than it might seem from a simplistic application of the Yerkes-Dodson law.
 (Yerkes & Dodson, 1908)
 For example, see (Hockey, 1986) for a review of the evidence for general degradation of performance under high arousal conditions.
 (Easterbrook, 1959)
 (Broadbent, 1971), (Kahneman, 1973)
 “This is usually thought of as a reduction in the ability to deal effectively with relatively unlikely peripheral events in favor of focusing on more likely central events.” (Schmidt, 1989)
 (MacMillan, Entin, & Serfaty, 1994)
 I suppose Obi Wan Kenobi would approve, since he recommended this to Luke Skywalker when he attacked the Death Star in Star Wars. Fortunately, Luke’s narrowly defined and well-learned task was well suited to performance under high arousal. However, in a situation where it is imperative to gather and process information more widely rather than focus on a narrow target, trusting our own senses rather than an information panel could easily become a fatal mistake.
 (Bills, 1931)
 (Bertelson & Joffe, 1963)
 (Broadbent, 1958)
 For example, see: (Csikszentmihalyi, 1998)
Although it covers a very wide range of activities, the large body of expertise research takes place in areas where we can easily identify how good people are based on standards of performance within the field itself, and where, to put it bluntly, skill matters. What about performance in the real world, where things are a lot messier, and the more skillful exponent doesn’t always come out on top?
This indeed turns out to be a very real issue. While having a certain amount of skill is always valuable, it isn’t always the case that being more skillful means that we perform even better. A little skill might be good, but more skill might not be better. How can this be true?
Consider these possibilities for why being more skillful might not make us perform better:
1. Extremes of Arousal: The general state of our nervous system in response to a situation can in turn affect the performance of our trained skills, although the reason for this is surprisingly poorly understood theoretically. Picture trying to drive a challenging obstacle course while very sleepy, anxious, or terrified. Extremes of arousal may plausibly affect expertise, and perhaps even negate large differences in expertise, although the effects would probably depend on some interaction of the type of activity and whether arousal was low or high. And it turns out that the way we interpret the situation can be an important factor as well.
2. Transfer Failure: The situation at hand may resemble the situation we practiced for, but be different enough that our skills matter less. If I learn to drive a car and then manage to drive a truck, I’m transferring my skills. If I crash the truck because I can’t figure out how to operate the different controls properly or because the different response of the vehicle confuses me, then we have transfer failure. My expertise doesn’t help me if it doesn’t transfer to the situation I’m in.
3. Domain Unpredictability: Some things seem to be intrinsically difficult to predict, so no amount of experience makes us better at predicting things in those domains. I don’t necessarily get better at predicting earthquakes by living through a few earthquakes, and I don’t necessarily get better at predicting slot machine payoffs by playing more, although I might learn other valuable lessons.
Despite the power of expertise across such a wide range of activities, it’s entirely possible that our performance may depend more on something other than expertise under some conditions. I’m going to examine these challenges to the power of expertise one at a time.
Who Ya Gonna Call?
Let’s say you’re working on your computer and it starts acting strangely. You get errors you don’t understand, or it crashes for no apparent reason. If you aren’t sure what to do at first, where will you look for help? You might perform a web search for the symptoms to see if it’s a known problem that other people have solved before you. You might run a diagnostic program or an antivirus scan because those are the tools you happen to have.
If you can’t fix it easily and you aren’t confident with computers you’ll probably start looking for help from another person at some point. Who? If it were me, I probably wouldn’t head down to the local college and find the top honors student or someone in the local Mensa chapter. I probably wouldn’t look for someone with great SAT scores or someone really good at Sudoku or even a master electrician. I’d look for someone with a lot of experience with computers and a proven track record fixing them. I’d look for an expert, and an expert specifically in that area, not just a smart person or an expert in a related area.
I stacked the deck a little bit with this question, because I picked a problem that is probably going to be technical in nature. That is, it seems like it will require some specialized knowledge to solve, because it involves computers, which are complicated devices that are a little mysterious to the average person and far less so to someone who has worked extensively with them.
It turns out, though, that my guess is pretty accurate for a wide range of fields, not just highly technical ones. Knowledge about the job turns out to be a far better predictor of performance than how high our IQ is or any other general disposition, not just in certain kinds of jobs but across a wide range from complex technical work to manual labor. Just as I’d rather have a computer expert help me rather than my friend with an astronomical IQ, in most cases I’d prefer someone who has job experience rather than someone very smart but inexperienced. And I can point to research evidence that supports my preference.
Seeing Differently vs. Seeing More
Even in many areas where we would tend to expect pure reasoning ability to play a large role, it turns out that on average experience tends to win out consistently over any more general ability or measurement we have come up with.
The research that inspired the modern study of expertise began with the game of chess. Think about chess for just a moment. Chess is an activity with a small number of relatively simple rules. Yes, chess has a reputation for being a difficult game. But that’s not because chess is hard to play; nearly anyone can learn the game. It’s because we soon discover that differences in individual ability are immense.
The difference between someone who plays chess for fun who doesn’t study the game seriously, and an average tournament player, is like night and day. It doesn’t seem like much of a competition most of the time. The difference between an average tournament player and a strong one is just as large, which is why there is a rating system.
Ratings allow people of similar ability to play relatively evenly, or to estimate handicaps as they do in golf. The difference between a strong player and a master is similarly imposing as is that between the master and a grandmaster, and between the average grandmaster and a world champion.
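The rating system used in modern tournament chess (the Elo system) makes these gaps concrete: a rating difference maps to an expected score through a simple formula, under which a 400-point gap corresponds to roughly 10-to-1 odds. A minimal sketch of the standard Elo expectation:

```python
def expected_score(rating_a, rating_b):
    """Expected score for player A (win = 1, draw = 0.5) under the
    standard Elo formula: a 400-point rating gap gives 10:1 odds."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# Equal ratings: each player expects half a point per game.
even = expected_score(1500, 1500)        # 0.5

# A 400-point gap (e.g. strong club player vs. master): the stronger
# player expects about 0.91 points per game, roughly 10 wins in 11.
lopsided = expected_score(2000, 1600)
```

This is why the successive levels the text describes (club player, master, grandmaster, world champion), each a few hundred rating points apart, really do play "like night and day" against one another.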
How can a game with a handful of simple rules end up with people playing at such astronomical differences in ability? This was the question that intrigued early researchers trying to figure out how people solve problems. The obvious answer is that the stronger players must be seeing more on the board. But what are they seeing differently?
When most of us look at the chess board we see a collection of pieces in different places that are allowed to move in particular ways. We know what we have to do to win; we have to trap the king. We also know some ways to accomplish that. For example we can capture the opponent’s pieces so we have a bigger army, and we can harass the opponent’s pieces so that they are forced into a less defensible position, allowing us to attack the king. Everyone who plays the game, even for fun, knows these things. Still most of us pretty much have to guess at how to get from some arbitrary position to that result.
If I move here, I’ll attack this piece, but how do I know that my opponent doesn’t have some better move in response that is even stronger? More insidiously, is that move by my opponent actually setting up a surprise for me later? If so, what are my options? These kinds of considerations quickly lead to the very intuitive notion that being better at chess is really about calculation, about being able to imagine a lot of different moves, and what might happen if we made them, and keeping track of all that imagining. The better player must be seeing more moves on the board, figuring out what the options are more accurately, and then predicting the outcome.
This is indeed how early chess software played the game well. It looked at the possible moves, looked at the possible responses to each move, evaluated the resulting positions, and chose the move that seemed to give the best outcome based on what the opponent was able to do. The trouble was that trying to do this more than a couple of moves ahead turned out to be a very demanding calculation. More demanding than even the most powerful computers could handle. Researchers were curious as to whether seeing more moves in their mind is really what good players were doing.
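The brute-force approach those early programs used is essentially minimax search: try each move, assume the opponent replies with their best move, and pick the line with the best guaranteed outcome. A toy version, with the game tree given as nested lists and leaves as static evaluations, shows both the idea and why it explodes (the nested-list encoding is just a teaching device, not how real engines represent positions; the figure of roughly 35 legal moves per chess position is a commonly cited average):

```python
def minimax(node, maximizing=True):
    """Return the best achievable evaluation assuming both sides play
    optimally. A leaf is a number (a static evaluation of a position);
    an internal node is a list of child positions."""
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Two plies: we pick a move (max level), the opponent replies (min level).
tree = [[3, 5], [2, 9]]
best = minimax(tree)  # opponent holds branch 1 to 3 and branch 2 to 2, so best = 3

# Why depth is so costly: with ~35 legal moves per position,
# looking d plies ahead means on the order of 35**d positions.
positions_at_depth = {d: 35 ** d for d in range(1, 7)}
# At 6 plies that is already about 1.8 billion positions to evaluate.
```

The combinatorial explosion in the last two lines is the "very demanding calculation" the text refers to, and it is why researchers doubted that strong human players could be doing anything like this in their heads.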
Maybe the human brain is really that much more powerful at calculation than we thought. Or maybe the brain is doing something else entirely?
In a pioneering study of chess players in the 1940s, a Dutch psychologist found the surprising answer. I say his work was pioneering not just because it was early but because it led to entire fields of research built upon it and validating his basic findings. The most compelling and surprising findings:
...Weaker players examined the same number of moves as stronger players, and equally thoroughly (!)
...Stronger players could recognize an actual game position far better than weaker players.
...Stronger players were just as bad as weaker players at recognizing an arbitrary configuration of pieces.
This may not seem so earthshattering at first, but think about the implications. Experts at chess consistently beat weaker players, but without examining more moves and without examining the outcomes of those moves more thoroughly. They aren’t “looking ahead more” and they aren’t “reasoning better” and they aren’t even remembering more in general. They do remember more about chess in a sense but not because they have a better memory. And looking ahead is important, but not by keeping track of moves. Their ability is a result of their mind being better trained to remember chess configurations in particular and to use that knowledge quickly and efficiently to evaluate moves.
So what are chess experts seeing that the rest of us aren’t? They aren’t seeing more moves ahead, they are seeing the board in terms of chess configurations instead of seeing it in terms of individual pieces. Their mind has been trained to see meaningful configurations of pieces instead of individual moves. They are not seeing more per se, they are seeing differently. They are seeing in terms of larger and more meaningful groupings. Experts with extended experience acquire a larger number of more complex patterns and use these new patterns to store knowledge about which actions should be taken in similar situations.
The result is profound. We have a game where a few simple rules results in an incalculably large number of possible sequences of moves. But we become good at this game of many, many moves not by thinking about more moves but by thinking in terms of larger patterns: patterns of pieces rather than movements by individual pieces.
Through practice, chess masters have trained their mind to recognize the unique meaningful patterns that apply to their game. Further, the ability to learn to recognize new patterns (along with a huge capacity to remember them) seems to be something we all possess, not just chess masters. It is a fundamental principle of learning, at least learning to be a chess expert.
Even more interesting, we don’t recognize this as knowledge, in the sense of things we recognize that we know. I know that I know some things. I know that I know all sorts of facts like the capital of some of the U.S. states and the number of sides in a triangle and Newton’s formula relating force and mass and acceleration. These sorts of things are considered explicit knowledge.
Chess masters can’t write down most of the patterns they know, both because those patterns are so vast and because they use them without thinking about them. The patterns they learn become part of their chess intuition in a manner of speaking. A common technical term for this is tacit knowledge. We use tacit knowledge in our thinking without realizing that we are using it. This is why it took focused research to discover what was going on in the minds of chess masters.
Tacit knowledge becomes part of our perception. Chess masters see the board differently; for example they often immediately see positions as good or bad without having to do the kind of analysis that the rest of us would have to rely upon.
Tacit knowledge is also used automatically in our thinking. When chess masters guess at the best move in a given position, their guess is informed by their vast database of tacit knowledge, so it is very different from the guess made by a weaker player. Experts make better guesses in their area of expertise. This is what I mean by their “chess intuition” above.
Trained Intuition and Better Guesses
You might be wondering at this point why I’ve spent so much time talking about chess experts. Or you may have guessed the answer. The most interesting conclusions from the research on chess masters are by no means limited to chess masters. Very similar or consistent results have been obtained across a staggeringly wide variety of fields from physical pursuits like wrestling and ballet to intellectual subjects like calculus and philosophy to artistic activities like painting and violin playing, to a wide variety of everyday jobs, to oddball activities like picking the winners at the horse races. Even among scientists, where the role of abstract reasoning is particularly central and the subject matter particularly challenging, productivity doesn’t seem to be predicted on the whole by supposed general ability measures such as IQ.
The chess findings are a particularly useful rhetorical device here because chess seems like it should be so dependent on reasoning and analysis. It turns out that experts analyze chess positions with the help of a vast mental database of chess configurations that apply without any recognition that they know them. The resulting perception and memory of the board just seems natural to them as a result of practice. Examined closely, in spite of its natural appearance for some people, the effortlessness of deep expertise seems to be an extreme kind of skill acquisition far more than an expression of talent.
Even if you interpret all of these findings from different fields very conservatively, collectively they still tell us something of tremendous importance about how we become good at things. We modify the way we perceive the activity. In effect, we train our intuition about the activity.
In all of these activities, researchers have found that time spent in the activity lets us acquire a new way of perceiving patterns in that activity that let us transcend the limits of our working memory and sequential reasoning capacity. That’s why expertise consistently outperforms IQ or working memory capacity or other general measures as a predictor of performance in virtually every activity that has been studied so far. And expertise is not just specialized knowledge or skills; it is also more importantly an accumulation of organized tacit knowledge that lets us make better guesses.
 (Hunter, 1986)
 (de Groot, 1965)
 This has been the most common interpretation of the chess research findings amongst expertise researchers, based on the influential theory of Chase and Simon. (Chase & Simon, 1973), (Simon & Chase, 1973)
 “Explicit knowledge,” basically just means things we know that can be easily identified and written down. The descriptor declarative is sometimes used as well, meaning that we can declare it.
 In contrast to “explicit knowledge,” this is often referred to as “tacit knowledge,” meaning things we know but can’t easily express, especially things that support action. Tacit knowledge is usually assumed to be useful for doing things more than for taking part in our conscious reasoning processes. The descriptor procedural is sometimes also used for tacit knowledge because we think of it as involving procedures for doing things rather than declarations about things. For this reason, a common rule of thumb is that tacit knowledge refers to “know how” whereas explicit knowledge refers to “know that” (i.e. I know that grass is green). The casual rule of thumb is troublesome because we don’t really know how we do those things we call procedural; the usage of the word “know” in “know how” is very different from the usage of the word “know” in “know that.”
 Following the pioneering chess research, research into other areas reinforced the same finding: expert performance depends heavily on a large accumulated memory of patterns that give us a different “intuitive perceptual orientation” to tasks. “Experts can ‘see’ what challenges and opportunities a particular situation affords” (without doing any analysis) (Perkins, 1995, p. 82)
 One of the leading and best known figures in the study of expertise is K. Anders Ericsson, whose research encompasses a particularly wide range of fields. An excellent and accessible overview of work in diverse areas of expertise research is Ericsson’s edited collection: The Road to Excellence (Ericsson, 1996).
 (Taylor, 1975)
 (Proctor & Dutta, 1995), (VanLehn, 1996)
Monday, August 22, 2011
What’s the most important thing about problem solving? If you paid attention in school, you probably would respond: getting the right answer!
There’s nothing wrong with wanting to get the right answer. Or is there? I want to raise four related concerns:
- The problem structuring concern: Problems don’t always arise in a form that has an identifiable single right answer. Often there are different best answers for different sets of possible criteria, with different sets of tradeoffs.
- The motivated thinking concern: The kind of thinking we do in order to feel we are right, to be seen by others as being right, or to advocate the right answer to others can overwhelm the kind of thinking needed to solve the problem in the best way.
- The my-side bias concern: We have a natural tendency to look selectively for evidence in favor of the first good guess we make explicit, to ignore evidence for alternatives, and to think in ways that support our favored alternative.
- The belief overkill concern: The my-side bias is often reinforced in such a way that certain of our intuitions become treated as aspirations or universal facts of nature, and this extends beyond things that can be verified empirically between observers. Compelling intuitions can guide our thinking into limited preferred patterns, reinforced by selective use of evidence and also by social patterns of polarized thinking.
I argue that these concerns, along with various inferences we can reasonably make about how the mind works, necessitate a certain approach to thinking, especially about more difficult problems.
These factors mean that we have to learn to adopt and leverage different perspectives in order to harvest all of the information available and the expertise needed to solve complex problems. This is why when it comes to problem solving worthy of the name, thinking clearly is more important than thinking correctly along predetermined lines.
At this point you probably have your own concerns. You might be wondering whether I am advocating some sort of fluffy relativistic “there’s no right or wrong and all perspectives are valid” sort of approach to thinking.
That’s not the case. I use the term Clear Thinking because I truly believe there is such a thing as identifiably better and worse thinking, leading to better or worse conclusions and that it very often makes a critical difference whether we get it right.
My point is just that all of us (not just other people) assume we are getting it right much more often than we really are getting it right, and that very knowledge about our own thinking processes is a key to Clear Thinking.
This means that when we need to think clearly about complex problems, we need to use our knowledge about and skill at problem solving itself to root out our own shortcuts, make our thinking more explicit, bring alternate perspectives into play, and in general consider more alternatives than would otherwise come to mind.
Sunday, August 21, 2011
What Makes Some People Better Problem Solvers Than Others?
The Dilemma and Challenges of Exceptional Thinking Abilities
Simply getting the right answer isn’t always the best way to think about solving difficult problems. For some problems outside of the classroom and aside from questions of knowledge from within well established domains, there may not be a single right answer.
There may be additional alternatives to be considered that aren’t known yet or which don’t seem right at first but can be turned into better solutions. Needing to be right (getting the answer that others seem to think is right), or needing to think we’re right, or needing to be seen by others as being right, may mislead our thinking and blind us to better answers.
Thinking Too Much: Defying Common Sense
Wanting to be seen as clever or as an expert often restricts our thinking in a similar way. We very often settle for the first guess that seems right, or fall in line with the way other people seem to be thinking. There are sometimes good reasons to stop thinking about a problem and settle for an answer, but most of the time we take shortcuts rather than stopping because we truly have the best answer available to us.
Shortcuts are natural to us and they are an important part of what makes us good thinkers. Shortcuts in thinking are part of our common sense. It doesn’t seem right to sit and reflect on something that has an obvious answer. It can seem like a peculiarity or a symptom of subscribing to some bizarre overly complex view of reality, or maybe even a character flaw.
The trouble is that the common sense that serves us so well in so many everyday situations turns out to be poorly suited to many other kinds of complex and counter-intuitive situations. Our natural instincts for reasoning are significantly better adapted to some kinds of problems than others.
The shortcuts that serve us for biological needs like feeding ourselves and mating and getting along with other people in small groups tend to fail us when we think about things like cultures, corporations, markets, and nations or when we’re presented with a completely different kind of problem.
Importantly, the way we learn is not well suited to automatically recognizing which kinds of situations we are thinking poorly in. The shortcuts in our thinking work so well because we rely on them so naturally. We don’t necessarily get an alarm bell in our mind that we are thinking in the wrong way about a problem. We instead get responses from our natural learning systems that we experience as compelling feelings and intuitions that guide our thinking.
It is only by learning about the thinking process itself and how our own mind works that we begin to learn how to make best use of our natural learning systems to think clearly about problems that our natural abilities are not well optimized to solve, situations where our common sense and our intuitions fail us. This learning also helps us reason through situations where we have to think across different domains of expertise without a sense of how well we have captured the meaningful patterns in each of those domains.
Sure, when there’s a right answer, we want to be able to figure out what it is. More generally though we want to think clearly about the problem. This means thinking in a way that leads to, if not an ultimate perfect answer, the best solution available, even if that means bringing more expertise and different perspectives into play and challenging our own intuitions.
How do we know when our shortcuts and intuitions are failing us and that the situation requires a different kind of thinking? I think it comes down to making it a priority to learn about our own thinking while we are learning other things. This means being strategic about thinking: knowing as much as possible about our own tools and resources, both their strengths and their weaknesses.
Identifying Our Strengths and Identifying Our Weaknesses
I’ve been a professional problem solver for decades and from time to time I have worked alongside other problem solvers whose abilities truly amazed me. Some people are able to look at a situation and see opportunities and possibilities that others cannot seem to appreciate until after they become real solutions, and sometimes not even then.
Even more remarkably, some of those same problem solvers sometimes make the worst mistakes in certain situations. They apply their knowledge and skills in ways that just don’t fit the situation at hand, and sometimes the very qualities that serve them so well elsewhere make them overconfident in their answers. This book is about learning from both their triumphs and their failures, as well as our own. It’s about learning to think better.
I have devoted many years to trying to understand what is different about exceptional problem solvers, and to what degree their abilities can be duplicated, and perhaps even improved upon, while avoiding the worst mistakes that they also tend to make.
This sets out two primary challenges for me:
--> What makes some people so much better problem solvers than others, especially across different kinds of problems?
--> What causes otherwise great problem solvers to make such awful mistakes so often?
Sunday, August 14, 2011
The human nervous system is not a logic engine; it evolved to serve human biology. This has profound implications for the way we think and for what we must do to improve our thinking. Our explanations are guided by powerful intuitions that often seem to defy the theoretical ideal of rationality.
Expertise refers to the way a mind with natural learning abilities organizes its experience purposefully for action. This is where our guessing ability comes from. Expertise provides our built-in guidance for effective thinking in particular areas.
Information is the fuel for thinking, without which expertise would be an engine with a dry tank.
Tools and processes are the way we leverage our strengths and compensate for our weaknesses.
Exceptional problem solvers make better use of available resources than the rest of us and also gather more of the right resources around themselves. This isn’t magic and it isn’t something we’re born with. Nearly anyone can learn to do these things better. Nearly anyone can learn to make better guesses and also to leverage good guesses into more powerful reasoning.
This book introduces an approach I call Clear Thinking. The approach is strategic at its core: through a realistic, accurate, and ongoing understanding of the strengths and weaknesses of our own mind, we learn to make best use of our ever-changing strengths and to minimize or compensate for our ever-changing weaknesses. In this way we make increasingly better use of our resources and approach the ideal of clear thinking.
You should understand from the start that this is a lifetime learning process. You can’t learn to radically improve your thinking in a weekend seminar, a critical thinking course you can complete in a semester, or even a degree you can earn in a few years. To become smarter you have to learn the mindset, strategies, processes, skills, tactics, and habits of becoming smarter, and this learning is difficult, rewarding, and lifelong.
Clear Thinking is an approach that you incorporate into your daily decision making and problem solving by learning the associated tools and principles and by coming to embody the intellectual virtues shared by the best problem solvers.
--> Useful human knowledge, skills, and attitudes tend to break down into domains. Among other reasons, this is possibly because the human brain is organized into somewhat discrete learning systems for dealing with different kinds of biological needs.
--> Each subject we learn has its own domain with its own domain-specific rules and methods of study. Our practical abilities, too, tend to be organized into domains. The domain-specific elements of thinking are critical to Clear Thinking and also to education in general.
--> Most problem solving is relatively routine and involves dealing with particulars of a situation relevant to a specific domain of activity rather than dealing with abstract principles.
--> Since problem solving so often involves dealing with particulars, individual differences in problem solving ability are largely a result of specialized expertise in particular domains rather than a more generalized reasoning ability.
--> Specialized domain expertise is the result of systematically acquired experience in which we structure our mind in a way that lets us think efficiently about a specific kind of activity in a particular way.
--> We also have important abilities that apply to multiple domains of knowledge at once or which cross domains. These domain-general elements are the ones emphasized when we try to improve problem solving and decision making through “critical thinking” and similar approaches. I have adopted the term Clear Thinking rather than “critical thinking” only because I think the emphasis on criticism can be misleading.
--> Some people are better individual problem solvers than others because they have learned to make use of their cognitive talents, domain-specific expertise, and domain-specific knowledge, by means of domain-general problem solving knowledge, skills, strategies, and attitudes.
--> One of the most critical things we can do in order to improve our thinking is to distinguish domain-specific from domain-general elements. We acquire and apply these different kinds of elements in very different ways and they have different kinds of influence on our thinking.
--> Part of the significance of distinguishing domain-general from domain-specific elements is that more intelligent and more expert problem solvers often make even worse mistakes than less intelligent and less expert ones, due to negative artifacts of their abilities such as overconfidence, overspecialization, and the amplification of natural biases.
--> One way we can avoid the worst mistakes is by learning realistically about the strengths and weaknesses of human abilities in general. This becomes an important aspect of our domain-general problem solving knowledge, skills, strategies, and attitudes.
--> The domain-general emphasis of Clear Thinking is mostly intended to make the thinking process more explicit in order to make better use of our guesses. Making the thought process more explicit is the essence of the ideal of rationality.
--> The domain-general elements are also significant because they help us learn how to shift between different perspectives. Importantly, this is not because different perspectives are somehow all equally valid. It is because a perspective is much like a lens which makes some things easier to see than others. Useful bits of knowledge are sometimes obscured by our current perspective, and shifting perspectives can help additional alternatives become more visible.
--> Some groups are better collective problem solvers than others due to differences in the patterns of their interactions in making use of their individual expertise, knowledge, skills, strategies, and attitudes. This becomes another important domain-general element of human thinking.