Tuesday, December 06, 2011
Edited by David A. Oakley , Michael Heap , and Richard J. Brown
This is the most authoritative and comprehensive summary of hypnosis research compiled in over a decade, so anyone interested in the subject should take note. Noteworthy contributions include Helen Crawford on genetics and neuropsychology, Steve Lynn on clinical correlates, Judith Rhue on the development of the underlying abilities, Donald Gorassini on enhancing hypnotizability, Graham Wagstaff on the sociocognitive view of high hypnotizability, and two superb chapters by Amanda Barnier and Kevin McConkey exploring what we now know about what it means when we say someone is highly hypnotizable. The book finishes with a unique sort of perspective chapter: an invited independent viewpoint on the field from a psychologist previously unfamiliar with hypnosis research.
This information is too important and too interesting to leave buried in the hypnosis research specialty journals.
My review on Amazon is here.
Sunday, December 04, 2011
By Catherine Whitney (Author) and Dr. Peter J. D'Adamo (Author)
[Review based on Kindle version]
From my Amazon review:
"... is it plausible in general that metabolic traits cluster along with personality traits, and that these clusters imply different health regimens? Of course it's possible. I'm strongly disposed from my own experience to think there are personality traits that lead us to construct particular rough kinds of niches, and it makes sense that these might have distinct metabolic patterns. But does the author get it right that eating turkey vs. chicken or beef vs. pork for a particular type really is a difference that makes a difference? That part needs more research."
The question this book raises in my mind is whether the author's GeneTypes significantly improve the guesses we make about the best nutrition for us as individuals, compared to just knowing our medical risk factors and knowing our rough somatotype.
This book takes off from the author's previous writings about how blood type can predict a cluster of traits that includes individual response to nutrients, and modifies it into a six-part theory of types. The six-part model is given an evolutionary rationale, and the author links metabolic differences, personality traits, and speculation about ecological niches where those traits and metabolic differences might be especially adaptive. The resulting categories are so clean and tell such a compelling sort of story that they seem to invite accusations of similarity to fortune telling. But if they're right, then this would be a pretty cool system, wouldn't it? At least in theory.
- The Hunter: Tall, thin, and intense, with an overabundance of adrenaline and a fierce, nervous energy that winds down with age, the Hunter was originally the success story of the human species. Vulnerable to systemic burnout when overstressed, the Hunter’s modern challenge is to conserve energy for the long haul.
- The Gatherer: Full-figured, even when not overweight, the Gatherer struggles with body image in a culture where thin is “in.” An unsuccessful crash dieter with a host of metabolic challenges, the Gatherer becomes a glowing example of health when properly nourished.
- The Teacher: Strong, sinewy, and stable, with great chemical synchronicity and stamina, the Teacher is built for longevity—given the right diet and lifestyle. This is the genotype of balance, blessed with a tremendous capacity for growth and fulfillment.
- The Explorer: Muscular and adventurous, the Explorer is a biological problem solver, with an impressive ability to adapt to environmental changes, and a better than average capacity for gene repair. The Explorer’s vulnerability to hormonal imbalances and chemical sensitivities can be overcome with a balanced diet and lifestyle.
- The Warrior: Long, lean, and healthy in youth, the Warrior is subject to a bodily rebellion in midlife. With the optimal diet and lifestyle, the Warrior can overcome the quick-aging metabolic genes and experience a second, “silver,” age of health.
- The Nomad: A GenoType of extremes, with a great sensitivity to environmental conditions—especially changes in altitude and barometric pressure, the Nomad is vulnerable to neuromuscular and immune problems. Yet a well-conditioned Nomad has the enviable gift of controlling caloric intake and aging gracefully.
If the model is right, it is a pretty remarkable and perhaps revolutionary approach to making better guesses at how to improve our own nutrition based on our biochemical individuality. The evidence for its plausibility seems reasonable, but there is still little evidence validating the particular categories here.
Full review on Amazon here.
Sunday, November 20, 2011
This one presents my own spin on Jonathan Bailor's argument for a SANE diet as an effective long term strategy for counteracting the trend toward obesity and obesity-related chronic illness.
In addition to recommendations for brief, infrequent, high intensity exercise to counter insulin resistance and reset metabolism, Bailor promotes what he calls a SANE diet. This is an acronym for satiety, "aggression", nutrition, and efficiency. Rather than duplicate his explanation, I'll just summarize by saying this refers to the quality of food rather than the quantity, and takes a number of different quality factors into consideration. This note is intended to present my version of the argument for a mixed collection of quality factors in an anti-obesity strategy.
1. Human biology tends to regulate its body weight in a very narrow range for long periods under a wide range of natural and ancestral environments.
I came to understand this well over a decade ago and in my opinion the evidence base for it has grown rather than being seriously questioned in any significant way. This model does have a potentially misleading aspect to it in practice. Many people seem to assume that a body weight set point means different people are predisposed to different body weights. That is only partly true. The set point model does not assume a fixed value, it assumes that we tend to regulate our weight within a narrow range for long periods of time. It is obvious that the body fat set point changes over time, otherwise creeping obesity simply wouldn't happen. The issue is not whether the set point can change, but whether (and how) we change it in the other direction. The forces on it appear to be asymmetric.
My 1998 article on body fat regulation, showing the weak understanding I had at that point of the specifics but I think accurately depicting the concept and showing its historical origins and scientific rationale.
There have also been a number of helpful popular press articles expanding on the dynamic set point concept and its relation to weight loss plateau.
Obesity expert Arya Sharma discusses the set point concept and a unique theory of how it works.
2. Increasingly throughout the contemporary world, a combination of global economic drivers and local environments has created conditions that defeat the biology of weight regulation and cause body weight to creep upwards, resulting in an "obesity epidemic."
A recent Lancet special issue on obesity gave a summary of the current perspective on factors in obesity.
3. Clinical studies suggest a role for dietary glycaemic index (GI) in bodyweight regulation and diabetes risk.
There are a number of lines of evidence leading to the conclusion that glycemic index plays a central role in weight regulation through glucose homeostasis. I singled out this article because it also describes why some of the implications of that conclusion are controversial.
4. The effect of lowering GI is a dynamic change to metabolism, not a static linear dropping of weight because the body adapts to the loss of energy in order to maintain its set point. Static models of calories in vs. calories out do not take the effects of metabolism into consideration and therefore do not accurately predict weight loss.
Even though glycemic load is an important factor in obesity, we can't just cut out starches and sugars without making any other change and expect to keep losing weight. We will tend to compensate in other ways because of the significance of metabolism in weight regulation. As a result it is now considered essential to consider metabolism in predictions of weight loss rather than just calorie balance.
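A toy simulation can make this contrast concrete. The sketch below is purely illustrative: the 7,700 kcal/kg conversion and the adaptation constant (roughly 30 kcal/day of reduced expenditure per kg already lost) are assumed round numbers, not measured physiology, but they show why a fixed daily deficit tends toward a plateau rather than linear loss.

```python
# Illustrative contrast between a static calories-in/calories-out model and a
# dynamic model in which metabolism adapts as weight is lost. All parameter
# values here are assumed round numbers for demonstration, not physiology.

KCAL_PER_KG = 7700   # rough energy content of a kg of body fat (assumption)

def static_loss(days, deficit=500):
    """Naive linear model: every day's deficit converts directly to fat lost."""
    return [d * deficit / KCAL_PER_KG for d in range(days + 1)]

def adaptive_loss(days, deficit=500, adapt=30):
    """Daily expenditure drops by ~`adapt` kcal per kg already lost, shrinking
    the effective deficit over time (a crude stand-in for set point defense)."""
    lost, history = 0.0, [0.0]
    for _ in range(days):
        effective = max(deficit - adapt * lost, 0.0)
        lost += effective / KCAL_PER_KG
        history.append(lost)
    return history

static = static_loss(365)
adaptive = adaptive_loss(365)
print(f"Static model, 1 year:   {static[-1]:.1f} kg lost")
print(f"Adaptive model, 1 year: {adaptive[-1]:.1f} kg lost (loss is slowing)")
```

In the adaptive version the daily loss shrinks toward zero as cumulative loss rises, the familiar plateau pattern, while the static version predicts indefinite linear loss from the same deficit.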
5. Solely manipulating GI in isolation has a diminishing return over time in weight loss because the reduction of calorie intake tends to slow metabolism and because of changes in potentially confounding dietary factors such as fiber content, protein content, palatability, and energy density.
This is a continuation of the previous claim, based on the evidence gathered from trying to control weight solely by reducing glycemic load.
6. CONCLUSION: An individual nutrition strategy that replaces high GI foods while taking energy density, palatability, fiber, protein, and other factors into account can more successfully use a reduction in glycemic index to control body fat by improving fat metabolism, slowing or stopping the accumulation of insulin resistance, and reversing the upward creep of the body fat set point.
None of that is particularly surprising scientifically as far as I know. The set point model of weight control has been standard in both human and animal research for a long time, it is not at all controversial among obesity and physiology researchers. You simply have to take metabolism into account when considering how the body absorbs, utilizes, and catabolizes nutrients and energy. You can't just assume that everything we take in minus some estimate of what we are "burning" is going to be an accurate model of body weight fluctuation.
The part where there is still room for argument is the specific causal sequence, and that has some additional implications. There's an old conundrum in psychology research: "do we cry because we are sad, or are we sad because we cry?" The answer seems intuitively obvious, but it turns out that both causal models are partly right because there is a feedback loop from our behavior to the way we feel.
There is a similar conundrum in obesity research. Do we get fat because we eat too much and don't move enough, or do we eat more and move less because of the effects of obesity? Again, it turns out that both capture part of what is really going on. We do gain weight when we eat more and move less, at least in the short term before our metabolism starts to regulate our weight back to normal. But the thing that causes our weight to more permanently creep upward is the changes to our metabolism over time. And those hormonal and cellular changes make us worse at burning fat and better at storing it, changing our body fat set point over time.
Todd I. Stark
This first commentary is my perspective on the history of the issues and how I perceive they have been resolved scientifically so far.
The Low Carb Revolution
Years ago, Dr. Atkins shocked popular culture and infuriated proponents of low fat diets with the claim that people could lose weight and reduce their risk of heart disease by minimizing the mainstay of the typical diet, carbohydrates, replacing them with proteins and fats.
This was a controversial claim not only because low fat diets were popular at the time but also because in emphasizing a low carbohydrate diet, Atkins was claiming that we could be healthy with minimal amounts of important dietary elements like fiber and the nutrition we get from vegetables. He was opposed by the people who promoted an ever higher proportion of vegetables in our diet and who encouraged us to minimize saturated fats, which were long widely thought to increase the risk of heart disease and obesity. At the same time, a lot of people argued that obesity can and should be sensibly addressed with the reasonable commonsense approach of eating a little less and moving a little more: get active and cut back a little on calories each day.
Two Battles Emerge
This turned into two polarized contests of opinion in popular culture.
First, there was a conflict between the folks who think quality of food is important in weight control and those who think that emphasis is misplaced, that we should just measure calories in and calories out. The implication of the food quality view: eating more of some things and less of others should affect our weight over time without making constant efforts to otherwise monitor and control our intake by counting calories. The implication of the calorie balance model: just eat a few calories less and move around a little bit more to burn a few more calories and you will lose weight consistently.
Second, among those who consider the quality of food to be important, there was the battle of the low-carb lobby vs. the vegetable lobby. Should you eliminate carbohydrates because of their effect on metabolism, or should you eliminate saturated fats because "fats make us fat" and raise blood cholesterol?
As is often the case in polarized positions it seems from our perspective today that both sides (in both contests) got some things right and both sides got some things wrong.
Calorie Balance vs. Metabolism: The Result
The standard and consensus view of obesity among researchers is that passive overeating is the best overall description of the cause. It is widely accepted, though not universally, that the amount of caloric energy we take in plays a causal role in obesity. Further the standard and consensus view is that this results from a combination of global economic factors and local environmental factors (the "obesogenic environment").
It is also a standard and consensus view among obesity researchers and physiologists that body weight is regulated by metabolic factors (via chemical messengers) that are not simply a function of the number of calories we take in and the number of calories estimated for the activity we perform.
What the calorie balance model captures: our biology doesn't violate the laws of conservation of mass and energy. Eating less and catabolizing body mass for energy does cause us to lose body mass. Eating more and demanding less energy utilization does cause us to gain body mass.
What the metabolism model captures: The calorie balance model clearly predicts that our body weight should consistently fluctuate according to the overall energy balance between what we take in and our activity. Both common experience and controlled studies demonstrate that this is not the case. The predictions of the calorie balance model apply only for a short time, and then our body changes internally to absorb food differently and to use it differently for energy. Our appetite and subjective energy levels also change. Collectively, this is expressed as the body regulating its mass in various ways around a relatively stable point.
The common experience of a "weight loss plateau" coincides with the concept of a "weight set point" around which our body maintains itself in a narrow range over time.
When we simply eat less, we lose weight for a little while; then the loss slows down, eventually stops, and tends to reverse itself into a return to baseline weight, sometimes even a gain above that original level.
Further, the significance of metabolism implies that the quantity of energy taken in and demanded is only part of the causal model. Metabolism is affected differently by the specific foods we eat over time and the specific kinds of activities we engage in, not just or even primarily by the total amount of energy involved. The metabolism model better captures the importance of food quality than does the calorie balance model.
Low Carb vs. Veggies: The Result
Given the consensus view that metabolism is an important factor in weight control and that the kinds of food we eat are an important factor in metabolism, we arrive at the second battle, over what it means to eat higher quality food more conducive to health and fighting obesity.
What the low carb lobby got right and the vegetable lobby got wrong: saturated fat isn't itself an important cause of heart disease and obesity. You can consume reasonable amounts of fat, including saturated fat from animal sources, and still have a very healthy diet.
What the low carb lobby got wrong and the vegetable lobby got right: The low carb lobby deals with the problems of insulin response in a gross and overly restrictive way, choosing proteins and fats over carbohydrates in general rather than balancing them and eliminating only high glycemic carbohydrates. This leaves fiber intake too low, eliminates important sources of nutrients, and often crowds out vegetable nutrition sources with animal ones. The result is probably healthier than the typical high glycemic diet, but not nearly optimal. Minimizing carbohydrates is a bad strategy because we get many nutrients from carbohydrates, especially vegetables.
What both the low carb lobby and the vegetable lobby got right from the start: Sugars and starches as a mainstay of our diet are killing many people. Insulin resistance is a precursor to type 2 diabetes and is associated with and exacerbated by a high glycemic diet as well as by obesity. Starches are high glycemic.
The low carb lobby recognizes the role of high glycemic foods in obesity and chronic illness, but it generalizes that role too far, to all carbohydrates, as an overly restrictive and oversimplified strategy.
The vegetable lobby is too concerned with the role of animal fat in nutrition and focuses on that rather than the more important distinction of higher glycemic vs. lower glycemic carbohydrates and a more balanced diet.
Saturated fat doesn't itself cause heart disease or obesity. Combined with a high glycemic diet, saturated fat exacerbates the problems caused by constant massive triggering of the insulin response because of a combination of factors:
(1) fats plus sugars tend to increase appetite dramatically,
(2) fats have a high energy density compared to their nutrition,
(3) saturated fats in particular have no protective effect for heart disease compared to omega-3 from polyunsaturated fats.
People tend to lower their risk of heart disease when they switch from saturated to unsaturated fats. This is one of the strongest arguments for eliminating saturated fat from our diet. However it turns out that this benefit is probably not because saturated fats cause heart disease. It is more likely because unsaturated fats have a protective effect compared to saturated fats.
Lowering glycemic load allows us to use nutrients more effectively, to metabolize all fats more effectively, and tends to reduce the risk of heart disease and obesity far more than reducing saturated fats.
So what works?
What dietary strategy best lowers our risk of chronic illnesses and obesity? Do they all fail equally? Do different people respond differently to different regimens?
It turns out that individuality is a real factor, both biochemical and psychological, and learning to make the principles work in your own life and for your own biology is critical. Still, there are also some general principles that we can rely on as a very good start.
The low carb strategy often works for as long as people can manage to maintain it, and it is healthier than the typical diet, but it is not optimal for health; most people find it hard to sustain and eventually develop health problems or give it up. It is not nutritionally balanced, and it is too severe in limiting valuable and healthy vegetable sources of nutrition.
A vegetarian strategy often works at least for a while and is also healthier than the typical diet, but it too is difficult for most people to maintain, and as a general strategy it is too severe in limiting valuable and healthy animal sources of protein, along with what for many people are satisfying sources of texture and flavor in animal fats.
Taking what they both get right and eliminating what each gets wrong, we end up with:
1. Avoid high glycemic carbohydrates as far as possible, or eat them only under very specific conditions.
As a mainstay of our diet, high glycemic carbohydrates are a primary cause of both obesity and chronic illnesses via their effect on our response to insulin and our ability to absorb and metabolize nutrients. In other words, these commonly accepted processed products of agriculture cause a "clogging" of our metabolism over time that defeats our evolved ability to regulate our own metabolism. High glycemic carbohydrates are not essential for health, and we tend to become dependent on them in various ways, so minimizing them is an important first step to better health. There are specific strategies you can use to minimize their negative effects if you want to use them for athletic reasons or because you can't bear to eliminate them from your life entirely, but you have to be wary of how often you do this and of its timing.
2. Balance the remaining nutrients between proteins, low-glycemic carbohydrates, and fats.
Compared to high glycemic foods, these categories all contain many healthy and valuable sources of nutrition.
Why a balance? In addition to just having a variety of nutrients of different kinds as well as a variety of tastes and textures to make eating more satisfying, there are also some specific nutritional reasons.
A diet of too much fat and too little protein and carbohydrate fails because fat contains relatively little nutrition compared to its energy density and tends to crowd out our intake of fiber, which is important to digestive health, as well as various nutrients. A diet of too much fat also fails because, given fat's low nutritional density, we have to eat a lot of it to feel satisfied in lieu of protein, fiber, and water.
A diet of too much protein fails because it lacks fiber and important sources of vitamins and minerals, and because it requires us to limit our protein sources very severely. It is also unhealthy to take in extreme amounts of protein. You can live on a high protein diet for a while if it isn't too high, but it isn't optimal. Temporary protein-sparing fasts are effective and popular among some athletes.
A diet of mostly low-glycemic carbohydrates (taken to an extreme, this would basically be a vegetarian diet that minimizes not only starches and sugars but also seeds, nuts, and soy) is not really something anyone seriously recommends or practices as far as I know, but if they did, it would fail because it would not have adequate protein sources to sustain health and would be relatively unsatisfying. Even strict vegans eat vegetable protein and fat sources; they just avoid animal sources. A diet with a huge amount of low-glycemic carbohydrates is perfectly healthy and sustainable so long as it also includes a reasonable amount of plant fat and protein sources.
The lesson here is just to mostly eat natural foods while avoiding high glycemic carbohydrates, rather than striving for extremes of individual components. Avoid high glycemic foods and otherwise eat to satisfy your hunger. The specifics of satisfying our hunger matter, however: balance is important in order to get a good combination of fiber, protein, water, nutrition, taste, texture, and other factors to satisfy us.
3. Take advantage of the "Paleo" principle but don't be limited entirely by it
I think the principle behind "Paleo" diets is basically correct: our metabolism seems to be tuned by evolution to process natural animal and plant sources of nutrition that we could hunt or gather, and it has not yet adapted to our dietary preponderance of processed agricultural products. This is another way of saying that high glycemic carbohydrates (the main category of highly processed and cultivated foods in our diet: the wheat, corn, potato, and rice products) are a very bad thing to emphasize all of the time.
In general the principles behind paleo eating agree with those proposed here, however they also focus on a rationale that can be misleading, which is why I consider it a fad even though it gets things mostly right. Trying to guess what our ancestors might have eaten and in what amounts is not nearly as useful a way to figure out what is healthy as just applying general principles and nutritional research results. By all means it makes sense to look to Paleo recipes and products as candidates for making it easier to maintain good nutrition. Also it is convenient and useful to imagine whether something could be hunted or gathered to know whether it is a nutritional source likely to be compatible with our metabolism.
Just don't limit yourself to the principle that we can only eat what our ancestors ate or elaborate arguments about the Pleistocene lifestyle. You have more options which you can intelligently apply to use modern foods and still be healthy.
4. Don't expect restricting calories to help you in the long run if you are eating poorly.
If you stop eating completely, you eventually die because your body eats its own tissue to try to meet its energy requirements. So when we talk about calorie restriction we aren't talking about not eating. We are talking about eating less in a given period than we normally would, or eating nothing for a specific period of time. There are many strategies for calorie restriction. Some of them have a place in athletic performance and some can also have a place in a health regimen, but we have to be careful about how we go about it and how much we rely on it.
Sometimes the most reasonable approach is not the best. Portion control and light or moderate exercise seem perfectly reasonable and have long been recommended as a treatment or prevention for obesity. But for the vast majority of people they don't work for this purpose. Our metabolism and appetite conspire to maintain a stable weight against our efforts to make small reasonable changes in how much we eat and how much we move. We have to more directly influence our appetite and metabolism to deal with obesity. Simply eating less of the same things we are eating does not necessarily help because it does not address the metabolic causes of our problems.
Fasting, as opposed to portion control, is a more potentially valuable and effective health strategy. This is because it can help you learn how to better control cravings, and can help you get your body fat lower than you could reasonably get with frequent eating and for longer periods. Some evidence suggests that fasting itself might trigger healthy changes in the body, although these are hard to distinguish from just the benefits of controlling body fat and eating more healthy overall. Fasting is much more difficult to do effectively, much less sustainable, and much less effective and healthy when combined with a high glycemic diet.
Your best bet is probably to eliminate high glycemic carbohydrates and get the hang of eating well first, then try fasting as an additional strategy if you want to, to make it even more effective and sustainable.
5. Don't just load up on "low fat" foods. But do emphasize lean protein sources over fatty ones.
By the best evidence available now, it seems that reducing fat by itself plays virtually no role in controlling obesity or chronic illness. Nutritionally, fat should be considered "neutral" or "filler" compared to good nutritional sources like non-starchy vegetables and good protein sources like seafood, lean meats, and egg whites. The problem with a "low fat" strategy is that if you go out of your way to avoid fats, you will also tend to replace the missing energy with starches and sugars.
If you could eliminate fats without replacing them with high glycemic foods, this strategy would probably work a lot better. Lean protein sources are an important part of modern nutrition, not because fat is bad, but because protein is so valuable. Fatty protein sources are really fats, not proteins, so they should be considered filler in your diet, rather than assuming they are good sources of protein. You have to eat an unreasonable amount of fatty protein sources to get enough protein to be valuable for satiety and nutrition.
6. If you find yourself eating something high-glycemic, at least mitigate its effects with fiber and protein and avoid fat under those conditions.
The effect of food on our insulin levels is the primary culprit in obesity and many chronic diseases. However, this glycemic effect is not just a result of the individual foods we eat; it is a result of their combined effect on the amount of glucose circulating in our blood. There is no way to simply counteract the glycemic effect short of avoiding high glycemic foods entirely, but you can lower the combined glycemic effect by increasing the amount of fiber and protein you eat along with high glycemic foods. If you are going to indulge in high glycemic food, you can reduce the damage by avoiding fats at the same time (which boost appetite strikingly when combined with starches and sugars) and eating lean protein and good fiber sources with the high glycemic foods instead.
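The "combined effect" idea can be sketched with the standard glycemic load formula: GL = GI × grams of available carbohydrate / 100, summed over the foods in a meal. The food names, GI values, and carbohydrate figures below are ballpark numbers chosen for illustration, not authoritative nutrition data.

```python
# Rough meal-level glycemic load calculation. GL = GI * available_carb_g / 100
# per food; a meal's load is approximately the sum over its foods.
# The GI and carbohydrate values below are illustrative assumptions.

FOODS = {
    # name: (glycemic index, grams of available carbohydrate per serving)
    "white rice, 1 cup": (73, 45),
    "lentils, 1 cup": (32, 24),
    "chicken breast": (0, 0),   # protein: no direct glycemic contribution
    "broccoli, 1 cup": (15, 4),
}

def glycemic_load(items):
    """Sum per-food glycemic loads for a list of food names."""
    return sum(FOODS[name][0] * FOODS[name][1] / 100 for name in items)

high_gi_meal = ["white rice, 1 cup", "chicken breast"]
mixed_meal = ["lentils, 1 cup", "chicken breast", "broccoli, 1 cup"]

print(f"Rice-based meal GL:   {glycemic_load(high_gi_meal):.1f}")
print(f"Lentil-based meal GL: {glycemic_load(mixed_meal):.1f}")
```

In this toy example, swapping the rice for lentils cuts the meal's glycemic load roughly fourfold, even before accounting for the additional blunting effect that fiber and protein have on the glucose response.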
Note: This article was inspired by Jonathan Bailor's highly recommended book, "The Smarter Science of Slim."
Todd I. Stark
Sunday, November 13, 2011
Highly recommended; the author does a great job integrating a lot of research faithfully into an accessible picture, and offers practical advice consistent with it.
I also mention some things I would have written a little differently, like expanding the discussion of exercise a little beyond just eccentric high resistance and popular but less well studied strategies based on nutrient timing, and perhaps incorporating some discussion of self-experimentation in that light.
My book review on Amazon.
Sunday, September 04, 2011
“We absorb information in some common generic way and then apply our individual talents to using that information to solve problems.”
As straightforward and intuitive as this description sounds, it contains a counter-productive assumption about knowledge, confusing it with information. We tend to assume that the things we experience are experienced in the same way by other people. We might all look at the same world, but we make different observations and draw different conclusions from it. The knowledge base we each build over our lifetime is not just a straightforward result of the information presented to us; it is also in part a result of how we organize that information, which can be very different from one person to another. This is the foundation of the expertise model.
It is not enough to have the right information to solve a problem; the information must also be organized in a way that lets us think about it in the right way.
Expertise is a key factor in effective problem solving because it permits us to organize information about a domain, recognize patterns in that domain, apply domain knowledge to new kinds of problems, and incorporate new information about that domain.
The expertise model gives us several key insights into the problem solving process that the commonsensical view above misses:
- It tells us that the way information is organized is critical to how that information can be used for problem solving. Many problem solving principles will deal with how information is best organized to solve problems.
- It tells us that in certain key areas, deep accurate understanding is important and not just superficial familiarity. Both intuitive decision making and more formal methods rely on deep accurate understanding acquired from experience.
- It tells us that a great deal of deliberate practice with good feedback is needed to acquire deep understanding.
- It tells us that more skillful problem solving emphasizes principles, while less skillful problem solving relies on procedures.
In spite of the tremendous power of the expertise model, it has its own limitations as well. Expertise lets us detect meaningful patterns of information in particular domains because of the way we organize our own knowledge.
The organization of domain-specific knowledge in our mind has important implications. It means that experts have differently organized knowledge for thinking in different domains. The fact that the expertise model is so thoroughly domain-specific forces us to now confront the most serious challenge of all to the expertise model: the challenge of transfer. If it takes so much deliberate practice to become good in a given domain, how does anyone manage to become good at more than a very narrow range of activities? How do our narrowly cultivated abilities support other activities? Or do we have important abilities that are not domain-specific as well? What does it take to apply our hard-won expertise to problems different than the ones we specifically practiced for?
The Failure of Mental Exercise
At one time, it was widely believed that people could develop their mind by doing mental work such as solving logic puzzles, learning mathematics, reading classics of literature, and learning to speak Latin. A long history of sometimes large-scale research on this approach to mental ability revealed that it apparently holds very little promise. Literacy in general, while valuable for its own sake, simply does not have much effect on other thinking abilities. Our ability to solve puzzles doesn’t tend to generalize very well unless the training specifically teaches the underlying patterns and provides us with a way of remembering them. We don’t automatically apply the lessons of solving one problem to solving a structurally similar but different problem. The different appearance of problems tends to throw us off. When we learn strategies for solving problems, we tend to learn them in a way that is tied to the specific kinds of problems that we used for learning them. Expertise, the research confirms, tends to be very context specific.
How Transfer Does Happen: Two Roads
The problem with this result is that while it seems consistent with the expertise model, it doesn’t quite make sense in terms of our everyday experience. All of us routinely do apply what we know to new kinds of problems. We aren’t equally incompetent in dealing with any sort of novel problem; our existing expertise clearly does sometimes give us an advantage in another domain. We also see negative influence of expertise when our existing abilities interfere with our attempts to perform in a similar domain. Expertise does seem to transfer between domains under some conditions. The question is what those conditions might be.
Research done in the late 1980’s and early 1990’s confirmed that transfer of ability between domains does occur consistently under certain conditions. One cognitive psychologist working with preschool children on simple tasks discovered that 3 and 4 year olds could use lessons learned under one set of conditions in a completely different set of conditions, particularly when they were shown how the different problems resembled each other and how the goals were similar, when they were already familiar with the problem areas, when the examples had associated rules that the children figured out for themselves, and when the learning took place in a social context that specifically encouraged them to spell out the principles, explanations, and justifications they used.
Research such as this led to a general two-pronged theory of transfer, proposed by David Perkins and Gavriel Salomon. The theory is based on the finding that transfer sometimes takes place between similar domains, and sometimes takes place between very dissimilar domains, and that these seem to happen under different conditions.
What the Perkins and Salomon theory calls “low road transfer” happens when situations appear to us to be similar according to simple perceptual cues rather than any deep structural pattern. This seems to be a matter of stimulus triggers. Specific elements in the situation help us recognize and apply skills and knowledge from our memory based on recognizing those elements from our practice. Since low road transfer is pattern-bound, it doesn’t generally lead to transfer to very different situations. Practicing under a variety of different conditions, however, can help us gradually stretch our skills from one context to similar ones and generalize them further. Low road transfer is a result of the variety of conditions under which we practice rather than of any specific cognitive skills or strategies aside from those specific to the domain. Low road transfer is a perceptual-memory phenomenon.
When a situation bears a superficial similarity to one we’ve trained for, we recognize stimulus patterns and our expertise is evoked via low road transfer. This is how many people manage to drive a truck reasonably well after having learned to drive a car for example. Even though the mechanics are very different, the steering, pedals, and so on are all familiar enough to trigger our learned skills for driving. That is, until we find ourselves in a situation where the fit isn’t so good between our skills and the ones that are needed.
What the Perkins and Salomon theory calls “high road transfer” seems to be a completely different matter. High road transfer involves the deliberate and mindful abstraction of principles during practice and using more general cognitive skills and strategies to apply them to completely different situations. In high road transfer, the learner actively seeks connections between different situations in which to apply the principles they’ve learned. High road transfer is a cognitive phenomenon.
We see that the similarity mechanism of transfer is limited. It only works for relatively similar situations and it works in a very automatic and unthinking way. To apply expertise to a very differently appearing situation with underlying structural similarity (such as we might need for more abstract problem solving) we need high road transfer and we need to use abilities we associate with conscious reflection. This allows us to transfer expertise from deliberate abstraction of principles to entirely different kinds of problems.
The lesson of the transfer challenge to expertise is that expertise does not automatically apply outside of its domain. We have to very deliberately either: (1) work on practicing in widening ranges of situations to facilitate generalization or (2) work on abstracting and applying general principles mindfully from our practice, or both.
Conclusion: Transfer and Expertise
We’ve seen that expertise is a very powerful model that explains in some detail how we organize tacit knowledge for recognizing patterns and solving problems in a particular domain. This appears to explain the lion’s share of differences in human abilities in problem solving. We’ve also seen that expertise can be acquired in such a way that it can be generalized to an increasingly wider range of conditions and in a way that makes it less likely to fail catastrophically under extreme conditions, making expertise a potentially very robust resource for problem solving.
We’ve also seen that the expertise model misses a small but critical aspect of problem solving; it does not tell us how people manage to deal with surprises or with domains that are characterized by surprises. The expertise research consistently shows strong dependence on specific contexts. We do not automatically generalize our skills or strategies to new kinds of problems just by acquiring deep expertise in a domain.
Novelty offers our most serious challenge to the power of expertise. The very concept of expertise implies domain-specificity, and domain-specificity implies that expertise is honed to deal effectively with a particular range of situations. Novelty, both within a domain and outside that domain, creates problems for the standard expertise model that need to be addressed.
Novel but superficially similar situations can be handled through expertise, but only if we specifically widen our practice to deal with a broader range of conditions.
Completely novel situations in other domains that don’t resemble the ones we practice for except in terms of their underlying deep structure can be handled through expertise as well, but only by deliberate attention to learning and applying general principles as well as acquiring domain expertise.
The Story So Far: Going Beyond Expertise
The expertise model tells us how we acquire useful patterns of tacit knowledge from experience through deliberate practice with good feedback. The expertise model explains how we deal effectively with the sorts of situations where we have accumulated extensive practice. Expertise thus acquired becomes part of our intuitive understanding of situations, enhancing, modifying, and extending our existing commonsense intuitions.
The expertise model also challenges us to explain how it is that we are able to deal with extreme yet realistic conditions and novel problems even though expertise tends to be very context-specific. Applying expertise to very different situations requires deliberate mindful work at abstracting principles and applying them through our capacity for reflective thinking. This kind of reflective thinking is not adequately captured by the expertise model. Either we need to expand the expertise model to handle the challenges we have identified, or else we need a more expansive concept to describe our abilities.
 (Thorndike & Woodworth, 1901), (Thorndike, The influence of first year Latin upon the ability to read English, 1923)
 (Scribner & Cole, 1981)
 (Simon & Hayes, Psychological differences among problem isomorphs, 1977)
 (Brown, 1989)
 (Perkins & Salomon, 1987), (Perkins & Salomon, Teaching for transfer, 1988), (Salomon & Perkins, 1989)
Saturday, September 03, 2011
We almost universally recognize the legitimacy of experts in a number of different domains. In academic fields such as mathematics, the sciences, history, and literature, some people know much more and consistently perform much better than others in tests of ability. Similarly for many professional fields and various sports and games, we recognize that there are experts who outperform the majority of us.
A key finding in modern learning and human performance research has been the discovery of how expertise is acquired. This discovery became possible with the advent of cognitive science, allowing us to model the human brain as an organized collection of information rather than just a collection of behavioral patterns. As we learned about the specific differences between novices and experts in each field, we discovered certain general principles that apply to a very wide range of different fields.
A formidable body of this type of research has overturned the intuitive view that novices and experts differ because some people are simply more naturally talented than others. Experts seem to perform so effortlessly that we tend to attribute great natural ability to them rather than a different kind and degree of experience. Contrary to this intuition, expertise via deliberate practice is our best model so far of individual differences in ability in a wide range of activities. Expertise is a result of experience, and not just any experience, but deliberate practice where we meet challenges in that domain, are immersed in purposeful practice, gain knowledge from other people who are already good at it, have good coaches, and benefit from quality feedback for our performance.
At the same time as verifying the legitimacy of expertise in many fields and establishing the central role of deliberate practice, social environment, coaching, and feedback, we have also discovered that there are some fields where our performance doesn’t benefit from these factors.
In spite of the tremendous power of the expertise model, it has its own limitations as well. Expertise lets us detect meaningful patterns of information in particular domains because of the way we organize our own knowledge. This assumes that there are meaningful patterns to detect that human beings are capable of using effectively. This is not always the case.
Where the Experts Fail
In the 1950’s, research into medical diagnosis and prognosis revealed something shocking: presumed experts didn’t seem to predict medical outcomes any better than novices. This line of research continued over time to demonstrate that prognosis and diagnosis in clinical work in medicine was often not improved by experience when experts relied upon informal gathering of data and their trained intuitions.
Professional experience and presumed expertise also seem to make no difference in predicting the outcome of psychotherapy by psychologists, who it turns out also fare no better than less trained individuals.
If there is a skill to predicting medical and psychotherapeutic outcomes in general, it doesn’t seem to be acquired from the standard professional training, or typically through experience with patients, and it isn’t obvious how else it might be acquired. There is perhaps good reason why some experienced doctors seem very reluctant to make predictions about outcomes, and maybe more of them should heed this lesson.
As a result of the difficulty of prediction in areas like this, novices using simple formal statistical methods have often outperformed the experts in tests in spite of the greater experience and training of the experts (or perhaps in some cases partly because of it).
This is not by any means to imply that statistical methods are always superior to expertise, even in a particular field where clinical experience has proven less than optimal. However, it does give us good reason to pause and reflect on the meaning of this finding for the expertise model. In some domains, the best we can do for prediction is provided by a simple statistical method; and the value of expertise in particular reaches a limit fairly early on in the training for those fields.
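The kind of simple statistical method at issue can be made concrete. Below is a minimal sketch, in the spirit of Dawes’s “improper linear models,” of an equal-weight model that ranks cases by simply summing their standardized predictors, with no expert judgment and no fitted weights. The applicants, predictors, and numbers are entirely hypothetical, chosen only to show the mechanics.

```python
# A toy equal-weight linear model of the kind Dawes (1979) showed can rival
# expert clinical judgment. All data below are hypothetical illustrations.

def zscores(values):
    """Standardize a list of numbers to mean 0 and (population) SD 1."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

def equal_weight_scores(cases):
    """Score each case by summing its standardized predictors.

    cases: list of tuples, one tuple of raw predictor values per case.
    Every predictor counts equally; no weights are estimated.
    """
    columns = list(zip(*cases))                 # one list per predictor
    standardized = [zscores(col) for col in columns]
    return [sum(col[i] for col in standardized) for i in range(len(cases))]

# Hypothetical admissions cases: (test score, undergraduate GPA)
applicants = [(700, 3.9), (520, 2.8), (640, 3.2), (580, 3.6)]
scores = equal_weight_scores(applicants)
ranking = sorted(range(len(applicants)), key=lambda i: -scores[i])
```

The point of the sketch is how little machinery is involved: standardizing puts each predictor on a common scale, and an unweighted sum is then often enough to match or beat informal expert aggregation of the same information.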
What Makes the Value of Expertise Vary So Much?
Research into the value of expertise in different domains shows it to vary with:
1. the level of inference required (moderate levels of inference are more conducive to using expertise than high levels of inference),
2. whether the experience or training available is adequate to confer expertise,
3. whether the conditions and instruments available allow for the expression of expertise.
What this tells us is that even though expertise helps us make sense of complex situations by recognizing patterns, there is also a limit to how well acquired expertise can help us make better judgments in very complex situations. The more specialized knowledge we need in a field just to understand what is going on, the more likely it is that expertise will fail us when the situation requires a great deal of challenging inference. In the most complex fields at least, it may be that intelligence can also play an important role alongside expertise.
Both intelligence and expertise play some role in every field, but each is more important to some fields than others and at different points in the development and expression of ability. The relevance of intelligence in a field seems to depend to a large degree on the role that abstract reasoning plays in success in that field. The relevance of expertise is more general. The role of expertise in a field depends on how well the situation is made comprehensible to the expert through specialized tools, the quality of their training and experience, and the kinds of conditions in which they have to perform.
Even allowing for a role for intelligence in particularly difficult technical fields requiring very high inference levels, there are fields where neither expertise nor intelligence nor any combination of the two seems to predict performance any better than simple methods.
We’ve discovered that in some areas, experts perform significantly better than non-experts and consistently outperform computer models of various kinds because of their rich background of task-relevant skills and knowledge. In other areas, simple computer models, statistical indexes, and non-experts consistently outperform experts.
Intelligence may play more of a role in ability in highly technical domains where a high level of inference is often required in addition to recognizing important patterns. Domains are apparently not all equal with regard to what it takes to be good at them.
How Surprises Can Negate Expertise
The difference has to do with the varying role of understanding the situation for solving problems in different fields, and the role that surprise plays in each field. Fields involving things that move freely and things that scale wildly rather than staying within familiar statistical regularities tend to produce surprises that can’t be managed primarily by either intelligence or expertise or both. In these areas the requisite intelligence is relatively low and the practical role of expertise is relatively marginal because reasoning doesn’t help much and it is particularly difficult to get the necessary skills even if you can identify them. So in these fields, simple statistical rules can sometimes perform as well as any expert, regardless of their IQ.
For examples of fields more or less dominated by surprises think of stockbrokers, risk management advisors, clinical psychologists, counselors, psychiatrists, admissions officers, court judges, economists, financial advisors, and intelligence analysts. Think in general of all the fields where experts fare poorly compared to non-experts, where overconfidence cancels out the benefits of expertise, or where time spent in formal practice has relatively little impact on effective outcomes.
In these fields, formal domain-specific expertise and general intelligence provide relatively little advantage in producing good outcomes compared to simple algorithms, direct local observation, direct experience, practical skills, and domain general problem solving skills. Formal expertise and intelligence in these fields especially tends to produce overconfidence more than real predictive ability. It’s not impossible that there may be some real experts in these fields who fare better than others, but they are particularly difficult to identify and train with formal methods.
Not all fields are dominated by surprises regardless of intelligence and expertise. Think of fields involving things that stay put or else move within strictly defined ranges according to physical laws or arithmetic or statistical relationships. These are much better suited to intelligence and domain-specific expertise because in those fields a better understanding of identifiable patterns and potentially complex information does tend to lead to better prediction of outcomes. Think of theoretical mathematicians and physicists, astronomers, test pilots, firefighters, livestock, grain, and soil judges, accountants, chess masters, insurance analysts (who deal with Gaussian topics like mortality), competitive athletes, and surgeons. Think in general of the many fields studied by expertise researchers where deliberate formal practice yields measurable improvements in results, and where the critical skills can be identified and trained.
We Don’t Learn Well from History
Part of the problem with expertise in fields where surprise plays an important role is that we don’t learn well from history in general. One of our consistent biases is that we systematically overweight the likelihood of events that actually happened, relative to ones that didn’t happen (but could have). This means that we have a very strong predisposition to describe events that happened as if they were fated to happen that way. This also means that we tend to think of our descriptive stories as if they were also explanations, not just descriptions. The remarkable power of stories becomes a disadvantage for explanation because the narrative content tends to replace our ability to analyze cause and effect.
We become experts by being exposed to similar conditions over and over again and learning from consistent patterns in our experience. When a domain is characterized by events that are relatively uncommon yet influential, our confidence in our ability to predict events in that domain tends to grow way out of proportion to our actual ability to predict or explain the course of events. Our hindsight bias (“I knew it all along”) often kicks in to replace our missing explanatory ability.
The human mind is particularly well suited to remembering and making sense of events after the fact by weaving facts into a plausible narrative, and particularly poorly suited to capturing actual frequencies of events in order to use that information in other judgments. Our common sense excels at generating plausible stories for what happens, our expertise then generates trained intuitions that add to our confidence in our explanations, but in some cases does not also add to our explanatory ability. Then history leaves us with only a single chain of events to explain, the one that actually happened. We infer from all of this that we are explaining why a sequence of events took place in a particular situation, whereas we have often only described the events, not explained them.
Conclusion: Surprises and Expertise
We saw in the previous section that extremes of arousal can negate some kinds of expertise, especially expertise relying on fine motor skills. We also saw that our mindset can determine whether expert performance is retained during high arousal or fails catastrophically. In addition we saw that our ability to flexibly adapt our responses to novelty in the situation is hampered by high arousal.
Now we see that novelty offers a more general and more serious kind of challenge than just our tendency to lock in to central stimuli under high arousal. When relatively uncommon events tend to be influential in a domain, the power of expertise to help us predict and explain events is severely compromised and often even negated entirely. In these cases we have a compelling natural tendency to tell plausible stories and rely on them as explanations, and additional expertise only serves to increase our overconfidence.
 A June 2008 review of major trends in expertise research: (Charness & Tuffiash, 2008)
 An expert is typically defined for research purposes as someone who consistently performs more than two standard deviations above the average performance on representative tasks for their domain, assuming that ability in the field can be represented by measurable tasks and that this ability is normally distributed (Ericsson & Charness, 1994). Experts defined in this way are roughly the top 2% of the performers in a field.
 The body of research has been variously summarized in popular books by journalists but a far better source for reviewing the evidence directly is the edited technical article collection: The Cambridge Handbook of Expertise and Expert Performance (Ericsson, Charness, Feltovich, & Hoffman, 2006)
 Classic early research showing the limits of human judgment from experience was done by Paul E. Meehl. Meehl demonstrated the limits of informal aggregation of data and prognostication by presumed experts in clinical situations such as diagnosing patients and predicting medical outcomes (Meehl, 1954). An influential review of research showing the superiority of actuarial vs. clinical judgment appeared in the journal Science in 1989: (Dawes, Faust, & Meehl, 1989)
 (Dawes, 1994)
 (Westen & Weinberger, 2005)
 The level of inference means the amount of specialized individual knowledge needed to understand what is going on. Situations with low levels of inference are understandable by most people, those with high levels of inference are only accurately understood by experts. Even experts utilize their abilities better in situations of lower levels of inference.
 There is a lot of ongoing controversy about various aspects of intelligence measurement and what it can tell us, but one of the things that most theorists agree on regarding individual differences in intelligence measurements is that they seem to correspond in some sense to our capacity to handle complexity. (Neisser, et al., 1996)
 This was one of the main points made by Nassim Nicholas Taleb in his entertainingly and ironically sharp book exhorting the importance of epistemological humility in the face of this sort of unpredictability in important domains, The Black Swan (Taleb, 2007)
 For examples in technical literature making this argument more clearly, see: (Lombrozo, 2006), and (Lombrozo, 2007). The point is made even more emphatically in (Dawes R. , 1979). Formal techniques for causal analysis take the lure of stories explicitly into account by using various methods to compensate for it and force analysts to think in causal terms rather than relying on our more natural instincts for telling stories about what happened. (Gano, 2008)
 A good technical article introducing hindsight bias and the related idea of “creeping determinism” (what happened is what was most likely to happen) is (Fischhoff, 1982)
There is a more detailed discussion of the difference between stories and explanations in the chapter History is a Fickle Teacher in (Watts, 2011, pp. 108-134)
Monday, August 29, 2011
The venerable Yerkes-Dodson curve (sometimes referred to as “The Inverted-U”) describes performance vs. arousal in a way that shows performance being degraded under extremes of arousal, and this relationship indeed does seem to apply fairly widely.
That’s the high school textbook picture, and it’s accurate for the most part. Still, in real life, upon closer inspection, arousal is a very complex phenomenon involving a number of different neurotransmitter systems in the brain and affecting different kinds of skills in different ways. For one thing, it doesn’t seem to apply equally to different activities. For another thing, it doesn’t seem to apply equally to different people. But on average, it holds up fairly well.
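To make the shape of the curve concrete, here is a toy numerical sketch of the inverted-U. The quadratic form and all parameter values are assumptions chosen purely for illustration; the one substantive feature they encode is the standard Yerkes-Dodson claim that the optimal arousal level is lower for more difficult tasks.

```python
# An illustrative (not empirical) model of the Yerkes-Dodson inverted-U.
# The quadratic shape and constants are assumptions for demonstration only.

def performance(arousal, difficulty):
    """Toy performance score, with arousal and difficulty each in [0, 1].

    The peak of the inverted U shifts from arousal 0.7 for very easy
    tasks down to 0.3 for very hard tasks, then performance falls off
    quadratically on either side of that optimum (floored at zero).
    """
    optimal = 0.7 - 0.4 * difficulty
    return max(0.0, 1.0 - (arousal - optimal) ** 2 / 0.25)

# For an easy task, moderate-to-high arousal beats both extremes:
easy_low, easy_mid, easy_high = (performance(a, 0.1) for a in (0.1, 0.66, 1.0))

# For a hard task, the peak moves toward lower arousal:
hard_low, hard_high = performance(0.34, 0.9), performance(0.9, 0.9)
```

Nothing in the sketch is meant to fit data; it only fixes the qualitative picture the following sections complicate, namely that a single smooth curve relating arousal to performance is too simple once task type and interpretation are taken into account.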
During the Civil War in the United States, it has been estimated that only about 25% of soldiers remembered to fire their muskets in combat. Many muskets were found with up to 5 charges in the barrel, indicating that soldiers kept reloading without firing. Most of us don’t think clearly in a crisis; we rely on simple well-learned habits that might not be what is needed for the situation at hand. A more expert marksman who doesn’t fire their weapon isn’t performing proportionately to their skill. Clearly, extreme arousal can degrade our performance as Yerkes-Dodson predicts.
What’s harder to tell from this picture is what is different about the soldiers who did fire their muskets. Were those the more expert soldiers in some sense? Or were they the more brave? Or were they different in some other way? In other words, does extreme arousal really degrade performance in general and negate differences in skill, or does it actually bring out differences in skill in greater relief, while demonstrating the importance of a different kind of skills, those less vulnerable to degradation?
A dilemma arises with the Yerkes-Dodson curve if we assume that skills break down under pressure. The skills that are preserved under high arousal seem to be the ones that we have overlearned through long practice. Yet these kinds of overlearned skills are also among the hallmarks of the expert. So it isn’t obvious that expert performance should necessarily degrade under high arousal, at least not more than less expert skills. Just because experts rely on more finely honed skills doesn’t mean they should be more subject to losing those skills under stress; they may actually be less vulnerable. How do we resolve this dilemma?
Dissecting the Inverted-U: What are the Real Effects?
One possibly relevant finding is that intellectually demanding tasks seem to be more degraded by arousal and that tasks requiring persistence are less degraded. The Easterbrook Cue-Utilization theory says that this is in part because an increase in arousal leads to a decrease in the number of cues that can be utilized, an effect that has been reinforced by other research and has been called perceptual narrowing.
The significance of perceptual narrowing is that it does not seem to be affected by skill level and so it may represent a way of distinguishing the more specific effect of arousal on expert performance. Experts seem to experience this kind of narrowing of the spotlight of their attention under high arousal the same as others do. The question is how it affects their performance.
The most robust effect of high arousal is that our ability to deal with surprises is significantly compromised. High arousal focuses our attention such that we are only aware of a narrow range of predictable central events and we tend to completely ignore unlikely events that would normally get some of our attention. Think about it: this could be good or bad, depending on the role of surprises in the environment. Being unable to respond effectively to a soldier sneaking up on you would be a bad thing in combat. Failing to be distracted by things that don’t affect you would be a positive result.
If my expertise depends on being able to scan the environment widely and respond to novelty, then it seems it will probably be significantly compromised by high arousal. If my expertise depends on being able to focus on a narrow range of stimuli and execute well-learned skills in response to them for an extended period, then high arousal will probably enhance my performance.
Interestingly, the effects of low arousal seem roughly consistent with this model as well. Rather than being blind to things happening at the periphery, at low arousal we seem to be overly distracted by things happening outside the center of our attention, and our attention tends to wander.
Another effect of high arousal is one seen especially when we feel we are in danger. We tend to not only narrow the spotlight of our attention, but also to rely more on immediate subjective experience and to reject other sources of information that we might ordinarily consider more objective. Under high arousal and threat we tend to resort to our own immediate sensory experience and mistrust all other sources. Again, this seems fairly robust and happens to experts as much as non-experts. Various military programs discovered this effect to their dismay when highly trained personnel abandoned their elaborate electronic information systems under combat conditions and depended on their own senses instead.
One more robust effect of high arousal is variability in some kinds of performance, a phenomenon originally called “blocking” when it was discovered. “Blocking” refers to the appearance of occasional “blocks” where information processing for the task at hand is apparently momentarily interrupted, and decision responses are markedly slower during extended cognitive work. Since this only happens after extended work, it has been interpreted as a kind of “mental fatigue.” Some theorists have interpreted this as an indication that our attention is involuntarily shifting to sources irrelevant to the task at hand.
Beyond the Inverted-U: The Role of Interpretation
One way to make sense of varying performance under high arousal is to take our interpretation of the situation into account. Previous research supporting the Yerkes-Dodson law dealt with situations where the range of interpretations was probably relatively narrow. This leaves room for us to hypothesize that our interpretation of the situation might play an additional role, even one that challenges the very shape of the Yerkes-Dodson curve.
Some recent theorists have indeed suggested that the Yerkes-Dodson curve only applies under certain conditions and that high arousal consistently improves our performance under other conditions, particularly those where we interpret the situation as an exciting challenge rather than a threat and where we perceive that we have the skills to thrive in it.
This potentially changes the relationship between expertise, arousal, and performance in a fundamental way.
According to these theories of positive psychology, depending on the degree of challenge we perceive and our skills for the situation, a high arousal situation can either facilitate or degrade our performance. We might experience the same situation and the same arousal level negatively as anxiety or anger on the one hand or positively as challenge and excitement on the other hand. This would determine whether the high arousal makes us perform worse or better.
For example, a more or less neutral interpretation might have an effect on performance resembling the Yerkes-Dodson law. A very negative interpretation of the situation might have a catastrophic effect on performance even worse than the Yerkes-Dodson law predicts. A very positive interpretation of the situation would have a more uniformly positive relationship of arousal and performance. In this way, the positive psychology theory of arousal and performance is thought by its proponents to explain a wide range of results.
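As a rough illustration, the three interpretation-dependent curves just described can be captured in a toy model. The functional form and every parameter below are my own illustrative assumptions for the sake of the sketch, not anything drawn from the arousal research itself.

```python
def performance(arousal, interpretation):
    """Toy model of the arousal-performance relationship.

    arousal:        0.0 (fully relaxed) .. 1.0 (maximally aroused)
    interpretation: -1.0 (pure threat)  .. +1.0 (pure challenge)

    A neutral interpretation (0.0) gives the classic Yerkes-Dodson
    inverted U; a positive one turns high arousal into a benefit;
    a negative one turns it into a catastrophic penalty.
    """
    # Inverted-U baseline, peaking at moderate arousal (0.5).
    baseline = 1.0 - 4.0 * (arousal - 0.5) ** 2
    # Interpretation scales the cost or benefit of being highly aroused.
    return baseline + 4.0 * interpretation * arousal
```

Under a neutral interpretation the model peaks at moderate arousal; under a fully positive one, performance never declines as arousal rises; under a fully negative one, it collapses faster than the plain inverted U would predict.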
Conclusion: Arousal and Expertise
Is arousal a serious challenge to the power of expertise?
From research consistent with Yerkes-Dodson we know that …
Low arousal can degrade performance because our body is inadequately prepared for rigorous demands:
■Insufficient oxygenation of working muscles,
■Cooling is not functioning optimally,
■Digestion and excretion are using energy,
■Available glucose in the liver hasn’t been released,
■Alertness and readiness to respond quickly are compromised.
High arousal can degrade performance because our body is prepared for rapid, strenuous response but not for finely controlled motor skills, reasoning, strategic planning, or flexible response to changes in the situation:
■Excess muscle tension for fine control
■Some fine coordination impaired
■Spontaneous attention shifts prevented
■Intermittent blocking of verbal behavior and decision making with extended effort due to “mental fatigue”
The data we’ve examined so far imply that arousal can very well negate the value of expertise under some conditions. If we’re doing surgery in a combat zone we might well find our skills compromised, and a good corpsman with adequate basic skills might be as valuable as, or more valuable than, a master surgeon under those conditions. A weaker chess player might well consistently defeat much stronger players in high pressure speed matches if they have less of a tendency to “choke” under the pressure. Objective reasoning and strategic planning are significantly compromised by high arousal, especially if the arousal is negative. Extended performance of some kinds is hampered by “mental fatigue.” In even the best cases, high arousal reduces our ability to respond spontaneously and adaptively to surprises at the periphery of our activity.
This is far from a completely negative assessment of the effect of arousal on expert performance however. Experts can learn to interpret a wider range of situations as positive, possibly preventing the downside of the Yerkes-Dodson curve, can learn to rely on skills that do not require the kind of fine coordination that degrades with high arousal, can learn skills and habits that don’t require planning and reasoning, and can learn skills for managing their own arousal level. In short, in addition to their domain expertise, experts can learn to:
1.Make better use of high arousal
2.Rely on skills that don’t degrade with high arousal
3.Better manage their own arousal level
With this flexibility, arousal is a far less serious challenge to the power of expertise than it might seem from a simplistic application of the Yerkes-Dodson law.
 (Yerkes & Dodson, 1908)
 For example, see (Hockey, 1986) for a review of the evidence for general degradation of performance under high arousal conditions.
 (Easterbrook, 1959)
 (Broadbent, 1971), (Kahneman, 1973)
 “This is usually thought of as a reduction in the ability to deal effectively with relatively unlikely peripheral events in favor of focusing on more likely central events.” (Schmidt, 1989)
 (MacMillan, Entin, & Serfaty, 1994)
 I suppose Obi Wan Kenobi would approve since he recommended this to Luke Skywalker when he attacked the Death Star in Star Wars. Fortunately, Luke’s narrowly defined and well learned task was well suited to performance under high arousal. However in a situation where it is imperative to gather and process information more widely rather than focus on a narrow target, trusting our own senses rather than an information panel could easily become a fatal mistake.
 (Bills, 1931)
 (Bertelson & Joffe, 1963)
 (Broadbent, 1958)
 For example, see: (Csikszentmihalyi, 1998)
Although it covers a very wide range of activities, the large body of expertise research takes place in areas where we can easily identify how good people are based on standards of performance within the field itself, and where, to put it bluntly, skill matters. What about performance in the real world, where things are a lot messier and the more skillful exponent doesn’t always come out on top?
This indeed turns out to be a very real issue. While having a certain amount of skill is always valuable, it isn’t always the case that being more skillful means that we perform even better. A little skill might be good, but more skill might not be better. How can this be true?
Consider these possibilities for why being more skillful might not make us perform better:
1. Extremes of Arousal: The general state of our nervous system in response to a situation can in turn affect the performance of our trained skills, although the reason for this is surprisingly poorly understood theoretically. Picture trying to drive a challenging obstacle course while very sleepy, anxious, or terrified. Extremes of arousal may plausibly affect expertise, and perhaps even negate large differences in expertise, although the effects would probably depend on some interaction of the type of activity and whether the arousal was low or high. And it turns out that the way we interpret the situation can be an important factor as well.
2. Transfer Failure: The situation at hand may resemble the situation we practiced for, but be different enough that our skills matter less. If I learn to drive a car and then manage to drive a truck, I’m transferring my skills. If I crash the truck because I can’t figure out how to operate the different controls properly or because the different response of the vehicle confuses me, then we have transfer failure. My expertise doesn’t help me if it doesn’t transfer to the situation I’m in.
3. Domain Unpredictability: Some things seem to be intrinsically difficult to predict, so no amount of experience makes us better at predicting things in those domains. I don’t necessarily get better at predicting earthquakes by living through a few earthquakes, and I don’t necessarily get better at predicting slot machine payoffs by playing more, although I might learn other valuable lessons.
Despite the power of expertise across such a wide range of activities, it’s entirely possible that our performance may depend more on something other than expertise under some conditions. I’m going to examine these challenges to the power of expertise one at a time.
Who Ya Gonna Call?
Let’s say you’re working on your computer and it starts acting strangely. You get errors that you don’t understand, or it crashes for no apparent reason. If you aren’t sure what to do at first, where will you look for help? You might perform a web search for the symptoms to see if it’s a known problem and other people have solved it before you. You might run a diagnostic program or an antivirus scan because those are the tools you happen to have.
If you can’t fix it easily and you aren’t confident with computers you’ll probably start looking for help from another person at some point. Who? If it were me, I probably wouldn’t head down to the local college and find the top honors student or someone in the local Mensa chapter. I probably wouldn’t look for someone with great SAT scores or someone really good at Sudoku or even a master electrician. I’d look for someone with a lot of experience with computers and a proven track record fixing them. I’d look for an expert, and an expert specifically in that area, not just a smart person or an expert in a related area.
I stacked the deck a little bit with this question, because I picked a problem that is probably going to be technical in nature. That is, it seems like it will require some specialized knowledge to solve because it involves computers which are complicated devices that are a little mysterious to the average person and far less so for someone who has worked extensively with them.
It turns out, though, that my guess is pretty accurate for a wide range of fields, not just highly technical ones. Knowledge about the job turns out to be a far better predictor of performance than how high our IQ is or any other general disposition, not just in certain kinds of jobs but across a wide range from complex technical work to manual labor. Just as I’d rather have a computer expert help me rather than my friend with an astronomical IQ, in most cases I’d prefer someone who has job experience rather than someone very smart but inexperienced. And I can point to research evidence that supports my preference.
Seeing Differently vs. Seeing More
Even in many areas where we would tend to expect pure reasoning ability to play a large role, it turns out that on average experience tends to win out consistently over any more general ability or measurement we have come up with.
The research that inspired the modern study of expertise began with the game of chess. Think about chess for just a moment. Chess is an activity with a small number of relatively simple rules. Yes, chess has a reputation for being a difficult game. But that’s not because chess is hard to play. Nearly anyone can learn the game. It’s because we soon discover that differences in individual ability are immense.
The difference between someone who plays chess for fun who doesn’t study the game seriously, and an average tournament player, is like night and day. It doesn’t seem like much of a competition most of the time. The difference between an average tournament player and a strong one is just as large, which is why there is a rating system.
Ratings allow people of similar ability to play relatively evenly, or to estimate handicaps as they do in golf. The difference between a strong player and a master is similarly imposing as is that between the master and a grandmaster, and between the average grandmaster and a world champion.
How can a game with a handful of simple rules end up with people playing at such astronomical differences in ability? This was the question that intrigued early researchers trying to figure out how people solve problems. The obvious answer is that the stronger players must be seeing more on the board. But what are they seeing differently?
When most of us look at the chess board we see a collection of pieces in different places that are allowed to move in particular ways. We know what we have to do to win; we have to trap the king. We also know some ways to accomplish that. For example we can capture the opponent’s pieces so we have a bigger army, and we can harass the opponent’s pieces so that they are forced into a less defensible position, allowing us to attack the king. Everyone who plays the game, even for fun, knows these things. Still most of us pretty much have to guess at how to get from some arbitrary position to that result.
If I move here, I’ll attack this piece, but how do I know that my opponent doesn’t have some better move in response that is even stronger? More insidiously, is that move by my opponent actually setting up a surprise for me later? If so, what are my options? These kinds of considerations quickly lead to the very intuitive notion that being better at chess is really about calculation, about being able to imagine a lot of different moves, and what might happen if we made them, and keeping track of all that imagining. The better player must be seeing more moves on the board, figuring out what the options are more accurately, and then predicting the outcome.
This is indeed how early chess software played the game well. It looked at the possible moves, looked at the possible responses to each move, evaluated the resulting positions, and chose the move that seemed to give the best outcome based on what the opponent was able to do. The trouble was that trying to do this more than a couple of moves ahead turned out to be a very demanding calculation, more demanding than even the most powerful computers could handle. Researchers were curious as to whether seeing more moves in their mind was really what good players were doing.
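What that early software was doing is essentially the minimax algorithm: enumerate moves, recurse on replies, and back up the best score. The sketch below runs it on a made-up game; the 30-replies-per-position branching factor is a rough chess-like assumption and the evaluation function is arbitrary, but the node counts make concrete why each extra move of lookahead multiplies the work.

```python
def minimax(position, depth, maximizing, moves, evaluate, stats):
    """Exhaustive lookahead: try every move, recurse on every reply,
    and back up the best achievable score for the side to move."""
    stats['nodes'] += 1
    if depth == 0:
        return evaluate(position)
    successors = moves(position)
    if not successors:
        return evaluate(position)
    scores = [minimax(p, depth - 1, not maximizing, moves, evaluate, stats)
              for p in successors]
    return max(scores) if maximizing else min(scores)

# A toy game where every position allows exactly 30 replies (roughly
# chess-like branching); positions are just tuples of move indices.
def toy_moves(position):
    return [position + (i,) for i in range(30)]

def toy_evaluate(position):
    return sum(position) % 7  # arbitrary static evaluation

node_counts = []
for depth in (1, 2, 3):
    stats = {'nodes': 0}
    minimax((), depth, True, toy_moves, toy_evaluate, stats)
    node_counts.append(stats['nodes'])
# node_counts grows roughly 30x per extra move of lookahead.
```

Real programs prune this tree (alpha-beta search), but the explosion is the same in kind: with around 30 legal moves per chess position, even five moves of lookahead touches tens of millions of positions.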
Maybe the human brain is really that much more powerful at calculation than we thought. Or maybe the brain is doing something else entirely?
In a pioneering study of chess players in the 1940s, a Dutch psychologist found the surprising answer. I say his work was pioneering not just because it was early, but because it led to entire fields of research based upon it and validating his basic findings. The most compelling and surprising findings:
...Weaker players examined the same number of moves as stronger players, and equally thoroughly (!)
...Stronger players could recognize an actual game position far better than weaker players.
...Stronger players were just as bad as weaker players at recognizing an arbitrary configuration of pieces.
This may not seem so earthshattering at first, but think about the implications. Experts at chess consistently beat weaker players, but without examining more moves and without examining the outcomes of those moves more thoroughly. They aren’t “looking ahead more” and they aren’t “reasoning better” and they aren’t even remembering more in general. They do remember more about chess in a sense but not because they have a better memory. And looking ahead is important, but not by keeping track of moves. Their ability is a result of their mind being better trained to remember chess configurations in particular and to use that knowledge quickly and efficiently to evaluate moves.
So what are chess experts seeing that the rest of us aren’t? They aren’t seeing more moves ahead, they are seeing the board in terms of chess configurations instead of seeing it in terms of individual pieces. Their mind has been trained to see meaningful configurations of pieces instead of individual moves. They are not seeing more per se, they are seeing differently. They are seeing in terms of larger and more meaningful groupings. Experts with extended experience acquire a larger number of more complex patterns and use these new patterns to store knowledge about which actions should be taken in similar situations.
The result is profound. We have a game where a few simple rules results in an incalculably large number of possible sequences of moves. But we become good at this game of many, many moves not by thinking about more moves but by thinking in terms of larger patterns: patterns of pieces rather than movements by individual pieces.
Through practice, chess masters have trained their mind to recognize the unique meaningful patterns that apply to their game. Further, the ability to learn to recognize new patterns (along with a huge capacity to remember them) seems to be something we all possess, not just chess masters. It is a fundamental principle of learning, at least learning to be a chess expert.
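The pattern-to-action idea can be caricatured in a few lines of code: recognition replaces search. Everything here, the chunk patterns, the advice strings, and the board, is an invented placeholder; real chess chunks are far richer and, as discussed below, are not consciously enumerable.

```python
# An "expert memory": familiar configurations (chunks) mapped directly
# to candidate actions, consulted by recognition rather than by search.
expert_chunks = {
    frozenset({('K', 'g1'), ('R', 'f1'), ('P', 'g2')}): 'keep the pawn shield intact',
    frozenset({('N', 'f3'), ('P', 'e4'), ('P', 'd4')}): 'press the central advantage',
}

def recognize(position):
    """Return the advice attached to every stored chunk present in the position."""
    pieces = frozenset(position)
    return [advice for chunk, advice in expert_chunks.items() if chunk <= pieces]

board = [('K', 'g1'), ('R', 'f1'), ('P', 'g2'), ('P', 'h2'), ('N', 'f3')]
matches = recognize(board)  # only the castled-king chunk is fully present
```

A novice, lacking the chunk memory, has nothing to consult and must fall back on piece-by-piece analysis; the expert’s candidate answer arrives by lookup.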
Even more interesting, we don’t recognize this as knowledge, in the sense of things we recognize that we know. I know that I know some things. I know that I know all sorts of facts like the capital of some of the U.S. states and the number of sides in a triangle and Newton’s formula relating force and mass and acceleration. These sorts of things are considered explicit knowledge.
Chess masters can’t write down most of the patterns they know, both because those patterns are so vast and because they use them without thinking about them. The patterns they learn become part of their chess intuition in a manner of speaking. A common technical term for this is tacit knowledge. We use tacit knowledge in our thinking without realizing that we are using it. This is why it took focused research to discover what was going on in the minds of chess masters.
Tacit knowledge becomes part of our perception. Chess masters see the board differently; for example they often immediately see positions as good or bad without having to do the kind of analysis that the rest of us would have to rely upon.
Tacit knowledge is also used automatically in our thinking. When chess masters guess at the best move in a given position, their guess is informed by their vast database of tacit knowledge, so it is very different from the guess made by a weaker player. Experts make better guesses in their area of expertise. This is what I mean by their “chess intuition” above.
Trained Intuition and Better Guesses
You might be wondering at this point why I’ve spent so much time talking about chess experts. Or you may have guessed the answer. The most interesting conclusions from the research on chess masters are by no means limited to chess masters. Very similar or consistent results have been obtained across a staggeringly wide variety of fields from physical pursuits like wrestling and ballet to intellectual subjects like calculus and philosophy to artistic activities like painting and violin playing, to a wide variety of everyday jobs, to oddball activities like picking the winners at the horse races. Even among scientists, where the role of abstract reasoning is particularly central and the subject matter particularly challenging, productivity doesn’t seem to be predicted on the whole by supposed general ability measures such as IQ.
The chess findings are a particularly useful rhetorical device here because chess seems like it should be so dependent on reasoning and analysis. It turns out that experts analyze chess positions with the help of a vast mental database of chess configurations that they apply without any recognition that they know them. The resulting perception and memory of the board just seems natural to them as a result of practice. Examined closely, in spite of its natural appearance for some people, the effortlessness of deep expertise seems to be an extreme kind of skill acquisition far more than an expression of talent.
Even if you interpret all of these findings from different fields very conservatively, collectively they still tell us something of tremendous importance about how we become good at things. We modify the way we perceive the activity. In effect, we train our intuition about the activity.
In all of these activities, researchers have found that time spent in the activity lets us acquire a new way of perceiving patterns in that activity that let us transcend the limits of our working memory and sequential reasoning capacity. That’s why expertise consistently outperforms IQ or working memory capacity or other general measures as a predictor of performance in virtually every activity that has been studied so far. And expertise is not just specialized knowledge or skills; it is also more importantly an accumulation of organized tacit knowledge that lets us make better guesses.
 (Hunter, 1986)
 (de Groot, 1965)
 This has been the most common interpretation of the chess research findings amongst expertise researchers, based on the influential theory of Chase and Simon. (Chase & Simon, 1973), (Simon & Chase, 1973)
 “Explicit knowledge,” basically just means things we know that can be easily identified and written down. The descriptor declarative is sometimes used as well, meaning that we can declare it.
 In contrast to “explicit knowledge,” this is often referred to as “tacit knowledge,” meaning things we know but can’t easily express, especially things that support action. Tacit knowledge is usually assumed to be useful for doing things more than for taking part in our conscious reasoning processes. The descriptor procedural is sometimes also used for tacit knowledge because we think of it as involving procedures for doing things rather than declarations about things. For this reason, a common rule of thumb is that tacit knowledge refers to “know how” whereas explicit knowledge refers to “know that” (i.e. I know that grass is green). The casual rule of thumb is troublesome because we don’t really know how we do those things we call procedural; the usage of the word “know” in “know how” is very different from its usage in “know that.”
 Following the pioneering chess research, research into other areas reinforced the same finding: expert performance depends heavily on a large accumulated memory of patterns that give us a different “intuitive perceptual orientation” to tasks. “Experts can ‘see’ what challenges and opportunities a particular situation affords” (without doing any analysis) (Perkins, 1995, p. 82)
 One of the leading and best known figures in the study of expertise is K. Anders Ericsson, whose research encompasses a particularly wide range of fields. An excellent and accessible overview of work in diverse areas of expertise research is Ericsson’s edited collection: The Road to Excellence (Ericsson, 1996).
 (Taylor, 1975)
 (Proctor & Dutta, 1995), (VanLehn, 1996)
Monday, August 22, 2011
What’s the most important thing about problem solving? If you paid attention in school, you probably would respond: getting the right answer!
There’s nothing wrong with wanting to get the right answer. Or is there? I want to raise four related concerns:
- The problem structuring concern: Problems don’t always arise in a form that has an identifiable single right answer. Often there are different best answers for different sets of possible criteria, with different sets of tradeoffs.
- The motivated thinking concern: The kind of thinking we do in order to feel we are right, to be seen by others as being right, or to advocate the right answer to others can overwhelm the kind of thinking needed to solve the problem in the best way.
- The my-side bias concern: We have a natural tendency to look selectively for evidence in favor of the first good guess we make explicit, to ignore evidence for alternatives, and to think in ways that support our favored alternative.
- The belief overkill concern: The my-side bias is often reinforced in such a way that certain of our intuitions become treated as aspirations or universal facts of nature, and this extends beyond things that can be verified empirically between observers. Compelling intuitions can guide our thinking into limited preferred patterns, reinforced by selective use of evidence and also by social patterns of polarized thinking.
I argue that these concerns, along with various inferences we can reasonably make about how the mind works, necessitate a certain approach to thinking, especially about more difficult problems.
These factors mean that we have to learn to adopt and leverage different perspectives in order to harvest all of the information available and the expertise needed to solve complex problems. This is why when it comes to problem solving worthy of the name, thinking clearly is more important than thinking correctly along predetermined lines.
At this point you probably have your own concerns. You might be wondering whether I am advocating some sort of fluffy relativistic “there’s no right or wrong and all perspectives are valid” sort of approach to thinking.
That’s not the case. I use the term Clear Thinking because I truly believe there is such a thing as identifiably better and worse thinking, leading to better or worse conclusions and that it very often makes a critical difference whether we get it right.
My point is just that all of us (not just other people) assume we are getting it right much more often than we really are getting it right, and that very knowledge about our own thinking processes is a key to Clear Thinking.
This means that when we need to think clearly about complex problems, we need to use our knowledge about and skill at problem solving itself to root out our own shortcuts, make our thinking more explicit, bring alternate perspectives into play, and in general consider more alternatives than would otherwise come to mind.
Sunday, August 21, 2011
What Makes Some People Better Problem Solvers Than Others?
The Dilemma and Challenges of Exceptional Thinking Abilities
Simply getting the right answer isn’t always the best way to think about solving difficult problems. For some problems outside of the classroom and aside from questions of knowledge from within well established domains, there may not be a single right answer.
There may be additional alternatives to be considered that aren’t known yet or which don’t seem right at first but can be turned into better solutions. Needing to be right (getting the answer that others seem to think is right), or needing to think we’re right, or needing to be seen by others as being right, may mislead our thinking and blind us to better answers.
Thinking Too Much: Defying Common Sense
Wanting to be seen as clever or wanting to be seen as an expert often similarly restricts our thinking. We very often settle for the first guess that seems right or the way other people seem to be thinking. There are sometimes good reasons to stop thinking about a problem and settle for an answer, but most of the time we use shortcuts rather than stopping when we truly have the best answer available to us.
Shortcuts are natural to us and they are an important part of what makes us good thinkers. Shortcuts in thinking are part of our common sense. It doesn’t seem right to sit and reflect on something that has an obvious answer. It can seem like a peculiarity or a symptom of subscribing to some bizarre overly complex view of reality, or maybe even a character flaw.
The trouble is that the common sense that serves us so legitimately well in so many everyday situations turns out to be poorly suited to many other kinds of complex and counter-intuitive situations. Our natural instincts for reasoning are significantly better adapted to some kinds of problems than others.
The shortcuts that serve us for biological needs like feeding ourselves and mating and getting along with other people in small groups tend to fail us when we think about things like cultures, corporations, markets, and nations or when we’re presented with a completely different kind of problem.
Importantly, the way we learn is not well suited to automatically recognizing which kinds of situations we are thinking poorly in. The shortcuts in our thinking work so well because we rely on them so naturally. We don’t necessarily get an alarm bell in our mind that we are thinking in the wrong way about a problem. We instead get responses from our natural learning systems that we experience as compelling feelings and intuitions that guide our thinking.
It is only by learning about the thinking process itself and how our own mind works that we begin to learn how to make best use of our natural learning systems to think clearly about problems that our natural abilities are not well optimized to solve, situations where our common sense and our intuitions fail us. This learning also helps us reason through situations where we have to think across different domains of expertise without a sense of how well we have captured the meaningful patterns in each of those domains.
Sure, when there’s a right answer, we want to be able to figure out what it is. More generally though we want to think clearly about the problem. This means thinking in a way that leads to, if not an ultimate perfect answer, the best solution available, even if that means bringing more expertise and different perspectives into play and challenging our own intuitions.
How do we know when our shortcuts and intuitions are failing us and that the situation requires a different kind of thinking? I think it comes down to making it a priority to learn about our own thinking while we are learning other things. This means being strategic about thinking: knowing as much as possible about our own tools and resources, both their strengths and their weaknesses.
Identifying Our Strengths and Identifying Our Weaknesses
I’ve been a professional problem solver for decades and from time to time I have worked alongside other problem solvers whose abilities truly amazed me. Some people are able to look at a situation and see opportunities and possibilities that others cannot seem to appreciate until after they become real solutions, and sometimes not even then.
Some of those same remarkable problem solvers then even more remarkably sometimes make the worst mistakes in certain situations. They apply their knowledge and skills in ways that just don’t fit the situation, and sometimes the very qualities that often serve them so well in other situations now make them overconfident in their answers. This book is about learning from both their triumphs and their failures, as well as our own triumphs and failures. It’s about learning to think better.
I have devoted many years to trying to understand what is different about exceptional problem solvers, and to what degree their abilities can be duplicated and perhaps even improved upon to avoid the worst mistakes that they also tend to make.
This sets out two primary challenges for me:
--> What makes some people so much better problem solvers than others, especially across different kinds of problems?
--> What causes otherwise great problem solvers to make such awful mistakes so often?