Sunday, September 26, 2010

Review of "Big Brain" by Lynch and Granger - an education in brain evolution and development

Big Brain: The Origins and Future of Human Intelligence
by Gary Lynch and Richard Granger
Palgrave Macmillan, 2009

There is a lot of very clearly and cleverly presented original material on brains and their evolution and development in this book. It is one of those books that constantly makes you think and has enough depth to reward several readings. I highly recommend it.

See my full review on Amazon here.

Saturday, September 11, 2010

A classic battle of politicized science: Kamin vs. Eysenck

My review of the 1981 book Intelligence: The Battle for the Mind, by Hans Eysenck and Leon Kamin, can be found on Amazon here.

I found this old book a fascinating look at the politics of intelligence at its most extreme prior to the publication of The Bell Curve. In spite of all the friction The Bell Curve generated, which left a misleading impression in many people's minds, there was actually a general consensus over most of the technical claims regarding intelligence and intelligence testing.

By the time The Bell Curve was written, the definition of intelligence in psychometric terms was well established, as were the moderate correlations with academic success and some kinds of occupational success, and the heritability estimates for IQ from several twin studies. It was also pretty well accepted that there were group differences in scores as well as individual differences.
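(As an aside, a quick sketch of where such numbers come from; this is my gloss, not something from the book under review. Heritability is the share of a trait's variance in a given population that is attributable to genetic variance, and the classical Falconer approximation estimates it from how much more similar identical (MZ) twins are than fraternal (DZ) twins:

    h^2 = \frac{V_G}{V_P} \approx 2\,(r_{MZ} - r_{DZ})

where r_MZ and r_DZ are the IQ correlations within identical and fraternal twin pairs. Both variances are properties of a particular population in a particular environment, which is exactly why such figures are not fixed constants.)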

However, the implications of all of these things were more in question than ever, because in contrast to some of the basic assumptions in The Bell Curve, heritability was known to be highly variable from population to population, IQ was a reasonably good predictor of an important but narrow range of abilities mainly related to literacy and certain kinds of reasoning, and more was unknown than known about the reasons for and implications of the group differences.

In the Kamin vs. Eysenck book, we see Eysenck focusing on things that for the most part are not controversial at all, and not taking an extreme hereditarian view. His politics are subtle, but you can still read echoes of the earlier hereditarian view in his chapters. He talks about people's "capacity" and emphasizes how environment is important in developing intelligence, but implies that people still reach limits determined by heredity in some sense.

Kamin on the other hand reveals his own politics far less subtly by accusing nearly every individual differences researcher of some kind of bias or racism and by looking for anomalies and assuming fraud throughout the entire range of psychometric testing research.

A very instructive account of the dynamics of how scientists interact (or not!) around controversial research programs, depending on the way they express their own biases.

See the full review here.

Monday, August 16, 2010

Danny Bloom and the perils of "screening"

Inspired by Nicholas Carr's book "The Shallows," and even more by the varied reactions to Carr's ideas, I previously posted three articles on the subject of how electronic media are affecting our thinking and reading. I'm no expert on this subject, except that I read a lot both online and offline. The posts were a review of Carr, some thoughts on our shifting perception of knowledge and expertise, and some reflection on the concept of deep reading.

Since those posts, a gentleman named Danny Bloom has been pestering me to blog about him. I have a readership of about a dozen very smart and curious friends on a very good day, but for some reason Danny has been very insistent that I mention him. So here it is. Danny says he is a journalist in Taiwan, and he has been contacting all sorts of people and urging them to write about the issue of reading on paper vs. reading on electronic screens (he has coined the term "screening" for the latter). He feels that doing brain scans of people while they are reading will turn up important differences.

Danny has been pushing hard for me to write about this, sending me variations of the same message over and over, urging that screens are different from paper. OK, I think it's true, and I prefer paper for various reasons for serious reading, but I'm not sure that this is all readily discernible at the neural level. I think much is cognitive and psychological. And it is hard to tell how much is habit and preference and how much is intrinsic difference in how we are forced to process different kinds of stimuli and use our attention differently. I expect it's a little of each.

I hope this satisfies my obligation to Danny to write about his concern. I consider this an interesting question, but I'm not nearly as excited about it as Danny is, and my concerns are more general than just the nature of screens vs. paper, especially since the resolution and shading of screens has drastically improved and will continue to do so.

Saturday, July 31, 2010

Book Review: David Perkins' must-read brilliant map of human thinking ability and its improvement

Review of David Perkins’ “Outsmarting IQ: The Emerging Science of Learnable Intelligence,” 1995, Free Press.

Link to review on Amazon --> http://www.amazon.com/review/R3AYGZV7G7AUTO/ref=cm_cr_rdp_perm

Profound Thinking By Example

This is the single best book I’ve come across on the potential for improving human thinking ability. I give it my highest recommendation; I think it should be read by everyone interested in problem solving, decision making, and human abilities in general. It is amazingly broad in its coverage of data, profoundly deep in its treatment of specific lines of relevant evidence, and ingenious in its vision of the future.

What impressed me most about this book is that the author, David Perkins, demonstrates the power of deep reflective thinking by his own example in the organization and treatment of evidence throughout this book, in his critical treatment of his own evidence and ideas, in his creative original ideas, and in his effective consolidation and filtering of massive amounts of research. Showing how asking the right questions can help us understand seemingly contradictory data about intelligence, Perkins gives an engaging plausibility proof for the kind of reflective intelligence he argues for in this book.

The Concept of Realms of Thinking

To give away the ending, the book culminates in a model of problem solving ability based on the metaphor of a map. Human thinking ability results from learning our way around. Navigation is fundamental to all sorts of human thinking. Perkins suggests that all intelligent human thinking results from navigation of various kinds, which can be thought of in terms of levels of realms. Perkins organizes the realms in an overall map or “mindscape” from the lowest level of specific contexts of thinking to the highest level dealing with thinking itself.

In learning to solve problems we not only learn our way around physical realms geographically, but we learn our way around specific contexts we find ourselves in such as the realm of buying a house or the realm of choosing a career. We learn our way around different situations like resolving conflicts or making purchases in general. We learn our way around professional fields like law, physics, and mathematics, and areas of technical expertise such as probability and statistics, game theory, and business. We learn our way around the use of tools. We learn our way around various basic kinds of challenges like problem solving, decision making, planning, and learning. Finally, at Perkins’ top level, which he calls thinking dispositions, we learn our way around thinking itself in terms of the qualities and attitudes that make it more or less effective.

Perhaps the central thrust of this book is that in organizing human problem solving areas into navigational realms, Perkins is not just providing a training map for learning problem solving skills in a million different areas, he is also making a case for learning the critical skills of navigation itself.

Perkins’ realms are very similar to the traditional concept of domains of expertise, but different in one critically important way: realms emphasize the central skills of navigation rather than just the use of repetition or rote memorization or even just the use of deliberate practice. The concept of realms makes it more explicit that all areas of ability that we learn share some commonality in terms of key skills and attitudes we need for navigation itself.

Learning to be a better navigator, in all realms of human thinking and not just a certain subset of them, is the central message of Perkins’ book. This is encapsulated in his concept of “reflective intelligence.” Reflective intelligence is the aspect of intelligence that can be most improved for the greatest effect across the range of all realms of thinking. Perkins reviews a number of different attempts to improve human thinking and makes various suggestions based on their results regarding specific kinds of changes that can be made to educational curricula in order to teach children to be better navigators in all areas.

Getting Perspective on Intelligence through 3 Dimensions

In giving away Perkins’ final model, I’ve skipped over two very important and interesting aspects: his argument for the model he uses and for the prospect of learnable intelligence through better navigation, and his predictions for important areas of the evolution of learnable intelligence.

The bulk of Outsmarting IQ deals tightly with the subject of the title, the legacy of how intelligence has been envisioned and researched so far. Perkins deals in an equally deep, reflective, careful, and often fascinating manner with: (1) the evidence for a single common problem solving ability from psychometric data, (2) the evidence showing us how novices think differently from experts, and (3) the evidence showing us what happens when we try to learn general skills and rules for solving problems in general and how computers solve problems.

From these three bodies of evidence, Perkins derives three corresponding dimensions of human intelligence: (1) a neural intelligence dimension which respects what psychometric data gets right and is most closely associated with what we typically assume IQ tests are measuring, (2) an experiential intelligence dimension which respects what expertise research data gets right, and (3) a reflective intelligence dimension which respects what we have learned about metacognition and from the various programs that have tried to teach thinking skills in general.

Neural intelligence, Perkins concludes, is a real dimension of human ability, especially important in some situations, but it is simply the wrong target for attempts at improvement, for various reasons.

Experiential intelligence represents most of our actual problem solving abilities in practice. Faced with novel and complex situations where we have no relevant experience, our neural intelligence gives us our best chance at solving the challenges presented. But once we have acquired experience in an area, a difference in expertise will make people better problem solvers in that area than will a difference in general intelligence.

So experiential intelligence and neural intelligence work together to make us the generally good problem solvers that we are in most situations: neural intelligence helps us deal with novelty and complexity, and experiential intelligence helps us acquire the knowledge and skills we need to deal with specific domains.

So the obvious question is: what role does reflective intelligence play and why does Perkins consider it so important?

The Significance of Reflective Intelligence

Perkins reviews various lines of research into the wide variety of situations where otherwise powerful problem solving abilities seem to fail us in systematic ways. He looks at social psychological effects, cognitive shortcuts, and so on, similar to other reviews of blind spots in human thinking by many other authors except that Perkins attempts to characterize these foibles specifically in terms of side effects of our experiential intelligence.

Perkins suggests that the human mind is essentially a pattern seeker and a pattern-driven problem solving engine, and as a result its weaknesses are also those we would expect from a pattern-driven process. The human mind often tends to be hasty, narrow, fuzzy, and sprawling.

HASTY. The goal of a pattern seeking intelligence is to find the response that most closely matches the current situation rather than to make an exhaustive search. As a result, our experiential intelligence tends to mislead us into jumping to hasty conclusions when the situation is an unusual variation of a known one.

NARROW. As a result of efficiently seeking patterns we have already seen, the domain-specificity of expertise tends to make us think in narrow ways when we think we have grasped the situation rather than to broaden our thinking.

FUZZY. Part of the power of pattern-matching is that we can so often generalize the lessons from one situation to another similar one. In situations where the appearance is very similar but the underlying principles are different, our pattern matching effectiveness again leads to mistakes: we overgeneralize from our experience.

SPRAWLING. When a pattern-seeking process does not have a single clear path to follow, as often happens in very complex situations, it will tend to follow one path after another and keep switching back and forth rather than working toward an overall goal.

Experiential intelligence, Perkins concludes, is an elegant system for long-term moderate success. When situations are new to us or complex, we get help from our neural intelligence, and we have also learned various tricks for getting around our weaknesses, and these are largely accounted for in reflective intelligence. Reflective intelligence represents realms where we think about our own thinking in order to avoid settling on hasty conclusions, to broaden our thinking beyond the initial scope we assumed, to use precision to distinguish similar-looking but different things, and to stay on track when we notice we are sprawling.

This explains why reflective intelligence is so important to us in tricky situations where we have inadequate experience and where experience misleads us. But it also helps explain, in Perkins’ view, why reflective intelligence is so important for us to learn to be better thinkers in general. Neural intelligence does not replace experiential intelligence; it tends to reinforce it.

When we don’t have experience, neural intelligence helps us grasp the situation, but when we do have experience, we tend to use our neural intelligence to reinforce what our experience already tells us. That’s one big reason why genius is not simply high IQ. That’s why reflective intelligence is so important: it is the tool we use to remind us of the weak points in our own thinking and to help us compensate for them, regardless of our experience and general intelligence. The abilities and traits we need in order to overcome our blind spots are learnable. A large and crucial aspect of intelligence is learnable.

Existing Approaches: How they Compare

There are various approaches to teaching reflective intelligence, and Perkins reviews the best known and best studied among them, such as Project Intelligence, Reuven Feuerstein’s Instrumental Enrichment, Edward de Bono’s CoRT, and Matthew Lipman’s Philosophy for Children, examining their approaches and their results and comparing and contrasting them in order to get a sense of what it takes to enhance reflective intelligence.

One of the things that distinguishes Perkins as a deep reflective thinker himself is that he anticipates, researches, and deals fairly with opposition to his arguments. The very idea of learnable intelligence has in the past come under attack from several angles such as past failures of various programs which tried to teach improved thinking, the implications of expertise and psychometric research data, the apparent weakness of general methods for problem solving, and the challenge of transfer between learning domains. Perkins addresses each of these concerns in turn, resulting in a very persuasive case for the very real improvability of intelligence through changes in education.

The Future of Learnable Intelligence

Toward the end of the book, Perkins reveals the ingenuity of his vision through his discussion of several areas for the future evolution of reflective intelligence: areas which turned out to be (remarkably, for a book written in 1995) accurate predictions of themes that have since become central to science and human improvement in general:

1. Intelligence can become distributed -- good thinking depends upon artifacts to offload the limitations of our attention and memory, and we can use our symbol systems and tools to help us keep track of things we could not track individually. This is a wonderful general description of how we are attempting to use computer networks to help us manage complexity (as opposed to some of the more superficial books in recent years which imply that networks somehow replace rather than enhance individual thinking).

2. Intelligence can embrace complexity -- through information visualization tools, effective use of classification, tagging, and finding things by meaning, consolidation, filtering, the mathematical tools for finding large scale patterns in complex phenomena, and by eliminating narrow information silos, we can use our intelligence to solve increasingly complex problems.

3. Intelligence can be dialectical -- this means raising the level of thinking from lower level more concrete concerns to higher order patterns by recognizing the properties specific to complex systems. Perkins offers Peter Senge’s “The Fifth Discipline” and Murray Gell-Mann’s “The Quark and the Jaguar” as exemplifying ways of understanding dialectical intelligence.

Perkins covers a massive amount of data about intelligence and problem solving, summarizes it effectively, and applies it to a practical, powerfully supported, and exceptionally understandable approach to improving human life by teaching ourselves to be more intelligent. Thinking well in general is an unnatural act but we can learn to do it. All that is left is for us to overcome the ideological and political barriers. This book would make a wonderful, gentle manifesto for that grand effort.

Monday, June 21, 2010

Book Review: "The Shallows" by Nicholas Carr

Beautifully written reminder that each medium has its tradeoffs
by Todd I. Stark

Amazon review --> http://www.amazon.com/review/R2Z3CVIADXIWI1/ref=cm_cr_rdp_perm

When I first came across this book I noticed that a lot of my friends on social media were expressing disgust or boredom with the thesis of "Is the Internet frying our brain?" After all, who but a curmudgeon would claim that the most vital and transformative technology of our time might have a dark side? Especially at a time when leading edge educators are working furiously to bring their field up to date by incorporating the best of the latest technology in a way that improves education.

Against this background Carr's book seems reminiscent of those poor backward folks who opposed the printing press. Like the brilliant and funny curmudgeon Neil Postman, who once said as much of himself, Carr is indeed playing the role of the Luddite in some ways. Still, neither Postman nor Carr was trying to dismantle the Internet or just shriek an alarm. They are trying to help us understand something important. With that in mind, let's take a more careful look at this book. The Shallows is a thoroughly and broadly researched, beautifully written polemic which I found to represent two different things. First, it is a media analysis and culture critique. Second, it is a pessimistic theory about the overall effect of web media on our thinking ability over time.

The first aspect will be a delight for those interested in the evolution of human cognition, those fascinated with media effects per se, the traditionally minded book scholars, and assorted geezers. It is a very satisfying cultural media critique, very much in the spirit of Marshall McLuhan and Neil Postman, even though it lacks McLuhan's showmanship and Postman's remarkable ever-present humor. It was this aspect that made the book a worthwhile reminder for me, introduced me to some fascinating recent cognitive science work supporting the view that different media encourage different ways of thinking, and helped tie together a number of broad ideas for me regarding the evolution of human cognition and the influence of the tools we use.

The second aspect, for the more technically psychologically minded, and the more alarmist and pessimistic part, is a clever argument for competing and mutually destructive habits of attention allocation: (1) the nimble web browsing mind that constantly reserves attention and working memory for making navigational decisions and is exposed to massive amounts of information, and (2) the sustained attention ability that we learn with great effort over time for the purpose of reading and reflective thinking.

The second aspect is the one that most of the articles and marketing have been pushing, a thesis I'll call "Help! The Internet is Frying My Brain!" Carr argues that the nimble web mind better exploits our more natural "bottom-up" or stimulus driven attention mechanisms, which is why we find it so powerful. He also argues that the undistracted reflective mind is far less natural but has unique advantages for human cognition. So it is worth retaining, he argues, _and_ we need to keep working deliberately at it in order to retain it. That alone would be an important point. Thus far, I think the attention argument is completely consistent with the media critique, and supports it. None of this so far says that our brain is being fried by the Internet.

Now comes the trickier part, and the part of Carr's thesis that to me is most controversial, the two ways of using attention may not only compete but may actually be mutually destructive. Carr offers his own experience and that of several other serious book readers to show that they are having increasing trouble reading for prolonged periods. Carr says that there is neuroscience data showing that this may be the result of web reading rather than just advancing age or other less ominous explanations.

This "fried brain" thesis is the part that is either revolutionary, or becomes the fatal flaw in The Shallows, depending on whether or not it is true. So is it true? Does Carr persuade us that not only are we thinking differently with different media (a very strong case I think) but that the Internet is frying our brains?

Today we remember the iconic wise curmudgeon, Socrates, only through his students. That's because old Soc didn't believe in writing. It seems he was a great proponent of contemplative thought and taught that contemplation depends heavily on memory. He thought it would seriously hurt people's memory to rely too much on writing things down. His criticism seems perverse today, even as we remember Soc fondly for his deep reflection and his provocative teaching methods. That's the historical role into which Nicholas Carr has cast himself, the media critic who invokes wisdom and reflection and plays them against seemingly unstoppable cultural trends towards greater convenience, efficiency, and information distribution.

Carr is the guy who wants to warn us about the hazards of writing on our memory. About the damage that the printing press will do to culture. About how TV will change us for the worse. And now about how the Internet will shift our values, instill bad habits, hurt our reading and thinking skills, and even destroy our powers of sustained concentration.

Socrates wasn't entirely wrong even though he bucked a trend that in retrospect was downright silly to oppose. People who don't specifically practice remembering things and instead commit everything to writing do find that they have weaker memories. That's the reason for all those memory courses, the best of which essentially teach the same methods Socrates would have used. The widespread distribution of news did have negative consequences in terms of reinforcing bias and propaganda on a massive scale.

There are some adverse consequences of all the TV watching we do. However, none of these things has had the dire consequences that culture critics predicted; we have adapted in some way to each of them, more or less successfully.

So Carr isn't entirely wrong about the tradeoffs involved in using modern technologies. He is not a "Luddite" and he does make a number of valid points.

Carr is not telling us to dismantle the Internet. He fully recognizes the value of technology. He is rather playing Socrates to the modern students. Most people, desperately trying to keep up with the amazing new technologies and learn new ways of getting better information with them will ignore Carr's message pretty much out of hand. "Carr is the only one affected negatively by the Internet, the rest of us are thriving."

Those folks who ignore culture critics out of hand are taking for granted the skills and expertise that many people have cultivated through sheer effort using sustained concentration. They are buying into the attractive fashionable modern viewpoint that just being exposed to a lot of information via technology will make you smart. The majority of people, the ones who go along with that implicit confusion of information and personal knowledge, will indeed lose some of the things we take for granted today. I think Carr is right about that, and that is the most profound message in this book. LISTEN TO IT. Even if you think, with good reason, that it is silly to imagine that using search engines and hyperlinks will hurt your concentration.

Still, the message that the Internet will make us stupid isn't quite right. Writing didn't entirely destroy our memory; it just shifted the habits we need to cultivate to preserve it. It seems the wisest among us will recognize the value that culture critics like Carr have always had, appreciate the detail and care that good media critics put into their warnings, and remember the real tradeoffs between different kinds of media, taking responsibility for the cultivation of their own minds.

Just as wise modern students still practice the methods used by Socrates, they will still learn to read and think deeply using books or the electronic equivalent, the wisest will still turn off the TV and other distractions when sustained concentration is called for, and they will understand the difference between various conditions and different kinds of media in general and will use each to its best advantage.

So long as we aren't stupid enough to stop cultivating our individual minds regardless of technology changes, media itself will not make us stupid. Listen to Carr's message, learn it, and then apply it to your use of technology. It's easy to dismiss the claim that the Internet will somehow fry your brain. It's another matter entirely to dismiss the value of cultivating your mind through personal reflection.

Update 5/14/2013:  Dan Willingham had a brief post on this topic.  His view seems fairly close to mine in most respects.  He concludes that sustained attention may not be the skill of greatest importance in the future,  but it may well be the one in shortest supply!

As moderate and reasonable as that critique seems to me (he does not seem to agree with Carr's more radical point about web tech physically or unavoidably changing our attention), I noticed that some of the comments on his article seem to exemplify the problem. Some people do seem to see the concern about sustained attention as some sort of old-fashioned culture critique ("you darn kids get off my lawn!"), so much so that they won't bother reading the details of the actual argument.

Dan Willingham: The 21st century skill students really lack


Related background reading:

On the evolution of cognition and symbolic thought (and secondarily, the role of reading): "A Mind So Rare: The Evolution of Human Consciousness"; "The Symbolic Species: The Co-Evolution of Language and the Brain"

On reading and the brain: "Reading in the Brain: The Science and Evolution of a Human Invention"; "Proust and the Squid: The Story and Science of the Reading Brain"

On the role of tools in cognition: "Adaptive Thinking: Rationality in the Real World (Evolution and Cognition Series)"

On the role of media technology in culture: "Understanding Media: The Extensions of Man"; "Technopoly: The Surrender of Culture to Technology"

On the trend of rising IQ scores in modern times: "What Is Intelligence?: Beyond the Flynn Effect"

On the practical limitations of human working memory: "Your Brain at Work: Strategies for Overcoming Distraction, Regaining Focus, and Working Smarter All Day Long"; "The Overflowing Brain: Information Overload and the Limits of Working Memory"

Saturday, June 12, 2010

Are we taking knowledge and expertise for granted?

On the New York Times Opinion page, Steven Pinker weighs in on the Internet optimists vs. pessimists issue. He appreciates the value of intellectual depth, but he doesn't think the brain is nearly "plastic" enough to be reshaped fundamentally by the tools we use; he doesn't believe there are general abilities that are affected by experience. "Experience does not revamp the basic information-processing capacities of the brain," he insists. He also points out that "the effects of experience are highly specific to the experiences themselves."

In making these claims, Pinker reminds us that he enthusiastically joined Leda Cosmides and John Tooby as one of the founders of the most extreme version of cognitive evolutionary psychology (CEP), the "modular brain" theory. We know that the brain has all sorts of very specific specializations, but the notion of opaque, independent functional modules is far from universally accepted. Authors who have offered intelligent, scholarly critiques of CEP or who stress the importance of non-modular aspects of brain function include Kenan Malik, David Buller, Merlin Donald, Terrence Deacon, Terrence Sejnowski, and Jeffrey Schwartz. The contrast between Deacon and Pinker on the role of language in the evolution of the mind is particularly interesting.

My point is not at all to "debunk" CEP by presenting people who offer other kinds of theory, since I think CEP is a viable concept that probably gets some things right regarding the evolution of the mind. My point is that it seems too radical to claim that the mind and brain are simply and entirely modular in the way they would have to be for Pinker's statements above to be completely true. Pinker wants us to believe that the brain is modular and that experience cannot affect general abilities, yet he can't help using the term "deep reflection." It is difficult to imagine how such a thing as "deep reflection" even makes sense in the modular, independent architecture Pinker insists protects our intellectual functions from the potentially deleterious effects of experience.

Pinker even explicitly acknowledges the work it takes to develop intellectual depth:

It’s not as if habits of deep reflection, thorough research and rigorous reasoning ever came naturally to people. They must be acquired in special institutions, which we call universities, and maintained with constant upkeep, which we call analysis, criticism and debate.

If it takes so much work to develop intellectual depth, how is it reasonable to then also argue that our thinking can't be affected by experience, or by activities that detract from this developmental process?

Yes, he is right that the brain has limits to how much it can be reshaped by experience, but I think he significantly overstates the case. The issue is about specifics, not the general principle of neuroplasticity. What specific effects do specific activities have on our mind and brain over specific time periods?

Pinker may very well be right that the web is not itself deteriorating our reflective thinking ability the way Nicholas Carr argues it is. Carr perhaps goes over the top when he says that his failing ability to concentrate is specifically due to his use of the web. However Pinker goes too far when he implies that the idea is simply silly in principle. It remains an empirical question, not just a conceptual one, unless we're replacing cognitive neuroscience with Pinkerist modularism.

Pinker also misses a much more important point, that our attitude toward technology affects the way it shapes our daily life. He assumes that those university activities he takes for granted will continue to be valued just because he himself takes their value for granted. The university was not always there, and there is no reason to assume it will always be there if we stop arguing for its value.

Pinker's ironic conclusion:

And to encourage intellectual depth, don’t rail at PowerPoint or Google. It’s not as if habits of deep reflection, thorough research and rigorous reasoning ever came naturally to people. They must be acquired in special institutions, which we call universities, and maintained with constant upkeep, which we call analysis, criticism and debate. They are not granted by propping a heavy encyclopedia on your lap, nor are they taken away by efficient access to information on the Internet.

The new media have caught on for a reason. Knowledge is increasing exponentially; human brainpower and waking hours are not. Fortunately, the Internet and information technologies are helping us manage, search and retrieve our collective intellectual output at different scales, from Twitter and previews to e-books and online encyclopedias. Far from making us stupid, these technologies are the only things that will keep us smart.

Is it really our knowledge that is increasing exponentially, or is it available information?

The fact that Pinker seems to conflate the two begs exactly the question at the center of modern culture critique: are we gaining more knowledge because we are exposed to more information?

Against the cultural critics, the smart among us have always managed to take responsibility for their own minds and organize the available information and think deeply enough to create meaningful individual knowledge from it.

Against Pinker and the others who think cultural critics are just "panicking," the fact that some smart people always manage to cultivate knowledge in spite of the challenges offered by new tools doesn't mean that everyone else will automatically inherit that ability.

If we assume that simply having access to a lot of information will make us smart (this is not an exaggeration, it is literally how many Internet optimists think), we will end up missing the real issue.

The real issue is not whether the Internet fries your brain, there is no really good evidence so far that it does. The real issue is whether we recognize and continue to appreciate the work it takes to cultivate knowledge and expertise or whether we take these things for granted.


Update: Nicholas Carr responds to Pinker on his own RoughType blog.

Update: Commentary by Nick Bilton with some useful references, argues sensibly that each form of media has its potential unique value - http://bits.blogs.nytimes.com/2010/06/11/in-defense-of-computers-the-internet-and-our-brains/?ref=technology

Update 6/15: Some of the reviewers do reflect more thoughtfully on bigger picture issues and ask somewhat deeper questions than just the alarmist one of whether the Internet is "frying our brains." See this review in The New Republic by Todd Gitlin: "The Uses of Half-Truths."

Also: Nicholas Carr and Douglas Rushkoff respond to Pinker on EDGE.

Sunday, June 06, 2010

Good riddance to "Deep Reading?"

My previous post highlighted Nicholas Carr's and Clay Shirky's recent books, which seem to agree that the Internet, the Web, and related technologies are changing our daily habits and also the way we think.

Carr suggests we are gaining a lot but also losing something important. Shirky counters: "good riddance!" to whatever we might be losing.

Maybe it's just a touch of reactionary temperament in me, but this brings to mind some sage advice about change:
"Don’t ever take a fence down until you know the reason why it was put up."

I'm not exaggerating Shirky's view, nor is he the only one to hold it, so I think it is worth looking at this question in a little more detail.

What is it that we agree we are losing in order to gain faster, more agile, more network-literate minds, and why does Carr think it is important and Shirky does not?

What's the Argument About?

The essence of Carr's argument as I interpret it is that:

1. Our daily habits, the tools we use, and the ethic with which we use those tools have a considerable shaping influence on the way human beings think.

2. This shaping of our thought by our tools is not an empty metaphor; it has a literal aspect because of the "plastic" nature of the brain that has been discovered in recent decades.

3. Prime examples have been the shift from oral to written culture, the use of maps, the use of clocks, the widespread availability of books, and the use of audio and video media, culminating most recently in mobile, constantly online, hyperlinked multimedia networking.

4. As each type of media proliferates, the people who use it gain new ways of thinking through new habits, new patterns of attention allocation, new ethics regarding how information and knowledge are related and what it means to be smart and informed.

5. These gains do not generally co-exist with the previous ways of thinking; they greedily force the old ways out. This happens for two kinds of reasons: (1) the economics of production and acquisition of technology, and (2) the plasticity of the brain. Over time our habits carve increasingly deeper ruts that shape the way we think. New habits replace old and new ways of thinking replace old, essentially as a matter of making efficient use of brain tissue.

I think the argument so far is pretty sound, although one could certainly quibble about any of the points. Carr finds examples to which each of these points does seem to legitimately apply.

The example that is most relevant, the one Carr starts with and the one on which he is most diametrically opposed to Shirky, boils down to the concept of "deep reading."

What is "Deep Reading?"

Deep reading refers to the tradition of reading long, structured written content in a focused, undistracted manner while using active learning skills. This is not a particularly natural mode for human beings, so deep reading is an ability that requires considerable expertise to develop. Some reading skills that are naturally difficult must be automatized so we don't have to think about them and can devote precious cognitive resources to thinking about the subject matter.

The cortical resources for maintaining attention on linear written material without allowing distractions to derail us are significant. We all agree on this. Carr makes a point of it, and it is also a big part of why Clay Shirky finds web technology to be so freeing. Linear reading is very difficult until you become extremely expert at it. Many people faced against their will with the intellectual ethic of book scholarship have felt the same way Shirky does: that without the expertise, it just isn't worth all the effort. And furthermore, they can't understand why it would be worthwhile to acquire the expertise if it is so difficult and takes so long.

To read effectively, reading has to become second nature so that the basic reading skills are applied without thinking, in a matter of milliseconds. Like all forms of expertise, this takes many years of training, and most readers probably take that for granted. Some people enjoy the process much more than others. My baby book records my first sentence as an infant as a complaint: "I can't read!" With my parents' help, I put a lot of time and effort into solving that problem, so although I enjoyed the journey, I appreciate why people who don't enjoy that sort of thing find it so intimidating to contemplate crossing that chasm of years of training to get into deep reading.

Why do some people care about Deep Reading?

What deep reading refers to in terms of cognition is "an array of sophisticated processes that propel comprehension and that include inferential and deductive reasoning, analogical skills, critical analysis, reflection, and insight." (Educational Leadership, March 2009, Wolf and Barzillai)

I think Clay Shirky would argue that the above abilities can be gotten, perhaps better, without the difficulties of deep reading. I hope he is right, but Nicholas Carr and I both suspect he is wrong about that. There's a kind of "no free lunch" argument here, I think. Web hyperlinking works so efficiently with our brain and gives us so much of what Shirky calls "cognitive surplus" because it leverages our natural inclination to get distracted by each new thing and then follow it. Rather than finding ways to fight our distractibility as we do in deep reading, on the web we let it drive us to explore new things. So we are exploring, hunting and gathering new information. We all agree that this is often a big rush and exposes us to potentially much more diverse information (and people!) in a shorter time than we would ever achieve buried in books in a library.

The issue at hand is whether we are learning from it in the same way as if we were reading undistractedly.

The Key Question

The key technical question upon which the whole argument rests becomes: does the added efficiency of web information hunting and gathering provide an available "cognitive surplus," usable as an attention resource, that makes it easier to use the metacognitive skills we need for mastery of a subject?

Or conversely do the distractions of that mode of information hunting/gathering prevent us from using those specific metacognitive skills, and force us to think more "shallowly" in some sense?

For me the Carr vs. Shirky debate breaks down to the potential empirical questions generated by the above technical issues.

Is the Real Intelligence in the Network Rather than in Each of Us?

I suppose another way of looking at this is to question whether we really need to acquire deep expertise anymore, whether there is something "old-fashioned" and quaint about expertise itself. Maybe the new online literacy replaces expertise with some other quality that supports human thinking? I think that's intriguing, and there are probably some folks who think that way, just as there are people who say that our intelligence can somehow be better transferred into computers or networks with other people rather than embedded in individual minds.

These are interesting speculations, but they seem very hard to reconcile with the mass of learning and expertise research so far. Not that we can't offload our minds from our own brains; I do think that is a real possibility, and something we already do to a great extent. The argument that our tools shape our mind is partly based in this idea of offloading memory from our brain to external tools.

I just don't think it's a good thing to eliminate individual intelligence entirely. I think we still have good use for the processing we do inside our own skulls. Maybe that's where Carr and Shirky really differ most fundamentally? Maybe Shirky really wants to get rid of the individual mind and replace it with a node in a web, similar to the way ants work together in colonies?

Different Experiences with Books

Clay Shirky offers Tolstoy's War and Peace as an example of great literature (though I think it is popularly better known for its length than its literary merit). He calls it "boring, too long, and overrated." Why should we care about such quaint things as novels? Although I think Shirky is overly glib on this point, I won't argue it here.

I will, however, agree completely with Carr that Shirky either isn't what I'm calling a deep reader, or, if he has cultivated that ability, he chooses not to use it very often anymore, at least not on books. "Being able to read" is very different from deep reading. In an interview he admitted to spending his childhood mostly entertained by Gilligan's Island and only much later discovering technical books, presumably reading them in a piecewise manner to learn specific practical skills.

Quaint artifacts like myself, who spent so much time and effort learning to read deeply and actively striving to understand the mind and knowledge of authors, have a different experience of books than people who read in a more passive manner. I don't think there's any doubt of that. So even if War and Peace were something he could appreciate, if he had never developed the expertise for deep reading he wouldn't be able to get through it in a manner that really engaged the author, as we do in deep reading. Without that different experience of books he may not feel he is missing anything important (that is precisely the point at stake, after all), but I think we can all agree that some people do still manage to immerse themselves in a novel and so clearly engage a book differently than Clay Shirky does, and that tradeoff is a key point that Nicholas Carr is making.

Hypnotized!

This reminds me of the situation I discovered when I engaged the hypnosis research years ago. I discovered that there was a big difference in technical theories of hypnosis. Some of them took the phenomena of hypnotic suggestibility for granted and tried to explain them in psychological terms. Others assumed that the phenomena were faked or pretended and tried to explain that in psychological terms. When it came down to the difference, it was really mostly a matter of whether the researcher responsible for the theory had experienced the phenomena themselves or not. Researchers who had interesting experiences with hypnosis knew the phenomena were real and wanted to explain how they arise. Researchers who didn't experience the same thing assumed everyone else must be faking it as well. It turned out from the research that there is a stable trait-like quality, "hypnotizability," that makes the experience of hypnosis very different for different people. Each researcher was originally working to explain their own personal experience, as if it were universal.

The hypnosis story is perhaps not just a metaphor. One of the theories, proposed by researcher Josephine Hilgard, was that hypnosis involves an immersion similar to that found in some kinds of reading.

Deep Reading, Non-Fiction, and Expertise

Deep reading is most certainly not just about immersion in a story, however. Although I've read only a small number of novels, each of them made a major impact on me, so it seems to me the experience of immersive fiction is something we should not take lightly; it can be part of a formative process in our development. But although I have a deep respect for literature, I am not primarily a literature geek, I am primarily a non-fiction geek. And for me, that is where I get most concerned about Shirky's dismissal of deep reading as "old style" literacy.

What deep reading typically means to a non-fiction geek like me is essentially sitting down undistractedly with a book for an hour or so, making a concerted strategic effort to understand the mind of the author, who I assume has knowledge and ways of thinking about the subject at hand that I don't yet have. So the goal of deep reading is to treat a book as if it were a conversation with the author, where I start out confused about the subject, ask questions, look for answers, take notes, keep track of useful other sources (without actually reading them yet!) and in general try to create new representations in my mind regarding the subject matter, using the author as a guide.

This is the central skillset and habit set of the "auto-didact," the person who wants to live a life of self-determined learning. Sure we do browse a lot, and we do learn the skills of hunting and gathering information on the web as well as in books and journals and other people. But, to the critical point regarding Carr's argument, we also develop ways to pull ourselves away from the distractions and focus on learning new things more deeply when we recognize that need. If Carr is right, we are making ourselves increasingly unable to use that option.

The tradition of deep reading says that this kind of process of engaging a book by actively asking questions rather than just passively skimming content is a good thing. And I know from experience, both my own and that of many others, that it is far more difficult to do this in an environment of constant distractions. We just don't have the attention resources; it is too demanding a process, and our brain has finite attention capacity.

The expertise research says that this kind of engagement is not only a good thing for learning but absolutely essential for mastering new and different concepts. We don't just pile data into our brain; we have to create new representations of the material by active engagement with it. We know that passively skimming large amounts of related information does not accomplish this. This is the key technical point on which Shirky's glibness about the quaint, boring, linear mode of reading becomes most dangerous.

I'm not saying it is impossible to become an expert in anything without deep reading. I'm saying that deep reading greatly facilitates the process, and if we lose that ability which Shirky finds quaint, we will indeed have to learn new skills and habits and create a new intellectual ethic to replace it so that we can still acquire deep expertise without deep reading.

Carr's argument is that the shift in media technology will actually change our brains in a way that will make it either impossible or extremely unlikely that we will replace the level of cognitive processing we now enjoy through deep reading. I'm not sure I would go as far as Carr there, but I think we need to be far less glib than Shirky about it, and take the change seriously.

Experts on a subject don't just have more information in their head about a topic. They represent the information differently. That only comes from extended periods of thinking about the subject actively and coming to new insights. That means a particular way of using technology, not just leaving it to chance. No amount of following links to good sources and reading each of them superficially can accomplish what thinking about the material deeply and asking yourself strategic questions can do.

Physical Books Aren't Really So Special, Are They?

I think there are unique qualities to physical books that many of us have learned to exploit particularly well, but that doesn't mean that people can't learn to do similar things with other technologies.

I do think you can, under the right conditions, manage to acquire deep expertise from web technology the way many of us have traditionally done with books, and in some ways it even gives you significant advantages. You have better access to good sources, access to experts, interactive learning technologies, and the potential for quality feedback. These are all advantages of web technology, and some smart, motivated students have learned to make wonderful exemplary use of it.

But organizing the material for yourself and getting your sources together is not enough for real learning. You also have to know when to think about the material you are learning, to ask the right questions to achieve new insights, to use what learning researchers call metacognitive skills to evaluate your own learning and figure out the best thing for you to read or practice next, to find your own weaknesses and figure out how to improve. That's where the scarce attention resources are required, and where we need new habits and a new intellectual ethic to remind us how and where to focus on thinking about the material.

I think the Clay Shirkys of the world probably assume that the unburdening of the mind that comes with efficient use of web technology either allows them to accomplish this in a different way or else renders it obsolete.

And quite obviously people can and do learn many things without deep reading.

One argument is that the sorts of things they are learning may be different, and the depth at which they are mastering them may be different. I honestly don't know if that's true, and I don't think Carr has answered the question adequately with the evidence he presents in The Shallows.

I do know for a fact that the unique habits and intellectual ethic of old-fashioned "book scholars" lead to contemplation of a subject in greater depth. And I know that the manner in which we (with vanishingly few exceptions) use web and mobile technology is not at all conducive to that kind of contemplation. Is this nostalgia on my part to find this a little worrying, or is there a meaningful tradeoff going on?

What I don't know is what the real implications of that "depth" are for expertise, intelligence, and problem solving. Is it really just a quaint linear mode of thinking that we are losing in the post-literate age as Shirky suggests, or are we effectively going back to a pre-literate age in some ways by eliminating an essential reflective aspect of our thinking, as Carr tells us?

I don't know, but I suggest the answer might be available in terms of specific empirical questions, insofar as we agree on what we think the mind should best be doing. The problem is that the Shirky side and the Carr side may very well have different perspectives on what the mind should really be doing! I think this issue is important enough to keep the conversation open.

Towards a Resolution?

I'll leave you with the intriguing concluding thoughts from Wolf and Barzillai:


Encouraging Deep Reading Online

Here lies the crucial role of education. Most aspects of reading—from basic decoding skills to higher-level comprehension skills—need to be explicitly taught. The expert reading brain rarely emerges without guidance and instruction. Years of literacy research have equipped teachers with many tools to facilitate its growth (see Foorman & Al Otaiba, in press). For example, our research curriculum, RAVE-O (Wolf, Miller, & Donnelly, 2000), uses digital games to foster the multiple exposures that children need to all the common letter patterns necessary for decoding. Nevertheless, too little attention has been paid to the important task of facilitating successful deep reading online.

The medium itself may provide us with new ways of teaching and encouraging young readers to be purposeful, critical, and analytical about the information they encounter. The development of tools—such as online reading tutors and programs that embed strategy prompts, models, think-alouds, and feedback into the text or browser— may enhance the kind of strategic thinking that is vital for online reading comprehension.

For example, programs like the Center for Applied Special Technology's (CAST) "thinking reader" (Rose & Dalton, 2008) embed within the text different levels of strategic supports that students may call on as needed, such as models that guide them in summarizing what they read. In this way, technology can help scaffold understanding (Dalton & Proctor, 2008). Such prompts help readers pause and monitor their comprehension, resist the pull of superficial reading, and seek out a deeper meaning. For example, in the CAST Universal Design Learning edition of Edgar Allan Poe's "The Tell-Tale Heart" (http://udleditions.cast.org/INTRO,telltale_heart.html), questions accompanying the text ask readers to highlight words that provide foreshadowing in a given passage; to ponder clues about the narrator as a character in the story; and to use a specific reading strategy (such as visualize, summarize, predict, or question) to better understand a passage.

Well-designed WebQuests can also help students learn to effectively process information online within a support framework that contains explicit instruction. Even practices as simple as walking a class through a Web search and exploring how Web pages may be biased or may use images to sway readers help students become careful, thoughtful consumers of online information. Instruction like this can help young minds develop some of the key aspects of deep reading online.

The Best of Both Worlds

No one has real evidence about the formation of the reading circuit in the young, online, literacy-immersed brain. We do have evidence about the young reading brain exposed to print literacy. Until sufficient proof enlarges the discussion, we believe that nothing replaces the unique contributions of print literacy for the development of the full panoply of the slower, constructive, cognitive processes that invite children to create their own whole worlds in what Proust called the "reading sanctuary."

Thus, in addition to encouraging explicit instruction of deeper comprehension processes in online reading, we must not neglect the formation of the deep-reading processes in the medium of human's first literacy. There are fascinating precedents in the history of writing: The Sumerian writing system, in use 3,000 years ago, was preserved alongside the Akkadian system for many centuries. Along the way, Akkadian writing gradually incorporated, and in so doing preserved, much of what was most valuable about the Sumerian system.

Such a thoughtful transition is the optimal means of ensuring that the unique contributions of both online and print literacies will meet the needs of different individuals within a culture and foster all three dimensions of Aristotle's good society. Rich, intensive, parallel development of multiple literacies can help shape the development of an analytical, probative approach to knowledge in which students view the information they acquire not as an end point, but as the beginning of deeper questions and new, never-before-articulated thoughts.

Update June 13, 2010: "How to Read a Book" --> an interesting brief article that describes a reading method very similar to the one I taught myself and still use --> http://pne.people.si.umich.edu/PDF/howtoread.pdf

Update June 3, 2013:  Annie Murphy Paul wrote an article on this subject that I found helpful, coming to a very similar conclusion I think.

Update April 8, 2014: A Washington Post article featuring Maryanne Wolf's thinking about "slow reading."

Update April 19, 2014: "Is Reading Too Much Bad for Kids?" - just in case you need evidence that cultural values really are reversing to the point of taking reading for granted, de-valuing reading itself, and de-valuing the ability to do it well.

Saturday, June 05, 2010

Shirky vs. Carr: Battle for the future mind

The Wall Street Journal online is playing off two authors of recent books with opposing premises:

Clay Shirky, with his optimistic view of how the Internet is making us smarter by giving us more freedom to expand our minds, as discussed in his book, Cognitive Surplus

Nicholas Carr, with his more pessimistic take on how the Internet is making us think more shallowly, from his book, The Shallows

As both a fanatical book reader and a collaboration technology specialist, I appreciate both views, so this is a fascinating dialectic for me. So far I give the edge to Carr for realism and depth based on the articles, but I haven't read both books yet, and of course I have my own bias as well.

Both authors share the common understanding that the way we read shapes the way we think, but they take away different lessons from that. I am reading Carr now, and he seems to have a good handle on the tradeoffs between the high-velocity, high-diversity mode of web reading and the undistracted, reflective mode of book reading. I haven't read Shirky yet, so I don't know how realistic he is about the tradeoffs. His article seems a bit Panglossian.

Since I create web tech for clients for a living, I realize how powerful this technology is, but I am also often in a position to see how it affects their thinking. My biggest challenges in helping people think together using portals and knowledge management tech involve getting them to think about the content in a useful way rather than just clicking on things and making hasty choices. My objective in a good portal site is to encourage people to ask the right questions to solve the real problems faced by the team. Just putting links up on a page is helpful, but it makes a real difference to the outcome to have a structure that guides thinking.

Shirky seems to assume that greater cognitive freedom will automatically be used well because cognitive surplus will somehow make people want to be responsible for their own understanding.

Carr takes the view that media are not just neutral tools that can be used for better or worse; they actually impose a particular way of thinking and interacting upon us. So the freedom Shirky prizes has an unspoken structure to it that we take for granted. I think this has a very strong basis in scientific theory as well, from a variety of fields. The symbolic mind did not arise out of nowhere as part of the modern brain's macrostructure. We know from modern neuroscience research that the brain is far more plastic than we previously assumed, and we know that abstract symbolic thinking involved distinct changes in the brain. It is entirely plausible that over time the nature of each medium has shaped the brain in different ways. Every step had its tradeoffs.

Our assumption that in every age the tradeoff was always purely for the best ... that's the definition of a Panglossian view.

Maybe it's just my own bias as a book lover, but so far I think Carr has a more realistic view of the tradeoffs involved and better understands the real advantages and disadvantages of the different kinds of media.

In one sense, the power of web tech is very obvious. Authors often find me talking about their books and contact me to start a dialog. That's a pleasure I rarely had when writing in a more traditional format. I am exposed to far more interesting material on the web than I could ever manage to gather up in a bookstore. But then I do have to isolate myself for a while from the distractions in order to really dig into a good book and engage the mind of its author and think deeply about their points.

Those are the tradeoffs for me, and if Carr is right, future generations won't have both options.

I'm looking forward to reviewing the Carr book when I'm done.

Update 6/5: Jonah Lehrer had this excellent NYT review of Carr which so far I largely agree with: http://www.nytimes.com/2010/06/06/books/review/Lehrer-t.html

Update 6/6: Wen Stephenson on Boston.com with another good review:
http://www.boston.com/ae/books/articles/2010/06/06/the_internet_ate_my_brain/?page=full

Update 6/10: Carr's own blog "RoughType" has a list of reviews as well: http://www.roughtype.com/archives/2010/06/reviews.php and there's a wonderful conversation between Nicholas Carr, Jonah Lehrer and others on Jonah Lehrer's blog "Frontal Cortex" at http://scienceblogs.com/cortex/2010/06/the_shallows.php

Update 6/20: Jonah Lehrer reviews Clay Shirky:
http://scienceblogs.com/cortex/2010/06/cognitive_surplus.php

Saturday, May 29, 2010

Book Review: "Smart Choices" - a practical guide to making better life decisions

"Smart Choices: A Practical Guide to Making Better Life Decisions"

by John Hammond, Ralph Keeney, Howard Raiffa

Originally published 1999, Harvard Business School Press

Link to review on Amazon

There are a lot of books about decision making and problem solving, and the vast majority of them are mediocre. There are a number of decent, readable accounts that give some simple tips and teach you about psychological principles, but these typically have very little in the way of solid tools. We all know it gets boring when the math starts or when we have to start doing drills to learn basic skills. And the more specific these books get with their methods, the less useful they become for our own problems. Books that cater to our creative side can help us learn how to break out of ruts, but they are weaker on helping us make good decisions more consistently.

Then there are systematic formal approaches by academics based on mathematical techniques, and these tend to be the equivalent of textbooks. Decision theory, mathematical modelling, strategy, optimization, probability, statistics, etc. Great stuff. You can get a lot out of them if you put in the study, as far as useful tools and skills for hypothetical problems go, but actually applying their lessons when you face a real problem is another matter. As with most academic learning, practical transfer is left as an exercise for the reader. And applying formal methods in situations where we already have good instincts often rubs us the wrong way. Using a spreadsheet to choose a mate? If you actually were to study systematic decision making and acquire the skills and habits for using those tools, you would surely make good decisions more consistently. But would you be wiser at knowing when to use these methods?

Smart Choices is closer to the first type of book, a practical guide to principles, but it has the soul of a textbook. There are no footnotes, bibliography, or exercises, yet it treats the subject matter very seriously. Maybe that's also part of why many of the reviewers on Amazon found the book boring. The authors' discipline in focusing on what really works while building on solid theory is clear throughout the book. As a result of this unique approach, the book has two great strengths in my opinion.

First, it is a surprisingly concise and admirably simple presentation of decision theory, with virtually no mathematics required. That's a significant accomplishment in itself. The authors are deep experts in the technical aspects of formal decision making, but have chosen a small set of simple tools to illustrate very general principles. When dealing with uncertainty, you create a risk profile for each alternative, listing the likelihood and consequences of each outcome for that alternative. Ok, not exactly rocket science, but who among us ever actually does that to help think through uncertainty? If you can't decide from the risk profile, you create a decision tree by identifying the things you can control and the things that remain uncertain, along with their consequences. Very basic tools and advice, and very powerful, with some practical advice for dealing with the messy details. It isn't so much the tools themselves that are the point here; it is the straightforward advice the authors offer on how and when to use them. There is a lot of experience condensed into a small book here.
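
To give a flavor of how simple these tools really are, here is a minimal sketch in Python. To be clear, this is my own illustration, not code from the book, and the alternatives and numbers are entirely hypothetical:

```python
# A minimal sketch of a "risk profile", not code from the book.
# Each alternative gets a list of (probability, consequence) pairs;
# the consequences are boiled down to a single hypothetical number.

def expected_value(risk_profile):
    """Probability-weighted average of the consequences."""
    assert abs(sum(p for p, _ in risk_profile) - 1.0) < 1e-9
    return sum(p * outcome for p, outcome in risk_profile)

# Hypothetical alternatives: a sure salary vs. a risky venture.
risk_profiles = {
    "take the job offer": [(1.0, 60_000)],                    # certain
    "start a business":   [(0.3, 150_000), (0.7, 20_000)],    # uncertain
}

for name, profile in risk_profiles.items():
    print(f"{name}: expected value = {expected_value(profile):,.0f}")

best = max(risk_profiles, key=lambda name: expected_value(risk_profiles[name]))
print("Highest expected value:", best)
```

The arithmetic is trivial, and that is the point: the value is in the discipline of actually writing the profile down, and a close call like this one is exactly where your personal risk tolerance has to enter the picture.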

The second strength of this book is that the authors make an unusually successful effort to bridge the different decision-making genres, offering not only the outline of a formal process to guide you and specific tools to use within the process, but also very clear practical explanations of why the steps are done as they are. The book begins with the usual mantra of systematic decision methods: having a process is better than not having a process. Sort of like having a map is better than not having a map. Ok. But before they jump into the how-to part that makes this a small practical guide, they also make their process criteria explicit. The process must help you to:

1. Focus on what's important
2. Be logical and consistent
3. Acknowledge objective and subjective factors, and blend analytical with intuitive thinking
4. Require only as much information and analysis as necessary to resolve the dilemma
5. Encourage and guide the gathering of relevant information and informed opinion
6. Be straightforward, reliable, easy to use, and flexible

This sounds great, but how can a formal decision process accomplish these things? And do the authors really provide one that manages this feat? It is their systematic and serious attempt to actually meet these six criteria, and their relative success at doing so, that makes for the greatest strength of this book.

The way they attempt this is to define each of their process steps in very flexible terms, focusing on the critical relationships between the factors. Some trigger leads you to a loose problem definition with its associated concerns. The problem definition helps you identify means objectives (how you intend to meet your concerns). Means objectives help you figure out your more fundamental objectives. The objectives help you generate alternatives that meet those objectives. Analyzing consequences in various ways helps you evaluate the alternatives and even go back to generate new ones. Alternatives often have consequences that meet different objectives in different ways, so the authors offer ways of helping to make tradeoffs.
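
Just to make the shape of that flow concrete, here is how I might sketch the relationships as a simple data structure. This is my own sketch with hypothetical field names and examples; the authors present all of this as prose, not as a schema:

```python
# My own sketch of the relationships described above; the field
# names and the example are hypothetical, not the authors' notation.
from dataclasses import dataclass, field

@dataclass
class DecisionFrame:
    trigger: str    # whatever raised the issue
    problem: str    # the loose problem definition
    means_objectives: list[str] = field(default_factory=list)        # how you intend to meet your concerns
    fundamental_objectives: list[str] = field(default_factory=list)  # what you really want
    alternatives: list[str] = field(default_factory=list)            # generated from the objectives

frame = DecisionFrame(
    trigger="lease on the current office expires",
    problem="where should the team work next year?",
)
frame.means_objectives = ["short commutes", "room to grow"]
frame.fundamental_objectives = ["keep the team productive and happy"]
frame.alternatives = ["renew the lease", "move downtown", "go remote-first"]
# Analyzing consequences would then feed back into new alternatives,
# which is exactly the looping flexibility the authors build in.
```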

There is a lot of theory and experience buried in these seemingly simple ideas, and it would be very easy to miss their value if you haven't seen decision theory done less expertly in many other books. It is very easy to make the process too simple, too complicated, or too rigid, or to not provide enough guidance. I think the authors get it pretty much just right.

The reason it works in this book, in my opinion, is that by explaining the process in clear terms and not just providing the tools, the flexibility of the process becomes much clearer. It becomes obvious from the examples why you want to keep looking for better alternatives even in the later stages of the process, even as you eliminate alternatives that just won't work or just aren't as good as others. It becomes clear where and how to consider uncertainty. It becomes more evident where various thinking traps make their way into the process: by causing us to persevere at the wrong problem, by not considering important objectives, by not looking closely enough at the consequences of each alternative, by not considering tradeoffs, by missing relationships between decisions, or by failing to account for your own personal risk tolerance. The guidelines for the process help you avoid each of these problems by helping you focus on the right things at the right point in the process, but without making it so rigid that you fall into a completely different trap.
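
One of those habits, dropping alternatives that are no better than some other alternative on every objective, is easy to show in miniature. Again, this is my own illustration, not anything from the book, with made-up scores where higher is better on each objective:

```python
# Illustration only: drop alternatives that are "dominated", i.e. no
# better than some other alternative on any objective and strictly
# worse on at least one. Scores are hypothetical; higher is better.

def dominates(a, b):
    """True if a is at least as good everywhere and better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Scores on (affordability, comfort, flexibility), all higher-is-better.
alternatives = {
    "renew the lease": (2, 3, 1),
    "move downtown":   (1, 3, 1),   # dominated by "renew the lease"
    "go remote-first": (3, 1, 3),
}

survivors = {
    name: score for name, score in alternatives.items()
    if not any(dominates(other, score) for other in alternatives.values())
}
print(survivors)  # "move downtown" drops out; a real tradeoff remains
```

Notice that a genuine tradeoff remains after the pruning; that is exactly the point in the process where the authors' advice on weighing tradeoffs takes over.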

There is no magic problem solving or decision making method that will solve your problems for you, but following the advice in this book will at the very least help you focus on the right things, ask the right questions at the right time during the process, and help explain your decisions better to others as well as to yourself. There are books that provide more details on specific tools, but this book stands out for its clear and practical presentation of the overall process of making decisions.

Thursday, May 20, 2010

Updated Book Review: David Shenk's "The Genius in All of Us"

Review of "The Genius in All of Us," by David Shenk, Doubleday, 2010.

An effective deconstruction of hereditary talent, and clues for a new model of exceptional ability

Link to the review on Amazon: http://www.amazon.com/review/R3DGLT1WYK6QRG/ref=cm_cr_rdp_perm

It is easy to like or dislike this book from a casual reading based on how you feel about the premise: that everyone has the potential for genius, and that heredity is not destiny in any sense. This sounds at first like a liberal political statement, but Shenk's treatment is far more nuanced than that characterization would imply.

In brief, Shenk's book is a very good deconstruction of hereditary talent, a competent but one-sided (or upon reflection I'll say very selectively focused) review of supporting research in several fields, and an interesting but abbreviated practical introduction to the interactionist (gene X environment) paradigm of development.

Just to be clear, this book is not about the psychometric definition of genius in terms of how far out on the bell curve one scores on Raven's Progressive Matrices or standardized tests of any sort. Nor is it about clever calculating tricks or precocious abilities, although it does do a very nice job putting those into a larger perspective. This book is more centrally about the expansive and inclusive sense of genius, meaning people who accomplish something truly special and significant, and the potential that any given person may be able to get to that point. Somehow. And that's where the nuance is needed and appropriate.

Ok, I didn't like this book all that much when I first read it, and I at first gave it a mediocre 3 star rating on Amazon. I felt it did a great job deconstructing the concept of hereditary talent, but I strongly criticized it for leaving a gap where we need a better theory of where talent comes from and what it is, since obviously we don't all become true geniuses. Even among the folks who appear to have the seeds of genius in them from early on, most don't become genius adults in the broader sense.

In my original review I said this was a one-sided review of the evidence for the interactionist model. I do think it's a very selective review, but one-sided implies that he deliberately ignores contradictory evidence. He doesn't do that. He just doesn't talk much about the evidence that led to the model Shenk says is obsolete, the model of genes as blueprints. That is, the evidence that different allele variants sometimes have strikingly specific effects across a seemingly "normal" range of environments. The case for the model of heredity that Shenk is deconstructing is not entirely ignored, but it is glossed over in order to make his case for the interactionist model. I think that is why hereditarians like Galton, Spearman, and Charles Murray get so apparently shorted in this book; Shenk focuses entirely on what they get wrong and glosses over the things they may get right.

I suspect that we do inherit "predispositions" in some form under a very wide range of conditions, even if the underlying mechanism is more complex than we previously assumed. Even if changes in environments do alter the expression of genes, something like inheritance of traits clearly does happen in a wide range of "normal" environments, and we can't just ignore that completely because of additional complexity and things that change at the extremes. That's why I say this is a very selective review. But no, it isn't really one-sided, the selectiveness is appropriate for a deconstruction, although it does mark this as a deconstruction rather than a scholarly review.

The more important problem is that the model of talent that arises from this book is not particularly easy to understand. The author is strongly against thinking of genes as predispositions, and instead offers the perspective that genes are akin to "settings." So it would be easy to conclude that the author is saying we have the ability to make anyone a genius just by tweaking a few settings. He isn't. Or, if you read it as I did upon my first reading, you might hear the author saying that "anyone can be a genius, but talent is a complicated process, we don't know what is happening at each step, and so we don't know how to help people get there, but we know it's possible." That's perhaps a little closer to the truth, but it didn't seem very helpful to me.

The reason I updated this review and why I'm now expressing more appreciation for David Shenk's accomplishment here is that while the "settings" model of genes doesn't quite convey the message, I did find upon close reading and careful reflection that the author captured a lot with his examples and case studies of individuals. The thing that is missing is some way of tying together how people manage to select and shape environments for themselves to accomplish great things, in spite of all the cultural, social, and physical constraints that tend to make environmental factors very hard to change for most of us. Shenk assiduously avoids attributing "predispositions" to genes, but then speculates that epigenetic factors may predispose us to things like musical ability. If non-genes can do this, why not genes? He just seems a little *too* intent on crushing hereditary talent in some places.

Geniuses don't just see things differently (although that is sometimes also going on), and they don't just have unique abilities (although sometimes they do); geniuses are most distinct in that they manage to carve their own niche, exploiting their own uniqueness in a process where they are driven to mastery and are amazingly persistent, even where the goal seems way out of reach. This runs contrary to our popular wisdom that it makes sense to work toward small, easily attained goals in most things. What we think of as really deep talent actually requires really deep faith in the long-term process and the motivation to keep going. Shenk captures the significance of motivation, but I had to look very closely to see the patterns for it. It requires willingness to do things that others may find bizarre and to learn freely from what is available. The author illustrates this but seems to have a hard time really tying it all together, at least he did on my first reading. I've come to think of it in terms of niche construction, which to me really captures what exceptional people do that brings out and shapes their unique gene x environment combination in a targeted way. My reversal in the rating reflects my feeling that capturing this idea is more important than giving it a catchy name, which is really what the author is missing.

We don't know exactly how to take advantage of the dynamic nature of heredity and development, although the study of achievement and expertise reviewed by Shenk gives us many tantalizing clues to go on. And if knowing that the potential is there inspires the faith to keep going, then more and more of us will eventually learn to become better and better at using our minds, constructing our own niches from our own individuality, and the promise of "The Genius in All of Us" will eventually begin to be realized. There is a lot in this book that will repay careful reading and re-reading, as I discovered by doing exactly that.

Related Reading:

See also this classic manifesto of genetic interactionism: The Triple Helix: Gene, Organism, and Environment (Richard Lewontin, Harvard University Press, 2000)

This superb earlier popular introduction to the emerging model Shenk offers: The Agile Gene: How Nature Turns on Nurture (Matt Ridley)

This similar treatment of trait development in interactionist terms, but focused on personality: The Temperamental Thread: How Genes, Culture, Time and Luck Make Us Who We Are (Jerome Kagan)

This alternative and original interactionist account of how personality develops: No Two Alike: Human Nature and Human Individuality (Judith Rich Harris)

This interesting challenge to some widely held assumptions about influence: Stranger in the Nest: Do Parents Really Shape Their Child's Personality, Intelligence, or Character? (David Cohen)

This little-known treasure by an old friend that offers its own unique challenges about human uniqueness and what it means: Rebellion: Physics to Personal Will (Brody)

This on the classic view from the perspective of behavior genetics: Genetics and Experience: The Interplay between Nature and Nurture (Robert Plomin)

This on the fascinating broader biological implications of interactionism from a gene perspective, how the genes of organisms construct niches even beyond the organism itself: The Extended Organism: The Physiology of Animal-Built Structures (J. Scott Turner)

And finally this wonderful broad account of biology and the role of heredity that appreciates the complexities of gene function in a demanding but uniquely engaging way: The Logic of Life (François Jacob)

Sunday, March 21, 2010

Book Review: The Architecture of Learning by Kevin Washburn

Link to review on Amazon

There's a fair amount that we know about learning, and we often place great importance on teaching, but we seem to take for granted the way teaching and learning are related. It is not at all obvious how teaching leads to learning (assuming that it does!) or what is the best way to create an environment for learning. We do have a lot of relevant evidence, but it is rare to see it put together in a useful way.

That's why this is such a wonderful book, the kind that not only teaches you something useful but uses its own principles as an example to do so. The Architecture of Learning illustrates in simple but detailed terms what it takes to get from raw experience to practical skills and knowledge and then provides you with tools to do the same.

Successfully applying his own teaching insights to the structure of this book, Washburn leads you, chapter by chapter, on a journey to a personal understanding of the learning process: first introducing the core processes needed for learning, then progressively deepening and expanding the discussions and examples, using them in different ways, and relating them to previous experience.

Many theories of learning emphasize how it builds on previous learning. This book illustrates the process with specific examples and analogies that bring it to life. Most interestingly to me, in the process Washburn effectively links his general learning model with the concept of expertise and its emphasis on the difference between the way experts and novices represent knowledge. He expands on the expertise model by not only acknowledging the critical value of deliberate extended practice, but also describing what happens before and after practice that makes the practice effective and allows it to transfer to real applications.

By simplifying the process and making particular stages clear, Washburn helps instructors understand the prerequisites for making practice successful (often neglected!), and what they need to do to make it useful outside of educational settings (rarely accomplished!).

The strong point of this book is the way it synthesizes and consolidates major theories of learning in a useful practical way ... The composite model used here describes 4 essential interacting and iterative processes required for learning: (1) accumulating "reference experiences" upon which further learning can be based, (2) labelling and sorting our experience, (3) relating new ideas to past experiences to create deeper understanding, (4) applying what we know in increasingly broader and more realistic contexts.

Each of these subprocesses produces different kinds of outcomes used in the overall learning process. (1) Experience produces reference experiences. (2) Labelling and sorting produce sequences of key points that help us comprehend. (3) Relating ideas generates the understanding we need in order to usefully practice and apply new skills and ideas. (4) Rehearsing the application of new skills and ideas with feedback makes learning available for real situations.

The meat of The Architecture of Learning is then the blueprints that make use of this model. Learning new skills requires a somewhat different focus on these processes than learning subject-matter content does, even though the same basic processes still apply. The blueprints provide a general framework for designing instruction so that the essential learning processes are all engaged, with the proper focus for the type of subject matter. For example, skills require more rehearsal and less relating to past experience than non-skill content, but relating to past experience is still needed at various points to produce the intermediate outcomes needed to prepare for effective rehearsal. This is all taken into consideration by the useful general blueprints in The Architecture of Learning.

My biggest concern reading this book was trying to come up with a good strategy for sizing the units of material that this approach would apply to. Breaking some material into 16 or so instructional segments as done in these blueprints would obviously be excessive, although the exercise of doing so might still be useful for an instructor for its own sake. There's a lot of room for judgment here in just how to take a curriculum and divide it up into units that can be structured using these patterns.

Perhaps the most exciting chapters for me are the ones on creative and critical thinking. I really like the way this book makes it clear that various thinking skills are used together with content to deepen learning. I think that's one of the biggest missing pieces in the way learning is often represented. People realize they're missing something in learning, but they seem to look for the missing piece in strange places like "unconscious" learning rather than looking more closely at the constituent skills we use to process ideas. There's a lot of room for thought here, and a very useful general approach for thinking about designing effective instructional experiences.

Tuesday, February 16, 2010

How To Make Better Decisions For Your Health


Absolutely, the decision making process is the bottleneck in health matters, and knowing how to make better use of our own behaviors is the only practical individual solution. Of particular significance, most people are capable of learning to make good decisions, but our reasoning powers falter under conditions of deprivation and distraction, where our cognitive unconscious, habits, and heuristics take over even in the best decision makers. Each of us has to learn to: (1) make good health decisions for when we can reason well, (2) do critical activities like shopping under optimal reasoning conditions, (3) know our automatic habits and heuristics and take advantage of them deliberately, and (4) install better habits where it makes sense.
Read the Article at HuffingtonPost

Saturday, January 23, 2010

Minds, Brains, and Learning: A foundational book on brain science and education

Review of: Minds, Brains, and Learning, by James P. Byrnes

Link on Amazon: http://www.amazon.com/Minds-Brains-Learning-Understanding-Neuroscientific/dp/1572306521/ref=cm_cr-mr-title

Nearly a decade after its publication, this book remains for me one of the very best basic resources for those who want to understand how we learn in terms of brain function, and in general the educational relevance of neuroscience.

Brain science remains a potentially important source for expanding and revising our psychological theories of learning. On the other hand, a lot of enthusiastic authors have made excessive claims in recent years about brain science research providing us with new ways of thinking about learning, and most of these claims are hype, many falling into the category of popular "neuromyth" as they become more widely accepted.

It was once the case that talking about human life in terms of brain function would immediately inspire resistance. We don't like to think of ourselves in mechanical terms. The popular enthusiasm for brain science in popular books over the past two decades has reversed this bias in some ways; now "brain-based" has become a common marketing term. But basing principles on brain science requires more than just throwing a few technical terms around and quoting from press releases for recent research.

Byrnes' wonderful book sets out to give the foundation of the major research programs relevant to both neuroscience and education, point out the important ideas they test, evaluate how those tests have fared, and come to balanced conclusions about the results. This is a far cry from the typical popular "brain-based" book, which gives you a bum's rush tour of neuroanatomy, a highly selective review of certain research, and then a wildly speculative theory followed by the author's personal favorite principles (which in many cases I think these authors would have promoted even without their "brain-based" research).

This is one of the few books that actually does look closely at brain science as a source of information about educational principles and comes to useful conclusions about what we know and what we still need to find out.

Some highlights:

1. Brain research by itself cannot support particular instructional practices, although it can support particular psychological theories which can in turn be used to design more effective forms of instruction.

2. Although we know that complex stimulus environments in early life alter brain structure, we still have no idea how to make good use of this to children's advantage. Common claims for special benefits of exposing infants to music, physical activity, and so on have not yet been supported by reliable evidence.

3. The synaptic basis of learning is largely irrelevant to particular modes of instruction. Principles of instruction are better based on psychological principles such as deliberate practice and creating elaborate multi-coded representations, which are still consistent with neuroscience but not dependent upon its details.

If you have an interest in education and want to understand the real fundamentals of how neuroscience data relates to educational practices, I think this book should be one of your stepping stones.