Monday, June 21, 2010

Book Review: "The Shallows" by Nicholas Carr

Beautifully written reminder that each medium has its tradeoffs
by Todd I. Stark

Amazon review --> http://www.amazon.com/review/R2Z3CVIADXIWI1/ref=cm_cr_rdp_perm

When I first came across this book I noticed that a lot of my friends on social media were expressing disgust or boredom with the thesis of "Is the Internet frying our brain?" After all, who but a curmudgeon would claim that the most vital and transformative technology of our time might have a dark side, especially at a time when leading-edge educators are working furiously to bring their field up to date by incorporating the best of the latest technology in ways that improve education?

Against this background Carr's book seems reminiscent of those poor backward folks who opposed the printing press. As the brilliant and funny curmudgeon Neil Postman once said about himself, Carr is indeed playing the role of the Luddite in some ways. Still, neither Postman nor Carr was trying to dismantle the Internet or merely shriek an alarm. They are trying to help us understand something important. With that in mind, let's take a more careful look at this book. The Shallows is a thoroughly and broadly researched, beautifully written polemic that I found to represent two different things. First, it is a media analysis and culture critique. Second, it is a pessimistic theory about the overall effect of web media on our thinking ability over time.

The first aspect will be a delight for those interested in the evolution of human cognition, those fascinated with media effects per se, the traditionally minded book scholars, and assorted geezers. It is a very satisfying cultural media critique very much in the spirit of Marshall McLuhan and Neil Postman, even though it lacks McLuhan's showmanship or Postman's remarkable ever-present humor. It was this aspect that made the book a worthwhile reminder for me, introduced me to some fascinating recent cognitive science work supporting the view that different media encourage different ways of thinking, and helped tie together a number of broad ideas for me regarding the evolution of human cognition and the influence of the tools we use.

The second aspect, aimed at the more technically and psychologically minded, and the more alarmist and pessimistic part, is a clever argument for competing and mutually destructive habits of attention allocation: (1) the nimble web-browsing mind that constantly reserves attention and working memory for making navigational decisions and is exposed to massive amounts of information, and (2) the sustained attention ability that we learn with great effort over time for the purpose of reading and reflective thinking.

The second aspect is the one that most of the articles and marketing have been pushing, a thesis I'll call "Help! The Internet is Frying My Brain!" Carr argues that the nimble web mind better exploits our more natural "bottom-up" or stimulus driven attention mechanisms, which is why we find it so powerful. He also argues that the undistracted reflective mind is far less natural but has unique advantages for human cognition. So it is worth retaining, he argues, _and_ we need to keep working deliberately at it in order to retain it. That alone would be an important point. Thus far, I think the attention argument is completely consistent with the media critique, and supports it. None of this so far says that our brain is being fried by the Internet.

Now comes the trickier part, and the part of Carr's thesis that to me is most controversial: the two ways of using attention may not only compete but may actually be mutually destructive. Carr offers his own experience and that of several other serious book readers to show that they are having increasing trouble reading for prolonged periods. Carr says there is neuroscience data suggesting that this may be the result of web reading rather than just advancing age or other less ominous explanations.

This "fried brain" thesis is the part that is either revolutionary, or becomes the fatal flaw in The Shallows, depending on whether or not it is true. So is it true? Does Carr persuade us that not only are we thinking differently with different media (a very strong case I think) but that the Internet is frying our brains?

Today we remember the iconic wise curmudgeon, Socrates, only through his students. That's because old Soc didn't believe in writing. It seems he was a great proponent of contemplative thought and taught that contemplation depends heavily on memory. He thought it would seriously hurt people's memory to rely too much on writing things down. His criticism seems perverse today, even as we remember Soc fondly for his deep reflection and his provocative teaching methods. That's the historical role into which Nicholas Carr has cast himself, the media critic who invokes wisdom and reflection and plays them against seemingly unstoppable cultural trends towards greater convenience, efficiency, and information distribution.

Carr is the guy who wants to warn us about the hazards of writing on our memory. About the damage that the printing press will do to culture. About how TV will change us for the worse. And now about how the Internet will shift our values, instill bad habits, hurt our reading and thinking skills, and even destroy our powers of sustained concentration.

Socrates wasn't entirely wrong even though he bucked a trend that in retrospect was downright silly to oppose. People who don't specifically practice remembering things and instead commit everything to writing do find that they have weaker memories. That's the reason for all those memory courses, the best of which essentially just teach the same methods Socrates would have used. The widespread distribution of news did have negative consequences in terms of reinforcing bias and propaganda on a massive scale.

There are some adverse consequences of all the TV watching we do. However, none of these things has had the dire consequences that culture critics predicted; we have adapted in turn, more or less successfully, to each of them.

So Carr isn't entirely wrong about the tradeoffs involved in using modern technologies. He is not a "Luddite" and he does make a number of valid points.

Carr is not telling us to dismantle the Internet. He fully recognizes the value of technology. He is rather playing Socrates to the modern students. Most people, desperately trying to keep up with the amazing new technologies and learn new ways of getting better information with them, will ignore Carr's message pretty much out of hand. "Carr is the only one affected negatively by the Internet, the rest of us are thriving."

Those folks who ignore culture critics out of hand are taking for granted the skills and expertise that many people have cultivated through sheer effort using sustained concentration. They are buying into the attractive fashionable modern viewpoint that just being exposed to a lot of information via technology will make you smart. The majority of people, the ones who go along with that implicit confusion of information and personal knowledge, will indeed lose some of the things we take for granted today. I think Carr is right about that, and that is the most profound message in this book. LISTEN TO IT. Even if you think, with good reason, that it is silly to imagine that using search engines and hyperlinks will hurt your concentration.

Still, the message that the Internet will make us stupid isn't quite right. Writing didn't entirely destroy our memory; it just shifted the habits we need to cultivate to preserve it. It seems the wisest among us will recognize the value that culture critics like Carr have always had; they will appreciate the detail and care that good media critics like Carr put into their warnings; and they will remember the real tradeoffs between different kinds of media and take responsibility for the cultivation of their own minds.

Just as wise modern students still practice the methods used by Socrates, they will still learn to read and think deeply using books or their electronic equivalent; the wisest will still turn off the TV and other distractions when sustained concentration is called for; and they will understand the differences between the various kinds of media and use each to its best advantage.

So long as we aren't stupid enough to stop cultivating our individual minds regardless of technology changes, media itself will not make us stupid. Listen to Carr's message, learn it, and then apply it to your use of technology. It's easy to dismiss the claim that the Internet will somehow fry your brain. It's another matter entirely to dismiss the value of cultivating your mind through personal reflection.

Update 5/14/2013:  Dan Willingham had a brief post on this topic.  His view seems fairly close to mine in most respects.  He concludes that sustained attention may not be the skill of greatest importance in the future,  but it may well be the one in shortest supply!

As moderate and reasonable a critique as that seems to me (he does not seem to agree with Carr's more radical point about web tech physically or unavoidably changing our attention), I noticed that some of the comments on his article seem to exemplify the problem. Some people do seem to see the concern about sustained attention as some sort of old-fashioned culture critique ("you darn kids get off my lawn!"), so much so that they won't bother reading the details of the actual argument.

Dan Willingham: The 21st century skill students really lack


Related background reading:

On the evolution of cognition and symbolic thought (and secondarily, the role of reading):[[ASIN:0393323196 A Mind So Rare: The Evolution of Human Consciousness]][[ASIN:0393317544 The Symbolic Species: The Co-Evolution of Language and the Brain]]

On reading and the brain:[[ASIN:B003H4RAOU Reading in the Brain: The Science and Evolution of a Human Invention]][[ASIN:0060933844 Proust and the Squid: The Story and Science of the Reading Brain]]

On the role of tools in cognition:[[ASIN:0195153723 Adaptive Thinking: Rationality in the Real World (Evolution and Cognition Series)]]

On the role of media technology in culture:[[ASIN:0262631598 Understanding Media: The Extensions of Man]][[ASIN:0679745408 Technopoly: The Surrender of Culture to Technology]]

On the trend to rising IQ scores in modern times:[[ASIN:0521741475 What Is Intelligence?: Beyond the Flynn Effect]]

On the practical limitations of human working memory:[[ASIN:0061771295 Your Brain at Work: Strategies for Overcoming Distraction, Regaining Focus, and Working Smarter All Day Long]][[ASIN:0195372883 The Overflowing Brain: Information Overload and the Limits of Working Memory]]

Saturday, June 12, 2010

Are we taking knowledge and expertise for granted?

On the New York Times Opinion page, Steven Pinker weighs in on the Internet optimists vs. pessimists issue. He appreciates the value of intellectual depth, but he doesn't think the brain is nearly "plastic" enough to be reshaped fundamentally by the tools we use; he doesn't believe there are general abilities that are affected by experience. "Experience does not revamp the basic information-processing capacities of the brain," he insists. He also points out that "the effects of experience are highly specific to the experiences themselves."

In making these claims, Pinker reminds us that he joined Leda Cosmides and John Tooby enthusiastically as one of the founders of the most extreme version of cognitive evolutionary psychology (CEP), the "modular brain" theory. We know that the brain has all sorts of very specific specializations, but the notion of opaque, independent functional modules is far from universally accepted. Authors who have offered intelligent, scholarly critiques of CEP, or who stress the importance of non-modular aspects of brain function, include Kenan Malik, David Buller, Merlin Donald, Terrence Deacon, Terrence Sejnowski, and Jeffrey Schwartz. The contrast between Deacon and Pinker on the role of language in the evolution of the mind is particularly interesting.

My point is not at all to "debunk" CEP by presenting people who offer other kinds of theory, since I think CEP is a viable concept that probably gets some things right regarding the evolution of the mind. My point is that it seems too radical to claim that the mind and brain are simply and entirely modular in the way they would have to be for Pinker's statements above to be completely true. Pinker wants us to believe that the brain is modular and that experience cannot affect general abilities, yet he can't help using the term "deep reflection." It is difficult to imagine how such a thing as "deep reflection" even makes sense in the modular, independent architecture Pinker insists protects our intellectual functions from the potentially deleterious effects of experience.

Pinker even explicitly acknowledges the work it takes to develop intellectual depth:

It’s not as if habits of deep reflection, thorough research and rigorous reasoning ever came naturally to people. They must be acquired in special institutions, which we call universities, and maintained with constant upkeep, which we call analysis, criticism and debate.

If it takes so much work to develop intellectual depth, how is it reasonable to then also argue that our thinking can't be affected by experience, or by activities that detract from this developmental process?

Yes, he is right that the brain has limits to how much it can be reshaped by experience, but I think he significantly overstates the case. The issue is about specifics, not the general principle of neuroplasticity. What specific effects do specific activities have on our mind and brain over specific time periods?

Pinker may very well be right that the web is not itself deteriorating our reflective thinking ability the way Nicholas Carr argues it is. Carr perhaps goes over the top when he says that his failing ability to concentrate is specifically due to his use of the web. However Pinker goes too far when he implies that the idea is simply silly in principle. It remains an empirical question, not just a conceptual one, unless we're replacing cognitive neuroscience with Pinkerist modularism.

Pinker also misses a much more important point, that our attitude toward technology affects the way it shapes our daily life. He assumes that those university activities he takes for granted will continue to be valued just because he himself takes their value for granted. The university was not always there, and there is no reason to assume it will always be there if we stop arguing for its value.

Pinker's ironic conclusion:

And to encourage intellectual depth, don’t rail at PowerPoint or Google. It’s not as if habits of deep reflection, thorough research and rigorous reasoning ever came naturally to people. They must be acquired in special institutions, which we call universities, and maintained with constant upkeep, which we call analysis, criticism and debate. They are not granted by propping a heavy encyclopedia on your lap, nor are they taken away by efficient access to information on the Internet.

The new media have caught on for a reason. Knowledge is increasing exponentially; human brainpower and waking hours are not. Fortunately, the Internet and information technologies are helping us manage, search and retrieve our collective intellectual output at different scales, from Twitter and previews to e-books and online encyclopedias. Far from making us stupid, these technologies are the only things that will keep us smart.

Is it really our knowledge that is increasing exponentially, or is it available information?

The fact that Pinker seems to conflate the two is exactly the question-begging that most fundamentally ignores the central question of modern culture critique: are we gaining more knowledge because we are exposed to more information?

Against the cultural critics, the smart among us have always managed to take responsibility for their own minds and organize the available information and think deeply enough to create meaningful individual knowledge from it.

Against Pinker and the others who think cultural critics are just "panicking," the fact that some smart people always manage to cultivate knowledge in spite of the challenges offered by new tools doesn't mean that everyone else will automatically inherit that ability.

If we assume that simply having access to a lot of information will make us smart (this is not an exaggeration, it is literally how many Internet optimists think), we will end up missing the real issue.

The real issue is not whether the Internet fries your brain; there is no really good evidence so far that it does. The real issue is whether we recognize and continue to appreciate the work it takes to cultivate knowledge and expertise, or whether we take these things for granted.


Update: Nicholas Carr responds to Pinker on his own RoughType blog.

Update: Commentary by Nick Bilton with some useful references, argues sensibly that each form of media has its potential unique value - http://bits.blogs.nytimes.com/2010/06/11/in-defense-of-computers-the-internet-and-our-brains/?ref=technology

Update 6/15: Some of the reviewers do more thoughtfully reflect on bigger-picture issues and ask somewhat deeper questions than just the alarmist one of whether the Internet is "frying our brains." See this review in The New Republic by Todd Gitlin: "The Uses of Half-Truths."

Also: Nicholas Carr and Douglas Rushkoff respond to Pinker on EDGE.

Sunday, June 06, 2010

Good riddance to "Deep Reading?"

My previous post highlighted Nicholas Carr's and Clay Shirky's recent books, which seem to agree that the Internet, the Web, and related technology are changing our daily habits and also the way we think.

Carr suggests we are gaining a lot but also losing something important. Shirky counters: "good riddance!" to whatever we might be losing.

Maybe it's just a touch of reactionary temperament in me, but for me this brings to mind this sage advice about change:
"Don’t ever take a fence down until you know the reason why it was put up."

I'm not exaggerating Shirky's view, nor is he the only one to hold it, so I think it is worth looking at this question in a little more detail.

What is it that we agree we are losing in order to gain faster, more agile, more network-literate minds, and why does Carr think it is important and Shirky does not?

What's the Argument About?

The essence of Carr's argument as I interpret it is that:

1. Our daily habits, the tools we use, and the ethic with which we use those tools have a considerable shaping influence on the way human beings think.

2. This shaping of our thought by our tools is not an empty metaphor; it has a literal aspect because of the "plastic" nature of the brain that has been discovered in recent decades.

3. Prime examples include the shift from oral to written culture, the use of maps, the use of clocks, the widespread availability of books, the use of audio and video media, and, most recently, mobile, constantly online, hyperlinked multimedia networking.

4. As each type of media proliferates, the people who use it gain new ways of thinking through new habits, new patterns of attention allocation, new ethics regarding how information and knowledge are related and what it means to be smart and informed.

5. These gains do not generally co-exist with the previous ways of thinking; they greedily force the old ways out. This happens for two kinds of reasons: (1) the economics of production and acquisition of technology, and (2) the plasticity of the brain. Over time our habits carve increasingly deeper ruts that shape the way we think. New habits replace old and new ways of thinking replace old, essentially as a matter of making efficient use of brain tissue.

I think the argument so far is pretty sound, although certainly one could quibble about any of the points. Carr finds examples to which each of these points does seem to legitimately apply.

The example that is most relevant, the one Carr starts with and the one on which he is most diametrically opposed to Shirky, boils down to the concept of "deep reading."

What is "Deep Reading?"

Deep reading refers to the tradition of reading long, structured written content in a focused, undistracted manner while using active learning skills. This is not a particularly natural mode for human beings, and so deep reading refers to an ability that requires considerable expertise to develop. Some reading skills that are naturally difficult must be automatized so we don't have to think about them and can devote precious cognitive resources to thinking about the subject matter.

The cortical resources for maintaining attention on linear written material without allowing distractions to derail us are significant. We all agree on this. Carr makes a point of it, and it is also a big part of why Clay Shirky finds web technology to be so freeing. Linear reading is very difficult until you become extremely expert at it. Many people who are faced against their will with the intellectual ethic of book scholarship have felt the same way Shirky does: that without the expertise, it just isn't worth all the effort. And furthermore they can't understand why it would be worthwhile to acquire the expertise if it is so difficult and takes so long.

To read effectively, reading has to become second nature so that the basic reading skills are applied without thinking in a matter of milliseconds. Like all forms of expertise, this takes many years of training, and most readers probably take that for granted. Some people enjoy the process much more than others. My baby book records my first sentence as an infant as being a complaint: "I can't read!" With my parents' help, I put a lot of time and effort into solving that problem, so although I enjoyed the journey, I appreciate why people who don't enjoy that sort of thing find it so intimidating to contemplate crossing that chasm of years of training to get into deep reading.

Why do some people care about Deep Reading?

What deep reading refers to in terms of cognition is "an array of sophisticated processes that propel comprehension and that include inferential and deductive reasoning, analogical skills, critical analysis, reflection, and insight." (Educational Leadership, March 2009, Wolf and Barzillai)

I think Clay Shirky would argue that the above abilities can be gotten, perhaps better, without the difficulties of deep reading. I hope he is right, but Nicholas Carr and I both suspect he is wrong about that. There's a kind of "no free lunch" argument here, I think. Web hyperlinking works so efficiently with our brain and gives us so much of what Shirky calls "cognitive surplus" because it leverages our natural inclination to get distracted by each new thing and then follow it. Rather than finding ways to fight our distractibility as we do in deep reading, on the web we let it drive us to explore new things. So we are exploring, hunting and gathering new information. We all agree that this is often a big rush and exposes us to potentially much more diverse information (and people!) in a shorter time than we would ever achieve buried in books in a library.

The issue at hand is whether we are learning from it in the same way as we would if we were reading undistractedly.

The Key Question

The key technical question upon which the whole argument rests becomes: Does the added efficiency of web information hunting and gathering provide a "cognitive surplus," an available attention resource that makes it easier to use the metacognitive skills we need for mastery of a subject?

Or, conversely, do the distractions of that mode of information hunting and gathering prevent us from using those specific metacognitive skills, and force us to think more "shallowly" in some sense?

For me the Carr vs. Shirky debate breaks down to the potential empirical questions generated by the above technical issues.

Is the Real Intelligence in the Network Rather than in Each of Us?

I suppose another way of looking at this is to question whether we really need to acquire deep expertise anymore, whether there is something "old-fashioned" and quaint about expertise itself. Maybe the new online literacy replaces expertise with some other quality that supports human thinking? I think that's intriguing, and there are probably some folks who think that way, just as there are people who say that our intelligence can somehow better be transferred into computers or networks with other people rather than embedded in individual minds.

These are interesting speculations, but they seem very hard to reconcile with the mass of learning and expertise research so far. Not that we can't offload our minds from our own brains; I do think that is a real possibility, and something we already do to a great extent. The argument that our tools shape our mind is partly based in this idea of offloading memory from our brain to external tools.

I just don't think it's a good thing to eliminate individual intelligence entirely. I think we still have good use for the processing we do inside our own skulls. Maybe that's where Carr and Shirky really differ most fundamentally? Maybe Shirky really wants to get rid of the individual mind and replace it with a node in a web, similar to the way ants work together in colonies?

Different Experiences with Books

Clay Shirky offers Tolstoy's War and Peace as an example of great literature (though I think it is popularly better known for its length than its literary merit). He says, "boring, too long, and overrated." Why should we care about such quaint things as novels? Although I think Shirky is overly glib on this point, I won't argue it here.

I will, however, agree completely with Carr that Shirky either isn't what I'm calling a deep reader, or, if he has cultivated that ability, he chooses not to use it very often anymore, at least not on books. "Being able to read" is very different from deep reading. In an interview he admitted to spending his childhood mostly entertained by Gilligan's Island and only much later finding technical books and presumably reading them in a piecewise manner to learn specific practical skills.

Quaint artifacts like myself, who spent so much time and effort learning to read deeply and actively striving to understand the mind and knowledge of authors, have a different experience of books than people who read in a more passive manner. I don't think there's any doubt of that. So even if War and Peace were something he could appreciate, if he had never developed the expertise for deep reading he wouldn't be able to get through it in a manner that really engaged the author as we do in deep reading. Without that different experience of books he may not feel he is missing anything important (that is precisely the point at stake, after all), but I think we all agree that some people do still manage to immerse themselves in a novel and so clearly engage a book differently than Clay Shirky does. That tradeoff is a key point that Nicholas Carr is making.

Hypnotized!

This reminds me of the situation I discovered when I engaged the hypnosis research years ago. I discovered that there was a big difference in technical theories of hypnosis. Some of them took the phenomena of hypnotic suggestibility for granted and tried to explain them in psychological terms. Others assumed that the phenomena were faked or pretended and tried to explain that in psychological terms. When it came down to the difference, it was really mostly a matter of whether the researcher responsible for the theory experienced the phenomena themselves or not. Researchers who had interesting experiences with hypnosis knew the phenomena were real and wanted to explain how they arise. Researchers who didn't experience the same thing assumed everyone else must be faking it as well. It turned out from the research that there is a stable trait-like quality, "hypnotizability," that makes the experience of hypnosis very different for different people. Each researcher was originally working to explain their own personal experience, as if it were universal.

The hypnosis story is perhaps not just a metaphor. One of the theories, proposed by researcher Josephine Hilgard, was that hypnosis involves an immersion similar to that found in some kinds of reading.

Deep Reading, Non-Fiction, and Expertise

Deep reading is most certainly not just about immersion in a story, however. Although I've read only a small number of novels, each of them made a major impact on me, so for me the experience of immersive fiction is something we should not take lightly; it can be part of a formative process in our development. But although I have a deep respect for literature, I am not primarily a literature geek; I am primarily a non-fiction geek. And for me, that is where I get the most concerned about Shirky's dismissal of deep reading as "old style" literacy.

What deep reading typically means to a non-fiction geek like me is essentially sitting down undistractedly with a book for an hour or so, making a concerted strategic effort to understand the mind of the author, who I assume has knowledge and ways of thinking about the subject at hand that I don't yet have. So the goal of deep reading is to treat a book as if it were a conversation with the author, where I start out confused about the subject, ask questions, look for answers, take notes, keep track of useful other sources (without actually reading them yet!) and in general try to create new representations in my mind regarding the subject matter, using the author as a guide.

This is the central skillset and habit set of the "auto-didact," the person who wants to live a life of self-determined learning. Sure we do browse a lot, and we do learn the skills of hunting and gathering information on the web as well as in books and journals and other people. But, to the critical point regarding Carr's argument, we also develop ways to pull ourselves away from the distractions and focus on learning new things more deeply when we recognize that need. If Carr is right, we are making ourselves increasingly unable to use that option.

The tradition of deep reading says that this kind of process of engaging a book by actively asking questions rather than just passively skimming content is a good thing. And I know from experience, both my own and that of many others, that it is far more difficult to do this in an environment of constant distractions. We just don't have the attention resources; it is too demanding a process, and our brain has finite attention capacity.

The expertise research says that this kind of engagement is not only a good thing for learning but absolutely essential for mastering new and different concepts. We don't just pile data into our brain; we have to create new representations of the material by active engagement with it. We know that passively skimming large amounts of related information does not accomplish this. This is the key technical point on which Shirky's glibness about the quaint, boring, linear mode of reading becomes most dangerous.

I'm not saying it is impossible to become an expert in anything without deep reading. I'm saying that deep reading greatly facilitates the process, and if we lose that ability which Shirky finds quaint, we will indeed have to learn new skills and habits and create a new intellectual ethic to replace it so that we can still acquire deep expertise without deep reading.

Carr's argument is that the shift in media technology will actually change our brains in a way that will make it either impossible or extremely unlikely that we will replace the level of cognitive processing we now enjoy through deep reading. I'm not sure I would go as far as Carr there, but I think we need to be far less glib than Shirky about it, and take the change seriously.

Experts on a subject don't just have more information in their head about a topic. They represent the information differently. That only comes from extended periods of thinking about the subject actively and coming to new insights. That means a particular way of using technology, not just leaving it to chance. No amount of following links to good sources and reading each of them superficially can accomplish what thinking about the material deeply and asking yourself strategic questions can do.

Physical Books Aren't Really So Special, Are They?

I think there are unique qualities to physical books that many of us have learned to exploit particularly well, but that doesn't mean that people can't learn to do similar things with other technologies.

I do think you can, under the right conditions, manage to acquire deep expertise from web technology the way many of us have traditionally done with books, and in some ways it even gives you significant advantages. You have better access to good sources, access to experts, interactive learning technologies, and the potential for quality feedback. These are all advantages of web technology, and some smart, motivated students have learned to make wonderful exemplary use of it.

But organizing the material for yourself and getting your sources together is not enough for real learning. You also have to know when to think about the material you are learning, to ask the right questions to achieve new insights, to use what learning researchers call metacognitive skills to evaluate your own learning and figure out the best thing for you to read or practice next, to find your own weaknesses and figure out how to improve. That's where the scarce attention resources are required, and where we need new habits and a new intellectual ethic to remind us how and where to focus on thinking about the material.

I think the Clay Shirkys of the world probably assume that the unburdening of the mind that comes with efficient use of web technology either allows them to accomplish this in a different way or else renders it obsolete.

And quite obviously people can and do learn many things without deep reading.

One argument is that the sorts of things they are learning may be different, and the depth at which they are mastering them may be different. I honestly don't know if that's true, and I don't think Carr has answered the question adequately with the evidence he presents in The Shallows.

I do know for a fact that the unique habits and intellectual ethic of old-fashioned "book scholars" lead to contemplation of a subject in greater depth. And I know that the manner in which we (with vanishingly few exceptions) use web and mobile technology is not at all conducive to that kind of contemplation. Is this nostalgia on my part to find this a little worrying, or is there a meaningful tradeoff going on?

What I don't know is what the real implications of that "depth" are for expertise, intelligence, and problem solving. Is it really just a quaint linear mode of thinking that we are losing in the post-literate age as Shirky suggests, or are we effectively going back to a pre-literate age in some ways by eliminating an essential reflective aspect of our thinking, as Carr tells us?

I don't know, but I suggest the answer might be available in terms of specific empirical questions, so far as we agree on what we think the mind should best be doing. The problem is that the Shirky side and the Carr side may very well have different perspectives on what the mind should really be doing! I think this issue is important enough to keep a conversation open.

Towards a Resolution?
I'll leave you with the intriguing concluding thoughts from Wolf and Barzillai:


Encouraging Deep Reading Online

Here lies the crucial role of education. Most aspects of reading—from basic decoding skills to higher-level comprehension skills—need to be explicitly taught. The expert reading brain rarely emerges without guidance and instruction. Years of literacy research have equipped teachers with many tools to facilitate its growth (see Foorman & Al Otaiba, in press). For example, our research curriculum, RAVE-O (Wolf, Miller, & Donnelly, 2000), uses digital games to foster the multiple exposures that children need to all the common letter patterns necessary for decoding. Nevertheless, too little attention has been paid to the important task of facilitating successful deep reading online.

The medium itself may provide us with new ways of teaching and encouraging young readers to be purposeful, critical, and analytical about the information they encounter. The development of tools—such as online reading tutors and programs that embed strategy prompts, models, think-alouds, and feedback into the text or browser— may enhance the kind of strategic thinking that is vital for online reading comprehension.

For example, programs like the Center for Applied Special Technology's (CAST) "thinking reader" (Rose & Dalton, 2008) embed within the text different levels of strategic supports that students may call on as needed, such as models that guide them in summarizing what they read. In this way, technology can help scaffold understanding (Dalton & Proctor, 2008). Such prompts help readers pause and monitor their comprehension, resist the pull of superficial reading, and seek out a deeper meaning. For example, in the CAST Universal Design Learning edition of
Edgar Allan Poe's "The Tell-Tale Heart" (http://udleditions.cast.org/INTRO,telltale_heart.html), questions accompanying the text ask readers to highlight words that provide foreshadowing in a given passage; to ponder clues about the narrator as a character in the story; and to use a specific reading strategy (such as visualize, summarize, predict, or question) to better understand a passage.

Well-designed WebQuests can also help students learn to effectively process information online within a support framework that contains explicit instruction. Even practices as simple as walking a class through a Web search and exploring how Web pages may be biased or may use images to sway readers help students become careful, thoughtful consumers of online information. Instruction like this can help young minds develop some of the key aspects of deep reading online.

The Best of Both Worlds

No one has real evidence about the formation of the reading circuit in the young, online, literacy-immersed brain. We do have evidence about the young reading brain exposed to print literacy. Until sufficient proof enlarges the discussion, we believe that nothing replaces the unique contributions of print literacy for the development of the full panoply of the slower, constructive, cognitive processes that invite children to create their own whole worlds in what Proust called the "reading sanctuary."

Thus, in addition to encouraging explicit instruction of deeper comprehension processes in online reading, we must not neglect the formation of the deep-reading processes in the medium of human's first literacy. There are fascinating precedents in the history of writing: The Sumerian writing system, in use 3,000 years ago, was preserved alongside the Akkadian system for many centuries. Along the way, Akkadian writing gradually incorporated, and in so doing preserved, much of what was most valuable about the Sumerian system.

Such a thoughtful transition is the optimal means of ensuring that the unique contributions of both online and print literacies will meet the needs of different individuals within a culture and foster all three dimensions of Aristotle's good society. Rich, intensive, parallel development of multiple literacies can help shape the development of an analytical, probative approach to knowledge in which students view the information they acquire not as an end point, but as the beginning of deeper questions and new, never-before-articulated thoughts.

Update June 13, 2010: "How to Read a Book" --> an interesting brief article that describes a reading method very similar to the one I taught myself and still use --> http://pne.people.si.umich.edu/PDF/howtoread.pdf

Update June 3, 2013:  Annie Murphy Paul wrote an article on this subject that I found helpful, coming to a very similar conclusion I think.

Update April 8 2014:  A Washington Post article featuring Maryanne Wolf's thinking about "slow reading."

Update April 19, 2014  "Is Reading Too Much Bad for Kids?" - just in case you need evidence that cultural values really are reversing to the point of taking reading for granted, de-valuing reading itself, and de-valuing the ability to do it well. 

Saturday, June 05, 2010

Shirky vs. Carr: Battle for the future mind

The Wall Street Journal online is playing off two authors of recent books with opposing premises:



Clay Shirky with his optimistic view of how the Internet is making us smarter by giving us more freedom to expand our minds, as discussed in his book, Cognitive Surplus



Nicholas Carr with his more pessimistic take on how the Internet is making us think more shallowly, from his book, The Shallows


As both a fanatical book reader and a collaboration technology specialist, I appreciate both views, so this is a fascinating dialectic for me. So far I give the edge to Carr for realism and depth based on the articles, but I haven't read both books yet, and of course I have my own bias as well.

Both authors share the common understanding that the way we read shapes the way we think, but they take away different lessons from that. I am reading Carr, and he seems to have a good handle on the tradeoffs between the high-velocity, high-diversity web-tech reading mode and the undistracted, reflective book reading mode. I haven't read Shirky, so I don't know how realistic he is about the tradeoffs. His article seems a bit Panglossian.

Since I create web tech for clients for a living, I realize how powerful this technology is, but I am also often in a position to see how it affects their thinking. My biggest challenges in helping people think together using portals and knowledge management tech involve getting them to think about the content in a useful way rather than just clicking on things and making hasty choices. My objective in a good portal site is to encourage people to ask the right questions to solve the real problems faced by the team. Just putting links up on a page is helpful, but it makes a real difference to the outcome to have a structure that guides thinking.
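As a rough illustration of that last point, here is a minimal sketch of the difference between a flat list of links and a structure that guides thinking. Everything in it is hypothetical: the names, URLs, and types are illustrative assumptions, not taken from any real portal or product I've built.

```typescript
// Hypothetical sketch: a flat link list vs. a portal section that frames the
// links with the question the team is trying to answer. All names and URLs
// are made up for illustration.

interface PortalLink {
  title: string;
  url: string;
}

// A flat list of links: convenient to click through, but it leaves the
// thinking entirely to chance.
const flatLinks: PortalLink[] = [
  { title: "Q2 incident report", url: "https://example.org/incidents/q2" },
  { title: "Customer survey results", url: "https://example.org/survey" },
];

// A guided section pairs the same links with a guiding question and a prompt
// that encourages active reading rather than hasty skimming.
interface GuidedSection {
  guidingQuestion: string; // the real problem this material should help solve
  whatToLookFor: string;   // a prompt that frames how to read the sources
  links: PortalLink[];
}

const guidedPortal: GuidedSection[] = [
  {
    guidingQuestion: "Why did support tickets spike in Q2?",
    whatToLookFor:
      "Compare the incident timeline against the survey complaints before drawing conclusions.",
    links: flatLinks,
  },
];

// Render as plain text just to show the framing; a real portal would render
// HTML, but the structure, not the markup, is the point.
for (const section of guidedPortal) {
  console.log(`Question: ${section.guidingQuestion}`);
  console.log(`Look for: ${section.whatToLookFor}`);
  for (const link of section.links) {
    console.log(`  - ${link.title}: ${link.url}`);
  }
}
```

The point of the structure isn't the rendering; it's that each link is anchored to a question the reader is supposed to answer, which is what nudges people toward engaging the content instead of just clicking through it.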

Shirky seems to assume that greater cognitive freedom will automatically be used well because cognitive surplus will somehow make people want to be responsible for their own understanding.

Carr takes the view that media are not just neutral tools that can be used for better or worse, they actually impose a particular way of thinking and interacting upon us. So the freedom Shirky prizes has unspoken structure to it that we take for granted. I think this has a very strong basis in scientific theory as well, from a variety of fields. The symbolic mind did not arise out of nowhere as part of the modern brain macro structure. We know from modern neuroscience research that the brain is far more plastic than we previously assumed, and we know that abstract symbolic thinking involved distinct changes in the brain. It is entirely plausible that over time the nature of each type of media has shaped the brain in different ways. Every step had its tradeoffs.

Our assumption that in every age the tradeoff was always purely for the best ... that's the definition of a Panglossian view.

Maybe it's just my own bias as a book lover, but so far I think Carr has a more realistic view of the tradeoffs involved and better understands the real advantages and disadvantages of the different kinds of media.

In one sense, the power of web tech is very obvious. Authors often find me talking about their books and contact me to start a dialog. That's a pleasure I rarely had when writing in a more traditional format. I am exposed to far more interesting material on the web than I could ever manage to gather up in a bookstore. But then I do have to isolate myself for a while from the distractions in order to really dig into a good book and engage the mind of its author and think deeply about their points.

Those are the tradeoffs for me, and if Carr is right, future generations won't have both options.

I'm looking forward to reviewing the Carr book when I'm done.

Update 6/5: Jonah Lehrer had this excellent NYT review of Carr which so far I largely agree with: http://www.nytimes.com/2010/06/06/books/review/Lehrer-t.html

Update 6/6: Wen Stephenson on Boston.com with another good review:
http://www.boston.com/ae/books/articles/2010/06/06/the_internet_ate_my_brain/?page=full

Update 6/10: Carr's own blog "RoughType" has a list of reviews as well: http://www.roughtype.com/archives/2010/06/reviews.php and there's a wonderful conversation between Nicholas Carr, Jonah Lehrer and others on Jonah Lehrer's blog "Frontal Cortex" at http://scienceblogs.com/cortex/2010/06/the_shallows.php

Update 6/20: Jonah Lehrer reviews Clay Shirky:
http://scienceblogs.com/cortex/2010/06/cognitive_surplus.php