Thursday, December 24, 2009

The wisdom of crowds vs. the distribution of expertise

Jason Flom has a very good brief post on 10 principles for the future of learning:

This is a good post that I'm not so much criticizing as using as a springboard to illustrate a trend I see in current epistemological thinking in popular culture: we are moving toward treating collectively derived sources as authorities in preference to individuals, or we assume that we are moving away from reliance on authority sources altogether. A lot of people seem to view this as an unambiguously good thing. I'm not so sure. Culture is changing rapidly, but the evolved properties of the human nervous system don't necessarily change at the same rate, and there is little doubt that we have stable social cognitive features that characterize us even when we switch technologies radically.

To me there is a classic dilemma regarding authority: we rely on it for accurate knowledge (because expertise is not evenly distributed!), and we can also be manipulated by it and rely on it too much. The currently popular philosophy of knowledge seems to be that we are socializing knowledge and that our intelligence and expertise are becoming a collective web of some sort. This abstraction is certainly interesting and provocative, but it is unlikely to be true, at least in the near future.

The important features of individual minds don't scale to networks of humans (as far as I know!). Individual expertise and intelligence do not appear as properties of human social networks (again, as far as I know). Also, groups are subject to fallacies just as individuals are (this much, I think, is well established in social psychology); those fallacies just follow a different pattern.

Collective editing is not necessarily self-correcting; it can amplify mistakes under various conditions. So we can't make the dilemma of authority go away by declaring that we are collectively the new authority source, or that there is no more need for authoritative sources, or by pretending that social networks are themselves experts or that expertise is becoming more evenly distributed. It clearly is not.

As of right now, there are still a few people who know much more than nearly everyone else about each domain. I don't say that lightly. It's a fundamental scaling effect that results from the effort required to achieve true expertise in any domain. That's a finding of expertise research, not of political or social philosophy.

So I feel there is a serious but very common epistemological flaw here: confusing wide dissemination of information with even distribution of expertise. The term "knowledge" is often misused in this service, since we carelessly use it to mean both information and expertise. Of course we should make use of collective sources like Wikipedia, but we should not make the further glib assumption that they can replace individual expertise.

That's why we really do need criteria for making limited use of collectively derived sources. They don't necessarily provide us with the best information for all uses, and our natural temptation is simply to transfer credibility to them. We need not only to make better use of collectively derived sources, but also to transfer, proportionately, our critical thinking norms from individual authorities to those collective sources, and to learn new principles for evaluating them.

Sometimes the crowd is wrong. Most of the time the crowd gets the lowest common denominator roughly right. That's probably good enough for a high school project, but not for serious scholarship, in my opinion. Thinking remains a property of individual minds, facilitated by the social network but not (God help us) replaced by it.
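To make that statistical point concrete, here is a small sketch of my own (purely illustrative, with made-up numbers): averaging a large crowd washes out independent random error, but it cannot remove an error the crowd shares. When the mistake is common to everyone, the collective answer is confidently, precisely wrong.

```python
import random

random.seed(42)

TRUE_VALUE = 100.0

def crowd_estimate(n, shared_bias):
    """Average n individual guesses, each = truth + shared bias + private noise."""
    guesses = [TRUE_VALUE + shared_bias + random.gauss(0, 20) for _ in range(n)]
    return sum(guesses) / n

# Independent errors: the average converges on the truth (the Galton's-ox story).
unbiased = crowd_estimate(10_000, shared_bias=0.0)

# A shared misconception: no amount of averaging can remove it.
biased = crowd_estimate(10_000, shared_bias=15.0)

print(f"crowd with independent errors: {unbiased:.1f}")  # near 100
print(f"crowd with a shared bias:      {biased:.1f}")    # near 115, however large the crowd
```

The design point is that collective aggregation is a variance-reduction mechanism, not a bias-correction mechanism, which is one way collective editing can fail to self-correct.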

kind regards,


Wednesday, December 23, 2009

What, If Anything, Can Skeptics Say About Science? - some thoughts

The link below is to a thought-provoking post by Daniel Loxton about the relationship between science and skepticism. My thoughts are in reply #27, and I'm also posting them here. If you're interested in such things, as I am, I also draw your attention to Jim Lippard's interesting epistemological observations in reply #10.

Summarizing Daniel's heuristics ...

1) Where both scientific domain expertise and expert consensus exist, skeptics are (at best) straight science journalists.

2) Where scientific domain expertise exists, but not consensus, we can report that a controversy exists — but we cannot resolve it.

3) Where scientific domain expertise and consensus exist, but also a denier movement or pseudoscientific fringe, skeptics can finally roll up their sleeves and get to work.

4) Where a paranormal or pseudoscientific topic has enthusiasts but no legitimate experts, skeptics may perform original research, advance new theories, and publish in the skeptical press.

I'd summarize my response by saying that I think these are good categories, but we mainly know them after the fact. They don't really get to the heart of what it means to be an effective doubter.

My thoughts in response:

I do find the idea of having heuristics for applying critical thinking appealing, but I'm uneasy about this particular very broad framework. There's some question begging that seems inevitable when we draw up neat categories for observations.

Specifically, as heretical as it may seem to some, I don't know that I agree that skepticism means a "science-based epistemology." I think it means a heavily empirical epistemology: observe and guess and test, rather than theorize and predict. Clearly, theory and prediction play a central role in _science_, but not necessarily in _skepticism_ per se. To me they are closely related but not the same thing.

The categories in the post seem in part based on the underlying notion that expertise and epistemic value are closely related. To me, expertise does not have a simple, straightforward relationship with our knowledge of the underlying phenomena. For one thing, it takes us in two different directions at once: (1) refined expertise organizes our knowledge of a domain along very specific lines (thus its power), which also causes us to treat true anomalies as outliers to be ignored, and (2) expertise also makes us better able to see the finer distinctions that lead to new discoveries.

So to me, _expertise_ contributes greatly to scientific discovery, but expert _consensus_ does not necessarily define the underlying phenomena or by itself merit a different approach to experimentation. It is in the areas where we have the strongest expert consensus that the most interesting anomalies arise, and it is often in testing the least likely conjectures, the ones outside the expert consensus, that we make the most interesting discoveries.

Before the discovery of metamaterials, there was an almost unanimous consensus that electromagnetic radiation could not be guided around objects except in science fiction. The discovery had to be made by experts who could understand its significance and had the tools for isolating it, yet it still violated the expert consensus. Examples like this are rare but, I think, well established, and they show dramatically the two divergent ways that expertise influences epistemic value.

Consider the problem of interpreting an anomaly: if we knew ahead of time what the relevant domain of expertise was, and how it affected our understanding of the observations, we would already have largely solved the problem. Hence the question begging in treating claims differently based on their relationship to the expert consensus, and especially in assuming that the expert consensus renders moot the scientific value of applying expertise to studying a putative anomaly.

I would argue that, at their best, skeptics are domain-general observers, experts in protocol and experimental method, and avid students of past lessons learned in studying anomalous claims. Consequently I think they are best engaged across the board, investigating the circumstances of interesting claims: making use of scientific domain experts, knowing the expert consensus and taking it as the default, but not relying on the expert consensus by assuming it always makes anomalies less likely.

As a personal preference, I don't think skeptics should be in the business only of confirming the consensus, but also of questioning it reasonably.

kind regards,

Sunday, December 20, 2009

The Expected Unexpected, a review of The Black Swan by Taleb

The Expected Unexpected - or, What can we learn from White Swans?

A review of The Black Swan, by Nassim Nicholas Taleb

Review by Todd I. Stark, 12/20/2009

Link to review on Amazon.

I came to this book expecting a clever but flawed argument for intellectual laziness or superficial thinking, another popular argument for "gut" or "intuition" or "Zen." Or perhaps a slick Gladwell-esque treatment of randomness. Perhaps a popularization of postmodernism or neo-pragmatism applied to financial markets, or a "Thriving on Chaos" (Tom Peters) for the 21st century.

This book is none of those things. Instead I found myself immersed in a very intriguing and deep intellectual journey into the roots of applied statistics and empirical science, one that had me thinking and taking notes prolifically. As readable as it is, this is not (or should not be) a quick read. Taleb is an erudite scholar, but he uses his scholarship in the service of ideas rather than to accumulate impressive footnotes. His writing is conversational but carries great weight.

I do find his tone somewhat arrogant in spots, oddly so. The half dozen or so people he finds interesting are worthy dinner companions; the rest of the world of intelligent mortals are pretty much fools whom he chooses to stereotype and parody. At least, that's the impression I get. Contempt for the audience usually works against an author. Still, very few modern authors can combine technical knowledge, originality, and readability the way Taleb can, and that kept me reading even when I imagined I was one of the many targets of the author's contempt.

Enough on style and impression; the content of this book is what makes it merit five stars. The core idea here is that we are creatures who quickly and easily sort things into categories and tell stories to make sense of them. Narration comes unbidden to us, but skillful abstract thinking does not. This much, of course, we have heard before from the heuristics-and-biases school and behavioral economics.

Taleb's contribution is to point out a particularly broad implication of this principle: our knowledge rapidly degrades when rare events are consequential. We explain them away and miss their importance. At the same time, we overestimate the impact of rare events for arbitrary reasons when they really have no consequence.

That would be of mostly academic interest except for one thing. The key to Taleb's overall argument is his claim that the impact of rare events is domain-specific. That means we can learn to distinguish domains where: (a) classical risk statistics apply ("Mediocristan") from (b) those where rare events and winner-take-all competitions dominate ("Extremistan").

In "Mediocristan" domains, classical decision theory in principle should help us (although Taleb seems to feel that these domains are few and far between among things we really care about). In "Extremistan," Taleb advises, we should take steps to protect ourselves from rare adverse events and use diverse aggressive risk taking to exploit rare positive events.

The idea is that in these domains we are betting that something rare will eventually impact us, even though we can't know specifically what it will be. The point is that in domains where likelihood can't really be calculated, we should focus our efforts on the impact rather than on guessing at the probability. Taleb doesn't make this argument lightly; he explains the limits of our predictive ability in an understandable way while also taking heed of technical details in his arguments.
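This "cap the downside, leave the upside open" advice can be sketched in a few lines. The numbers below are my own arbitrary illustration of the idea, not figures from the book: most capital sits in something safe, a small stake rides on speculative bets, so a rare disaster costs only the stake while a rare windfall pays in full.

```python
def barbell_return(shock):
    """90% of capital in a safe asset earning a steady 2%,
    10% in a speculative bet whose payoff multiplier is `shock`."""
    safe = 0.90 * 1.02
    speculative = 0.10 * max(0.0, shock)
    return safe + speculative

def all_in_return(shock):
    """For contrast: the whole portfolio rides on the speculative payoff."""
    return max(0.0, shock)

# A rare adverse event (the bet goes to zero) vs a rare windfall (it pays 30x).
print(barbell_return(0.0))   # ~0.918: the loss is capped at the 10% stake
print(all_in_return(0.0))    # 0.0: wiped out
print(barbell_return(30.0))  # ~3.918: the upside is uncapped
```

Notice that the strategy needs no estimate of how likely the shock is, only of how much is exposed to it, which is exactly the shift from probability to impact described above.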

"Extremistan" consists of all of the domains where scaling and nonlinear accumulations occur, or rapid deviations from small differences in initial conditions. In line with complexity theorists, Taleb finds these effects pretty much everywhere of interest to social scientists, economists, and people in general. He suggests that we might be able to use scalable non-linear maths to get a rough idea of what to expect in some domains, but that we need to remain careful of the illusion of mathematical predictive power.

Biological values like height, weight, and average life expectancy are areas where the normal curve applies, and perhaps the mean failure rate of parts in engineering. But when important things like money, fame, or success can accumulate virtually without limit, often for arbitrary reasons like contagion, the assumptions of Gaussian distributions are simply not relevant. At least that's how I understand Taleb's claim that the normal distribution is a great intellectual fraud: it is applied confidently to things where it has no place being applied.
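The contrast can be seen in a simulation. This is my own sketch with arbitrary parameters, not an example from the book: draw many samples from a thin-tailed (Gaussian) distribution standing in for heights, and from a fat-tailed (Pareto) distribution standing in for wealth, then ask how much of the total the single largest observation accounts for.

```python
import random

random.seed(0)

N = 100_000

# "Mediocristan": heights are thin-tailed; no single person can dominate the total.
heights = [random.gauss(170, 10) for _ in range(N)]

# "Extremistan": wealth modeled as fat-tailed (Pareto with a low alpha);
# one draw can dwarf everything else combined.
alpha = 1.1
wealth = [random.paretovariate(alpha) for _ in range(N)]

def max_share(xs):
    """Fraction of the total accounted for by the single largest observation."""
    return max(xs) / sum(xs)

print(f"tallest person's share of total height: {max_share(heights):.6f}")
print(f"richest person's share of total wealth: {max_share(wealth):.6f}")
```

In the Gaussian case the largest observation is a vanishing sliver of the total, so sample averages are trustworthy; in the Pareto case a single observation can carry a visible share of the whole sum, which is why Gaussian-based risk estimates mislead there.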

With one strange but perhaps unavoidable exception, Taleb usually follows his own advice. He considers theories for storytelling and explanation but doesn't take them seriously or depend on them for his argument. He does use storytelling quite a bit to make his point, even though his argument is largely about the unreliability of our stories; he seems to be saying that because we automatically make sense of events through stories, it takes a story to help us understand the argument against relying on storytelling.

I suppose there is a touch of postmodernist skepticism of narratives in The Black Swan, but the possibility of having specific strategies for dealing differently with specific domains is an intriguing and welcome update to that tradition. It brings the idea more in line with the critical rationalism and conjectures-and-refutations of Karl Popper and the scientific skepticism of the classical pragmatists than with the modern cynics. Taleb is not rejecting theory entirely, but he certainly prefers direct experiment and aggressive risk taking (in strategic areas) to bland assumptions of predictable statistical likelihood and the reliability of knowledge.

For me this book ties together a lot of diverse ideas very nicely in an original and interesting way, and yet stays on target with its message.

Very impressive effort and a very worthwhile read.