Monday, April 11, 2011

Book Review - Apollo Root Cause Analysis: A New Way of Thinking (?)

Book Review - Apollo Root Cause Analysis: A New Way of Thinking. Book by Dean Gano, Apollonian Publications, 2008. Review by Todd I. Stark. Link to Amazon review.

Very useful principles for preventing bad things from happening, April 11, 2011 By
Todd I. Stark "Cellular Wetware plus Books"

This review is from: Apollo Root Cause Analysis: A New Way of Thinking (Paperback)

How can we prevent bad things from happening, and how can a formal method help us in this quest? That's the topic of this book.

"By understanding the cause and effect principle and creating a RealityChart, your understanding of what constitutes reality will be changed forever ... allow you to see a reality that was previously beyond your comprehension." (Apollo Root Cause Analysis, p. 3)

"... the problem of a linear language in a nonlinear world while accommodating the simple human mind has been a challenge for the ages. I believe this challenge has been met with Apollo Root Cause Analysis." (ARCA, p. 176)

The importance of the topic covered by this book is immense, so for my purposes, I'll forgive the author his amusing enthusiasm for his own method as seen in the quotes above and try to determine what may actually be truly useful in it rather than dwell on what may or may not be unique about it. I will be referring to this book and to the Apollo method in this review. The method is also implemented with associated RealityCharting(tm) software, which is outside the scope of this review except where explicitly mentioned.

Solving problems well is one of the most critical factors, if not the most critical, for human success across a wide range of activities. One big part of problem solving is explaining *why* something unexpected happened. If you think about it, anytime something unexpected happens, we generally want to know why. We do this naturally, automatically, and effortlessly. We can't stop ourselves from doing it to at least some extent. Sometimes the explanation seems obvious and sometimes it seems very elusive. Even when the explanation seems obvious, there is often more going on of importance than we realized. We seem to do this because, especially if it's a bad thing that happened, understanding why it happened potentially helps us predict whether it will happen again and perhaps prevent it from happening again. That's important to us as individuals as well as to organizations. It's so important that we already do it all the time. We also tend to assume that we're very good at it. My experience agrees with the author's: people are in general not nearly as good at solving problems as we think we are, at least when problems become complex and involve multiple people and organizations.

We have a powerful natural ability to make sense of events that happen around us by identifying people, places, things, and weaving a story around how they interact with each other. (1) The power of storytelling to make sense of events is also its tragic downfall when we need to understand how things happen in great detail. Our temptation to tell stories actually gets in the way of understanding what specific things need to happen in order for other things to happen, and that's the sort of understanding that is needed in root cause analysis. Getting past these natural but distracting tendencies so we can use the right tools effectively is the greatest value of an effective formal method. We often look to formal methods to systematize our thinking in general, but problem solving outside the academic realm of worked problems is often not amenable to being constrained by formal methods. Instead, formal methods provide their real benefit in forcing us to look past our own blind spots at key points during problem solving. To his credit, the author of this book seems to have grasped this important point very well and applied it effectively.

The most problematic blind spots in our thinking are those that aren't avoided by intelligence, education, or domain-specific expertise, skills or knowledge. The most problematic blind spots are actually natural abilities that serve us well most of the time. One of the most important examples is that we tend to use storytelling to construct a plausible sequence of events leading to an outcome, putting people into the center of the action and making the events meaningful to us. There are a number of reasons why this natural and compelling process is a bad idea in problem solving.

First, since a story starts with the putative "root cause" and then proceeds to its effects, it tends to assume a single cause. Events don't have a single cause. If our goal is to prevent bad things from happening, we'd rather identify as many relevant causes as possible, and potentially address each one as makes sense.

Second, storytelling tends to center around characters, their motives, and the things they do. The motives and behaviors of people are often the things we have the least reliable control over in problem solving. We'd prefer to find things we can control better if at all possible, and save human choices and our interpretation of human motives as a last resort.

Third, we can usually tell multiple different stories about the same events, depending on what information we emphasize and how we emphasize it, which in turn depends on our motivations and perspective. This means that the choice of which "root cause" we focus on tends to be more political and subjective than a result of careful analysis.

Our storytelling is natural and compelling but it tends to be more to make sense of events than to understand the details of what causes what.

So effective Root Cause Analysis tools and methods should discourage storytelling and instead search for the prerequisite conditions and sequences of events that lead to the results we want to prevent. That's the message of this book and its method. So how well does it accomplish this?

Why a formal process?

Causal analysis (or what is sometimes more specifically referred to as "root cause analysis") is the more formal process of doing what we tend to do naturally: trying to figure out why events diverged from what we expected. Why do we need a formal process for doing this, if we already do it naturally? Because often we don't do it very well. The very fact that we do it naturally means that we will tend to make *confident guesses* about why things happened. Often more confident than accurate. Thus far, the author is on solid ground. The author doesn't refer to it explicitly, but there is a very large literature on the cognitive science of decision making that supports the author's claim that people tend to jump confidently to unwarranted conclusions when they first try to explain causation. (2, 3, 6)

One very important reason for a formal process is to systematize the data gathering and analysis process in order to help compensate for the natural biases that lead us to make confident but inaccurate guesses. The value of formal vs. informal processes is debated widely in the decision science literature, since much of our innate intelligence is the result of automatic non-conscious processes that are opaque to us in our own thinking. (5) However, in general I think it is a pretty safe conclusion that a good formal process often helps us focus attention on things we would not otherwise have seen, letting us see past some of our own blind spots. That's the first reasonable rationale for causal analysis methods like Apollo Root Cause Analysis (ARCA) and its primary tool, RealityCharting(tm). Simply having a process to focus us on causes rather than stories is useful in itself.

Why a formal group process?

There are various reasons for using a formal process in organizations. Unexpected events often affect more than one person, often more than one person is part of the problem, and often more than one person has information or perspective needed to figure out what happened. So causal analysis very often becomes a group process. In addition to the reasons for using a formal process in general (to get around our individual blind spots), it is also important in groups in order to help avoid "group blind spots," the principles of behavior in groups studied by psychologists. We are often tempted to go along with the consensus, to protect our own ideas, to react negatively at first to new ideas, and other tendencies that can negatively impact the problem solving process in groups. (6) A good formal group process can help get around our group blind spots just as it can help us get around our individual blind spots. In spite of the well-established problems created by "groupthink" and other group dynamics studied in social psychology, under optimal conditions groups can often perform significantly better than individuals on some kinds of problems. (7)

The author makes a key point about group processes that he says defies conventional wisdom (p. 9). He implies essentially that the principles of good problem solving are simple and domain-general and so should be taught to everyone, whereas domain expertise is deeper and necessarily differs more between people, so "subject matter experts" should also be at hand. The conventional wisdom, he says, is that problem solving is entirely "inherent to the subject at hand," or what I would call domain-specific, ignoring the value of domain-general principles in problem solving. I don't know how much this really defies conventional wisdom, but I agree with him completely and I think this is an important principle. It is a major part of the rationale for involving more people in a collaborative group problem solving process rather than just pulling in a few experts. If the process is good, and this principle is valid, then it has deep implications for improving problem solving in organizations by broadening involvement and investment in the process itself. That brings us to the real question at hand, whether the author's method accomplishes this objective.

Why Apollo?

So far my description here is I think pretty much in line with the author's rationale. Now we come to the real meat. How good is this particular method at getting around our individual and group blind spots, compared to other methods? That's the significance the author claims for this book and for his method, so that's what we really need to know to evaluate this book. The book covers a lot of ideas that are very important to know, but many are common knowledge to experienced problem solvers, so I am not going to focus on those. What I will focus on is what is supposed to be special about ARCA and RealityCharting(tm) in particular.

According to the author, the key distinction between his formal method and all others is that all others are ways of categorizing causes and schemes for voting on the best way to categorize them, while his method discerns (and RealityCharting displays) the actual relationship between different causes, along with the evidence for each cause. (p. 193)

All viable methods of "root cause analysis" in groups involve chains of events that would be too numerous, and related in too complex a manner, for everyone involved to envision in a common way if they were simply described in words. Visual representations of trees of causes therefore play a key role in all formal methods of "root cause analysis," including ARCA.

The trick to understanding the underlying message of this book and grasping the uniqueness claimed for the method is to understand exactly what he means by the *relationships between causes* and how the method forces you to think about evidence for causes. I don't personally favor the author's explanation for why his method is unique. His rules of causality seem awkward to me and I think there is a slightly different and clearer way to think of how his method works.

First, every effect is also a cause. There is no distinction between causes and effects except at the endpoints. Feedback loops of causes that have effects that ultimately lead back to the original cause are common in nature as well as human design and this method has no problem handling them. We start with the effect we are trying to explain and end with causes that either do not need to be explained or cannot yet be explained. This seems fairly straightforward to me for a causal chain, I'm not sure why the author feels it is unique to his method. I suppose it may be that commonly used methods tend to de-emphasize this aspect of causality.

Second, the method forces you to identify both actions and conditions that had to be in place for the action to cause the effect. This isn't useful for causal modeling as much as it is a useful trick for helping to identify all of the causes that might be relevant to solving the problem. Breaking the "Why?" question into an action and a set of conditions is seemingly somewhat unique to this method and helps avoid thinking solely in terms of actions or solely in terms of things that were in place at the time or characteristics of the situation. The theory is that it is usually *easier* to see actions that happened and less obvious what conditions had to be in place as well, but it is often *more useful* to identify the conditions because they tend to be more predictable and more controllable.

Third, the method forces you to have evidence for each cause, and takes an unusual and perhaps mildly questionable empiricist slant here. It distinguishes "sensed" from "inferred" evidence. "Sensed" of course refers to direct observation by someone, with as little interpretation as possible. "Inferred" refers to anything else. Assumptions and opinions represent doubts and must be investigated further, and just about anything that someone doesn't observe directly themselves has some doubt associated with it, whereas direct observation is automatically given very high credibility by the rules of the methodology.
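To make these three rules concrete, here is a minimal sketch of a cause node that treats every effect as a further cause, labels each cause as an action or a condition, and records whether its evidence is sensed or inferred. This is my own illustration of the ideas described above, not the book's RealityCharting software; all names and structure are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum

class CauseType(Enum):
    ACTION = "action"        # something that happened
    CONDITION = "condition"  # something that had to be in place for the action to cause the effect

class Evidence(Enum):
    SENSED = "sensed"        # direct observation by someone
    INFERRED = "inferred"    # assumption, opinion, or anything not directly observed

@dataclass
class Cause:
    description: str
    kind: CauseType
    evidence: Evidence
    # Every effect is also a cause: each node can itself be explained by further causes.
    causes: list["Cause"] = field(default_factory=list)

    def unverified(self) -> list[str]:
        """Walk the chart and collect causes still resting only on inference."""
        found = []
        if self.evidence is Evidence.INFERRED:
            found.append(self.description)
        for child in self.causes:
            found.extend(child.unverified())
        return found

# A tiny chart: an effect explained by one action plus one condition.
spill = Cause("oil spilled", CauseType.ACTION, Evidence.SENSED, causes=[
    Cause("valve opened", CauseType.ACTION, Evidence.SENSED),
    Cause("drain pan missing", CauseType.CONDITION, Evidence.INFERRED),
])
print(spill.unverified())  # causes flagged for further investigation
```

The payoff of pairing each action with its conditions, as the method requires, is visible even in this toy chart: the inferred condition, not the observed action, is what gets flagged for follow-up investigation.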

This is a slightly odd sort of slant from my perspective because the credibility of eyewitness testimony in general is often in doubt (8), the reliability of inference is often not obvious, and because in my experience, hypothesis testing is a particularly useful and important problem-solving tool. Storytelling, much maligned and minimized in the ARCA method for good reason, is built on inferences from observations. However, scientific theories and mathematical models also arise from inferences. (9) The main difference is that science and math involve systematic linking and pragmatic testing of inferences rather than just weaving meaningful narratives. The author frequently asserts in different ways that storytelling negatively impacts problem solving, and defers to direct observation. I would instead argue that both are in doubt, and that their relative strength is not absolute based on source, but depends upon how inferences and observations support each other. One contemporary philosopher usefully compares the relationship to that of a crossword puzzle. (10) It is notable that the ARCA method and RealityCharting tool have no trouble accommodating different conceptions of the reliability of evidence.
Strengths


1. The philosophy behind the method is extremely inclusive organizationally. The idea is not to find the putative "true causes" but to gather enough information to produce effective solutions. So the method encourages as many people as possible to participate as fully as possible, rather than to argue over the right causes to focus on. That is perhaps the greatest strength of this method, its formal incorporation of diverse perspectives, allowing everyone to be heard and every perspective considered while discouraging the usual drama of competing narratives. Different definitions of the problem are managed by starting from different primary causes and creating separate charts. Since the framing of the problem often guides or constrains the search for a solution, I consider this a strength of this method.

2. ARCA explicitly includes the source and specifics of evidence for each cause, helping to identify and prioritize information gathering tasks to further validate assumptions and opinions, or reproduce observations. You are strongly encouraged to provide evidence for every cause, a process that I think can lead to a far more comprehensive yet focused data gathering effort than other methods.

3. ARCA allows for the explicit representation of causal feedback loops, which can be important factors in understanding what is happening. Many causal charts make this important relationship very difficult.

4. The technique of treating each effect as a cause and subjecting each to requirements for evidence, and explicit follow-up for finding further causes, intermediate causes, information required, or explicit reasons for stopping is a very powerful aid to causal thinking and to me is the core of both the method and the associated software.

Limitations and Criticisms

1. Potential overreliance on unreliable eyewitness testimony vs. discouragement of useful inferences. The process of hypothesis generation is a particular kind of inference in which we consider more alternatives than we suggest, and as a result we understand that they are explicitly hypotheses. There is experimental evidence that generating hypotheses ourselves leads to a more realistic appraisal of their strength and less false confidence than inferences provided by other people. (cf. Derek Koehler) Other research supports the notion that our causal judgments are themselves inferences that often result from mental simulation. [This is a comment regarding the method in the book, which downplays the role of "inference" vs. "direct observation." It is a minor point in practice because there is nothing in the RealityCharting tool that forces you to evaluate sources of evidence in a particular way.]

2. Overemphasis on uniqueness of the method. This mostly refers to the tone in which the book is written, which I often found distracting and annoying, wherein it often makes the book sound like a sales presentation for the method and the tool rather than a treatise on effective problem solving. This point goes along with the scarcity of credit to other sources that for me would have greatly enriched my appreciation of the rationale for the method. The author relies too heavily on himself as an authority for his rationale to rate this a five star book.

3. Restriction of causal statements to very brief verb-noun and noun-verb format rather than sentences. I think this has some value in fostering clarity if you manage to find a phrase that everyone happens to understand in the same way. Still, in practice I found that it can be time consuming to come up with these compact ways of expressing causes, and that they were easier to misinterpret than full sentences would be. [This is also a potential issue in the tool because the rule checking tries to enforce it. However, it can be disabled if desired.]

4. Does not allow you to visibly categorize causes except into actions vs. conditions. This is an attribute of both the method and the associated software. This "limitation" is the intentional result of the author's core principle that he feels distinguishes his method in particular: categorizing causes is relatively useless compared to relating them. Categorizing causes is suggested in ARCA only if you are stuck finding plausible candidate causes and want to try thinking in terms of categories in order to help find candidates. Since this is intentional, it is a limitation only in the sense that someone familiar with other methods may run into it and have to think differently in order to use the method as intended.

Given that so far every substantive criticism I've had of this method is easily accommodated by minor changes to the method (and in the associated tool, with the exception of categorizing causes), I found this to be a particularly versatile method. It captures a number of key principles of problem solving, especially in groups, and provides practical and effective ways of compensating for our most problematic blind spots and biases. For the most part, I think understanding the principles of ARCA and RealityCharting would probably enhance most other methods, in addition to the method standing well on its own.

A formal method itself is no substitute for improving the reflective intelligence of each problem solver, but I think this method could go a long way in any organization toward getting people to think more effectively, and especially for helping them communicate their best thinking to each other. I found very little to disagree with and much of value in this book and found the method easy to understand and apply.


(1) Brian Boyd on art and storytelling as biological adaptations which derive from play.

"On the Origin of Stories: Evolution, Cognition, and Fiction," Brian Boyd, 2009, Belknap Press of Harvard University Press

(2) David Perkins discusses the significance of domain-general principles for effective thinking, and why they can't be replaced completely by intelligence or expertise. A plausible research-based discussion of why we need help getting around our built-in individual blind spots.

"Outsmarting IQ: The Emerging Science of Learnable Intelligence," David Perkins, 1995, Simon and Schuster

(3) The psychological study of how we tend to answer questions of the form "Why ... ?" is called attribution theory. Probably a result of our bias toward storytelling explanations, most attribution theory has attempted to address attributions of people in social situations and sometimes products in the case of consumer research. A much smaller amount of research has been done on the perception of causality in other ways, such as in building causal explanations, and only a tiny amount has made its way into popularly accessible books.

"Causal Attribution: From Cognitive Processes to Collective Beliefs," Miles Hewstone, Wiley-Blackwell

(4) There are not many modern books that deal relatively broadly with causal modeling from both a philosophical and mathematical perspective. One of the few is Pearl's text.

"Causality: Models, reasoning, and inference," J. Pearl, 2000, Cambridge University Press

(5) The role of non-conscious processes in realistic problem solving is addressed by a fairly sizeable body of technical literature by Dijksterhuis, Bargh, Gollwitzer, and others, and introduced in an accessible and practical way by Gary Klein.

"Sources of Power: How People Make Decisions," 1999, Gary Klein, MIT Press

(6) The problems exacerbated by group behavior under non-optimal conditions are introduced accessibly and briefly yet very broadly in:

"The Psychology of Judgment and Decision Making," Scott Plous, 1993, Mcgraw-Hill.

(7) The positive potential of groups under optimal conditions is discussed in
"Group Problem Solving" by Patrick Laughlin, 2011, Princeton University Press.

The common belief that direct observation is inherently more reliable than inference is based on two ideas that are at best only partially true: (1) the assumption that reports based on observation do not significantly depend on memory, inference, or explanation, and so are automatically free of distortions, biases, or storytelling, and (2) the assumption that inferences are roughly equally reliable under a wide range of conditions.

(8) On the reconstructive nature of memory for events, see:
"Searching for Memory: The Brain, The Mind, and the Past," Daniel L. Schacter (1997), Basic Books

(9) On the nature of inference and especially scientific inference, see:

"Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science," Mayo and Spanos (2010), Cambridge University Press

(10) The useful crossword puzzle metaphor for the relationship of observation and inference is introduced in:

"Evidence and Inquiry: A Pragmatist Reconstruction of Epistemology," Susan Haack (2009), Prometheus Books
