Daniel Kahneman and Gary Klein (Kahneman and Klein 2009)



This article reports on an effort to explore the differences between two approaches to intuition and expertise that are often viewed as conflicting: heuristics and biases (HB) and naturalistic decision making (NDM). Starting from the obvious fact that professional intuition is sometimes marvelous and sometimes flawed, the authors attempt to map the boundary conditions that separate true intuitive skill from overconfident and biased impressions. They conclude that evaluating the likely quality of an intuitive judgment requires an assessment of the predictability of the environment in which the judgment is made and of the individual’s opportunity to learn the regularities of that environment. Subjective experience is not a reliable indicator of judgment accuracy.

In this article we report on an effort to compare our views on the issues of intuition and expertise and to discuss the evidence for our respective positions. When we launched this project, we expected to disagree on many issues, and with good reason: One of us (GK) has spent much of his career thinking about ways to promote reliance on expert intuition in executive decision making and identifies himself as a member of the intellectual community of scholars and practitioners who study naturalistic decision making (NDM). The other (DK) has spent much of his career running experiments in which intuitive judgment was commonly found to be flawed; he is identified with the “heuristics and biases” (HB) approach to the field.

Two perspectives

Origins of the naturalistic decision making approach

The NDM approach, which focuses on the successes of expert intuition, grew out of early research on master chess players conducted by de Groot (1946/1978) and later by Chase and Simon (1973). De Groot showed that chess grand masters were generally able to identify the most promising moves rapidly, while mediocre chess players often did not even consider the best moves. The chess grand masters mainly differed from weaker players in their unusual ability to appreciate the dynamics of complex positions and quickly judge a line of play as promising or fruitless. Chase and Simon (1973) described the performance of chess experts as a form of perceptual skill in which complex patterns are recognized. They estimated that chess masters acquire a repertoire of 50,000 to 100,000 immediately recognizable patterns, and that this repertoire enables them to identify a good move without having to calculate all possible contingencies.


A central goal of NDM is to demystify intuition by identifying the cues that experts use to make their judgments, even if those cues involve tacit knowledge and are difficult for the expert to articulate. In this way, NDM researchers try to learn from expert professionals. Many NDM researchers use cognitive task analysis [Cognitive task analysis] (CTA) methods to investigate the cues and strategies that skilled decision makers apply (Crandall, Klein, & Hoffman, 2006; Schraagen, Chipman, & Shalin, 2000).

Origins of the Heuristics and biases approach

In sharp contrast to NDM, the HB approach favors a skeptical attitude toward expertise and expert judgment. The origins of this attitude can be traced to a famous monograph published by Paul Meehl in 1954. Meehl (1954) reviewed approximately 20 studies that compared the accuracy of forecasts made by human judges (mostly clinical psychologists) with forecasts made by simple statistical models. The criteria in the studies that Meehl (1954) discussed were diverse, with outcome measures ranging from academic success to patient recidivism and propensity for violence. Although the algorithms were based on a subset of the information available to the clinicians, statistical predictions were more accurate than human predictions in almost every case. Meehl (1954) believed that the inferiority of clinical judgment was due in part to systematic errors, such as the consistent neglect of the base rates of outcomes in discussion of individual cases. In a well-known article, he later explained his reluctance to attend clinical conferences by citing his annoyance with the clinicians’ uncritical reliance on their intuition and their failure to apply elementary statistical reasoning (Meehl, 1973).
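The base-rate neglect Meehl complained about can be made concrete with a small worked example (the numbers below are hypothetical, not from the article): even a fairly diagnostic clinical sign implies only a modest posterior probability when the condition itself is rare.

```python
# Hypothetical illustration of base-rate neglect (numbers are made up):
# a condition with a 2% base rate, and a sign seen in 90% of true cases
# but also in 10% of non-cases.

def posterior(base_rate: float, sensitivity: float, false_positive_rate: float) -> float:
    """P(condition | sign present), by Bayes' theorem."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * false_positive_rate
    return true_pos / (true_pos + false_pos)

p = posterior(base_rate=0.02, sensitivity=0.90, false_positive_rate=0.10)
print(round(p, 3))  # ~0.155, far below the 0.90 an intuitive judge might report
```

A judge who attends only to the strength of the sign, and not to the 2% base rate, will be overconfident by a factor of five or more in this setting.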

Contrasts between the naturalistic decision making and heuristics and biases approaches

Stance regarding expertise and decision algorithms

There is no logical inconsistency between the observations that inspired the NDM and HB approaches to professional judgment: The intuitive judgments of some professionals are impressively skilled, while the judgments of other professionals are remarkably flawed. Although not contradictory, these core observations suggest conflicting generalizations about the utility of expert judgment. Members of the HB community are of course aware of the existence of skill and expertise, but they tend to focus on flaws in human cognitive performance. Members of the NDM community know that professionals often err, but they tend to stress the marvels of successful expert performance.

The basic stance of HB researchers, as they consider experts, is one of skepticism. They are trained to look for opportunities to compare expert performance with performance by formal models or rules and to expect that experts will do poorly in such comparisons. They are predisposed to recommend the replacement of informal judgment by algorithms whenever possible. Researchers in the NDM tradition are more likely to adopt an admiring stance toward experts. They are trained to explore the thinking of experts, hoping to identify critical features of the situation that are obvious to experts but invisible to novices and journeymen, and then to search for ways to pass on the experts’ secrets to others in the field. NDM researchers are disposed to have little faith in formal approaches because they are generally skeptical about attempts to impose universal structures and rules on judgments and choices that will be made in complex contexts.

Skilled intuition as recognition

Simon (1992) offered a concise definition of skilled intuition that we both endorse: “The situation has provided a cue: This cue has given the expert access to information stored in memory, and the information provides the answer. Intuition is nothing more and nothing less than recognition” (p. 155).

See also Cedric Chin | Expertise Is ‘Just’ Pattern Matching.

In some environments, the regularities an expert learns to recognize are valid, and recognition-based intuition works as Simon described. In other environments, the regularities that can be observed are misleading. Hogarth (2001) introduced the useful notion of wicked environments [Wicked environment], in which wrong intuitions are likely to develop. His most compelling example (borrowed from Lewis Thomas) is the early 20th century physician who frequently had intuitions about patients in the ward who were about to develop typhoid. He confirmed his intuitions by palpating these patients’ tongues, but because he did not wash his hands the intuitions were disastrously self-fulfilling.

Two conditions must be satisfied for skilled intuition to develop: an environment of sufficiently high validity and adequate opportunity to practice the skill.


  • Our starting point is that intuitive judgments can arise from genuine skill—the focus of the NDM approach— but that they can also arise from inappropriate application of the heuristic processes on which students of the HB tradition have focused.
  • Skilled judges are often unaware of the cues that guide them, and individuals whose intuitions are not skilled are even less likely to know where their judgments come from.
  • True experts, it is said, know when they don’t know. However, nonexperts (whether or not they think they are) certainly do not know when they don’t know. Subjective confidence is therefore an unreliable indication of the validity of intuitive judgments and decisions.
  • The determination of whether intuitive judgments can be trusted requires an examination of the environment in which the judgment is made and of the opportunity that the judge has had to learn the regularities of that environment.
  • We describe task environments as “high-validity” if there are stable relationships between objectively identifiable cues and subsequent events or between cues and the outcomes of possible actions. Medicine and firefighting are practiced in environments of fairly high validity. In contrast, outcomes are effectively unpredictable in zero-validity environments. To a good approximation, predictions of the future value of individual stocks and long-term forecasts of political events are made in a zero-validity environment.
  • Validity and uncertainty are not incompatible. Some environments are both highly valid and substantially uncertain. Poker and warfare are examples. The best moves in such situations reliably increase the potential for success.
  • An environment of high validity is a necessary condition for the development of skilled intuitions. Other necessary conditions include adequate opportunities for learning the environment (prolonged practice and feedback that is both rapid and unequivocal). If an environment provides valid cues and good feedback, skill and expert intuition will eventually develop in individuals of sufficient talent.
  • Although true skill cannot develop in irregular or unpredictable environments, individuals will sometimes make judgments and decisions that are successful by chance. These “lucky” individuals will be susceptible to an illusion of skill and to overconfidence (Arkes, 2001). The financial industry is a rich source of examples.
  • The situation that we have labeled fractionation of skill is another source of overconfidence. Professionals who have expertise in some tasks are sometimes called upon to make judgments in areas in which they have no real skill. (For example, financial analysts may be skilled at evaluating the likely commercial success of a firm, but this skill does not extend to the judgment of whether the stock of that firm is underpriced.) It is difficult both for the professionals and for those who observe them to determine the boundaries of their true expertise.
  • We agree that the weak regularities available in low-validity situations can sometimes support the development of algorithms that do better than chance. These algorithms only achieve limited accuracy, but they outperform humans because of their advantage of consistency. However, the introduction of algorithms to replace human judgment is likely to evoke substantial resistance and sometimes has undesirable side effects.
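The consistency advantage claimed in the last point can be sketched in a small simulation (all numbers hypothetical): in a low-validity environment, a model that applies the same cue weights on every case tracks outcomes better than a "judge" who knows the same weights but applies them inconsistently from case to case.

```python
import random

random.seed(0)

def correlation(xs, ys):
    """Pearson correlation, computed from scratch to keep the sketch self-contained."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

WEIGHTS = [0.5, 0.3, 0.2]  # the environment's true (weak) cue weights
cases = [[random.gauss(0, 1) for _ in WEIGHTS] for _ in range(5000)]
# Heavy outcome noise makes this a low-validity environment.
outcomes = [sum(w * c for w, c in zip(WEIGHTS, case)) + random.gauss(0, 1.5)
            for case in cases]

# The algorithm applies the same weights on every case.
model_preds = [sum(w * c for w, c in zip(WEIGHTS, case)) for case in cases]

# The judge "knows" the same weights but applies them with trial-to-trial noise.
judge_preds = [sum((w + random.gauss(0, 0.5)) * c for w, c in zip(WEIGHTS, case))
               for case in cases]

print(correlation(model_preds, outcomes))   # higher: consistency pays off
print(correlation(judge_preds, outcomes))
```

Both predictors use exactly the same (valid) weights, so the model's edge here comes entirely from its consistency, which is the mechanism the article credits for the algorithms' better-than-chance accuracy in low-validity settings.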


Kahneman, Daniel, and Gary Klein. 2009. “Conditions for Intuitive Expertise: A Failure to Disagree.” American Psychologist 64 (6): 515–26. https://doi.org/10.1037/a0016755.