For: Routledge Handbook of Virtue Epistemology, Heather Battaly, ed.

Dual-process Theory and Intellectual Virtue: A Role for Self-Confidence

Berit Brogaard

August 27, 2017

Bio

Berit “Brit” Brogaard is Professor of Philosophy at University of Miami and a Professor II at University of Oslo. Her areas of research include philosophy of perception, philosophy of emotions, and philosophy of language. She is the author of Transient Truths (Oxford University Press, 2012), On Romantic Love (Oxford University Press, 2015), The Superhuman Mind (Penguin, 2015) and Seeing & Saying (Oxford University Press, in production).

Abstract

Taking dual-process theory as a springboard, this chapter provides an overview of the advantages and disadvantages of fast, sub-personal type-1 cognitive processing and slow, conscious type-2 processing. It is then argued that adopting intellectual virtues, such as intellectual humility, intellectual self-vigilance and intellectual gregariousness, can help minimize the errors produced by mistakenly applied type-1 processing. However, other virtues, e.g., intellectual self-confidence, intellectual pride and intellectual optimism, are required in order to avoid the errors introduced when type-2 processing interferes with the exercise of reliable type-1 heuristics, or rules of thumb. Finally, several strategies for adopting these intellectual virtues are discussed, including new habit formation and attentional bias techniques.

Keywords: cognitive biases; dual-process theory; gut feelings; habit formation; heuristics; intellectual gregariousness; optimism; self-confidence; stereotyping; type-1 cognitive processing; type-2 cognitive processing; virtue epistemology

1. Introduction

The dual-process theory of cognition states that there are two distinct ways in which we make decisions in daily life. Type-1 cognitive processing is fast but can be erroneous, whereas type-2 processing is slow and often more accurate than type-1 processing. At least that is the most commonly held view (Tversky and Kahneman, 1983; Samuelson and Church, 2014; Roberts and West, 2015). As we will see, this commonly held view is not quite accurate. Which of the two types of processing is more accurate depends on your background information and the task you are asked to complete. For example, if you know very little about American cities, judging the relative population size of two cities on the basis of name recognition can yield rather accurate results. In fact, studies have shown that when Germans were asked to determine whether the population size was greater in Milwaukee or Detroit, most said ‘Detroit’, which is the correct answer (Gigerenzer, 2007). When Americans were asked the same question, they made more mistakes because they were unable to rely on name recognition. Why does the name recognition heuristic work for Germans but not Americans? The main reason is that if you barely know anything about two American cities, it is likely that the city you have heard of has the larger population of the two. Of course, there are a myriad of counterexamples to this heuristic, or rule of thumb, which is why it is a heuristic, but the heuristic yields more accurate results than mere guesswork in circumstances like these.
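The recognition heuristic at work in the Detroit example can be sketched in a few lines of code. The cities and the set of recognized names below are invented for illustration; the sketch simply encodes the rule that if exactly one of two options is recognized, that option is inferred to score higher on the criterion.

```python
def recognition_heuristic(option_a, option_b, recognized):
    """If exactly one option is recognized, infer that it scores higher on
    the criterion (here, population size); otherwise the heuristic is silent."""
    a_known = option_a in recognized
    b_known = option_b in recognized
    if a_known and not b_known:
        return option_a
    if b_known and not a_known:
        return option_b
    return None  # both or neither recognized: fall back on other cues

# A hypothetical German participant who has heard of Detroit but not Milwaukee:
recognized_cities = {"Detroit", "Chicago", "New York"}
print(recognition_heuristic("Milwaukee", "Detroit", recognized_cities))  # Detroit
```

Note that the heuristic deliberately returns no verdict when both options are recognized, which is exactly the situation most Americans are in and why they cannot exploit it.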

        Type-1 processing is particularly useful when time is of the essence and little information is available on which to base one’s decision. Type-2 processing is much more useful when a task requires logical or mathematical reasoning, when one needs to avoid stereotyping or the misapplication of heuristics, and when time is not of the essence.

One major problem, however, is that people don’t always know when to use type-1 processing versus type-2 processing. Sometimes a rule of thumb is applied to a task that could have been resolved accurately using logical or mathematical reasoning (type-2 processing). In other cases, the accurate results people reach using type-1 processing are retracted in favor of incorrect type-2 processing.

Even when we know which type of cognitive processing to apply, we tend to make mistakes, primarily because most of us are not very intellectually virtuous (Matlin, 2013). Cognitive errors we make using type-1 processing include:

  • Committing simple logical errors. For instance, you might think that you are more likely to die in an airplane crash that is the result of terrorism than you are to die in any kind of airplane crash (conjunction fallacy).
  • Relying too heavily on the first piece of information offered (“the anchor”) when making decisions (anchoring). For instance, if you have just been asked for the last two digits of your social security number, you may pay more for an item you are asked to purchase afterwards the higher the last two digits of your social security number are.
  • Basing our predictions of the frequency of an event on how easily an example can be brought to mind (availability heuristic). For instance, after hearing news reports about people losing their jobs, you might mistakenly infer that you are at risk of losing your job.
  • Preferring one option to a second option merely on the basis of the description of the options (framing effect). For instance, you may prefer the package of minced meat that says ‘90 percent fat-free’ to a package containing the same product saying ‘contains 10 percent fat’.
  • Weighing losses more heavily than equivalent gains (loss aversion). For instance, the displeasure of losing $10 is likely to be greater than the pleasure of finding $10.
  • Evaluating our past experiences almost entirely on the basis of how they were at their peak (pleasant or unpleasant) and how they ended (peak-end rule). For instance, you might idealize a bad relationship that ended on good terms, because you focus exclusively on its peaks and its termination.
  • Letting the ease of a mental task influence our implicit determination of the likelihood of an event (simulation heuristic). For instance, you may buy a lottery ticket because it is very easy to mentally picture a winning scenario.

Here I focus on errors produced by the representativeness heuristic and the availability heuristic (section 2) as well as cognitive biases (section 3)—three kinds of type-1 processing. I will argue that adopting intellectual virtues, such as intellectual humility, intellectual self-vigilance and intellectual gregariousness, can help minimize the errors produced by such type-1 processing. But, I will argue, other virtues, such as intellectual confidence, pride and optimism, are required in order to avoid the errors introduced when type-2 processing interferes with the exercise of reliable type-1 heuristics. I also examine several strategies for acquiring these intellectual virtues, including new habit formation and attentional bias techniques.

2. Heuristics and their Misapplication

According to the dual-process theory, we engage in two types of cognitive processing. Type-1 processing is fast and automatic and occurs below the level of conscious awareness. This type of cognitive processing takes place, for instance, when we engage in face recognition and automatic stereotyping. Type-2 processing is slow and controlled and needs focused attention. We typically engage in this type of processing when we conjure up counterexamples to a principle, when we discover that we have been automatically stereotyping and when we realize that the type-1 processing we have been employing may be incorrect.

While type-1 processing often is less accurate than type-2 processing, it is also faster and therefore more convenient to use during daily decision-making. More mistakes occur with type-1 processing because it relies on heuristics. Because heuristics serve only as rules of thumb, we make the wrong decisions or predictions when they fail (Kahneman, 2011).

Daniel Kahneman and Amos Tversky discovered that a small number of heuristics and cognitive biases govern type-1 decision making (Kahneman and Tversky 1996), and while these heuristics and biases allow accurate and fast thinking under the right conditions, they generate mistakes when applied in the wrong circumstances. One heuristic that can generate errors when misapplied is the representativeness heuristic. When we employ this heuristic, we assume that samples are representative of—that they are indeed similar to—the populations from which they were selected. Suppose you are about to toss a coin one thousand times. Before tossing, you are asked to guess whether the coin will land heads closer to 50 percent of the time or closer to 60 percent. Assuming it is a fair coin, the best guess is that it will land heads closer to 50 percent of the time. The representativeness heuristic yields fairly good outcomes when the sample is large, and one thousand tosses is a large number of tosses. However, the heuristic is unreliable when the sample is small.

Kahneman and Tversky (1972) asked college students to consider which of two hospitals was more likely, on a given day, to have 60 percent of its newborns be boys: a small hospital where about fifteen babies were born each day on average or a large hospital where about forty-five babies were born each day on average. The majority of students responded that the likelihood was about the same. This, however, is incorrect. When a sample is small, it is less likely that the proportions represented by the sample will be representative of reality. So, even though about 50 percent of babies born in the world are boys, the percentage in a small sample is more likely to deviate from the population as a whole than the percentage in a large sample. This is also a major reason that statistical correlations are less likely to reflect real relationships in studies with a small number of subjects. When statistical power goes down (roughly, as the number of research participants decreases), statistical correlations that do not reflect real relationships are more likely to arise by chance.
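A quick Monte Carlo sketch makes the hospital point vivid. The model below is an idealization (each birth is a boy with probability 0.5, independently); it counts the fraction of days on which more than 60 percent of the babies born are boys at each hospital.

```python
# Monte Carlo sketch of the hospital problem (idealized: each birth is a
# boy with probability 0.5, independently). Count the fraction of days on
# which more than 60 percent of the babies born are boys.
import random

random.seed(0)

def fraction_of_boy_heavy_days(births_per_day, days=20_000):
    count = 0
    for _ in range(days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day > 0.6:
            count += 1
    return count / days

small = fraction_of_boy_heavy_days(15)   # small hospital
large = fraction_of_boy_heavy_days(45)   # large hospital
print(small > large)  # True: the small hospital deviates far more often
```

On a run like this, the small hospital records such boy-heavy days roughly twice as often as the large one, which is exactly the asymmetry the students failed to see.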

Another situation in which the representativeness heuristic may yield the wrong result is when it is prioritized over genuine statistical principles. An example of this is the base-rate fallacy (Kahneman, 2011). When students were provided with information about Tom W, who was described as having the stereotypical look and interests of an engineering student, but were then told that the school of engineering was significantly smaller than the college of humanities and education, they predicted that Tom W was an engineering student despite it being far more likely that he was a non-stereotypical humanities and education student.

Another error we frequently commit as a result of employing the representativeness heuristic is the conjunction fallacy. The Linda the Bank Teller problem is perhaps the best known illustration of this fallacy. Tversky and Kahneman (1983) asked participants to solve the following problem.

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Which is more probable?

1. Linda is a bank teller.

2. Linda is a bank teller and is active in the feminist movement.

More than 80 percent of participants chose option 2, regardless of whether they were novice, intermediate or expert statisticians. However, the probability of two events occurring in conjunction is always less than or equal to the probability of either one occurring alone.
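The conjunction rule can be illustrated with a short sketch. The probabilities below are invented for illustration; the point is that however the numbers are filled in, the probability of the conjunction cannot exceed the probability of the first conjunct.

```python
# Invented probabilities for illustration: however they are chosen, the
# probability of a conjunction can never exceed that of either conjunct.
p_bank_teller = 0.05            # assumed probability that Linda is a bank teller
p_feminist_given_teller = 0.8   # assumed probability she is a feminist, given that

p_both = p_bank_teller * p_feminist_given_teller
print(round(p_both, 2))  # 0.04, strictly less than 0.05
assert p_both <= p_bank_teller
```

Even with a conditional probability as high as 0.8, the conjunction ends up less probable than the single conjunct, because the conjunction picks out a subset of the bank-teller cases.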

The standard explanation of our mistake is that even though, logically, we should not pick option 2, we consider option 2 more likely to be correlated with what Linda did in college.

With her special background, Linda is representative of someone who is a feminist. So, we come to think that it is more likely that Linda is both a bank teller and a feminist than it is that she is a bank teller. While this is the standard explanation of our mistake, an alternative explanation of the error—as we will see below—is that the wording of the task encourages us to provide alternative and more natural, pragmatic readings of the problem.

Another example of a misapplication of the representativeness heuristic is that of stereotyping. It is a stereotype that gay women are masculine, whereas gay men are feminine. While there are gay women who are masculine and gay men who are feminine, the generalization is incorrect and is most likely a result of exposure only to a small sample of individuals from the respective populations.

The converse mistake occurs when we misapply what is known as the ‘availability heuristic’. The availability heuristic is used to predict frequency or probability in terms of how easy it is to think of relevant examples (Tversky and Kahneman, 1973). As we saw above, in the Detroit example, the availability heuristic sometimes produces accurate results. Errors resulting from the availability heuristic differ from those resulting from the representativeness heuristic in that the latter consist in being presented with a specific example (e.g., Linda) and then making an erroneous decision about whether the specific example is similar to the general category that it is supposed to represent. When we make errors using the availability heuristic, by contrast, we are given a general category and then make a mistake when asked to recall specific examples (Tversky and Kahneman, 1983). For example, if we are given the category of philosopher and are asked to quickly provide two examples, we are more likely to cite old white males than young white females, because old white male philosophers come more easily to mind. In this case factors like recency and familiarity can affect memory retrieval.

The availability heuristic is also sometimes responsible for illusory correlations like stereotyping. Like the representativeness heuristic, the availability heuristic can lead us to believe that two variables are statistically correlated, even when they are not. One can minimize the tendencies to believe stereotypes and to engage in stereotyping by becoming acquainted with a larger sample of the population and by collecting more data before making a decision. As we will see in section 4, these preventative measures are connected to intellectual virtues such as intellectual humility or self-vigilance.

3. Cognitive Biases

In addition to misapplying heuristics, we are also often the slaves of cognitive biases. A common cognitive bias that people have, especially when employing fast type-1 processing, is confirmation bias. Confirmation bias occurs when we look for evidence that confirms our hypothesis but ignore or forget to look for evidence that might refute our hypothesis. Consider the following example of confirmation bias (Wason, 1968). You are presented with four cards E, J, 6 and 7:

E   J   6   7

The task is to decide which card or cards to turn over in order to determine whether the following conditional is true or false of the four cards:

IF A CARD HAS A VOWEL ON ONE SIDE, THEN IT HAS AN EVEN NUMBER ON THE OTHER SIDE

Most people presented with the task choose to turn over the E card, and if the E card has an even number on the other side, they then conclude that the conditional is true. Although turning over the E card is a good first step, stopping the task after confirming that the E card has an even number on the other side is an example of confirmation bias. Turning over the E card can help confirm the hypothesis that the conditional is true if the E card has an even number printed on the back. But even if it has an even number printed on the back, we are not yet done, because other possible counter-evidence can be revealed by turning over a second card. Some people presented with the task suggest turning over the E card, and if the E card has an even number printed on the back, they then suggest turning over the 6 card (Oaksford and Chater 1994). However, turning over the 6 card is useless, as the conditional does not state that even-numbered cards have a vowel on the other side. Turning over the 6 card is an instance of affirming the consequent, which is a fallacy. The correct solution to the problem is to turn over the E card, and if the E card has an even number printed on the back, then turn over the 7 card (or we could do it in the reverse order). The conditional is true if the E card has an even number printed on the back (the check corresponding to affirming the antecedent) and the 7 card does not have a vowel printed on the back (the check corresponding to denying the consequent). Otherwise the conditional is false.
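The logic of the selection task can be captured in a brief sketch: only cards that could hide a counterexample to the conditional (a vowel paired with an odd number) are worth turning over.

```python
VOWELS = set("AEIOU")

def could_falsify(visible):
    """Can this card possibly hide a counterexample to the conditional
    'if vowel on one side, then even number on the other'?"""
    if visible in VOWELS:
        return True   # an odd number on the back would falsify the rule
    if visible.isdigit() and int(visible) % 2 == 1:
        return True   # a vowel on the back would falsify the rule
    return False      # consonants and even numbers cannot falsify it

cards = ["E", "J", "6", "7"]
print([card for card in cards if could_falsify(card)])  # ['E', '7']
```

The sketch returns exactly the E and 7 cards: the 6 card drops out because no possible back face of an even-numbered card can contradict the conditional.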

Confirmation bias is not uncommon in daily life. For instance, individuals who believe they are insomniacs often overestimate how long it takes them to fall asleep and underestimate the amount of time they sleep (Harvey and Tang 2012). They ignore evidence that would disconfirm their hypothesis that they suffer from insomnia.

Cognitive errors that occur when type-1 processing is employed in circumstances where it should not be may be a result of intellectual laziness or closed-mindedness. Failure to use type-2 reasoning to double-check may likewise be a result of intellectual laziness or arrogance. Intellectual laziness and arrogance can lead us to endorse false beliefs rather than question them.

4. Intellectual Humility, Self-Vigilance and Gregariousness

We now turn to the question of how intellectual humility and other virtues can help us address both cognitive biases like confirmation bias and misapplied heuristics like the availability heuristic and the representativeness heuristic.        

Intellectual diligence/perseverance, intellectual self-vigilance/openmindedness and intellectual humility stand in opposition to their vice counterparts: intellectual laziness, intellectual closed-mindedness and intellectual arrogance. A person is diligent when she is intellectually conscientious and continues to pursue her intellectual goals in spite of obstacles (King, 2015). A person is self-vigilant when she self-monitors, is observant and attentive in her intellectual enterprises and is open to revising her belief set in light of new information (Riggs, 2010). Finally, a person exhibits humility when she is aware of her own intellectual limitations and does not seek to cross those limits without having the requisite skills (Whitcomb et al., 2017).

Correcting failures to employ type-2 processing would not be difficult, if it were not for our intellectual laziness, closed-mindedness and arrogance. These vices are, however, traits that can be overcome using willpower and taking advantage of the mechanism of habit formation (Dickinson, 1985). Once something has become a habit, one does not need to use conscious thinking to carry out the task.

Consider the activity of driving a car. The first time you sat behind the steering wheel of a car, you likely had to think hard about everything you did, from gear-switching and parallel parking to lane shifts and making your way in and out of roundabouts. But after enough time has passed, you no longer need to put much (if any) conscious thought into driving. Your skills have become automatic, which means that you have formed a habit.

The good thing about habit formation is that as long as you are strong-willed and keep carrying out the task in question at frequent intervals, you will in due course be able to perform it automatically (Dickinson, 1985; Yin and Knowlton 2006).

On the neural level, habit formation amounts to a strengthening of the neural circuits employed in carrying out the task. Proteins are deposited in the synapses (gaps) between neurons, making them more likely to fire together (Yin and Knowlton 2006). Once the habit is acquired, you can complete the task using type-1 cognitive processing rather than type-2 processing. Most of our daily activities are completed in this way, using type-1 cognitive processing, for instance, getting ready for work in the morning, cooking familiar dishes, responding to run-of-the-mill emails or text messages, and so on.

We can take advantage of habit formation, not by switching to type-1 processing when type-2 processing is called for, but rather by making it a habit to double check responses before blurting them out, particularly when a task is logical or mathematical and has a correct answer that can be arrived at if one exercises intellectual humility and self-vigilance. Consider the following puzzle, borrowed from Kahneman's Thinking, Fast and Slow (2011: 44-45):

A bat and ball cost $1.10.

The bat costs one dollar more than the ball.

How much does the ball cost?

The puzzle naturally evokes an intuitive answer: 10 cents (the correct answer is 5 cents). This is a simple math puzzle that is easily solved using type-2 cognitive processing. But when we are intellectually lazy, we tend to follow our gut instincts, even when the task is not the kind of task that should be handled in this way. Mathematical and logical exercises require type-2 processing. Second-guessing ourselves when solving these types of exercises should be something we do automatically. The automaticity involved in double-checking mathematics and logic problems involves type-1 processing exactly where type-1 processing ought to be employed.
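The double-checking that type-2 processing supplies here is just a line of algebra. Letting the ball cost b, the bat costs b + 1.00 and together they cost 1.10, so 2b = 0.10 and b = 0.05. The check can be made mechanical:

```python
# Let ball = b. Then bat = b + 1.00, and b + (b + 1.00) = 1.10,
# so 2b = 0.10 and b = 0.05. Exact rationals avoid float rounding.
from fractions import Fraction

total = Fraction(110, 100)   # $1.10 for bat and ball together
difference = Fraction(1)     # the bat costs $1.00 more than the ball

ball = (total - difference) / 2
bat = ball + difference
print(float(ball), float(bat))  # 0.05 1.05
```

Note that the intuitive answer of 10 cents fails the check: a 10-cent ball and a $1.10 bat would total $1.20.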

One further intellectual virtue can help achieve type-2 processing when called for. As I have argued elsewhere (Brogaard 2014a; Brogaard 2014b), knowledge is not the only epistemic good. Our intellectual peers are valuable insofar as they can help us keep our intellectual self-confidence in check. Peer reviewing encourages intellectual humility and self-vigilance. We can call a natural or automatic tendency to engage with intellectual peers for the sake of getting to the truth ‘intellectual gregariousness’. People who are intellectually gregarious are more likely to revise their decisions and beliefs on the basis of feedback from their peers than those who work in isolation.

Intellectual gregariousness can help stimulate self-vigilance and hence strengthen type-2 processing. Intellectually gregarious individuals are stimulated by events that allow for sharing of ideas and future collaborative projects, events such as chatting with colleagues in hallways, participating in conferences and workshops, reading and writing book reviews.

Developing a more outward-directed intellectual attitude requires changing our intellectual interest through a consistent change in situational interest (Johnstone et al., 2007; Ochsner et al., 2002; McRae et al., 2012). Intellectual interest can be understood as a constellation of dispositions to like, dislike or prefer certain intellectual activities that lead to consistent patterns of behavior, either as a result of a stable personality trait (personal interest) or as a result of situational factors (Cole & Hanson, 1978; for further discussion, see Mount et al., 2005).

Our intellectual interest changes considerably as we proceed through life. Most of these changes protect us against disappointment. Unconscious influences alter our preferences in light of the options we have available (Colburn, 2011). For example, if you have a preference for a life of intellectual extravagance working in the world’s best-ranked philosophy department but you are unlikely to ever reach this goal, your brain may implicitly alter your preferences and make you prefer what is obtainable. It would be great if our brains always made us alter our preferences to fit our options without us having to rely on conscious effort. But that evidently is not the case. Sometimes we need to change our personal interest by consistently changing our situational interest.

Unlike personal interest, situational interest is spontaneous, transitory, and is triggered by the particular situation you find yourself in. You might not normally be interested in talking to people at work functions and yet suddenly find yourself fascinated by what your colleague has to say at the annual holiday celebration. Educational research shows that consistent situational interest is the main factor that can trigger personal interest (Krapp et al., 1992). Situational interest increases when you receive novel information (Hidi 1990), as well as when the activity you are engaging in is at least minimally relevant to your personal interests (Schraw and Dennison 1994).

To awaken your interest in getting validation and scrutiny from your intellectual peers and returning the favor, you might attempt to seek out intellectually enjoyable situations. A big secret of intellectually gregarious individuals is that they pay attention to, and comment on, details of expressed viewpoints and arguments. They also ask a lot of questions pertaining to their own pursuit of the truth.

Imagine that you are an introverted philosopher with an interest in the meaning of life. At the annual holiday party, you find yourself surrounded by future educators, lawyers and entrepreneurs. When they are not talking shop, which is beyond you, they are chatting about the weather. You last an hour, then you can no longer breathe and you split. Needless to say, this is the wrong approach. The right approach is to realize that it could be illuminating to find out what future educators, lawyers and entrepreneurs have to say about the meaning of life and then attempt to gather that information.

5. Gut Feelings

Kahneman notes that ‘many people are overconfident, prone to place too much faith in their intuitions’ (2011: 45). In other words, people are overconfident that type-1 processing will produce the right results, even in circumstances where it does not. This is particularly problematic when we are faced with logic or math problems, stereotyping and other misapplications of representativeness and availability heuristics. In many cases of logical or mathematical problems as well as stereotyping and misapplications of heuristics, we mistakenly employ type-1 processing when type-2 processing is called for.

There are many other cases, however, where type-1 processing is far more reliable than type-2 processing. One example comes from baseball. Richard Dawkins notes that

when a man throws a ball high in the air and catches it again… he behaves as if he had solved a set of differential equations in predicting the trajectory of the ball… At some subconscious level, something functionally equivalent to the mathematical calculations is going on (Dawkins 1976/1989: 96).

Research, however, has shown that people who catch balls, such as outfielders in baseball, do not engage in mathematical calculations of the trajectory of the ball (Brogaard & Marlow, 2015). Catching the ball is not possible on the basis of slow type-2 processing. So, how do outfielders catch the ball? The answer is that they rely on a ‘gaze heuristic’ (Gigerenzer, 2007). The brain does not bother calculating any real facts about the speed or trajectory of the ball. Instead, it uses an algorithm that adjusts the outfielder’s running speed so that the ball appears continually to move in a straight line in his field of vision. In other words, through practice, the outfielder’s brain has developed its own algorithm to make it possible for him to catch the ball.
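The constant-angle idea behind the gaze heuristic can be illustrated with a toy simulation. The launch numbers below are invented, and this is a sketch of the principle rather than a model of real fielding: if the fielder keeps moving so that the ball's elevation angle in his field of vision stays constant, he ends up precisely where the ball lands, without ever computing its trajectory.

```python
# Toy simulation of the gaze heuristic (invented numbers, frictionless ball).
G = 9.8                      # gravitational acceleration (m/s^2)
vx, vy = 8.0, 20.0           # assumed launch velocity of the ball (m/s)

def ball(t):
    """Position of the ball at time t (launched from the origin)."""
    return vx * t, vy * t - 0.5 * G * t * t

t0 = 0.5                     # the fielder locks his gaze shortly after launch
x0, y0 = ball(t0)
k = y0 / x0                  # tangent of the fixed gaze angle (fielder starts at x = 0)

t_land = 2 * vy / G          # time at which the ball returns to the ground
for t in (1.0, 2.0, 3.0, t_land):
    xb, yb = ball(t)
    fielder_x = xb - yb / k  # the position that keeps the gaze angle constant

print(round(fielder_x, 2), round(vx * t_land, 2))  # → 32.65 32.65
```

The fielder's final position coincides with the ball's landing point by construction: keeping the gaze angle constant is geometrically equivalent to arriving under the ball, so the heuristic substitutes a simple perceptual control loop for an explicit trajectory calculation.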

A second example of successful type-1 processing comes from cases in which name recognition produces accurate results. Recall the Detroit example above, in which foreigners who know very little about American cities are asked to determine whether there are more people in Milwaukee or Detroit. Most are more inclined to answer ‘Detroit’ because they recognize it (Gigerenzer, 2007). In this type of case, foreigners answer correctly, whereas most Americans would not know which of the two cities is larger. Familiarity and name recognition can thus produce accurate predictions in circumstances where knowledge about the options is limited and you recognize one option but not the other.

A third example of successful type-1 cognitive processing comes from expert chess players. Experts have gone through a process of perceptual learning that allows them to automatically recognize chess configurations as units rather than having to analyze every configuration presented to them during a chess game. Whereas novices are only able to encode the position of the individual chess pieces in long-term memory, expert chess players encode complicated patterns. The basic unit encoded in long-term memory is the ‘chunk’, which consists of a configuration of pieces that are frequently encountered together and that are related by type, color, role, and position (Chase and Simon, 1973a, 1973b). The number of configurations that the expert player has stored in long-term memory can be as high as 300,000 (Gobet & Simon, 2000). The chunks can also be encoded in a combined form known as ‘templates’ (Gobet & Simon, 1996).

Studies using eye-movement measurements have demonstrated that retrieval of chess configurations in experts correlates with holistic fixation on the pieces on the chessboard and with a widening of the visual span (de Groot & Gobet, 1996; Reingold et al., 2001). These studies suggest that there is a difference, not simply in the cognitive abilities of chess experts and chess novices, but in their perceptual appearances.

Reingold et al. (2001) carried out a study that suggests that part of the enhanced skill-set of expert chess players is a result of perceptual learning that changes the brain’s visual system. A minimized 5x5 chessboard was displayed to novice, intermediate and expert chess players. In the first part of the study, configurations fell into two types: (i) configurations with two or three pieces in a checking setup (e.g., the bishop in one corner and the king in the diagonal corner). This is the ‘yes’ condition. (ii) Configurations with two or three pieces in a non-checking setup (e.g., the rook in one corner and the king in the diagonal corner). This is the ‘no’ condition. In the second part of the study, only the two attacker positions (e.g., the bishop/rook and the king) from the first part were used, and double check positions were added to create four possible combinations of checking for both attackers (i.e., yes/yes, yes/no, no/yes, and no/no). The non-checking configuration was the congruent condition, whereas the checking configuration was the incongruent condition. On half of the trials, one of the attackers was colored red as a cue (e.g., the rook) (Fig. 1).

[Figure 1 about here]

Figure 1. Examples of the check configurations. The top row demonstrates ‘yes’ (check) versus ‘no’ (non-check) conditions with two or three pieces. The bottom row illustrates the no-cue condition (‘no’ trials) and conditions in which a cued non-checking attacker appeared together with an attacker that was either congruent (i.e., non-checking) or incongruent (i.e., checking). From Reingold et al. (2001).

In the first part of the study the players were told to determine as quickly and accurately as they could whether or not the black king was in check. Here the results showed that novice and intermediate chess players responded more slowly when there were two attackers (three pieces) than one attacker (two pieces), whereas the extra piece didn’t affect expert chess players. This indicates holistic processing for experts but non-holistic processing for novices and intermediate players, who would need to evaluate each chess piece in a serial fashion. The results support the claim that the enhanced skill set of expert chess players is a result of acquiring new perceptual abilities, viz. abilities to process chess configurations as units.

In the second part of the study, the participants were instructed to proceed as before if there was no cue but if a cue was present they should ignore the non-colored attacker. If processing of chess relations is serial (piece by piece), cuing should improve performance, as compared to the no-cue condition, because the player wouldn’t need to examine the non-cued checking relation. If, on the other hand, the processing of the chess relations is parallel (holistic), cuing should not improve performance. The results showed that cueing helped novices and intermediate players but didn’t help experts, suggesting that unlike non-experts, experts process the chess configurations holistically rather than piece by piece.

The results furthermore revealed that, when a cue was present, experts were faster in the congruent (non-checking) than in the incongruent (checking) condition. This congruency effect suggests that experts extracted the to-be-ignored checking relation automatically, even when instructed to disregard it.

The case of perceiving chess configurations sheds light on the difference between type-1 and type-2 cognitive processing. For novice chess players, chess configurations present themselves piecemeal. In order to play chess, novices must engage in type-2 cognitive processing. In the case of experts, groups or chunks composed of several smaller structured units are themselves processed holistically as units. The hypothesis that expert chess players employ type-1 cognitive processing explains the relatively effortless and fast decision-making of expert players during chess games (relative to novice and intermediate players).

So, what does this tell us about intellectual virtue? We need life experience to learn which type-1 processes are reliable and which are not. Even if we cannot consciously control our inclination to use type-1 processes, we can get into the habit of using only reliable type-1 processes. Once we have a good sense of which type-1 processes to avoid and which to embrace, we might develop character virtues, such as intellectual pride with respect to those processes that are reliable.

6. Linda the Bank Teller Revisited

Type-1 cognitive processing may in fact be even more effective than it is usually made out to be. Indeed, some of the errors we are said to make when using type-1 rather than type-2 processing may not be errors upon further scrutiny but may be grounded in the human capacity for inferring additional semantic information from social contexts (see Hertwig and Gigerenzer, 1999 for a related reply to Tversky and Kahneman, 1983). In the Linda the Bank Teller case, most research participants exposed to the exercise likely make a mistake because they do not assign a literal reading to the information provided. In most everyday contexts, people do not attempt to communicate what the sentences they use express semantically but instead attempt to convey non-literal information. A well-known case is that of ‘John and Mary got married and had a baby’ versus ‘John and Mary had a baby and got married’. The two sentences are logically equivalent and have the same semantic meaning, but in ordinary discourse they normally also convey a temporal order, as in ‘John and Mary got married, and then they had a baby’ versus ‘John and Mary had a baby, and then they got married’.

In the bank teller case there are two ways in which a non-literal reading may be assigned to the case. Consider the difference between (1) and (2):

(1)

(a) Linda is a bank teller.

(b) Linda is a bank teller and a feminist.

(2)

(a) Linda is only a bank teller (i.e., a bank teller but not a feminist).

(b) Linda is a bank teller and a feminist.

1(a) does not exclude any feminist bank tellers. So, any person who falls into the 1(b) category also falls into the 1(a) category. Since it is logically impossible for a person to fall into the 1(b) category without falling into the 1(a) category, it cannot be more likely for a person to be in the 1(b) category than it is for her to be in the 1(a) category. In fact, all else being equal, it is more likely for a person to be a bank teller but not a feminist than it is for a person to be both a bank teller and a feminist.
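The conjunction rule at work here can be made vivid with a small count over a hypothetical population (all numbers invented for illustration):

```python
# Hypothetical population of 1000 people. Every feminist bank teller is,
# by definition, also a bank teller, so the count for "bank teller" can
# never be smaller than the count for "bank teller AND feminist".
population = 1000
bank_tellers = 50            # hypothetical: people who are bank tellers
feminist_bank_tellers = 10   # hypothetical: the subset who are also feminists

p_bank_teller = bank_tellers / population
p_bank_teller_and_feminist = feminist_bank_tellers / population

# The conjunction can never be more probable than either of its conjuncts.
assert p_bank_teller_and_feminist <= p_bank_teller
```

Whatever numbers one plugs in, the subset relation guarantees that the conjunction comes out no more probable than 1(a) alone.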

The first remark does not apply in the case of (2). If someone falls into category 2(b), then as a matter of necessity they do not fall into category 2(a). Yet if we randomly choose an individual from the general population, it is evidently more likely that they are a bank teller and not a feminist than it is that they are a bank teller and a feminist. Linda is not a randomly chosen individual, however. The reader is given background information about Linda. The background information tells us that when Linda was in college, she was a devoted feminist. If the reader assumes that the majority of people who are devoted feminists in college continue to be feminists later on, then the only rational response to the question of Linda’s post-college occupation is to say that there is a greater chance that Linda is a bank teller and a feminist than a bank teller and not a feminist.
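The point can be put numerically. Under the exclusive reading (2), the two options partition the bank-teller possibilities, so the background information about Linda's college feminism does real probabilistic work. The persistence rate below is a hypothetical figure, chosen only to represent the assumption that most devoted college feminists remain feminists:

```python
# Sketch of the exclusive reading (2), with an invented persistence rate.
# Suppose the background information makes it reasonable to think that
# 70% of devoted college feminists remain feminists later in life.
p_still_feminist = 0.7  # hypothetical persistence rate

# Conditional on Linda being a bank teller, the two exclusive options are:
p_teller_and_feminist = p_still_feminist       # option 2(b)
p_teller_not_feminist = 1 - p_still_feminist   # option 2(a)

# Under this reading, preferring 2(b) is the rational answer, not a fallacy.
assert p_teller_and_feminist > p_teller_not_feminist
```

Only if the persistence rate dropped below one half would option 2(a) become the more probable answer.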

The task the research participants were in fact asked to complete was that of determining the probability in case (1) when taken literally. So, if the task is followed correctly, the answer is that it is more likely that Linda is a bank teller. But this skill of providing answers on the basis of the meaning that is literally given to us is not typically a useful skill. If the host at a conference asks you to find out whether the keynote speaker has already had breakfast, and you discover that she had breakfast on the previous day but not that same morning, you would commit no semantic error in reporting back to the host that the keynote speaker had already had breakfast. You would not, however, have done a satisfactory job. Even though the host did not mention it, she clearly wanted to know whether the keynote speaker had breakfast that same morning, not whether she had breakfast the day before or a week prior to that.

The upshot is that people’s intuitive answer in The Bank Teller case is grounded in a useful intellectual skill, viz., that of being able to determine in real-world environments what the speaker is attempting to pragmatically convey rather than what the sentences she utters semantically express. This latter skill is exercised using type-1 processing, and in most ordinary circumstances using type-2 processing to interfere with the exercise of this skill would produce an unintended outcome.

The so-called cognitive flaw made by research participants in the Linda case also turns on the formulation of the problem. Suppose the task really is to determine which of the two options is more probable in (1). People may be more likely to provide the correct answer if the literal meaning is made explicit. For example, the two answer options could have been formulated as follows:

(3)

(a) Linda is a bank teller (and we are not saying that she is only a bank teller and not also a feminist. We are leaving that option open).

(b) Linda is a bank teller and a feminist.

Given this way of articulating the problem, we would expect research participants to assign equal probability to 3(a) and 3(b) if the background information about Linda’s college days is given a lot of weight. If the instructions also included a remark to the effect that it is not the case that most college feminists continue to be feminists, people might assign a higher probability to 3(a), which is the desired outcome in this particular case.

7. Intellectual Confidence, Optimism and Gregariousness

As we have seen in the previous two sections, cognitive errors can occur because we engage in slow type-2 cognitive processing when type-1 processing is more effective (pace Kahneman). Exceptions to the superiority of type-1 processing include purely logical and mathematical exercises, stereotyping and misapplications of the availability and representativeness heuristics. In the latter types of cases type-2 cognitive processing is the only reliable way to obtain an accurate result. In many everyday scenarios, however, type-1 processing is more reliable. In these cases, the most advantageous intellectual virtues to embrace are not virtues that make us slow down and think more carefully about what we are doing, but rather virtues that boost self-confidence in the results we have arrived at on the basis of our reliable type-1 processes. That is, in some cases it doesn’t pay off to second-guess ourselves.

How do we develop or enhance intellectual self-confidence? One way is to boost our optimism about our chosen methods for making decisions and predictions when a method has proven reliable. Countless studies have shown that optimism is associated with high self-esteem (see e.g. Ho et al., 2010; Hecht, 2013). Optimistic individuals see themselves as being in charge of their own intellectual successes and achievements rather than as passive agents whose successes come about only as a result of luck or strategic rule following (Chou, 2013).

Surprisingly, optimists and pessimists have distinct patterns of brain activation that can be measured using electroencephalography (EEG), which detects brain wave patterns in different parts of the brain (Hecht 2013). Optimism turns out to be associated with greater physiological activity in the frontal part of the left hemisphere, whereas pessimism triggers more activity in the right hemisphere.

In people who fall in the middle on the optimism/pessimism spectrum, the brain takes in and processes positive and negative information to about the same extent. But the left hemisphere is more active when positive information needs to be processed, whereas the right hemisphere is harder at work when the input is unpleasant or negative (Kakolewski, et al. 1999).

In one experiment, research participants listened to a recording of a message warning them about the damaging effects of sun tanning. They listened through either the left ear or the right ear (McCormick and Seta 2012). Information that comes in through one ear is processed on the opposite side of the brain. Those who received the message through the left ear and who therefore processed it in the right side of the brain were more likely to use sunscreen on the beach (McCormick and Seta 2012). In other words, the individuals who had processed the message in the right hemisphere were more likely to be cautious about the damages of sunburn, because the message was delivered to the “cautious” or “pessimistic” side of their brain.

This asymmetry between the two hemispheres of the brain can also be detected when mid-spectrum people process information about their own positive versus negative features (Fox, 2002). For example, if ill-tempered but hard-working people think about their own anger, the right hemisphere is more active. When they contemplate how hard they work to achieve goals, on the other hand, the left hemisphere is harder at work.

The constantly elevated activity in the left hemisphere of optimists is explained by their propensity to look at the bright side of life and to see themselves in a positive light and as reliable intellectual agents. Pessimists have shut down the parts of the left hemisphere that are supposed to take in and process positive aspects of themselves and their surroundings and that underwrite a sense of being in charge of their own success. One form of depression is a pathological, or extreme, state of pessimism (Leary & Hoyle 2009: 35).

The good news is that second-guessing oneself owing to a lack of intellectual self-confidence can be corrected with goal-directed effort. Optimism about one’s own efforts and about the outcomes of one’s own decisions can prevent such second-guessing and thereby enhance intellectual self-confidence. Optimism and pessimism are both rooted in what is known as attentional bias (for an overview, see Hecht, 2013). Attentional bias is a general tendency to attend selectively to certain kinds of sensory inputs or thoughts, which then shape the cognitive processing of a task.

Because optimists have recurrent positive thoughts about themselves and their own problem-solving strategies, their brains are more attentive to positive elements of their own cognitive skill set. Accordingly, they filter out information that does not fit their brighter mindset. Pessimists are equally affected by attentional bias but the information they take in is not filtered through rosy glasses (Hecht, 2013). Pessimists pay greater attention to negative thoughts and negative cues, while ignoring positive elements.

This has been measured in a number of ways, most successfully by tracking people’s eye movements when they are presented with pleasant or unpleasant images (Isaacowitz, 2005; Segerstrom, 2001). When presented with two parallel images, one pleasant, such as a smiling face, and the other unpleasant, such as a fearful face, optimists gaze significantly less at the unpleasant image and focus much more on the pleasant one than pessimists do.

These kinds of attentional biases can be corrected with effort. One way to adjust a right-brain negative bias is to engage in imagination exercises that attribute a happy outcome to a devastating situation (Johnstone et al. 2007; Ochsner et al. 2002; McRae et al. 2012). In a pilot study,[1] we asked volunteers to look at pictures of fatal car accidents, quadriplegics in motorized wheelchairs, and homeless people on the street. Participants who had scored as severe pessimists at the outset were told what actually happened and were then asked to imagine a different, positive outcome of the scenario. For example, they might imagine the homeless person finding a winning lottery ticket on the street or the quadriplegic meeting a doctor with a magical cure. They repeated this task once daily for eight weeks. After the study period, their degree of pessimism had decreased significantly as measured by standard optimism questionnaires. These initial results indicate that if we dwell less on negative information, we may become more optimistic.

Another approach that may help correct a right-brain negative bias is to train the brain to search for positive cues in the environment. In a second pilot study,[1] we asked research participants scoring high on pessimism to search for the one happy face in a crowd of unhappy/neutral faces displayed on a computer screen. Each session comprised twenty visual search tasks, each requiring the participant to find a happy face in a crowd of unhappy/neutral faces. Our volunteers were asked to repeat the task once daily for eight weeks. Those who complied with the task scored significantly higher on measures of optimism after the eight weeks compared to their starting point. This points to the possibility that actively searching for positive cues can increase optimism.


Figure 2. Screenshots from a task asking participants scoring high on pessimism to identify the happy face in a crowd of neutral faces.

This sort of attentional bias task does not require a laboratory setting or specially designed computer stimuli. One could complete it when sitting in a crowded dentist’s office, walking around the grocery store, or riding the subway, using the wait time to practice finding the happiest face in the crowd.

Optimism about the outcome of our decisions and the accuracy of our predictions does not by itself make for intellectual self-confidence of the right kind, even though it is a step in the right direction. As noted above, we also need reflective thoughts about which type-1 processes are reliable and which are not. We need to be able to distinguish between cases where quick thinking is desirable (e.g. when time and information are very limited) and cases where it is not (e.g., cases of stereotyping). We want to avoid indiscriminate self-confidence. We want self-confidence to target type-1 processes that are reliable, but not processes that are unreliable. This is where intellectual pride enters the picture. People with this virtue are attentive to their intellectual strengths and own them.

8. Conclusion

Tversky and Kahneman have long argued that we do not reason rationally and are subject to cognitive illusions produced by heuristics that we rely on when we reason fast. The heuristics and fallacies giving rise to these cognitive illusions include the representativeness heuristic, the availability heuristic, the conjunction fallacy and confirmation bias. The mistake we make in all of these cases is to rely on fast type-1 cognitive processing rather than slow type-2 cognitive processing. Only the latter type of processing is reliable as a method for making decisions and predictions. Or so the argument goes.

As it turns out, however, only in certain types of circumstances, such as when the task is strictly logical or mathematical or when we are at risk of stereotyping or misapplying the availability and representativeness heuristics, would it be wise to interfere with our fast type-1 processing skills. One reason for this is that we often do not have the required information to use type-2 processing effectively. Another reason is that there are many circumstances that call for pragmatic approaches.

I have argued that while intellectual humility, intellectual self-vigilance and intellectual gregariousness can be useful in eliminating errors in cases that call for type-2 cognitive processing, a different set of intellectual virtues are needed to minimize errors in cases that call for type-1 cognitive processing. The latter include intellectual self-confidence, pride and optimism. I have also discussed some approaches that may be employed in an effort to acquire these virtues. Intellectual gregariousness may be cultivated by changing one’s situational interests. Intellectual self-vigilance may be acquired using standard techniques for habit formation. There is an even more efficient way to acquire intellectual self-confidence. It consists in using attentional-bias exercises to develop a brighter outlook on one’s own decisions and predictions.[2]

References

Ackerman PL & Heggestad ED. (1997) “Intelligence, personality, and interests: Evidence for overlapping traits,” Psychological Bulletin, 121, 219-245.

Brogaard, B. (2014a) “Intellectual Flourishing as the Fundamental Epistemic Norm,” in C. Littlejohn and J. Turri (eds.), Epistemic Norms: New Essays on Action, Belief, and Assertion, Oxford: Oxford University Press, 11-31.

Brogaard, B. (2014b) "Towards a Eudaimonistic Virtue Epistemology", In: Abrol Fairweather (ed.), Naturalizing Virtue Epistemology, Synthese Library, 2014, Volume 366, pp 83-102.

Brogaard, B. & Marlow, K. (2015) The Superhuman Mind, New York: Penguin Group.

Chase, W. G., & Simon, H. A. (1973a) “The Mind’s Eye in Chess.” In W. G. Chase (Ed.), Visual information processing (pp. 215-281). New York: Academic Press.

Chase, W. G., & Simon, H. A. (1973b) “Perception in Chess,” Cognitive Psychology, 4, 55-81.

Chou, S. (2013). “In search of optimal optimism: Can we be realistically optimistic?” Presented at the 121st American Psychological Association Annual Convention (APA), Honolulu, Hawaii, August, 2013.

Colburn, B. (2011) “Autonomy and Adaptive Preferences”, Utilitas 23, 01: 52-71

Cole NS, Hanson GR. ( 1978). “Impact of Interest Inventories on Career Choice,” In Diamond EE (ed.), Issues of Sex Bias and Sex Fairness in Career Interest Measurement. Washington, DC: National Institute of Education.

Dawkins, R (1976) The Selfish Gene, Oxford: Oxford University Press.

Dickinson, A. (1985). “Actions and Habits: The Development of Behavioural Autonomy,” Philosophical Transactions of the Royal Society B: Biological Sciences, volume 308: 67—78.

Fox E. (2002) “Processing Emotional Facial Expressions: The Role of Anxiety and Awareness,” Cogn Affect Behav Neurosci. 2:52–63.

Giltay EJ, Geleijnse JM, Zitman FG, Hoekstra T, Schouten EG.(2004) “Dispositional Optimism and All-Cause and Cardiovascular Mortality in a Prospective Cohort of Elderly Dutch Men and Women,” Arch Gen Psychiatry 61(11):1126-35.

Gigerenzer, G. (2007). Gut Feelings: The Intelligence of the Unconscious, New York: Penguin Group.

Gobet, F., & Simon, H. A. (1996) ”Templates in Chess Memory: A Mechanism for Recalling Several Boards,” Cognitive Psychology, 31, 1-40.

Gobet, F., & Simon, H. A. (2000)  “Five Seconds or Sixty? Presentation Time in Expert Memory,” Cognitive Science, 24, 651-682.

Harvey, AG & Tang NK. (2012) “(Mis)Perception of Sleep in Insomnia: A Puzzle and a Resolution,” Psychol Bull. 138(1):77-101.

Hecht, D. (2013) “The Neural Basis of Optimism and Pessimism,” Exp Neurobiol. 22(3): 173–199.

Hertwig, R., and Gigerenzer, G. (1999) “The ‘conjunction fallacy’ revisited: how intelligent inferences look like reasoning errors,” J. Behav. Decis. Mak. 12, 275–305.

Hidi, S. (1990) “Interest and its Contribution as a Mental Resource for Learning,” Rev. Educ. Res. 60: 549–572

Ho, MY, Cheunga FM, Cheung, SF. (2010) “The Role of Meaning in Life and Optimism in Promoting Well-Being”, Personality and Individual Differences, 48, 5: 658–663.

Isaacowitz, DM. (2005) “The Gaze of the Optimist,” Pers Soc Psychol Bull. 31: 407–415.

Isaacowitz, DM. (2006) “Motivated Gaze: The View from the Gazer,” Curr Dir Psychol Sci. 15: 68–72.

Johnstone T, van Reekum CM, Urry HL, Kalin NH, Davidson RJ. (2007) “Failure to Regulate: Counterproductive Recruitment of Top-Down Prefrontal-Subcortical Circuitry in Major Depression,” J Neurosci. 27: 8877–8884.

Kahneman, D. (2011) Thinking, Fast And Slow, New York: Macmillan: Farrar, Straus and Giroux.

Kahneman, D. & Tversky, A (1972) “Subjective Probability: A Judgment of Representativeness,” Cognitive Psychology 3: 430-454.

Kahneman, D. & Tversky, A (1996) “On the Reality of Cognitive Illusions,” Psychological Review 103: 582-591.

Kakolewski KE, Crowson JJ, Jr, Sewell KW, Cromwell RL. (1999) “Laterality, Word Valence, and Visual Attention: a Comparison of Depressed and Non-Depressed Individuals,” Int J Psychophysiol. 34:283–292.

King, N. (2014) “Perseverance as an Intellectual Virtue,” Synthese 191(15): 3501-3523.

Krapp, A., Hidi, S., and Renninger, K. A. (1992) “Interest, Learning and Development.” In Renninger, A., Hidi, S., and Krapp, A. (eds.), The Role of Interest in Learning and Development, Erlbaum, Hillsdale, NJ: 3–25.

Leary MR, Hoyle RH, eds. (2009) Handbook of Individual Differences in Social Behavior,  New York: Guilford Press.

MacLeod, C. & Campbell, L. (1992) “Memory Accessibility and Probability Judgments: An Experimental Evaluation of the Availability Heuristic,” Journal of Personality and Social Psychology 63: 890-902.

Matlin, M. W. (2013) Cognition, 8th ed. Hoboken: Wiley.

McCormick M, Seta JJ. (2012) “Lateralized Goal Framing: How Selective Presentation Impacts Message Effectiveness,” J Health Psychol.17:1099–1109

McRae K, Gross JJ, Weber J, Robertson ER, Sokol-Hessner P, Ray RD, Gabrieli JD, Ochsner KN. (2012). “The Development of Emotion Regulation: an fMRI Study of Cognitive Reappraisal in Children, Adolescents and Young Adults,” Soc Cogn Affect Neurosci. 7:11–22.

Mount M, Barrick, MM, Scullen, SM and Round, J. (2005) “Higher-Order Dimensions of the Big Five Personality Traits and the Six Vocational Interest Types,” Personnel Psychology 58, 2: 447–478.

Oaksford, M. & Chater, N. (1994) “A Rational Analysis of the Selection Task as Optimal Data Selection,” Psychological Review 101: 608-631.

Ochsner KN, Bunge SA, Gross JJ, Gabrieli JD. (2002) “Rethinking Feelings: An fMRI Study of the Cognitive Regulation of Emotion,” J Cogn Neurosci. 14: 1215–1229.

Reingold, E. M., Charness, N., Pomplun,M., & Stampe, D. M. (2001) “Visual Span in Expert Chess Players: Evidence from Eye Movements,” Psychological Science, 12, 49-56.

Riggs, W. (2010) “Open-Mindedness,” Metaphilosophy Vol.41(1‐2): 172-188

Roberts RC & West R. (2015) “Natural Epistemic Defects and Corrective Virtues,” Synthese 192(8): 2557-2576.

Samuelson, PL & Church, IM (2014) “When Cognition Turns Vicious: Heuristics and Biases in Light of Virtue Epistemology,” Philosophical Psychology 28 (8):1095-1113.

Schraw, G., and Dennison, R. S. (1994) “The Effect of Reader Purpose on Interest and Recall,” J. Reading Behav. 26: 1–18.

Segerstrom SC. (2001) “Optimism and Attentional Bias for Negative and Positive Stimuli,” Pers Soc Psychol Bull. 27: 1334-1343.

Stroop, JR (1935) “Studies of Interference in Serial Verbal Reactions,” Journal of Experimental Psychology 18, 643-662.

Tversky, A. & Kahneman, D. (1973) “Availability: A Heuristic for Judging Frequency and Probability,” Cognitive Psychology 5: 207-232.

Tversky, A., and Kahneman, D. (1983) “Extensional Versus Intuitive reasoning: The Conjunction Fallacy in Probability Judgment,” Psychol. Rev. 90, 4: 293-315.

Wason, PC (1968) “Reasoning about a Rule,” Quarterly Journal of Experimental Psychology 20: 273-281.

Whitcomb, D., Battaly, H., Baehr, J., and Howard-Snyder, D. (2017) “Intellectual Humility: Owning Our Limitations,” Philosophy and Phenomenological Research 94, 3: 509–539.

Yin HH and Knowlton BJ (2006) “The Role of the Basal Ganglia in Habit Formation,” Nature Reviews Neuroscience 7, 464-476.

 


[1] Unpublished pilot data from The Brogaard Lab for Multisensory Research.

[2] For helpful comments on a previous version of this paper I am grateful to Heather Battaly and audiences at Humboldt University and University of Miami.