Tuesday 13 June 2017

The Argumentative Theory of Reasoning

This post is by Hugo Mercier, Cognitive Scientist (French National Center for Scientific Research) and co-author (with Dan Sperber) of The Enigma of Reason. In this post, he discusses the argumentative theory and refers to some of his most recent publications (1; 2; 3). 

It is easy nowadays to find long lists of biases (such as this one). In turn, these lists of biases have given rise to numerous attempts at debiasing, and the popular system 1 / system 2 framework has been useful in framing them. System 1 is taken to be a set of cognitive mechanisms that deliver quick, effortless intuitions, which tend to be correct but are prone to systematic mistakes. System 2 is supposed to be able to correct these intuitions through individual reflection. Teaching critical thinking, for instance, can then be thought of as a way of strengthening system 2 against system 1.

The problem is that, as Vasco Correia noted in a recent post, debiasing attempts, including the teaching of critical thinking, have not been quite as successful as we might like. He suggests that instead of trying to change individual cognition, we should manipulate the environment to make the best of the abilities we have.

Essentially, this is the point that Maarten Boudry, Fabio Paglieri, Emmanuel Trouche, and I have made in a recent article. We ground our analysis in the argumentative theory of reasoning. According to this theory, reasoning is not a system 2-like homunculus able to oversee other cognitive mechanisms. Instead, it is just another intuitive mechanism among many. Its specificity is to bear on reasons: reasoning evaluates and finds reasons. By contrast, the vast majority of our inferences go on without any reasons being processed.

According to the argumentative theory of reasoning, the function of human reasoning is, as the name suggests, to argue. Reasoning would have evolved so that people can exchange arguments. When people disagree, they can then try to convince each other, and evaluate each other's arguments, so that whoever had the best idea to start with is more likely to carry the day.

This theory nicely explains why many biases observed in the lab aren't easily fixed by reasoning: they are biases of reasoning. In particular, the confirmation bias, or, more accurately, the myside bias, is specific to reasoning. Because of this myside bias, reasoning mostly produces reasons that support one's initial intuitions. Even if initial intuitions are misguided, they are more likely to end up being bolstered by reasoning than corrected. Unsurprisingly, individual reasoning, by and large, does a poor job of correcting mistaken intuitions.

What to do, then? Try to use reasoning in the context it evolved to work in: a discussion between people who disagree about something while sharing an overall goal, such as solving a problem or reaching more accurate beliefs. Such discussions improve performance in a wide variety of tasks, from medical diagnoses to economic forecasts, from logical problems to school tasks.

Ironically, doing so might end up improving solitary reasoning as well. When we exchange arguments with others, we are exposed to counter-arguments. With repeated exposure, we learn to anticipate counter-arguments, and this might help attenuate the myside bias.
