Thursday 27 April 2017

Interview with Thomas Sturm on the Science of Rationality and the Rationality of Science

In this post Andrea Polonioli interviews Thomas Sturm (pictured below), ICREA Research Professor at the Department of Philosophy at the Universitat Autònoma de Barcelona (UAB) and member of the UAB's Center for History of Science (CEHIC). His research centers on the relation between philosophy and psychology, including their history. Here, we discuss his views on empirical research on human rationality.


AP: The psychology of judgment and decision-making has been divided into what appear to be radically different perspectives on human rationality. Whilst research programs like heuristics and biases have been associated with a rather bleak picture of human rationality, Gerd Gigerenzer and his colleagues have argued that very simple heuristics can make us smart. Yet some philosophers have also argued that, upon close scrutiny, these research programs have no real disagreement. What is your take on the so-called “rationality wars” in psychology?

TS: Let me begin with a terminological remark. I would like to refrain from further using the terminology of “rationality wars”. It was introduced by Richard Samuels, Stephen Stich, and Michael Bishop (SSB hereafter) in 2002, and I have used their expression too without criticizing it. In academic circles, we may think that such language creates no problems, and I hate to spoil the fun. But because theories of rationality have such broad significance in science and society, there is a responsibility to educate the public, and not to hype things. Researchers are not at war with one another. Insofar as a dispute becomes heated, and fights for funding and recognition play a role, we should speak openly about this, tone down our language, and not put on a show. We should discuss matters clearly and critically, end of story.

Now, I study this debate, which has many aspects, with fascination. It is fascinating because it concerns one of the most important concepts in science and social life, adding fresh perspectives to philosophical debates that have occasionally become too sterile. And the debates are so interesting because they provide ample material for philosophy of science.

AP: Can you explain what you have in mind here?

TS: There are many things one could point out here, but perhaps the most philosophically interesting aspect of the debate is fueled by puzzles and paradoxes in our ordinary concept of rationality. People should know the background of this. Much of the present debate is a long-term reaction to the attempt to explicate a notion of rationality that represents how an ideal reasoner, equipped with unlimited temporal, cognitive, and other resources, would judge and decide – and how we finite reasoners should judge and decide. This notion was mostly developed during the mid-20th century through a confluence of modern logic as developed since Frege, probability theory, and the economic theory of games and decisions, as presented by von Neumann and Morgenstern in 1947.

The attributes of the desired theory vary from author to author and discipline to discipline, but it is not historically inaccurate to say that there was an aim to develop an account that would be formal, optimizing, algorithmic, and mechanical (i.e., implementable on computers). So, let’s call this the ‘FOAM’ theory, or group of theories, of rationality. FOAM theories quickly ran into serious problems – such as Allais’ paradox, Ellsberg’s paradox, Newcomb’s problem, or Amartya Sen’s criticism of “property Alpha”, i.e., the condition of the independence of irrelevant alternatives. Some of these challenges go back to early modern times, such as the St. Petersburg paradox, but in the 20th century they proliferated. The more precisely FOAM theories were spelled out, the easier it became to raise objections to them. Now, if the theories in question were taken to be descriptive, these problems could be seen as empirical counterexamples, and then one would have to look for an alternative explanation of judgment and decision-making.

If, on the other hand, one took the theories to be normative, then the same problems could be viewed as paradoxes, revealing that our concept of rationality isn’t homogeneous, or as indicating that we have conflicting normative intuitions about how people should reason, judge, and decide. This is the most basic philosophical source of the current debate: Followers of the “heuristics and biases” program, while agreeing that the theories are descriptively inadequate, view FOAM theories as normatively adequate, and choose to ignore normative disputes. Critics, by contrast, think that we need not take a particular FOAM theory for granted. The fast-and-frugal heuristics or “bounded rationality” program is more critical of standard norms of rationality. Sometimes the claim is that FOAM theories should be abolished altogether; sometimes the objection targets only one particular FOAM theory or part of it, or even only specific applications of such a theory, not its normative validity.

Now, SSB took the debate to be about the extent to which we are rational. This is an empirical question: How often do we follow norms from logic or probability theory, and how often do we fail to do so? SSB tried to resolve the psychological dispute by showing that the competing research programs really agree about all their core claims, and that the differences are only “rhetorical flourishes”. Stated differently, SSB argued that for Kahneman and Tversky the glass of human rationality looks, sadly, half empty, while for Gigerenzer the glass appears to be, happily, half full. So the debaters would indeed be under an illusion. SSB’s analysis was detailed, but in my opinion it misinterpreted some of the claims of both parties. For instance, they put Gigerenzer in the camp of evolutionary psychology, to which he is not really committed. More importantly, SSB did not properly address the core conceptual and normative issues. They lost sight of what the glass really is: What is rationality? What constitutes reasoning at all? And what are the core standards of good reasoning? These questions make the debate so philosophically fascinating. (I should mention that Bishop’s views have changed, as he told me in conversation.)

Let me add in what respect I find some philosophical reactions to the debate, namely those from certain – not all – naturalists, unconvincing. I don’t mean the kind of naturalism that claims that there are no supernatural entities. Very few people would deny that. I mean the kind of naturalism that claims that we can explain everything by the theories and methods of the sciences, and also the naturalism that claims that all philosophical questions – say, about reason or rationality – can be answered by the methods of science. Such naturalism often takes the form of following the latest developments in science in an all too uncritical way. Whatever else the rationality debates prove, they show that we cannot always take science at face value. Cognitive scientists, economists, and other social scientists have been aware that the dispute isn’t simply about empirical questions. The issues are philosophical ones: conceptual, methodological, normative. While some scientists are themselves trying to address them, they cannot do this by the standard methods of their fields, because those methods presuppose nonempirical assumptions about what rationality is and how to study it.

AP: In what ways do you think philosophers can contribute to psychological research on judgment and decision-making?

TS: The standard answer here is, of course: Philosophers can help with their expertise in dealing with conceptual puzzles, with normative issues (such as those arising in the methodology of science), and, in general, with critical thinking about questions which science cannot answer by its own methods. This answer is basically correct, but it must be adapted to the contexts in which philosophers are asked for advice. With respect to psychological debates about rationality, the main caveat concerns the special methods of psychology, as well as the rich, complex and ongoing debates about them within psychology. Philosophers do not learn anything about this from textbooks in philosophy of science. For instance, I studied with influential German and American philosophers of science, such as Lorenz Krüger and Philip Kitcher, but their core expertise concerns physics and biology, respectively.

These areas, plus their neighboring fields such as astronomy or chemistry, provide the majority of teaching materials in philosophy of science to this day. I do not mean that there are no domain-general topics in philosophy of science: theory-ladenness of data, underdetermination of theories by data, or experimental artifacts are problems in physics just as much as in psychology, no doubt. But some important issues in the rationality debate concern specifics of the proper understanding and uses of probability theory, or the roles of contextual factors in the explanation of experimental results, among other things – and then also important facts of the sociology of science, such as potentially distorting citation practices.

AP: What would be an example of such a distortion?

TS: A famous article by Tversky and Kahneman from 1974 seems to have been cited excessively, partly because it was published in Science. It made their claims concerning the ubiquity of fallacies or biases in reasoning highly popular, burying opposing results and studies underneath. In the early 1990s, for instance, Lola Lopes advanced some smart and sharp criticisms of the heuristics-and-biases approach, but she could not get her papers into the highest-ranking journals. Her work remains largely ignored. So, there may be a citation bias in the research about human biases in reasoning. I am formulating carefully here: The issue is a disputed one, but it is highly important, and it would deserve close scrutiny by someone.

AP: Back to our original question: What can philosophers contribute?

TS: We can answer this, in part, by looking at what they have contributed in the past. Some contributions were more like side effects of their work, while others were intentional. Of the first kind, the example that comes to my mind is Grice’s work on maxims of conversation. This has frequently been cited by psychologists who doubt that the experiments showing mistakes in reasoning are methodologically sound: at least some of the experiments produce results that could, by applying Gricean maxims, be interpreted more charitably, as more rational. Of the second kind, I think of Sen’s criticism of “property Alpha” – a clear counterexample showing that a particular condition of rational choice is far from uncontroversial. But this objection came from someone who is a philosopher and a scientist at the same time.

So, philosophy of science as it usually is now is insufficient. But there are a number of classical authors at the interfaces of philosophy and psychology whom I recommend reading, such as Karl Bühler, Kurt Lewin, Egon Brunswik, and Paul Meehl, or, currently, Klaus Fiedler, Joel Michell, and Gigerenzer. The latter is not only a participant in the rationality debates, but also a shrewd historian and philosopher of psychology. All of them had philosophical training or have worked with philosophers over their careers. I keep on learning about methodological issues from such psychologists. So philosophers can help to reinforce critical thinking, but they must do this in close cooperation with those scientists who think philosophically about their field. By the way: If that is a version of naturalism, it’s one that gives distinctive, critical weight to philosophy.

AP: And what scientific findings on human reasoning do you think should receive most attention within the philosophical community?

TS: I had better say which findings should not receive much attention, and how we should not use psychological research more generally. There are two studies that philosophers have pointed to endlessly: the Wason selection task and the Linda problem. Both have led to doubtful “findings”. These reasoning tasks are not too hard to explain, however, and so they get used time and again. To use Thomas Kuhn’s expression, they have become paradigms of bad reasoning that philosophers who wish to build upon psychological research cite – they are both particular exemplars of problem-solving and carriers of the methodological, theoretical, and axiological assumptions of the research built around them. If you don’t know these, you cannot take part in the discussion.

So, OK, know the paradigm, and if you don’t, look it up on the internet. But here is the difference from real Kuhnian paradigms: it’s far from clear that they can be used as good models for further research. True, psychologists from the heuristics-and-biases camp have used them in this way. But no true theory has emerged from that research. Heuristics such as representativeness or availability are mere names for processes that we do not understand. Also, we have a collection of heuristics, but no idea of how they could be used to form a systematic theory of human judgment and decision-making.

AP: What does that mean, for instance, for the interpretation of results in the Linda problem?

TS: If Kahneman and Tversky are right, subjects here are misled because they take into account the description of Linda as smart, trained in philosophy, politically interested, and a participant in antinuclear demonstrations – information that actually should not guide their reasoning about whether it is more probable that Linda is a feminist bank teller than that she is a bank teller. By judging the conjunction more probable, subjects commit the “conjunction fallacy”. However, we might as well say that subjects view the information as possible evidence, given to them in order to solve the reasoning task, and read “probable” in the sense of “credible”, as opposed to mathematical probability. In other heuristics-and-biases studies, we have been warned against the sin called “belief perseverance” – the stubborn inclination to go on believing something that is actually undermined by the evidence.

Surely you ought to pay attention to the evidence! But if subjects faced with the Linda problem do that, they are blamed for committing an error. That’s what I would call blind or uncritical naturalism in the use of psychological research by philosophers. There are plenty of ways to describe the reasoning behavior of subjects on which it comes out as quite rational or reasonable. Note 1: I am not saying that there are no errors. I am saying that there is no unified theory. Note 2: You may say I have just talked about the Linda problem at length while claiming that people should stop doing so. But perhaps you can now see why, or in what way, we should not talk about the Linda problem: We should no longer blindly cite it, or the numerous similar studies about other reasoning norms, as good empirical evidence for the claim that people are bad reasoners. In a sense, that is what naturalistic philosophers like Stich and others have done. This should end.
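For reference, the norm at issue in the Linda case is the conjunction rule, a theorem of probability theory: for any propositions $A$ and $B$,

$$\Pr(A \wedge B) \le \Pr(A)$$

so “Linda is a bank teller and a feminist” can never be more probable, in the mathematical sense, than “Linda is a bank teller”. The dispute above is over whether subjects’ answers should be measured against this sense of “probable” at all.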

The same goes for all other findings or “findings”. It is not the job of the philosopher simply to report what psychologists have found out, and to claim that we can derive a naturalistic account of rationality from that. We should rather help people to think things through for themselves!

AP: We have learnt quite a lot over the past decades from the science of rationality. But as a philosopher of science, what do you think that scientific research on reasoning can tell us about the rationality of science?

TS: That’s a highly interesting question, about which there isn’t much work so far. My current thinking here is the following. While a lot of psychological research has been devoted to the reasoning of both laypersons and experts, including scientific experts, the contributions that could help us to explain, or even improve, the rationality of science are close to zero. That is, I think, true for both the heuristics-and-biases and the fast-and-frugal heuristics approaches. Consider the former first. Here, it is assumed that we know what the standards of good or valid reasoning are. Otherwise the experiments could not get off the ground in the first place. So, Kahneman, Tversky, and their collaborators did not care or dare to ask how psychological research might be useful for understanding the rationality of science. In philosophy, some have tried to use heuristics and biases for this, attempting to support naturalism in philosophy of science.

How did this work out? For instance, Miriam Solomon has used the concepts of belief perseverance, representativeness, or availability in order to explain the normative successes, the rationality, of choices made by scientists in the geological revolution of the 20th century. This is virtually the opposite of what the heuristics-and-biases program does! Also, Solomon simply applied the concept, say, of belief perseverance to the few geologists who trusted in continental drift long before sufficient evidence was in. As so often with applications of heuristics and biases, people apply the terminology without having a clear normative standard against which to measure subjects’ behavior. But we should recognize that sometimes there simply are no such norms, and science at its research frontiers is a good example of this. We should consequently not use the language of heuristics and biases, or at least not pretend that we could thereby explain the rationality of theory change in science.

On the other side, adherents of bounded rationality have not studied much which heuristics scientists use, or could use. Bill Wimsatt’s work is an exception, and Peter Todd’s upcoming study on Darwin’s notebooks may be one too. The main reason for the difficulty of applying fast-and-frugal heuristics is that such heuristics are often based either on long experience or on a fit between the environment and our modes of thinking. Science, however, constantly tries to innovate. Understanding how scientific innovation is possible, or can be understood rationally, is something none of the existing theories of rationality, descriptive or normative, can provide. To my mind, achieving progress here would require an approach that closely integrates philosophy of science with history of science, and both with theories of rationality.

AP: Are you currently doing any research on human rationality?

TS: Yes, and lots! My projects fall into various related areas. The last topic we just discussed is one of them. So far, I have only one paper in the pipeline, about the (im)possibility of understanding scientific innovation rationally. Maybe more will come. I like to go where the arguments carry me. Then, in our HPS group here in Barcelona, we are currently discussing the history and philosophy of psychological theories and debates, trying especially to understand what form of naturalism in epistemology or the philosophy of science might be developed from them. How have normative and descriptive theories been developed at the interfaces of philosophy and science? How have naturalists tried to adapt, time and again, to new empirical knowledge about human reasoning, and what contributions have philosophers actually made? And could there really be a science of rationality?

Then, I aim to connect current debates over rationality to the history of the concepts of reason and rationality as they have been developed and revised, time and again, in the philosophical tradition. Immanuel Kant’s understanding of reason is particularly central for me here, partly due to my other research in the history of philosophy, but also because he is just such a rich thinker in this area. Philosophers, with very few exceptions, do not think about Kant in the light of the different conceptions of rationality now on the market. I think this might be heuristically useful, though one must be careful to avoid reading him too anachronistically. I once stumbled upon an article by C. W. Churchman – nowadays mostly forgotten, but he was influential as an editor of the journal Philosophy of Science, and as an operations researcher. In 1971, Churchman asked “Was Kant a decision theorist?” He translated several of Kant’s objections to teleological ethical theories into assumptions of 20th-century rational choice theory.

This is intriguing, because it makes Kant’s rejection of consequentialism in ethics look more reasonable. Another example stems from attempts to read his philosophy of history in game-theoretic terms. Such readings seem to put Kant into the camp of 20th-century FOAM theories, and perhaps reinforce the stereotype of him as the “man of the clock”, the man who would pedantically stick to rules in all areas of life. But Kant says that we cannot and should not apply rules of reason mechanically. It takes another cognitive faculty (which he gives different names, such as “seasoned judgment” or “mother wit”) to apply rules intelligently. That is part of what he means when he says that reason has to constantly criticize itself, to study its own foundations and limits. This continues to be very useful advice.

Tuesday 25 April 2017

Strategic Thinking, Theory of Mind, and Autism


My name is Peter Pantelis. I study “theory of mind”—our ability to reason about other people’s mental states. Years ago, I became interested in an economic game called the Beauty Contest, because I think it taps into theory of mind very elegantly:

You are going to play a game (against 250 undergraduate psychology students). Each player will submit a whole number from 0 to 100. The winner will be the player whose number is closest to 2/3 of the mean number selected by all the players.

What number do you submit?

(I’ll wait for you to think about it for a moment)

What number should you submit, and why? Game theory says the rational strategy is for you to say 0—and so should everyone else. If every player expects the others to shade their guesses toward 2/3 of the mean, iterating that best response drives the only stable answer down to 0. That’s what economists call the Nash equilibrium.[1] But in practice, virtually nobody submits the “rational” choice of 0. The average number selected is usually something like 25-35.

People also give a wide variety of responses, and interpreting this (non-normative) pattern is where the behavioral economists come in. Nagel (1995) and Camerer and Fehr (2006) have modeled this task in ways that both fit the empirical data well and capture the various intuitions people bring to this game. Their papers are some of my favorites, and I summarize their approaches in our recently published Cognition paper.

They posit that players bring different levels of strategic sophistication to this game. Although many players answer completely randomly or arbitrarily, for most players a critical aspect of deciding which number to select is making a sensible prediction of the numbers the other players are likely to select. And to do so sensibly, you must estimate just how sophisticated your opponents are, and possibly what beliefs they in turn hold about you. The sketch below illustrates the idea.
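Here is a minimal sketch of that “level-k” picture, in the spirit of Nagel’s model rather than a reproduction of it; the uniform level-0 assumption, the sample size, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def level_k_guess(k, n_samples=10_000):
    """Expected submission of a hypothetical level-k player.

    Level 0 picks uniformly at random on [0, 100]; a level-k player
    best-responds to a population of level-(k-1) players by submitting
    2/3 of that population's expected mean.
    """
    if k == 0:
        return rng.uniform(0, 100, n_samples).mean()  # about 50
    return (2 / 3) * level_k_guess(k - 1, n_samples)

for k in range(5):
    print(f"level {k}: guess near {level_k_guess(k):.1f}")
# level 0: ~50, level 1: ~33, level 2: ~22, ... -> 0 in the limit.
```

On this toy picture, empirical averages of 25-35 look like a population doing roughly one to two steps of strategic reasoning: not zero, and not the infinitely many steps the Nash equilibrium requires.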

If that doesn’t engage your theory of mind ability, I don’t know what does.

Thursday 20 April 2017

Memories: Distorted, Reconstructed, Experiential and Shared


PERFECT 2017 Memory Workshop




We are very excited that on 5th May 2017 Project PERFECT will be holding its second annual workshop, at Jesus College, Cambridge. The workshop will feature leading experts in the field of philosophy of memory. The talks will focus on a wide range of fascinating issues that dominate contemporary research on memory. The talks will be of interest to philosophers of mind, philosophers of psychology, epistemologists and psychologists, as well as other cognitive scientists interested in how we remember the past.



Issues to be covered in the talks include how memory can generate knowledge; how false and distorted memories can be useful features of ordinary cognition; the nature of experiential memories; whether we can be immune from error due to misidentifying ourselves in a memory; and the role of shared memories in relationships.

Many of the talks will have an interdisciplinary angle, highlighting how recent psychological research—e.g. on false and distorted memory, and dementia and grief—should impact on our understanding of human memory.

Two of the talks will focus directly on a concept at the very heart of Project PERFECT: i.e. epistemic innocence. This is the idea that some false and misleading cognitions bring epistemic benefits that could not be possessed in the absence of the cognitions.

Kirk Michaelian will examine the claim that memory can generate new knowledge. He will explore two views that are consistent with this claim, arguing that the views, when combined, support the claim that episodic memories (our memories of individual incidents) are misleading but in a way that makes them epistemically innocent.

On a similar theme, I will present work written in collaboration with Lisa Bortolotti showing that three memory distortions famously studied in the psychological literature can be explained in terms of the presence of cognitive mechanisms that are epistemically innocent.

Dorothea Debus will explore the nature of memories with experiential qualities. She will argue that we give this type of memory special weight, and she will illustrate how we are both passive and active with respect to these memories. We are active because we can prompt ourselves and others to remember events. We are passive because the memories often just come to us.

Jordi Fernández will examine the claim that one cannot have an inaccurate memory as a result of misidentifying oneself in the memory. He will consider how psychological research on observer memories (when people seem to recall a scene in which they featured from the perspective of an observer) and disowned memory might be taken to challenge the claim. Then he will respond to the challenge by drawing on the same psychological research to offer a positive view in support of the target claim.

John Sutton will focus on the ways people share memories, ways that are reflected in, and can come to constitute, specific close relationships. He will focus on both ongoing relationships and the end of relationships. He will draw on psychological studies on the role of memory in dementia and grief.

For more information about the workshop see here.

Tuesday 18 April 2017

Bounded Rationality Meets Situated and Embodied Cognition



This post is by Enrico Petracca (University of Bologna), who recently published a paper entitled ‘A cognition paradigm clash: Simon, situated cognition and the interpretation of bounded rationality’ in the Journal of Economic Methodology. Enrico is involved in a project called ‘embodied rationality’, pursued with his colleague Antonio Mastrogiorgio (University of Chieti-Pescara). The project aims to integrate the notion of embodied cognition within the framework of bounded rationality.

Bounded rationality has been a hard-to-digest notion in economics and the other social sciences since its introduction by Herbert A. Simon in the middle of the last century. How could ‘rationality’ be ‘bounded’? And – as a typically related concern – would this imply that the social sciences should abandon any normative horizon, giving way to an unappealable ‘irrationality’?

Thursday 13 April 2017

Surfing Uncertainty

In this post, Andy Clark, Professor of Logic and Metaphysics at the University of Edinburgh, introduces his new book: Surfing Uncertainty: Prediction, Action, and the Embodied Mind.


Sometimes, we are most forcibly struck by what isn’t there. If I play you a series of regularly spaced tones, then omit a tone, your perceptual world takes on a deeply puzzling shape. It is a world marked by an absence – and not just any old absence. What you experience is a very specific absence: the absence of that very tone, at that very moment. What kind of neural and (more generally) mental machinery makes this possible?


There is an answer that has emerged many times during the history of the sciences of the mind. That answer, appearing recently in what is arguably its most comprehensive and persuasive form to date, depicts brains as prediction machines – complex multi-level systems forever trying pre-emptively to guess at the flow of information washing across their many sensory surfaces. 

According to this emerging class of models, biological brains are constantly active, trying to predict the streams of sensory stimulation before they arrive. Systems like that are most strongly impacted by sensed deviations from their predicted states. It is these deviations from predicted states (‘prediction errors’) that here bear much of the explanatory and information-processing burden, informing us of what is salient and newsworthy in the current sensory array. When you walk back into your office and see that steaming coffee-cup on the desk in front of you, your perceptual experience (the theory claims) reflects the multi-level neural guess that best reduces prediction errors. To visually perceive the scene, your brain attempts to predict the scene, allowing the ensuing error (mismatch) signals to refine its guessing until a kind of equilibrium is achieved.
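As a toy illustration of this error-driven guessing (my sketch, not a model from the book; the signal value, noise level, and learning rate are invented): a single scalar “brain” refines its prediction of a sensory signal by repeatedly nudging it against the prediction error.

```python
import numpy as np

rng = np.random.default_rng(42)

true_signal = 0.7     # the state of the world, e.g. the cup's brightness
guess = 0.0           # the system's current prediction
learning_rate = 0.1   # how strongly each error revises the guess

for step in range(100):
    sample = true_signal + rng.normal(0, 0.05)  # noisy sensory input
    prediction_error = sample - guess           # the mismatch signal
    guess += learning_rate * prediction_error   # refine the prediction

print(f"final prediction: {guess:.2f} (true signal: {true_signal})")
# The guess settles near 0.7: equilibrium is reached when the
# predictions leave only unpredictable noise as residual error.
```

Multi-level versions of this loop, with each layer trying to predict the activity of the layer below, are what the models described above posit.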

Perception here phases seamlessly into understanding. What we see is constantly informed by what we know and what we were thus already busy (both consciously and non-consciously) expecting. Perception and imagination likewise emerge as tightly linked, since to perceive the world is to deploy multi-level neural machinery capable of generating a kind of ‘virtual version’ of the sensory signal for itself, using what the system knows about the world. Indeed, so strong is the tie that perception itself becomes a matter of what some theorists have called ‘controlled hallucination’.

Tuesday 11 April 2017

Helpful Rationality Assessments




Hello, readers! I’m Patricia Rich, and I’m currently a philosophy postdoc on the new Knowledge and Decision project at the University of Hamburg. This post is about a paper stemming from my dissertation, entitled Axiomatic and Ecological Rationality: Choosing Costs and Benefits. It appeared in the Autumn issue of the Erasmus Journal for Philosophy and Economics.

My paper defends a specific method of evaluating rationality. The method is general and can be applied to choices, inferences, probabilistic estimates, argumentation, etc., but I’ll explain it here through one example. Suppose I’m worried about my friend Alex’s beliefs regarding current affairs. Her claims often seem far-fetched and poorly supported by evidence. As rationality experts who want to help, how should we evaluate Alex?

I embrace several components of the “ecological rationality” research program, which many readers will know from other posts. First, it’s important to move beyond particular beliefs and evaluate strategies that Alex uses or could use (“teach a man to fish …”). The starting point should be her present strategy, which I’ll call FB: Alex skims her Facebook feed and forms many beliefs primarily on the basis of news headlines there.

We probably all think FB is a terrible strategy, but we shouldn’t just tell Alex that and call it a day; it’s important to compare FB to some of her alternatives. For example, Alex could skim both her own feed and her (more conservative) brother’s, or she could watch TV news. Making concrete comparisons, as in the sketch below, focuses our attention on what improvements are possible for Alex and doesn’t require us to posit – and identify – a sharp boundary between “rational” and “irrational” strategies.
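To make the comparative method concrete, here is a schematic sketch with entirely invented numbers (the strategy names follow the example above; the accuracy and time-cost figures, and the value and cost weights, are hypothetical):

```python
# Hypothetical profiles for Alex's available belief-forming strategies.
strategies = {
    "FB (own feed only)":  {"accuracy": 0.55, "hours_per_week": 1.0},
    "FB + brother's feed": {"accuracy": 0.62, "hours_per_week": 2.0},
    "TV news":             {"accuracy": 0.60, "hours_per_week": 3.5},
}

def net_benefit(profile, value_of_accuracy=10.0, cost_per_hour=1.0):
    """Score a strategy by its benefits minus its costs, rather than
    asking whether it clears an absolute bar of 'rationality'."""
    return (value_of_accuracy * profile["accuracy"]
            - cost_per_hour * profile["hours_per_week"])

for name, profile in strategies.items():
    print(f"{name}: net benefit {net_benefit(profile):.2f}")
```

The point is the shape of the evaluation, not the numbers: the question is which feasible alternative improves on FB for Alex, not whether FB is irrational tout court.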

Thursday 6 April 2017

Bias in Context Sheffield 2017

In this post Robin Scaife reports from the conference Bias in Context.

On the 25th and 26th of January 2017 the University of Sheffield hosted the 3rd in a series of 4 conferences on Bias in Context. This workshop was supported by the Leverhulme Trust as part of a research project grant on bias and blame. The previous two conferences in the series had focused on how to understand the relationship between psychological and structural explanations. This time the theme was Interpersonal Interventions and Collective Action. The goal was to look beyond individualistic approaches to changing biases and to examine how interpersonal interactions and collective action can be used to combat bias. Experts came from both philosophy and psychology, and many of those attending also had practical experience of leading diversity training sessions.





The conference began with Dr Evelyn Carter (UCLA) giving a talk about her ongoing research into applying theories of motivation to confronting bias. She argued that it is crucial that we always confront bias, because speaking up sets norms. Across a number of studies her research team have found that feedback drawing attention to and condemning bias makes people more favourable towards anti-prejudice norms. Their research indicates that both high- and low-confrontation feedback can be effective in promoting change.


After lunch Dr Gabriella Beckles-Raymond (Canterbury) talked about developing an ethics of social transformation. She argued that we are more aware of our biases, and in particular of what causes them, than is typically assumed. We cannot use the ‘implicit’ label as an excuse that we cannot act, or our society as an excuse that we are powerless. Instead we must move away from a view that sees bias as a problem for individuals, to be solved by individuals, and use the ethics of empathy to address the deeper social, societal and structural problems.


Then Dr Robin Scaife (Sheffield) presented the findings from a series of experiments examining the effects of administering in-person blame for implicit bias. The results indicate that, contra common assumptions about blame increasing bias or making people resistant to change, communicating disapprobation for the manifestation of implicit bias has potential benefits and no costs. Those who had been blamed showed similar or slightly reduced levels of implicit bias compared to those who had not, and significantly stronger explicit intentions to change their future behaviour.



This was followed by Dr Rosa Terlazzo (Kansas State), who discussed the idea that victims have a duty to other victims to resist their oppression. She argued that if this duty is to end the harm caused by oppressive norms, then it is beyond their power; but if the duty is merely not to contribute to the harms, then it will do little to limit oppression. Terlazzo argued that we should instead understand victims to have a duty to act as counter-stereotypic individuals, in order to weaken the self-regarding biases experienced by other victims and thereby mitigate, though not end, the harms of oppressive norms.
The first day of the conference ended with drinks and dinner, which provided a great opportunity for all participants to discuss and share their perspectives on bias.







The second day of the conference began with Dr Yannig Luthra (UCLA) on social prejudice, in work co-authored with Dr Cristina Borgoni (Graz). He presented several arguments for the claim that an individual can count as violating norms of epistemic and practical rationality directly in virtue of drawing on epistemic and practical problems in her social context. The central idea is that rational life is social in much the same way that it is temporal. Your view can be an extension of the views of others in the same way that it can be an extension of your own past perspective. In both cases one can be implicated for importing rational failings. However, the diagnosis of the wrong must ultimately lie with the social sources of the individual’s bias.


Then Dr Joseph Kisolo-Ssonko (Nottingham) talked about collective intentionality, bias, and constituting a ‘we’. He argued that our capacity to think of ourselves as a ‘we’ is not the voluntary choice it is often presented as being. Instead it is underwritten by normatively loaded social and structural biases and power structures. Because of this, he concluded that biases do not just cause us to act irrationally on a pre-existing social stage. Rather, they also ground what counts for us as collectively rational.


Professor Sally Haslanger (MIT) gave the final talk of the conference, titled ‘If racism is the answer, what is the question?’. She claimed that racism is best understood as a homeostatic system, constituted by the systematic looping of schemas and resources. Practices distribute things of value and disvalue, but in turn we learn what different races “deserve” by looking around us at the results of these practices. Haslanger argued that to end racism we have to stop this systematic looping by dismantling society as we know it, and that in achieving this end, changing attitudes should not be the highest priority, because other methods of intervention are likely to be more efficacious.

The conference concluded with Dr Jules Holroyd (Sheffield) and Dr Erin Beeghly (Utah) chairing a round-table discussion. Much of the discussion focused on how to resist and combat the way that recent election results in both the USA and the UK have been perceived as legitimising prejudice. Lacey Davidson (Purdue) made the exciting announcement that she has been awarded a Global Synergy Grant to transform Jenny Saul’s bias project website (www.biasproject.org) into an ongoing bias web resource. There were lots of promising suggestions for features which could make up part of this resource. Keep an eye out for developments on that front.






The fourth and final conference in the Bias in Context series will take place on the 12th and 13th of October at the University of Utah. The full program, details, and call for abstracts for the poster session will soon be available at www.biasincontext4.weebly.com.

Tuesday 4 April 2017

The Problem of Debiasing




Vasco Correia (pictured above) is currently a Research Fellow at the Nova Institute of Philosophy (Universidade Nova de Lisboa), where he is developing a project on cognitive biases in argumentation and decision-making. In this post, he summarises a paper he recently published in Topoi.

This paper is an attempt to show that there are reasons to remain optimistic—albeit cautiously—regarding our ability to counteract cognitive biases. Although most authors agree that biases should be mitigated, there is controversy about which debiasing methods are the most effective. Until recently, the notion that critical thinking is effective in preventing biases appealed to many philosophers and argumentation theorists. It was assumed that raising awareness of biases and teaching critical thinking to students would suffice to enhance open-mindedness and impartiality. Yet the benefits of such programs are difficult to demonstrate empirically, and some authors now claim that critical thinking is by and large ineffective against biases.

Monday 3 April 2017

What is Unrealistic Optimism?

This post is the final one in our series summarizing the contributions to the special issue on unrealistic optimism, ‘Unrealistic Optimism - Its Nature, Causes and Effects’. The paper by Anneli Jefferson, Lisa Bortolotti and Bojana Kuzmanovic looks at the nature of unrealistically optimistic cognitions and the extent to which they are irrational.

Anneli Jefferson

We know that people have a tendency to expect that their future will be better than that of others, or better than seems likely on an objective measure of probability. But are they really expressing a belief that the future will be good, or should we see these expressions of optimism as hopes, or possibly even just as expressions of desires for the future? Maybe when I say ‘My marriage has an 85% likelihood of lasting till death do us part’, what I am actually saying is ‘I really, really want my marriage to last.’ If what is expressed is a desire rather than a belief, we do not need to worry that we are systematically mistaken in our beliefs about the future, or that our expectations for our future are insufficiently sensitive to the evidence we have about what is likely to happen. In the paper, we argue that expressions of unrealistic optimism are indeed what they seem to be on the surface: beliefs about what is likely to occur. The fact that optimistic expectations are frequently not well supported by the evidence is a feature they share with many other beliefs, as we humans are not ideally rational in our belief formation.

Lisa Bortolotti

By definition, unrealistic optimism is a phenomenon that shows us to be insufficiently in touch with reality. However, establishing that we are in fact making an error when assessing the likelihood of future outcomes is surprisingly difficult. In some cases, whether an expectation is correct can only be established post factum. Only at the end of Euro 2016 could we say that Ronaldo’s belief that Portugal would win the European cup had been correct (if indeed he had this belief). Things are more complicated if what we know is that Ronaldo believed that Portugal had a 95% likelihood of winning the European cup. Is this belief validated by the fact that Portugal did win? Not necessarily, as his likelihood estimate may still have been too high given some objective measure of likelihood. Furthermore, it cannot be the case that probabilistic risk estimates are proven or disproven by later outcomes: otherwise, any risk estimate that is not either 0 or 1 would automatically be incorrect, and it would be impossible to say, before the actual outcome ensues, whether the error lay in being too optimistic or too pessimistic.
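A quick simulation makes this point vivid (all numbers are invented for illustration): a single outcome is compatible with almost any probability estimate, and only long-run frequencies can reveal systematic optimism.

```python
import numpy as np

rng = np.random.default_rng(1)

stated = 0.95      # the forecaster always announces "95% likely"
true_rate = 0.70   # but such events actually occur 70% of the time

outcomes = rng.random(10_000) < true_rate  # simulate many such events
print(f"stated: {stated}, observed frequency: {outcomes.mean():.2f}")
# Any single success (Portugal did win) fits both numbers; only the
# long-run record shows that the 0.95 estimates were too optimistic.
```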


Bojana Kuzmanovic

But the question of whether an individual’s optimistic beliefs are false is in many ways less pressing than the question of whether the individual is justified in holding those beliefs given the evidence available to them. Are unrealistically optimistic beliefs epistemically irrational because they do not take into account available evidence, either when the individual forms the belief or when they maintain it?