Rational animals? The limits of human reason

In my previous post, I discussed misinformation in the context of non-human animals. However, I did not consider the psychological mechanisms operating ‘beneath the surface’ that enable the spread of misinformation and the adoption of inaccurate beliefs. Here, I question the limits of human reason, drawing attention to what can most aptly be described as psychological flaws.

The terms ‘reason’ and ‘rationality’ are often used interchangeably. According to a commonplace objectivist view, rationality consists in one’s beliefs or actions following from reason. However, the question ‘what is reason?’ remains. When I refer to reason (likewise, reasoning), my focus is on the mechanisms of thought and belief formation. Reasoning, in the simplest sense, refers to the process by which people generate and evaluate arguments and beliefs (Anderson, 1985; Holyoak & Spellman, 1993, in Westen, Burton, & Kowalski, 2006).

Similarly, Amoretti and Vassallo (2012) suggest that according to the ‘standard picture’, rationality consists in reasoning in accordance with normative principles (p.10). In addition to highlighting the relationship between rationality and reason, the mention of ‘normative principles’ draws attention to rules for reasoning that have been derived from logic, probability theory, and decision theory (Stein, 1996, in Samuels, Stich, & Faucher, 2004, p.2).

My concern here is how well we are able to follow these normative principles. In other words, my concern is with what Samuels, Stich, and Faucher call the ‘Descriptive Project’ – understanding how we actually go about reasoning (Stein, 1996, in Samuels, Stich, & Faucher, 2004, p.2), and more specifically, understanding the limits of our reasoning faculties.

While humans are ordinarily thought to be good reasoners, in recent years psychologists have highlighted various ways in which our reasoning processes are susceptible to error (Richardson, 2018). The fallibility of human reason was brought to attention in the late 20th century, when psychologists noticed the extent to which we rely on heuristics – cognitive shortcuts that allow us to make rapid decisions with minimal cognitive load. Psychologists Daniel Kahneman and Amos Tversky, among others, observed that reliance on these shortcuts gives rise to a number of biases that influence our reasoning processes and undermine the conception of humans as ideal rational agents. For example, we have a tendency to pay attention to information that confirms our existing beliefs, known as ‘confirmation bias’ (Mercier, 2011, p.136).

If our reasoning processes are potentially corrupted, then perhaps our moral reasoning, and what follows from it (i.e. moral judgement, action, and so forth), might also be led astray. Moral reasoning, in the simplest sense, is reasoning about what one ought to do in any situation involving moral agents. One might suggest that moral reasoning is simply a matter of applying ethical theories to deliberation about how one ought to act, forming a deductive argument. Or, it could be argued that moral reasoning involves adjusting one’s beliefs, in light of goals, to arrive at what Rawls calls a ‘reflective equilibrium’ (Harman, Sinnott-Armstrong, & Mason, 2010, pp.212-214, 237). However, as I alluded to above, moral reasoning is not undertaken by ideal rational agents, and as such, it is a complicated matter.

In the paper What Good Is Moral Reasoning? (2011), psychologist Hugo Mercier questions the role that individual reasoning plays in our moral lives. Mercier highlights that a number of theories of moral reasoning broadly fit a dual-process framework, which, building on Kahneman’s account, divides cognitive processes into two distinct categories, referred to as System 1 and System 2. The former usually refers to automatic, unconscious processes that rely on heuristics, whereas the latter processes are slow, effortful, and conscious (Mercier, 2011, p.133).

Although our System 1 processes are often reliable, they are fallible and can lead to irrational judgements. However, the division of cognitive processes into System 1 and System 2 appears to grant a way for moral reasoning to take place free from corruption. Put another way, even if we rely on mechanisms with inherent biases to make quick, non-reflective judgements, moral reasoning is a reflective endeavour – intentional, controllable, and based on rules. As such, one could potentially undertake moral reasoning and make moral judgements without being swayed by unconscious biases.

Mercier highlights that System 1 processes play the dominant role in making moral judgements and in decision making. In contrast, System 2 processes are primarily used as a justificatory mechanism, that is, for making post-hoc rationalisations to justify our own beliefs and actions. In addition, Mercier notes that when people use reason to examine their own moral beliefs and judgements, they are likely to find only arguments that support their pre-existing positions. This is consistent with confirmation bias, the ‘seeking or interpreting of evidence in ways that are partial to existing beliefs, expectations, or a hypothesis in hand’ (Nickerson, 1998, p.175, in Mercier, 2011, p.136).

Similarly, in the paper Reasons probably won’t change your mind: The role of reasons in revising moral decisions (2018), Matthew Stanley and colleagues postulate that moral decisions are driven by affective responses rather than deliberate reasoning. Their study found that after assessing moral dilemmas and giving reasons for their decisions, participants were unlikely to change their original decisions, even when later presented with opposing reasons. They attribute this resistance to change to the shortcomings in reasoning processes discussed above. This perspective is not without criticism, however, and I will consider some objections shortly.

It becomes clear that the two systems within a dual-process framework do not operate independently; System 1 processes influence System 2 processes. As such, reflective moral reasoning (System 2) is just as fallible as unconscious processing (System 1). A problem therefore arises. If moral reasoning is meant to give guidance on matters that can have a great impact on the lives of others, a mechanism that mostly supports our previous beliefs is surely not sufficient for this task (Mercier, 2011, p.137). Further, if our reasoning is fallible, it follows that we are ill-equipped to evaluate arguments and make accurate judgements, which in turn creates an environment ripe for the spread of misinformation and the formation of false beliefs.

More recently, however, others have questioned whether the results of these studies hold outside the laboratory, and whether such a pessimistic account of human rationality is justified. Paxton and Greene (2010), while conceding that emotion-based intuition plays a formidable role in our beliefs and moral judgements, argue that moral reasoning in the ‘real world’ can still be effective. They suggest that moral reasoning between individuals allows for the transmission of moral principles that may be used to override intuitions and influence behaviour (Paxton & Greene, 2010, p.525).

Hugo Mercier, in his latest book, Not Born Yesterday: The Science of Who We Trust and What We Believe (2020), also defends human reason. Drawing on the latest findings from experimental psychology, Mercier suggests that our cognitive shortcomings are faults in an otherwise well-functioning system.

Similarly, in the paper Is Conspiracy Theorising Irrational? (2019) and his forthcoming book Bad Beliefs: Why They Happen to Good People (In Press), Neil Levy proposes that humans are ‘subjectively rational’ – reasoning, by their lights, in the best possible way. That is to say, a pessimistic view of human reason is not justified. What might be deemed objectively irrational, or poor reasoning, can be explained as a rational response to environmental factors (Levy, 2019; Levy, In Press).

That said, these approaches that question a pessimistic view of human reason do not provide a direct objection to the idea that our reasoning is fallible. Paxton and Greene suggest that reflective reasoning can play a role in moral deliberation, so if anything, they are only questioning the degree to which psychological flaws impact our reasoning, rather than objecting to their having any impact at all.

Mercier, for his part, posits that while some heuristics (which he calls ‘mechanisms of open vigilance’) can misguide us, they have developed for good reasons and, overall, function well. This does not serve as a direct challenge to the existence of flaws in our reasoning abilities; instead, it shifts judgement from a pessimistic point of view to an optimistic one.

Likewise, Levy does not object directly to the idea that our reasoning faculties are fallible. Instead, he proposes that a shift in perspective, beyond the individual, is required to understand why we hold bad beliefs. Levy’s departure from the individual does raise a good point: that our beliefs, our individual beliefs, are not that individual after all. Rather, our beliefs are deeply social, influenced by a wide range of external factors.

The picture that I have sketched here, with a few exceptions, considers belief through the lens of the individual. That is, reasoning is understood from an individual agent’s perspective, calling attention to psychological shortcomings taking place at an individual level. However, psychological explanations alone cannot account for the prominence of false beliefs. Rather, we must look beyond the individual to understand how our beliefs take shape. The social nature of our beliefs creates an environment in which misinformation can spread easily, and be immensely damaging.

*This blog post has been adapted from my Master of Research thesis, ‘How Misinformation Reinforces the Status of Animals as Food’ (2021). Thank you to my supervisors Jane Johnson and Mark Alfano.

REFERENCES

Amoretti, M.C., & Vassallo, N. (2012). “The Life According to Reason is Best and Pleasantest”, in M.C. Amoretti & N. Vassallo (eds.), Reason and Rationality. Piscataway, NJ: Transaction Books.

Asch, S. E. (1951). Effects of group pressure upon the modification and distortion of judgments. In H. Guetzkow (ed.), Groups, Leadership, and Men. Pittsburgh, PA: Carnegie Press.

Harman, G., Sinnott-Armstrong, W., & Mason, K. (2010). “Moral Reasoning”, in J. Doris (ed.), The Moral Psychology Handbook. Oxford, UK: Oxford University Press.

Kahneman, D. (2011). Thinking, Fast and Slow. New York, NY: Penguin Press.

Levy, N. (2019). Is Conspiracy Theorising Irrational? Social Epistemology Review and Reply Collective, 8(10): 65-76.

Levy, N. (In Press). Bad Beliefs: Why They Happen to Good People. Oxford, UK: Oxford University Press.

Mercier, H. (2011). What Good Is Moral Reasoning? Mind & Society, 10: 131-148.

Mercier, H. (2020). Not Born Yesterday: The Science of Who We Trust and What We Believe. Princeton, NJ: Princeton University Press.

Paxton, J.M., & Greene, J.D. (2010). Moral Reasoning: Hints and Allegations. Topics in Cognitive Science, 2: 511-527.

Richardson, H. S. (2018). “Moral Reasoning”, The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.). Retrieved April 3, 2021, from https://plato.stanford.edu/archives/fall2018/entries/reasoning-moral/

Sankey, H. (2013). On Reason and Rationality. Metascience, 22: 677–679.

Samuels, R., Stich, S., & Faucher, L. (2004). “Reason and Rationality”, in I. Niiniluoto, M. Sintonen, & J. Woleński (eds.), Handbook of Epistemology. Dordrecht, Netherlands: Springer.

Stanley, M. L., Dougherty, A. M., Yang, B. W., Henne, P., & De Brigard, F. (2018). Reasons probably won’t change your mind: The role of reasons in revising moral decisions. Journal of Experimental Psychology: General, 147(7): 962–987.

Westen, D., Burton, L., & Kowalski, R. (2006). Psychology: Australian and New Zealand Edition. Milton, Australia: John Wiley & Sons Australia Ltd.