
Critical Thinking

Published on Apr 02, 2019

The value of critical thinking is undisputed, especially in times of fake news, filter bubbles and manipulative social media campaigns. Yet there is a lot of discussion about how critical thinking can be taught (e.g. see Willingham 2007), or whether it can be taught at all (e.g. Mehta et al. 2014). Our approach is to invite students to look at critical thinking through an analytical lens.

We start by introducing them to the concept of cognitive bias. We explain that cognitive bias is not a deficit in itself, but that every cognitive bias is there for a reason: most cognitive biases arise from the need to save our brains time or energy. The brilliant classification and visualisation of 175 biases by Buster Benson serves as the organisational backbone for the lecture. We describe each of the categories using a couple of examples, and explain the science behind some of these examples. We close this section with a brief discussion of how it is impossible to eliminate cognitive bias, as it exists for a reason, but how it is part of critical thinking to »slow down« debate in order to properly deal with this fact.

Following cognitive bias, we discuss algorithmic bias (Nissenbaum 2001). This topic forms a bridge back to computational thinking, and we show how honest, straightforward code can introduce bias. We explain that the problem originates when we attribute objectivity to algorithms and data, as if computers could come to objective decisions. Analogous to cognitive bias, we declare that every algorithmic bias is there for a reason, and show some of the reasons behind the systematic and repeatable errors that biased algorithms and/or data produce.
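A minimal sketch of this point, using a hypothetical example of our own rather than one from the lecture: a short, »honest« loan-scoring function that uses uninterrupted employment as a proxy for reliability, and thereby systematically penalises applicants who took parental leave or other career breaks.

```python
# Hypothetical loan-scoring rule (illustrative sketch, not a real scoring system).
# The code is simple and transparent, yet the chosen proxy encodes a bias.

def score_applicant(years_employed: float, career_breaks: int) -> float:
    """Return a score between 0 and 1; higher means more creditworthy."""
    score = min(years_employed / 10, 1.0)   # reward long employment
    score -= 0.2 * career_breaks            # penalise every gap in employment
    return max(score, 0.0)

# Two equally reliable applicants; one took two years of parental leave.
print(round(score_applicant(years_employed=8, career_breaks=0), 2))  # 0.8
print(round(score_applicant(years_employed=8, career_breaks=2), 2))  # 0.4 -- systematically lower
```

The bias here is not a bug in the code but a consequence of the chosen proxy, which is exactly the kind of reason we want students to look for.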

We take a closer look at three sources of algorithmic bias. The first category is bias in systems that learn from the world, exemplified by a couple of Google search cases such as three white teens/three black teens and professional/unprofessional hairstyles, as well as the well-documented possibility of search result manipulation in 2016. The second category is bias in datasets, such as the ImageNet controversy of 2019, discussed extensively in Excavating AI. As a third category of bias we talk about decision-making systems, using the COMPAS recidivism system case study as an example. This brings us to the question of whether there are things that cannot be decided using an algorithm, as discussed in How to recognize AI snake oil.
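The first two categories can be made tangible with a toy example (again our own sketch, not one of the cases above): a trivial »model« that merely counts co-occurrences in a small, skewed corpus faithfully reproduces the skew of the data it learned from.

```python
from collections import Counter

# A tiny, deliberately skewed corpus standing in for "the world" a system learns from.
corpus = [
    "she is a nurse", "she is a nurse", "he is a nurse",
    "he is an engineer", "he is an engineer", "he is an engineer", "she is an engineer",
]

# A trivial "model": which pronoun co-occurs with each profession?
counts = {"nurse": Counter(), "engineer": Counter()}
for sentence in corpus:
    words = sentence.split()
    for profession in counts:
        if profession in words:
            counts[profession][words[0]] += 1

print(counts["nurse"])     # Counter({'she': 2, 'he': 1})
print(counts["engineer"])  # Counter({'he': 3, 'she': 1})
# Any downstream decision built on these counts inherits the skew of the training data.
```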

We close this section by showing the consequences of machine bias and how machine bias can be weaponized, and we introduce the term mathwashing to exemplify how bias is not always an accident. We point out that algorithms are often black boxes, kept secret, protected by NDAs, and shielded by obfuscation and technicality, which makes it impossible to see how they work or to find reasons for the decisions they make. This creates situations that are more akin to Kafka's The Trial than Orwell's oft-cited 1984. Consequently, we touch upon the subject of explainable AI as an exciting field of research.

We then turn to logical fallacies as the practical little sibling of cognitive bias. Based on some typical examples, we show the link between cognitive biases and logical fallacies, such as the »correlation implies causation« fallacy with its obvious connection to availability bias. Students learn that, in contrast to cognitive biases, logical fallacies can be identified and eliminated in every occurrence without loss to individuals or society.
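One possible coding exercise for this fallacy (our suggestion, not part of the published lecture) is to let students compute the correlation between two simulated quantities that are both driven by a confounder, here ice cream sales and drowning incidents that both depend on temperature.

```python
import random

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed by hand to stay dependency-free."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Both series depend on a confounder (temperature), not on each other.
temperature = [random.uniform(10, 35) for _ in range(200)]          # degrees Celsius
ice_cream   = [20 * t + random.gauss(0, 30) for t in temperature]   # daily sales
drownings   = [0.3 * t + random.gauss(0, 1) for t in temperature]   # daily incidents

print(round(pearson(ice_cream, drownings), 2))  # strong positive correlation
# Yet banning ice cream would not prevent a single drowning: correlation, not causation.
```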

As an exemplary topic we then delve into the issue of diversity in informatics. We experience the obvious lack of diversity in our own student population, where, to measure along just one metric, not even 20% of students are female. Social mobility in Austria is also particularly low compared to other countries in the EU. We discuss the reasons for and consequences of low diversity, and present research that points to the potentially harmful consequences of ignoring a lack of diversity.

The discussions that follow are among the most controversial in this course. Some students think that this topic comes from a political agenda hidden in the course and oppose it vigorously. Individual students even question the content of the whole course as ideological because we include this topic. Still, we think that an open and honest discussion creates more good will than ill will, and in the long run can only be beneficial.

Next: creative thinking

Calls for discussion

  • Where do you think we could improve this chapter? Are we missing essential bits?

  • We always appreciate ideas for exercises that help students comprehend core critical thinking concepts, with a focus on cognitive bias or algorithmic bias. More specifically, what kind of simple coding exercises could help students understand algorithmic bias?
