# Mathematical Thinking


Undergraduate informatics curricula often incorporate a sizeable portion of mathematics. Compared to high school, mathematics at the university level is less concerned with fluency in arithmetic. Instead, it requires students to understand the value of mathematics in problem solving and the mathematical way of looking at the world.

We start with a brief comparison of what was taught as mathematics in school - basically the craft of using arithmetic - and what mathematics is in an academic context. A brief excursion into the history of mathematics shows the struggle to find a foundation that is complete and free of contradictions, upended by Bertrand Russell's paradox and Gödel's incompleteness results.

Mathematics is then introduced as a tool used to communicate about things that can be expressed in its language. In this sense, mathematics is discussed as a language that has a giant advantage over every other language: it is unambiguous. The languages we use to talk to each other include lots of ambiguities, for example words with multiple meanings that change according to context, such as secure, sanction, or thing. In any spoken language, it is actually impossible to express things in a way that eliminates all ambiguities. In math, if it is possible to write down a problem so that it is complete and consistent, then there is no doubt about what it means. Thus, translating a problem into math is often referred to as »looking at what the problem actually says«.

This unique property of mathematics comes at a cost, as mathematics as a language is hard to read and even harder to write. When Keynes published his »General Theory of Employment, Interest and Money«, it took mathematicians fifteen years to capture the core of his theories in mathematical formulas. The ultimate benefit of formulating his theories and all necessary assumptions in math was to show that they could be formulated in a unique and consistent way, making visible what his theories »actually said« from a mathematical standpoint.

The deeper benefit that comes with using math as a language to express things is that only math can really prove things. While »evidence« and »proof« are often synonymous in everyday language, they express very different concepts. In science, we can disprove or confirm theories, but never prove them. In court, we use evidence to judge whether somebody is guilty or innocent, but we can never prove it, as for example the evidence could be compromised. Medicine, in the best case, is evidence-based, but it cannot be proof-based. (Note: in German, »Beweis« serves as the word for both evidence and proof, making it even more difficult to draw a clear line here.)

Only in math do we get to prove things. We can actually prove the Pythagorean theorem, Cantor's theorem, or Gödel's incompleteness theorem. So, using math in programming brings the same benefit: if we can write down an algorithm and its assumptions (e.g. its input) in math, we can prove that the algorithm will actually do what it was written to do. When we formulate the mathematical representation of a circuit, we can prove that it will work under the equally formalised assumptions. This explains, for example, why mathematics courses have a much higher share in the computer engineering curriculum than in other informatics studies.
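As a minimal sketch of what »proving that an algorithm does what it was written to do« looks like in practice, consider the classic loop-invariant argument. The function `sum_upto` below is a hypothetical example, not from the chapter; the invariant stated in the comment is exactly the kind of formalized assumption that makes a correctness proof possible.

```python
def sum_upto(n):
    """Return 0 + 1 + ... + n, assuming the precondition n >= 0."""
    total = 0
    i = 0
    # Loop invariant: total == 0 + 1 + ... + (i - 1), with 0 <= i <= n + 1.
    # It holds on entry (empty sum) and is preserved by each iteration,
    # so proving it proves the function correct.
    while i <= n:
        total += i
        i += 1
    # On exit i == n + 1, so the invariant gives total == 0 + 1 + ... + n.
    return total

assert sum_upto(5) == 5 * 6 // 2  # agrees with the closed form n*(n+1)/2
```

The proof lives entirely in the invariant: once it is established mathematically, the code's correctness follows for every valid input, not just the ones we happened to test.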

The flip side of this power to prove is that only a tiny fraction of things can actually be written in mathematics. Using math as a way to describe the world is a frustratingly limiting experience. You have to leave out a lot of aspects that simply cannot be captured in math, condemning it to being a very specialised but powerful tool that only applies to very small parts of real-world problems.

We then introduce four concepts that are of central importance to mathematics as well as informatics: abstraction, deduction, induction, and recursion. Abstraction is explained as the combination of reduction and generalisation, and with it comes the distinction between conceptual and functional abstraction. We show that deduction in everyday language is actually induction, as witnessed by Sherlock's »deductions«, which are inductions in a mathematical sense. Finally, we try to convey a general understanding of recursion, which plays a central role in informatics as a kind of gateway concept; they who understand recursion are deemed worthy of computer science.
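Recursion's kinship with mathematical induction can be made concrete in a few lines. The following sketch (an illustrative example, not taken from the chapter) defines the factorial the same way an inductive proof would argue about it: a base case plus a rule that reduces each case to a smaller one.

```python
def factorial(n):
    """Compute n!, assuming the precondition n >= 0."""
    # Base case: mirrors the base case of an inductive definition (0! = 1).
    if n == 0:
        return 1
    # Recursive case: n! is defined in terms of the smaller problem (n-1)!,
    # just as the inductive step assumes the claim for n-1 to prove it for n.
    return n * factorial(n - 1)
```

Reading the recursive case as an inductive step is precisely the mental shift the chapter aims for: if the base case is right and each step preserves correctness, the function is correct for all inputs.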

The chapter closes with a critical reflection on the rationality of thought in the context of informatics. It opens with a quote from Parnas and Clements' classic article »A Rational Design Process: How and Why to Fake It«: »Ideally, we would like to derive our programs from a statement of requirements in the same sense that theorems are derived from axioms in a published proof.« We then deconstruct the idea that problems are solved through separation, logical order, and planning, based on the critique brought forward in Henrik Gedenryd's seminal »How Designers Work«. We introduce the notion that problem solving is really an incremental, cyclical, and often chaotic process that consistently refuses to be pressed into linear process models. This prepares students for the importance of the concept of »wicked problems«, introduced in *design thinking*, as well as the necessity to engage with *creative thinking* in order to find a more complete understanding of problem solving.

Next: *computational thinking*

**Calls for discussion**

Where do you think we could improve this chapter? Are we missing essential bits?