The field increasingly recognises its responsibility within society, as technology becomes a constituent part of today’s world. Deflecting this responsibility by stating “I was just the scientist” will not be good enough. The chapter starts with common excuses technologists use to avoid criticism. A prominent example is the notion of “Everyone else is doing it anyway”, or its variation “If I am not doing it, somebody else will”. Other common statements include “I couldn’t know what it would be used for” (variation: “Guns don’t kill people, people kill people”), “It’s just always been this way” and “I just did my job” (or “I needed the money”). None of these excuses is necessarily entirely invalid, and the moral judgement depends on the precise circumstances. Discussing these common excuses in class serves both to sensitise students and to illustrate that responsibility is typically complex, situated and distributed.
We then introduce the three main positions from moral philosophy: virtue ethics, deontology and consequentialism, and we demonstrate how ethical judgements shift when these are applied to a controversial social media study in which researchers from Cornell University and Facebook involved 689,003 users without their knowledge to investigate emotional contagion. The main learning point here is that ethical judgements depend on a moral position that we often tacitly assume. We then discuss the notion of values, their sources and how they relate to beliefs. We demonstrate how values are used in data-driven psychological profiling and how this is applied in political campaigning for the purposes of manipulation. We discuss online examples, such as the MIT project Collective Debate, to demonstrate the implications for public discourse.
We then explore two main areas of moral import for future computer scientists: ethical conduct in science, and responsible research and innovation (RRI). Going through classic examples such as the Milgram experiment and the Tuskegee study, and linking back to the Nuremberg trials, we derive fundamental principles for research ethics that still form the backbone of most formal ethics frameworks: informed consent, respect, fairness, and judging the balance between knowledge gain and risk to participants. We review some existing ethics frameworks (e.g. the Australian National Statement on Ethical Conduct in Human Research) and step through two hypothetical research studies to identify common ethics issues. We stress that anticipatory ethics has its limits and that an approved ethics application does not replace an ethical mindset while conducting a study.
We then turn to the RRI framework implemented by the European Union to discuss the central pillars of a reflective and responsible practice: anticipation, reflexivity, inclusion and responsiveness. We apply these principles to the case of predictive policing and point out the many moral dilemmas involved. We then discuss the MIT project The Moral Machine and point out the limitations of such a data-driven engineering approach to moral judgements. The video Slaughterbots by the Campaign against Autonomous Weapons serves as a chilling example of why responsible innovation is all the more important at the intersection of AI and military applications. Connecting to this point, we discuss the case of Google workers forcing their company out of a related AI project, and we note that the value of programmers as a highly skilled and sought-after workforce makes them increasingly powerful as activists for societal responsibility.
We end with Kranzberg’s first law of technology:
“Technology is neither good nor bad; nor is it neutral.”
We then discuss a quote by Ryan Calo (Stanford Law School):
The most interesting aspect of cyberspace flows from its status as an engine of realization: cyberspace widens the range of what we think of as possible. It’s often said that where there’s a will, there’s a way. I don’t agree. […] I’d say the converse is more plausible. Where there’s a way, there’s a will. If one day a new road for thought yawns into the distance, some adventuring mind will take it.
Next chapter: criminal thinking
Calls for discussion
Where do you think we could improve this chapter? Are we missing essential bits?
We always appreciate ideas for exercises that can help students comprehend the core concepts of responsible thinking.
Which general topics from the typical CS curriculum would benefit most from being discussed from an ethical perspective?