The striking success of modern AI systems raises new ethical questions that complicate orthodox views of how humans interact with technology. The widespread use of AI systems complicates ascriptions of moral responsibility, i.e., deciding who ought to bear blame when things go wrong. The black-box nature of AI systems threatens to erode trust in the organizations that deploy them because, for example, it can be difficult, if not impossible, to understand why a system made one decision rather than another. It can, moreover, be difficult to assure the safety of AI systems given their tendency to display capabilities they were not explicitly designed to possess. The use of AI systems also raises privacy concerns because of the massive amounts of data required for their training. And training AI systems on human-generated data can have far-reaching consequences for justice and fairness: the varied biases encoded in the data are parroted back by the system, exacerbating existing inequalities. AI systems have even reinvigorated debates about who or what ought to be included within our moral circle, i.e., who or what counts as deserving of moral consideration (a moral patient) or as capable of bearing moral responsibility (a moral agent). This seminar will introduce you to the ethical dimensions of modern AI, with a special focus on normative ethics, governance, and regulatory practice.

Suggested reading: 

Dubber, M. D., Pasquale, F., & Das, S. (Eds.). (2020). The Oxford handbook of ethics of AI. Oxford University Press.

Liao, S. M. (Ed.). (2020). Ethics of artificial intelligence. Oxford University Press. 

Schaich Borg, J., Sinnott-Armstrong, W., & Conitzer, V. (2024). Moral AI: And how we get there. Pelican Books.

Semester: WiSe 2024/25