Self-fulfilling Prophecy in Practical and Automated Prediction

Publication: Contribution to journal › Journal article › Research › Peer-reviewed

Documents

  • Fulltext

    Publisher's published version, 823 KB, PDF document

A self-fulfilling prophecy is, roughly, a prediction that brings about its own truth. Although true predictions are hard to fault, self-fulfilling prophecies are often regarded with suspicion. In this article, we vindicate this suspicion by explaining what self-fulfilling prophecies are and what is problematic about them, paying special attention to how their problems are exacerbated through automated prediction. Our descriptive account of self-fulfilling prophecies articulates the four elements that define them. Based on this account, we begin our critique by showing that typical self-fulfilling prophecies arise due to mistakes about the relationship between a prediction and its object. Such mistakes—along with other mistakes in predicting or in the larger practical endeavor—are easily overlooked when the predictions turn out true. Thus we note that self-fulfilling prophecies prompt no error signals; truth shrouds their mistakes from humans and machines alike. Consequently, self-fulfilling prophecies create several obstacles to accountability for the outcomes they produce. We conclude our critique by showing how failures of accountability, and the associated failures to make corrections, explain the connection between self-fulfilling prophecies and feedback loops. By analyzing the complex relationships between accuracy and other evaluatively significant features of predictions, this article sheds light both on the special case of self-fulfilling prophecies and on the ethics of prediction more generally.
Original language: English
Journal: Ethical Theory and Moral Practice
Volume: 26
Pages (from-to): 127–152
Number of pages: 26
ISSN: 1386-2820
DOI
Status: Published - 2023

Bibliographic note

Funding Information:
For helpful input and feedback, we wish to thank Berend Alberts-de Gier, Sander Beckers, Maren Behrensen, Marianne Boenink, Justin D’Arms, Dai Heide, Olya Kudina, Jonne Maas, Alan Rubel, Mark Ryan, David Skillicorn, Brandt van der Gaast, and Nils Wagner. We are grateful for discussions with audiences at the University of Twente, Northeastern University, and Queen’s University, where earlier versions of this article were presented. We also benefited from discussions of parts of this project at several conferences, including the Aachen Emerging Technology & Evolving Responsibility Workshop, Computer Ethics—Philosophical Enquiry, and the OZSW Annual Conference. Early work on this article was done with support from the Netherlands Organisation for Scientific Research (NWO), under project numbers 652.001.003 (King) and 313-99-309 (Mertens).

Publisher Copyright:
© 2023, The Author(s).
