Preprint, working paper. Year: 2021

Artificial Intelligence, Ethics, and Intergenerational Responsibility

Abstract

Humans shape the behavior of artificially intelligent algorithms. One mechanism is the training these systems receive through passive observation of human behavior and the data we constantly generate. In a laboratory experiment with a sequence of dictator games, we let participants' choices train an algorithm. They thereby create an externality on the future decision making of an intelligent system that affects future participants. We test how information about training artificial intelligence affects the prosociality and selfishness of human behavior. We find that making individuals aware of the consequences of their training for the well-being of future generations changes behavior, but only when individuals bear the risk of being harmed themselves by future algorithmic choices. Only in that case does the externality of artificial intelligence training induce a significantly higher share of egalitarian decisions in the present.
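The abstract does not specify the learning rule used by the algorithm. As a purely illustrative sketch, and not the authors' implementation, the snippet below assumes a trivial imitation rule in which the algorithm reproduces the modal allocation observed in earlier participants' dictator-game choices; the payoff values are hypothetical. Even this minimal rule shows how present choices create an externality on the payoffs of future participants.

```python
# Illustrative sketch only: the paper does not describe the actual learning rule.
# Hypothetical setup: each participant chooses between an egalitarian and a
# selfish split; the "algorithm" later reproduces the modal choice in its
# training data, so present choices affect the payoffs of future participants.
from collections import Counter

EGALITARIAN = "egalitarian"   # e.g., equal split between dictator and recipient
SELFISH = "selfish"           # e.g., dictator keeps the larger share

def train_algorithm(observed_choices):
    """Fit a trivial imitation rule: return the most frequent human choice."""
    return Counter(observed_choices).most_common(1)[0][0]

def algorithmic_payoffs(learned_choice):
    """Hypothetical payoffs (dictator, recipient) implied by the learned rule."""
    return (5, 5) if learned_choice == EGALITARIAN else (8, 2)

# Example: a selfish-leaning training generation harms future recipients.
generation_1 = [SELFISH, SELFISH, EGALITARIAN, SELFISH]
learned = train_algorithm(generation_1)
print(learned, algorithmic_payoffs(learned))   # selfish (8, 2)
```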
Main file: 2110.pdf (917.42 KB)
Origin: Files produced by the author(s)

Dates and versions

halshs-03237437, version 1 (26-05-2021)

Identifiers

  • HAL Id: halshs-03237437, version 1

Cite

Victor Klockmann, Alicia von Schenk, Marie Claire Villeval. Artificial Intelligence, Ethics, and Intergenerational Responsibility. 2021. ⟨halshs-03237437⟩
64 views
292 downloads
Last updated on 20/04/2024
