The perception of decision-making through artificial intelligence when harm is caused to people

  1. Pablo Espinosa
  2. Miguel Clemente
Journal: Estudios penales y criminológicos

ISSN: 1137-7550

Year of publication: 2023

Issue: 44

Type: Article

Abstract

Artificial Intelligence (AI) decision-making may occur in scenarios where, in a split second and without human supervision, a decision must be made that affects the life or well-being of individuals. The AI algorithms used in these cases can be based on deontological or utilitarian criteria. Even if there were a normative consensus on the ethics of AI decision-making, people's rejection of AI ethical criteria they did not find acceptable would hinder its implementation. For instance, if an autonomous car always sacrificed its passengers' safety rather than risk harming others in an unavoidable accident, many people would choose not to buy an autonomous car. In this paper we review social psychology research on the variables involved in the perception of AI decision-making. The social perception of AI may be relevant to developing criteria for legal responsibility. Finally, we examine issues related to the legal field, such as the use of AI in the legal system and in committing crimes.
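
The abstract's contrast between deontological and utilitarian algorithmic criteria can be made concrete with a minimal sketch. The following Python fragment is purely illustrative and not taken from the paper; the `Option` structure, its `expected_casualties` and `involves_active_sacrifice` fields, and the two selection rules are assumptions chosen only to show how the two ethical criteria could yield different choices in an unavoidable-accident scenario.

```python
# Hypothetical sketch: contrasting a utilitarian and a deontological decision
# rule for an unavoidable-accident scenario. All names and fields are
# illustrative assumptions, not part of the paper under discussion.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_casualties: int          # estimated harm if this option is taken
    involves_active_sacrifice: bool   # does it deliberately redirect harm onto someone?

def utilitarian_choice(options: list[Option]) -> Option:
    # Utilitarian criterion: minimize total expected harm, regardless of how it is caused.
    return min(options, key=lambda o: o.expected_casualties)

def deontological_choice(options: list[Option]) -> Option:
    # Deontological criterion: never actively sacrifice someone; among the
    # permissible options, choose the one with the least expected harm.
    permissible = [o for o in options if not o.involves_active_sacrifice] or options
    return min(permissible, key=lambda o: o.expected_casualties)

if __name__ == "__main__":
    scenario = [
        Option("stay on course", expected_casualties=3, involves_active_sacrifice=False),
        Option("swerve into barrier", expected_casualties=1, involves_active_sacrifice=True),
    ]
    print(utilitarian_choice(scenario).name)    # "swerve into barrier" (fewer casualties)
    print(deontological_choice(scenario).name)  # "stay on course" (no active sacrifice)
```

Under these assumptions, the two rules diverge precisely in the kind of dilemma the abstract describes, which is why public acceptance of one criterion over the other matters for implementation.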

Bibliographic References

  • AWAD, E., DSOUZA, S., KIM, R., SCHULZ, J. et al., “The Moral Machine Experiment”, en Nature, 563(7729), 2018, pp. 59-64. https://doi.org/10.1038/s41586-018-0637-6.
  • BARTELS, D. M. y PIZARRO, D.A., “The mismeasure of morals: Antisocial personality traits predict utilitarian responses to moral dilemmas”, en Cognition, 121(1), 2011, pp. 154-161. https://doi.org/10.1016/j.cognition.2011.05.010.
  • BONNEFON, J. F., SHARIFF, A., y RAHWAN, I., “The social dilemma of autonomous vehicles”, en Science, 352(6293), 2016, pp. 1573-1576. https://doi.org/10.1126/science.aaf2654.
  • BOSTYN, D. H., ROETS, A., y CONWAY, P., “Sensitivity to Moral Principles Predicts Both Deontological and Utilitarian Response Tendencies in Sacrificial Dilemmas”, en Social Psychological and Personality Science, 2021, pp. 1-10. https://doi.org/10.1177/19485506211027031.
  • CRIMINAL LAW SENTENCING GUIDELINES, “Wisconsin Supreme Court Requires Warning Before Use of Algorithmic Risk Assessments in Sentencing. - ‘State v. Loomis’, 881 N.W.2d 749 (Wis. 2016).”, en Harvard Law Review, 130(5), 2017, pp. 1530–1537.
  • DE AGREDA, A. G. “Ethics of autonomous weapons systems and its applicability to any AI systems”, en Telecommunications Policy, 44(6), 2020. https://doi.org/10.1016/j.telpol.2020.101953.
  • DE SILES, E.L. “AI, on the Law of the Elephant: Toward Understanding Artificial Intelligence”, en Buffalo Law Review, 69(5), 2021, pp.1389-1469.
  • DINIC, B. M., MILOSAVLJEVIC, M., y MANDARIC, D.J., “Effects of Dark Tetrad traits on utilitarian moral judgement: The role of personal involvement and familiarity with the victim”, en Asian Journal of Social Psychology, 24(1), 2021, pp. 48-58. https://doi.org/10.1111/ajsp.12422.
  • ELLEUCH, M. A., BEN HASSENA, A., ABDELHEDI, M. y PINTO, F.S., “Real-time prediction of COVID-19 patients health situations using Artificial Neural Networks and Fuzzy Interval Mathematical modeling”, en Applied Soft Computing, 110, 2021. https://doi.org/10.1016/j.asoc.2021.107643.
  • EVERETT, J. A. C. y KAHANE, G., “Switching Tracks? Towards a Multidimensional Model of Utilitarian Psychology”, en Trends in Cognitive Sciences, 24(2), 2020, pp. 124-134. https://doi.org/10.1016/j.tics.2019.11.012.
  • FEIER, T., GOGOLL, J., y UHL, M., “Hiding Behind Machines: Artificial Agents May Help to Evade Punishment”, en Science and Engineering Ethics, 28(2), Article 19, 2022. https://doi.org/10.1007/s11948-022-00372-7.
  • FOOT, P., “The problem of abortion and the doctrine of the double effect”, en Oxford Review, 5, 1967, pp. 5–15.
  • GAWRONSKI, B., ARMSTRONG, J., CONWAY, P., FRIESDORF, R., et al., “Consequences, Norms, and Generalized Inaction in Moral Dilemmas: The CNI Model of Moral Decision-Making”, en Journal of Personality and Social Psychology, 113(3), 2017, pp.343-376. https://doi.org/10.1037/pspa0000086.
  • GOGOLL, J. y MULLER, J.F., “Autonomous Cars: In Favor of a Mandatory Ethics Setting”, en Science and Engineering Ethics, 23(3), 2017, pp. 681-700. https://doi.org/10.1007/s11948-016-9806-x.
  • GRATCH, J. y FAST, N.J., “The power to harm: AI assistants pave the way to unethical behavior”, en Current Opinion in Psychology, 47, 2022. https://doi.org/10.1016/j.copsyc.2022.101382.
  • HAIDT, J., “The emotional dog and its rational tail: A social intuitionist approach to moral judgment”, en Psychological Review, 108(4), 2001, pp. 814-834. https://doi.org/10.1037//0033-295x.108.4.814.
  • HARRIS, J., “The Immoral Machine”, en Cambridge Quarterly of Healthcare Ethics, 29(1), 2020, pp. 71-79. https://doi.org/10.1017/s096318011900080x.
  • KAHANE, G., EVERETT, J.A.C., EARP, B.D., FARIAS, M. et al., “'Utilitarian' judgments in sacrificial moral dilemmas do not reflect impartial concern for the greater good”, en Cognition, 134, 2015, pp. 193-209. https://doi.org/10.1016/j.cognition.2014.10.005.
  • KING, T.C., AGGARWAL, N., TADDEO, M. y FLORIDI, L., “Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions”, en Science and Engineering Ethics, 26(1), 2020, pp. 89-120. https://doi.org/10.1007/s11948-018-00081-0.
  • LIU, P. y LIU, J.T., “Selfish or Utilitarian Automated Vehicles? Deontological Evaluation and Public Acceptance”, en International Journal of Human-Computer Interaction, 37(13), 2021, pp. 1231-1242. https://doi.org/10.1080/10447318.2021.1876357.
  • MORITA, T. y MANAGI, S., “Autonomous vehicles: Willingness to pay and the social dilemma”, en Transportation Research Part C-Emerging Technologies, 119, 2020. https://doi.org/10.1016/j.tre.2020.102748.
  • NAVARICK, D.J., “Question framing and sensitivity to consequences in sacrificial moral dilemmas”, en Journal of Social Psychology, 161(1), 2021, pp. 25-39. https://doi.org/10.1080/00224545.2020.1749019.
  • PLETTI, C., LOTTO, L., BUODO, G., y SARLO, M., “It's immoral, but I'd do it! Psychopathy traits affect decision-making in sacrificial dilemmas and in everyday moral situations”, en British Journal of Psychology, 108(2), 2017, pp. 351-368. https://doi.org/10.1111/bjop.12205.
  • STARKE, C., BALEIS, J., KELLER, B. y MARCINKOWSKI, F., “Fairness perceptions of algorithmic decision-making: A systematic review of the empirical literature”, en Big Data & Society, 9(2), 2022. https://doi.org/10.1177/20539517221115189.
  • TAKAMATSU, R., “Personality correlates and utilitarian judgments in the everyday context: Psychopathic traits and differential effects of empathy, social dominance orientation, and dehumanization beliefs”, en Personality and Individual Differences, 146, 2019, pp. 1-8. https://doi.org/10.1016/j.paid.2019.03.029.
  • TIGARD, D.W., “Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible”, en Cambridge Quarterly of Healthcare Ethics, 30(3), 2021, pp. 435-447. https://doi.org/10.1017/s0963180120000985.
  • XU, Z.C., “Human Judges in the Era of Artificial Intelligence: Challenges and Opportunities”, en Applied Artificial Intelligence, 36(1), 2022. https://doi.org/10.1080/08839514.2021.2013652.
  • YOKOI, R. y NAKAYACHI, K., “Trust in Autonomous Cars: Exploring the Role of Shared Moral Values, Reasoning, and Emotion in Safety-Critical Decisions”, en Human Factors, 63(8), 2021, pp. 1465-1484. https://doi.org/10.1177/0018720820933041.
  • YOUNG, A.D., y MONROE, A.E., “Autonomous morals: Inferences of mind predict acceptance of AI behavior in sacrificial moral dilemmas”, en Journal of Experimental Social Psychology, 85, 2019. https://doi.org/10.1016/j.jesp.2019.103870.
  • ZHANG, Z.X., CHEN, Z.S., y XU, L.Y., “Artificial intelligence and moral dilemmas: Perception of ethical decision-making in AI”, en Journal of Experimental Social Psychology, 101, 2022. https://doi.org/10.1016/j.jesp.2022.104327.