Title: Artificial Moral Cognition: Moral Functionalism and Autonomous Moral Agency
Abstract: This paper proposes a model of the Artificial Autonomous Moral Agent (AAMA), discusses a standard of moral cognition for AAMA, and compares it with other models of artificial normative agency. It is argued here that artificial morality is possible within the framework of a “moral dispositional functionalism.” This AAMA is able to read the behavior of human actors, available as collected data, and to categorize their moral behavior based on the moral patterns present in that data. The present model is grounded in several analogies among artificial cognition, human cognition, and moral action. It is premised on the idea that moral agents should be built not on rule-following procedures, but on learning patterns from data. This idea is rarely implemented in AAMA models, although it has been suggested in the machine ethics literature (W. Wallach, C. Allen, J. Gips, and especially M. Guarini). As an agent-based model, this AAMA constitutes an alternative to the mainstream action-centric models proposed by K. Abney, M. Anderson and S. Anderson, R. Arkin, T. Powers, W. Wallach, among others. Moral learning and the moral development of dispositional traits play a fundamental role in cognition here. By using a combination of neural networks and evolutionary computation, called “soft computing” (H. Adeli, N. Siddique, S. Mitra, L. Zadeh), the present model reaches a level of autonomy and complexity that illustrates well “moral particularism” and a form of virtue ethics for machines, grounded in active learning. An example derived from the “lifeboat metaphor” (G. Hardin) and the extension of this model to the NEAT architecture (K. Stanley, R. Miikkulainen, among others) are briefly assessed.
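The “soft computing” combination the abstract mentions — a neural network tuned by evolutionary computation rather than by rule-following — can be sketched minimally. The snippet below is a hypothetical illustration, not the paper's implementation: the fixed two-hidden-unit topology, the XOR-like toy data standing in for observed behavior patterns, and all parameter values are invented for demonstration. (The NEAT architecture cited in the abstract would additionally evolve the network's topology, not just its weights.)

```python
import math
import random

def forward(weights, x):
    """Tiny fixed-topology net: 2 inputs, 2 tanh hidden units, 1 tanh output.

    `weights` is a flat list of 9 values (6 hidden + 3 output parameters).
    """
    h0 = math.tanh(weights[0] * x[0] + weights[1] * x[1] + weights[2])
    h1 = math.tanh(weights[3] * x[0] + weights[4] * x[1] + weights[5])
    return math.tanh(weights[6] * h0 + weights[7] * h1 + weights[8])

# Invented toy "behavior patterns": inputs paired with a +1/-1 category label.
DATA = [([0, 0], -1), ([0, 1], 1), ([1, 0], 1), ([1, 1], -1)]

def fitness(weights):
    # Negative squared error over the toy data (higher is better).
    return -sum((forward(weights, x) - y) ** 2 for x, y in DATA)

def evolve(pop_size=50, generations=200, sigma=0.3, seed=0):
    """Evolve network weights: truncation selection plus Gaussian mutation.

    The top 20% of each generation survive unchanged (elitism), so the best
    fitness in the population never decreases across generations.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 5]
        pop = parents + [
            [w + rng.gauss(0, sigma) for w in rng.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=fitness)

best = evolve()
predictions = [1 if forward(best, x) > 0 else -1 for x, _ in DATA]
print(predictions)
```

The design choice this sketch highlights is the one the abstract emphasizes: nothing in the agent encodes an explicit rule for the categories — the dispositional “trait” is a weight vector shaped entirely by selection pressure on observed data.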
Publication Year: 2017
Publication Date: 2017-01-01
Language: en
Type: book-chapter
Indexed In: ['crossref']
Access and Citation
Cited By Count: 36