This paper surveys nine studies that implement aspects of moral reasoning within cognitive architectures (CAs) or CA-inspired frameworks. Its primary aim is to assess the viability of this approach for future research and to clarify the state of the domain. Two research paradigms emerge: (1) modeling human moral reasoning and (2) constructing artificial moral agents. Despite this distinction, all studies face similar challenges: fragmented reuse (each employs a different architecture), limited pre-programmed behaviors, and the absence of standardized benchmarks or metrics. Researchers remain optimistic about the explainability of their systems' behaviors and inner workings, yet they often acknowledge significant scalability and validation hurdles. Overall, CAs currently support only small-scale experiments; substantial further research, both empirical and into the field's theoretical foundations, is needed before these systems can attain real-world relevance.