"uuid","repository link","title","author","contributor","publication year","abstract","subject topic","language","publication type","publisher","isbn","issn","patent","patent status","bibliographic note","access restriction","embargo date","faculty","department","research group","programme","project","coordinates"
"uuid:33283954-fd1d-40c9-a6bf-7bd020350bbe","http://resolver.tudelft.nl/uuid:33283954-fd1d-40c9-a6bf-7bd020350bbe","Context-specific value inference via hybrid intelligence","Liscio, E. (TU Delft Interactive Intelligence)","Jonker, C.M. (promotor); Murukannaiah, P.K. (copromotor); Delft University of Technology (degree granting institution)","2024","Human values are the abstract motivations that drive our opinions and actions. AI agents ought to align their behavior with our value preferences (the relative importance we ascribe to different values) to co-exist with us in our society. However, value preferences differ across individuals and are dependent on context. To reflect diversity in society and to align with contextual value preferences, AI agents must be able to discern the value preferences of the relevant individuals by interacting with them. We refer to this as the value inference challenge, which is the focus of this thesis. Value inference entails several challenges and the related work on value inference is scattered across different AI subfields. We present a comprehensive overview of the value inference challenge by breaking it down into three distinct steps and showing the interconnections among these steps.","Values; Natural Language Processing; Morality; Ethics; Explainable AI; Active Learning; Hybrid Intelligence","en","doctoral thesis","","978-94-6366-840-8","","","","","","","","","Interactive Intelligence","","",""
"uuid:01302e26-9428-4c67-acd5-7e5bbe77e7cb","http://resolver.tudelft.nl/uuid:01302e26-9428-4c67-acd5-7e5bbe77e7cb","Remittance dependence, support for taxation and quality of public services in Africa","Konte, Maty (World Bank); Ndubuisi, G.O. (TU Delft Economics of Technology and Innovation; Universiteit Maastricht)","","2024","We explore the heterogeneous effect of migrant remittances on citizens' support for taxation using a sample comprising 45,000 individuals from the Afrobarometer survey round 7 [2016–2018] across 34 African countries. To correct for unobserved heterogeneity, we endogenously identify latent classes/subtypes of individuals that share similar patterns on how their support for taxation is affected by their unobserved and observed characteristics, including remittance dependency. We apply the finite multilevel mixture of regressions approach, a supervised machine learning method to detect hidden classes in the data without imposing a priori assumptions on class membership. Our data are best generated by an econometric model with two classes/subtypes of individuals. In class 1 where more than two-thirds of the citizens belong, we do not find any significant evidence that remittance dependence affects support for taxation. However, in class 2 where the remaining one-third of the citizens belong, we find a significant negative effect of remittance dependence on support for taxation. Furthermore, we find that citizens who have a positive appraisal of the quality of the public service delivery have a lower probability of belonging to the class in which depending on remittance reduces support for taxation. The findings emphasize the need for efficient public services provisioning to counteract the adverse effect of remittances on tax morale.","Africa; public services; remittance; tax morale; taxation","en","journal article","","","","","","","","","","","Economics of Technology and Innovation","","",""
"uuid:bf965b32-ed0b-4e8a-8bca-10cb860a883b","http://resolver.tudelft.nl/uuid:bf965b32-ed0b-4e8a-8bca-10cb860a883b","Modelling Value Change: An Exploratory Approach","de Wildt, T.E. (TU Delft Ethics & Philosophy of Technology); van de Poel, I.R. (TU Delft Ethics & Philosophy of Technology)","","2024","Value and moral change have increasingly become topics of interest in the philosophical literature. Several theoretical accounts have been proposed. These are usually based on certain theoretical and conceptual assumptions. Their strengths and weaknesses are often difficult to determine and compare because they are based on limited empirical evidence. We propose agent-based modeling to build simulation models that can theoretically help us explore accounts of value change. We can investigate whether a simulation model based on a specific account of value change can reproduce relevant phenomena. To illustrate this approach, we build a model based on the pragmatist account of value change proposed by Van De Poel and Kudina (2022). We show that this model can reproduce four relevant phenomena, namely 1) the inevitability and stability of values, 2) societies differ in openness and resistance to change, 3) moral revolutions, and 4) lock-in. This makes this account promising, although more research is needed to see how well it can explain other relevant phenomena and compare its strengths and weaknesses to other accounts. On a more methodological level, our contribution suggests that simulation models might be useful to theoretically explore accounts of value change and make further progress in this area.","Value Change; Moral Change; Agent-Based Modelling; Exploratory Modelling","en","journal article","","","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:53403207-3875-4e99-a6f0-f7128f60e942","http://resolver.tudelft.nl/uuid:53403207-3875-4e99-a6f0-f7128f60e942","Moral foundations in gender violence cases decided in Portuguese courts","Martins Martinho Bessa, A.C. (TU Delft Transport and Logistics); Kroesen, M. (TU Delft Transport and Logistics); Chorus, C.G. (TU Delft Industrial Design Engineering)","","2024","Gender violence encompasses a multitude of morally problematic psychological, physical, and sexual behaviors that, in most countries, constitute criminal offenses. In this study, we investigate the association between moral foundations (Care, Fairness, Loyalty, Authority, and Sanctity) and punitive responses to gender violence offenses. Our case study focuses on gender violence in Portugal, a country in which these offenses are a prevalent social problem. We collected data on gender violence legal cases decided in Portuguese courts between 2002 and 2022, and we used a latent class cluster analysis model to identify the complex patterns in the data and reduce such patterns to a distinct number of clusters. Four main clusters unravel latent relations between the foundations mapped in the legal narratives and corresponding punitive responses: (i) Affirmative with suspended prison time (moral rhetoric rooted in Authority); (ii) Mixed outcomes but no prison time (moral rhetoric rooted in Sanctity); (iii) Affirmative with lengthy prison time large compensation (moral rhetoric rooted in Loyalty and Care); and (iv) Affirmative with court fines (moral rhetoric rooted in Fairness). The moral foundations provide a valuable lens to understand the problem of gender violence, but further research is needed to establish the causal mechanisms between morality and punitive responses to gender violence.","Court; domestic violence; gender violence; latent class cluster analysis; legal cases; moral foundations theory; morality","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-09-05","Industrial Design Engineering","","Transport and Logistics","","",""
"uuid:8cabdf81-f503-463f-89d7-7a2874d3f876","http://resolver.tudelft.nl/uuid:8cabdf81-f503-463f-89d7-7a2874d3f876","Moral Values, Behaviour, and the Self: An empirical and conceptual analysis","van den Berg, T.G.C. (TU Delft Transport and Logistics)","Chorus, C.G. (promotor); Kroesen, M. (promotor); Corrias, L.D.A. (copromotor); Delft University of Technology (degree granting institution)","2023","","Moral Values; Moral Behaviour; Moral Self; Narrative Identity; Moral Foundations Theory; Moral Psychology; Phenomenology; Ricoeur; International Crimes","en","doctoral thesis","","978-94-6483-127-6","","","","","","","","","Transport and Logistics","","",""
"uuid:0a41b907-9477-4cf3-aa1e-2ac979547522","http://resolver.tudelft.nl/uuid:0a41b907-9477-4cf3-aa1e-2ac979547522","The moral source of collective irrationality during COVID-19 vaccination campaigns","Voinea, Cristina (University of Bucharest); Marin, L. (TU Delft Ethics & Philosophy of Technology); Vica, Constantin (University of Bucharest)","","2023","Many hypotheses have been advanced to explain the collective irrationality of COVID-19 vaccine hesitancy, such as partisanship and ideology, exposure to misinformation and conspiracy theories or the effectiveness of public messaging. This paper presents a complementary explanation to epistemic accounts of collective irrationality, focusing on the moral reasons underlying people’s decisions regarding vaccination. We argue that the moralization of COVID-19 risk mitigation measures contributed to the polarization of groups along moral values, which ultimately led to the emergence of collective irrational behaviors. Collective irrationality arises from groups explicitly or implicitly endorsing values that ultimately harm both themselves and those around. The role of social media platforms in amplifying this polarization and contributing to the emergence of collective irrationality is also examined. Finally, potential strategies for addressing the moral sources of collective irrationality are discussed.","Collective irrationality; moral reasons; covid-19; vaccine hesitancy; Social media","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-05-28","","","Ethics & Philosophy of Technology","","",""
"uuid:0a020d5d-1b38-4102-ab14-de2f4efd414d","http://resolver.tudelft.nl/uuid:0a020d5d-1b38-4102-ab14-de2f4efd414d","How Engineers Can Care from a Distance: Promoting Moral Sensitivity in Engineering Ethics Education","van Grunsven, J.B. (TU Delft Ethics & Philosophy of Technology); Marin, L. (TU Delft Ethics & Philosophy of Technology); Stone, T.W. (TU Delft Ethics & Philosophy of Technology); Doorn, N. (TU Delft Ethics & Philosophy of Technology); Roeser, S. (TU Delft Values Technology and Innovation)","Miller, Glenn (editor); Mateus Jerónimo, Helena (editor); Zhu, Qin (editor)","2023","Moral (or ethical) sensitivity is widely viewed as a foundational learning goal in engineering ethics education. We have argued in this paper is that this view of moral sensitivity cannot be readily transported from the nursing context to the engineering context on the basis of a care-analogy. The particularized care characteristic of the nursing context is decisively different from the generalized and universalized forms of care characteristic of the engineering context. Through a focus on care and maintenance, the engineering student’s moral sensitivity can be refined, opening up a perceptual awakening and affectivity towards the complex nature of the engineer’s Other. This awakening is in part promoted through an understanding of the ideology of neutrality as a moment in the history engineering. Becoming aware of this ideology as an ideology can then be seen as an activity of dividing loyalties that allows for a reflexive and critical view of the biases and presuppositions inherited within the world of engineering. This process of deepening the engineering student’s moral sensitivity is perhaps as much a process of the student becoming aware of her professional world, how it shapes her understanding of herself, and what it means to be a good engineer.","Philosophy of engineering; Engineering education; philosophy of technology; Moral sensitivity","en","book chapter","Rowman & Littlefield","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-09-01","","Values Technology and Innovation","Ethics & Philosophy of Technology","","",""
"uuid:ee216e0c-64bf-4414-886d-a3acac3117a0","http://resolver.tudelft.nl/uuid:ee216e0c-64bf-4414-886d-a3acac3117a0","Normative uncertainty and societal preferences: The problem with evaluative standards","Kuilman, S.K. (TU Delft Interactive Intelligence); Andriamahery, Koji (IMT Mines Alès); Jonker, C.M. (TU Delft Interactive Intelligence); Cavalcante Siebert, L. (TU Delft Interactive Intelligence)","","2023","Many technological systems these days interact with their environment with increasingly little human intervention. This situation comes with higher stakes and consequences that society needs to manage. No longer are we dealing with 404 pages: AI systems today may cause serious harm. To address this, we wish to exert a kind of control over these systems, so that they can adhere to our moral beliefs. However, given the plurality of values in our societies, which “oughts” ought these machines to adhere to? In this article, we examine Borda voting as a way to maximize expected choice-worthiness among individuals through different possible “implementations” of ethical principles. We use data from the Moral Machine experiment to illustrate the effectiveness of such a voting system. Although it appears to be effective on average, the maximization of expected choice-worthiness is heavily dependent on the formulation of principles. While Borda voting may be a good way of ensuring outcomes that are preferable to many, the larger problems in maximizing expected choice-worthiness, such as the capacity to formulate credences well, remain notoriously difficult; hence, we argue that such mechanisms should be implemented with caution and that other problems ought to be solved first.","normative uncertainty; limit of forms; ethics; preference profiles; moral machine; self-driving cars","en","journal article","","","","","","","","","","","Interactive Intelligence","","",""
"uuid:a73174a9-606c-42a2-bfde-a9d3481b0ca7","http://resolver.tudelft.nl/uuid:a73174a9-606c-42a2-bfde-a9d3481b0ca7","What Attentional Moral Perception Cannot Do but Emotions Can","Hutton, James (TU Delft Ethics & Philosophy of Technology)","","2023","Jonna Vance and Preston Werner argue that humans’ mechanisms of perceptual attention tend to be sensitive to morally relevant properties. They dub this tendency “Attentional Moral Perception” (AMP) and argue that it can play all the explanatory roles that some theorists have hoped moral perception can play. In this article, I argue that, although AMP can indeed play some important explanatory roles, there are certain crucial things that AMP cannot do. Firstly, many theorists appeal to moral perception to explain how moral knowledge is possible. I argue that AMP cannot put an agent in a position to acquire moral knowledge unless it is supplemented with some other capacity for becoming aware of moral properties. Secondly, theorists appeal to moral perception to explain “moral conversions”, i.e., cases in which an experience leads an agent to form a moral belief that conflicts with her pre-existing moral beliefs. I argue that AMP cannot explain this either. Due to these shortcomings, theorists should turn to emotions for a powerful and psychologically realistic account of virtuous agents’ sensitivity to the moral landscape.","moral epistemology; moral psychology; moral perception; attention; emotion; epistemic sentimentalism","en","journal article","","","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:ffc33a4e-79df-47b5-ae70-609e1ba63567","http://resolver.tudelft.nl/uuid:ffc33a4e-79df-47b5-ae70-609e1ba63567","Give and take: Moral aspects of travelers' intentions to participate in a hypothetical established social routing scheme","Szép, T. (TU Delft Transport and Logistics); van den Berg, T.G.C. (TU Delft Transport and Logistics); Cointe, Nicolas; Daniel, Aemiro Melkamu (Swedish University of Agricultural Sciences); Martinho, Andreia (Tufts University); Tang, Tanzhe (Rijksuniversiteit Groningen); Chorus, C.G. (TU Delft Industrial Design Engineering)","","2023","Social routing schemes are widely regarded as promising tools to reduce traffic congestion in urban networks. We contribute to the growing literature on such schemes and their effect on travel behavior, by exploring the interaction between the characteristics and framing of the scheme on the one hand, and travelers' moral personality and moral motivations on the other hand. Our method uses a two-wave stated intention experiment eliciting preferences in a hypothetical context where a social routing scheme is presumed to have been established already. This is followed by a morality survey. We hypothesize and then confirm the following: when a social routing scheme is framed and designed as an altruistic effort requesting personal sacrifices for the benefit of other travelers, people who strongly adhere to care related notions of morality are attracted to such a scheme. On the contrary, a scheme that is designed and framed as a collective endeavour which would also benefit participating travelers attracts those who strongly adhere to moral notions related to fairness. We derive tentative policy recommendations from our findings, suggesting that a collective good scheme, albeit more difficult to implement, is likely to be more viable in the long run.","Altruism; Collective good; Contextual morality; Discrete choice analysis; Moral Foundations Questionnaire; Social routing","en","journal article","","","","","","","","","Industrial Design Engineering","","Transport and Logistics","","",""
"uuid:f886a863-6e06-4af2-a0c9-8a5217259436","http://resolver.tudelft.nl/uuid:f886a863-6e06-4af2-a0c9-8a5217259436","Moral rhetoric in discrete choice models: a Natural Language Processing approach","Szép, T. (TU Delft Transport and Logistics); van Cranenburgh, S. (TU Delft Transport and Logistics); Chorus, C.G. (TU Delft Industrial Design Engineering; TU Delft Engineering, Systems and Services)","","2023","This paper proposes a new method to combine choice- and text data to infer moral motivations from people’s actions. To do this, we rely on moral rhetoric, in other words, extracting moral values from verbal expressions with Natural Language Processing techniques. We use moral rhetoric based on a well-established moral, psychological theory called Moral Foundations Theory. We use moral rhetoric as input in Discrete Choice Models to gain insights into moral behaviour based on people’s words and actions. We test our method in a case study of voting and party defection in the European Parliament. Our results indicate that moral rhetoric have significant explanatory power in modelling voting behaviour. We interpret the results in the light of political science literature and propose ways for future investigations.","Discrete choice models; Moral Foundations Theory; Moral rhetoric; Natural Language Processing","en","journal article","","","","","","","","","Industrial Design Engineering","Engineering, Systems and Services","Transport and Logistics","","",""
"uuid:ef493877-34d0-40a2-89b9-991a4204d947","http://resolver.tudelft.nl/uuid:ef493877-34d0-40a2-89b9-991a4204d947","Towards machine learning for moral choice analysis in health economics: A literature review and research agenda","Smeele, Nicholas V.R. (Erasmus Universiteit Rotterdam); Chorus, C.G. (TU Delft Industrial Design Engineering); Schermer, Maartje H.N. (Erasmus MC); de Bekker-Grob, Esther W. (Erasmus Universiteit Rotterdam)","","2023","Background: Discrete choice models (DCMs) for moral choice analysis will likely lead to erroneous model outcomes and misguided policy recommendations, as only some characteristics of moral decision-making are considered. Machine learning (ML) is recently gaining interest in the field of discrete choice modelling. This paper explores the potential of combining DCMs and ML to study moral decision-making more accurately and better inform policy decisions in healthcare. Methods: An interdisciplinary literature search across four databases – PubMed, Scopus, Web of Science, and Arxiv – was conducted to gather papers. Based on the Preferred Reporting Items for Systematic and Meta-analyses (PRISMA) guideline, studies were screened for eligibility on inclusion criteria and extracted attributes from eligible papers. Of the 6285 articles, we included 277 studies. Results: DCMs have shortcomings in studying moral decision-making. Whilst the DCMs' mathematical elegance and behavioural appeal hold clear interpretations, the models do not account for the ‘moral’ cost and benefit in an individual's utility calculation. The literature showed that ML obtains higher predictive power, model flexibility, and ability to handle large and unstructured datasets. Combining the strengths of ML methods with DCMs has the potential for studying moral decision-making. Conclusions: By providing a research agenda, this paper highlights that ML has clear potential to i) find and deepen the utility specification of DCMs, and ii) enrich the insights extracted from DCMs by considering the intrapersonal determinants of moral decision-making.","Discrete choice models; Health preference research; Literature review; Machine learning; Moral decision-making; Moral preferences; Research agenda","en","review","","","","","","","","","Industrial Design Engineering","","","","",""
"uuid:7e5489f6-8946-40ba-a88c-4dc2573fa1e0","http://resolver.tudelft.nl/uuid:7e5489f6-8946-40ba-a88c-4dc2573fa1e0","Moral foundations theory and the narrative self: towards an improved concept of moral selfhood for the empirical study of morality","van den Berg, T.G.C. (TU Delft Transport and Logistics); Corrias, Luigi Dennis Alessandro (Vrije Universiteit Amsterdam)","","2023","Within the empirical study of moral decision making, people’s morality is often identified by measuring general moral values through a questionnaire, such as the Moral Foundations Questionnaire provided by Moral Foundations Theory (MFT). However, the success of these moral values in predicting people’s behaviour has been disappointing. The general and context-free manner in which such approaches measure moral values and people’s moral identity seems crucial in this respect. Yet, little research has been done into the underlying notion of self. This article aims to fill this gap. Taking a phenomenological approach and focusing on MFT, we examine the concept of moral self that MFT assumes and present an improved concept of moral self for the empirical study of morality. First, we show that MFT adopts an essentialist concept of moral self, consisting of stable moral traits. Then, we argue that such a notion is unable to grasp the dynamical and context sensitive aspects of the moral self. We submit that Ricoeur’s narrative notion of identity, a self that reinterprets itself in every decision situation through self-narrative, is a viable alternative since it is able to incorporate context sensitivity and change, while maintaining a persisting moral identity. Finally, we argue that this narrative concept of moral self implies measuring people’s morality in a more exploratory fashion within a delineated context.","Moral Foundation Theory; Moral self; Moral values; Narrative self; Ricoeur","en","journal article","","","","","","","","","","","Transport and Logistics","","",""
"uuid:bcccfa5e-f406-4ac6-8cf2-71eb00b07d07","http://resolver.tudelft.nl/uuid:bcccfa5e-f406-4ac6-8cf2-71eb00b07d07","The Quarrel of Local Post-hoc Explainers for Moral Values Classification in Natural Language Processing","Agiollo, A. (TU Delft Interactive Intelligence; Alma Mater Studiorum – Universitá di Bologna); Cavalcante Siebert, L. (TU Delft Interactive Intelligence); Murukannaiah, P.K. (TU Delft Interactive Intelligence); Omicini, Andrea (Alma Mater Studiorum – Universitá di Bologna)","Calvaresi, Davide (editor); Najjar, Amro (editor); Omicini, Andrea (editor); Carli, Rachele (editor); Ciatto, Giovanni (editor); Aydogan, Reyhan (editor); Mualla, Yazan (editor); Främling, Kary (editor)","2023","Although popular and effective, large language models (LLM) are characterised by a performance vs. transparency trade-off that hinders their applicability to sensitive scenarios. This is the main reason behind many approaches focusing on local post-hoc explanations recently proposed by the XAI community. However, to the best of our knowledge, a thorough comparison among available explainability techniques is currently missing, mainly for the lack of a general metric to measure their benefits. We compare state-of-the-art local post-hoc explanation mechanisms for models trained over moral value classification tasks based on a measure of correlation. By relying on a novel framework for comparing global impact scores, our experiments show how most local post-hoc explainers are loosely correlated, and highlight huge discrepancies in their results—their “quarrel” about explanations. Finally, we compare the impact scores distribution obtained from each local post-hoc explainer with human-made dictionaries, and point out that there is no correlation between explanation outputs and the concepts humans consider as salient.","eXplainable Artificial Intelligence; Local Post-hoc Explanations; Moral Values Classification; Natural Language Processing","en","conference paper","Springer","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-04-01","","","Interactive Intelligence","","",""
"uuid:1ca8f755-74ee-440a-8a78-550268c9ef54","http://resolver.tudelft.nl/uuid:1ca8f755-74ee-440a-8a78-550268c9ef54","Identifying moral antecedents of decision-making in discrete choice models","Szép, T. (TU Delft Transport and Logistics)","Chorus, C.G. (promotor); van Cranenburgh, S. (copromotor); Delft University of Technology (degree granting institution)","2022","Discrete Choice Models are valuable tools for quantitative decision-making analysis: they allow analysts to draw behavioural conclusions from data, better understand and predict choices, and evaluate policies. However, up until recently, they had a blind spot for morality. Moral values often play an essential role in decision-making; fairness or loyalty can deter people from following self-interest. Moral motivations can also prompt decision-makers to change their minds when contemplating a dilemma or hide their preferences when they want to avoid judgement. These notions are not aligned with crucial behavioural assumptions traditional Discrete Choice Models are based on, such as stable preferences echoing through choices or decision-makers maximizing their utility. This thesis aims to develop and test new Discrete Choice Models that help identify morality in a mathematically rigorous framework, thus increasing the behavioural realism of Discrete Choice Models in moral decision-making. To do this, it uses two approaches.","Discrete Choice Modelling; Moral Decision Making; Identifiability; Methodological and Empirical Research","en","doctoral thesis","","978-94-6384-375-1","","","","","","","","","Transport and Logistics","","",""
"uuid:f686a46e-d470-4fb5-9a6d-997501188dfa","http://resolver.tudelft.nl/uuid:f686a46e-d470-4fb5-9a6d-997501188dfa","Empirical Essays in Artificial Intelligence Ethics","Martins Martinho Bessa, A.C. (TU Delft Transport and Logistics)","Chorus, C.G. (promotor); Kroesen, M. (copromotor); Delft University of Technology (degree granting institution)","2022","As Artificial Intelligence (AI) becomes increasingly important in modern society, there is a pressing need to address the ethical issues associated with these technologies. AI Ethics is a necessary endeavor to capitalize on the benefits of AI while minimizing its risks. However, it faces important challenges related to normative urgency, multi-purpose nature of AI, and multitude of stakeholders operating in the AI space. This doctoral dissertation builds on the premise that empirical information is valuable for AI Ethics to address these challenges and realize its normativemandate. The main ambition is to make an empirical contribution that facilitates the reflective development of AI, which assists the communities operating in the AI space to engage in a critical reflection on AI.","Artificial Intelligence; Ethics; Morality; Empirical Research","en","doctoral thesis","","978-94-6384-354-6","","","","","","","","","Transport and Logistics","","",""
"uuid:2369b935-bb35-492c-aed3-5e7b30649046","http://resolver.tudelft.nl/uuid:2369b935-bb35-492c-aed3-5e7b30649046","The Good Life and Climate Adaptation","Pesch, U. (TU Delft Ethics & Philosophy of Technology)","","2022","The need to adapt to climate change brings about moral concerns that according to ‘eco-centric’ critiques cannot be resolved by modernist ethics, as this takes humans as the only beings capable of intentionality and rationality. However, if intentionality and rationality are reconsidered as ‘counterfactual hypotheses’ it becomes possible to align modernist ethics with the eco-centric approaches. These counterfactual hypotheses guide the development of institutions, so as to allow the pursuit of a ‘good life’. This mean that society should be organized as if humans are intentional and, following Habermas’s idea of ‘communicative rationality’, as if humans are capable of collective deliberation. Given the ecological challenges, the question becomes how to give ecological concerns a voice in deliberative processes","the good life; climate adaptation; ethics; deliberation; ecological ethics; moral hypotheses; agency; exceptionalism; Agency; Climate adaptation; Ecological ethics; Moral hypotheses; Deliberation; Ethics; Exceptionalism; The good life","en","journal article","","","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:12797e3d-578d-41b0-83e7-0f895527d8c9","http://resolver.tudelft.nl/uuid:12797e3d-578d-41b0-83e7-0f895527d8c9","Why Are General Moral Values Poor Predictors of Concrete Moral Behavior in Everyday Life? A Conceptual Analysis and Empirical Study","van den Berg, T.G.C. (TU Delft Transport and Logistics); Kroesen, M. (TU Delft Transport and Logistics); Chorus, C.G. (TU Delft Transport and Logistics)","","2022","Within moral psychology, theories focusing on the conceptualization and empirical measurement of people’s morality in terms of general moral values –such as Moral Foundation Theory- (implicitly) assume general moral values to be relevant concepts for the explanation and prediction of behavior in everyday life. However, a solid theoretical and empirical foundation for this idea remains work in progress. In this study we explore this relationship between general moral values and daily life behavior through a conceptual analysis and an empirical study. Our conceptual analysis of the moral value-moral behavior relationship suggests that the effect of a generally endorsed moral value on moral behavior is highly context dependent. It requires the manifestation of several phases of moral decision-making, each influenced by many contextual factors. We expect that this renders the empirical relationship between generic moral values and people’s concrete moral behavior indeterminate. Subsequently, we empirically investigate this relationship in three different studies. We relate two different measures of general moral values -the Moral Foundation Questionnaire and the Morality As Cooperation Questionnaire- to a broad set of self-reported morally relevant daily life behaviors (including adherence to COVID-19 measures and participation in voluntary work). Our empirical results are in line with the expectations derived from our conceptual analysis: the considered general moral values are poor predictors of the selected daily life behaviors. Furthermore, moral values that were tailored to the specific context of the behavior showed to be somewhat stronger predictors. Together with the insights derived from our conceptual analysis, this indicates the relevance of the contextual nature of moral decision-making as a possible explanation for the poor predictive value of general moral values. Our findings suggest that the investigation of morality’s influence on behavior by expressing and measuring it in terms of general moral values may need revision.","moral values; moral decision-making; moral behavior; Moral Foundation Theory; compliance with COVID-19 measures; contextual aspects of moral decision-making; theory of Morality as Cooperation","en","journal article","","","","","","","","","","","Transport and Logistics","","",""
"uuid:0bf0252b-f49d-408a-b908-8903be8e86bc","http://resolver.tudelft.nl/uuid:0bf0252b-f49d-408a-b908-8903be8e86bc","Meaningful human control: actionable properties for AI system development","Cavalcante Siebert, L. (TU Delft Interactive Intelligence); Lupetti, M.L. (TU Delft Design Aesthetics); Aizenberg, E. (TU Delft Cyber Security); Beckers, N.W.M. (TU Delft Human-Robot Interaction); Zgonnikov, A. (TU Delft Human-Robot Interaction); Veluwenkamp, H.M. (TU Delft Ethics & Philosophy of Technology); Abbink, D.A. (TU Delft Human-Robot Interaction); Giaccardi, Elisa (TU Delft Human Information Communication Design); Houben, G.J.P.M. (TU Delft Web Information Systems); Jonker, C.M. (TU Delft Interactive Intelligence); van den Hoven, M.J. (TU Delft Ethics & Philosophy of Technology); Forster, D. (TU Delft Human-Robot Interaction); Lagendijk, R.L. (TU Delft Cyber Security)","","2022","How can humans remain in control of artificial intelligence (AI)-based systems designed to perform tasks autonomously? Such systems are increasingly ubiquitous, creating benefits - but also undesirable situations where moral responsibility for their actions cannot be properly attributed to any particular person or group. The concept of meaningful human control has been proposed to address responsibility gaps and mitigate them by establishing conditions that enable a proper attribution of responsibility for humans; however, clear requirements for researchers, designers, and engineers are yet inexistent, making the development of AI-based systems that remain under meaningful human control challenging. In this paper, we address the gap between philosophical theory and engineering practice by identifying, through an iterative process of abductive thinking, four actionable properties for AI-based systems under meaningful human control, which we discuss making use of two applications scenarios: automated vehicles and AI-based hiring. First, a system in which humans and AI algorithms interact should have an explicitly defined domain of morally loaded situations within which the system ought to operate. Second, humans and AI agents within the system should have appropriate and mutually compatible representations. Third, responsibility attributed to a human should be commensurate with that human’s ability and authority to control the system. Fourth, there should be explicit links between the actions of the AI agents and actions of humans who are aware of their moral responsibility. We argue that these four properties will support practically minded professionals to take concrete steps toward designing and engineering for AI systems that facilitate meaningful human control.","Artificial intelligence; AI ethics; Meaningful human control; Moral responsibility; Socio-technical systems","en","journal article","","","","","","","","","","","Interactive Intelligence","","",""
"uuid:54e77120-3ffd-4250-9227-2795d2b94d86","http://resolver.tudelft.nl/uuid:54e77120-3ffd-4250-9227-2795d2b94d86","Technology as Driver for Morally Motivated Conceptual Engineering","Veluwenkamp, H.M. (TU Delft Ethics & Philosophy of Technology); Capasso, M. (Sant'Anna School of Advanced Studies); Maas, J.J.C. (TU Delft Ethics & Philosophy of Technology); Marin, L. (TU Delft Ethics & Philosophy of Technology)","","2022","New technologies are the source of uncertainties about the applicability of moral and morally connotated concepts. These uncertainties sometimes call for conceptual engineering, but it is not often recognized when this is the case. We take this to be a missed opportunity, as a recognition that different researchers are working on the same kind of project can help solve methodological questions that one is likely to encounter. In this paper, we present three case studies where philosophers of technology implicitly engage in conceptual engineering (without naming it as such). We subsequently reflect on the case studies to find out how these illustrate conceptual engineering as an appropriate method to deal with pressing concerns in the philosophy of technology. We have two main goals. We first want to contribute to the literature on conceptual engineering by presenting concrete examples of conceptual engineering in the philosophy of technology. This is especially relevant, because the technologies that are designed based on the conceptual work done by philosophers of technology potentially have crucial moral and social implications. Secondly, we want to make explicit what choices are made when doing this conceptual work. Making explicit that some of the implicit assumptions are, in fact, debated in the literature allows for reflection on these questions. Ultimately, our hope is that conscious reflection leads to an improvement of the conceptual work done.
We use probabilistic topic modelling to explore how the academic literature addresses value conflicts. Identified tactics can be used to specify design requirements and policy guidelines in support of the social acceptance of energy systems. Agent-based modelling is used to identify value conflicts embedded in energy systems that result from the heterogeneous properties of the affected population. Agent-based models provide insights about the type of population affected by value conflicts and hence about the severity of the resulting lack of social acceptance. This thesis contributes to the literature on social acceptance by demonstrating how long-term acceptance can be supported by drawing on insights from ethics of technology. Additionally, we provide a systematic and practical approach to integrate human values in the regulatory and technical design of infrastructures, which is critical for supporting the ongoing energy transition.","value conflicts; value change; moral acceptability; social acceptance; agent-based modelling; exploratory modelling; probabilistic topic models; capability approach","en","doctoral thesis","","","","","","","","","","","Energie and Industrie","","",""
"uuid:82654ced-c709-4ab4-8f68-cf4d74a3c606","http://resolver.tudelft.nl/uuid:82654ced-c709-4ab4-8f68-cf4d74a3c606","A Defence of the Control Principle","Sand, M. (TU Delft Ethics & Philosophy of Technology)","","2020","The nexus of the moral luck debate is the control principle, which says that people are responsible only for things within their control. In this paper, I will first argue that the control principle should be restrained to blameworthiness, because responsibility is too wide a concept to square with control. Many deniers of moral luck appeal to the intuitiveness of the control principle. Defenders of moral luck do not share this intuition and demand a stronger defence of the control principle. I will establish a defence of the control principle based on the value of simplicity for selecting a theory of blameworthiness. A simpler theory of blameworthiness is more likely to be true, and not being falsely judged blameworthy is desirable. I will conclude that simplicity advices the acceptance of the control principle over other theories of blameworthiness that embrace factors beyond control.","Blame; Blameworthiness; Control principle; Moral luck; Simplicity","en","journal article","","","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:2b555e2c-45c0-4582-8bb8-acd35dde1c2e","http://resolver.tudelft.nl/uuid:2b555e2c-45c0-4582-8bb8-acd35dde1c2e","Scientists’ views on (moral) luck","Sand, M. (TU Delft Ethics & Philosophy of Technology); Jongsma, Karin (University Medical Center Utrecht)","","2020","Scientific discoveries are often to some degree influenced by luck. Whether luck’s influence is at odds with common-sense intuitions about responsibility, is the central concern of the philosophical debate about moral luck. Do scientists acknowledge that luck plays a role in their work and–if so–do they consider it morally problematic? The present article discusses the results of four focus groups with scientists, who were asked about their views on luck in their fields and its moral implications. The participants underscored circumstantial luck as a key dimension of luck in science. Nevertheless, most participants insisted that there are ways of executing ‘control’ in science: They believe that virtues and skills can increase one’s chances for success. The cultivation of these skills and virtues was considered a reasonable ground for pride. Prizes and rewards were rarely tied to personal desert, but instead to their societal function.","control; Moral luck; qualitative research; RRI; serendipity","en","journal article","","","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:d7072b47-cb8a-4779-8807-ff500b0d424c","http://resolver.tudelft.nl/uuid:d7072b47-cb8a-4779-8807-ff500b0d424c","Charting moral psychology’s significance for bioethics: Routes to bioethical progress, its limits, and lessons from moral philosophy","Klenk, M.B.O.T. (TU Delft Ethics & Philosophy of Technology)","","2020","Empirical moral psychology is sometimes dismissed as normatively insignificant because it plays no decisive role in settling ethical disputes. But that conclusion, even if it is valid for normative ethics, does not extend to bioethics. First, in contrast to normative ethics, bioethics can legitimately proceed from a presupposed moral framework. Within that framework, moral psychology can be shown to play four significant roles: it can improve bioethicists’ understanding of (1) the decision situation, (2) the origin and legitimacy of their moral concepts, (3) efficient options for implementing (legitimate) decisions, and (4) how to change and improve some parts of their moral framework. Second, metaethical considerations suggest that moral psychology may lead to the radical revision of entire moral frameworks and thus prompt the radical revision of entire moral frameworks in bioethics. However, I show that bioethics must either relinquish these radical implications of moral psychology and accept that there are limits to progress in bioethics based on moral psychology or establish an epistemic framework that guides radical revision.","Activism; Bioethics; Debunking arguments; Interdisciplinarity; Metaethics; Moral psychology","en","review","","","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:b47790f0-b2f7-495a-a35a-80c529c742b0","http://resolver.tudelft.nl/uuid:b47790f0-b2f7-495a-a35a-80c529c742b0","Why metaethics needs empirical moral psychology","Hopster, J. (Karl-Franzens-Universitat Graz); Klenk, M.B.O.T. (TU Delft Ethics & Philosophy of Technology)","","2020","What is the significance of empirical moral psychology for metaethics? In this article we take up Michael Ruse's evolutionary debunking argument against moral realism and reassess it in the context of the empirical state of the art. Ruse's argument depends on the phenomenological presumption that people generally experience morality as objective. We demonstrate how recent experimental findings challenge this widely-shared armchair presumption and conclude that Ruse's argument fails. We situate this finding in the recent debate about Carnapian explication and argue that it illustrates the necessary role that empirical moral psychology plays in explication preparation. Moral psychology sets boundaries for reasonable desiderata in metaethics and, therefore, it is necessary for metaethics.","Conceptual ethics; Evolutionary debunking arguments; Experimental moral psychology; Fruitfulness; Michael Ruse","en","journal article","","","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:5462a45d-2b99-48e8-a720-2052d3113658","http://resolver.tudelft.nl/uuid:5462a45d-2b99-48e8-a720-2052d3113658","How Do Technological Artefacts Embody Moral Values?","Klenk, M.B.O.T. (TU Delft Ethics & Philosophy of Technology)","","2020","According to some philosophers of technology, technology embodies moral values in virtue of its functional properties and the intentions of its designers. But this paper shows that such an account makes the values supposedly embedded in technology epistemically opaque and that it does not allow for values to change. Therefore, to overcome these shortcomings, the paper introduces the novel Affordance Account of Value Embedding as a superior alternative. Accordingly, artefacts bear affordances, that is, artefacts make certain actions likelier given the circumstances. Based on an interdisciplinary perspective that invokes recent moral anthropology, I conceptualize affordances as response-dependent properties. That is, they depend on intrinsic as well as extrinsic properties of the artefact. We have reason to value these properties. Therefore, artefacts embody values and are not value-neutral, which has practical implications for the design of new technologies.","Artefacts; Ethics of technology; Moral value; Response-dependence; Value embedding","en","journal article","","","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:e7b255d9-97a3-445a-93f0-6680ebe8394c","http://resolver.tudelft.nl/uuid:e7b255d9-97a3-445a-93f0-6680ebe8394c","Does morality predict aggressive driving? A conceptual analysis and exploratory empirical investigation","van den Berg, T.G.C. (TU Delft Transport and Logistics); Kroesen, M. (TU Delft Transport and Logistics); Chorus, C.G. (TU Delft Transport and Logistics)","","2020","Risky and aggressive driving is an important cause of traffic casualties and as such a major health and cost problem to society. Given the consequences for others, risky and aggressive driving has a clear moral component. Surprisingly, however, there has been little research on the relation between morality and risky and aggressive driving behavior. In this study we aim at addressing this gap. First, we present a conceptual analysis of the relationship between moral values and aggressive driving behavior. For this purpose, we extend Schwartz's integrated model of ethical decision making and apply it to the context of aggressive driving. This conceptual analysis shows that moral decision-making processes consist of several stages, like moral awareness, moral judgment and moral intent, each of which are influenced by individual and situational factors and all of which need to materialize before someone's generally endorsed moral value affects concrete behavior. This suggests that the moral value-aggressive driving relationship is rather indeterminate. This conceptual picture is confirmed by our empirical investigation, which tests to what extent respondents’ moral values, measured through the Moral Foundation Questionnaire, are predictive of respondents’ aggressive driving behavior, as measured through an aggressive driving behavior scale. Our results show few and rather weak empirical relationships between moral values and committed aggressive driving behaviors, as was expected in light of our conceptual analysis. We derive several policy implications from these results.","Aggressive and risky driving; Aggressive driving behavior scale; Integrated model of ethical decision making; Moral foundations theory; Moral values","en","journal article","","","","","","","","","","","Transport and Logistics","","",""
"uuid:17200412-df59-4104-a8fe-e6bbc9216d11","http://resolver.tudelft.nl/uuid:17200412-df59-4104-a8fe-e6bbc9216d11","Allocation of moral decision-making in human-agent teams: a pattern approach","van der Waa, J.S. (TU Delft Interactive Intelligence; TNO); van Diggelen, Jurriaan (TNO); Cavalcante Siebert, L. (TU Delft Interactive Intelligence); Neerincx, M.A. (TU Delft Interactive Intelligence; TNO); Jonker, C.M. (TU Delft Interactive Intelligence)","Harris, Don (editor); Li, Wen-Chin (editor)","2020","Artificially intelligent agents will deal with more morally sensitive situations as the field of AI progresses. Research efforts are made to regulate, design and build Artificial Moral Agents (AMAs) capable of making moral decisions. This research is highly multidisciplinary with each their own jargon and vision, and so far it is unclear whether a fully autonomous AMA can be achieved. To specify currently available solutions and structure an accessible discussion around them, we propose to apply Team Design Patterns (TDPs). The language of TDPs describe (visually, textually and formally) a dynamic allocation of tasks for moral decision making in a human-agent team context. A task decomposition is proposed on moral decision-making and AMA capabilities to help define such TDPs. Four TDPs are given as examples to illustrate the versatility of the approach. Two problem scenarios (surgical robots and drone surveillance) are used to illustrate these patterns. Finally, we discuss in detail the advantages and disadvantages of a TDP approach to moral decision making.","Dynamic task allocation; Human Factors; Human-Agent Teaming; Machine Ethics; Meaningful human control; Moral decision-making; Team Design Patterns","en","conference paper","SpringerOpen","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2021-01-01","","","Interactive Intelligence","","",""
"uuid:b144b7cc-ab7e-43cc-99da-dc4cbd0a8384","http://resolver.tudelft.nl/uuid:b144b7cc-ab7e-43cc-99da-dc4cbd0a8384","Moral Philosophy and the ‘Ethical Turn’ in Anthropology","Klenk, M.B.O.T. (TU Delft Ethics & Philosophy of Technology)","","2019","Moral philosophy continues to be enriched by an ongoing empirical turn, mainly through contributions from neuroscience, biology, and psychology. Thus far, cultural anthropology has largely been missing. A recent and rapidly growing ‘ethical turn’ within cultural anthropology now explicitly and systematically studies morality. This research report aims to introduce to an audience in moral philosophy several notable works within the ethical turn. It does so by critically discussing the ethical turn’s contributions to four topics: the definition of morality, the nature of moral change and progress, the truth of moral relativism, and attempts to debunk morality. The ethical turn uncovers a richer picture of moral phenomena on the intersubjective level, one akin to a virtue theoretic focus on moral character, with striking similarities of moral phenomena across cultures. Perennial debates are not settled but the ethical turn strengthens moral philosophy’s empirical turn and it rewards serious attention from philosophers.","Metaethics; Moral anthropology; Ethical turn; Moral progress; Moral disagreement; Cultural anthropology","en","journal article","","","","","","Michael Klenk's work on this publication was part of the project ValueChange that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 788321.","","","","","Ethics & Philosophy of Technology","","",""
"uuid:cf873621-2ce0-4850-ac9d-9196a41f6c94","http://resolver.tudelft.nl/uuid:cf873621-2ce0-4850-ac9d-9196a41f6c94","Conflicting values in the smart electricity grid a comprehensive overview","de Wildt, T.E. (TU Delft Energie and Industrie); Chappin, E.J.L. (TU Delft Energie and Industrie); van de Kaa, G. (TU Delft Economics of Technology and Innovation); Herder, P.M. (TU Delft Energie and Industrie); van de Poel, I.R. (TU Delft Values Technology and Innovation)","","2019","This paper aims to anticipate social acceptance issues related to the deployment of the smart electricity grid by identifying underlying value conflicts. The smart electricity grid is a key enabler of the energy transition. Its successful deployment is however jeopardized by social acceptance issues, such as concerns related to privacy and fairness. Social acceptance issues may be explained by value conflicts, i.e. the impossibility for a technological or regulatory design to simultaneously satisfy multiple societal expectations. Due to unsatisfied expectations concerning values, social discontent may arise. This paper identifies five groups of value conflicts in the smart electricity grid: consumer values versus competitiveness, IT enabled systems versus data protection, fair spatial distributions of energy systems versus system performance, market performance versus local trading, and individual access versus economies of scale. This is important for policy-makers and industry to increase the chances that the technology gains acceptance. As resolving value conflicts requires resources, this paper suggests three factors to prioritize their resolution: severity of resulting acceptance issues, resolvability of conflicts, and the level of resources required. The analysis shows that particularly the socio-economic disparities caused by the deployment of the smart electricity grid are alarming. Affordable policies are currently limited, but the impact in terms of social acceptance may be large.","Moral acceptability; Probabilistic topic models; Semantic fields; Smart electricity grid; Technology acceptance; Value conflicts","en","journal article","","","","","","","","","","Values Technology and Innovation","Energie and Industrie","","",""
"uuid:b3500ca8-26f5-404c-b072-2a85f34f4944","http://resolver.tudelft.nl/uuid:b3500ca8-26f5-404c-b072-2a85f34f4944","Did Alexander Fleming deserve the Nobel Prize?","Sand, M. (TU Delft Ethics & Philosophy of Technology)","","2019","Penicillin is a serendipitous discovery par excellence. But, what does this say about Alexander Fleming’s praiseworthiness? Clearly, Fleming would not have received the Nobel Prize, had not a mould accidently entered his laboratory. This seems paradoxical, since it was beyond his control. The present article will first discuss Fleming’s discovery of Penicillin as an example of moral luck in science and technology and critically assess some common responses to this problem. Second, the Control Principle that says that people are not responsible for things beyond their control will be defended. An implication of this principle is that Alexander Fleming’s desert, which is based on his epistemic skills, remains untouched by luck. Third, by distinguishing different notions of praiseworthiness, a way to resolve the paradox of moral luck will be elaborated. Desert provides only a pro tanto reason to determine whether someone is an appropriate addressee of reward. Here, luck can make a difference. Forth, it will be argued that stimulating the quest for socially beneficial science provides a compelling reason to treat scientists with equal desert differently. Penicillin provides striking evidence for the importance of this quest and showcasing it incentivizes the making of socially beneficial science. Ultimately, it will be justified why Fleming deserved the Nobel Prize in at least one sense of the concept.","Control; Desert; Moral luck; Penicillin; Praiseworthiness; Reward; Serendipity","en","journal article","","","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:83cfb204-bdf5-42ad-b912-cfcc3d4d3d90","http://resolver.tudelft.nl/uuid:83cfb204-bdf5-42ad-b912-cfcc3d4d3d90","Making sense of the self: an integrative framework for moral agency","Pesch, U. (TU Delft Ethics & Philosophy of Technology)","","2019","The self is conceptualized in a multitude of ways in different scholarly fields; at the same time moral agency appears to presuppose a unitary conception of the self. This paper explores this tension by introducing ‘moral senses’ which inform the normative evaluations of a person. The moral senses are featured as innate dispositions, but they inevitably recruit discursive categorizations in order to function. These senses forward both an ‘individual self’, by experiencing a unitary body, mind and character, and a ‘social self’, that is similarly experienced as a body, a mind, and a character. This social self is enabled by the capacity to internalize other people's feelings and intentions and the need to have otherworldly explanations for observable reality. This integrative framework of moral senses provides an understanding that helps to address the challenge of moral heterogeneity and plurality.","boundary work; individual self; moral agency; moral intuitions; social self","en","journal article","","","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:44a2e612-cda9-4dfc-a4d5-f5680368a302","http://resolver.tudelft.nl/uuid:44a2e612-cda9-4dfc-a4d5-f5680368a302","Ethics, morality, and game theory","Alfano, M.R. (TU Delft Ethics & Philosophy of Technology; Australian Catholic University); Rusch, Hannes (Philipps-University Marburg; Technische Universität München); Uhl, Matthias (Technische Universität München)","","2018","Ethics is a field in which the gap between words and actions looms large. Game theory and the empirical methods it inspires look at behavior instead of the lip service people sometimes pay to norms. We believe that this special issue comprises several illustrations of the fruitful application of this approach to ethics.","Behavior; Economics; Ethics; Game theory; Morals; Philosophy; Strategic interaction","en","contribution to periodical","","","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:9f94cc44-6f7d-48fe-9757-7d602f09ec78","http://resolver.tudelft.nl/uuid:9f94cc44-6f7d-48fe-9757-7d602f09ec78","Critiquing the Reasons for Making Artificial Moral Agents","Robbins-van Wynsberghe, A.L. (TU Delft Ethics & Philosophy of Technology); Robbins, S.A. (TU Delft Ethics & Philosophy of Technology)","","2018","","Aritifical Moral Agents; Machine Ethics; Robot Ethics","en","journal article","","","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:d3e0c917-2038-4a46-8968-7522e41736af","http://resolver.tudelft.nl/uuid:d3e0c917-2038-4a46-8968-7522e41736af","Five things you should know about cost overrun","Flyvbjerg, Bent (University of Oxford); Ansar, Atif (University of Oxford); Budzier, Alexander (University of Oxford); Buhl, Søren (Aalborg University); Cantarelli, Chantal (University of Sheffield); Garbuio, Massimo (University of Sydney); Glenting, Carsten (Viegand Maagøe A/S); Holm, Mette Skamris (Aalborg Municipality); Lovallo, Dan (University of Sydney); Lunn, Daniel (University of Oxford); Molin, E.J.E. (TU Delft Transport and Logistics); Rønnest, Arne (Esrum Kloster and Møllegård); Stewart, Allison (Infrastructure Victoria); van Wee, G.P. (TU Delft Transport and Logistics)","","2018","This paper gives an overview of good and bad practice for understanding and curbing cost overrun in large capital investment projects, with a critique of Love and Ahiaga-Dagbui (2018) as point of departure. Good practice entails: (a) Consistent definition and measurement of overrun; in contrast to mixing inconsistent baselines, price levels, etc. (b) Data collection that includes all valid and reliable data; as opposed to including idiosyncratically sampled data, data with removed outliers, non-valid data from consultancies, etc. (c) Recognition that cost overrun is systemically fat-tailed; in contrast to understanding overrun in terms of error and randomness. (d) Acknowledgment that the root cause of cost overrun is behavioral bias; in contrast to explanations in terms of scope changes, complexity, etc. (e) De-biasing cost estimates with reference class forecasting or similar methods based in behavioral science; as opposed to conventional methods of estimation, with their century-long track record of inaccuracy and systemic bias. Bad practice is characterized by violating at least one of these five points. Love and Ahiaga-Dagbui violate all five. In so doing, they produce an exceptionally useful and comprehensive catalog of the many pitfalls that exist, and must be avoided, for properly understanding and curbing cost overrun.","Agency; Behavioral science; Cost forecasting; Cost overrun; Cost underestimation; De-biasing; Deception; Delusion; Moral hazard; Optimism bias; Reference class forecasting; Root causes of cost overrun; Strategic misrepresentation","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2019-02-12","","","Transport and Logistics","","",""
"uuid:627ab830-75e4-425f-bd24-b46daf633434","http://resolver.tudelft.nl/uuid:627ab830-75e4-425f-bd24-b46daf633434","The Design of Human Oversight in Autonomous Weapon Systems","Verdiesen, E.P. (TU Delft Information and Communication Technology)","Conitzer, V. (editor); Kambhampati, S. (editor); Koenig, S. (editor); Rossi, F. (editor); Schnabel, B. (editor)","2018","As the reach and capabilities of Artificial Intelligence (AI) systems increases, there is also a growing awareness of the ethical, legal and societal impact of the potential actions and decisions of these systems. Many are calling for guidelines and regulations that can ensure the responsible design, development, implementation, and policy of AI. In scientific literature, AI is characterized by the concepts of Adaptability, Interactivity and Autonomy (Floridi & Sanders, 2004). According to Floridi and Sanders (2004), Adaptability means that the system can change based on its interaction and can learn from its experience. Machine learning techniques are an example of this. Interactivity occurs when the system and its environment act upon each other and Autonomy implies that the system itself can change its state.","autonomous weapons systems; ethical decision-making; human oversight; moral judgement","en","conference paper","Association for Computing Machinery (ACM)","","","","","","","","","","Information and Communication Technology","","",""
"uuid:571180d5-82c0-4214-a9e1-9c475853d8a6","http://resolver.tudelft.nl/uuid:571180d5-82c0-4214-a9e1-9c475853d8a6","A comprehensive approach to reviewing latent topics addressed by literature across multiple disciplines","de Wildt, T.E. (TU Delft Energie and Industrie); Chappin, E.J.L. (TU Delft Energie and Industrie); van de Kaa, G. (TU Delft Economics of Technology and Innovation); Herder, P.M. (TU Delft Energie and Industrie)","","2018","This paper proposes an approach to capturing and reviewing scientific literature addressing latent topics across multiple scientific fields. As latent topics like moral values are affected by word polysemy and synonymy, a traditional keyword-based approach is often ineffective and therefore inappropriate. As a result, scientific literature addressing latent topics tends to be fragmented thereby constraining efforts to address similar and complementary research challenges. A novel approach to reviewing the literature by utilizing both semantic fields and probabilistic topic models has therefore been developed. We illustrate this approach by reviewing the literature addressing the value justice in the energy sector and compare this with a regular keyword-based approach. The new approach results in a more complete overview of the relevance of energy justice as compared to the traditional keyword-based approach. This novel approach can be applied to other latent topics including other values or phenomena such as societal resistance to technologies, thereby leading to an increased understanding of existing relevant literature and the identification of new areas of research.","Energy sector; Justice; Latent topics; Moral values; Probabilistic topic models; Semantic fields","en","journal article","","","","","","","","","","","Energie and Industrie","","",""
"uuid:b5df09c6-7224-4221-bfa5-1d0048b98e76","http://resolver.tudelft.nl/uuid:b5df09c6-7224-4221-bfa5-1d0048b98e76","How to redesign a rent rebate system?: Experience in the Netherlands","Priemus, H. (TU Delft OLD Support RES; TU Delft OLD OTB – Research for the Built Environment); Haffner, M.E.A. (TU Delft OLD Housing Systems; TU Delft OLD OTB – Research for the Built Environment)","","2017","In 2006, responsibility for implementing the Dutch housing allowance system was transferred from the Ministry of Housing to the Tax Authority. It has since been renamed, and is now known as the ‘rent rebate system’. A number of dilemmas have become evident since the 2006 changes. Attention has shifted to how to implement the system effectively: how to limit the overconsumption of housing services, how to avoid moral hazard, how to reduce outright fraud, how to reduce the poverty trap, and how to prevent the escalation of public spending. These new dilemmas have led to the central research question in this article: how to redesign a system of rent rebates? The discussion of these dilemmas points to further changes. Proposals for a redesign of the rent rebate system in the Netherlands are presented. These proposals could also be relevant for other countries.","fraud; housing allowances; moral hazard; overconsumption; poverty trap; the Netherlands","en","journal article","","","","","","","","","","OLD OTB – Research for the Built Environment","OLD Support RES","","",""
"uuid:50e17171-5d6e-4769-95ae-23809855fb6b","http://resolver.tudelft.nl/uuid:50e17171-5d6e-4769-95ae-23809855fb6b","A Review of Value-Conflicts in Cybersecurity: An assessment based on quantitative and qualitative literature analysis","Christen, Markus (University of Zürich); Gordijn, Bert (Dublin City University); Weber, Karsten (Brandenburg University of Technology Cottbus); van de Poel, I.R. (TU Delft Values Technology and Innovation); Yaghmaei, E. (TU Delft Ethics & Philosophy of Technology)","","2017","Cybersecurity is of capital importance in a world where economic and social processes increasingly rely on digital technology. Although the primary ethical motivation of cybersecurity is prevention of informational or physical harm, its enforcement can also entail conflicts with other moral values. This contribution provides an outline of value conflicts in cybersecurity based on a quantitative literature analysis and qualitative case studies. The aim is to demonstrate that the security-privacydichotomy—that still seems to dominate the ethics discourse based on our bibliometric analysis—is insufficient when discussing the ethical challenges of cybersecurity. Furthermore, we want to sketch how the notion of contextual integrity could help to better understand and mitigate such value conflicts.","Cybersecurity; Moral Values; Value Conflicts; Privacy; Contextual Integrity","en","journal article","","","","","","","","","","Values Technology and Innovation","Ethics & Philosophy of Technology","","",""
"uuid:5fdd22dc-7456-4ccf-8c7c-c71e34a380c8","http://resolver.tudelft.nl/uuid:5fdd22dc-7456-4ccf-8c7c-c71e34a380c8","Pushing the Margins of Responsibility: Lessons from Parks’ Somnambulistic Killing","Santoni De Sio, F. (TU Delft Ethics & Philosophy of Technology); Di Nucci, Ezio (University of Copenhagen)","","2017","David Shoemaker has claimed that a binary approach to moral responsibility leaves out something important, namely instances of marginal agency, cases where agents seem to be eligible for some responsibility responses but not others. In this paper we endorse and extend Shoemaker’s approach by presenting and discussing one more case of marginal agency not yet covered by Shoemaker or in the other literature on moral responsibility. Our case is that of Kenneth Parks, a Canadian man who drove a long way to his mother-in-law’s and killed her in a state of somnambulism. We support our claim about Parks’ marginal responsibility in three steps: we first deny that Parks acts involuntarily as traditionally claimed in the legal literature; we then propose to extend Shoemaker’s analysis of marginal responsibility based on quality of will so as to include two other dimensions: the moral status of the agent and the actual causal effects of their actions; finally, we distinguish Parks’ marginal responsibility from four other existing concepts: “tracing” (drunken cases), diminished responsibility (minor mental disorders), causal responsibility (Williams’ unlucky lorry driver), and moral disapproval without responsibility (bad actions by small children, animals, or machines).","Consciousness and moral responsibility; David Shoemaker; Marginal agency; Marginal responsibility; Reactive attitudes; Strawsonian theory of responsibility","en","journal article","","","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:a49ffe9d-9778-4bf8-8e90-dd018d0b6e70","http://resolver.tudelft.nl/uuid:a49ffe9d-9778-4bf8-8e90-dd018d0b6e70","The Food Warden: An Exploration of Issues in Distributing Responsibilities for Safe-by-Design Synthetic Biology Applications","Robaey, Z.H. (TU Delft BT/Biotechnology and Society); Spruit, S. (TU Delft Organisation & Governance); van de Poel, I.R. (TU Delft Values Technology and Innovation)","","2017","The Safe-by-Design approach in synthetic biology holds the promise of designing the building blocks of life in an organism guided by the value of safety. This paves a new way for using biotechnologies safely. However, the Safe-by-Design approach moves the bulk of the responsibility for safety to the actors in the research and development phase. Also, it assumes that safety can be defined and understood by all stakeholders in the same way. These assumptions are problematic and might actually undermine safety. This research explores these assumptions through the use of a Group Decision Room. In this set up, anonymous and non-anonymous deliberation methods are used for different stakeholders to exchange views. During the session, a potential synthetic biology application is used as a case for investigation: the Food Warden, a biosensor contained in meat packaging for indicating the freshness of meat. Participants discuss what potential issues might arise, how responsibilities should be distributed in a forward-looking way, who is to blame if something would go wrong. They are also asked what safety and responsibility mean at different phases, and for different stakeholders. The results of the session are not generalizable, but provide valuable insights. Issues of safety cannot all be taken care of in the R&D phase. Also, when things go wrong, there are proximal and distal causes to consider. In addition, capacities of actors play an important role in defining their responsibilities. Last but not least, this research provides a new perspective on the role of instruction manuals in achieving safety.","Group decision room; Moral responsibility; Safe-by-Design; Synthetic biology; Uncertainty","en","journal article","","","","","","","","","","Values Technology and Innovation","BT/Biotechnology and Society","","",""
"uuid:81752701-e837-4e91-810d-0fb1abc51027","http://resolver.tudelft.nl/uuid:81752701-e837-4e91-810d-0fb1abc51027","Gone with the Wind: Conceiving of Moral Responsibility in the Case of GMO Contamination","Robaey, Z.H.","","2016","Genetically modified organisms are a technology now used with increasing frequency in agriculture. Genetically modified seeds have the special characteristic of being living artefacts that can reproduce and spread; thus it is difficult to control where they end up. In addition, genetically modified seeds may also bring about uncertainties for environmental and human health. Where they will go and what effect they will have is therefore very hard to predict: this creates a puzzle for regulators. In this paper, I use the problem of contamination to complicate my ascription of forward-looking moral responsibility to owners of genetically modified organisms. Indeed, how can owners act responsibly if they cannot know that contamination has occurred? Also, because contamination creates new and unintended ownership, it challenges the ascription of forward-looking moral responsibility based on ownership. From a broader perspective, the question this paper aims to answer is as follows: how can we ascribe forward-looking moral responsibility when the effects of the technologies in question are difficult to know or unknown? To solve this problem, I look at the epistemic conditions for moral responsibility and connect them to the normative notion of the social experiment. Indeed, examining conditions for morally responsible experimentation helps to define a range of actions and to establish the related epistemic virtues that owners should develop in order to act responsibly where genetically modified organisms are concerned.","genetically modified organisms; contamination; moral responsibility; social experimentation; epistemic virtues","en","journal article","Springer","","","","","","","","Technology, Policy and Management","Values Technology and Innovation","","","",""
"uuid:bc92b3f5-6058-4f78-b893-afd4aa1b8e4b","http://resolver.tudelft.nl/uuid:bc92b3f5-6058-4f78-b893-afd4aa1b8e4b","‘Friendship is a slow ripening fruit’: an agency perspective on water, values and infrastructure","Ertsen, M.W. (TU Delft Water Resources)","","2016","This paper argues that human and material agents co-shape ‘morality’. Water systems will be discussed in more detail. Artefacts (technologies) relate humans and their worlds, but the specifics of this relationship become meaningful only within specific actor-networks. As such, the material influences the moral decisions of humans. Examples from the larger Mesopotamian area, on both state-led and community-managed water systems, are discussed to show that these result from activities of individuals, households and groups manipulating water fluxes in short time periods of hours and days. Analysis of these daily activities, and especially of how the material acts, offers options for archaeologists to trace morality in action.","artefacts; Irrigation; Mesopotamia; modelling; morality","en","journal article","","","","","","","","","","","Water Resources","","",""
"uuid:964cee4c-1b30-40e5-b333-d375f343f1c1","http://resolver.tudelft.nl/uuid:964cee4c-1b30-40e5-b333-d375f343f1c1","Transferring Moral Responsibility for Technological Hazards: The Case of GMOs in Agriculture","Robaey, Z.H. (TU Delft Ethics & Philosophy of Technology)","","2016","The use of genetically modified organisms in agriculture makes great promises of better seeds, but also raises many controversies about ownership of seeds and about potential hazards. I suggest that owners of these seeds bear the responsibility to do no harm in using these seeds. After defining the nature of this responsibility, this paper asks, if ownership entails moral responsibility, and ownership can be transferred, then how is moral responsibility transferred? Building on the literature on use plans, I suggest five conditions for a good transfer of moral responsibility for genetically modified seeds. I also look at the Monsanto Technology Use Guide and Technology/Stewardship Agreement, as an examplar of a use plan, to explore the extent to which these conditions are present. I conclude that use plans can play a role in the distribution and transfer of moral responsibility for technologies with high benefits and potential harmful uncertainties.","GMOs; Moral responsibility; Ownership; Technology use guide; Uncertainties; Use plans","en","journal article","","","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:7eed1879-a536-4685-a01a-cd62e3ac6f47","http://resolver.tudelft.nl/uuid:7eed1879-a536-4685-a01a-cd62e3ac6f47","Design for responsibility: Safeguarding moral perception via a partnership architecture","De Greef, T.E.; Leveringhaus, A.","","2015","Advanced warfare technologies (AWT) create unprecedented capabilities to control the delivery of military force up to the point, some argue, that we are loosing humanity. But dependence on them generates difficult moral challenges impacting the decision-making process, which are only beginning to be addressed. In order to arrive at an informed opinion about the impact of AWT on decision-making, we need to know more about what AWTs are and how they operate. We provide a short overview of the different types of AWTs and discuss the key principles that underlie Humanitarian Law. We also discuss the impact of physical distance and increased levels of autonomy on AWT and discuss the challenges posed to moral perception. Before such systems can be deployed, we need to rest assured that their usage enhances, rather than undermines, human decision-making capacities. There are important choices to be made, and sound design is ‘design for responsibility’. As a solution, we therefore propose the partnership architecture that embeds concurrent views of the world and working agreements, ensuring that operators use appropriate information in the decision-making process.","sensemaking; drones; unmanned systems; moral perception; responsibility; partnerships; working agreements; humanmachine teamwork","en","journal article","Springer","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Intelligent Systems","","","",""
"uuid:1eb5ff7d-4d44-4348-b1da-b1320e8a64ca","http://resolver.tudelft.nl/uuid:1eb5ff7d-4d44-4348-b1da-b1320e8a64ca","Refining the ethics of computer-made decisions: A classification of moral mediation by ubiquitous machines","Van der Voort, M.; Pieters, W.; Consoli, L.","","2015","In the past decades, computers have become more and more involved in society by the rise of ubiquitous systems, increasing the number of interactions between humans and IT systems. At the same time, the technology itself is getting more complex, enabling devices to act in a way that previously only humans could, based on developments in the fields of both robotics and artificial intelligence. This results in a situation in which many autonomous, intelligent and context-aware systems are involved in decisions that affect their environment. These relations between people, machines, and decisions can take many different forms, but thus far, a systematic account of machine-assisted moral decisions is lacking. This paper investigates the concept of machine-assisted moral decisions from the perspective of technological mediation. It is argued that modern machines do not only have morality in the sense of mediating the actions of humans, but that, by making their own decisions within their relations with humans, mediate morality itself. A classification is proposed to differentiate between four different types of moral relations. The moral aspects within the decisions these systems make are combined into three dimensions that describe the distinct characteristics of different types of moral mediation by machines. Based on this classification, specific guidelines for moral behavior can be provided for these systems.","ubiquitous computing; moral reasoning; technological mediation; moral decisions; human computer relations","en","journal article","Springer","","","","","","","","Technology, Policy and Management","Engineering, Systems and Services","","","",""
"uuid:eaa918ee-acd8-4438-8a3e-7b5e6e5d24e2","http://resolver.tudelft.nl/uuid:eaa918ee-acd8-4438-8a3e-7b5e6e5d24e2","Cognitive biases can affect moral intuitions about cognitive enhancement","Caviola, L.; Mannino, A.; Savulescu, J.; Faulmüller, N.","","2014","Research into cognitive biases that impair human judgment has mostly been applied to the area of economic decision-making. Ethical decision-making has been comparatively neglected. Since ethical decisions often involve very high individual as well as collective stakes, analyzing how cognitive biases affect them can be expected to yield important results. In this theoretical article, we consider the ethical debate about cognitive enhancement (CE) and suggest a number of cognitive biases that are likely to affect moral intuitions and judgments about CE: status quo bias, loss aversion, risk aversion, omission bias, scope insensitivity, nature bias, and optimistic bias. We find that there are more well-documented biases that are likely to cause irrational aversion to CE than biases in the opposite direction. This suggests that common attitudes about CE are predominantly negatively biased. Within this new perspective, we hope that subsequent research will be able to elaborate this hypothesis and develop effective de-biasing techniques that can help increase the rationality of the public CE debate and thus improve our ethical decision-making.","cognitive enhancement; rationality; cognitive bias; attitudes; de-biasing; moral intuitions; brain function augmentation","en","journal article","Frontiers","","","","","","","","Technology, Policy and Management","Values Technology and Innovation","","","",""
"uuid:c17f161c-01b2-4026-9505-40fb2881f620","http://resolver.tudelft.nl/uuid:c17f161c-01b2-4026-9505-40fb2881f620","Moral ape philosophy","De Boer, J.","","2011","Our closest relative the chimpanzee seems to display proto-moral behavior. Some scholars emphasize the similarities between humans and chimpanzees, others some key differences. This paper aims is to formulate a set of intermediate conditions between a sometimes helpful chimpanzee and moral man. I specify these intermediate conditions as requirements for the chimpanzees, and for each requirement I take on a verificationist stance and ask what the empirical conditions that satisfy it would be. I ask what would plausibly count as the behavioral correlate of each requirement, when implemented. I take a philosophical look at morality using the chimpanzees as a prism. We will talk of propositional attitudes, rationality and reason in relation to the chimps. By means of the chimps I intend to arrive at a notion of objective morality as conceived from a first person point of view in terms of propositional attitudes and reasons.","Chimpanzees; Morality; Frans de Waal; Creature construction","en","journal article","Springer Verlag","","","","","","","","Technology, Policy and Management","Values and Technology","","","",""
"uuid:6d7338a1-87ef-4cd1-94b3-47dcdd47ec16","http://resolver.tudelft.nl/uuid:6d7338a1-87ef-4cd1-94b3-47dcdd47ec16","Moral responsibility in R&D networks: A procedural approach to distributing responsibilities","Doorn, N.","Van den Hoven, M.J. (promotor); Van der Poel, I.R. (promotor)","2011","The introduction of new technologies can be accompanied by risks and unforeseen side-effects, often with high impact. If no-one is responsible for addressing these risks and side-effects, the implementation of technologies might result in harmful consequences for society. It is therefore desirable that the prevention of these negative aspects of technology is already taken into account explicitly in Research and Development (R&D). However, even if most people would agree that the people working in R&D have a professional responsibility to address these issues, it is not clear who exactly should address it and how. Is it the the responsibility of the fundamental or applied researchers working in the laboratory or should it be delegated to the technology producers at the end of the chain? One of the problems with professional responsibility is that people have different views on responsibility and the question under what conditions one is responsible. This may ultimately lead to gaps in the distribution of responsibilities because people may expect someone else to assume the remaining responsibilities. This thesis discusses an alternative approach to distributing responsibilities. Rather than developing one substantive conception of the responsibility of professionals, a procedural approach for distributing responsibilities is developed. The idea behind this procedural approach is that people may agree on the procedure for distributing the responsibilities, even if they do not have the same substantive view on responsibility. The model is illustrated with a case study on a technological project concerning the development of an in-house monitoring system based on ambient technology.","research & development (R&D); moral responsibility; engineering ethics; Professional ethics; wide reflective equilibrium; ambient intelligence technology; procedural fairness; John Rawls","en","doctoral thesis","3TU.Centre for Ethics and Technology","","","","","","","2011-05-11","Technology, Policy and Management","Values and Technology - Philosophy","","","",""
"uuid:c10c2b98-c218-429a-b48b-7904650df041","http://resolver.tudelft.nl/uuid:c10c2b98-c218-429a-b48b-7904650df041","Engineering and the Problem of Moral Overload","Van den Hoven, J.; Lokhorst, G.J.; Van de Poel, I.","","2011","When thinking about ethics, technology is often only mentioned as the source of our problems, not as a potential solution to our moral dilemmas. When thinking about technology, ethics is often only mentioned as a constraint on developments, not as a source and spring of innovation. In this paper, we argue that ethics can be the source of technological development rather than just a constraint and technological progress can create moral progress rather than just moral problems. We show this by an analysis of how technology can contribute to the solution of so-called moral overload or moral dilemmas. Such dilemmas typically create a moral residue that is the basis of a second-order principle that tells us to reshape the world so that we can meet all our moral obligations. We can do so, among other things, through guided technological innovation.","moral overload; engineering; technological progress","en","journal article","Springer","","","","","","","","Technology, Policy and Management","Values and Technology","","","",""
"uuid:e1caf50e-0473-4501-8597-1047c148fd2d","http://resolver.tudelft.nl/uuid:e1caf50e-0473-4501-8597-1047c148fd2d","Computational meta-ethics towards the meta-ethical robot","Lokhorst, G.J.","","2011","It has been argued that ethically correct robots should be able to reason about right and wrong. In order to do so, they must have a set of do’s and don’ts at their disposal. However, such a list may be inconsistent, incomplete or otherwise unsatisfactory, depending on the reasoning principles that one employs. For this reason, it might be desirable if robots were to some extent able to reason about their own reasoning—in other words, if they had some meta-ethical capacities. In this paper, we sketch how one might go about designing robots that have such capacities. We show that the field of computational meta-ethics can profit from the same tools as have been used in computational metaphysics.","automated moral reasoning; computational meta-ethics","en","journal article","Springer Verlag","","","","","","","","Technology, Policy and Management","Values and Technology","","","",""
"uuid:2bcb3cb7-b516-4478-81ea-86316042de2b","http://resolver.tudelft.nl/uuid:2bcb3cb7-b516-4478-81ea-86316042de2b","Online Responsibility: Bad Samaritanism and the Influence of Internet Mediation","Polder-Verkiel, S.E.","","2010","In 2008 a young man committed suicide while his webcam was running. 1,500 people apparently watched as the young man lay dying: when people finally made an effort to call the police, it was too late. This closely resembles the case of Kitty Genovese in 1964, where 39 neighbours supposedly watched an attacker assault and did not call until it was too late. This paper examines the role of internet mediation in cases where people may or may not have been good Samaritans and what their responsibilities were. The method is an intuitive one: intuitions on the various potentially morally relevant differences when it comes to responsibility between offline and online situations are examined. The number of onlookers, their physical nearness and their anonymity have no moral relevance when it comes to holding them responsible. Their perceived reality of the situation and ability to act do have an effect on whether we can hold people responsible, but this doesn’t seem to be unique to internet mediation. However the way in which those factors are intrinsically connected to internet mediation does seem to have a diminishing effect on responsibility in online situations.","responsibility; internet mediation; bad Samaritanism; Kitty Genovese; Abraham Biggs; Bystander effect; physical distance; anonymity; ability to act; perceived reality; moral philosophy; intuitions","en","journal article","Springer","","","","","","","","Technology, Policy and Management","","","","",""
"uuid:7e4158ab-b99e-4262-8f91-5ccbaff17922","http://resolver.tudelft.nl/uuid:7e4158ab-b99e-4262-8f91-5ccbaff17922","On Genies and Bottles: Scientists’ Moral Responsibility and Dangerous Technology R&D","Koepsell, D.","","2009","The age-old maxim of scientists whose work has resulted in deadly or dangerous technologies is: scientists are not to blame, but rather technologists and politicians must be morally culpable for the uses of science. As new technologies threaten not just populations but species and biospheres, scientists should reassess their moral culpability when researching fields whose impact may be catastrophic. Looking at real-world examples such as smallpox research and the Australian ‘‘mousepox trick’’, and considering fictional or future technologies like Kurt Vonnegut’s ‘‘ice-nine’’ from Cat’s Cradle, and the ‘‘grey goo’’ scenario in nanotechnology, this paper suggests how ethical principles developed in biomedicine can be adjusted for science in general. An ‘‘extended moral horizon’’ may require looking not just to the effects of research on individual human subjects, but also to effects on as a whole. Moreover, a crude utilitarian calculus can help scientists make moral decisions about which technologies to pursue and disseminate when catastrophes may result. Finally, institutions should be devised to teach these moral principles to scientists, and require moral education for future funding.","dangerous technology; moral responsibility; duty of restraint; scientific ethics; research ethics","en","journal article","Springer","","","","","","","","Technology, Policy and Management","Values and Technology","","","",""
"uuid:b3d395a2-2558-49ab-a056-6e7868e5285a","http://resolver.tudelft.nl/uuid:b3d395a2-2558-49ab-a056-6e7868e5285a","Neuroimaging and Responsibility Assessments","Vincent, N.A.","","2009","Could neuroimaging evidence help us to assess the degree of a person’s responsibility for a crime which we know that they committed? This essay defends an affirmative answer to this question. A range of standard objections to this high-tech approach to assessing people’s responsibility is considered and then set aside, but I also bring to light and then reject a novel objection—an objection which is only encountered when functional (rather than structural) neuroimaging is used to assess people’s responsibility.","moral responsibility; legal responsibility; capacity-theoretic conception of responsibility; capacitarian theory of responsibility; mental capacity; capacity responsibility; neuroimaging; fMRI; modal fallacy; automatic functions; theory to the best explanation; Roper v. Simmons [2005]","en","journal article","Springer","","","","","","","","Technology, Policy and Management","Values and Technology","","","",""
"uuid:967e3f1d-7caf-4fa7-8f95-42bf1b9245a8","http://resolver.tudelft.nl/uuid:967e3f1d-7caf-4fa7-8f95-42bf1b9245a8","Implementing the Netherlands Code of Conduct for Scientific Practice: A Case Study","Schuurbiers, D.; Osseweijer, P.; Kinderlerer, J.","","2009","Widespread enthusiasm for establishing scientific codes of conduct notwithstanding, the utility of such codes in influencing scientific practice is not self-evident. It largely depends on the implementation phase following their establishment—a phase which often receives little attention. The aim of this paper is to provide recommendations for guiding effective implementation through an assessment of one particular code of conduct in one particular institute. Based on a series of interviews held with researchers at the Department of Biotechnology of Delft University of Technology, this paper evaluates how the Netherlands Code of Conduct for Scientific Practice is received by those it is supposed to govern. While respondents agreed that discussion of the guiding principles of scientific conduct is called for, they did not consider the code as such to be a useful instrument. As a tool for the individual scientific practitioner, the code leaves a number of important questions unanswered in relation to visibility, enforcement, integration with daily practice and the distribution of responsibility. Recommendations are provided on the basis of these questions. There is more at stake than merely holding scientific practitioners to a proper exercise of their duties; implementation of scientific society codes of conduct also concerns the further motives and value commitments that gave rise to their establishment in the first place.","Code of conduct; Science and engineering ethics; Responsible conduct of research; Research integrity; Moral responsibility","en","journal article","Springer","","","","","","","","Applied Sciences","Biotechnology","","","",""
"uuid:9de882ac-97c7-4a46-9491-9730ff874af3","http://resolver.tudelft.nl/uuid:9de882ac-97c7-4a46-9491-9730ff874af3","Sopholab: Experimental computational philosophy","Wiegel, V.","Van den Hoven, M.J. (promotor); Van den Berg, J. (promotor)","2007","In this book, the extend to which we can equip artificial agents with moral reasoning capacity is investigated. Attempting to create artificial agents with moral reasoning capabilities challenges our understanding of morality and moral reasoning to its utmost. It also helps philosophers dealing with the inherent complexity of modern organizations. Modern society with large multi-national organizations and extensive information infrastructures provides a backdrop for moral theories that is hard to encompass through mere theorising. Computerized support for theorising is needed to be able to fully grasp and address the inherent complexity. Using moral reasoning capacity will help us addressing the challenges that technological artefacts pose. They do not only contain information about us, they start to act on our behalves.With the increasing autonomy comes an increased need to ensure that their behaviour is in line with what we expect from them. To investigate and address these issues a laboratoy for philosophy is outlined: SophoLab. It consists of a methodology; a framework of modal logic, DEAL; and multi-agent software systems. SophoLab provides the basis for an experimental, computational philosophy. Its viability and usefulness are demonstrated through several experiments.","moral philosophy; computational philosophy; privacy; software agent; deontic logic; modal logic; mas","en","doctoral thesis","","","","","","","","","Technology, Policy and Management","","","","",""
"uuid:8e1a7202-8eee-4b45-81ac-174b66d0a7c7","http://resolver.tudelft.nl/uuid:8e1a7202-8eee-4b45-81ac-174b66d0a7c7","The ethical cycle","Van De Poel, I.; Royakkers, L.","","2007","","ethics; engineering; moral problems; designing; deliberation","en","journal article","Springer","","","","","","","","Technology, Policy and Management","","","","",""