S.N.R. Buijsman

27 records found

Inauthentic Value Shifts

More than Manipulation

In a recent commentary, Aboodi (2025) has criticized our (Buijsman et al., 2025) concern with inauthentic value shifts (IVS) that can occur through human-AI interactions. We presented emerging evidence that such interactions can lead to unperceived changes in values, which can le ...

Machine Learning Models as Mathematics

Interpreting Explainable AI in Non-causal Terms

We would like to have a wide range of explanations for the behaviour of machine learning systems. However, how should we understand these explanations? Typically, attempts to clarify what explanations for questions such as ‘why am I getting this output for these inputs?’ have ...
Integrating AI systems into workflows risks undermining the competence of the people supported by them, specifically due to a loss of meta-cognitive competence. We discuss a recent suggestion to mitigate this through better uncertainty quantification. While this is certainly a st ...
The advances in machine learning (ML)-based systems in medicine give rise to pressing epistemological and ethical questions. Clinical decisions are increasingly taken in highly digitised work environments, which we call artificial epistemic niches. By considering the case of ML s ...
Importance: Artificial intelligence (AI) presents transformative opportunities to address the increasing challenges faced by health care systems globally. Particularly, in data-rich environments, such as intensive care units (ICUs), AI could assist in enhancing clinical decision- ...

Autonomy by Design

Preserving Human Autonomy in AI Decision-Support

AI systems increasingly support human decision-making across domains of professional, skill-based, and personal activity. While previous work has examined how AI might affect human autonomy globally, the effects of AI on domain-specific autonomy—the capacity for self-governed act ...
What variables should be used to get explanations (of AI systems) that are easily interpretable? The challenge of finding the right degree of abstraction in explanations, also called the ‘variables problem’, has been actively discussed in the philosophy of science. The challenge is ...
Artificial Intelligence (AI) in healthcare holds transformative potential but faces critical challenges in ethical accountability and systemic inequities. Biases in AI models, such as lower diagnosis rates for Black women or gender stereotyping in Large Language Models, highlight ...

Is Meaningful Human Control Over Personalised AI Assistants Possible?

Ethical Design Requirements for The New Generation of Artificially Intelligent Agents

Recently, several large tech companies have pushed the notion of AI assistants into the public debate. These envisioned agents are intended to far outshine current systems, as they are meant to manage our affairs as if they were personal assistants. In turn, this oug ...
Machine learning techniques are driving — or soon will be driving — much of scientific research and discovery. Can they function as models similar to more traditional modeling techniques in scientific contexts? Or might they replace models altogether if they deliver sufficient pr ...

Transparency for AI systems

A value-based approach

With the widespread use of artificial intelligence, it becomes crucial to provide information about these systems and how they are used. Governments aim to disclose their use of algorithms to establish legitimacy, and the EU AI Act mandates forms of transparency for all high-risk ...

Opening the Analogical Portal to Explainability

Can Analogies Help Laypeople in AI-assisted Decision Making?

Concepts are an important construct in semantics, based on which humans understand the world with various levels of abstraction. With the recent advances in explainable artificial intelligence (XAI), concept-level explanations are receiving an increasing amount of attention from ...
Relevancy is a prevalent term in value alignment. We either need to keep track of the relevant moral reasons, embed the relevant values, or learn from the relevant behaviour. What relevancy entails in particular cases, however, is often ill-defined. The reas ...
This chapter explores the principles and frameworks of human-centered artificial intelligence (AI), specifically focusing on user modeling, adaptation, and personalization. It introduces a four-dimensional framework comprising paradigms, actors, values, and levels of realization ...
ChatGPT is a powerful language model from OpenAI that is arguably able to comprehend and generate text. ChatGPT is expected to greatly impact society, research, and education. An essential step to understand ChatGPT’s expected impact is to study its domain-specific answering capa ...
AI systems are increasingly being used to support human decision making. It is important that AI advice is followed appropriately. However, according to existing literature, users typically under-rely or over-rely on AI systems, and this leads to sub-optimal team performance. In ...
Process reliabilist accounts claim that a belief is justified when it is the result of a reliable belief-forming process. Yet over what range of possible token processes is this reliability calculated? I argue against the idea that all possible token processes (in the actual worl ...
Machine learning is used more and more in scientific contexts, from the recent breakthroughs with AlphaFold2 in protein fold prediction to the use of ML in parametrization for large climate/astronomy models. Yet it is unclear whether we can obtain scientific explanations from suc ...
Technologies have all kinds of impacts on the environment, on human behavior, on our society and on what we believe and value. But some technologies are not just impactful, they are also socially disruptive: they challenge existing institutions, social practices, beliefs and conc ...