S.N.R. Buijsman

Artificial Intelligence (AI) in healthcare holds transformative potential but faces critical challenges in ethical accountability and systemic inequities. Biases in AI models, such as lower diagnosis rates for Black women or gender stereotyping in Large Language Models, highlight ...

Autonomy by Design

Preserving Human Autonomy in AI Decision-Support

AI systems increasingly support human decision-making across domains of professional, skill-based, and personal activity. While previous work has examined how AI might affect human autonomy globally, the effects of AI on domain-specific autonomy—the capacity for self-governed act ...
What variables should be used to get explanations (of AI systems) that are easily interpretable? The challenge to find the right degree of abstraction in explanations, also called the ‘variables problem’, has been actively discussed in the philosophy of science. The challenge is ...
Importance: Artificial intelligence (AI) presents transformative opportunities to address the increasing challenges faced by health care systems globally. Particularly in data-rich environments such as intensive care units (ICUs), AI could assist in enhancing clinical decision- ...

Is Meaningful Human Control Over Personalised AI Assistants Possible?

Ethical Design Requirements for The New Generation of Artificially Intelligent Agents

Recently, several large tech companies have pushed the notion of AI assistants into the public debate. These envisioned agents are intended to far outshine current systems, managing our affairs as if they were personal assistants. In turn, this oug ...
The advances in machine learning (ML)-based systems in medicine give rise to pressing epistemological and ethical questions. Clinical decisions are increasingly taken in highly digitised work environments, which we call artificial epistemic niches. By considering the case of ML s ...
Integrating AI systems into workflows risks undermining the competence of the people supported by them, specifically due to a loss of meta-cognitive competence. We discuss a recent suggestion to mitigate this through better uncertainty quantification. While this is certainly a st ...
This chapter explores the principles and frameworks of human-centered artificial intelligence (AI), specifically focusing on user modeling, adaptation, and personalization. It introduces a four-dimensional framework comprising paradigms, actors, values, and levels of realization ...

Opening the Analogical Portal to Explainability

Can Analogies Help Laypeople in AI-assisted Decision Making?

Concepts are an important construct in semantics, based on which humans understand the world with various levels of abstraction. With the recent advances in explainable artificial intelligence (XAI), concept-level explanations are receiving an increasing amount of attention from ...
ChatGPT is a powerful language model from OpenAI that is arguably able to comprehend and generate text. ChatGPT is expected to greatly impact society, research, and education. An essential step to understand ChatGPT’s expected impact is to study its domain-specific answering capa ...
Relevancy is a prevalent term in value alignment. We need either to keep track of the relevant moral reasons, to embed the relevant values, or to learn from the relevant behaviour. What relevancy entails in particular cases, however, is often ill-defined. The reas ...
Machine learning techniques are driving — or soon will be driving — much of scientific research and discovery. Can they function as models similar to more traditional modeling techniques in scientific contexts? Or might they replace models altogether if they deliver sufficient pr ...

Transparency for AI systems

A value-based approach

With the widespread use of artificial intelligence, it becomes crucial to provide information about these systems and how they are used. Governments aim to disclose their use of algorithms to establish legitimacy, and the EU AI Act mandates forms of transparency for all high-risk ...
Process reliabilist accounts claim that a belief is justified when it is the result of a reliable belief-forming process. Yet over what range of possible token processes is this reliability calculated? I argue against the idea that all possible token processes (in the actual worl ...
Why should we explain opaque algorithms? Here four papers are discussed that argue that, in fact, we don’t have to. Explainability, according to them, isn’t needed for trust in algorithms, nor is it needed for other goals we might have. I give a critical overview of these argumen ...
AI systems are increasingly being used to support human decision making. It is important that AI advice is followed appropriately. However, according to existing literature, users typically under-rely or over-rely on AI systems, and this leads to sub-optimal team performance. In ...
Machine learning is used more and more in scientific contexts, from the recent breakthroughs with AlphaFold2 in protein fold prediction to the use of ML in parametrization for large climate/astronomy models. Yet it is unclear whether we can obtain scientific explanations from suc ...
Technologies have all kinds of impacts on the environment, on human behavior, on our society and on what we believe and value. But some technologies are not just impactful, they are also socially disruptive: they challenge existing institutions, social practices, beliefs and conc ...
Explainable artificial intelligence (XAI) aims to help people understand black box algorithms, particularly of their outputs. But what are these explanations and when is one explanation better than another? The manipulationist definition of explanation from the philosophy of scie ...