S.N.R. Buijsman

Authored

13 records found

Transparency for AI systems

A value-based approach

With the widespread use of artificial intelligence, it becomes crucial to provide information about these systems and how they are used. Governments aim to disclose their use of algorithms to establish legitimacy and the EU AI Act mandates forms of transparency for all high-risk ...
Machine learning is used more and more in scientific contexts, from the recent breakthroughs with AlphaFold2 in protein fold prediction to the use of ML in parametrization for large climate/astronomy models. Yet it is unclear whether we can obtain scientific explanations from suc ...
Users of sociotechnical systems often have no way to independently verify whether the system output which they use to make decisions is correct; they are epistemically dependent on the system. We argue that this leads to problems when the system is wrong, namely to bad decisions ...
In recent years philosophers have used results from cognitive science to formulate epistemologies of arithmetic (e.g. Giaquinto in J Philos 98(1):5–18, 2001). Such epistemologies have, however, been criticised, e.g. by Azzouni (Talking about nothing: numbers, hallucinations and f ...
With recent advances in explainable artificial intelligence (XAI), researchers have started to pay attention to concept-level explanations, which explain model predictions with a high level of abstraction. However, such explanations may be difficult to digest for laypeople due to ...
Explaining the behaviour of Artificial Intelligence models has become a necessity. Their opaqueness and fragility are not tolerable in high-stakes domains especially. Although considerable progress is being made in the field of Explainable Artificial Intelligence, scholars have d ...
AI systems are increasingly being used to support human decision making. It is important that AI advice is followed appropriately. However, according to existing literature, users typically under-rely or over-rely on AI systems, and this leads to sub-optimal team performance. In ...
Explainable artificial intelligence (XAI) aims to help people understand black box algorithms, particularly of their outputs. But what are these explanations and when is one explanation better than another? The manipulationist definition of explanation from the philosophy of scie ...
Why should we explain opaque algorithms? Here four papers are discussed that argue that, in fact, we don’t have to. Explainability, according to them, isn’t needed for trust in algorithms, nor is it needed for other goals we might have. I give a critical overview of these argumen ...
Process reliabilist accounts claim that a belief is justified when it is the result of a reliable belief-forming process. Yet over what range of possible token processes is this reliability calculated? I argue against the idea that all possible token processes (in the actual worl ...
ChatGPT is a powerful language model from OpenAI that is arguably able to comprehend and generate text. ChatGPT is expected to greatly impact society, research, and education. An essential step to understand ChatGPT’s expected impact is to study its domain-specific answering capa ...
Clarke and Beck argue that the ANS doesn't represent non-numerical magnitudes because of its second-order character. A sensory integration mechanism can explain this character as well, provided the dumbbell studies involve interference from systems that segment by objects such as ...
Technologies have all kinds of impacts on the environment, on human behavior, on our society and on what we believe and value. But some technologies are not just impactful, they are also socially disruptive: they challenge existing institutions, social practices, beliefs and conc ...

Contributed

6 records found

Algorithmic Fairness: Encouraging Exclusionary Diversity

(instead of Inclusionary Pluriversality)

AI is becoming significantly more impactful in society, especially with regard to decision-making. Algorithmic fairness is the field wherein the fairness of an AI algorithm is defined, subsequently evaluated, and ideally improved. This paper uses a fairness decision tree to crit ...

From Data to Decision

Investigating Bias Amplification in Decision-Making Algorithms

This research investigates how biases in datasets influence the outputs of decision-making algorithms, specifically whether these biases are merely reflected or further amplified by the algorithms. Using the Adult/Census Income dataset from the UCI Machine Learning Repository, th ...

Influence of Data Processing on the Algorithm Fairness vs. Accuracy Trade-off

Building Pareto Fronts for Equitable Algorithmic Decisions

Algorithmic bias due to training from biased data is a widespread issue. Bias mitigation techniques such as fairness-oriented data pre-, in-, and post-processing can help but usually come at the cost of model accuracy. For this contribution, we first conducted a literature review ...

A study on bias against women in recruitment algorithms

Surveying the fairness literature in the search for a solution

Algorithms have a more prominent presence than ever in the domain of recruitment. Many different tasks, ranging from finding candidates to scanning resumes, are increasingly handled by algorithms rather than by humans. Automating these tasks has led to bias being exhibited towards di ...
This research aims to provide insights into the criteria and trade-offs that determine a responsible level of automation when decision-making by civil servants is automated through artificial intelligence within smaller public organisations, such as municipalities, in the ...
Machine Learning (ML) algorithms have the potential to reproduce biases that already exist in society, a fact that leads to scholarly work trying to quantify algorithmic discrimination through fairness metrics. Although there are now a plethora of metrics, some of them are even c ...