Searched for: subject:"Human-in-the-loop"
(1 - 20 of 33)
document
Karnani, Simran (author)
In recent years, there has been a growing interest among researchers in the explainability, fairness, and robustness of Computer Vision models. While studies have explored the usability of these models for end users, limited research has delved into the challenges and requirements faced by the researchers investigating these properties. This study...
master thesis 2023
document
Nawar, Ziad Ahmad Saad Soliman (author)
Machine learning (ML) systems for computer vision applications are widely deployed in decision-making contexts, including high-stakes domains such as autonomous driving and medical diagnosis. While largely accelerating the decision-making process, those systems have been found to suffer from a severe issue of reliability, i.e., they can easily...
master thesis 2023
document
Sadowska, Anna D. (author), Maestre, José María (author), Kassking, Ruud (author), van Overloop, P.J.A.T.M. (author), De Schutter, B.H.K. (author)
We propose a model-predictive control (MPC)-based approach to solve a human-in-the-loop control problem for a network system lacking sensors and actuators to allow for a fully automatic operation. The humans in the loop are, therefore, essential; they travel between the network nodes to provide the remote controller with measurements and to...
journal article 2023
document
Verma, H. (author), Mlynar, Jakub (author), Schaer, Roger (author), Reichenbach, Julien (author), Jreige, Mario (author), Prior, John (author), Evéquoz, Florian (author), Depeursinge, Adrien (author)
Significant and rapid advancements in cancer research have been attributed to Artificial Intelligence (AI). However, AI’s role and impact on the clinical side has been limited. This discrepancy manifests due to the overlooked, yet profound, differences in the clinical and research practices in oncology. Our contribution seeks to scrutinize...
conference paper 2023
document
Degachi, C. (author), Al Owayyed, M. (author), Tielman, M.L. (author)
Increased levels of user control in learning systems are commonly cited as good AI development practice. However, the evidence as to the effect of perceived control on trust in these systems is mixed. This study investigated the relationship between different trust dimensions and perceived control in postgraduate student burnout support...
conference paper 2023
document
Sharifi Noorian, S. (author), Qiu, S. (author), Sayin, Burcu (author), Balayn, A.M.A. (author), Gadiraju, Ujwal (author), Yang, J. (author), Bozzon, A. (author)
High-quality data plays a vital role in developing reliable image classification models. Despite that, what makes an image difficult to classify remains an unstudied topic. This paper provides a first-of-its-kind, model-agnostic characterization of image atypicality based on human understanding. We consider the setting of image classification...
conference paper 2023
document
Yang, J. (author), Bozzon, A. (author), Gadiraju, Ujwal (author), Lease, Matthew (author)
contribution to periodical 2023
document
de Rooij, G. (author), van Baelen, D. (author), Borst, C. (author), van Paassen, M.M. (author), Mulder, Max (author)
Haptic cues on the side stick are a promising method to reduce loss-of-control in-flight incidents. They can be intuitively interpreted and provide immediate support, leading to a shared control system. However, haptic interfaces are limited in providing information, and the reason for cues may not always be clear to pilots. This study presents...
journal article 2023
document
Chen, Dina (author)
Recent works explain the DNN models that perform image classification tasks following the "attribution, human-in-the-loop, extraction" workflow. However, little work has looked into such an approach for explaining DNN models for language or multimodal tasks. To address this gap, we propose a framework that explains and assesses the model...
master thesis 2022
document
Biswas, Shreyan (author)
Explaining the behaviour of Artificial Intelligence models has become a necessity. Their opaqueness and fragility are not tolerable, especially in high-stakes domains. Although considerable progress is being made in the field of Explainable Artificial Intelligence, scholars have demonstrated limits and flaws of existing approaches:...
master thesis 2022
document
Zheng, Meng (author)
Machine learning models are so-called "black boxes," which means people cannot easily observe the relationship between the output and the input or explain the reason for such results. In recent years, much work has been done on interpretable machine learning, such as Shapley values, counterfactual explanations, partial dependence plots, or saliency...
master thesis 2022
document
WANG, Siwei (author)
master thesis 2022
document
Ziengs, Bart (author)
Interpretability of ML models, and of image recognition models specifically, is a growing problem. In this thesis, the design and implementation of Brickroutine, a system that uses a trained model, is presented. Using human annotations, semantic interpretations are given to image classification problems. By giving an iterative approach in terms of...
master thesis 2022
document
Sharifi Noorian, S. (author), Qiu, S. (author), Gadiraju, Ujwal (author), Yang, J. (author), Bozzon, A. (author)
Unknown unknowns represent a major challenge in reliable image recognition. Existing methods mainly focus on unknown unknowns identification, leveraging human intelligence to gather images that are potentially difficult for the machine. To drive a deeper understanding of unknown unknowns and more effective identification and treatment, this...
conference paper 2022
document
Tsiakas, K. (author), Murray-Rust, D.S. (author)
In this paper, we discuss the trends and challenges of the integration of Artificial Intelligence (AI) methods in the workplace. An important aspect towards creating positive AI futures in the workplace is the design of fair, reliable and trustworthy AI systems which aim to augment human performance and perception, instead of replacing them...
conference paper 2022
document
Perez Dattari, R.J. (author), Ferreira de Brito, B.F. (author), de Groot, O.M. (author), Kober, J. (author), Alonso Mora, J. (author)
The successful integration of autonomous robots in real-world environments strongly depends on their ability to reason from context and take socially acceptable actions. Current autonomous navigation systems mainly rely on geometric information and hard-coded rules to induce safe and socially compliant behaviors. Yet, in unstructured urban...
journal article 2022
document
Zhang, Zijian (author), Setty, Vinay (author), Anand, A. (author)
We introduce SparCAssist, a general-purpose risk assessment tool for the machine learning models trained for language tasks. It evaluates models' risk by inspecting their behavior on counterfactuals, namely out-of-distribution instances generated based on the given data instance. The counterfactuals are generated by replacing tokens in...
conference paper 2022
document
Bhardwaj, Akansha (author), Yang, J. (author), Cudré-Mauroux, Philippe (author)
Platforms such as Twitter are increasingly being used for real-world event detection. Recent work often leverages event-related keywords for training machine learning based event detection models. These approaches make strong assumptions on the distribution of the relevant microposts containing the keyword – referred to as the expectation – and...
journal article 2022
document
Biswas, S. (author), Corti, L. (author), Buijsman, S.N.R. (author), Yang, J. (author)
Explaining the behaviour of Artificial Intelligence models has become a necessity. Their opaqueness and fragility are not tolerable, especially in high-stakes domains. Although considerable progress is being made in the field of Explainable Artificial Intelligence, scholars have demonstrated limits and flaws of existing approaches: explanations...
conference paper 2022
document
Zeng, Wei (author)
To study how to involve the end-users in the development of machine learning explainability, this project has chosen the context of bird species identification. It intends to develop a platform where the end-users can learn bird knowledge while contributing to building the explainability of machine learning models. Among all the methods that...
master thesis 2021