
Authored

12 records found

COCTEAU

An Empathy-Based Tool for Decision-Making

Traditional approaches to data-informed policymaking are often tailored to specific contexts and lack the strong citizen involvement and collaboration required to design sustainable policies. We argue for the importance of empathy-based methods in the policymaking domain given ...

“It Is a Moving Process”

Understanding the Evolution of Explainability Needs of Clinicians in Pulmonary Medicine

Clinicians increasingly pay attention to Artificial Intelligence (AI) to improve the quality and timeliness of their services. There are converging opinions on the need for Explainable AI (XAI) in healthcare. However, prior work considers explanations as stationary entities with ...

Explaining the behaviour of Artificial Intelligence models has become a necessity. Their opaqueness and fragility are not tolerable, especially in high-stakes domains. Although considerable progress is being made in the field of Explainable Artificial Intelligence, scholars have d ...
In recent years, new methods to engage citizens in deliberative processes of governments and institutions have been studied. Such methodologies have become a necessity to assure the efficacy and longevity of policies. Several tools and solutions have been proposed while trying to ...
The spread of AI and black-box machine learning models made it necessary to explain their behavior. Consequently, the research field of Explainable AI was born. The main objective of an Explainable AI system is to be understood by a human as the final beneficiary of the model. In ...
In recent years, new methods to engage citizens in deliberative processes of governments and institutions have been studied. Such methodologies have become a necessity to assure the efficacy and sustainability of policies. Several tools and solutions have been proposed while tryi ...
We present VaccinItaly, a project which monitors Italian online conversations around vaccines, on Twitter and Facebook. We describe the ongoing data collection, which follows the SARS-CoV-2 vaccination campaign roll-out in Italy and we provide public access to the data collected. ...
One year after the outbreak of SARS-CoV-2, several vaccines have been successfully developed to prevent its spread, and vaccine roll-out campaigns are taking place worldwide. However, an increasing number of individuals are still hesitant about getting vaccinated, and thi ...

Contributed

8 records found

Finding Shortcuts to a black-box model using Frequent Sequence Mining

Explaining Deep Learning models for Fact-Checking

Deep-learning (DL) models could greatly advance the automation of fact-checking, yet they have not been widely adopted by the public because of their hard-to-explain nature. Although various techniques have been proposed to provide local explanations of the behaviour of DL models, little ...

A Comparison of Instance Attribution Methods

Comparing Instance Attribution Methods to Baseline k-Nearest Neighbors Method

In this research, a comparison between different Instance Attribution (IA) methods and k-Nearest Neighbors (kNN) via cosine similarity is conducted on a Natural Language Processing (NLP) machine learning model. The comparison is carried out by way of a human surve ...
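
To illustrate the kNN baseline mentioned in this record, the sketch below retrieves the training instances whose embeddings are most cosine-similar to a test input, which is the basic form of instance attribution being compared against. The embeddings, sizes, and function name are placeholder assumptions for illustration, not the model or survey setup used in the record.

import numpy as np

# Minimal sketch (placeholder data, not the record's setup): a cosine-similarity
# k-nearest-neighbours baseline for instance attribution. The "explanation" for a
# prediction is the set of training examples whose embeddings are most similar
# to the test example's embedding.
def knn_cosine_attribution(train_emb, test_emb, k=5):
    """Return indices and cosine similarities of the k most similar training rows."""
    train_norm = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    test_norm = test_emb / np.linalg.norm(test_emb)
    scores = train_norm @ test_norm              # cosine similarity per training row
    top_k = np.argsort(scores)[::-1][:k]         # most similar first
    return top_k, scores[top_k]

rng = np.random.default_rng(0)
train_embeddings = rng.normal(size=(1000, 768))  # stand-in for sentence encodings
test_embedding = rng.normal(size=768)
indices, similarities = knn_cosine_attribution(train_embeddings, test_embedding)
for i, s in zip(indices, similarities):
    print(f"training instance {i}: cosine similarity {s:.3f}")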
The goal of this paper is to examine how different presentation strategies for Explainable Artificial Intelligence (XAI) explanation methods for textual data affect non-expert understanding in the context of fact-checking. The importance of understanding the decision of an Art ...
In today's society, claims are everywhere, both online and offline. Fact-checking models can check these claims and predict whether a claim is true or false, but how can these models themselves be checked? Post-hoc XAI feature attribution methods can be used for this. These methods give ...
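
As a toy illustration of the post-hoc feature attribution idea mentioned in this record, the sketch below scores each token of a claim by how much the model's output drops when that token is removed (a simple occlusion, leave-one-token-out attribution). The scoring function is a hypothetical stand-in, not a real fact-checking model or any specific method from the record.

# Toy occlusion-style feature attribution: a token's importance is the change in
# the model's score when that token is left out. claim_score is a hypothetical
# stand-in for a real fact-checking model.
def claim_score(tokens):
    """Higher score = claim looks more likely to be true (toy heuristic)."""
    weights = {"confirmed": 0.6, "officials": 0.3, "allegedly": -0.4, "rumour": -0.5}
    return sum(weights.get(t, 0.0) for t in tokens)

def occlusion_attribution(tokens):
    """Importance of each token = score(all tokens) - score(tokens without it)."""
    base = claim_score(tokens)
    return [(t, base - claim_score(tokens[:i] + tokens[i + 1:]))
            for i, t in enumerate(tokens)]

claim = "officials confirmed the allegedly leaked report".split()
for token, importance in occlusion_attribution(claim):
    print(f"{token:>10}: {importance:+.2f}")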
We seek to examine the vulnerability of BERT-based fact-checking. We implement a gradient-based adversarial attack strategy, based on HotFlip, that swaps individual tokens in the input. We use this on a pre-trained ExPred model for fact-checking. We find that gradient-based adver ...
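
A minimal sketch of the HotFlip-style, gradient-based token swap described in this record is given below, using a toy PyTorch classifier rather than the pre-trained ExPred model: each candidate swap at position i is scored by the first-order loss change (e_new - e_old) · grad_i, and the highest-scoring swap is reported. Vocabulary, dimensions, and the verdict head are illustrative assumptions.

import torch
import torch.nn as nn

# Toy HotFlip-style attack (not the record's ExPred setup): score replacing the
# token at position i with a candidate token by the first-order approximation of
# the loss change, (e_new - e_old) . grad_i, where grad_i is the gradient of the
# loss w.r.t. the input embedding at position i.
torch.manual_seed(0)
vocab_size, emb_dim, seq_len = 100, 32, 8

embedding = nn.Embedding(vocab_size, emb_dim)
classifier = nn.Linear(emb_dim, 2)               # toy verdict head: supported / refuted
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (seq_len,))
label = torch.tensor([1])

embedded = embedding(tokens).detach().requires_grad_(True)
logits = classifier(embedded.mean(dim=0, keepdim=True))
loss = loss_fn(logits, label)
loss.backward()
grad = embedded.grad                             # (seq_len, emb_dim)

emb_table = embedding.weight.detach()            # (vocab_size, emb_dim)
delta = emb_table.unsqueeze(0) - embedded.detach().unsqueeze(1)  # (seq_len, vocab, emb_dim)
swap_scores = (delta * grad.unsqueeze(1)).sum(dim=-1)            # (seq_len, vocab)

pos = int(swap_scores.max(dim=1).values.argmax())
new_token = int(swap_scores[pos].argmax())
print(f"flip position {pos}: token {int(tokens[pos])} -> {new_token} "
      f"(estimated loss increase {float(swap_scores[pos, new_token]):.4f})")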
In recent years, there has been a growing interest among researchers in the explainability, fairness, and robustness of Computer Vision models. While studies have explored the usability of these models for end users, limited research has delved into the challenges and requirement ...
Machine learning (ML) systems for computer vision applications are widely deployed in decision-making contexts, including high-stakes domains such as autonomous driving and medical diagnosis. While largely accelerating the decision-making process, those systems have been found to ...
Despite the low adoption rates of artificial intelligence (AI) in respiratory medicine, its potential to improve patient outcomes is substantial. To facilitate the integration of AI systems into the clinical setting, it is essential to prioritise the development of explainable AI ...