C.M. Jonker
207 records found
Combating widespread misinformation requires scalable and reliable fact-checking methods. Fact-checking involves several steps, including question generation, evidence retrieval, and veracity prediction. Importantly, fact-checking is well-suited to exploit hybrid intelligence sin
...
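The snippet above names the usual stages of an automated fact-checking pipeline: question generation, evidence retrieval, and veracity prediction. A minimal sketch of how those stages can be wired together is given below; every function body is a hypothetical stand-in, not the method described in the paper.

```python
# Minimal sketch of the three fact-checking stages named in the abstract.
# All components are hypothetical placeholders, not the authors' models.

from dataclasses import dataclass


@dataclass
class Verdict:
    claim: str
    questions: list[str]
    evidence: list[str]
    label: str  # "supported", "refuted", or "not enough info"


def generate_questions(claim: str) -> list[str]:
    # Placeholder: a real system would use a trained question generator.
    return [f"Is it true that {claim.rstrip('.')}?"]


def retrieve_evidence(questions: list[str], corpus: list[str]) -> list[str]:
    # Placeholder: naive keyword overlap instead of a proper retriever.
    hits = []
    for question in questions:
        terms = set(question.lower().split())
        hits += [doc for doc in corpus if terms & set(doc.lower().split())]
    return hits


def predict_veracity(claim: str, evidence: list[str]) -> str:
    # Placeholder: a real system would use a stance/NLI classifier, possibly
    # with a human in the loop (the hybrid-intelligence angle of the paper).
    if not evidence:
        return "not enough info"
    return "supported" if any("confirmed" in doc for doc in evidence) else "refuted"


def fact_check(claim: str, corpus: list[str]) -> Verdict:
    questions = generate_questions(claim)
    evidence = retrieve_evidence(questions, corpus)
    return Verdict(claim, questions, evidence, predict_veracity(claim, evidence))
```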
Understanding citizens’ values in participatory systems is crucial for citizen-centric policy-making. We envision a hybrid participatory system where participants make choices and provide motivations for those choices, and AI agents estimate their value preferences by interacting
...
Aggregating multiple annotations into a single ground truth label may hide valuable insights into annotator disagreement, particularly in tasks where subjectivity plays a crucial role. In this work, we explore methods for identifying subjectivity in recognizing the human values t
...
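To make the point about aggregation concrete: the sketch below keeps a per-item disagreement score (normalised label entropy) alongside the majority label, so that subjective items are not silently collapsed into a single "ground truth". The value labels and the entropy measure are illustrative assumptions, not the method used in the paper.

```python
# Majority-vote aggregation that also reports how much the annotators
# disagreed on each item (normalised label entropy in [0, 1]).
# Illustrative only; not the paper's method.

import math
from collections import Counter


def aggregate(annotations: list[str]) -> tuple[str, float]:
    """Return (majority label, normalised label entropy)."""
    counts = Counter(annotations)
    majority, _ = counts.most_common(1)[0]
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    max_entropy = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return majority, entropy / max_entropy


# Same majority label, very different levels of disagreement:
print(aggregate(["care", "care", "care", "care"]))          # ('care', 0.0)
print(aggregate(["care", "security", "care", "security"]))  # ('care', 1.0)
```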
"even explanations will not help in trusting [this] fundamentally biased system"
A Predictive Policing Case-Study
In today's society, where Artificial Intelligence (AI) has gained a vital role, concerns regarding users' trust have garnered significant attention. The use of AI systems in high-risk domains has often led users to either under-trust them, potentially causing inadequate reliance o
...
Mutual trust between humans and interactive artificial agents is crucial for effective human-agent teamwork. This involves not only the human appropriately trusting the artificial teammate, but also the artificial teammate assessing the human’s trustworthiness for different tasks
...
The roles of humans and AI as the labor force of organizations need continuous re-evaluation with the advancement of AI. While automation has replaced some tasks, knowledge-intensive work environments rely on human intelligence, as those work practices transcend canonical procedu
...
Effective support from personal assistive technologies relies on accurate user models that capture user values, preferences, and context. Knowledge-based techniques model these relationships, enabling support agents to align their actions with user values. However, understanding
...
Knowing Me, Knowing AU
How Should We Design Agent-Mediated Mimicry?
A lack of self-awareness of communicative behaviours can lead to disadvantages in important interactions. Video recordings as a tool for self-observation have been widely adopted to initiate behaviour change and reflection. Seeing oneself in a recording can lead to negative affec
...
Large-scale survey tools enable the collection of citizen feedback in opinion corpora. Extracting the key arguments from a large and noisy set of opinions helps in understanding the opinions quickly and accurately. Fully automated methods can extract arguments but (1) require lar
...
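For contrast with the fully automated methods mentioned above, here is one common automated baseline: embed the opinions, cluster them, and return the opinion closest to each cluster centre as a "key argument". This is an illustrative assumption about such baselines, not the method the paper proposes.

```python
# Fully automated key-argument extraction baseline: TF-IDF embedding,
# k-means clustering, and the opinion nearest each cluster centre.
# Illustrative only; not the approach described in the paper.

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import pairwise_distances_argmin_min


def key_arguments(opinions: list[str], k: int = 5) -> list[str]:
    vectors = TfidfVectorizer().fit_transform(opinions).toarray()
    kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(vectors)
    # Index of the opinion closest to each cluster centre.
    closest, _ = pairwise_distances_argmin_min(kmeans.cluster_centers_, vectors)
    return [opinions[i] for i in closest]
```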
We adopt an emerging and prominent vision of human-centred Artificial Intelligence that requires building trustworthy intelligent systems. Such systems should be capable of dealing with the challenges of an interconnected, globalised world by handling plurality and by abiding by
...
Epistemic logic can be used to reason about statements such as ‘I know that you know that I know that φ’. In this logic, and its extensions, it is commonly assumed that agents can reason about epistemic statements of arbitrary nesting depth. In contrast, empirical findings on Th
...
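As a reminder of what nesting depth means here, using the standard epistemic-logic notation where K_a φ reads "agent a knows that φ" (the bounded-depth variant studied in the paper is truncated above, so this only illustrates the notation):

```latex
% K_a \varphi reads "agent a knows that \varphi".
\[
  \underbrace{K_i \varphi}_{\text{depth } 1}
  \qquad
  \underbrace{K_i K_j \varphi}_{\text{depth } 2}
  \qquad
  \underbrace{K_i K_j K_i \varphi}_{\text{depth } 3:\ \text{``I know that you know that I know that } \varphi \text{''}}
\]
```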
Commissioning for integration
Exploring the dynamics of the “subsidy tables” approach in Dutch social care delivery
Purpose: The objective of this paper is to develop a redesigned commissioning process for social care services that fosters integrated care, encourages collaboration and balances professional expertise with client engagement. Design/methodology/approach: This study employs a two-
...
Disagreements are common in online societal deliberation and may be crucial for effective collaboration, for instance in helping users understand opposing viewpoints. Although there exist automated methods for recognizing disagreement, a deeper understanding of factors that influ
...
NegoLog
An Integrated Python-based Automated Negotiation Framework with Enhanced Assessment Components
The complexity of automated negotiation research calls for dedicated, user-friendly research frameworks that facilitate advanced analytics, comprehensive loggers, visualization tools, and auto-generated domains and preference profiles. This paper introduces NegoLog, a platform th
...
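To illustrate the kind of session and logging a framework like this automates, below is a generic alternating-offers loop with time-dependent concession and a minimal logger. This is emphatically not NegoLog's API; every class and function name here is hypothetical.

```python
# Generic alternating-offers negotiation loop with a minimal logger.
# NOT NegoLog's API; names and behaviour are hypothetical illustrations.


class TimeDependentAgent:
    """Concedes linearly from its best outcome towards its reservation value."""

    def __init__(self, name: str, reservation: float):
        self.name = name
        self.reservation = reservation

    def target(self, t: float) -> float:
        # Own utility target; concession grows with normalised time t in [0, 1].
        return 1.0 - (1.0 - self.reservation) * t

    def accepts(self, offered_utility: float, t: float) -> bool:
        return offered_utility >= self.target(t)


def run_session(a, b, rounds: int = 100, log=print):
    # Single-issue, split-the-pie simplification: proposer keeps `offer`,
    # the other side gets 1 - offer.
    for r in range(rounds):
        t = r / rounds
        offer = a.target(t)
        if b.accepts(1.0 - offer, t):
            log(f"round {r}: agreement, {a.name} gets {offer:.2f}")
            return offer
        a, b = b, a  # alternate turns
    log("no agreement reached")
    return None


run_session(TimeDependentAgent("A", 0.3), TimeDependentAgent("B", 0.4))
```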
In teams composed of humans, we use trust in others to make decisions, such as what to do next, who to help and who to ask for help. When a team member is artificial, they should also be able to assess whether a human teammate is trustworthy for a certain task. We see trustworthi
...
Interdependence and trust analysis (ITA)
A framework for human–machine team design
As machines' autonomy increases, the possibilities for collaboration between a human and a machine also increase. In particular, tasks may be performed with varying levels of interdependence, i.e. from independent to joint actions. The feasibility of each type of interdependence
...
Appropriate Trust in Artificial Intelligence (AI) systems has rapidly become an important area of focus for both researchers and practitioners. Various approaches have been used to achieve it, such as confidence scores, explanations, trustworthiness cues, or uncertainty communica
...
As machines' autonomy increases, their capacity to learn and adapt to humans in collaborative scenarios increases too. In particular, machines can use artificial trust (AT) to make decisions, such as task and role allocation/selection. However, the outcome of such decisions and t
...
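One common way to operationalise such a task-specific trustworthiness estimate (not necessarily the artificial-trust model used in these papers) is a Beta-reputation update over observed successes and failures per teammate and task, which can then drive task allocation; a minimal sketch with hypothetical names follows.

```python
# Beta-reputation style trustworthiness estimate per (teammate, task) pair,
# used for task allocation. A sketch of one common technique, not the
# artificial-trust model from the papers above.

from collections import defaultdict


class ArtificialTrust:
    def __init__(self):
        # Beta(1, 1) prior: no evidence yet, expected trustworthiness 0.5.
        self.evidence = defaultdict(lambda: [1.0, 1.0])  # (agent, task) -> [alpha, beta]

    def observe(self, agent: str, task: str, success: bool) -> None:
        alpha, beta = self.evidence[(agent, task)]
        self.evidence[(agent, task)] = [alpha + success, beta + (not success)]

    def trust(self, agent: str, task: str) -> float:
        alpha, beta = self.evidence[(agent, task)]
        return alpha / (alpha + beta)

    def allocate(self, agents: list[str], task: str) -> str:
        # Assign the task to the teammate currently deemed most trustworthy for it.
        return max(agents, key=lambda a: self.trust(a, task))


at = ArtificialTrust()
at.observe("human_1", "triage", True)
at.observe("human_1", "triage", True)
at.observe("human_2", "triage", False)
print(at.allocate(["human_1", "human_2"], "triage"))  # human_1
```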
Relevancy is a prevalent term in value alignment. We either need to keep track of the relevant moral reasons, embed the relevant values, or learn from the relevant behaviour. What relevancy entails in particular cases, however, is often ill-defined. The reas
...
Appropriate trust is an important component of the interaction between people and AI systems, in that "inappropriate" trust can cause disuse, misuse, or abuse of AI. To foster appropriate trust in AI, we need to understand how AI systems can elicit appropriate levels of trust from
...