Searched for: mods_originInfo_publisher_s:"International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)"
(1 - 20 of 34)

document
Albers, N. (author), Neerincx, M.A. (author), Brinkman, W.P. (author)
Despite their prevalence in eHealth applications for behavior change, persuasive messages tend to have small effects on behavior. Conditions or states (e.g., confidence, knowledge, motivation) and characteristics (e.g., gender, age, personality) of persuadees are two promising components for more effective algorithms for choosing persuasive...
conference paper 2023
document
Liscio, E. (author), Lera-Leri, Roger (author), Bistaffa, Filippo (author), Dobbe, R.I.J. (author), Jonker, C.M. (author), Lopez-Sanchez, Maite (author), Rodriguez-Aguilar, Juan A. (author), Murukannaiah, P.K. (author)
conference paper 2023
document
Celikok, M.M. (author), Oliehoek, F.A. (author), Kaski, Samuel (author)
Centaurs are half-human, half-AI decision-makers where the AI's goal is to complement the human. To do so, the AI must be able to recognize the goals and constraints of the human and have the means to help them. We present a novel formulation of the interaction between the human and the AI as a sequential game where the agents are modelled...
conference paper 2022
document
Katt, Sammie (author), Nguyen, Hai (author), Oliehoek, F.A. (author), Amato, Christopher (author)
While reinforcement learning (RL) has made great advances in scalability, exploration and partial observability are still active research topics. In contrast, Bayesian RL (BRL) provides a principled answer to both state estimation and the exploration-exploitation trade-off, but struggles to scale. To tackle this challenge, BRL frameworks with...
conference paper 2022
document
Renting, B.M. (author), Hoos, Holger H. (author), Jonker, C.M. (author)
Bargaining can be used to resolve mixed-motive games in multiagent systems. Although there is an abundance of negotiation strategies implemented in automated negotiating agents, most agents are based on single fixed strategies, while it is acknowledged that there is no single best-performing strategy for all negotiation settings. In this...
conference paper 2022
document
Czechowski, A.T. (author), Piliouras, Georgios (author)
A key challenge of evolutionary game theory and multi-agent learning is to characterize the limit behavior of game dynamics. Whereas convergence is often a property of learning algorithms in games satisfying a particular reward structure (e.g., zero-sum games), even basic learning models, such as the replicator dynamics, are not guaranteed to...
conference paper 2022
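The entry above mentions the replicator dynamics as a basic learning model whose limit behaviour need not converge. As a minimal, hedged sketch (my own illustration, not code from the paper): in the single-population replicator dynamics with payoff matrix A, each strategy's share grows or shrinks according to how its payoff compares with the population average.

```python
# Minimal sketch of single-population replicator dynamics (illustrative only,
# not the paper's code). Strategy shares x follow
#   dx_i/dt = x_i * ((A @ x)_i - x^T A x).
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator dynamics."""
    payoffs = A @ x
    avg = x @ payoffs
    return x + dt * x * (payoffs - avg)

# Hypothetical example: Rock-Paper-Scissors, a zero-sum game whose replicator
# trajectories cycle around the mixed equilibrium instead of converging.
A = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]], dtype=float)
x = np.array([0.5, 0.3, 0.2])
for _ in range(1000):
    x = replicator_step(x, A)
print(x)  # shares still sum to 1 and keep cycling around (1/3, 1/3, 1/3)
```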
document
Suau, M. (author), He, J. (author), Spaan, M.T.J. (author), Oliehoek, F.A. (author)
Learning effective policies for real-world problems is still an open challenge for the field of reinforcement learning (RL). The main limitation is the amount of data needed and the pace at which that data can be obtained. In this paper, we study how to build lightweight simulators of complicated systems that can run sufficiently fast for...
conference paper 2022
document
Neustroev, G. (author), Andringa, S.P.E. (author), Verzijlbergh, R.A. (author), de Weerdt, M.M. (author)
Wind farms suffer from so-called wake effects: when turbines are located in the wind shadows of other turbines, their power output is substantially reduced. These losses can be partially mitigated via actively changing the yaw from the individually optimal direction. Most existing wake control techniques have two major limitations: they use...
conference paper 2022
document
Li, Guangliang (author), Whiteson, Shimon (author), Dibeklioğlu, Hamdi (author), Hung, H.S. (author)
Interactive reinforcement learning provides a way for agents to learn to solve tasks from evaluative feedback provided by a human user. Previous research showed that humans give copious feedback early in training but very sparsely thereafter. In this paper, we investigate the potential of agent learning from trainers’ facial expressions via...
conference paper 2021
document
van der Linden, J.G.M. (author), Mulderij, J. (author), Huisman, B. (author), Den Ouden, Joris W. (author), Van Den Akker, Marjan (author), Hoogeveen, Han (author), de Weerdt, M.M. (author)
When trains are finished with their transportation tasks during the day, they are moved to a shunting yard where they are routed, parked, cleaned, subjected to regular maintenance checks, and repaired during the night. The resulting Train Unit Shunting and Servicing problem motivates advanced research in planning and scheduling in general since...
conference paper 2021
document
Mey, A. (author), Oliehoek, F.A. (author)
Machine learning and artificial intelligence models that interact with and in an environment will unavoidably have an impact on this environment and change it. This is often a problem, as many methods do not anticipate such a change in the environment and thus may start acting sub-optimally. Although efforts are made to deal with this problem, we...
conference paper 2021
document
Yazdanpanah, Vahid (author), Gerding, Enrico H. (author), Stein, Sebastian (author), Dastani, Mehdi (author), Jonker, C.M. (author), Norman, Timothy J. (author)
To develop and effectively deploy Trustworthy Autonomous Systems (TAS), we face various social, technological, legal, and ethical challenges in which different notions of responsibility can play a key role. In this work, we elaborate on these challenges, discuss research gaps, and show how the multidimensional notion of responsibility can...
conference paper 2021
document
Satsangi, Yash (author), Lim, Sungsu (author), Whiteson, Shimon (author), Oliehoek, F.A. (author), White, Martha (author)
Information gathering in a partially observable environment can be formulated as a reinforcement learning (RL) problem, where the reward depends on the agent's uncertainty. For example, the reward can be the negative entropy of the agent's belief over an unknown (or hidden) variable. Typically, the rewards of an RL agent are defined as a...
conference paper 2020
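The entry above gives a concrete example of an uncertainty-based reward: the negative entropy of the agent's belief. A minimal sketch of that idea, under my own assumed function name and not the authors' implementation, is below; sharper (more certain) beliefs receive higher reward than a uniform one.

```python
# Hedged sketch of an uncertainty-based reward: the negative Shannon entropy of
# the agent's belief over a hidden variable (illustration only).
import numpy as np

def negative_entropy_reward(belief):
    """Reward = -H(belief); peaked beliefs score higher than uniform ones."""
    b = np.asarray(belief, dtype=float)
    b = b / b.sum()                         # normalise to a probability vector
    nz = b[b > 0]                           # treat 0 * log(0) as 0
    return float(np.sum(nz * np.log(nz)))   # equals minus the entropy

print(negative_entropy_reward([0.25, 0.25, 0.25, 0.25]))  # most uncertain, lowest reward
print(negative_entropy_reward([0.97, 0.01, 0.01, 0.01]))  # nearly certain, reward near 0
```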
document
van der Pol, Elise (author), Kipf, Thomas (author), Oliehoek, F.A. (author), Welling, Max (author)
This work exploits action equivariance for representation learning in reinforcement learning. Equivariance under actions states that transitions in the input space are mirrored by equivalent transitions in latent space, while the map and transition functions should also commute. We introduce a contrastive loss function that enforces action...
conference paper 2020
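The commutation property described in the entry above can be made concrete with a small sketch. The names below (an encoder phi, an environment transition T_env, and a latent transition T_z) are my own assumptions for illustration, not the paper's API: encoding after transitioning in input space should agree with transitioning in latent space after encoding.

```python
# Hedged sketch of the action-equivariance condition: the encoder and the
# transition functions should commute, i.e. phi(T_env(s, a)) ~= T_z(phi(s), a).
# Names phi, T_env, T_z are assumptions for illustration only.
import numpy as np

def equivariance_gap(phi, T_env, T_z, states, actions):
    """Average distance between the two paths around the commuting square."""
    gaps = []
    for s, a in zip(states, actions):
        via_env = phi(T_env(s, a))      # transition in input space, then encode
        via_latent = T_z(phi(s), a)     # encode, then transition in latent space
        gaps.append(np.linalg.norm(via_env - via_latent))
    return float(np.mean(gaps))

# Toy case where the square commutes exactly: a linear encoder and translation
# actions, so both paths land on the same latent point.
phi = lambda s: 2.0 * s
T_env = lambda s, a: s + a
T_z = lambda z, a: z + 2.0 * a
states = [np.array([0.0, 1.0]), np.array([3.0, -2.0])]
actions = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(equivariance_gap(phi, T_env, T_z, states, actions))  # 0.0 — perfectly equivariant
```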
document
Neustroev, G. (author), de Weerdt, M.M. (author)
Reinforcement learning (RL), like any on-line learning method, inevitably faces the exploration-exploitation dilemma. When a learning algorithm requires as few data samples as possible, it is called sample efficient. The design of sample-efficient algorithms is an important area of research. Interestingly, all currently known provably efficient...
conference paper 2020
document
Renting, B.M. (author), Hoos, Holger H. (author), Jonker, C.M. (author)
Bidding and acceptance strategies have a substantial impact on the outcome of negotiations in scenarios with linear additive and nonlinear utility functions. Over the years, it has become clear that there is no single best strategy for all negotiation settings, yet many fixed strategies are still being developed. We envision a shift in the...
conference paper 2020
document
Murukannaiah, P.K. (author), Ajmeri, Nirav (author), Jonker, C.M. (author), Singh, M.P. (author)
Ethics is inherently a multiagent concern. However, research on AI ethics today is dominated by work on individual agents: (1) how an autonomous robot or car may harm or (differentially) benefit people in hypothetical situations (the so-called trolley problems) and (2) how a machine learning algorithm may produce biased decisions or...
conference paper 2020
document
Methenitis, G. (author), Kaisers, Michael (author), la Poutré, J.A. (author)
We study mechanisms to incentivize demand response in smart energy systems. We assume agents that can respond (reduce their demand) with some probability if they prepare prior to the realization of the demand. Both preparation and response incur costs to agents. Previous work studies truthful mechanisms that select a minimal set of agents to...
conference paper 2019
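As a toy, hedged reading of the setting in the entry above (the symbols c_prep, c_resp, p, and reduction are my own, not the paper's mechanism): an agent asked to prepare always pays its preparation cost and, with probability p, also responds, paying the response cost and delivering its demand reduction, so both its expected cost and expected contribution are simple expectations over p.

```python
# Toy sketch of the demand-response setting above (assumed names, not the
# paper's mechanism): preparation is paid up front; the response cost and the
# demand reduction materialise only with probability p.

def expected_cost(c_prep, c_resp, p):
    """Expected cost incurred by an agent that prepares."""
    return c_prep + p * c_resp

def expected_reduction(reduction, p):
    """Expected demand reduction delivered by that agent."""
    return p * reduction

# Hypothetical agent: preparation costs 2, responding costs 5, and the agent
# responds with probability 0.8, reducing demand by 10 units when it does.
print(expected_cost(2.0, 5.0, 0.8))    # 6.0
print(expected_reduction(10.0, 0.8))   # 8.0
```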
document
Katt, Sammie (author), Oliehoek, F.A. (author), Amato, Christopher (author)
Model-based Bayesian Reinforcement Learning (BRL) provides a principled solution to dealing with the exploration-exploitation trade-off, but such methods typically assume a fully observable environment. The few Bayesian RL methods that are applicable in partially observable domains, such as the Bayes-Adaptive POMDP (BA-POMDP), scale poorly. To...
conference paper 2019
document
Castellini, Jacopo (author), Oliehoek, F.A. (author), Savani, Rahul (author), Whiteson, Shimon (author)
Recent years have seen the application of deep reinforcement learning techniques to cooperative multi-agent systems, with great empirical success. In this work, we empirically investigate the representational power of various network architectures on a series of one-shot games. Despite their simplicity, these games capture many of the crucial...
conference paper 2019