The Design of Human Oversight in Autonomous Weapon Systems
Ilse Verdiesen (TU Delft - Information and Communication Technology)
Abstract
As the reach and capabilities of Artificial Intelligence (AI) systems increase, there is also a growing awareness of the ethical, legal and societal impact of the potential actions and decisions of these systems. Many are calling for guidelines and regulations that can ensure the responsible design, development, implementation, and policy of AI. In the scientific literature, AI is characterized by the concepts of Adaptability, Interactivity and Autonomy (Floridi & Sanders, 2004). According to Floridi and Sanders (2004), Adaptability means that the system can change based on its interactions and can learn from its experience; machine learning techniques are an example of this. Interactivity occurs when the system and its environment act upon each other, and Autonomy implies that the system can change its own state.