The visible hand of innovation policy
Uwe Cantner (Friedrich Schiller University Jena, University of Southern Denmark)
C Werker (RWTH Aachen University, TU Delft - Economics of Technology and Innovation)
Abstract
Up until now, the question of how innovation policy deals with agency and power distributed between human and artificial intelligence has not been addressed conclusively. The systems failure approach often used to motivate and justify innovation policy broadly acknowledges and addresses problems stemming from the emergence and use of AI. Yet it insufficiently addresses three questions that make AI a game changer for innovation policy: (1) how to deal with ethical issues of using AI, (2) how AI-driven innovation policy can stimulate research processes leading to either exploitation or exploration, and (3) whether and how deep learning by AI might crowd out human decisions. To solve these issues, we suggest that innovation policy uses a visible hand in the context of AI, i.e. (1) to involve clearly legitimized stakeholders in the design of ethical guidelines and avoid outsourcing this important task to expert councils, (2) to use policy measures that can distinguish between exploration and exploitation of AI, and (3) to employ a coordinated approach that involves stakeholders in several steps to ensure the implementation of their shared values in AI-driven decision processes. The visible hand of innovation policy can rely neither on policy actions nor on market relationships alone but has to rely on their joint use.