While Artificial Intelligence (AI) offers major opportunities, a critical gap exists between emerging AI risk management frameworks and the practical needs of businesses. This gap threatens both to hinder innovation and to leave socio-technical risks unaddressed. Regulations such as the EU AI Act and standards such as the NIST AI Risk Management Framework (AI RMF) are often seen by practitioners as abstract and impractical, creating a significant challenge for organisations navigating the complex AI landscape. This research investigates professionals' perspectives on the practical applicability of current AI risk management frameworks and examines how these frameworks balance risk management with innovation. The primary objective is to analyse these perspectives, identify the main challenges organisations face, and provide actionable recommendations for both businesses and regulators. The study is guided by the central research question: "What are the current perspectives on the ability of AI risk management frameworks to address core business needs regarding the balance of risks and innovation?"

The study uses a descriptive, qualitative methodology, beginning with a literature review to map the AI governance landscape and to identify the critical gaps between frameworks and practical business needs. Semi-structured interviews were then conducted with eleven professionals from diverse sectors, including finance, healthcare, and technology consulting. The collected data were analysed using thematic analysis, and the interview findings were interpreted through the lens of socio-technical systems theory, applying principles from System-Theoretic Process Analysis (STPA) to identify failures across the AI governance system.

The findings point to a systemic disconnect between regulatory frameworks and actual business conditions. First, governance adoption is driven primarily by pressure from external authorities rather than by a genuine commitment to responsible innovation. Second, organisations are most concerned with socio-technical risks, such as a lack of AI literacy among decision-makers and staff resistance to change; these risks are often missed by frameworks that focus solely on technical aspects. Finally, the ambiguity of the rules and delays in the development of harmonised standards create uncertainty and force organisations to comply using inadequate tools. As a result, AI risk management frameworks are currently viewed as largely ineffective at balancing risk management and innovation, and are perceived more as a liability than as a vital tool.

This research concludes that this is a systemic failure, characterising AI governance as a "wicked problem" being addressed by a "dysfunctional system". The study suggests that businesses should act proactively and embed AI risk management into their core structures. It further recommends that regulators form partnerships, offer sector-specific guidance, and revise risk classifications to reflect the actual complexities of the real world.