A sociotechnical systems lens on AI is often used to draw attention to the human factors and societal impacts that technical abstraction tends to neglect. However, abstraction is also a general principle of sociotechnical systems, in which functional objectives (e.g. fair hiring decisions) are operationalised into low-level implementations (e.g. fair algorithms, recourse, legal basis). The trouble with abstraction arises when critical contextual factors are erroneously left out, producing an impoverished representation of the problem space. De-contextualisation can render the resulting solutions problematic once they are re-contextualised back into the site of use, where misabstractions may produce safety hazards, harms, moral wrongs, and context frictions. Despite growing recognition that context matters for how sociotechnical systems operate in practice, the normative implications of abstraction remain understudied. In this paper, we propose misabstraction as an analytic framework for thinking about the perils and challenges of sociotechnical abstraction. We use the framework to analyse the requirements specification outlined in the procurement tender of a recommender system for public employment services, and show how misabstractions cascade through the sociotechnical stack, producing ripple effects that implicate hidden and neglected contextual factors across multiple frames (e.g. institutional, organisational, operational, and algorithmic). Misabstraction can help policymakers, system designers, critical scholars, and civil society alike to attend to the political conditions that shape design, and to their implications for understanding and addressing systemic risk in sociotechnical AI systems.