AI is becoming significantly more impactful in society, especially with regard to decision-making. Algorithmic fairness is the field wherein the fairness of an AI algorithm is defined, subsequently evaluated, and ideally improved. This paper uses a fairness decision tree to critique certain notions of algorithmic fairness through a postcolonial lens, applying Gayatri Spivak's theory of the subaltern alongside other postcolonial principles. A definition of and criteria for a subaltern population in AI are provided, showing that AI and algorithmic fairness rely on subaltern marginalization, silence, and faux inclusion. A theoretical case analysis is then conducted to illustrate how demographic parity, even in cases where it is the best available fairness metric, does not include the subaltern. Algorithmic fairness often defines fairness through a neoliberal frame, assigning a ``cost'' to ethical considerations wherein morality is secondary to profit and utility. Furthermore, a large proportion of the ``justice'' conducted through AI is surface-level and may actually cause more harm in the long run. A proposal is made to seriously consider not using any AI in socially relevant, complex situations.
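For reference, demographic parity (also called statistical parity) is conventionally formalized as requiring equal positive-prediction rates across protected groups; the notation below is the standard one from the algorithmic fairness literature, not drawn from this paper's case analysis:
\[
  P\bigl(\hat{Y} = 1 \mid A = a\bigr) \;=\; P\bigl(\hat{Y} = 1 \mid A = b\bigr)
  \quad \text{for all protected groups } a, b,
\]
where $\hat{Y}$ denotes the model's prediction and $A$ the protected attribute. The critique developed here turns on the fact that a population absent from, or unrepresentable within, the attribute $A$ cannot be included by such a constraint at all.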