How to measure bias in automatic speech recognition systems?

A bias metric without a reference group

Abstract

This paper presents a novel approach to measuring bias in Automatic Speech Recognition (ASR) systems by proposing a metric that does not rely on the conventional choice of a reference group. Current methods typically measure bias through comparison with a 'norm' or minimum-error group, which can itself introduce additional bias. To address this issue, this study introduces a new metric that combines the Group-to-Average Log Ratio with the Sum of Group Error Differences. This metric aims to provide a fair comparison by measuring each group's performance relative to the average over all groups rather than against a single reference group. Results indicate that the new metric reveals aspects of bias not captured by traditional methods. This study contributes to ongoing research on fairness in speech technology by challenging existing bias metrics and proposing alternatives that may offer more equitable evaluations. Future research should further refine these metrics and apply them across more varied datasets and environments. Ultimately, this work moves ASR technology towards greater inclusiveness, ensuring that it serves all user groups equitably.
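The abstract does not give the formulas, but the two components it names can be sketched roughly as follows. This is an illustrative assumption, not the paper's specification: it assumes each metric operates on per-group error rates (e.g. word error rates), that the log ratio compares each group's error to the unweighted group average, and that the sum-of-differences aggregates absolute deviations from that average.

```python
import math

def group_to_average_log_ratio(group_errors: dict) -> dict:
    """Log ratio of each group's error rate to the average across groups.

    A value of 0 means the group performs exactly at the average;
    positive means worse than average, negative means better.
    (Sketch only; the paper's exact definition may differ.)
    """
    avg = sum(group_errors.values()) / len(group_errors)
    return {g: math.log(e / avg) for g, e in group_errors.items()}

def sum_of_group_error_differences(group_errors: dict) -> float:
    """Sum of absolute deviations of each group's error rate from the
    group average (sketch only; the paper's definition may differ)."""
    avg = sum(group_errors.values()) / len(group_errors)
    return sum(abs(e - avg) for e in group_errors.values())

# Hypothetical per-group word error rates for illustration.
errors = {"group_a": 0.10, "group_b": 0.20, "group_c": 0.30}
print(group_to_average_log_ratio(errors))   # group_b sits at the average, ratio 0
print(sum_of_group_error_differences(errors))
```

Under this reading, no single group serves as the reference: every group is compared to the same average, so the measure does not depend on picking a 'norm' group in advance.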