BlurM(or)e

Revisiting gender obfuscation in the user-item matrix

Conference Paper (2019)
Author(s)

Christopher Strucks (Radboud Universiteit Nijmegen)

M. Slokom (TU Delft - Multimedia Computing)

M.A. Larson (Radboud Universiteit Nijmegen, TU Delft - Multimedia Computing)

Publication Year
2019
Language
English
Copyright
© 2019 Christopher Strucks, M. Slokom, M.A. Larson
Multimedia Computing
Pages (from-to)
1-5
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Past research has demonstrated that removing implicit gender information from the user-item matrix does not result in substantial performance losses. Such results point towards promising solutions for protecting users’ privacy without compromising prediction performance, which are of particular interest in multistakeholder environments. Here, we investigate BlurMe, a gender obfuscation technique that has been shown to block classifiers from inferring binary gender from users’ profiles. We first point out a serious shortcoming of BlurMe: Simple data visualizations can reveal that BlurMe has been applied to a data set, including which items have been impacted. We then propose an extension to BlurMe, called BlurM(or)e, that addresses this issue. We reproduce the original BlurMe experiments with the MovieLens data set, and point out the relative advantages of BlurM(or)e.
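To make the obfuscation idea concrete, below is a minimal, illustrative sketch of a BlurMe-style greedy strategy, not the authors' implementation: train a gender classifier on the user-item matrix, rank items by how indicative they are of each gender, and pad each user's profile with top items indicative of the opposite gender. The function name, the rate parameter p, and the use of a binary interaction matrix (the original MovieLens setting uses explicit ratings) are simplifying assumptions for illustration.

```python
# Illustrative BlurMe-style gender obfuscation (hypothetical sketch,
# not the paper's code). Assumes a binary user-item matrix X of shape
# (n_users, n_items) and binary gender labels y of shape (n_users,).
import numpy as np
from sklearn.linear_model import LogisticRegression

def blurme_obfuscate(X, y, p=0.1):
    """Add items indicative of the opposite gender to each user profile.

    X: (n_users, n_items) binary interaction matrix.
    y: (n_users,) binary gender labels (0 or 1).
    p: fraction of a user's profile size to add as new items.
    """
    # Fit a linear classifier; its coefficients rank how strongly each
    # item signals gender class 1 (positive) or class 0 (negative).
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    coef = clf.coef_.ravel()
    indicative = {1: np.argsort(-coef),  # most indicative of class 1 first
                  0: np.argsort(coef)}   # most indicative of class 0 first

    X_obf = X.copy()
    for u in range(X.shape[0]):
        k = max(1, int(p * X[u].sum()))  # number of items to add
        added = 0
        # Greedily add the strongest opposite-gender items the user
        # has not interacted with yet.
        for item in indicative[1 - int(y[u])]:
            if X_obf[u, item] == 0:
                X_obf[u, item] = 1
                added += 1
            if added == k:
                break
    return X_obf
```

Note how this greedy variant inflates the interaction counts of the same few highly indicative items across many profiles; a spike of that kind is exactly the sort of artifact that, per the abstract, simple data visualizations can expose, and that BlurM(or)e is designed to avoid.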