Mitigating Inference Attacks in Collaborative Credit Card Fraud Detection using Secure Multi-Party Selection

Abstract

The convenient service offered by credit cards and the technological advances in e-commerce have caused the number of online payment transactions to increase daily. With this rising number, the opportunities for fraudsters to obtain cardholder details and commit online credit card fraud have also increased. As a result, according to the European Central Bank, billions of euros are lost to credit card fraud every year. Since verifying all transactions by hand is infeasible, automated Fraud Detection Systems (FDSs) are needed. Currently, financial institutions build such systems by training machine learning algorithms on transaction data. However, the performance of these systems is hampered by the scarcity of positive (fraud) samples in the collected transaction data. To improve performance, an ideal solution would be to merge the data of all institutions and train an FDS on the resulting data set. However, privacy concerns regarding the sensitive customer information in this data, together with the security risks of transferring it, render this solution unrealistic. Therefore, the need arises for novel protocols that allow financial institutions to collaboratively train FDSs without sharing private data. Previous research in the field of collaborative learning attempts to solve such problems by requiring participants to train local models, which are aggregated into a global model by a trusted central entity. Unfortunately, the vulnerability of these settings to inference attacks restricts their applicability. Inference attacks aim to extract additional, secret knowledge from a model. They are especially powerful when performed by participants in a sequential setting, where participants train the same model one after the other following a given order, because in this setting participants have white-box access to the model itself and to the data used to train it. Naturally, these attacks are considered a breach of privacy and hinder collaboration. In this work, we propose a novel protocol that leverages secure multi-party computation techniques to prevent inference attacks in a sequential setting. To achieve this, we require participants to jointly determine a training order. While doing so, we ensure that each participant only learns whom to send its data to; consequently, participants are unaware of whose data they are receiving. With this work, we contribute a practical protocol that is robust against inference and timing attacks and facilitates privacy-preserving sequential collaborative learning. To the best of our knowledge, our work is the first to prevent inference attacks using a secure multi-party selection protocol with an overhead of only a few seconds.
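
To make the idea of the training-order selection concrete, the following is a minimal, hypothetical sketch in plain Python. It is not the thesis protocol and performs no actual secure multi-party computation: a single shuffler stands in for the joint selection step purely to illustrate what each participant would end up knowing, namely only the successor to which it must send its data (model), not the full order. All names (assign_successors, bank_A, etc.) are illustrative assumptions.

```python
# Toy illustration only: NOT the proposed MPC-based selection protocol.
# It shows the intended knowledge distribution after the selection step:
# every participant learns only whom to send to, not the whole order.
import secrets


def assign_successors(participant_ids):
    """Return {participant: successor} for a random sequential training order.

    In the real protocol this permutation would be chosen jointly via secure
    multi-party computation so that no single party learns it in full; here a
    local shuffle stands in for that step.
    """
    order = list(participant_ids)
    # Fisher-Yates shuffle using cryptographically strong randomness.
    for i in range(len(order) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        order[i], order[j] = order[j], order[i]
    # Each participant is mapped to its successor; the last one sends nowhere.
    return {order[i]: order[i + 1] for i in range(len(order) - 1)}


if __name__ == "__main__":
    successors = assign_successors(["bank_A", "bank_B", "bank_C", "bank_D"])
    for sender, receiver in successors.items():
        # In a deployment, each mapping would be revealed only to `sender`,
        # so no participant knows whose data it receives.
        print(f"{sender} sends its trained model to {receiver}")
```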