This is a dry-run for an upcoming conference presentation (SAC2025).
Machine Learning as a Service (MLaaS) drives ML adoption by giving businesses access to powerful, pre-trained models via APIs. These platforms let users leverage models trained on vast datasets, overcoming local computational or data limitations. However, transmitting raw data to external servers for inference poses privacy risks, such as exposure of sensitive information (e.g., income or FICO scores) to malicious entities or misuse by service providers.
In this work, we present a privacy-preserving framework for credit scoring systems deployed on MLaaS platforms. Our approach integrates an obfuscator-classifier model that enhances privacy while maintaining high accuracy on loan default prediction tasks. The obfuscator transforms sensitive financial data into a privacy-protected representation, minimizing the risk of privacy leakage and input reconstruction during inference. By combining center loss with noise addition, our model strikes a robust balance between privacy and utility. Through extensive experiments, we demonstrate the effectiveness of our solution in reducing information leakage.
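The two privacy mechanisms named in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the talk's actual architecture): a linear obfuscator with additive Gaussian noise, and a center loss that pulls same-class embeddings toward their class centroid, which reduces the individual-level variation an attacker could exploit for reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

def obfuscate(x, W, noise_std=0.1):
    """Map inputs to a privacy-protected embedding.

    Illustrative linear obfuscator; the weights W and the noise scale
    are assumptions, not values from the presented work.
    """
    z = x @ W                                   # learned transformation
    z += rng.normal(0.0, noise_std, z.shape)    # noise addition for privacy
    return z

def center_loss(z, labels, centers):
    """Mean squared distance of each embedding to its class center.

    Minimizing this pulls same-class embeddings together, stripping
    sample-specific detail from the representation.
    """
    diffs = z - centers[labels]
    return float(np.mean(np.sum(diffs ** 2, axis=1)))

# Toy data: 4 samples, 3 features, 2 classes, 2-D embedding
x = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
labels = np.array([0, 0, 1, 1])

z = obfuscate(x, W)
centers = np.stack([z[labels == c].mean(axis=0) for c in (0, 1)])
loss = center_loss(z, labels, centers)
```

In a full training loop, this center loss would be weighted against the classifier's prediction loss, which is where the utility-privacy trade-off mentioned in the bio below is tuned.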
Vittorio Prodomo obtained his B.Sc. (2016) and M.Sc. (2019) degrees in Computer Engineering from the University of Naples Federico II. In 2020, he began his PhD in Telematics Engineering at Carlos III University of Madrid. He currently works on Privacy-Preserving Machine Learning, more specifically on the inherent utility-privacy trade-off in data anonymization approaches. His main interests are Machine Learning, Deep Learning, and Data Analysis.
This event will be conducted in English.