Authors: Makhlouf, Karima; Stefanovic, Tamara; Arcolezi, Heber H.; Palamidessi, Catuscia
Affiliations: Computer Science; Mathematical Institute of the Serbian Academy of Sciences and Arts
Title: A Systematic and Formal Study of the Impact of Local Differential Privacy on Fairness: Preliminary Results
First page: 1
Last page: 16
Related Publication(s): Proceedings - IEEE Computer Security Foundations Symposium
Conference: 37th IEEE Computer Security Foundations Symposium, CSF 2024, Enschede, 8 July - 12 July 2024
Issue Date: 2024
Rank: M33
ISBN: 9798350362039
ISSN: 1940-1434
DOI: 10.1109/CSF61375.2024.00039
Abstract: 
Machine learning (ML) algorithms rely primarily on the availability of training data, and, depending on the domain, these data may include sensitive information about the data providers, thus leading to significant privacy issues. Differential privacy (DP) is the predominant solution for privacy-preserving ML, and the local model of DP is the preferred choice when the server or the data collector is not trusted. Recent experimental studies have shown that local DP can impact ML predictions for different subgroups of individuals, thus affecting fair decision-making. However, the results are conflicting, in the sense that some studies show a positive impact of privacy on fairness while others show a negative one. In this work, we conduct a systematic and formal study of the effect of local DP on fairness. Specifically, we perform a quantitative study of how the fairness of the decisions made by the ML model changes under local DP for different levels of privacy and data distributions. In particular, we provide bounds in terms of the joint distributions and the privacy level, delimiting the extent to which local DP can impact the fairness of the model. We characterize the cases in which privacy reduces discrimination and those with the opposite effect. We validate our theoretical findings on synthetic and real-world datasets. Our results are preliminary in the sense that, for now, we study only the case of one sensitive attribute, and only statistical disparity, conditional statistical disparity, and equal opportunity difference.
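The keywords below list Randomized Response, the classic local-DP mechanism the paper builds on. As a minimal illustration (not the paper's method), the sketch below implements binary randomized response, which reports the true bit with probability e^eps / (e^eps + 1) and flips it otherwise, satisfying eps-LDP, together with the standard unbiased frequency estimator; all function names and parameters here are illustrative.

```python
import math
import random

def randomized_response(bit: int, epsilon: float, rng: random.Random) -> int:
    """Report the true bit with probability p = e^eps / (e^eps + 1),
    otherwise flip it. This satisfies epsilon-local differential privacy."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if rng.random() < p else 1 - bit

def estimate_frequency(reports: list[int], epsilon: float) -> float:
    """Unbiased estimate of the true frequency of 1s from noisy reports:
    invert the expected bias of randomized response."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed + p - 1.0) / (2.0 * p - 1.0)

# Toy example: 10,000 users, true frequency of the sensitive bit is 0.30.
rng = random.Random(0)
true_bits = [1] * 3000 + [0] * 7000
reports = [randomized_response(b, epsilon=1.0, rng=rng) for b in true_bits]
est = estimate_frequency(reports, epsilon=1.0)
```

With epsilon = 1 the estimate recovers the true frequency of 0.30 up to sampling noise, illustrating the privacy/utility trade-off the paper analyzes for fairness metrics.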
Keywords: Differential Privacy | Fairness | Machine learning | Randomized Response
Publisher: IEEE
