Differential Privacy for Strategic Information Sharing and Learning:
Foundations, Mechanisms, and Applications
M. Amin Rahimian
Pitt
Juba Ziani
GT
Marios Papachristou
ASU
Yuxin Liu
Pitt
WINE 2025: The 21st Conference on Web and Internet Economics
Outline
🎯 Motivation and Foundation of Differential Privacy
🔐 Differential Privacy in the Context of Information Sharing
💰 Markets for Privacy
Part 1: Motivation and Foundation of Differential Privacy
Motivation: Data Value vs. Data Rights
Data Value
Data Rights
• Recommendations drive engagement (Netflix 80%, YouTube 70%)
• Ads powered by personal data generate billions (Meta: 98% of revenue from ads)
• Better data → better algorithms → better services
• GDPR / CCPA grant individuals strong data rights
• Rights protect individuals from data misuse
• Privacy rights restore user control over their data
Differential Privacy
[Figure: a Dataset is fed into an Algorithm that produces an Output; an Adversary observing the output tries to tell which of two neighboring datasets was used.]
An algorithm is differentially private if its output distribution changes only slightly when a single data point is added or removed. (Informal)
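Formally, the informal statement above corresponds to the standard (ε, δ)-differential privacy definition of Dwork et al., stated here for completeness:

```latex
% A randomized algorithm $M$ is $(\varepsilon, \delta)$-differentially private
% if for all neighboring datasets $D, D'$ (differing in one record) and all
% measurable output sets $S$:
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S] + \delta
```

The pure-DP case is δ = 0; smaller ε means the two output distributions are harder to distinguish.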
Differential Privacy
[Figure: the privacy parameter interpolates between two extremes, "no information flow" and "no privacy".]
Laplace & Gaussian mechanisms (Dwork et al., 2006)
➤ Classic additive-noise mechanisms based on global sensitivity.
Exponential Mechanism (McSherry & Talwar, 2007)
➤ Allows DP over structured outputs via utility-based sampling.
Randomized Response (Warner, 1965)
➤ Local DP mechanism for sensitive survey questions.
Smooth Sensitivity (Nissim et al., 2007)
➤ Reduced noise for low-sensitivity instances.
Geometric Mechanism (Ghosh et al., 2012)
➤ Optimal integer-valued DP mechanism under certain conditions.
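As a concrete illustration of the additive-noise mechanisms above, here is a minimal sketch of the Laplace mechanism for a counting query (the function and variable names are illustrative, not from the slides):

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    """Release true_answer + Laplace(sensitivity / epsilon) noise.

    For a query with global sensitivity `sensitivity` (max change in the
    answer when one record is added or removed), this is epsilon-DP.
    """
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return true_answer + rng.laplace(loc=0.0, scale=scale)

# Counting query: how many records satisfy a predicate (sensitivity = 1).
data = np.array([1, 0, 1, 1, 0, 1])
noisy_count = laplace_mechanism(data.sum(), sensitivity=1.0, epsilon=0.5)
```

With sensitivity 1 and ε = 0.5 the noise has scale 2, so the released count is typically within a few units of the true count of 4.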
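Randomized response, the local-DP mechanism in the list above, admits an equally short sketch (a minimal illustration; the helper names are my own):

```python
import math
import random

def randomized_response(truthful_bit, epsilon):
    """Warner's randomized response for a sensitive yes/no question.

    Answer truthfully with probability p = e^eps / (e^eps + 1),
    otherwise flip the answer; each respondent gets epsilon-local-DP.
    """
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    if random.random() < p:
        return truthful_bit
    return 1 - truthful_bit

def debias_proportion(responses, epsilon):
    """Unbiased estimate of the true fraction of 1s from noisy answers."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(responses) / len(responses)
    # E[observed] = (2p - 1) * true + (1 - p); invert the linear map.
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)
```

The surveyor never learns any individual's true bit, yet the population proportion is recoverable by inverting the known flipping probability.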
Literature Overview
Joint DP (Kearns et al., 2016)
➤ Ensures externalities across agents are DP-protected.
Rényi DP (Mironov, 2017)
➤ Stronger composition analysis for ML training.
Bayesian DP (Wang et al., 2015)
➤ Incorporates prior distributions and posterior stability.
Metric DP (Andrés et al., 2013)
➤ Privacy scaled by distance in metric spaces.
Pufferfish Privacy (Kifer & Machanavajjhala, 2012)
➤ Framework for specifying protected secrets + adversarial assumptions.
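To make the composition advantage of Rényi DP concrete, here is a hedged sketch of RDP accounting for the Gaussian mechanism, following Mironov's analysis (the helper names are my own):

```python
import math

def gaussian_rdp(alpha, sensitivity, sigma):
    """RDP of order alpha for the Gaussian mechanism with noise std sigma."""
    return alpha * sensitivity**2 / (2.0 * sigma**2)

def rdp_to_dp(rdp_eps, alpha, delta):
    """Convert an RDP(alpha) guarantee to (epsilon, delta)-DP."""
    return rdp_eps + math.log(1.0 / delta) / (alpha - 1.0)

# RDP composes additively, so T releases of the same mechanism cost T times
# the per-release RDP; convert to (eps, delta)-DP only at the end.
T, alpha, delta = 100, 8.0, 1e-5
total_rdp = T * gaussian_rdp(alpha, sensitivity=1.0, sigma=4.0)
eps = rdp_to_dp(total_rdp, alpha, delta)
```

In practice one minimizes the converted ε over a grid of orders α; this tight bookkeeping is why RDP is the accountant of choice for iterative ML training.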
DP-ERM (Chaudhuri et al., 2011)
➤ Foundational convex optimization under DP.
Deep Learning with DP (Shokri & Shmatikov, 2015)
➤ Gradient perturbation for neural networks.
PATE (Papernot et al., 2016)
➤ Teacher–student framework with noisy aggregation.
DP-SGD (Abadi et al., 2016)
➤ State-of-the-art training method using gradient clipping + noise.
Federated Learning (McMahan et al., 2017)
➤ Model updates from distributed devices with DP variants.
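The core DP-SGD step from the list above (per-example gradient clipping plus Gaussian noise, as in Abadi et al.) can be sketched as follows; the function and parameter names are illustrative:

```python
import numpy as np

def dp_sgd_step(weights, per_example_grads, clip_norm, noise_multiplier,
                lr, rng=None):
    """One DP-SGD update: clip each example's gradient to L2 norm <= clip_norm,
    average, add Gaussian noise scaled by noise_multiplier * clip_norm,
    then take a gradient step."""
    if rng is None:
        rng = np.random.default_rng()
    batch = len(per_example_grads)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch,
                       size=weights.shape)
    return weights - lr * (mean_grad + noise)
```

Clipping bounds each example's influence (the sensitivity), which is what lets the added Gaussian noise translate into a formal DP guarantee via an accountant such as RDP.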
Privacy auctions (Ghosh & Roth, 2011)
➤ Buy data under DP when agents have private privacy valuations.
Privacy-aware surveys (Roth & Schoenebeck, 2012)
➤ Truthful survey mechanisms with privacy preferences.
Optimal acquisition with strategic agents (Cummings et al., 2023)
➤ Optimal pricing of data with heterogeneous privacy costs.
Central/Local DP acquisition mechanisms (Fallah et al., 2024)
➤ Optimal budget allocation under different DP models.
Privacy paradox & bias–variance trade-offs (Liao et al., 2024)
➤ When individuals misreport privacy valuations and create estimation bias.
A marketplace for data (Agarwal, Dahleh & Sarkar, 2019)
➤ Builds a data market platform optimizing payments and data utility.
Optimal data acquisition for statistical estimation (Chen et al., 2018)
➤ Designs payment rules that minimize estimation error under strategic agents.