False-name Attacks under Uncertainty: In Combinatorial Auctions, Voting Games, and Collaborative ML
Ph.D. Thesis Committee, Technion, September 2023
Yotam Gafni
Technion, Israel
Advised by:
Moshe Tennenholtz
Ron Lavi
What are False-name Attacks?
Too ominous to deal with?
Many negative results in different settings:
Approaches to Mitigation in Combinatorial Auctions
(Taken from Michal Feldman’s slides)
A New Approach: Uncertainty
Reminder: VCG & Combinatorial Auctions
Example: Two Items and Two Bidders
v1({b}) = v1({a}) = 0, v1({a,b}) = 15
v2({b}) = 10, v2({a}) = 10, v2({a,b}) = 10
– VCG allocates a1 = {a,b}, a2 = {}, and agent 1 (the couple) pays 10.
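The VCG outcome above can be verified by brute force. This enumeration sketch is mine (agent 1 is the couple from the example; the slide only states the valuations and the outcome):

```python
from itertools import product

items = ['a', 'b']
# Valuations from the example: agent 1 (the couple) only values the full bundle.
vals = {
    1: {frozenset(): 0, frozenset('a'): 0, frozenset('b'): 0, frozenset('ab'): 15},
    2: {frozenset(): 0, frozenset('a'): 10, frozenset('b'): 10, frozenset('ab'): 10},
}

def best_allocation(agents):
    # Brute-force welfare maximization: assign each item to some agent.
    best, best_w = None, -1
    for assign in product(agents, repeat=len(items)):
        bundles = {i: frozenset(it for it, who in zip(items, assign) if who == i)
                   for i in agents}
        w = sum(vals[i][bundles[i]] for i in agents)
        if w > best_w:
            best, best_w = bundles, w
    return best, best_w

alloc, welfare = best_allocation([1, 2])

# Clarke pivot payments: the externality each agent imposes on the others.
payments = {}
for i in [1, 2]:
    others = [j for j in [1, 2] if j != i]
    _, w_without = best_allocation(others)
    w_others = sum(vals[j][alloc[j]] for j in others)
    payments[i] = w_without - w_others

print(alloc, payments)  # agent 1 wins {a, b} and pays 10; agent 2 pays 0
```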
A False-name Attack for the Example
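A sketch of the attack, assuming (consistent with the 15s in the slide's figure and with Yokoo-style constructions) that bidder 2 splits into two identities, each reporting value 15 for a single item:

```python
from itertools import product

items = ['a', 'b']

def vcg(vals):
    agents = list(vals)
    def best(ags):
        # Brute-force welfare-maximizing assignment of items to agents in `ags`.
        return max(
            ({i: frozenset(it for it, w in zip(items, assign) if w == i) for i in ags}
             for assign in product(ags, repeat=len(items))),
            key=lambda b: sum(vals[i][b[i]] for i in ags))
    alloc = best(agents)
    pay = {}
    for i in agents:
        rest = [j for j in agents if j != i]
        if rest:
            alloc_without = best(rest)
            w_without = sum(vals[j][alloc_without[j]] for j in rest)
        else:
            w_without = 0
        w_others = sum(vals[j][alloc[j]] for j in rest)
        pay[i] = w_without - w_others
    return alloc, pay

E, A, B, AB = frozenset(), frozenset('a'), frozenset('b'), frozenset('ab')
# Bidder 2 splits into sybils 2a and 2b, each claiming value 15 for one item.
vals = {
    '1':  {E: 0, A: 0,  B: 0,  AB: 15},
    '2a': {E: 0, A: 15, B: 0,  AB: 15},
    '2b': {E: 0, A: 0,  B: 15, AB: 15},
}
alloc, pay = vcg(vals)
print(alloc, pay)
```

Each sybil wins its item and pays 0, so bidder 2 obtains both items for free: its utility rises from 0 under truthful bidding to 10 (its true value for any bundle).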
Is uncertainty the answer to the example?
On the left-hand side, the attacker's utility is -20; on the right-hand side, it is 10.
This Calls for a Bayesian Analysis… [Gafni, Lavi & Tennenholtz 2020]
Results…
Agent 1 \ Agent 2 | Action A | Action B | Action C
Action A          | 4, 8     | 3, 9     | 6, 17
Action B          | 2, 6     | 10, 5    | 9, 5
Action C          | 5, 10    | 3, 7     | 5, 9
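Agent 1's safety level in the matrix above is a simple maximin computation (the dictionary encoding of the row player's payoffs is mine):

```python
# Agent 1's (row player's) payoffs against Agent 2's actions A, B, C.
payoffs_1 = {
    'A': [4, 3, 6],
    'B': [2, 10, 9],
    'C': [5, 3, 5],
}
# Safety level of each action: the worst payoff it can yield.
safety = {action: min(row) for action, row in payoffs_1.items()}
maximin_value = max(safety.values())
safe_actions = sorted(a for a, v in safety.items() if v == maximin_value)
print(safety, maximin_value, safe_actions)  # value 3, attained by A and C
```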
What If There is No Bayesian Prior? [Gafni & Tennenholtz 2023]
Let’s apply ‘Safety Level’ to the First-price Auction.
The worst-case performance of any under-bid (including the exact bid) is 0, attained when the bid does not win the item.
The worst-case performance of any overbid is negative, attained when the bid wins the item and the bidder pays more than its value.
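These two claims can be checked numerically on a discrete bid grid (the grid, the value of 10, and pessimistic tie-breaking are my assumptions for illustration):

```python
def worst_case_utility(value, bid, opp_bids):
    # First-price auction: win iff strictly outbidding; lose ties (pessimistic).
    outcomes = [(value - bid) if bid > opp else 0 for opp in opp_bids]
    return min(outcomes)

value = 10
grid = range(16)  # possible opponent bids 0..15
underbids = {b: worst_case_utility(value, b, grid) for b in range(value + 1)}
overbids = {b: worst_case_utility(value, b, grid) for b in range(value + 1, 16)}

assert all(u == 0 for u in underbids.values())  # under/exact bids: safety level 0
assert all(u < 0 for u in overbids.values())    # overbids: negative safety level
```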
A Refined Safety Notion
Agent 1 \ Agent 2 | Action A | Action B | Action C
Action A          | 4, 8     | 3, 9     | 6, 17
Action B          | 2, 6     | 10, 5    | 9, 5
Action C          | 5, 10    | 3, 7     | 5, 9
- How does it apply to the first-price auction?
Discrete First-price Auction with DSL
Discrete First-price Auction with DSL (Cont.)
Ok… and how about VCG under Sybil Attacks?
Exact-bidding Strategies can come in many forms
An Attacker Can Benefit From an Exact-Bidding Attack
Conclusion
Future Work?
Bids:
3 rooms, $5,000 total
8 rooms, $10,000 total
1 room, $2,000 total
…
Collaborative ML: A Growing Interest…
Building shared models – a simple use case
Great… Where is the problem?
Other works study how competition changes the nature of individual learning (without sharing):
We study a different security risk, which we call exclusivity attacks
One-shot vs long-term
Main Question:
In a setting where all agents report truthfully and accept the model estimate,
can an attacker deviate successfully with an exclusivity attack?
Main Results Preview:
A successful attack with one-shot data sharing
A failed attack with one-shot data sharing
Formal model – Continuous Protocol
Running Example – Continuous Protocol
Revisiting max in the continuous protocol.
Here 90 < y < x.
Agent 1 is strategic. Agent 2 is truthful.
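A minimal sketch of the exclusivity attack on the max protocol, with illustrative numbers (x = 100 is my choice; the slide fixes only 90 < y < x):

```python
x = 100          # agent 1's true maximum (strategic)
true_max_2 = 90  # agent 2's true maximum (truthful)
y = 95           # agent 1's false report, with 90 < y < x

shared_estimate = max(y, true_max_2)    # the protocol's published estimate: 95
attacker_estimate = max(x, true_max_2)  # agent 1 combines its data privately: 100

# Agent 2 is stuck with a wrong estimate, while the attacker recovers the truth.
assert shared_estimate == 95
assert attacker_estimate == 100
```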
Observed history, Strategy and Vulnerability
Observed history, Strategy and Vulnerability (Cont.)
Linear regression
Challenge Intuition:
How can the attacker ‘reverse’ the effects of fake points submitted earlier?
A sneak attack
Triangulation attacks
High-level construction of triangulation for LR
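One way to see the 'reversal' mechanics in 1-LR (my illustrative sketch of the idea, not the thesis's exact construction): two fake points at the same x-coordinate with opposite residuals cancel in the OLS normal equations, so the fit returns to the original line.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(10.0)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=10)

def fit(xs, ys):
    return np.polyfit(xs, ys, 1)  # returns (slope, intercept)

base = fit(x, y)

# A fake point off the current line shifts the fit...
x_fake, d = 20.0, 30.0
y_on_line = np.polyval(base, x_fake)
shifted = fit(np.append(x, x_fake), np.append(y, y_on_line + d))

# ...but a second fake point at the same x with the opposite residual cancels
# it in the normal equations, restoring the original fit.
restored = fit(np.append(x, [x_fake, x_fake]),
               np.append(y, [y_on_line + d, y_on_line - d]))

assert not np.allclose(shifted, base)
assert np.allclose(restored, base)
```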
Example of a triangulation attack (1-LR)
Under truth
Under triangulation
Example of a triangulation attack (2-LR)
The periodic communication model
In periodic model, LR is not vulnerable
Take-away messages:
Bounds on Power Indices in Weighted Voting Games
“The number of delegates ought not to be in exact proportion to the number of inhabitants, because the influence and power of those states whose delegates are numerous, will be greater [even relative to their proportion] when compared to the influence and power of the other states...”
(The Anti-Federalist Papers)
A model of weighted voting games…
Power and proportion may not coincide…
Power and proportion may not coincide II…
Power and proportion may not coincide III…
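A small computation illustrating the mismatch (the weights [50, 49, 1] and quota 51 are my example, not taken from the slides):

```python
from itertools import permutations
from fractions import Fraction
from math import factorial

def shapley_shubik(weights, quota):
    """Fraction of orderings in which each player is pivotal."""
    n = len(weights)
    pivots = [0] * n
    for order in permutations(range(n)):
        total = 0
        for player in order:
            total += weights[player]
            if total >= quota:  # this player tips the coalition over the quota
                pivots[player] += 1
                break
    return [Fraction(p, factorial(n)) for p in pivots]

# Weights sum to 100; the quota is a simple majority of 51.
weights, quota = [50, 49, 1], 51
index = shapley_shubik(weights, quota)
proportions = [Fraction(w, sum(weights)) for w in weights]
# The weight-1 player holds index 1/6 -- far above its 1% weight share --
# while the weight-49 player holds only 1/6 despite a 49% share.
```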
Summary of the bounds we prove
We believe (and experiments support this) that the Shapley-Shubik upper bound of 2 holds not only against proportion, but against any type of false-name split.
The implication would be that splitting is not too harmful to the big players (as a whole).