Recommender Systems with Fairness Considerations and Strategic Agent Dynamics
Krishna Acharya
PhD Dissertation Defense (March 03, 2026)
Dr. Jacob Abernethy
Dr. Vidya Muthukumar
Dr. Aaron Roth
Dr. Kai Wang
Dr. Juba Ziani (Advisor)
Committee Members
Krishna Acharya
Speaker
Recommender systems are everywhere
2
Preliminaries: Users, Items, Recommendation Model
3
Model
Recommender models: A timeline
4
Thesis Statement
5
“This dissertation studies three challenges in recommender systems: ensuring fair performance across heterogeneous user populations, characterizing how strategic content producers shape the item catalog, and understanding how the shift to LLM-based semantic recommendation reopens these challenges in a fundamentally new setting.”
Overview
6
P1) User Fairness
P2) Strategic creators & item catalog evolution
P3) LLM-based recommendation
P4) Conclusion & Future directions
Oracle Efficient Algorithms for Groupwise Regret
7
Krishna Acharya, Eshwar Ram Arunachaleswaran,
Sampath Kannan, Aaron Roth, Juba Ziani, ICLR 2024
P1) User Fairness
Recap: Online learning
8
t=1
t=2
t=7
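As a recap of this online-learning setting, the classic exponential-weights (Hedge) update can be sketched as below; the learning rate `eta` and the loss values are illustrative assumptions, not numbers from the dissertation.

```python
import math

def hedge(losses_per_round, eta=0.5):
    """Exponential-weights (Hedge) over a fixed set of experts.

    losses_per_round: list of rounds; each round is a list of
    per-expert losses in [0, 1].
    Returns the final normalized weight vector over experts.
    """
    n = len(losses_per_round[0])
    cum = [0.0] * n  # cumulative loss per expert
    for losses in losses_per_round:
        for i, loss in enumerate(losses):
            cum[i] += loss
    w = [math.exp(-eta * c) for c in cum]
    z = sum(w)
    return [x / z for x in w]

# Expert 0 is consistently better, so it ends with the larger weight.
weights = hedge([[0.0, 1.0], [0.1, 0.9], [0.0, 1.0]])
```

Hedge guarantees sublinear regret against the best single expert, which is the baseline the groupwise-regret results strengthen.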
Online learning with groupwise regret
9
t=1
t=2
Group attributes: Age ∈ {Old, Young}, Race ∈ {White, Black}; each arriving user belongs to one group per attribute.
t=7
⋮
Prior work: Sublinear regret but computationally intractable
10
Snapshot of our algorithm
11
old
young
…
white
always active
AdaNormalHedge
Final prediction
Pred-young
Pred-white
Pred-agnostic
Update the internal state of only the active groups’ experts
(here: young, white, always-active)
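The snapshot above can be sketched as a sleeping-experts scheme: one expert per group plus an always-active expert, where each round only the arriving user's groups are active, and only those experts are mixed and updated. This is a simplified sketch — plain multiplicative weights stands in for AdaNormalHedge, and the class name, groups, and numbers are illustrative assumptions.

```python
import math

class GroupwiseAggregator:
    """One expert per group plus an always-active expert; only the
    arriving user's groups are active each round. The meta-learner
    (plain multiplicative weights here, standing in for
    AdaNormalHedge) mixes active experts and updates only them."""

    def __init__(self, groups, eta=0.5):
        self.weights = {g: 1.0 for g in groups + ["always"]}
        self.eta = eta

    def predict(self, user_groups, expert_preds):
        active = set(user_groups) | {"always"}
        z = sum(self.weights[g] for g in active)
        return sum(self.weights[g] / z * expert_preds[g] for g in active)

    def update(self, user_groups, expert_losses):
        # Only active groups' experts see this round's loss.
        for g in set(user_groups) | {"always"}:
            self.weights[g] *= math.exp(-self.eta * expert_losses[g])

agg = GroupwiseAggregator(["young", "old", "white", "black"])
# A young, white user arrives: only those experts (plus always) are used.
p = agg.predict(["young", "white"], {"young": 0.2, "white": 0.8, "always": 0.5})
agg.update(["young", "white"], {"young": 0.1, "white": 0.9, "always": 0.5})
```

Note that the inactive experts' states (e.g., `old`) are untouched, which is what keeps the per-round cost proportional to the number of active groups.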
Experiments
12
Our Algorithm
Improving Minimax Group Fairness in Sequential Recommendation
13
Krishna Acharya, David Wardrope, Timos Korres,
Aleksandr Petrov, Anders Uhrenholt, ECIR 2025
P1) User Fairness
Task: Sequential Recommendation
14
Given: Sequence of items a user has viewed
Predict: the most likely next item.
Model: Self Attentive Sequential recommendation (SASRec)
15
Transformer model for sequential recommendation
SASRec: Self-Attentive Sequential Recommendation [Kang & McAuley’18]
User Fairness in Recommendation
16
Group fairness: users segmented into groups (e.g., head vs. tail users)
Goal: equalize performance metrics across groups.
17
Minimax group fairness
18
Distributionally Robust Optimization (DRO)
19
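The DRO idea on this slide can be sketched as the GroupDRO-style reweighting step: groups with higher loss get exponentially larger weight, steering training toward the worst-off group. The function name, step size, and loss values are illustrative assumptions.

```python
import math

def group_dro_weights(group_losses, weights, step=1.0):
    """One GroupDRO-style reweighting step: multiply each group's
    weight by exp(step * loss) and renormalize, so the highest-loss
    group dominates the training objective."""
    new = [w * math.exp(step * loss) for w, loss in zip(weights, group_losses)]
    z = sum(new)
    return [w / z for w in new]

# Group 0 has the worst loss, so it receives the largest weight.
w = group_dro_weights([0.9, 0.2, 0.4], [1 / 3, 1 / 3, 1 / 3])
```

The training loss for the next step is then the weighted sum of per-group losses under `w`.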
Existing DRO approaches for recommendation
20
Limitations of group based DRO methods
21
GroupDRO & Streaming DRO have major limitations:
Performance drop observed
Conditional Value at Risk (CVaR) DRO
22
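The CVaR-DRO objective can be sketched as the mean loss over the worst α-fraction of users; unlike GroupDRO it needs no group labels at training time. The function name, α, and the loss values are illustrative assumptions.

```python
def cvar_loss(per_user_losses, alpha=0.3):
    """CVaR_alpha objective: average loss over the worst
    alpha-fraction of users, with no group labels required."""
    k = max(1, int(alpha * len(per_user_losses)))
    worst = sorted(per_user_losses, reverse=True)[:k]
    return sum(worst) / k

# With alpha=0.4 over 5 users, the 2 worst losses (0.9 and 0.7) are averaged.
loss = cvar_loss([0.1, 0.9, 0.5, 0.7, 0.2], alpha=0.4)
```

Because the worst α-fraction is recomputed per batch, CVaR adapts to whichever users are currently worst off, rather than to a fixed group partition.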
Experiments
Normalized Discounted Cumulative Gain
23
Leave-one-out split
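Under leave-one-out evaluation a single held-out item is relevant, so the ideal DCG is 1 and NDCG@k reduces to a simple rank-discount formula. The sketch below is a standard formulation, with illustrative item names.

```python
import math

def ndcg_at_k(ranked_items, relevant_item, k=10):
    """NDCG@k for leave-one-out evaluation: exactly one item is
    relevant, so NDCG = 1 / log2(rank + 1) if it appears in the
    top-k, else 0."""
    for rank, item in enumerate(ranked_items[:k], start=1):
        if item == relevant_item:
            return 1.0 / math.log2(rank + 1)
    return 0.0

# Held-out item "b" is ranked second, so NDCG = 1 / log2(3).
score = ndcg_at_k(["a", "b", "c"], "b", k=10)
```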
User groups
24
Gpop = {niche, diverse, popular}
Gseq = {short, medium, long}
We experiment across thresholds and resulting group splits
Single-group setting: DRO is effective, CVaR DRO best
25
Standard training
Multi-group setting: CVaR DRO shines
26
Popularity based groups
Sequence length groups
Takeaways
27
Producer equilibria and dynamics in engagement-driven recommender systems
28
Krishna Acharya, Varun Vangala, Jingyan Wang, Juba Ziani, TMLR 2025
P2) Strategic creators
Content creation game amongst producers
29
Users
Embedding space
How to maximize user engagement?
Producers
Alice
Bob
Modelling producer competition
30
Probability of user k seeing producer i’s content
Relevance score
Content serving rules
31
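A serving rule turns relevance scores into exposure probabilities; the sketch below contrasts a full softmax with a greedier top-k softmax that zeroes out all but the k highest-scoring producers. The function name, rule labels, and scores are illustrative assumptions.

```python
import math

def serving_probs(scores, rule="softmax", k=None):
    """Probability that a user is shown each producer's content.
    'softmax' spreads exposure over all producers; 'top-k softmax'
    keeps only the k highest-scoring producers (greedier serving)."""
    if rule == "top-k softmax":
        cutoff = sorted(scores, reverse=True)[k - 1]
        # exp(-inf) = 0.0, so excluded producers get zero exposure.
        scores = [s if s >= cutoff else float("-inf") for s in scores]
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

full = serving_probs([2.0, 1.0, 0.5])
greedy = serving_probs([2.0, 1.0, 0.5], rule="top-k softmax", k=1)
```

Moving from the full softmax toward small k is the "greediness" axis along which the equilibrium results on the following slides vary.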
Result: Producer strategies at Nash equilibrium are supported on basis vectors
32
Nash eq.
Structure of Equilibria & Producer specialization
Equilibria for serving rules
33
Experiments
34
Result: Greedier serving leads to catalog diversity
35
Result: Producer utility increases with greedier serving
36
[Figure: producer utility across serving rules (Round-robin, Linear, Full softmax, Top-20 softmax, Top-10 softmax), increasing as serving becomes greedier]
GLoSS: Generative Language Models with Semantic Search for Sequential Recommendation
37
Krishna Acharya, Aleksandr V. Petrov, Juba Ziani
Presented at OARS@KDD2025
P3) LLM-based recommendation
Task: Sequential Recommendation
38
Given: Sequence of items a user has viewed
Predict: the most likely next item.
Identifier (ID) based sequential recommenders
✅ Pros
39
❌ Cons
LLM based recommendation
40
3. How many candidate texts to generate?
4. How do we ground back to the item catalog?
Generate next item title using an LLM
GLoSS Architecture
41
Candidate text generation
42
Retrieving the closest matching items
43
Sparse retrieval: TF-IDF, BM25
Dense retrieval: E5, Qwen-embedder
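The grounding step maps LLM-generated title text back to the closest real catalog item via retrieval. A minimal sketch, using bag-of-words cosine similarity as a toy stand-in for the BM25 / dense (E5) retrievers above; the catalog titles and query are illustrative assumptions.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    num = sum(a[t] * b.get(t, 0) for t in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def ground(generated_text, catalog):
    """Return the catalog item whose title is closest to the
    generated text (toy stand-in for BM25 / dense retrieval)."""
    query = Counter(generated_text.lower().split())
    docs = {title: Counter(title.lower().split()) for title in catalog}
    return max(catalog, key=lambda t: cosine(query, docs[t]))

catalog = ["The Matrix", "Toy Story 2", "The Matrix Reloaded"]
best = ground("matrix reloaded sequel", catalog)
```

This guarantees every recommendation is a real catalog item even when the LLM generates a title that does not exist verbatim.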
Experiments
Metrics:
44
GLoSS vs ID-based models
45
GLoSS vs LLM-based benchmarks
46
Dense retrieval greatly improves metrics
47
Strong metrics across user interaction lengths
48
Short, medium, and long user sequences
Takeaways
2. SOTA:
3. Grounding generated text:
4. Strong metrics across sequence lengths:
49
Future directions
50
New risks from economically motivated producers
1. Shilling attack: seller introduces fake users to boost its item visibility
2. Semantic rewrite: seller manipulates its item's metadata to increase visibility
Publications
51
Part of this talk:�User Fairness
Competition & Item Diversity
LLM-based recommendation
Not part of this talk:
Algorithmic Fairness
Game theory, online learning
Thanks to all my co-authors!
52
Dr. Ashwin Pananjady
Dr. Aaron Roth
Dr. Juba Ziani
Dr. Sampath Kannan
Dr. Eshwar Ram Arunachaleswaran
Dr. Aleksandr V. Petrov
Lokranjan Lakshmikanthan
Dr. Anders Kirk Uhrenholt
Dr. David Wardrope
Timos Korres
Varun Vangala
Dr. Jingyan Wang
Dr. Franziska Boenisch
Rakshit Naidu
Dr. Vidya Muthukumar
Jim James
Etash Guha
Guanghui Wang
Thank you, committee!
Dr. Jacob Abernethy
Dr. Vidya Muthukumar
Dr. Aaron Roth
Dr. Kai Wang
Dr. Juba Ziani (Advisor)
Committee Members
Questions
54
P1) User Fairness
P2) Strategic creators & item catalog evolution
P3) LLM-based recommendation