1 of 14

Joint Bias Mitigation and Privacy Preservation Feature Representation Framework for Vision

Presenter: Maxime Lucienne Gevers


2 of 14

Motivation

Section 1


3 of 14

Political and Legal Decisions

European AI Act (approved by the European Parliament on 16 April 2024; takes effect in 2025)

Unacceptable risk

  • Cognitive behavioural manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behaviour in children
  • Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
  • Biometric identification and categorisation of people
  • Real-time and remote biometric identification systems, such as facial recognition

The General Data Protection Regulation (GDPR) is the primary law protecting personal data in the EU, which came into effect on May 25, 2018.

EU AI Act: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence


4 of 14

Utility Trade-off

Bottleneck question: how can we remove semantic information that reveals sensitive attributes while keeping the information necessary for good performance on the downstream task? (One common adversarial formulation is sketched after the list below.)

  • Using sensitive attributes for prediction is legally prohibited
  • Not all bias- or privacy-relevant attributes are identified in advance
  • Humans are seen as the gold standard
  • Spurious correlations make sensitive attributes recoverable by proxy
  • Models can amplify existing dataset bias (bias amplification)
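
To make the trade-off concrete, one widely used formulation (not necessarily the one presented in this work) trains the encoder adversarially against a classifier of the sensitive attribute via a gradient-reversal layer. The sketch below uses illustrative PyTorch layer sizes and names:

```python
# Hedged sketch: adversarial feature removal with a gradient-reversal layer
# (GRL). The encoder helps the task head while *hurting* an adversary that
# tries to predict the sensitive attribute. All module sizes and names are
# illustrative, not the presented framework.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip (and scale) the gradient flowing back into the encoder.
        return -ctx.lam * grad_output, None

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
task_head = nn.Linear(128, 10)  # downstream task (e.g., action class)
adv_head = nn.Linear(128, 2)    # adversary predicting the sensitive attribute

opt = torch.optim.Adam([*encoder.parameters(), *task_head.parameters(),
                        *adv_head.parameters()], lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(x, y_task, s_sensitive, lam=1.0):
    z = encoder(x)
    loss_task = ce(task_head(z), y_task)
    # Minimizing the adversary's loss through the GRL simultaneously pushes
    # the encoder to remove sensitive information from z.
    loss_adv = ce(adv_head(GradReverse.apply(z, lam)), s_sensitive)
    loss = loss_task + loss_adv
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss_task.item(), loss_adv.item()
```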


5 of 14

Privacy Preserving: Anonymization


Hinojosa, C., Niebles, J. C., & Arguello, H. (2021). Learning privacy-preserving optics for human pose estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 2573-2582).

Dave, I. R., Chen, C., & Shah, M. (2022). SPAct: Self-supervised privacy preservation for action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 20164-20173).
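
The works cited above learn the anonymization transform itself (in the camera optics, or in a self-supervised framework). As a point of contrast, a minimal non-learned baseline simply obscures detected identity-revealing regions; the sketch below assumes OpenCV and an illustrative input path, not the cited approaches:

```python
# Minimal baseline sketch (not the cited learned approaches): anonymize an
# image by pixelating detected face regions before it reaches a downstream
# model. Uses OpenCV's bundled Haar cascade; the file path is illustrative.
import cv2

def pixelate_faces(image_bgr, blocks=8):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        roi = image_bgr[y:y + h, x:x + w]
        # Downscale then upscale with nearest-neighbour to pixelate.
        small = cv2.resize(roi, (blocks, blocks),
                           interpolation=cv2.INTER_LINEAR)
        image_bgr[y:y + h, x:x + w] = cv2.resize(
            small, (w, h), interpolation=cv2.INTER_NEAREST)
    return image_bgr

anonymized = pixelate_faces(cv2.imread("frame.jpg"))
```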

6 of 14

Bias Mitigation: Feature Removal


Wang, T., Zhao, J., Yatskar, M., Chang, K. W., & Ordonez, V. (2019). Balanced datasets are not enough: Estimating and mitigating gender bias in deep image representations. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 5310-5319).
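
Wang et al. also stress measuring how much protected information still leaks from a representation after mitigation. A hedged sketch of such a leakage probe, assuming pre-extracted feature tensors and using illustrative names, could look like this:

```python
# Hedged sketch of a "leakage" probe in the spirit of Wang et al. (2019):
# train a fresh linear classifier on frozen features and report how well it
# recovers the protected attribute. High probe accuracy means sensitive
# information remains. Feature extraction and data loading are assumed.
import torch
import torch.nn as nn

def attribute_leakage(feats_train, s_train, feats_test, s_test, epochs=50):
    probe = nn.Linear(feats_train.shape[1], int(s_train.max()) + 1)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        ce(probe(feats_train), s_train).backward()
        opt.step()
    with torch.no_grad():
        pred = probe(feats_test).argmax(dim=1)
    return (pred == s_test).float().mean().item()
```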

7 of 14

Method

Section 2


8 of 14

Problem Statement

A sensitive or protected attribute is defined as demographic information about an individual (e.g., gender, age, or ethnicity) that is legally prohibited from being used for model predictions on uncorrelated tasks.

A privacy attribute is understood as identity-revealing information, i.e., information from which the identity of a person can be retrieved. (One common formalization of the resulting objective is sketched below.)
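
One common way to make this precise (notation illustrative, not necessarily the framework's own objective) is a min-max problem over an encoder f, a task head g, and an adversarial attribute predictor h:

```latex
% The task label y should stay predictable from z = f(x), while the
% sensitive attribute s should not, even for the best adversary h:
\min_{f,\,g}\; \mathbb{E}\big[\ell\big(g(f(x)),\, y\big)\big]
\;-\; \lambda \,\min_{h}\; \mathbb{E}\big[\ell\big(h(f(x)),\, s\big)\big]
```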


9 of 14

Model


10 of 14

HQ-SAM


Ke, L., Ye, M., Danelljan, M., Tai, Y. W., Tang, C. K., & Yu, F. (2024). Segment anything in high quality. Advances in Neural Information Processing Systems, 36.
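
If HQ-SAM is used to obtain person masks (e.g., to isolate identity-revealing regions), a promptable-segmentation call could look roughly like the sketch below. This assumes the sam-hq release mirrors the original Segment Anything API; the checkpoint path and point prompt are illustrative:

```python
# Hedged sketch: obtaining a high-quality person mask with an HQ-SAM /
# Segment Anything-style predictor. Assumes the sam-hq package mirrors the
# original SAM API; checkpoint path and point prompt are illustrative.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_hq_vit_h.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("person.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single foreground click on the person; label 1 = foreground.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),
    multimask_output=True)
best_mask = masks[np.argmax(scores)]  # boolean HxW mask of the person
```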

11 of 14

Experiments

Section 3


12 of 14

Evaluation - Datasets

Sensitive Attribute Classification

  • FACET consists of 32K diverse, high-resolution, privacy-protecting images labeled across 13 person attributes and 52 person classes. Its authors partnered with expert annotators to label person-related attributes (e.g., perceived gender presentation, perceived skin tone, hairstyle) and person-related classes (e.g., basketball player, doctor).
  • VISPR is an image dataset annotated with a diverse set of personal information, such as skin color, face, gender, clothing, and document information.

Action Detection

  • UCF101 and HMDB51 are two of the most commonly used datasets for human action recognition.
  • PA-HMDB is a dataset of 515 videos with video-level action annotations and frame-wise human privacy annotations, covering 51 actions and 5 human privacy attributes. (A loader sketch follows below.)
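
If the pipeline consumes UCF101 clips, torchvision ships a dataset class for it; a minimal loader sketch, with illustrative paths and assuming the PyAV video backend is installed, follows:

```python
# Hedged sketch: loading UCF101 clips with torchvision's built-in dataset
# class. Paths are illustrative; the official train/test split files must be
# downloaded separately, and video decoding requires the PyAV backend.
from torchvision.datasets import UCF101
from torch.utils.data import DataLoader

train_set = UCF101(
    root="data/UCF-101",                      # extracted .avi videos
    annotation_path="data/ucfTrainTestlist",  # official split files
    frames_per_clip=16,
    step_between_clips=16,
    fold=1,
    train=True)

# Each item is (video, audio, label); audio lengths vary, so keep the batch
# as a plain list instead of stacking with the default collate function.
loader = DataLoader(train_set, batch_size=8, shuffle=True,
                    collate_fn=lambda batch: batch)
print(len(train_set), "clips")
```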


13 of 14

Benchmark


Dave, I. R., Chen, C., & Shah, M. (2022). SPAct: Self-supervised privacy preservation for action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 20164-20173).

14 of 14

Questions?
