
Contact: yueniu@usc.edu

Overview

[Attached file: inference_VGG_ResNet.pdf]

Short Abstract

We propose an asymmetric model decomposition framework, AsymML, to 1) accelerate training/inference using parallel hardware, and 2) preserve privacy using trusted execution environments (TEEs). By exploiting the low-rank characteristics of data and intermediate features, AsymML asymmetrically splits a DNN model into a trusted part and an untrusted part:

The trusted part handles the privacy-sensitive data but incurs small compute and memory costs, while the untrusted part is computationally intensive but not privacy-sensitive. Privacy and performance are then guaranteed by delegating the trusted part to TEEs and the untrusted part to GPUs.
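A minimal sketch of the idea (illustrative only; the function name, the rank budget, and the use of a plain truncated SVD are assumptions, not the paper's implementation): the trusted part keeps a low-rank approximation of the data, and the untrusted part receives only the residual.

```python
import numpy as np

def asym_split(x: np.ndarray, rank: int):
    """Split a 2-D feature map into a low-rank (trusted) part and a residual
    (untrusted) part via truncated SVD. Illustrative sketch only."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    trusted = (u[:, :rank] * s[:rank]) @ vt[:rank, :]  # small, privacy-sensitive part (TEE)
    residual = x - trusted                             # large, compute-heavy part (GPU)
    return trusted, residual

# Example: data whose energy concentrates in a few principal components.
x = np.random.randn(64, 8) @ np.random.randn(8, 64)   # (approximately) rank-8 input
trusted, residual = asym_split(x, rank=8)
print(np.allclose(trusted + residual, x))              # True: the split is lossless
print(np.linalg.norm(residual) / np.linalg.norm(x))    # near-zero residual for a truly low-rank input
```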

Evaluation

We evaluate AsymML from the following perspectives: 1) training performance, 2) inference performance, 3) information leakage, and 4) running-time breakdown. For training performance, we use SGX-only and GPU-only execution as two baselines; for inference performance, we include one more baseline: Slalom []. We measure the mutual information between inputs and the activations exposed to GPUs as the potential information leakage in AsymML. For the running-time breakdown, we split the total running time into time spent on SGX and on the GPU, and use it to identify design bottlenecks in the system.
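As a rough illustration of the leakage metric, a histogram-based mutual information estimate between flattened inputs and GPU-side activations could look like the sketch below (the binning-based estimator and all parameters are assumptions; the paper's estimator may differ):

```python
import numpy as np

def binned_mutual_information(x: np.ndarray, y: np.ndarray, bins: int = 32) -> float:
    """Histogram-based estimate of I(X; Y) in bits for two flattened tensors.
    A crude stand-in for the leakage metric used in the evaluation."""
    x, y = x.ravel(), y.ravel()
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)                 # marginal of X
    py = pxy.sum(axis=0, keepdims=True)                 # marginal of Y
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Example: leakage should be small when GPU-side activations are only weakly
# correlated with the inputs.
inputs = np.random.randn(10_000)
activations = 0.2 * inputs + np.random.randn(10_000)
print(binned_mutual_information(inputs, activations))
```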

 


Motivation

Data privacy is crucial in many applications.

GPUs are capable of high performance, but unable to guarantee privacy;

TEEs achieve privacy protection at the cost of performance degradation.

Can we combine them and exploit both advantages (performance and privacy)?

Details

3LegRace: Privacy-Preserving DNN Training over TEEs and GPUs (PETS 2022)

Yue Niu, Ramy E. Ali, Salman Avestimehr

Ming Hsieh Dept. of Electrical and Computer Engineering

AsymML Decomposition Framework

Result figures: 1) training performance; 2) inference performance; 3) accuracy vs. information leakage; 4) robustness against model inversion attacks

[Figure: original data, residuals, and data reconstructed by a model inversion attack]

[QR code: scan for the paper]

Theoretical Guarantee I: In CNNs, low-rank input generates low-rank output:

Rank(output) ≤ C(k) · Rank(input), where C(k) is a constant depending on the kernel size k
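A quick numerical sanity check of this claim (an assumed illustration that takes C(k) = k^2; the kernel size, channel count, and input size are arbitrary, not values from the paper):

```python
import torch

torch.manual_seed(0)
k, rank_in = 3, 4
# Low-rank single-channel input built from rank_in outer products.
u, v = torch.randn(rank_in, 64), torch.randn(rank_in, 64)
x = (u.t() @ v).reshape(1, 1, 64, 64)                  # rank(x) <= rank_in

conv = torch.nn.Conv2d(1, 16, kernel_size=k, padding=1, bias=False)
with torch.no_grad():
    y = conv(x)

# Numerical rank of each output channel (a 2-D slice of the output).
out_ranks = [int(torch.linalg.matrix_rank(y[0, c])) for c in range(y.shape[1])]
print(max(out_ranks), "<=", k * k * rank_in)           # output rank stays within k^2 * rank(input)
```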

Theoretical Guarantee II: AsymML is (ε, δ)-differentially private given the noise added to the residuals
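The mechanism behind this guarantee can be sketched as a standard Gaussian mechanism applied to the residual before it leaves the TEE (the clipping bound and noise scale below are placeholders, not the paper's calibrated values):

```python
import numpy as np

def noisy_residual(residual: np.ndarray, clip: float = 1.0, sigma: float = 0.5,
                   seed: int = 0) -> np.ndarray:
    """Clip the residual's L2 norm, then add Gaussian noise before releasing it
    outside the TEE. With sigma calibrated to the clip bound (the sensitivity),
    the released residual satisfies (eps, delta)-DP; values here are placeholders."""
    rng = np.random.default_rng(seed)
    norm = np.linalg.norm(residual)
    if norm > clip:
        residual = residual * (clip / norm)             # bound the sensitivity
    return residual + rng.normal(0.0, sigma * clip, size=residual.shape)

released = noisy_residual(np.random.randn(32, 32))      # residual as it would leave the TEE
```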