Confidentiality-preserving Proof Theories for Distributed Proof Systems
Kazuhiro Minami
National Institute of Informatics
FAIS 2011
Distributed proving is an effective way to combine information in different administrative domains
Distributed Proof System
Quoted fact
Protecting each domain’s confidential information is crucial
Determining the safety of a system involving multiple principals is not trivial
Suppose that principal p0 is willing to disclose the truth of fact f0 only to p2
What if p2 still derives fact f2?
Problem Statements
Outline
Abstract System Model
Datalog inference rule:
Reference System D[IS]
(COND)
(SAYS)
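The reference system's derivations work like standard bottom-up Datalog evaluation: keep applying rules until no new fact appears. The sketch below is illustrative only; the rule encodings standing in for (COND) and (SAYS) are my own propositional simplifications, not the paper's actual inference rules.

```python
# Minimal bottom-up Datalog-style evaluation (illustrative encoding).
# A fact is a tuple such as ("holds", "p0", "f0"); a rule pairs a body
# (frozenset of facts) with a head fact derived when the body is satisfied.

def close(facts, rules):
    """Repeatedly apply rules until no new fact is derivable (a fixpoint)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= known and head not in known:
                known.add(head)
                changed = True
    return known

# Toy stand-ins for the two rule shapes:
#   COND-style: a principal derives a new local fact from one it holds
#   SAYS-style: a fact a principal holds becomes a quoted "says" fact
rules = [
    (frozenset({("holds", "p0", "f0")}), ("holds", "p0", "f1")),
    (frozenset({("holds", "p0", "f0")}), ("says", "p0", "f0")),
]
kb = close({("holds", "p0", "f0")}, rules)
```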
TTP is a fixpoint function that computes the final state of a system
Trusted Third Party (TTP)
[Diagram: principals p1, p2, …, pn submit their local knowledge bases KB1, KB2, …, KBn to the TTP, which applies inference rules I and returns each principal pi its final state fixpointi(KB)]
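The TTP picture can be sketched as follows: pool every principal's local KB, close the pool under the shared inference rules, then project each principal's final local state back out. Function and atom names below are my own, not the paper's.

```python
# Illustrative TTP-style fixpoint over per-principal knowledge bases.
# An atom is a (principal, fact) pair; rules map a body set to a head atom.

def ttp_fixpoint(local_kbs, rules):
    """local_kbs: dict mapping principal -> set of (principal, fact) atoms."""
    global_kb = set().union(*local_kbs.values())
    changed = True
    while changed:                      # naive iteration to a fixpoint
        changed = False
        for body, head in rules:
            if body <= global_kb and head not in global_kb:
                global_kb.add(head)
                changed = True
    # Each principal's final local state: the atoms it owns.
    return {p: {a for a in global_kb if a[0] == p} for p in local_kbs}

# Toy rule: p1 holding f0 lets p2 derive f2.
rules = [(frozenset({("p1", "f0")}), ("p2", "f2"))]
final = ttp_fixpoint({"p1": {("p1", "f0")}, "p2": set()}, rules)
```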
Soundness Requirement
Definition
Definition (Soundness)
A distributed proof system D[I] is sound if
A confidentiality-preserving system D[I] must not prove a fact that is not provable in the reference system D[IS]
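Writing fixpoint_I(KB) for the final state computed under rules I, the soundness requirement can be stated as follows (my formalization of the slide's wording):

```latex
\[
  D[I] \text{ is sound} \iff
  \forall KB.\ \mathrm{fixpoint}_{I}(KB) \subseteq \mathrm{fixpoint}_{I_S}(KB)
\]
```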
Outline
Confidentiality Policies
Attack Model
[Diagram: system D with a set A of malicious principals]
Fact f0 is confidential because no principal in A is authorized to learn its truth
Fact f1 is NOT confidential because p4 is authorized to learn its truth
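The two bullet points amount to a simple predicate: a fact is confidential with respect to an attacker set A exactly when no principal in A is authorized to learn its truth. The `acl` mapping below is an illustrative stand-in for the system's confidentiality policies.

```python
# Illustrative confidentiality check: fact f is confidential w.r.t. the
# attacker set iff no attacker appears in f's authorization list.

def is_confidential(fact, attackers, acl):
    """acl maps each fact to the set of principals authorized to learn it."""
    return not (attackers & acl.get(fact, set()))

# Mirrors the slide: only p2 may learn f0; p4 may learn f1.
acl = {"f0": {"p2"}, "f1": {"p4"}}
attackers = {"p3", "p4"}
```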
Attack Model (Cont.)
Malicious principals only use their initial and final states to perform inferences
Attack Model (Cont.)
Malicious principals only use their initial and final states, which are available to them, to perform inferences
Sutherland’s non-deducibility model reasons about inferences by considering all possible worlds W
Consider two information functions v1: W → X and v2: W → Y.
[Diagram: public view v1: W → X and private view v2: W → Y; observing x only narrows the worlds to W’ = { w | v1(w) = x }, whose image Y’ under v2 still contains distinct values y and y’, so deducing the private value is impossible]
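Over a finite set of worlds, nondeducibility can be checked directly: the private view is nondeducible from the public view when every public observation x is compatible with every private value y. The function below is my own finite sketch of Sutherland's condition.

```python
from itertools import product

# Sutherland-style nondeducibility over finite worlds: observing v1(w) = x
# must not rule out any private value y in the range of v2.

def nondeducible(worlds, v1, v2):
    xs = {v1(w) for w in worlds}
    ys = {v2(w) for w in worlds}
    return all(
        any(v1(w) == x and v2(w) == y for w in worlds)
        for x, y in product(xs, ys)
    )

# Toy example: a world is a (public bit, private bit) pair.
worlds = [(0, 0), (0, 1), (1, 0), (1, 1)]
safe = nondeducible(worlds, v1=lambda w: w[0], v2=lambda w: w[1])        # True
leaky = nondeducible([(0, 0), (1, 1)], lambda w: w[0], lambda w: w[1])   # False
```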
Nondeducibility considers information flow between two information functions regarding system configuration
[Diagram: over the set of initial configurations, function v1 gives the initial and final states of the malicious principals in set A, and function v2 gives the confidential facts actually maintained by non-malicious principals; safety requires no information flow between v1 and v2]
Safety Definition
We say that a distributed proof system D[P, I] is safe if
for every possible initial state KB,
for every possible subset of principals A,
for every possible subset of confidential facts Q,
there exists another initial state KB’ such that
Malicious principals in A have the same initial and final local states
Non-malicious principals could possess any subset of the confidential facts
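Writing v1 for the attackers' combined initial and final states and v2 for the confidential facts held by non-malicious principals, the quantified conditions above can be rendered as (my formalization of the slide):

```latex
\[
  \forall KB\ \forall A\ \forall Q\ \exists KB':\quad
  v_1(KB') = v_1(KB)\ \wedge\ v_2(KB') = Q
\]
```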
Outline
DAC System D[IDAC]
Enforce confidentiality policies on a principal-to-principal basis
(COND)
(DAC-SAYS)
Example Derivations in D[IDAC]
(DAC-SAYS)
(COND)
D[P, IDAC] is safe because deviations in one principal's state are invisible to the other principals
Let P and A be {p0, p1} and {p1}, respectively
[Diagram: two runs with initial states (KB0, KB1) and (KB’0, KB1)]
Principal p1 cannot distinguish KB0 and KB’0
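The indistinguishability argument can be made concrete: under a DAC-style rule, p0 quotes a fact only to principals on that fact's ACL, so an unauthorized p1 observes the same view whether or not p0 actually holds the fact. All names below are illustrative, not the paper's notation.

```python
# Illustrative DAC-SAYS: p0 discloses "p0 says f" only to authorized principals.

def view_of(observer, kb0, acl):
    """What `observer` learns from p0's knowledge base kb0 under DAC."""
    return {("p0 says", f) for f in kb0 if observer in acl.get(f, set())}

acl = {"f0": {"p2"}}     # only p2 may learn f0's truth
kb0 = {"f0"}             # world where p0 holds f0
kb0_alt = set()          # alternative world where p0 does not hold f0

# p1 is not authorized for f0, so its views of the two worlds coincide.
same = view_of("p1", kb0, acl) == view_of("p1", kb0_alt, acl)   # True
```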
NE System D[INE]
Inference Rules INE
(ECOND)
(DEC1)
(DEC2)
(ENC-SAYS)
Example Derivations
(ENC-SAYS)
(ECOND)
(ENC-SAYS)
(DEC1)
(DEC1)
(DEC2)
(ECOND)
Analysis of System D[INE]
Malicious principals A
NE System is Safe
Conversion Method – Part 1
replace the sequence above with
Conversion Method – Part 2
CE System D[ICE] is NOT safe
(CE-DEC)
Summary
Thank you!