WHAT YOU LOSE IS WHAT YOU LEAK: INFORMATION LEAKAGE IN DECLASSIFICATION POLICIES
A. Banerjee, R. Giacobazzi and I. Mastroeni
Kansas State University, Manhattan (KS), USA
Università di Verona, Verona, Italy
MFPS 2007
What you lose is what you leak – p.1/19
Overview
By exploiting the strong relation between completeness and non-interference we can obtain the following results:
Model declassification as a forward completeness problem for the weakest precondition semantics;
Derive counterexamples to a given declassification policy;
Refine a given declassification policy;
We can model declassification as a model checking problem (see the relation with robust declassification)
Standard Non-Interference
[Diagram: ⟦P⟧ takes a private input and a public input to a public output]
∀l : L, ∀h1, h2 : H. ⟦P⟧(h1, l)L = ⟦P⟧(h2, l)L
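On finite toy domains this definition can be checked by exhaustive enumeration; a minimal Python sketch (both programs are invented examples, not from the slides):

```python
# Brute-force check of standard non-interference on finite toy domains.
def secure(P, highs, lows):
    # NI: for every low input, all high inputs give the same low output.
    return all(len({P(h, l) for h in highs}) == 1 for l in lows)

P_ok = lambda h, l: l + 1        # low output ignores h: secure
P_bad = lambda h, l: h % 2       # low output reveals the parity of h

highs, lows = range(-4, 5), range(-4, 5)
assert secure(P_ok, highs, lows)
assert not secure(P_bad, highs, lows)
```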
NI: A completeness problem
Recall that [Joshi & Leino’00]
P is secure iff HH ; P ; HH = P ; HH
Let X = 〈XH, XL〉 ⇒ H(X) def= 〈⊤H, XL〉 ∈ uco(℘(V))
HH ; P ; HH = P ; HH
⇓ H◦⟦P⟧◦H = H◦⟦P⟧
⇒ A COMPLETENESS PROBLEM
[Giacobazzi & Mastroeni ’05]
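A small executable sketch of this reformulation, with H as the closure that forgets the high component of a set of states and ⟦P⟧ as a collecting semantics (the domains and the two programs are assumed toy examples):

```python
# H(X) = ⟨⊤_H, X_L⟩ on sets of (high, low) states; NI holds iff
# H∘⟦P⟧∘H = H∘⟦P⟧, i.e. H is complete for the collecting semantics.
HIGHS, LOWS = range(4), range(4)
ALL = {(h, l) for h in HIGHS for l in LOWS}

def H(X):
    lows = {l for (_, l) in X}
    return {(h, l) for h in HIGHS for l in lows}   # forget the high part

def sem(P):
    # collecting semantics: apply P pointwise to a set of states
    return lambda X: {P(h, l) for (h, l) in X}

P_ok = sem(lambda h, l: (0, l))        # low output independent of h
P_bad = sem(lambda h, l: (0, h % 2))   # leaks the parity of h

def complete(F):
    # check the equation on every singleton input (enough for this sketch)
    return all(H(F(H({s}))) == H(F({s})) for s in ALL)

assert complete(P_ok)
assert not complete(P_bad)
```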
Declassified NI
[Diagram: ⟦P⟧ with the property φ declassified on the private input]
φ ∈ Abs(℘(VH)): φ(h1) = φ(h2) ⇒ ⟦P⟧(h1, l)L = ⟦P⟧(h2, l)L
[Mastroeni ’05]
Modelling declassification: A running example
Let φ = Parity def= {⊤, Even, Odd, ∅},
{h ∈ Z}
h := |h |;
{(h = 0 ∧ l = 0) ∨ h > 0}
while (h > 0) do (h := h − 1; l := h) endw
{l = 0}
Z ∈ φ ⇒ φ is ok!
{h = 0}
h := |h |;
{h = 0 ∧ l = a}
while (h > 0) do (h := h − 1; l := h) endw
{l = a ≠ 0}
{0} ∉ φ ⇒ φ is not ok!
Ha def= {〈h, l〉 | h ∈ Z, l = a}  (a the value observed in output).
Wlp: H0 ↦ {〈h, l〉 | h ≠ 0, l ∈ Z} ∪ {〈0, 0〉}
     Ha ↦ {〈0, a〉}  (a ≠ 0)
P secure with φ declassified ⇔ Hφ◦WlpP◦H = WlpP◦H
DNI: A completeness problem (1)
Let Hφ be the abstract domain declassifying the property φ of the private input:
H◦⟦P⟧◦Hφ = H◦⟦P⟧ ⇔ Hφ◦WlpP◦H = WlpP◦H
⇓ To release φ means to distinguish between elements in φ!
[Diagram: from the output abstraction 〈⊤, xL〉, WlpP must land inside the input abstraction Hφ(〈XH, XL〉) = 〈φ(XH), XL〉]
Deriving counterexamples: A running example
Consider again φ = Parity def= {⊤, Even, Odd, ∅},
{h = 0} ⇒ Even is split into {0} and Even ∖ {0}
h := |h |;
{h = 0 ∧ l = a}
while (h > 0) do (h := h − 1; l := h) endw
{l = a ≠ 0}
Let l = 5, h1 = 0 ∈ Even and h2 = 2 ∈ Even: ⟦P⟧(〈0, 5〉) = 〈0, 5〉 ≠ 〈0, 0〉 = ⟦P⟧(〈2, 5〉)
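The counterexample can be replayed by running the program concretely; a Python transcription of the slide's P:

```python
# P: h := |h|; while (h > 0) do (h := h - 1; l := h) endw.
# Final l is 0 unless the loop never runs (h = 0), in which case
# l keeps its initial value -- so h = 0 is distinguishable inside Even.
def P(h, l):
    h = abs(h)
    while h > 0:
        h = h - 1
        l = h
    return (h, l)

assert P(0, 5) == (0, 5)   # loop skipped: initial l leaks through
assert P(2, 5) == (0, 0)   # same parity class Even, different public output
assert P(0, 5) != P(2, 5)
```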
DNI: A completeness problem (2)
Let Hφ be the abstract domain declassifying the property φ of the private input:
H◦⟦P⟧◦Hφ = H◦⟦P⟧ ⇔ Hφ◦WlpP◦H = WlpP◦H
Counterexample
[Diagram: when WlpP(〈⊤, xL〉) falls outside 〈φ(XH), XL〉, the incompleteness yields a counterexample; the gap between the two is the leakage]
Refining policies: An example
Consider φ = { {〈h1, h2, . . . , hn〉 | (h1 + h2 + . . . + hn)/n = a} | a ∈ Z }
{h1 = a}
h1 := h1; h2 := h2; . . . ; hn := hn
{(h1 + h2 + . . . + hn )/n = a}
avg := declassify((h1 + h2 + . . . + hn )/n);
{avg = a}
{〈h1, h2, . . . , hn〉 | (h1 + h2 + . . . + hn)/n = a, h1 = a} ∉ φ ⇒ φ is not ok!
Ha def= {〈h1, . . . , hn, avg〉 | avg = a}  (a the value observed in output).
Wlp: Ha ↦ {〈a, h2, . . . , hn, a〉 | avg = a}
P secure with φ′ declassified ⇔ φ′ = φ ⊓ {Wlp(Ha) | a ∈ Z}
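The refinement step φ′ = φ ⊓ {Wlp(Ha) | a ∈ Z} can be approximated on a finite fragment by intersecting the partition of φ with the partition induced by what the program actually reveals; a generic sketch where the `leak` function is a hypothetical stand-in for the wlp analysis:

```python
# Finite sketch of policy refinement: split each class of the declared
# policy by the partition that the program's observable output induces.
def refine(policy, observe, secrets):
    # two secrets stay together iff neither the policy nor the
    # observation distinguishes them
    key = lambda s: (policy(s), observe(s))
    return {k: {s for s in secrets if key(s) == k}
            for k in {key(s) for s in secrets}}

# n = 2 secrets over a small range; the declared policy releases the
# average, but the (hypothetical) program also reveals h1.
secrets = [(h1, h2) for h1 in range(4) for h2 in range(4)]
avg = lambda s: (s[0] + s[1]) / 2
leak = lambda s: s[0]

refined = refine(avg, leak, secrets)
# the refined policy is strictly finer than the average alone
assert len(refined) > len({avg(s) for s in secrets})
```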
DNI: A completeness problem (3)
Let Hφ be the abstract domain declassifying the property φ of the private input:
H◦⟦P⟧◦Hφ = H◦⟦P⟧ ⇔ Hφ◦WlpP◦H = WlpP◦H
Refinement
[Diagram: the policy is refined by adding WlpP(〈⊤, xL〉) to the input abstraction 〈φ(XH), XL〉, restoring completeness]
Example: Oblivious Transfer Protocol
[C. Morgan]
Ted: trusted party
          Alice                           Bob
Hid:  r; d; c ∈ {0, 1}; m         m0; m1; r0; r1
Vis:  m0; m1; r0; r1; f0; f1; e   c; m; d; r; f0; f1; e
Bob’s point of view: he must not see m1⊕c
r0, r1 :∈ M ; d :∈ {0, 1};
r := rd ;
{(c = d, f0 = m0 ⊕ r0, f1 = m1 ⊕ r1) ∨ (c ≠ d, f0 = m0 ⊕ r1, f1 = m1 ⊕ r0)}
e := c ⊕ d ;
{f0 = m0 ⊕ re , f1 = m1 ⊕ r1⊕e }
f0, f1 := m0 ⊕ re , m1 ⊕ r1⊕e ;
m := fc ⊕ r ;
{f0; f1; m}
Soundness guarantees that Bob knows m = mc, fc, rd
Wlp guarantees that Bob knows only fc = mc ⊕ rd and f1⊕c = m1⊕c ⊕ r1⊕d
⇓ f1⊕c tells almost nothing about the secret m1⊕c
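The protocol can be simulated directly, assuming single-bit messages with ⊕ taken as XOR; the simulation confirms that Bob always recovers exactly mc:

```python
import random

# Simulation of the 1-out-of-2 oblivious transfer above, with single-bit
# messages and XOR for ⊕ (an assumed concretisation of the slide's P).
def ot(m0, m1, c):
    r0, r1 = random.randint(0, 1), random.randint(0, 1)  # Ted's pads
    d = random.randint(0, 1)
    r = (r0, r1)[d]                      # sent privately to Bob
    e = c ^ d                            # Bob's masked choice bit
    f0 = m0 ^ (r0, r1)[e]
    f1 = m1 ^ (r0, r1)[1 ^ e]
    m = (f0, f1)[c] ^ r                  # Bob unmasks his chosen message
    return m, (f0, f1)[1 ^ c]            # leftover value stays padded

for m0 in (0, 1):
    for m1 in (0, 1):
        for c in (0, 1):
            m, _ = ot(m0, m1, c)
            assert m == (m0, m1)[c]      # Bob learns exactly m_c
```

The leftover f1⊕c is m1⊕c XORed with a pad Bob never receives, which is the one-time-pad argument behind "tells almost nothing".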
Declassified Abstract non-interference
P def=
if(d ≤ x +y ≤ d +dx +dy ∧ −dy ≤ x −y ≤ dx ) then
if(x ≥ 0 ∧x ≤ d) then xL := d ;
if(x > d ∧ x ≤ dx ) then xL := x ;
if(x > dx ∧ x ≤ dx +d) then xL := dx ;
if(y ≥ 0 ∧y ≤ d) then yL := d ;
if(y > d ∧ y ≤ dy) then yL := y ;
if(y > dy ∧ y ≤ dy +d) then yL := dy ;
Hφη◦WlpP ◦Hρ = WlpP ◦Hρ
Declassified Abstract non-interference
[Diagram: ⟦P⟧ with η observed on the public input, ρ on the public output, and φ declassified on the private input]
ρ, η ∈ Abs(℘(VL)), φ ∈ Abs(℘(VH)): (η)P(φ ⇒ ρ):
η(l1) = η(l2) and φ(h1) = φ(h2) ⇒ ρ(⟦P⟧(h1, η(l1))L) = ρ(⟦P⟧(h2, η(l2))L)
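A brute-force check of (η)P(φ ⇒ ρ) on a finite fragment, taking η = ρ = parity on the low data and φ = parity declassified on the high input; the program l := l · h · h is an assumed example, not the one on the slide:

```python
# Declassified abstract non-interference, checked by enumeration:
# equal parity of low inputs and equal (declassified) parity of high
# inputs must give equal parity of the low output.
par = lambda x: x % 2

def ani_holds(P, highs, lows):
    return all(
        par(P(h1, l1)) == par(P(h2, l2))
        for h1 in highs for h2 in highs
        for l1 in lows for l2 in lows
        if par(h1) == par(h2) and par(l1) == par(l2)
    )

P = lambda h, l: l * h * h     # assumed example: parity of l*h*h depends
assert ani_holds(P, range(1, 6), range(1, 6))          # only on par(l), par(h)
assert not ani_holds(lambda h, l: h // 2, range(1, 6), range(1, 6))
```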
[Giacobazzi & Mastroeni ’04]
Abstract Model checking DNI
Robust declassification transforms the attacker observational capability [Zdancewic & Myers ’01]:
∀σ, σ′ ∈ Σ. 〈σ, σ′〉 ∈ S[≈] ⇔ Obsσ(S, ≈) ≡ Obsσ′(S, ≈)
S[≈] = ≈ iff ≈ is backward complete for post
Example: 〈t, h, p, q, r〉 ↦ 〈t, h, p, q, r〉
〈0, h, q, q, 0〉 ↦ 〈1, h, q, q, 1〉
〈0, h, q, q, 1〉 ↦ 〈1, h, q, q, 0〉
〈0, h, p, q, 0〉 ↦ 〈1, h, p, q, 0〉  (p ≠ q)
〈0, h, p, q, 1〉 ↦ 〈1, h, p, q, 1〉  (p ≠ q)
The public variables are t, q, r, hence the partition induced by H is:
〈t, h, p, q, r〉 ≡ 〈t′, h′, p′, q′, r′〉 iff t = t′ ∧ q = q′ ∧ r = r′
〈0, h, p, q, 0〉 ↦ {〈1, h, q, q, 1〉, 〈1, h, p, q, 0〉}
fpreP: 〈1, h, q, q, 1〉 ↦ 〈0, h, q, q, 0〉
       〈1, h, p, q, 0〉 ↦ 〈0, h, p, q, 0〉  (p ≠ q)
Discussion
What already exists: several studies on declassification, on deriving counterexamples, and on refinement, but nothing that combines them all:
Several approaches for modelling and checking declassification policies: PER model [Sabelfeld and Sands], dynamic logic [Darvas et al.], robust declassification [Zdancewic and Myers], delimited release [Sabelfeld and Myers], relaxed non-interference [Li and Zdancewic], ...;
Derivation of counterexamples of secure information flows (withoutdeclassification) [Unno et al.];
Preservation of secrecy under refinement [Alur et al.];
What we have done: modelling declassification as a completeness problem;
We analyze the accuracy of a declassification policy;
We associate with each public observation the correspondinginformation released;
We can refine the accuracy of the policy;
We create a connection with abstract model checking;
What we have to do: we are interested in... extending our approach to more complex systems;
...exploiting this connection for implementing our approach;
Abstract Interpretation
Consider the complete lattice 〈C, ≤, ∧, ∨, ⊥, ⊤〉, Ai ∈ uco(C)
Lattice of Abstract Domains ≡ Lattice uco, A ≡ ρ(C)
〈uco(C), ⊑, ⊓, ⊔, λx.⊤, λx.x〉
A1 ⊑ A2 ⇔ A2 ⊆ A1
⊓iAi = M(∪iAi )
⊔iAi = ∩iAi
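The reduced meet ⊓iAi = M(∪iAi) can be computed concretely: the Moore closure M adds all intersections. A sketch over subsets of a four-element domain (the sign/parity domains are assumed examples):

```python
from itertools import combinations

# Moore closure: close a family of sets under arbitrary intersections
# (iterating pairwise intersections to a fixpoint suffices for finite
# families), always keeping the top element.
def moore(family, top):
    elems = {frozenset(x) for x in family} | {frozenset(top)}
    changed = True
    while changed:
        changed = False
        for a, b in list(combinations(elems, 2)):
            if a & b not in elems:
                elems.add(a & b)
                changed = True
    return elems

top = {0, 1, 2, 3}
sign = [{1, 2, 3}]          # "> 0" on the toy domain
parity = [{0, 2}, {1, 3}]   # Even, Odd
meet = moore(sign + parity, top)
assert frozenset({2}) in meet    # {1, 2, 3} ∩ {0, 2}
assert frozenset() in meet       # {0, 2} ∩ {1, 3}
```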
[Diagram: the top abstraction λx.⊤ maps every x ∈ C to ⊤; the bottom abstraction λx.x is the identity, i.e. A = C]
Abstract domain backward completeness
Let 〈A, α, γ, C〉 be a Galois insertion [Cousot & Cousot ’77,’79];
f : C → C, f a = α◦f◦γ : A → A (best correct approximation of f), and ρ = γ◦α
[Diagram: ρ correct for f: α(f(x)) ≤ f a(α(x))]
[Diagram: ρ complete for f: α(f(x)) = f a(α(x))]
ρ◦f◦ρ = ρ◦f
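Backward completeness ρ◦f◦ρ = ρ◦f can be verified exhaustively for a small instance; here ρ is a parity closure and f adds 2 (mod 8), which preserves parity (an assumed toy example):

```python
from itertools import combinations

D = set(range(8))

def rho(X):
    # parity closure: every element of D whose parity occurs in X
    return {x for x in D if any(x % 2 == y % 2 for y in X)}

f = lambda X: {(x + 2) % 8 for x in X}   # pointwise +2, wrapping inside D

# backward completeness: abstracting the input before applying f
# loses nothing that ρ could see in the output
for r in range(1, 4):
    for X in combinations(sorted(D), r):
        assert rho(f(rho(set(X)))) == rho(f(set(X)))
```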
Making backward complete
[Giacobazzi et al. ’00]
[Diagram: transform the input abstraction ρ1 or the output abstraction ρ2 until ρ2◦f◦ρ1 = ρ2◦f holds]
Abstract domain forward completeness
Let 〈A, α, γ, C〉 be a Galois insertion [Cousot & Cousot ’77,’79];
f : C → C, f a = α◦f◦γ : A → A (best correct approximation of f), and ρ = γ◦α
[Diagram: ρ correct for f: f(γ(x)) ≤ γ(f a(x))]
[Diagram: ρ complete for f: γ(f a(x)) = f(γ(x))]
ρ◦f◦ρ = f◦ρ
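Forward completeness f◦ρ = ρ◦f◦ρ instead asks that f map ρ-closed sets to ρ-closed sets; the same parity closure and f = +2 (mod 8) also satisfy it (an assumed toy example):

```python
from itertools import combinations

D = set(range(8))

def rho(X):
    # parity closure, as in the backward case
    return {x for x in D if any(x % 2 == y % 2 for y in X)}

f = lambda X: {(x + 2) % 8 for x in X}

# forward completeness: the image of a ρ-closed set is itself ρ-closed,
# so abstracting the output of f on abstract inputs changes nothing
for r in range(1, 4):
    for X in combinations(sorted(D), r):
        assert f(rho(set(X))) == rho(f(rho(set(X))))
```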