Under-approximating to Over-approximate: Invisible Invariants and Abstract Interpretation

Post on 24-Feb-2016


Under-approximating to Over-approximate

Invisible Invariants and Abstract Interpretation

Ken McMillan, Microsoft Research

Lenore Zuck, University of Illinois Chicago

Overview

• For some abstract domains, computation of the best abstract transformer can be very costly
  – (Indexed) Predicate Abstraction
  – Canonical shape graphs
• In these cases we may under-approximate the best transformer using finite-state methods, by restricting to a representative finite subset of the state space.
  – In practice, this can be a close under-approximation, or even yield the exact abstract least fixed point
  – In some cases, finite-state under-approximations can yield orders-of-magnitude run-time reductions by reducing evaluation of the true abstract transformer.

In this talk, we'll consider some generic strategies of this type, suggested by Pnueli and Zuck's Invisible Invariants method (viewed as Abstract Interpretation).

Parameterized Systems

• Suppose we have a parallel composition of N (finite-state) processes, where N is unknown:

  P1 P2 P3 ... PN

• Proofs require auxiliary constructs, parameterized on N
  – For safety, an inductive invariant
  – For liveness, say, a ranking
• Pnueli, et al., 2001: derive these constructs for general N by abstracting from the mechanical proof of a particular N.
  – Surprising practical result: under-approximations can yield over-approximations at the fixed point.

Recipe for an invariant

1. Compute the reachable states RN for a fixed N (say, N = 5).

2. Project onto a small subset of processes (say 2):

   ψ = {(s1,s2) | ∃ (s1,s2,...) ∈ RN}

Recipe for an invariant (cont.)

2. Project onto a small subset of processes (say 2):

   ψ = {(s1,s2) | ∃ (s1,s2,...) ∈ RN}

3. Generalize from 2 processes to N, to get GN:

   GN = ∧ i≠j ∈ [1..N] . ψ(si,sj)

4. Test whether GN is an inductive invariant for all N:

   ∀ N. GN ⇒ X GN
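The three steps above can be sketched by brute force on a toy protocol. This is a minimal illustration, assuming an invented mutual-exclusion system in which each process is either 'N' (non-critical) or 'C' (critical) and may enter its critical section only when no other process is in it; nothing here is the actual protocol or tooling from the talk.

```python
def reachable(n):
    """Step 1: BFS over the global state space of a toy mutex protocol.
    Each process is 'N' or 'C'; a process may enter 'C' only when no
    other process is in 'C', and may leave 'C' at any time."""
    init = ('N',) * n
    seen, frontier = {init}, [init]
    while frontier:
        s = frontier.pop()
        for i in range(n):
            if s[i] == 'N' and 'C' not in s:
                t = s[:i] + ('C',) + s[i+1:]
            elif s[i] == 'C':
                t = s[:i] + ('N',) + s[i+1:]
            else:
                continue
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def project(states, i, j):
    """Step 2: project the reachable set onto processes i and j."""
    return {(s[i], s[j]) for s in states}

R5 = reachable(5)
psi = project(R5, 0, 1)        # pair states observed in R_5

def G(s):
    """Step 3: the generalized guess G_N -- every ordered pair of
    distinct processes lies in psi."""
    n = len(s)
    return all((s[i], s[j]) in psi
               for i in range(n) for j in range(n) if i != j)

# ('C','C') never occurs as a reachable pair, so G_N implies mutual
# exclusion; the guess also holds on the reachable states of a larger N.
assert ('C', 'C') not in psi
assert all(G(s) for s in reachable(7))
```

Here the projected set ψ excludes ('C','C'), so the generalized invariant implies mutual exclusion for every instance size on which it is inductive.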

Checking inductiveness

• Inductiveness is equivalent to validity of this formula:

  GN ∧ T ⇒ G'N        (T is the transition relation)

• Small model theorem:
  – If there is a countermodel with N > M, there is a countermodel with N = M
  – Suffices to check inductiveness for N ≤ M

In this case both the invariant generation and invariant checking amount to finite-state model checking.

If no small model result is available, however, we can rely on a theorem prover to check inductiveness.
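The small-model check can be simulated by exhaustive enumeration up to the bound. A minimal sketch, assuming the same kind of toy mutex (processes 'N'/'C', at most one in 'C') and an illustrative bound M = 3; in the real method M comes from the small model theorem, not from this code.

```python
from itertools import product

# Projected pair invariant psi from the recipe (for the toy mutex).
PSI = {('N', 'N'), ('N', 'C'), ('C', 'N')}

def G(s):
    """G_N: every ordered pair of distinct processes lies in PSI."""
    n = len(s)
    return all((s[i], s[j]) in PSI
               for i in range(n) for j in range(n) if i != j)

def successors(s):
    """Toy mutex transition relation T: a process enters 'C' only when
    no process is in 'C', or leaves 'C'."""
    for i in range(len(s)):
        if s[i] == 'N' and 'C' not in s:
            yield s[:i] + ('C',) + s[i+1:]
        elif s[i] == 'C':
            yield s[:i] + ('N',) + s[i+1:]

M = 3   # illustrative small-model bound (an assumption, not derived here)

# Check G_N /\ T => G'_N for every instance size N <= M.
inductive = all(
    G(t)
    for n in range(2, M + 1)
    for s in product('NC', repeat=n) if G(s)
    for t in successors(s)
)
assert inductive
```

If the small model theorem applies, passing this finite check establishes inductiveness for every N, which is exactly why the whole procedure can stay finite-state.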

Abstraction setting

• S: the concrete state space
• L: the abstract language
• γ : L → 2^S, the concretization (preserves conjunctions)
• α : 2^S → L, the abstraction: α(s) = ∧ { φ ∈ L | s ⊆ γ(φ) }
• τ : 2^S → 2^S, the concrete transformer
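To make the Galois connection concrete, here is a minimal sketch in which L consists of conjunctions over a fixed, invented predicate set and S is a small range of integers; the predicates and all names are assumptions for illustration only.

```python
# Abstract language L: an element is a frozenset of predicate names,
# read as the conjunction of those predicates (toy domain).
PREDS = {
    'nonneg': lambda x: x >= 0,
    'even':   lambda x: x % 2 == 0,
    'small':  lambda x: x < 10,
}
STATES = range(-5, 15)           # a tiny concrete state space S

def gamma(phi):
    """Concretization: states satisfying every conjunct of phi.
    Preserves conjunctions: gamma(phi1 & phi2) = gamma(phi1) & gamma(phi2)."""
    return {s for s in STATES if all(PREDS[p](s) for p in phi)}

def alpha(states):
    """Abstraction: alpha(S') = conjunction of all predicates of L
    that hold on every state of S' (the strongest element covering S')."""
    return frozenset(p for p, f in PREDS.items()
                     if all(f(s) for s in states))

S1 = {0, 2, 4}
# gamma . alpha over-approximates: S' is contained in gamma(alpha(S')).
assert S1 <= gamma(alpha(S1))
assert alpha(S1) == frozenset({'nonneg', 'even', 'small'})
```

The inclusion S' ⊆ γ(α(S')) is the over-approximation property that the whole method relies on: abstracting a reachable set and concretizing again can only add states, never lose them.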

Parameterized systems

• The concrete state space is the set of valuations of the variables
  – A special variable N of type Nat represents the system parameter
  – The ranges of the other variables depend on N
• For fixed N, all ranges are finite; the concrete transition system is defined in FOL
• The abstract domain is Indexed Predicate Abstraction
  – a fixed set of index variables (say i, j)
  – a fixed set of atomic predicates
  – a matrix is a Boolean combination of the atomic predicates (in some normal form)
  – L is the set of formulas obtained by universally quantifying a matrix over the index variables

  Example: ∀ i,j: i ≠ j ⇒ ¬(q[i] ∧ q[j])

• Small model results:
  – M depends mainly on the quantifier structure of GN and T
  – Example: if T has one universal and GN has two, then M = 2b+3

Invariant by AI

• Abstract transformer: τ# = α ∘ τ ∘ γ
• Compute the strongest inductive invariant in L as lfp τ#, by iterating τ# from below.
• τ# is difficult to compute (exponentially many theorem prover calls).

For our abstraction, this computation can be quite expensive!
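The fixed-point computation lfp τ# can be spelled out on a toy one-variable system (initial state 0, step x → x+2) with two invented predicates. This stands in for, and is far simpler than, the indexed predicate abstraction of the talk.

```python
# Toy conjunctive-predicate domain: an element of L is a frozenset of
# predicate names, read as their conjunction (invented for illustration).
PREDS = {'nonneg': lambda x: x >= 0, 'lt100': lambda x: x < 100}
STATES = set(range(-10, 110))    # small concrete state space S

def gamma(phi):
    """Concretization: states satisfying every conjunct of phi."""
    return {s for s in STATES if all(PREDS[p](s) for p in phi)}

def alpha(S):
    """Abstraction: conjunction of all predicates true on every state of S."""
    return frozenset(p for p, f in PREDS.items() if all(f(s) for s in S))

def tau(S):
    """Concrete transformer: initial state 0, plus one step x -> x + 2."""
    return ({0} | {s + 2 for s in S}) & STATES

def tau_sharp(phi):
    """Best abstract transformer: tau# = alpha . tau . gamma."""
    return alpha(tau(gamma(phi)))

# Kleene iteration: start at bottom (the full conjunction) and weaken by
# joining -- in this domain, join is intersection of conjunct sets --
# until a fixed point is reached.
phi = frozenset(PREDS)
while True:
    nxt = phi & tau_sharp(phi)
    if nxt == phi:
        break
    phi = nxt

# lfp tau# = 'nonneg': the strongest inductive invariant expressible in L
# ('lt100' is not inductive, since the step eventually passes 100).
assert phi == frozenset({'nonneg'})
```

In the real setting each evaluation of τ# needs theorem prover calls, which is what makes this loop expensive; here γ and α are just set comprehensions.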

Restricted abstraction

• The concrete state space S is a union of finite spaces, one for each value of N:

  S = S1 ∪ S2 ∪ S3 ∪ ... ∪ SN ∪ ...

• L is the abstract language, with γ : L → 2^S and α : 2^S → L as before.
• Restricted concretization: γN : L → 2^SN ("project")
• Restricted abstraction: αN : 2^SN → L ("generalize") — computable!
• Galois connection: 2^SN ⇋ L via (γN, αN)
• Restricted concrete transformer: τN : 2^SN → 2^SN

Invisible invariant construction

• Construct the invariant guess by reachability and abstraction:

  RN = lfp τN          (finite-state reachability)
  GN = αN(RN)          (project and generalize)

• Test the invariant guess: check GN ∧ T ⇒ G'N with SMT; if N ≥ M, the small model theorem extends the result to all N.

Under-approximation

• The idea of generalizing from finite instances suggests we can under-approximate the best abstract transformer τ#:

  τN# = αN ∘ τN ∘ γN

• τN# is an under-approximation of τ# that we can compute with finite-state methods.
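The effect of restricting to a finite sub-space can be shown on a toy (non-parameterized) domain. In this sketch a large range stands in for S and a small range stands in for the finite SN; the predicates, ranges, and names are all invented. When SN is representative, lfp τN# coincides with lfp τ#, so a single check of the true transformer suffices.

```python
# Toy conjunctive-predicate domain (invented for illustration).
PREDS = {'nonneg': lambda x: x >= 0, 'lt100': lambda x: x < 100}
FULL = set(range(-1000, 1000))   # stand-in for the large concrete space S
SN   = set(range(-5, 105))       # small but representative finite sub-space

def make_transformer(space):
    """Build alpha . tau . gamma over a given state space: the full space
    yields tau#, a finite sub-space yields the cheap tau_N#."""
    def gamma(phi):
        return {s for s in space if all(PREDS[p](s) for p in phi)}
    def alpha(S):
        return frozenset(p for p, f in PREDS.items() if all(f(s) for s in S))
    def tau(S):
        # Concrete transformer: initial state 0, plus one step x -> x + 2.
        return ({0} | {s + 2 for s in S}) & space
    return lambda phi: alpha(tau(gamma(phi)))

def lfp(t):
    """Kleene iteration from bottom; join = intersection of conjunct sets."""
    phi = frozenset(PREDS)
    while True:
        nxt = phi & t(phi)
        if nxt == phi:
            return phi
        phi = nxt

tau_sharp   = make_transformer(FULL)   # expensive in practice
tau_N_sharp = make_transformer(SN)     # finite-state, cheap

G = lfp(tau_N_sharp)                   # under-approximate fixed point
# SN contains states past 100, so it is representative here: the
# under-approximate fixed point is exact.
assert G == lfp(tau_sharp)
assert G == frozenset({'nonneg'})
```

Had SN stopped below 100, lfp τN# would have kept the non-inductive conjunct 'lt100', illustrating why the guess from the finite sub-space still has to be checked against the true transformer.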

Three methods

(Shown as diagrams in the original; in outline:)

• Strategy A: compute lfp τ# directly, by iterating the best abstract transformer.
• Strategy B: compute lfp τN# by finite-state methods, then continue iterating with the true τ# until a fixed point of τ# is reached.
• Strategy C: compute lfp τN by finite-state reachability, generalize with αN, and check the resulting guess against τ#.

Experiments

• Evaluate the three approaches
  – Strategy A: compute lfp τ# using UCLID PA (eager reduction to ALL-SAT)
  – Strategy B: compute τN# by TLV with BDDs and τ# using UCLID
  – Strategy C: compute τN and αN by TLV and τ# using UCLID
• Strategy C wins in two cases
  – Fewer computations of the abstract transformers
• Strategy B wins in one case
  – More abstraction reduces BDD size and iterations
• In all cases, only one theorem prover call is needed.

Related Work

• Yorsh, Ball, Sagiv 2006
  – Combines testing and abstract interpretation
  – Does not compute abstract fixed points for finite sub-spaces as here
  – Here we apply model checking aggressively to reduce computation
• Bingham 2008
  – Essentially applies Strategy B with a small model theorem to verify the FLASH cache coherence protocol
  – Compare to an interactive proof in PVS with 111 lemmas (776 lines) and an extensive proof script!

Static analysis with finite domains can replace very substantial hand proof efforts.

Conclusion

• Invisible invariants suggest a general approach to minimizing computation of the best transformer, based on two ideas:
  – Under-approximations can yield over-approximations at the fixed point
    • This is a bit mysterious, but observationally true
  – Computing the fixed point with under-approximations can use more lightweight methods
    • For example, BDD-based model checking instead of a theorem prover
    • Using under-approximations can reduce the number of theorem prover calls to just one in the best case
• We can apply this idea whenever we can define finite sub-spaces that are representative of the whole space.
  – Parametricity and symmetry are not required
  – For example, it could be applied to heap-manipulating programs by bounding the heap size.

Example: Peterson ME

• N-process version from Pnueli et al. 2001.
• Per-process variables: pc, in, and last; locations L0–L6, with the non-critical section at L0 and the critical section at L5.
• [The program text itself did not survive extraction.]

Peterson invariant

• Hand-made invariant for N-process Peterson:

(m.ZERO < m.in(i) & m.in(i) < m.N => m.in(m.last(m.in(i))) = m.in(i)) &
(m.in(i) = m.in(j) & m.ZERO < l & l < m.in(i) => m.in(m.last(l)) = l) &
(m.pc(i) = L4 => (m.last(m.in(i)) != i | m.in(j) < m.in(i))) &
((m.pc(i) = L5 | m.pc(i) = L6) => m.in(i) = m.N) &
((m.pc(i) = L0 | m.pc(i) = L1) => m.in(i) = m.ZERO) &
(m.pc(i) = L2 => m.in(i) > m.ZERO) &
((m.pc(i) = L3 | m.pc(i) = L4) => m.in(i) < m.N & m.in(i) > m.ZERO) &
(~(m.in(i) = m.N & m.in(j) = m.N))

• Required a few hours of trial and error with a theorem prover

Peterson Invariant (cont.)

• Machine generated by TLV in 6.8 seconds:

X18 := ~levlty1 & y1ltN & ~y1eqN & ~y2eqN & ~y1gtz & y1eqz & (~ysy1eqy1 => ~sy1eq1);
X15 := ~y1eqN & ~y2eqN & y1gtz & ~y1eqz & ysy1eqy1;
X5 := (~levlty1 => y1ltN & X15);
X1 := ysy1eqy1 & ~sy1eq1;
X0 := ysy1eqy1 & sy1eq1;
X16 := y1eqN & y2eqN & y1gtz & ~y1eqz & ysleveqlev & X0;
X14 := y1eqN & y2eqN & y1gtz & ~y1eqz & X0;
X13 := ~y1eqN & ~y2eqN & y1gtz & ~y1eqz & (ysleveqlev => ysy1eqy1) & (~ysleveqlev => X0);
X7 := (levlty1 => y1ltN & ~y1eqN & ~y2eqN & y1gtz & ~y1eqz & ysleveqlev & ysy1eqy1) & X5;
X6 := ~y1eqy2 & X7;
X4 := (levlty1 => y1ltN & X13) & X5;
X3 := (levlty1 => y1ltN & ~y1eqN & ~y2eqN & y1gtz & ~y1eqz & ysleveqlev & X1) & (~levlty1 => y1ltN & ~y1eqN & ~y2eqN & y1gtz & ~y1eqz & X1);
X2 := ~y1eqy2 & X3;
X17 := (levlty1 => (y1ltN => X13) & (~y1ltN => X14)) & (~levlty1 => (y1ltN => X15) & (~y1ltN => X16));
X12 := (y1eqy2 => X7);
X11 := (y1lty2 => X6);
X10 := y1lty2 & X6;
X9 := ~y1lty2 & ~y1eqy2 & X4;
X8 := (~y1eqy2 => X4);

matrix :=
  ((loc1 = L5 | loc1 = L6) => (loc2 = L0 | loc2 = L1 | loc2 = L2 | loc2 = L3 | loc2 = L4) & ~y1lty2 & ~y1eqy2 & (levlty1 => ~y1ltN & X14) & (~levlty1 => ~y1ltN & X16)) &
  (loc1 = L4 => ((loc2 = L5 | loc2 = L6) => y1lty2 & X2) & ((loc2 = L2 | loc2 = L3 | loc2 = L4) => (y1lty2 => X2) & (~y1lty2 => (y1eqy2 => X3) & X8)) & ((loc2 = L0 | loc2 = L1) => X9)) &
  (loc1 = L3 => ((loc2 = L5 | loc2 = L6) => X10) & ((loc2 = L2 | loc2 = L3 | loc2 = L4) => X11 & (~y1lty2 => X12 & X8)) & ((loc2 = L0 | loc2 = L1) => X9)) &
  (loc1 = L2 => ((loc2 = L5 | loc2 = L6) => X10) & ((loc2 = L2 | loc2 = L3 | loc2 = L4) => X11 & (~y1lty2 => X12 & (~y1eqy2 => X17))) & ((loc2 = L0 | loc2 = L1) => ~y1lty2 & ~y1eqy2 & X17)) &
  ((loc1 = L0 | loc1 = L1) => (~(loc2 = L1 | loc2 = L0) => y1lty2 & ~y1eqy2 & X18) & ((loc2 = L0 | loc2 = L1) => ~y1lty2 & y1eqy2 & X18));
