© 2007 Carnegie Mellon University
Optimized L*-based Assume-Guarantee Reasoning
Sagar Chaki, Ofer Strichman
March 27, 2007
Motivation: Reasoning by Decomposition

Let M1 and M2 be two NFAs
Let p be a property expressed as an NFA
Is L(M1 × M2) ⊆ L(p) ?
(Our notation: M1 × M2 ⊨ p)
Q: What if this is too hard to compute ?
A: Decompose
Assume-Guarantee Reasoning

An Assume-Guarantee rule:
• M1 and M2 are NFAs with alphabets Σ1 and Σ2

A × M1 ≼ p    M2 ≼ A
--------------------
    M1 × M2 ≼ p

This rule is sound and complete
• For ≼ being trace containment, simulation etc.
• There always exists such an assumption A (e.g. M2)
• Need to find A such that M1 × A is easier to compute than M1 × M2

Equivalently, phrased over the product with the negated property ¬p:

L(A × (M1 × ¬p)) = ∅    M2 ≼ A
------------------------------
    L((M1 × ¬p) × M2) = ∅
Learning the Assumption
Q: How can we find such an assumption A ?
A: Learn it with L*
The L* algorithm is by Angluin [87]
• Later improved by Rivest & Schapire [93] – this is what we use.
The L* Algorithm

L* interacts with a Teacher for an unknown regular language U:

• Membership query: “s ∈ U ?” – answered yes / no.
• Candidate query: “L(A) = U ?” – answered yes, or with a counterexample:
  – Negative feedback: some σ ∈ L(A) − U
  – Positive feedback: some σ ∈ U − L(A)
• On “yes”, L* outputs a DFA A s.t. L(A) = U

L* finds the minimal A such that L(A) = U
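The query protocol above can be sketched with a toy teacher whose target language U is given by a DFA. The candidate query is checked only on words up to a bounded length, which is our simplification of true equivalence checking; all automata and names here are illustrative, not from the paper.

```python
from itertools import product

def run_dfa(dfa, word):
    """Run a DFA given as (initial state, accepting states, delta dict);
    a missing transition rejects."""
    init, accepting, delta = dfa
    state = init
    for ch in word:
        state = delta.get((state, ch))
        if state is None:
            return False
    return state in accepting

def words_up_to(alphabet, max_len):
    """All words over `alphabet` of length <= max_len."""
    for n in range(max_len + 1):
        for tup in product(alphabet, repeat=n):
            yield ''.join(tup)

class Teacher:
    """Teacher for U = L(target)."""
    def __init__(self, target, alphabet, bound=6):
        self.target, self.alphabet, self.bound = target, alphabet, bound

    def membership(self, s):            # "is s in U ?"
        return run_dfa(self.target, s)

    def candidate(self, A):             # "is L(A) = U ?"
        for w in words_up_to(self.alphabet, self.bound):
            if run_dfa(A, w) != run_dfa(self.target, w):
                return w                # counterexample feedback
        return None                     # yes

# Example target: U = words over {a,b} with an even number of a's.
even_a = (0, {0}, {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1})
teacher = Teacher(even_a, 'ab')
```

For instance, a candidate DFA that accepts every word is rejected by `teacher.candidate` with the counterexample ‘a’ (a word in L(A) − U, i.e. negative feedback).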
Trying to distinguish between L(M1 × ¬p) and L(M2):

M1 × M2 ⊨ p is the same as L((M1 × ¬p) × M2) = ∅

• If the two languages intersect: M1 × M2 ⊭ p
• If they are disjoint: M1 × M2 ⊨ p
On the way we can …

Find an assumption A such that
• L(M2) ⊆ L(A) ⊆ Σ* − L(M1 × ¬p)

Our HOPE: A is ‘simpler’ to represent than M2
• i.e., |M1 × ¬p × A| << |M1 × ¬p × M2|

Such an A is an ‘acceptable’ assumption.
How ?

Learn the language U = Σ* − L(M1 × ¬p)

Well defined. We can construct a teacher for it:
• a membership query for s is answered by simulating s on M1 × ¬p.

The ideal assumption: A with L(A) = U.
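The membership check just described can be sketched as an on-the-fly NFA simulation: s ∈ U exactly when M1 × ¬p does not accept s. The automaton below is a small stand-in of our own, not the real product automaton.

```python
def nfa_accepts(initial, accepting, delta, word):
    """On-the-fly subset simulation of an NFA.
    `delta` maps (state, letter) -> set of successor states."""
    current = set(initial)
    for ch in word:
        current = set().union(*(delta.get((q, ch), set()) for q in current))
        if not current:
            break
    return bool(current & set(accepting))

def membership_query(word, bad_nfa):
    """s in U = Sigma* - L(M1 x ~p)  iff  M1 x ~p does NOT accept s."""
    initial, accepting, delta = bad_nfa
    return not nfa_accepts(initial, accepting, delta, word)

# Stand-in for M1 x ~p: accepts every word containing 'ab'.
bad = ({0}, {2}, {(0, 'a'): {0, 1}, (0, 'b'): {0},
                  (1, 'b'): {2},
                  (2, 'a'): {2}, (2, 'b'): {2}})
```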
L* – when M1 × M2 ⊨ p

A candidate query: is A acceptable ?
• If A intersects L(M1 × ¬p), we obtain a counterexample σ.
• Check σ ∈ L(M2) …
• If yes, σ is a real counterexample. Otherwise …
• … L* receives a negative feedback: σ should be removed from A.

A matter of luck!
A-G with Learning

L* conjectures an assumption A, which is then model checked:

1. Check A × M1 ⊨ p
   • false, with counterexample σ: is σ ∈ L(M2) ?
     – Y: M1 × M2 ⊭ p
     – N: negative feedback to L*
2. If true, check M2 ⊨ A
   • true: M1 × M2 ⊨ p
   • false, with counterexample σ: is σ ∈ L(M1 × ¬p) ?
     – Y: M1 × M2 ⊭ p
     – N: positive feedback to L*, which conjectures a new A
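A toy rendering of this loop, with every language modelled as a finite set of traces and the learner reduced to adding / removing single traces. This is a drastic simplification of both L* and model checking, for illustration only.

```python
def ag_loop(bad, m2):
    """bad = L(M1 x ~p) (the error traces), m2 = L(M2),
    both given as finite sets of strings.
    Returns ('holds', A) or ('fails', counterexample)."""
    A = set()                            # current assumption language
    while True:
        # Premise 1: A x (M1 x ~p) has an empty language?
        violations = A & bad
        if violations:
            t = min(violations)
            if t in m2:
                return ('fails', t)      # real counterexample
            A.discard(t)                 # negative feedback
            continue
        # Premise 2: M2 |= A, i.e. L(M2) subset of L(A)?
        missing = m2 - A
        if not missing:
            return ('holds', A)          # both premises hold
        t = min(missing)
        if t in bad:
            return ('fails', t)          # real counterexample
        A.add(t)                         # positive feedback
```

With finite trace sets every feedback step makes strict progress, so the toy loop always terminates; the real algorithm relies on L*'s convergence instead.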
This work

In this work we improve the A-G framework with three optimizations:

1. Feedback reuse: reduce the number of candidate queries
2. Lazy learning: reduce the number of membership queries
3. Incremental alphabet: reduce the size of A, the number of membership queries, and the number of conjectures

As a result: reduced overall verification time of component-based systems.
• We will talk in detail about the third optimization only.
Optimization 3: Incremental Alphabet

Choosing Σ = (Σ1 ∪ Σp) ∩ Σ2 always works
• We call (Σ1 ∪ Σp) ∩ Σ2 the “full interface alphabet”

But there may be a smaller Σ that also works.
We wish to find a small such Σ using iterative refinement:
• Start with Σ = ∅
• Is the current Σ adequate ?
  – no: update Σ and repeat
  – yes: continue as usual
Optimization 3: Incremental Alphabet

Claim: removing letters from the global alphabet over-approximates the product.

Example (A and B are the two small automata shown in the original slide):
• If Σ = {a,b} then ‘bb’ ∉ L(A × B)
• If Σ = {b} then ‘bb’ ∈ L(A × B)

[figure: automata A and B and their product]
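The claim can be demonstrated on a tiny example of our own (not the slide’s automata): each component is given directly by its finite trace set and its alphabet, and a word is a trace of the parallel composition iff every component accepts the word projected onto that component’s alphabet.

```python
def project(word, alphabet):
    """Erase the letters outside `alphabet` (hiding them)."""
    return ''.join(ch for ch in word if ch in alphabet)

def in_product(word, comps):
    """Trace semantics of parallel composition: every component must
    accept the word projected onto its own alphabet."""
    return all(project(word, sigma) in lang for lang, sigma in comps)

# A accepts only 'bab' over {a,b}; B accepts only 'bb' over {b}.
A = ({'bab'}, {'a', 'b'})
B = ({'bb'}, {'b'})

full = [A, B]
# Reduced global alphabet {b}: the letter 'a' is hidden inside A too.
reduced = [({project(w, {'b'}) for w in A[0]}, {'b'}), B]

assert not in_product('bb', full)    # 'bb' is not a trace of A x B
assert in_product('bb', reduced)     # but it is once 'a' is removed
```

Hiding a letter can only merge behaviours, never exclude them, which is why the reduced-alphabet product over-approximates the full one.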
A-G with Learning

The same loop as before, with the feedback made explicit:
• negative feedback: remove σ from L(A)
• positive feedback: add σ to L(A)
A-G with Learning

The same loop once more, but L* now learns over the current alphabet Σ, initialized to Σ = ∅.
Optimization 3: Check if σ ∈ L(M1 × ¬p)

We first check with the full alphabet Σ:
• If σ ∈ L(M1 × ¬p): a real counterexample!

Otherwise, we check with the reduced alphabet ΣA:
• If σ ∉ L(M1 × ¬p) under ΣA as well: positive feedback; proceed as usual.
• If σ ∈ L(M1 × ¬p) under ΣA: no positive feedback. σ is spurious, and we must refine ΣA.
Optimization 3: Refinement

There are various letters that we can add to ΣA in order to eliminate σ.
But adding a letter for each spurious counterexample is wasteful.
Better: find a small set of letters that eliminates all the spurious counterexamples seen so far.
Optimization 3: Refinement

So we face the following problem:
“Given a set of sets of letters, find the smallest set of letters that intersects all of them.”
This is a minimum-hitting-set problem.
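A minimal sketch of this step, solved exactly by brute force over letter subsets of increasing size (fine for small alphabets; the paper formulates the same problem as a 0-1 ILP):

```python
from itertools import combinations

def min_hitting_set(sets):
    """Smallest set of letters that intersects every set in `sets`."""
    universe = sorted(set().union(*sets))
    for k in range(len(universe) + 1):
        # Subsets are tried in increasing size, so the first hit is minimum.
        for cand in combinations(universe, k):
            if all(set(cand) & s for s in sets):
                return set(cand)
    return None
```

Usage: `min_hitting_set([{'a','b'}, {'b','c'}, {'c','d'}])` returns a two-letter set, since no single letter touches all three sets.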
Optimization 3: Refinement

A naïve solution:
• Find for each counterexample the set of letters that eliminates it.
  – Explicit traversal of M1 × ¬p.
• Formulate the problem: “find the smallest set of letters that intersects all these sets”
  – A 0-1 ILP problem.
Optimization 3: Incremental Alphabet

Alternative solution: integrate the two stages.
• Formulate the problem: “find the smallest set of letters that eliminates all these counterexamples”
  – A 0-1 ILP problem.
Optimization 3: Incremental Alphabet

Let M1 × ¬p be the automaton with states p, q, r, and let the spurious counterexample be given by the automaton with states x, y, z.

Introduce a variable for each state pair: (p,x), (p,y), …
Introduce a choice variable A(ℓ) for each candidate letter ℓ, e.g. A(α) and A(β).

Initial constraint: (p,x)
• the initial state is always reachable
Final constraint: ¬(r,z)
• final states must be unreachable

[figure: M1 × ¬p with states p, q, r; the counterexample automaton with states x, y, z]
Optimization 3: Incremental Alphabet

Some sample transition constraints (with α, β standing for the two letters):
(p,x) ∧ ¬A(α) ⇒ (q,x)
(p,x) ∧ ¬A(β) ⇒ (p,y)
(q,x) ⇒ (r,y)
(q,x) ∧ ¬A(α) ⇒ (r,x) ∧ (q,y)

Find a solution that minimizes A(α) + A(β)
• In this case setting A(α) = A(β) = TRUE
• Updated alphabet Σ = {α, β}

[figure: M1 × ¬p with states p, q, r; the counterexample automaton with states x, y, z]
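The intent of this encoding can be sketched by brute force on a toy product graph: adding a letter ℓ (setting A(ℓ) = TRUE) blocks every transition that was only enabled while ℓ was ignored, and we look for the cheapest letter set that makes all final (error) states unreachable. The states, letters, and edges below are our own stand-ins, not the slide’s automata.

```python
from itertools import combinations

def reachable(edges, start, added):
    """Forward reachability when every transition guarded by a letter
    in `added` is blocked (that letter now forces synchronization).
    Edges with letter None are unconditional."""
    seen, stack = {start}, [start]
    while stack:
        q = stack.pop()
        for (src, letter, dst) in edges:
            enabled = (letter is None) or (letter not in added)
            if src == q and enabled and dst not in seen:
                seen.add(dst)
                stack.append(dst)
    return seen

def min_alphabet(edges, start, finals):
    """Smallest letter set whose addition makes all finals unreachable
    (the 0-1 ILP objective, solved by subset enumeration)."""
    letters = sorted({l for (_, l, _) in edges if l is not None})
    for k in range(len(letters) + 1):
        for cand in combinations(letters, k):
            if not (reachable(edges, start, set(cand)) & finals):
                return set(cand)
    return None

# Toy product graph: guarded edges mimic (p,x) /\ ~A(letter) => (q,x);
# the None-labelled edge mimics an always-possible synchronized move.
edges = [('px', 'a', 'qx'), ('qx', None, 'ry'), ('ry', 'b', 'rz')]
```

Here adding just the letter ‘a’ cuts the only path from ‘px’ to the error state ‘rz’, so the minimum has size one.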
Experimental Results: Overall

Each paired column reads: without / with the optimization. The remaining columns give further counts and run times for the T1/T2/T3 configurations.

Name | Cand. queries | Memb. queries  | |Σ|    | further counts and run times
1    | 2.2    2.0    | 37.5    4.5    | 12  1  | 25   19.7  12.3  20.0   23.8   20.1   10.5  20.5
2    | 5.0    5.2    | 101.5   11.5   | 12  4  | 32   40.0  12.6  30.0   32.4   44.6   13.7  30.2
3    | 8.5    7.5    | 163.0   28.0   | 12  4  | 44   49.1  14.5  35.3   45.6   48.9   15.6  35.5
4    | 13.0   10.5   | 248.0   56.5   | 12  4  | 63   67.5  17.4  58.1   61.5   67.7   18.6  48.4
5    | 3.2    3.0    | 73.0    9.5    | 12  1  | 34   22.3  13.6  24.1   36.2   22.2   13.8  22.2
6    | 6.8    7.2    | 252.0   36.5   | 12  2  | 103  30.6  24.2  29.0   102.2  43.3   23.1  29.8
7    | 9.8    8.0    | 328.8   52.5   | 12  2  | 140  44.4  27.8  43.9   138.2  38.6   28.2  40.6
8    | 15.0   13.0   | 443.0   77.5   | 12  3  | 183  73.6  37.1  67.9   184.0  73.2   35.8  64.2
9    | 23.5   18.2   | 568.0   109.5  | 12  3  | 234  121   44.1  133.7  236.2  133.4  41.0  109.3
10   | 25.5   22.0   | 689.5   128.5  | 12  3  | 294  189   48.4  168.1  297.0  179.9  45.9  169.7
Avg. | 10.8   9.2    | 290.0   51.0   | 12  2  | 115  65.6  25.2  61.0   115.7  67.2   24.6  57.1
Experimental Results: Optimization 3

(same table as in “Experimental Results: Overall”)
Experimental Results: Optimization 2

(same table as in “Experimental Results: Overall”)
Experimental Results: Optimization 1

(same table as in “Experimental Results: Overall”)
Related Work

• NASA – the original work – Cobleigh, Giannakopoulou, Pasareanu et al.
• Applications to simulation & deadlock
• Symbolic approach – Alur et al.
• A heuristic approach to Optimization 3 – Gheorghiu et al.