Page 1: Test Scenario

Test Scenario

JeeHyun Hwang
Computer Science

Page 2: Test Scenario

Example
• Test the deleteUserAccount code

Page 3: Test Scenario

Ideas
• Idea 1: Find various scenarios to achieve certain coverage criteria
• Idea 2: One test case, while changing variables to cover many policies (changing only variables) within context
• Idea 3: High coverage of code while maintaining high coverage of access control policies
• Idea 4: Find dead access-control code results
• Idea 5: Formulate a request set
• Idea 6: Flow test (after one thing, A -> A') yes or no
• Idea 7: Role combination - similarity

Page 4: Test Scenario

Example
• Test coverage while achieving high coverage on access control policies?


Page 6: Test Scenario

Access Control
• Access control is one of the most widely used privacy and security mechanisms
  – Used to control which principals (e.g., users or processes) have access to which resources
• Access control is often governed by security policies called Access Control Policies (ACPs)
• Security policies are often specified and maintained separately from application code (see the sketch below)
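
To make the separation concrete, here is a minimal Java sketch (not from the slides; all names are invented for illustration) of application code consulting a policy decision point instead of hard-coding the check:

```java
import java.util.Map;

// Minimal sketch: the policy is a data table maintained separately from the
// application; the application only calls the decision function (the PDP).
public class AccessControlDemo {
    // A request is (subject, action, resource); decisions are Permit/Deny.
    record Request(String subject, String action, String resource) {}
    enum Decision { PERMIT, DENY }

    // Hypothetical policy over the deck's running example domain.
    static final Map<Request, Decision> POLICY = Map.of(
        new Request("faculty", "write", "grades"), Decision.PERMIT,
        new Request("student", "write", "grades"), Decision.DENY);

    // The PDP: evaluate a request against the policy, denying by default.
    static Decision evaluate(Request q) {
        return POLICY.getOrDefault(q, Decision.DENY);
    }

    public static void main(String[] args) {
        // Application code asks the PDP rather than embedding the rule.
        System.out.println(evaluate(new Request("faculty", "write", "grades")));
    }
}
```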

Page 7: Test Scenario

Motivation
• Security requirements change over time -> security policies often evolve
• Security policy changes may introduce security faults (e.g., unauthorized access)
• System developers execute system test cases to ensure that the behavior changes introduced by security policy changes are the expected ones

Page 8: Test Scenario

Problem
• Two pitfalls of executing all existing system test cases:
  – Executing all existing system test cases is time consuming
  – Existing system test cases may not sufficiently expose the behavior changes induced by security policy changes
• There is no existing approach for testing applications effectively in the context of security policy evolution

Page 9: Test Scenario

Our Goal
• Regression system testing for policy evolution:
  – Select and execute only those system test cases (from an existing test suite) that expose behavior changes
  – Augment the system test cases to expose behavior changes that are not exposed by existing system tests

Page 10: Test Scenario

Challenges
• Select and augment the regression system test cases impacted by policy changes, with low false positives and false negatives:
  – Requires correctly analyzing the effects of policy changes
  – Requires correctly monitoring the interactions between system test cases and security policies

Page 11: Test Scenario

Definition: Coverage
• Coverage for security policies
  – Measures which rules of the policy are involved (called “covered”) in policy evaluation [Martin et al. WWW 07] (see the sketch below)
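
One simple way to instrument an evaluator for this notion of rule coverage, assuming a first-applicable policy (a hypothetical representation, not Martin et al.'s implementation), is to record every rule consulted while deciding a request:

```java
import java.util.*;

// Sketch: a first-applicable evaluator that records each rule it consults.
public class RuleCoverage {
    record Rule(String id, String subject, String action, String resource,
                boolean permit) {}
    record Request(String subject, String action, String resource) {}

    // Hypothetical two-rule policy.
    static final List<Rule> POLICY = List.of(
        new Rule("r1", "faculty", "write", "grades", true),
        new Rule("r2", "student", "read",  "grades", true));

    // Evaluate a request; every rule consulted on the way is "covered".
    static boolean evaluate(Request q, Set<String> covered) {
        for (Rule r : POLICY) {
            covered.add(r.id());
            if (r.subject().equals(q.subject())
                    && r.action().equals(q.action())
                    && r.resource().equals(q.resource()))
                return r.permit();
        }
        return false; // deny by default
    }

    public static void main(String[] args) {
        Set<String> covered = new TreeSet<>();
        evaluate(new Request("student", "read", "grades"), covered);
        System.out.println("Covered rules: " + covered); // [r1, r2]
    }
}
```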

Page 12: Test Scenario

Test Selection Technique I
• Find the system test cases impacted by policy changes via mutation analysis

[Setup: rule-test correlation]
1. From policy P, create a mutant Pm by changing rule ri's decision (e.g., Permit -> Deny)
2. Collect the requests Q issued by the system test cases T
3. Evaluate Q against P and Pm, respectively, and find the requests Qimp (Qimp ⊆ Q) that expose different policy behaviors
4. Correlate ri with the system tests Timp (Timp ⊆ T) that issue the requests in Qimp
5. Continue, rule by rule, until each rule's correlated system test cases have been found

Page 13: Test Scenario

Test Selection Technique I - cont

[Test selection for policy changes]
1. Find the rules R impacted by the policy changes
2. Select the system test cases correlated with some rule r ∈ R (see the sketch below)

Cost: given n rules in P, the setup requires evaluating the collected requests 2n times (against P and against Pm for each of the n rules). However, the setup can be conducted before any policy changes are encountered.
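
A compact sketch of Technique I under simplifying assumptions (each request matches exactly one rule and is represented by that rule's id; policies map rule ids to Permit/Deny; all names are hypothetical):

```java
import java.util.*;

// Sketch of Technique I: mutation-based rule-test correlation, then selection.
public class MutationBasedSelection {
    // Policy P: rule id -> Permit(true)/Deny(false).
    static final Map<String, Boolean> P = Map.of("r1", true, "r2", false);
    // Test name -> the requests (here: applicable rule ids) it issues.
    static final Map<String, List<String>> TESTS = Map.of(
        "testFacultyWrite", List.of("r1"),
        "testStudentRead",  List.of("r2"),
        "testMixed",        List.of("r1", "r2"));

    // Setup: correlate each rule ri with the tests whose requests evaluate
    // differently under P and the mutant Pm that flips ri's decision.
    static Map<String, Set<String>> correlate() {
        Map<String, Set<String>> ruleToTests = new HashMap<>();
        for (String ri : P.keySet()) {
            Map<String, Boolean> pm = new HashMap<>(P);
            pm.put(ri, !pm.get(ri)); // mutant Pm
            for (var t : TESTS.entrySet())
                for (String q : t.getValue())
                    if (!P.get(q).equals(pm.get(q))) // different behavior
                        ruleToTests.computeIfAbsent(ri, k -> new HashSet<>())
                                   .add(t.getKey());
        }
        return ruleToTests;
    }

    public static void main(String[] args) {
        // Selection: run only the tests correlated with the changed rules.
        Set<String> changedRules = Set.of("r2");
        Map<String, Set<String>> corr = correlate();
        Set<String> selected = new HashSet<>();
        changedRules.forEach(r -> selected.addAll(corr.getOrDefault(r, Set.of())));
        System.out.println(selected); // testStudentRead and testMixed
    }
}
```

The correlation loop is the 2n-evaluation setup; once a real policy change arrives, only the cheap lookup in main remains.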

Page 14: Test Scenario

Test Selection Technique II
• Find the system test cases impacted by policy changes by analyzing which rules are evaluated (i.e., covered)

[Setup: rule-test correlation]
– Execute the system test cases T
– Detect which rules are evaluated by each system test case
– Correlate each rule r with its corresponding system test cases

Page 15: Test Scenario

Test Selection Technique II - cont

[Test selection for policy changes]
1. Find the rules R impacted by the policy changes
2. Select the system test cases correlated with some rule r ∈ R (see the sketch below)

Cost: given n rules in P, we need to execute the test suite T only once. Again, the setup can be conducted before any policy changes are encountered.
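
A sketch of Technique II's correlation and selection. The per-test covered-rule sets would come from a coverage-instrumented PDP (as in the coverage sketch earlier) but are hard-coded here so the example runs standalone (hypothetical names):

```java
import java.util.*;

// Sketch of Technique II: invert per-test rule coverage into a
// rule -> tests correlation, then select tests for the changed rules.
public class CoverageBasedSelection {
    // Setup output: test name -> rules covered while the test executed.
    static final Map<String, Set<String>> COVERED_BY = Map.of(
        "testFacultyWrite", Set.of("r1"),
        "testStudentRead",  Set.of("r2"),
        "testMixed",        Set.of("r1", "r2"));

    public static void main(String[] args) {
        // Invert coverage into the rule -> tests correlation.
        Map<String, Set<String>> ruleToTests = new HashMap<>();
        COVERED_BY.forEach((test, rules) -> rules.forEach(r ->
            ruleToTests.computeIfAbsent(r, k -> new HashSet<>()).add(test)));

        // Selection: tests correlated with any changed rule.
        Set<String> changedRules = Set.of("r1");
        Set<String> selected = new HashSet<>();
        changedRules.forEach(r ->
            selected.addAll(ruleToTests.getOrDefault(r, Set.of())));
        System.out.println(selected); // testFacultyWrite and testMixed
    }
}
```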

Page 16: Test Scenario

Test Selection Technique III
• Find the system test cases impacted by policy changes by recording and evaluating requests

[Setup: request collection]
1. Record all requests issued to the policy decision point (PDP) for each system test case

Page 17: Test Scenario

Test Selection Technique III - cont

[Test selection for policy changes]
1. Select the requests (with their corresponding system test cases) that evaluate to different decisions under the two policy versions (see the sketch below)

Cost: given n rules, we need to execute all system test cases only once.
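
A sketch of Technique III, modeling both policy versions and the recorded per-test requests as plain lookup tables (hypothetical names):

```java
import java.util.*;

// Sketch of Technique III: replay each recorded request against both policy
// versions; a test is selected if any of its requests changes decision.
public class RequestRecordingSelection {
    // Policies as request -> Permit(true)/Deny(false) tables.
    static final Map<String, Boolean> OLD_P = Map.of(
        "facultyWriteGrades", true, "studentReadGrades", true);
    static final Map<String, Boolean> NEW_P = Map.of(
        "facultyWriteGrades", true, "studentReadGrades", false);
    // Setup output: test name -> requests it issued to the PDP.
    static final Map<String, List<String>> RECORDED = Map.of(
        "testFacultyWrite", List.of("facultyWriteGrades"),
        "testStudentRead",  List.of("studentReadGrades"));

    public static void main(String[] args) {
        List<String> selected = new ArrayList<>();
        for (var e : RECORDED.entrySet())
            // Does any recorded request get a different decision?
            if (e.getValue().stream()
                    .anyMatch(q -> !OLD_P.get(q).equals(NEW_P.get(q))))
                selected.add(e.getKey());
        System.out.println(selected); // [testStudentRead]
    }
}
```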

Page 18: Test Scenario

Test Augmentation Technique
• Augment system test cases for policy evolution
1. Collect the request-response pairs qs that expose different policy behaviors
2. Select only those pairs qsi (qsi ⊂ qs) that cannot be issued by the existing system tests
3. Find the system test cases that issue requests with high similarity to qsi, by counting the number of common attribute values (see the sketch below)
   • The two requests (faculty, write, grades) and (student, write, grades) include two common attribute values
4. Manually modify those system test cases to issue a request q (q ∈ qsi)
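
A sketch of the similarity count in step 3, treating a request as its ordered (subject, action, resource) attribute values (hypothetical helper; it reproduces the slide's example):

```java
import java.util.List;

// Sketch: similarity between two requests is the number of attribute
// positions (subject, action, resource) on which they agree.
public class RequestSimilarity {
    static int similarity(List<String> a, List<String> b) {
        int common = 0;
        for (int i = 0; i < a.size(); i++) // assumes same attribute order
            if (a.get(i).equals(b.get(i))) common++;
        return common;
    }

    public static void main(String[] args) {
        // The slide's example: two common attribute values -> similarity 2.
        List<String> q1 = List.of("faculty", "write", "grades");
        List<String> q2 = List.of("student", "write", "grades");
        System.out.println(similarity(q1, q2)); // 2
    }
}
```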

Page 19: Test Scenario

Evaluation Subjects
• A collection of Java programs interacting with security policies

Subject   # classes   # methods   LOC
LMS           62          355      3204
VMS          134          581      6077
ASMS         122          797     10703

Page 20: Test Scenario

Metrics
• True positive: a test case that is selected and is actually impacted by the policy changes
• False positive: a test case that is selected although it is not impacted
• False negative: a test case that is not selected although it is impacted
• True negative: a test case that is correctly left unselected
• Elapsed time for execution
• The number of test runs
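
These counts feed the precision and recall measures cited under RQ1 and RQ2 on the next slide; by the standard definitions (not spelled out on this slide), precision = TP / (TP + FP) and recall = TP / (TP + FN).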

Page 21: Test Scenario

Research Questions
• RQ1: How effectively do our proposed techniques select system test cases under policy changes?
  – Precision and recall
  – Cost of each technique: elapsed time for execution and the number of test runs
• RQ2: How effectively does our test augmentation technique suggest system test cases that expose policy behavior differences which existing system test cases cannot expose?
  – Precision and recall


Page 23: Test Scenario

Open Questions
• How to correlate unit test cases with each changed location?
  – Our techniques are sound assuming we apply only rule-decision-change mutation
  – For rule addition/deletion, we may correlate unit test cases with a default fall-through rule or with non-applicable cases
  – If we consider other types of mutants (e.g., rule combination), it would be challenging

Page 24: Test Scenario

Open Questions - cont'
• How to partition the difference-exposing policy unit test cases produced by Margrave?
  – For OrBAC, each rule is evaluated by only one request, so I think that each request represents one category. (I need to synthesize Margrave's outcome to find all possible requests.)
  – In general, an XACML policy may include rules applicable to more than one request, so we may categorize requests based on the rules they cover (see the sketch below). For example, if req1 and req2 both cover rule 1, we classify these two requests into the same category.
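
A sketch of that rule-based categorization (hypothetical names): requests whose covering-rule sets are equal land in the same category.

```java
import java.util.*;

// Sketch: group requests by the exact set of rules they cover.
public class RequestPartitioning {
    public static void main(String[] args) {
        // request -> rules it covers (req1 and req2 both cover r1).
        Map<String, Set<String>> covers = Map.of(
            "req1", Set.of("r1"),
            "req2", Set.of("r1"),
            "req3", Set.of("r2", "r3"));

        // Requests with identical covering-rule sets form one category.
        Map<Set<String>, List<String>> categories = new HashMap<>();
        covers.forEach((req, rules) ->
            categories.computeIfAbsent(rules, k -> new ArrayList<>()).add(req));
        // e.g., {[r1]=[req1, req2], [r2, r3]=[req3]} (order may vary)
        System.out.println(categories);
    }
}
```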