Software Engineering
Lecture 19: Object-Oriented Testing & Technical Metrics
Today’s Topics
• Evaluating OOA and OOD Models
• Unit, Class & Integration Testing
• OO Design Metrics
• Class-Oriented Metrics
• Operation-Oriented Metrics
• Testing Metrics
• Project Metrics
O-O Programs are Different
High Degree of Reuse
• Does this mean more, or less, testing?
Unit Testing vs. Class Testing
• What is the right “unit” in OO testing?
Review of Analysis & Design
• Classes appear early, so defects can be recognized early as well
Testing OOA and OOD Models
Correctness (of each model element)
• Syntactic (notation, conventions): review by modeling experts
• Semantic (conforms to the real problem): review by domain experts
Consistency (of each class)
• Revisit CRC cards & the class diagram
• Trace delegated responsibilities
• Examine / adjust cohesion of responsibilities
Model Testing [2]
Evaluating the Design
• Compare the behavioral model to the class model
• Compare the behavioral & class models to the use cases
• Inspect the detailed design for each class (algorithms & data structures)
Unit Testing
What is a “Unit”?
• Traditional: a “single operation”
• O-O: encapsulated data & operations
Smallest testable unit = the class, with its many operations (see the test sketch below)
Inheritance
• Testing “in isolation” is impossible: operations must be tested every place they are used
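A minimal JUnit 4 sketch of class-level testing; BankAccount is a hypothetical class invented for illustration. The test drives a sequence of operations against one instance, since the encapsulated state ties the operations together:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical class under test: the "unit" is the whole class,
    // because its operations share encapsulated state (balance)
    class BankAccount {
        private int balance = 0;
        void deposit(int amount)  { balance += amount; }
        void withdraw(int amount) { balance -= amount; }
        int getBalance()          { return balance; }
    }

    public class BankAccountTest {
        @Test
        public void depositThenWithdraw() {
            BankAccount acct = new BankAccount();
            acct.deposit(100);
            acct.withdraw(40);
            // The sequence of operations, not any single method, is tested
            assertEquals(60, acct.getBalance());
        }
    }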
Testing under Inheritance
[Class diagram: Shape defines move(); its subclasses Circle, Square, and Ellipse each define resize().]
Q: What if the implementation of resize() for each subclass calls the inherited operation move()?
A: Shape cannot be completely tested unless we also test Circle, Square, & Ellipse!
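A minimal Java sketch of this hierarchy (the method bodies are illustrative, not from the slides) shows why move() cannot be signed off after testing Shape alone:

    // Mirrors the hierarchy on the slide; bodies are illustrative only
    abstract class Shape {
        protected int x, y;
        void move(int dx, int dy) { x += dx; y += dy; }   // inherited operation
    }

    class Circle extends Shape {
        int radius;
        void resize(int r) { radius = r; move(-r, -r); }  // calls inherited move()
    }

    class Square extends Shape {
        int side;
        void resize(int s) { side = s; move(-s, -s); }    // same call, new context
    }

    class Ellipse extends Shape {
        int rx, ry;
        void resize(int r) { rx = r; ry = r / 2; move(-r, -r); }
    }
    // Because each resize() invokes the inherited move() against its own
    // state, move() must be re-verified in Circle, Square, AND Ellipse.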
Integration Testing
O-O Integration: Not Hierarchical
• Coupling is not via subroutine calls
• “Top-down” and “Bottom-up” have little meaning
Integrating one operation at a time is difficult
• Indirect interactions among operations
O-O Integration Testing
Thread-Based Testing
• Integrate the set of classes required to respond to one input or event
• Integrate one thread at a time
Example: Event-Dispatching Thread vs. Event Handlers in Java
• Implement & test all GUI events first
• Add event handlers one at a time
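A hedged Swing sketch of this strategy (the frame, button, and handler are illustrative): build and test the GUI on the event-dispatching thread first, then integrate and test one handler at a time.

    import javax.swing.JButton;
    import javax.swing.JFrame;
    import javax.swing.SwingUtilities;

    public class HandlerIntegrationDemo {
        public static void main(String[] args) {
            // Step 1: build & test the GUI on the event-dispatching thread
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("Integration demo");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                JButton save = new JButton("Save");
                frame.add(save);

                // Step 2: integrate one event handler (thread) at a time;
                // test this thread end-to-end before adding the next one
                save.addActionListener(e -> System.out.println("save thread"));

                frame.pack();
                frame.setVisible(true);
            });
        }
    }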
O-O Integration [2]
Use-Based Testing
• Implement & test independent classes first
• Then implement dependent classes (layer by layer, or cluster-based)
• Simple driver classes or methods are sometimes required to test lower layers
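A minimal sketch of such a driver, assuming a hypothetical lower-layer class Parser that higher layers will later depend on:

    // Hypothetical independent (lower-layer) class, implemented first
    class Parser {
        String parse(String input) { return input.trim(); }
    }

    // Simple driver: exercises the lower layer before any dependent
    // classes (e.g., a report generator) are implemented
    public class ParserDriver {
        public static void main(String[] args) {
            Parser p = new Parser();
            System.out.println(p.parse("  key=value  "));  // prints "key=value"
        }
    }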
Validation Testing
Details of objects are not visible
Focus on user-observable input and output
Methods:
• Utilize use cases to derive tests (both manual & automatic)
• Black-box testing for automatic tests
Test Case Design
Focus: “designing sequences of operations to exercise the states of a class instance”
Challenge: Observability
• Do we have methods that allow us to inspect the inner state of an object? (see the sketch below)
Challenge: Inheritance
• Can test cases for a superclass be used to test a subclass?
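One common way to address the observability challenge is a narrow state-inspection hook. A hedged sketch with a hypothetical Connection class; the package-private inspector is one design choice, not part of the lecture:

    // Observability hook: expose just enough internal state for tests
    public class Connection {
        private enum State { CLOSED, OPEN }
        private State state = State.CLOSED;

        public void open()  { state = State.OPEN; }
        public void close() { state = State.CLOSED; }

        // Package-private inspector: lets tests in the same package
        // observe the state without widening the public API
        State currentState() { return state; }
    }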
Test Case Checklist [Berard ’93]
Identify unique tests & associate each with a particular class
Describe the purpose of the test
Develop a list of testing steps:
• Specified states to be tested
• Operations (methods) to be tested
• Exceptions that might occur
• External conditions & changes thereto
• Supplemental information (if needed)
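The checklist maps naturally onto a test-case comment template. A minimal JUnit 4 sketch using java.util.Stack; the comment annotations follow Berard's items, and the test itself is illustrative:

    import org.junit.Test;

    public class StackTest {
        // Purpose:           verify pop() on an empty stack raises an exception
        // State tested:      empty stack
        // Operations tested: pop()
        // Exceptions:        EmptyStackException expected
        // External changes:  none
        // Supplemental:      none
        @Test(expected = java.util.EmptyStackException.class)
        public void popOnEmptyStackThrows() {
            new java.util.Stack<String>().pop();
        }
    }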
Object-Oriented Metrics
Five characteristics [Berard ’95]:
• Localization: operations are used in many classes
• Encapsulation: metrics for classes, not modules
• Information Hiding: should be measured & improved
• Inheritance: adds complexity, should be measured
• Object Abstraction: metrics represent the level of abstraction
Design Metrics [Whitmire ’97]
Size
• Population (# of classes, operations)
• Volume (dynamic object count)
• Length (e.g., depth of inheritance)
• Functionality (# of user functions)
Complexity
• How classes are interrelated
Design Metrics [2]
Coupling
• # of collaborations between classes, number of method calls, etc.
Sufficiency
• Does a class reflect the necessary properties of the problem domain?
Completeness
• Does a class reflect all the properties of the problem domain? (important for reuse)
Design Metrics [3]
Cohesion
• Do the attributes and operations in a class achieve a single, well-defined purpose in the problem domain?
Primitiveness (Simplicity)
• Degree to which class operations cannot be composed from other operations
Design Metrics [4]
Similarity
• Comparison of the structure, function, and behavior of two or more classes
Volatility
• The likelihood that a change will occur in the design or implementation of a class
Class-Oriented Metrics
Of central importance in evaluating object-oriented design (which is inherently class-based)
A variety of metrics have been proposed:
• Chidamber & Kemerer (1994)
• Lorenz & Kidd (1994)
• Harrison, Counsell & Nithi (1998)
Weighted Methods per Class
Assume class C has n methods, with complexity measures c1, …, cn
WMC(C) = c1 + c2 + … + cn (the sum of the ci)
Complexity is a function of the # of methods and their complexity
Issues:
• How to count methods? (inheritance)
• Normalize ci to 1.0? (then WMC reduces to the number of methods)
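A minimal sketch of the computation; the per-method complexities ci would come from a separate tool (e.g., cyclomatic complexity), and the values below are illustrative:

    // WMC(C) = sum of the complexities c_i of C's n methods
    public class WmcExample {
        static double wmc(double[] methodComplexities) {
            double sum = 0.0;
            for (double c : methodComplexities) sum += c;
            return sum;
        }

        public static void main(String[] args) {
            // Illustrative complexity values for one class's four methods
            double[] c = {1.0, 3.0, 2.0, 5.0};
            System.out.println("WMC = " + wmc(c));  // WMC = 11.0
        }
    }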
Depth of Inheritance Tree
Maximum length from a node C to the root of the tree
PRO: inheritance = reuse
CON: greater depth implies greater complexity
• Hard to predict behavior under inheritance
• Greater design complexity (effort)
DIT Example
[Figure from SEPA 5/e: a four-level class hierarchy, levels numbered 1 through 4. DIT = 4, the longest path from the root to a child node in the hierarchy.]
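For implemented Java code, DIT can be measured directly with reflection by walking the superclass chain; a minimal sketch:

    // Counts superclass links from a class up to java.lang.Object
    public class DitExample {
        static int dit(Class<?> c) {
            int depth = 0;
            for (Class<?> cur = c; cur.getSuperclass() != null;
                    cur = cur.getSuperclass()) {
                depth++;
            }
            return depth;
        }

        public static void main(String[] args) {
            // String extends Object directly, so this prints 1
            System.out.println(dit(String.class));
        }
    }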
Number of Children
Subclasses immediately subordinate to class C are its children
As the # of children (NOC) increases:
• PRO: more reuse
• CON: the parent becomes less abstract
• CON: more testing required
Coupling Between Objects
Number of collaborations for a given class C
As CBO increases:
• CON: reusability decreases
• CON: harder to modify & test
CBO should be minimized
Response For A Class
Response set: the set of methods that can potentially execute in response to some message
RFC: the # of methods in the response set
As RFC increases:
• CON: effort for testing increases
• CON: design complexity increases
Lack of Cohesion in Methods
LCOM: # of methods that access one or more of the same attributes
When LCOM is high:
• More coupling between methods
• Additional design complexity
When LCOM is low:
• Lack of cohesion? Sometimes this is legitimate (e.g., the independent gauges of a control panel)
• Reduced design complexity
Class Size
Number of operations
• Inherited & local
Number of attributes
• Inherited & local
These may be added together, but they lack the weighting for complexity which WMC provides
Method Inheritance Factor
Proportion of inherited methods to the total methods available in a class:
MIF = Mi(Ci) / Ma(Ci)
where Mi(Ci) = # of methods inherited by class Ci, and Ma(Ci) = # of methods available in Ci (inherited + locally defined)
A way to measure inheritance (and the additional design & testing complexity it brings)
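A hedged reflection sketch of the per-class ratio, approximating Mi and Ma with public methods only (a rough estimate for illustration, not the exact metric definition):

    import java.lang.reflect.Method;

    // Approximates MIF for one class: inherited methods / available methods
    public class MifExample {
        static double mif(Class<?> c) {
            Method[] available = c.getMethods();   // public, incl. inherited
            int inherited = 0;
            for (Method m : available) {
                // Methods declared outside c were inherited (not overridden)
                if (m.getDeclaringClass() != c) inherited++;
            }
            return (double) inherited / available.length;
        }

        public static void main(String[] args) {
            // String declares most of its methods itself, but still
            // inherits e.g. wait()/notify() from Object
            System.out.println(mif(String.class));
        }
    }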
Operation-Oriented Metrics
Average Operation Size (OSavg)
• LOC is not a good measure
• Better: the number of messages sent
• Should strive to minimize
Operation Complexity (OC)
• E.g., function points; minimize
Average # of Parameters (NPavg)
• Larger = more complex collaborations between objects; try to minimize
O-O Testing Metrics
Percent Public & Protected (PAP)
• Percentage of a class’s attributes that are public or protected
• Higher: greater chance of side effects
Public Access to Data (PAD)
• # of classes that can access data in another class (an encapsulation violation)
• Higher: greater chance of side effects
Testing Metrics [2]
Number of Root Classes (NOR)
• # of distinct class hierarchies
• Higher: increased testing effort, since test cases must be defined for each hierarchy
Fan-In (FIN)
• In an O-O context, FIN > 1 indicates multiple inheritance
• FIN > 1 should be avoided! (Java forbids it for classes)
Project Metrics
Number of Scenario Scripts (NSS)
• Proportional to # of classes, methods, …
• Strong indicator of program size
Number of Key Classes (NKC)
• Unique to the solution (not reused)
• Higher: substantial development work
Number of Subsystems (NSUB)
• Impact: resource allocation, parallel scheduling, integration effort
Questions?