Lecture Notes On
SOFTWARE ENGINEERING Course Code: CS503PC
By
Dr. M. RAMASUBRAMANIAN Professor
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
SRIDEVI WOMENS ENGINEERING COLLEGE V.N.pally, Near Gandipet , R.R. Dist, Hyderabad, Telangana, India
Department of CSE,SWEC Page 1
R16 B.TECH CSE. SOFTWARE ENGINEERING
B.Tech. III Year I Sem. Course Code: CS503PC L T P C: 4 0 0 4
Course Objectives:
To understand software process models such as waterfall and evolutionary
models.
To understand software requirements and the SRS document.
To understand different software architectural styles.
To understand software testing approaches such as unit testing and integration
testing.
To understand quality control and how to ensure good-quality software.
Course Outcomes:
Ability to identify the minimum requirements for the development of an application.
Ability to develop and maintain efficient, reliable and cost-effective software solutions.
Ability to think critically and evaluate assumptions and arguments.
UNIT - I
Introduction to Software Engineering: The evolving role of software, changing nature of software, legacy software, software myths.
A Generic View of Process: Software engineering - a layered technology, a process framework, the Capability Maturity Model Integration (CMMI), process patterns, process assessment, personal and team process models.
Process Models: The waterfall model, incremental process models, evolutionary process models, specialized process models, the Unified Process.

UNIT - II
Software Requirements: Functional and non-functional requirements, user requirements, system requirements, interface specification, the software requirements document.
Requirements Engineering Process: Feasibility studies, requirements elicitation and analysis, requirements validation, requirements management.
System Models: Context models, behavioral models, data models, object models, structured methods.

UNIT - III
Design Engineering: Design process and design quality, design concepts, the design model, pattern-based software design.
Creating an Architectural Design: Software architecture, data design, architectural styles and patterns, architectural design, assessing alternative architectural designs, mapping data flow into a software architecture.
Modeling Component-Level Design: Designing class-based components, conducting component-level design, object constraint language, designing conventional components.
Performing User Interface Design: Golden rules, user interface analysis and design, interface analysis, interface design steps, design evaluation.

UNIT - IV
Testing Strategies: A strategic approach to software testing, test strategies for conventional software, black-box and white-box testing, validation testing, system testing, the art of debugging.
Product Metrics: Software quality, framework for product metrics, metrics for the analysis model, metrics for the design model, metrics for source code, metrics for testing, metrics for maintenance.
Metrics for Process and Products: Software measurement, metrics for software quality.

UNIT - V
Risk Management: Reactive vs. proactive risk strategies, software risks, risk identification, risk projection, risk refinement, RMMM, RMMM plan.
Quality Management: Quality concepts, software quality assurance, software reviews, formal technical reviews, statistical software quality assurance, software reliability, the ISO 9000 quality standards.

TEXT BOOKS:
1. Software Engineering: A Practitioner's Approach, Roger S. Pressman, sixth edition, McGraw-Hill International Edition.
2. Software Engineering, Ian Sommerville, seventh edition, Pearson Education.

REFERENCE BOOKS:
1. Software Engineering: A Precise Approach, Pankaj Jalote, Wiley India, 2010.
2. Software Engineering: A Primer, Waman S. Jawadekar, Tata McGraw-Hill, 2008.
3. Fundamentals of Software Engineering, Rajib Mall, PHI, 2005.
4. Software Engineering: Principles and Practices, Deepak Jain, Oxford University Press.
5. Software Engineering 1: Abstraction and Modelling, Dines Bjorner, Springer International Edition, 2006.
6. Software Engineering 2: Specification of Systems and Languages, Dines Bjorner, Springer International Edition, 2006.
7. Software Engineering Foundations, Yingxu Wang, Auerbach Publications, 2008.
8. Software Engineering: Principles and Practice, Hans van Vliet, 3rd edition, John Wiley & Sons Ltd.
9. Software Engineering 3: Domains, Requirements, and Software Design, Dines Bjorner, Springer International Edition.
10. Introduction to Software Engineering, R. J. Leach, CRC Press.
UNIT – I
Software definition.
Software is (1) instructions (computer programs) that when executed provide
desired function and performance, (2) data structures that enable the programs to
adequately manipulate information, and (3) documents that describe the operation
and use of the programs.
Software Characteristics.
Software has characteristics that are considerably different than those of hardware:
1. Software is developed or engineered, it is not manufactured in the classical
sense. Although some similarities exist between software development and
hardware manufacture, the two activities are fundamentally different. In both
activities, high quality is achieved through good design, but the manufacturing
phase for hardware can introduce quality problems that are nonexistent (or easily
corrected) for software. Both activities are dependent on people, but the
relationship between people applied and work accomplished is entirely different.
Both activities require the construction of a "product" but the approaches are
different. Software costs are concentrated in engineering. This means that software
projects cannot be managed as if they were manufacturing projects.
2. Software doesn't "wear out."
Figure 1.1 depicts failure rate as a function of time for hardware. The relationship,
often called the "bathtub curve," indicates that hardware exhibits relatively high
failure rates early in its life (these failures are often attributable to design or
manufacturing defects); defects are corrected and the failure rate drops to a steady-
state level (ideally, quite low) for some period of time. As time passes, however,
the failure rate rises again as hardware components suffer from the cumulative
effects of dust, vibration, abuse, temperature extremes, and many other
environmental maladies. Stated simply, the hardware begins to wear out. Software
is not susceptible to the environmental maladies that cause hardware to wear out.
In theory, therefore, the failure rate curve for software should take the form of the
“idealized curve” shown in Figure 1.2. Undiscovered defects will cause high
failure rates early in the life of a program. However, these are corrected (ideally,
without introducing other errors) and the curve flattens as shown. The idealized
curve is a gross oversimplification of actual failure models (see Chapter 8 for more
information) for software. However, the implication is clear—software doesn't
wear out. But it does deteriorate! This is shown as the “actual curve” in Figure 1.2.
During its life, software will undergo change (maintenance). As changes are made,
it is likely that some new defects will be introduced, causing the failure rate curve
to spike as shown in Figure 1.2. Before the curve can return to the original steady-
state failure rate, another change is requested, causing the curve to spike again.
Slowly, the minimum failure rate level begins to rise—the software is deteriorating
due to change.
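The deterioration described above can be sketched numerically. The model below is a hypothetical illustration (the function name, parameters, and numbers are assumptions, not taken from the text): each maintenance change adds a spike of new defects that decays, but a small residual increase never decays, so the minimum failure level creeps upward exactly as the "actual curve" suggests.

```python
# Hypothetical sketch of the "actual curve" in Figure 1.2: software failure
# rate over time, where each change introduces a spike of new defects and
# leaves a small permanent residual increase behind.

def failure_rate(t, changes, base=1.0, spike=5.0, residual=0.4, decay=0.2):
    """Failure rate at time t, given a list of change times.
    All values are illustrative units, not measured data."""
    rate = base
    for tc in changes:
        if t >= tc:
            elapsed = t - tc
            # a spike that decays toward a permanent residual increase
            rate += residual + (spike - residual) * (decay ** elapsed)
    return rate

changes = [10, 20, 30]                    # maintenance changes at these times
print(failure_rate(5, changes))           # ~1.0: original steady-state level
print(failure_rate(60, changes))          # ~2.2: minimum level has risen
```

Long after all spikes have decayed, the rate settles above the original baseline: the software has deteriorated due to change.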
3. Although the industry is moving toward component-based assembly, most
software continues to be custom built.
Consider the manner in which the control hardware for a computer-based product
is designed and built. The design engineer draws a simple schematic of the digital
circuitry, does some fundamental analysis to assure that proper function will be
achieved, and then goes to the shelf where catalogs of digital components exist.
Each integrated circuit (called an IC or a chip) has a part number, a defined and
validated function, a well-defined interface, and a standard set of integration
guidelines. After each component is selected, it can be ordered off the shelf. As an
engineering discipline evolves, a collection of standard design components is
created. Standard screws and off-the-shelf integrated circuits are only two of
thousands of standard components that are used by mechanical and electrical
engineers as they design new systems. The reusable components have been created
so that the engineer can concentrate on the truly innovative elements of a design,
that is, the parts of the design that represent something new. In the hardware world,
component reuse is a natural part of the engineering process. In the software world,
it is something that has only begun to be achieved on a broad scale. A software
component should be designed and implemented so that it can be reused in many
different programs.
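As a minimal sketch of this idea, the function below plays the role of an off-the-shelf "component": a self-contained unit with a well-defined interface and validated function, reusable across many programs much like a catalogued IC. The component itself is hypothetical, invented here for illustration.

```python
# Hypothetical sketch of a reusable software component: a self-contained
# validator with a defined interface, analogous to an off-the-shelf IC
# with a part number, a validated function, and a datasheet.

import re

def validate_phone(number: str) -> bool:
    """Well-defined interface: accepts a string, returns True if it is a
    10-digit phone number (digits, spaces, and dashes allowed)."""
    digits = re.sub(r"[\s-]", "", number)
    return digits.isdigit() and len(digits) == 10

# Any program can "order this component off the shelf" and reuse it:
print(validate_phone("040-123-4567"))  # True
print(validate_phone("12345"))         # False
```

Because the interface and behavior are fixed and documented, the engineer reusing it can concentrate on the truly innovative parts of the design.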
Software myths.
Unlike ancient myths that often provide human lessons well worth heeding,
software myths propagated misinformation and confusion. Software myths had a
number of attributes that made them insidious; for instance, they appeared to be
reasonable statements of fact (sometimes containing elements of truth), they had an
intuitive feel, and they were often promulgated by experienced practitioners who
"knew the score."
Three types of myths.
1) Management myths. Managers with software responsibility, like managers
in most disciplines, are often under pressure to maintain budgets, keep schedules
from slipping, and improve quality. Like a drowning person who grasps at a straw,
a software manager often grasps at belief in a software myth, if that belief will
lessen the pressure (even temporarily).
a) Myth: We already have a book that's full of standards and procedures for
building software, won't that provide my people with everything they need to
know?
Reality: The book of standards may very well exist, but is it used? Are software
practitioners aware of its existence? Does it reflect modern software engineering
practice? Is it complete? Is it streamlined to improve time to delivery while still
maintaining a focus on quality? In many cases, the answer to all of these questions
is "no."
b) Myth: My people have state-of-the-art software development tools, after all,
we buy them the newest computers.
Reality: It takes much more than the latest model mainframe, workstation, or PC
to do high-quality software development. Computer-aided software engineering
(CASE) tools are more important than hardware for achieving good quality and
productivity, yet the majority of software developers still do not use them
effectively.
c) Myth: If we get behind schedule, we can add more programmers and catch
up (sometimes called the Mongolian horde concept).
Reality: Software development is not a mechanistic process like manufacturing.
Adding people to a late software project makes it later. At first, this statement may
seem counterintuitive. However, as new people are added, people who were
working must spend time educating the newcomers, thereby reducing the amount
of time spent on productive development effort. People can be added but only in a
planned and well-coordinated manner.
d) Myth: If I decide to outsource the software project to a third party, I can
just relax and let that firm build it.
Reality: If an organization does not understand how to manage and control
software projects internally, it will invariably struggle when it outsources software
projects.
2) Customer myths. A customer who requests computer software may be a
person at the next desk, a technical group down the hall, the marketing/sales
department, or an outside company that has requested software under contract. In
many cases, the customer believes myths about software because software
managers and practitioners do little to correct misinformation. Myths lead to false
expectations (by the customer) and, ultimately, dissatisfaction with the developer.
a) Myth: A general statement of objectives is sufficient to begin writing
programs— we can fill in the details later.
Reality: A poor up-front definition is the major cause of failed software efforts. A
formal and detailed description of the information domain, function, behavior,
performance, interfaces, design constraints, and validation criteria is essential.
These characteristics can be determined only after thorough communication
between customer and developer.
b) Myth: Project requirements continually change, but change can be easily
accommodated because software is flexible.
Reality: It is true that software requirements change, but the impact of change
varies with the time at which it is introduced. Figure 1.3 illustrates the impact of
change. If serious attention is given to up-front definition, early requests for
change can be accommodated easily. The customer can review requirements and
recommend modifications with relatively little impact on cost. When changes are
requested during software design, the cost impact grows rapidly. Resources have
been committed and a design framework has been established. Change can cause
upheaval that requires additional resources and major design modification, that is,
additional cost. Changes in function, performance, interface, or other
characteristics during implementation (code and test) have a severe impact on
cost. Change, when requested after software is in production, can be over an order
of magnitude more expensive than the same change requested earlier.
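The growth in cost described above can be sketched with assumed multipliers. The numbers below are illustrative assumptions chosen only to be consistent with the text's claim that a post-production change can cost over an order of magnitude more than the same change made up front; they are not measured data.

```python
# Illustrative (assumed) relative cost multipliers for the same change
# requested at different points in the life cycle.

COST_MULTIPLIER = {
    "definition": 1,       # requirements still being shaped
    "design": 4,           # design framework already committed
    "implementation": 10,  # code and tests must be reworked
    "production": 60,      # deployed software, users affected
}

def change_cost(base_cost: float, phase: str) -> float:
    """Estimated cost of a change first requested in the given phase."""
    return base_cost * COST_MULTIPLIER[phase]

print(change_cost(1000, "definition"))  # 1000
print(change_cost(1000, "production"))  # 60000
```

The point of the sketch is the shape, not the exact figures: the later the change, the more committed resources it disturbs.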
3) Practitioner's myths. Myths that are still believed by software practitioners
have been fostered by 50 years of programming culture. During the early days of
software, programming was viewed as an art form. Old ways and attitudes die
hard.
a) Myth: Once we write the program and get it to work, our job is done.
Reality: Someone once said that "the sooner you begin 'writing code', the longer
it'll take you to get done." Industry data ([LIE80], [JON91], [PUT97]) indicate
that between 60 and 80 percent of all effort expended on software will be
expended after it is delivered to the customer for the first time.
b) Myth: Until I get the program "running" I have no way of assessing its
quality.
Reality: One of the most effective software quality assurance mechanisms can be
applied from the inception of a project—the formal technical review. Software
reviews are a "quality filter" that have been found to be more effective than
testing for finding certain classes of software defects.
c) Myth: The only deliverable work product for a successful project is the
working program.
Reality: A working program is only one part of a software configuration that
includes many elements. Documentation provides a foundation for successful
engineering and, more important, guidance for software support.
d) Myth: Software engineering will make us create voluminous and
unnecessary documentation and will invariably slow us down.
Reality: Software engineering is not about creating documents. It is about creating
quality. Better quality leads to reduced rework. And reduced rework results in faster
delivery times. Many software professionals recognize the fallacy of the myths just
described. Regrettably, habitual attitudes and methods foster poor management and
technical practices, even when reality dictates a better approach.
Recognition of software realities is the first step toward formulation of practical
solutions for software engineering.
3) Discuss the CMMI guidelines in the development of process models as
contended by the SEI.
The existence of a software process is no guarantee that software will be delivered
on time, that it will meet the customer’s needs, or that it will exhibit the technical
characteristics that will lead to long-term quality characteristics. Process patterns
must be coupled with solid software engineering practice. In addition, the process
itself can be assessed to ensure that it meets a set of basic process criteria that
have been shown to be essential for successful software engineering. A number
of different approaches to software process assessment and improvement have
been proposed over the past few decades:
Standard CMMI Assessment Method for Process Improvement (SCAMPI)—
provides a five-step process assessment model that incorporates five phases:
initiating, diagnosing, establishing, acting, and learning. The SCAMPI method
uses the SEI CMMI as the basis for assessment [SEI00].
The unified process model.
The Unified Process recognizes the importance of customer communication and
streamlined methods for describing the customer’s view of a system. It
emphasizes the important role of software architecture and “helps the architect
focus on the right goals, such as understandability, resilience to future changes, and
reuse” . It suggests a process flow that is iterative and incremental, providing the
evolutionary feel that is essential in modern software. Figure below depicts the
“phases” of the UP and relates them to the generic activities.
The inception phase of the UP encompasses both customer communication and
planning activities. By collaborating with stakeholders, business requirements for
the software are identified; a rough architecture for the system is proposed; and a
plan for the iterative, incremental nature of the ensuing project is developed.
Fundamental business requirements are described through a set of preliminary use
cases that describe which features and functions each major class of users desires.
Architecture at this point is nothing more than a tentative outline of major
subsystems and the function and features that populate them. Later, the
architecture will be refined and expanded into a set of models that will represent
different views of the system. Planning identifies resources, assesses major risks,
defines a schedule, and establishes a basis for the phases that are to be applied as
the software increment is developed.
The elaboration phase encompasses the communication and modeling activities
of the generic process model. Elaboration refines and expands the preliminary use
cases that were developed as part of the inception phase and expands the
architectural representation to include five different views of the software—the
use case model, the requirements model, the design model, the implementation
model, and the deployment model. In some cases, elaboration creates an
“executable architectural baseline” that represents a “first cut” executable system.
The architectural baseline demonstrates the viability of the architecture but does
not provide all features and functions required to use the system. In addition, the
plan is carefully reviewed at the culmination of the elaboration phase to ensure
that scope, risks, and delivery dates remain reasonable. Modifications to the plan
are often made at this time.
The construction phase of the UP is identical to the construction activity defined
for the generic software process. Using the architectural model as input, the
construction phase develops or acquires the software components that will make
each use case operational for end users. To accomplish this, requirements and
design models that were started during the elaboration phase are completed to
reflect the final version of the software increment. All necessary and required
features and functions for the software increment (i.e., the release) are then
implemented in source code. As components are being implemented, unit tests
are designed and executed for each. In addition, integration activities (component
assembly and integration testing) are conducted. Use cases are used to derive a
suite of acceptance tests that are executed prior to the initiation of the next UP
phase.
The transition phase of the UP encompasses the latter stages of the generic
construction activity and the first part of the generic deployment (delivery and
feedback) activity. Software is given to end users for beta testing and user
feedback reports both defects and necessary changes. In addition, the software
team creates the necessary support information (e.g., user manuals,
troubleshooting guides, installation procedures) that is required for the release. At
the conclusion of the transition phase, the software increment becomes a usable
software release.
The production phase of the UP coincides with the deployment activity of the
generic process. During this phase, the ongoing use of the software is monitored,
support for the operating environment (infrastructure) is provided, and defect reports
and requests for changes are submitted and evaluated. It is likely that at the same
time the construction, transition, and production phases are being conducted, work
may have already begun on the next software increment. This means that the five UP
phases do not occur in a sequence, but rather with staggered concurrency. A software
engineering workflow is distributed across all UP phases. In the context of UP, a
workflow is analogous to a task set . That is, a workflow identifies the tasks required
to accomplish an important software engineering action and the work products that
are produced as a consequence of successfully completing the tasks. It should be
noted that not every task identified for a UP
workflow is conducted for every software project. The team adapts the process
(actions, tasks, subtasks, and work products) to meet its needs.
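The idea that a UP workflow is a task set, and that teams tailor it, can be sketched as plain data. The phase list matches the text; the workflow and task names below are illustrative assumptions, not a prescribed UP vocabulary.

```python
# Sketch of the UP notions described above: five phases with staggered
# concurrency, and a workflow as a task set (tasks plus the work products
# produced when the tasks complete). Names are illustrative.

UP_PHASES = ["inception", "elaboration", "construction",
             "transition", "production"]

workflow = {
    "name": "requirements",
    "tasks": ["elicit stakeholder needs", "write preliminary use cases"],
    "work_products": ["use case model", "draft project plan"],
}

def adapt(wf, omit_tasks):
    """Teams adapt the process: not every task identified for a UP
    workflow is conducted for every software project."""
    return {**wf, "tasks": [t for t in wf["tasks"] if t not in omit_tasks]}

tailored = adapt(workflow, {"write preliminary use cases"})
print(len(tailored["tasks"]))  # 1
print(UP_PHASES[0])            # inception
```

The `adapt` step is the point: the workflow definition is a menu the team tailors, not a fixed checklist.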
UNIT – II
Requirements Elicitation.
Requirements engineering provides the appropriate mechanism for understanding what
the customer wants, analyzing need, assessing feasibility, negotiating a reasonable
solution, specifying the solution unambiguously, validating the specification, and
managing the requirements as they are transformed into an operational system. The
requirements engineering process can be described in six distinct steps:
• requirements elicitation
• requirements analysis and negotiation
• requirements specification
• system modeling
• requirements validation
• requirements management
Requirements elicitation involves the following activities:
ask the customer, the users, and others:
• what the objectives for the system or product are
• what is to be accomplished
• how the system or product fits into the needs of the business
• and, finally, how the system or product is to be used on a day-to-day basis.
But it isn’t simple to gather the requirements. The following describe why
requirements elicitation is difficult:
Problems of scope. The boundary of the system is ill-defined or the
customers/users specify unnecessary technical detail that may confuse,
rather than clarify, overall system objectives.
Problems of understanding. The customers/users are not completely sure
of what is needed, have a poor understanding of the capabilities and limitations of
their computing environment, don’t have a full understanding of the problem
domain, have trouble communicating needs to the system engineer, omit
information that is believed to be “obvious,” specify requirements that conflict
with the needs of other customers/users, or specify requirements that are
ambiguous or unstable.
Problems of volatility. The requirements change over time. To help
overcome these problems, system engineers must approach the requirements
gathering activity in an organized manner.
Guidelines for requirements elicitation:
• Assess the business and technical feasibility for the proposed system.
• Identify the people who will help specify requirements and understand their
organizational bias.
• Define the technical environment (e.g., computing architecture, operating
system, telecommunications needs) into which the system or product will be
placed.
• Identify "domain constraints" (i.e., characteristics of the business
environment specific to the application domain) that limit the functionality
or performance of the system or product to be built.
• Define one or more requirements elicitation methods (e.g., interviews,
focus groups, team meetings).
• Solicit participation from many people so that requirements are defined from
different points of view; be sure to identify the rationale for each
requirement that is recorded.
• Identify ambiguous requirements as candidates for prototyping.
• Create usage scenarios to help customers/users better identify key
requirements.
The work products produced as a consequence of the requirements elicitation
include:
• A statement of need and feasibility.
• A bounded statement of scope for the system or product.
• A list of customers, users, and other stakeholders who participated in the
requirements elicitation activity.
• A description of the system’s technical environment.
• A list of requirements (preferably organized by function) and the domain
constraints that apply to each.
• A set of usage scenarios that provide insight into the use of the system or
product under different operating conditions.
• Any prototypes developed to better define requirements.
Each of these work products is reviewed by all people who have participated in the
requirements elicitation.
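That review requirement can be sketched as a small bookkeeping structure. This is a hypothetical illustration; the field names and participant roles are assumptions, not part of the requirements engineering process definition.

```python
# Hypothetical sketch: tracking elicitation work products and checking that
# each has been reviewed by every participant, as the text requires.

from dataclasses import dataclass, field

@dataclass
class WorkProduct:
    name: str
    reviewed_by: set = field(default_factory=set)

participants = {"analyst", "customer", "end user"}

products = [
    WorkProduct("statement of need and feasibility"),
    WorkProduct("bounded statement of scope"),
]
products[0].reviewed_by = {"analyst", "customer", "end user"}

def fully_reviewed(p: WorkProduct) -> bool:
    """True only when every elicitation participant has reviewed it."""
    return participants <= p.reviewed_by

print(fully_reviewed(products[0]))  # True
print(fully_reviewed(products[1]))  # False
```

A checklist like this makes the "reviewed by all participants" rule testable rather than aspirational.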
The roadmap from communication to understanding is often full of potholes.
1. Initiating the Process
The most commonly used requirements elicitation technique is to conduct a
meeting or interview. The first meeting between a software engineer (the analyst)
and the customer can be awkward. Neither person knows what to say or ask; both
are worried that what they do say will be misinterpreted; both are thinking about
where it might lead (both likely have radically different expectations here); both
want to get the thing over with, but at the same time, both want it to be a success.
Yet, communication must be initiated. Generally, the analyst starts by asking
context-free questions. That is, a set of questions that will lead to a basic
understanding of the problem, the people who want a solution, the nature of the
solution that is desired, and the effectiveness of the first encounter itself. The first
set of context-free questions focuses on the customer, the overall goals, and the
benefits. For example, the analyst might ask:
• Who is behind the request for this work?
• Who will use the solution?
• What will be the economic benefit of a successful solution?
• Is there another source for the solution that you need?
These questions help to identify all stakeholders who will have interest in the
software to be built. In addition, the questions identify the measurable benefit of a
successful implementation and possible alternatives to custom software
development.
The next set of questions enables the analyst to gain a better understanding of the
problem and the customer to voice his or her perceptions about a solution:
• How would you characterize "good" output that would be generated by a
successful solution?
• What problem(s) will this solution address?
• Can you show me (or describe) the environment in which the solution will be
used?
• Will special performance issues or constraints affect the way the solution
is approached?
The final set of questions, called meta-questions, focuses on the effectiveness
of the meeting itself:
• Are you the right person to answer these questions? Are your answers "official"?
• Are my questions relevant to the problem that you have?
• Am I asking too many questions?
• Can anyone else provide additional information?
• Should I be asking you anything else?
These questions (and others) will help to "break the ice" and initiate the
communication that is essential to successful analysis.
2. Facilitated Application Specification Techniques. Too often, customers
and software engineers have an unconscious "us and them" mind-set. Rather than
working as a team to identify and refine requirements, each constituency defines its
own "territory" and communicates through a series of memos, formal position
papers, documents, and question and answer sessions. History has shown that this
approach doesn't work very well. Misunderstandings abound, important
information is omitted, and a successful working relationship is never established.
It is with these problems in mind that a number of independent investigators have
developed a team-oriented approach to requirements gathering that is applied
during early stages of analysis and specification. Called facilitated application
specification techniques (FAST), this approach encourages the creation of a joint
team of customers and developers who work together to identify the problem,
propose elements of the solution, negotiate different approaches and specify a
preliminary set of solution requirements. FAST has been used predominantly by
the information systems community, but the technique offers potential for
improved communication in applications of all kinds. Many different approaches
to FAST have been proposed. Each makes use of a slightly different scenario, but
all apply some variation on the following basic guidelines:
• A meeting is conducted at a neutral site and attended by both software
engineers and customers.
• Rules for preparation and participation are established.
• An agenda is suggested that is formal enough to cover all important points
but informal enough to encourage the free flow of ideas.
• A "facilitator" (can be a customer, a developer, or an outsider) controls
the meeting.
• A "definition mechanism" (can be work sheets, flip charts, wall stickers,
or an electronic bulletin board, chat room, or virtual forum) is used.
• The goal is to identify the problem, propose elements of the solution,
negotiate different approaches, and specify a preliminary set of solution
requirements in an atmosphere that is conducive to the accomplishment of the goal.
To better
understand the flow of events as they occur in a typical FAST meeting, we present
a brief scenario that outlines the sequence of events that lead up to the meeting,
occur during the meeting, and follow the meeting. Initial meetings between the
developer and customer occur and basic questions and answers help to establish
the scope of the problem and the overall perception of a solution. Out of these
initial meetings, the developer and customer write a one- or two-page "product
request." A meeting place, time, and date for FAST are selected and a facilitator is
chosen. Attendees from both the development and customer/user organizations are
invited to attend. The product request is distributed to all attendees before the
meeting date. While reviewing the request in the days before the meeting, each
FAST attendee is asked to make a list of objects that are part of the environment
that surrounds the system, other objects that are to be produced by the system, and
objects that are used by the system to perform its functions. In addition, each
attendee is asked to make another list of services (processes or functions) that
manipulate or interact with the objects. Finally, lists of constraints (e.g., cost, size,
business rules) and performance criteria (e.g., speed, accuracy) are also developed.
The attendees are informed that the lists are not expected to be exhaustive but are
expected to reflect each person’s perception of the system.
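The per-attendee lists described above can be sketched as a simple data structure, together with a merge step that combines every attendee's lists into de-duplicated team lists. The entries and the merge behavior are illustrative assumptions for this sketch, not part of the FAST method definition.

```python
# Hypothetical sketch of one FAST attendee's pre-meeting lists, plus a
# merge that combines all attendees' lists into team lists.

attendee_lists = {
    "objects": ["smoke detector", "control panel", "sensor event"],
    "services": ["set the alarm", "monitor the sensors"],
    "constraints": ["manufactured cost under a target figure"],
    "performance": ["recognize a sensor event within one second"],
}

def merge(lists_from_all_attendees):
    """Combine every attendee's lists into de-duplicated team lists."""
    merged = {}
    for lists in lists_from_all_attendees:
        for category, items in lists.items():
            merged.setdefault(category, set()).update(items)
    return merged

team = merge([attendee_lists,
              {"objects": ["motion detector", "control panel"]}])
print(sorted(team["objects"]))
```

Duplicates across attendees collapse in the merge, while each person's distinct perceptions of the system survive into the combined lists.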
As an example, assume that a FAST team working for a consumer products
company has been provided with the following product description:
Our research indicates that the market for home security systems is growing at a
rate of 40 percent per year. We would like to enter this market by building a
microprocessor-based home security system that would protect against and/or
recognize a variety of undesirable "situations" such as illegal entry, fire, flooding,
and others. The product, tentatively called SafeHome, will use appropriate sensors
to detect each situation, can be programmed by the homeowner, and will
automatically telephone a monitoring agency when a situation is detected. The
FAST team is composed of representatives from marketing, software and hardware
engineering, and manufacturing. An outside facilitator is to be used. Each person
on the FAST team develops the lists described previously. Objects described for
SafeHome might include smoke detectors, window and door sensors, motion
detectors, an alarm, an event (a sensor has been activated), a control panel, a
display, telephone numbers, a telephone call, and so on. The list of services might
include setting the alarm, monitoring the sensors, dialing the phone, programming
the control panel, reading the display (note that services act on objects). In a
similar fashion, each FAST attendee will develop lists of constraints (e.g., the
system must have a manufactured cost of less than $80, must be user-friendly,
must interface directly to a standard phone line) and performance criteria (e.g., a
sensor event should be recognized within one second, an event priority scheme
should be implemented). As the FAST meeting begins, the first topic of discussion
is the need and justification for the new product—everyone should agree that the
product is justified. Once agreement has been established, each participant presents
his or her lists for discussion. The lists can be pinned to the walls of the room using
large sheets of paper, stuck to the walls using adhesive backed sheets, or written on
a wall board. Alternatively, the lists may have been posted on an electronic bulletin
board or posted in a chat room environment for review prior to the meeting. Ideally,
each list entry should be capable of being manipulated separately so that lists can
be combined, entries can be deleted and additions can be made. At this stage,
critique and debate are strictly prohibited. After individual lists are presented in
one topic area, a combined list is created by the group. The combined list
eliminates redundant entries, adds any new ideas that come up during the
discussion, but does not delete anything. After combined lists for all topic areas
have been created, discussion—coordinated by the facilitator—ensues. The
combined list is shortened, lengthened, or reworded to properly reflect the product/system to be developed. The objective is to develop a consensus list in each topic area (objects, services, constraints, and performance). The lists are then set aside for later action. Once the consensus lists have been completed, the team is divided into
smaller subteams; each works to develop mini-specifications for one or more
entries on each of the lists. Each mini-specification is an elaboration of the word or
phrase contained on a list. For example, the mini-specification for the SafeHome
object control panel might be
• mounted on wall
• size approximately 9 × 5 inches
• contains standard 12-key pad and special keys
• contains LCD display of the form shown in sketch
• all customer interaction occurs through keys
• used to enable and disable the system
• software provides interaction guidance, echoes, and the like
• connected to all sensors
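Since the notes stress that each list entry should be capable of being manipulated separately (combined, deleted, extended), a mini-spec can be pictured as simple structured data. This is an illustrative sketch only; the `MiniSpec` class and its fields are invented here and are not part of the FAST technique itself.

```python
from dataclasses import dataclass, field

@dataclass
class MiniSpec:
    """One consensus-list entry (e.g., 'control panel') and its elaboration."""
    entry: str
    details: list[str] = field(default_factory=list)

    def add(self, detail: str) -> None:
        # During review, entries may be elaborated further; additions are
        # allowed, duplicates are skipped, and nothing is silently deleted.
        if detail not in self.details:
            self.details.append(detail)

# The SafeHome control-panel mini-spec sketched from the bullets above.
panel = MiniSpec("control panel", [
    "mounted on wall",
    "size approximately 9 x 5 inches",
    "contains standard 12-key pad and special keys",
])
panel.add("contains LCD display of the form shown in sketch")
```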
Each subteam then presents each of its mini-specs to all FAST attendees for
discussion. Additions, deletions, and further elaboration are made. In some cases,
the development of mini-specs will uncover new objects, services, constraints, or
performance requirements that will be added to the original lists. During all
discussions, the team may raise an issue that cannot be resolved during the
meeting. An issues list is maintained so that these ideas will be acted on later. After
the mini-specs are completed, each FAST attendee makes a list of validation
criteria for the product/system and presents his or her list to the team. A consensus
list of validation criteria is then created. Finally, one or more participants (or
outsiders) is assigned the task of writing the complete draft specification using all
inputs from the FAST meeting. FAST is not a panacea for the problems
encountered in early requirements elicitation.
3. Quality Function Deployment
Quality function deployment (QFD) is a quality management technique that
translates the needs of the customer into technical requirements for software.
Originally developed in Japan and first used at the Kobe Shipyard of Mitsubishi
Heavy Industries, Ltd., in the early 1970s, QFD “concentrates on maximizing
customer satisfaction from the software engineering process.” To accomplish this,
QFD emphasizes an understanding of what is valuable to the customer and then
deploys these values throughout the engineering process. QFD identifies three
types of requirements.
Normal requirements. The objectives and goals that are stated for a product
or system during meetings with the customer. If these requirements are present, the
customer is satisfied. Examples of normal requirements might be requested types
of graphical displays, specific system functions, and defined levels of performance.
Expected requirements. These requirements are implicit to the product or system
and may be so fundamental that the customer does not explicitly state them. Their
absence will be a cause for significant dissatisfaction. Examples of expected
requirements are: ease of human/machine interaction, overall operational
correctness and reliability, and ease of software installation.
Exciting requirements. These features go beyond the customer’s expectations and
prove to be very satisfying when present. For example, word processing software
is requested with standard features. The delivered product contains a number of
page layout capabilities that are quite pleasing and unexpected. In actuality, QFD
spans the entire engineering process.
In meetings with the customer, function deployment is used to determine the value
of each function that is required for the system. Information deployment identifies
both the data objects and events that the system must consume and produce. These
are tied to the functions. Finally, task deployment examines the behavior of the
system or product within the context of its environment. Value analysis is
conducted to determine the relative priority of requirements identified during each of the three deployments. QFD defines requirements in a way that maximizes customer satisfaction.
QFD uses customer interviews and observation, surveys, and examination of
historical data (e.g., problem reports) as raw data for the requirements gathering
activity. These data are then translated into a table of requirements—called the
customer voice table—that is reviewed with the customer. A variety of diagrams,
matrices, and evaluation methods are then used to extract expected requirements
and to attempt to derive exciting requirements.
4. Use-Cases
As requirements are gathered as part of informal meetings, FAST, or QFD, the
software engineer (analyst) can create a set of scenarios that identify a thread of
usage for the system to be constructed. The scenarios, often called use-cases,
provide a description of how the system will be used. To create a use-case, the
analyst must first identify the different types of people (or devices) that use the
system or product. These actors actually represent roles that people (or devices)
play as the system operates. Defined formally, an actor is anything that
communicates with the system or product and that is external to the system itself.
It is important to note that an actor and a user are not the same thing. A typical user
may play a number of different roles when using a system, whereas an actor
represents a class of external entities (often, but not always, people) that play just
one role. As an example, consider a machine operator (a user) who interacts with
the control computer for a manufacturing cell that contains a number of robots and
numerically controlled machines. After careful review of requirements, the
software for the control computer requires four different modes (roles) for
interaction: programming mode, test mode, monitoring mode, and troubleshooting
mode. Therefore, four actors can be defined: programmer, tester, monitor, and
troubleshooter. In some cases, the machine operator can play all of these roles. In
others, different people may play the role of each actor. Because requirements
elicitation is an evolutionary activity, not all actors are identified during the first
iteration. It is possible to identify primary actors during the first iteration and
secondary actors as more is learned about the system. Primary actors interact to
achieve required system function and derive the intended benefit from the system.
They work directly and frequently with the software. Secondary actors support the
system so that primary actors can do their work. Once actors have been identified,
use-cases can be developed. The use-case describes the manner in which an actor
interacts with the system. The following questions should be answered by the use-case:
• What main tasks or functions are performed by the actor?
• What system information will the actor acquire, produce, or change?
• Will the actor have to inform the system about changes in the external environment?
• What information does the actor desire from the system?
• Does the actor wish to be informed about unexpected changes?

A use-case is a scenario that describes how software is to be used in a given situation. Use-cases are defined from an actor's point of view. An actor is a role that people (users) or devices play as they interact with the software.
In general, a use-case is simply a written narrative that describes the role of an
actor as interaction with the system occurs. For basic SafeHome requirements, we
can define three actors: the homeowner (the user), sensors (devices attached to the
system), and the monitoring and response subsystem (the central station that
monitors SafeHome). For the purposes of this example, we consider only the
homeowner actor. The homeowner interacts with the product in a number of
different ways:
• enters a password to allow all other interactions
• inquires about the status of a security zone
• inquires about the status of a sensor
• presses the panic button in an emergency
• activates/deactivates the security system
A use-case for system activation follows:
1. The homeowner observes a prototype of the SafeHome control panel to determine if the system is ready for input. If the system is not ready, the homeowner must physically close windows/doors so that the ready indicator is present. [A not ready indicator implies that a sensor is open; i.e., that a door or window is open.]
2. The homeowner uses the keypad to key in a four-digit password. The password is compared with the valid password stored in the system. If the password is incorrect, the control panel will beep once and reset itself for additional input. If the password is correct, the control panel awaits further action.
3. The homeowner selects and keys in stay or away to activate the system. Stay activates only perimeter sensors (inside motion detecting sensors are deactivated). Away activates all sensors.
4. When activation occurs, a red alarm light can be observed by the homeowner.
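The four activation steps above can be sketched as a small routine. This is a hypothetical illustration: the function name, the return strings, and the stored password "1234" are assumptions made for the example, not part of the SafeHome specification.

```python
def activate(system_ready: bool, password: str, mode: str,
             stored_password: str = "1234") -> str:
    """Walk through the system-activation use-case (illustrative sketch)."""
    # Step 1: the ready indicator must be present (all doors/windows closed).
    if not system_ready:
        return "not ready: close open windows/doors"
    # Step 2: the keyed-in password is compared with the stored password.
    if password != stored_password:
        return "beep: invalid password, awaiting new input"
    # Step 3: 'stay' arms perimeter sensors only; 'away' arms all sensors.
    if mode == "stay":
        armed = "perimeter sensors armed"
    elif mode == "away":
        armed = "all sensors armed"
    else:
        return "awaiting stay/away selection"
    # Step 4: activation is signalled by the red alarm light.
    return armed + "; red alarm light on"

print(activate(True, "1234", "away"))  # all sensors armed; red alarm light on
```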
Use-cases for other homeowner interactions would be developed in a similar
manner. It is important to note that each use-case must be reviewed with care. If some
element of the interaction is ambiguous, it is likely that a review of the use-case
will indicate a problem. Each use-case provides an unambiguous scenario of
interaction between an actor and the software. It can also be used to specify timing
requirements or other constraints for the scenario. For example, in the use-case just
noted, requirements indicate that activation occurs 30 seconds after the stay or
away key is hit. This information can be appended to the use-case. Use-cases describe scenarios that will be perceived differently by different actors.
Quality function deployment can be used to develop a weighted priority value for
each use-case. To accomplish this, use-cases are evaluated from the point of view
of all actors defined for the system. A priority value is assigned to each use-case
(e.g., a value from 1 to 10) by each of the actors.
An average priority is then computed, indicating the perceived importance of
each of the use cases. When an iterative process model is used for software
engineering, the priorities can influence which system functionality is delivered
first.
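The priority-averaging step described above can be shown in a few lines. The use-case names, actors, and scores below are invented for illustration.

```python
# Each actor rates each use-case on a 1-10 scale; the mean indicates
# perceived importance and can drive delivery order in an iterative process.
ratings = {
    "activate system":     {"homeowner": 9, "monitoring service": 8},
    "inquire zone status": {"homeowner": 6, "monitoring service": 4},
}

priorities = {uc: sum(scores.values()) / len(scores)
              for uc, scores in ratings.items()}

# Deliver the highest-priority use-cases first.
order = sorted(priorities, key=priorities.get, reverse=True)
print(order)  # ['activate system', 'inquire zone status']
```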
Functional and non-functional requirements

Software system requirements are often classified as functional requirements or non-functional requirements:
1. Functional requirements. These are statements of services the system should
provide, how the system should react to particular inputs, and how the system
should behave in particular situations. In some cases, the functional requirements
may also explicitly state what the system should not do.
2. Non-functional requirements. These are constraints on the services or
functions offered by the system. They include timing constraints, constraints on the
development process, and constraints imposed by standards. Non-functional
requirements often apply to the system as a whole, rather than individual system
features or services.
i) Functional requirements. The functional requirements for a system
describe what the system should do. These requirements depend on the type of
software being developed, the expected users of the software, and the general
approach taken by the organization when writing requirements. When expressed as
user requirements, functional requirements are usually described in an abstract way
that can be understood by system users. However, more specific functional system
requirements describe the system functions, its inputs and outputs, exceptions, etc.,
in detail. Functional system requirements vary from general requirements covering
what the system should do to very specific requirements reflecting local ways of
working or an organization’s existing systems. For example, here are examples of
functional requirements for the MHC-PMS system, used to maintain information
about patients receiving treatment for mental health problems:
1. A user shall be able to search the appointments lists for all clinics.
2. The system shall generate each day, for each clinic, a list of patients who are expected to attend appointments that day.
3. Each staff member using the system shall be uniquely identified by his or her eight-digit employee number.

These functional user requirements define specific facilities to be provided by the
system. These have been taken from the user requirements document and they
show that functional requirements may be written at different levels of detail.
Imprecision in the requirements specification is the cause of many software
engineering problems. It is natural for a system developer to interpret an
ambiguous requirement in a way that simplifies its implementation. This is not
what the customer wants. New requirements have to be established and changes
made to the system. But, this delays system delivery and increases costs. For
example, the first example requirement for the MHC-PMS states that a user shall
be able to search the appointments lists for all clinics. The rationale for this
requirement is that patients with mental health problems are sometimes confused.
They may have an appointment at one clinic but actually go to a different clinic. If
they have an appointment, they will be recorded as having attended, irrespective of
the clinic. The medical staff member specifying this may expect ‘search’ to mean
that, given a patient name, the system looks for that name in all appointments at all
clinics. However, this is not explicit in the requirement. System developers may
interpret the requirement in a different way and may implement a search so that the
user has to choose a clinic then carry out the search. This obviously will involve
more user input and so take longer. In principle, the functional requirements
specification of a system should be both complete and consistent. Completeness
means that all services required by the user should be defined. Consistency means
that requirements should not have contradictory definitions. In practice, for large,
complex systems, it is practically impossible to achieve requirements consistency
and completeness. One reason for this is that it is easy to make mistakes and
omissions when writing specifications for complex systems. Another reason is that
there are many stakeholders in a large system. A stakeholder is a person or role
that is affected by the system in some way. Stakeholders have different— and
often inconsistent—needs. These inconsistencies may not be obvious when the
requirements are first specified, so inconsistent requirements are included in the
specification. The problems may only emerge after deeper analysis or after the
system has been delivered to the customer.
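The two readings of the MHC-PMS "search" requirement can be contrasted directly. The data layout and function names here are hypothetical, chosen only to make the ambiguity concrete.

```python
# Hypothetical appointment data; in a real MHC-PMS this would come
# from the clinics' databases.
appointments = {
    "clinic A": ["J. Smith", "A. Jones"],
    "clinic B": ["J. Smith"],
}

def search_all_clinics(name: str) -> list[str]:
    """The staff member's reading: one query searches every clinic."""
    return [clinic for clinic, patients in appointments.items()
            if name in patients]

def search_one_clinic(name: str, clinic: str) -> bool:
    """A developer's narrower reading: the user must first choose a clinic,
    which requires more input and takes longer."""
    return name in appointments.get(clinic, [])

print(search_all_clinics("J. Smith"))  # ['clinic A', 'clinic B']
```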
ii) Non-functional requirements. Non-functional requirements, as the
name suggests, are requirements that are not directly concerned with the specific
services delivered by the system to its users. They may relate to emergent system
properties such as reliability, response time, and store occupancy. Alternatively,
they may define constraints on the system implementation such as the capabilities
of I/O devices or the data representations used in interfaces with other systems.
Non-functional requirements, such as performance, security, or availability,
usually specify or constrain characteristics of the system as a whole. Non-
functional requirements are often more critical than individual functional
requirements. System users can usually find ways to work around a system
function that doesn’t really meet their needs. However, failing to meet a non-
functional requirement can mean that the whole system is unusable. For example,
if an aircraft system does not meet its reliability requirements, it will not be
certified as safe for operation; if an embedded control system fails to meet its
performance requirements, the control functions will not operate correctly.
Although it is often possible to identify which system components implement
specific functional requirements (e.g., there may be formatting components that
implement reporting requirements), it is often more difficult to relate components
to non-functional requirements. The implementation of these requirements may be
diffused throughout the system. There are two reasons for this:
1. Non-functional requirements may affect the overall architecture of a
system rather than the individual components. For example, to ensure that
performance requirements are met, you may have to organize the system to
minimize communications between components.
2. A single non-functional requirement, such as a security requirement, may
generate a number of related functional requirements that define new system
services that are required. In addition, it may also generate requirements that
restrict existing requirements. Non-functional requirements arise through user
organizational policies, the need for interoperability with other software or
hardware systems, or external factors such as safety regulations or privacy
legislation. Figure above is a classification of non-functional requirements. The
non-functional requirements may come from required characteristics of the
software (product requirements), the organization developing the software
(organizational requirements), or from external sources:
1. Product requirements. These requirements specify or constrain the
behavior of the software. Examples include performance requirements on
how fast the system must execute and how much memory it requires,
reliability requirements that set out the acceptable failure rate, security
requirements, and usability requirements.
2. Organizational requirements. These requirements are broad system
requirements derived from policies and procedures in the customer’s and
developer’s organization. Examples include operational process
requirements that define how the system will be used, development process
requirements that specify the programming language, the development
environment or process standards to be used, and environmental
requirements that specify the operating environment of the system.
3. External requirements. This broad heading covers all requirements that
are derived from factors external to the system and its development process.
These may include regulatory requirements that set out what must be done for
the system to be approved for use by a regulator, such as a central bank;
legislative requirements that must be followed to ensure that the system
operates within the law; and ethical requirements that ensure that the system
will be acceptable to its users and the general public.
Figure above shows examples of product, organizational, and external
requirements taken from the MHC-PMS. The product requirement is an
availability requirement that defines when the system has to be available and the
allowed down time each day. It says nothing about the functionality of MHC-PMS
and clearly identifies a constraint that has to be considered by the system
designers. The organizational requirement specifies how users authenticate
themselves to the system. The health authority that operates the system is moving
to a standard authentication procedure for all software where, instead of users
having a login name, they swipe their identity card through a reader to identify
themselves. The external requirement is derived from the need for the system to
conform to privacy legislation. Privacy is obviously a very important issue in
healthcare systems and the requirement specifies that the system should be
developed in accordance with a national privacy standard. A common problem
with non-functional requirements is that users or customers often propose these
requirements as general goals, such as ease of use, the ability of the system to
recover from failure, or rapid user response. Goals set out good intentions but
cause problems for system developers as they leave scope for interpretation and
subsequent dispute once the system is delivered. For example, the following
system goal is typical of how a manager might express usability requirements: The
system should be easy to use by medical staff and should be organized in such a
way that user errors are minimized. It is impossible to objectively verify this system goal, but the rewritten requirement below can be tested, for example by including software instrumentation to count the errors made by users when they are testing the system: Medical staff
shall be able to use all the system functions after four hours of training. After this
training, the average number of errors made by experienced users shall not exceed
two per hour of system use. Whenever possible, you should write non-functional
requirements quantitatively so that they can be objectively tested. Figure shows
metrics that you can use to specify non-functional system properties.
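Once the usability goal is restated quantitatively, it can be checked mechanically. The session figures below are invented; the point is only that a measurable requirement admits an objective pass/fail test.

```python
def errors_per_hour(error_count: int, hours_of_use: float) -> float:
    """Average user errors per hour of system use (illustrative metric)."""
    return error_count / hours_of_use

# Requirement: after four hours of training, experienced users shall make
# no more than two errors per hour of system use.
sessions = [(3, 2.0), (1, 1.5)]  # (errors observed, hours of use), hypothetical
total_errors = sum(e for e, _ in sessions)
total_hours = sum(h for _, h in sessions)
rate = errors_per_hour(total_errors, total_hours)
print(rate <= 2.0)  # True
```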
In practice, customers for a system often find it difficult to translate their goals into
measurable requirements. For some goals, such as maintainability, there are no
metrics that can be used. In other cases, even when quantitative specification is
possible, customers may not be able to relate their needs to these specifications.
They don’t understand what some number defining the required reliability (say)
means in terms of their everyday experience with computer systems. Furthermore,
the cost of objectively verifying measurable, non-functional requirements can be
very high and the customers paying for the system may not think these costs are
justified. Non-functional requirements often conflict and interact with other
functional or non-functional requirements. For example, the authentication
requirement in Figure obviously requires a card reader to be installed with each
computer attached to the system. However, there may be another requirement that
requests mobile access to the system from doctors’ or nurses’ laptops. These are
not normally equipped with card readers so, in these circumstances, some
alternative authentication method may have to be allowed. It is difficult to separate
functional and non-functional requirements in the requirements document. If the
non-functional requirements are stated separately from the functional requirements,
the relationships between them may be hard to understand. However, you should
explicitly highlight requirements that are clearly related to emergent system
properties, such as performance or reliability. You can do this by putting them in a
separate section of the requirements document or by distinguishing them, in some
way, from other system requirements.
Non-functional requirements such as reliability, safety, and confidentiality
requirements are particularly important for critical systems.
UNIT - III
Quality attributes and guidelines of software design.
Software Quality Guidelines and Attributes: Throughout the design process,
the quality of the evolving design is assessed with a series of technical reviews.
Three characteristics serve as a guide for the evaluation of a good design:
• The design must implement all of the explicit requirements contained in the requirements model, and it must accommodate all of the implicit requirements desired by stakeholders.
• The design must be a readable, understandable guide for those who generate code and for those who test and subsequently support the software.
• The design should provide a complete picture of the software, addressing the data, functional, and behavioral domains from an implementation perspective.
Each of these characteristics is actually a goal of the design process.
Quality Guidelines. In order to evaluate the quality of a design representation,
you and other members of the software team must establish technical criteria for
good design. These criteria draw on design concepts that also serve as software quality criteria. The guidelines are as follows:
1. A design should exhibit an architecture that (i) has been created using recognizable architectural styles or patterns, (ii) is composed of components that exhibit good design characteristics, and (iii) can be implemented in an evolutionary fashion, thereby facilitating implementation and testing.
2. A design should be modular; that is, the software should be logically
partitioned into elements or subsystems.
3. A design should contain distinct representations of data, architecture, interfaces, and components.
4. A design should lead to data structures that are appropriate for the classes to be implemented and are drawn from recognizable data patterns.
5. A design should lead to components that exhibit independent functional characteristics.
6. A design should lead to interfaces that reduce the complexity of connections between components and with the external environment.
7. A design should be derived using a repeatable method that is driven by information obtained during software requirements analysis.
8. A design should be represented using a notation that effectively communicates its meaning.
These design guidelines are not achieved by chance. They are achieved through
the application of fundamental design principles, systematic methodology, and
thorough review.
Assessing Design Quality—The Technical Review
Design is important because it allows a software team to assess the quality of the
software before it is implemented—at a time when errors, omissions, or
inconsistencies are easy and inexpensive to correct. But how do we assess quality
during design? The software can’t be tested, because there is no executable
software to test. During design, quality is assessed by conducting a series of
technical reviews (TRs). A technical review is a meeting conducted by members
of the software team. Usually two, three, or four people participate depending on
the scope of the design information to be reviewed. Each person plays a role: the
review leader plans the meeting, sets an agenda, and runs the meeting; the
recorder takes notes so that nothing is missed; the producer is the person whose
work product (e.g., the design of a software component) is being reviewed. Prior
to the meeting, each person on the review team is given a copy of the design work
product and is asked to read it, looking for errors, omissions, or ambiguity. When
the meeting commences, the intent is to note all problems with the work product
so that they can be corrected before implementation begins. The TR typically lasts
between 90 minutes and 2 hours. At the conclusion of the TR, the review team
determines whether further actions are required on the part of the producer before
the design work product can be approved as part of the final design model. The
quality factors can assist the review team as it assesses quality. Technical reviews
are a critical part of the design process and are an important mechanism for
achieving design quality.
Quality Attributes.
Hewlett-Packard developed a set of software quality attributes that has been given
the acronym FURPS—functionality, usability, reliability, performance, and
supportability.
The FURPS quality attributes represent a target for all software design:
• Functionality is assessed by evaluating the feature set and capabilities of the
program, the generality of the functions that are delivered, and the security of the
overall system.
• Usability is assessed by considering human factors, overall aesthetics, consistency, and documentation.
• Reliability is evaluated by measuring the frequency and severity of failure, the accuracy of output results, the mean-time-to-failure (MTTF), the ability to recover from failure, and the predictability of the program.
• Performance is measured by considering processing speed, response time, resource consumption, throughput, and efficiency.
• Supportability combines the ability to extend the program (extensibility), adaptability, and serviceability (these three attributes represent the more common term, maintainability), along with testability, compatibility, configurability (the ability to organize and control elements of the software configuration), the ease with which a system can be installed, and the ease with which problems can be localized.

Not every software quality attribute is weighted equally as the
software design is developed. One application may stress functionality with a
special emphasis on security. Another may demand performance with particular
emphasis on processing speed. A third might focus on reliability. Regardless of
the weighting, it is important to note that these quality attributes must be
considered as design commences, not after the design is complete and
construction has begun.
Design concepts in software engineering
A set of fundamental software design concepts has evolved over the history of
software engineering. Each provides the software designer with a foundation from
which more sophisticated design methods can be applied. Each helps you answer
the following questions:
• What criteria can be used to partition software into individual components?
• How is function or data structure detail separated from a conceptual
representation of the software?
• What uniform criteria define the technical quality of a software design?
The following is an overview of important software design concepts that span both traditional and object-oriented software development:
1) Abstraction: When you consider a modular solution to any problem, many
levels of abstraction can be posed. At the highest level of abstraction, a solution is
stated in broad terms using the language of the problem environment. At lower
levels of abstraction, a more detailed description of the solution is provided.
Problem-oriented terminology is coupled with implementation-oriented
terminology in an effort to state a solution. Finally, at the lowest level of
abstraction, the solution is stated in a manner that can be directly implemented. As
different levels of abstraction are developed, you work to create both procedural
and data abstractions. A procedural abstraction refers to a sequence of instructions
that have a specific and limited function. The name of a procedural abstraction
implies these functions, but specific details are suppressed. An example of a
procedural abstraction would be the word open for a door. Open implies a long
sequence of procedural steps (e.g., walk to the door, reach out and grasp knob,
turn knob and pull door, step away from moving door, etc.). A data abstraction is
a named collection of data that describes a data object. In the context of the
procedural abstraction open, we can define a data abstraction called door. Like
any data object, the data abstraction for door would encompass a set of attributes
that describe the door (e.g., door type, swing direction, opening mechanism,
weight, dimensions). It follows that the procedural abstraction open would make
use of information contained in the attributes of the data abstraction door.
2) Architecture: Software architecture alludes to “the overall structure of the
software and the ways in which that structure provides conceptual integrity for a
system”. In its simplest form, architecture is the structure or organization of
program components (modules), the manner in which these components interact,
and the structure of data that are used by the components. In a broader sense,
however, components can be generalized to represent major system elements and
their interactions. One goal of software design is to derive an architectural
rendering of a system. This rendering serves as a framework from which more
detailed design activities are conducted. A set of architectural patterns enables a
software engineer to solve common design problems.
The following set of properties that should be specified as part of an architectural
design:
i) Structural properties. This aspect of the architectural design representation
defines the components of a system (e.g., modules, objects, filters) and the
manner in which those components are packaged and interact with one another.
For example, objects are packaged to encapsulate both data and the processing
that manipulates the data, and interact via the invocation of methods.
ii) Extra-functional properties. The architectural design description should
address how the design architecture achieves requirements for performance,
capacity, reliability, security, adaptability, and other system characteristics.
iii) Families of related systems. The architectural design should draw upon repeatable patterns that are commonly
encountered in the design of families of similar systems. The design should have
the ability to reuse architectural building blocks. Given the specification of these
properties, the architectural design can be represented using one or more of a
number of different models.
Structural models represent architecture as an organized collection of program
components.
Framework models increase the level of design abstraction by attempting to
identify repeatable architectural design frameworks that are encountered in similar
types of applications.
Dynamic models address the behavioral aspects of the program architecture,
indicating how the structure or system configuration may change as a function of
external events.
Process models focus on the design of the business or technical process that the
system must accommodate.
Finally, functional models can be used to represent the functional hierarchy of a
system.
A number of different architectural description languages (ADLs) have been
developed to represent these models [Sha95b]. Although many different ADLs
have been proposed, the majority provide mechanisms for describing system
components and the manner in which they are connected to one another. The
manner in which software architecture is characterized and its role in design are
described as patterns.
3) Patterns: A design pattern has been defined in the following manner: "A pattern is a
named nugget of insight which conveys the essence of a proven solution to a
recurring problem within a certain context amidst competing concerns”. Stated in
another way, a design pattern describes a design structure that solves a particular
design problem within a specific context and amid “forces” that may have an
impact on the manner in which the pattern is applied and used.
The intent of each design pattern is to provide a description that enables a
designer to determine
(1) whether the pattern is applicable to the current work,
(2) whether the pattern can be reused (hence, saving design time), and
(3) whether the pattern can serve as a guide for developing a similar, but functionally or structurally different pattern.
4) Separation of concerns is a design concept that suggests that any complex
problem can be more easily handled if it is subdivided into pieces that can each be
solved and/or optimized independently. A concern is a feature or behavior that is
specified as part of the requirements model for the software. By separating
concerns into smaller, and therefore more manageable, pieces, a problem takes less effort and time to solve. For two problems p1 and p2, if the perceived complexity of p1 is greater than the perceived complexity of p2, C(p1) > C(p2), it follows that the effort required to solve p1 is greater than the effort required to solve p2, E(p1) > E(p2). As a general case, this result is intuitively obvious: it does take more time to solve a difficult problem. It also follows that the perceived complexity of two problems when they are combined is often greater than the sum of the perceived complexity when each is taken separately, C(p1 + p2) > C(p1) + C(p2). This leads to a divide-and-conquer strategy: it is easier to solve a complex problem when you break it into manageable pieces. This has
important implications with regard to software modularity. Separation of concerns
is manifested in other related design concepts: modularity, aspects, functional
independence, and refinement.
5) Modularity is the most common manifestation of separation of concerns.
Software is divided into separately named and addressable components, sometimes
called modules, that are integrated to satisfy problem requirements. It has been
stated that “modularity is the single attribute of software that allows a program to
be intellectually manageable”. Monolithic software (i.e., a large program
composed of a single module) cannot be easily grasped by a software engineer.
The number of control paths, span of reference, number of variables, and overall
complexity would make understanding close to impossible. In almost all instances,
you should break the design into many modules, hoping to make understanding
easier and, as a consequence, reduce the cost required to build the software.
UNIT-IV
A Strategic Approach to Software Testing
• Software testing is one of the important phases of software development.
• Testing is the process of executing a program with the intention of finding errors.
• Testing can involve 40% of total project cost.
Testing Strategy
• A road map that incorporates test planning, test case design, test execution, and resultant data collection and evaluation.
• Verification refers to the set of activities that ensure that software correctly implements a specific function.
• Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.
• V&V encompasses a wide array of Software Quality Assurance activities.
• Perform formal technical reviews (FTR) to uncover errors during software development.
• Begin testing at the component level and move outward to the integration of the entire component-based system.
• Adopt testing techniques relevant to each stage of testing.
• Testing can be done by the software developer and by an independent testing group.
• Testing and debugging are different activities; debugging follows testing.
• Low-level tests verify small code segments.
• High-level tests validate major system functions against customer requirements.
Testing Strategies for Conventional Software
1) Unit Testing
2) Integration Testing
3) Validation Testing
4) System Testing
Criteria for completion of software testing
• Nobody is absolutely certain that software will not fail.
• Completion criteria are based on statistical modeling and software reliability models.
• Example: with 95 percent confidence, the probability of 1000 CPU hours of failure-free operation is at least 0.995.
Software Testing
• Two major categories of software testing:
1) Black box testing
2) White box testing
Black Box Testing
• Treats the system as a black box whose behavior can be determined by studying its inputs and related outputs.
• Not concerned with the internal structure of the program.
• It focuses on the functional requirements of the software, i.e., it enables the software engineer to derive sets of input conditions that fully exercise all the functional requirements for that program.
• Concerned with functionality, not implementation.
Black-box testing methods:
1) Graph-based testing
2) Equivalence partitioning
Graph-Based Testing
• Draw a graph of objects and relations.
• Devise test cases to cover the graph such that each object and its relationships are exercised.
Equivalence Partitioning
• Divides all possible inputs into classes such that there is a finite number of equivalence classes.
• Equivalence class: a set of inputs that can be linked by a relationship.
• Reduces the cost of testing.
Example:
• Input consists of values 1 to 10.
• Then the classes are n < 1, 1 <= n <= 10, and n > 10.
• Choose one valid class with a value within the allowed range and two invalid classes with values greater than the maximum and smaller than the minimum.
Boundary Value Analysis
• Select inputs from the equivalence classes such that the input lies at the edge of an equivalence class.
• The test data lie on the edge or boundary of a class of input data, or generate data that lie at the boundary of a class of output data.
Example:
• If 0.0 <= x <= 1.0,
• then the test cases are (0.0, 1.0) for valid input and (-0.1, 1.1) for invalid input.
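The boundary value example above can be expressed directly as checks against a validator (a minimal sketch; the function name is hypothetical):

```python
# Hypothetical validator for the requirement 0.0 <= x <= 1.0.
def is_valid(x: float) -> bool:
    """Return True when x lies inside the allowed range [0.0, 1.0]."""
    return 0.0 <= x <= 1.0

# Boundary value analysis: values on the edges are valid,
# values just outside the edges are invalid.
valid_cases = [0.0, 1.0]
invalid_cases = [-0.1, 1.1]

for v in valid_cases:
    assert is_valid(v), f"{v} should be accepted"
for v in invalid_cases:
    assert not is_valid(v), f"{v} should be rejected"
```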
Orthogonal Array Testing
• Applied to problems in which the input domain is relatively small but too large for exhaustive testing.
Example:
• Three inputs A, B, C, each having three values, would require 27 test cases.
• L9 orthogonal array testing reduces the number of test cases to 9, as shown below:
A B C
1 1 1
1 2 2
1 3 3
2 1 2
2 2 3
2 3 1
3 1 3
3 2 1
3 3 2
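One standard construction of an L9 array for three 3-level factors uses modular arithmetic: choose columns A and B freely and derive C from them. A sketch (level numbering 1..3 is just a convention):

```python
from itertools import product

# Derive C = (A + B) mod 3 from the free columns A and B.
# This yields 9 runs instead of the 3**3 = 27 of exhaustive testing.
rows = [(a + 1, b + 1, (a + b) % 3 + 1) for a, b in product(range(3), repeat=2)]

assert len(rows) == 9
# Orthogonality check: every pair of levels appears exactly once
# in every pair of columns.
for i, j in [(0, 1), (0, 2), (1, 2)]:
    assert len({(r[i], r[j]) for r in rows}) == 9
```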
White Box Testing
• Also called glass box testing.
• Involves knowing the internal working of a program.
• Guarantees that all independent paths will be exercised at least once.
• Exercises all logical decisions on their true and false sides.
• Executes all loops.
• Exercises all data structures for their validity.
White-box testing techniques:
1. Basis path testing
2. Control structure testing
Basis Path Testing
• Proposed by Tom McCabe.
• Defines a basis set of execution paths based on the logical complexity of a procedural design.
• Guarantees that every statement in the program is executed at least once.
Steps of basis path testing:
• Draw the flow graph from the flow chart of the program.
• Calculate the cyclomatic complexity of the resultant flow graph.
• Prepare test cases that will force execution of each path.
Three methods to compute the cyclomatic complexity number:
• V(G) = E - N + 2 (E is the number of edges, N is the number of nodes)
• V(G) = number of regions
• V(G) = number of predicates + 1
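The first formula, V(G) = E - N + 2, can be sketched in Python for a hypothetical flow graph of a single if/else construct (the graph and node numbering are made up for illustration):

```python
# V(G) = E - N + 2, computed from a flow graph represented as an
# adjacency list (node -> list of successor nodes).
def cyclomatic_complexity(graph):
    nodes = len(graph)
    edges = sum(len(successors) for successors in graph.values())
    return edges - nodes + 2

flow_graph = {
    1: [2],      # entry
    2: [3, 4],   # predicate node (if/else decision)
    3: [5],      # then-branch
    4: [5],      # else-branch
    5: [6],      # join
    6: [],       # exit
}
# One predicate, so V(G) should equal 1 + 1 = 2 by the third method too.
assert cyclomatic_complexity(flow_graph) == 2
```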
Control Structure Testing
• Basis path testing is simple and effective, but it is not sufficient in itself.
• Control structure testing broadens the basic test coverage and improves the quality of white-box testing:
• Condition testing
• Data flow testing
• Loop testing
Condition Testing
-- Exercises the logical conditions contained in a program module.
-- Focuses on testing each condition in the program to ensure that it does not contain errors.
-- Simple condition: E1 <relational operator> E2
-- Compound condition: simple condition <Boolean operator> simple condition
Data Flow Testing
• Selects test paths according to the locations of definitions and uses of variables in a program.
• Aims to ensure that the definition of a variable and its subsequent use are tested.
• First construct a definition-use graph from the control flow of the program.
Loop Testing
• Focuses on the validity of loop constructs.
• Four categories can be defined:
1. Simple loops
2. Nested loops
3. Concatenated loops
4. Unstructured loops
Testing of simple loops (N is the maximum number of allowable passes through the loop):
• Skip the loop entirely.
• Only one pass through the loop.
• Two passes through the loop.
• m passes through the loop, where m < N.
• N-1, N, and N+1 passes through the loop.
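The simple-loop cases above can be generated as a test plan of pass counts. A sketch (the choice of m, here roughly N/2, is an assumption; any m < N works):

```python
def simple_loop_test_counts(N, m=None):
    """Return the number-of-passes values to exercise a simple loop
    with at most N allowable passes."""
    if m is None:
        m = N // 2 or 1            # some m with m < N (assumed choice)
    counts = [0, 1, 2, m, N - 1, N, N + 1]
    # Drop duplicates while keeping order (matters when N is small).
    seen, plan = set(), []
    for c in counts:
        if c not in seen:
            seen.add(c)
            plan.append(c)
    return plan

assert simple_loop_test_counts(10) == [0, 1, 2, 5, 9, 10, 11]
```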
Nested Loops
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values.
3. Work outward, conducting tests for the next loop, while keeping all other loops at minimum values.
Concatenated Loops
• Follow the approach defined for simple loops if each of the loops is independent of the others.
• If the loops are not independent, follow the approach for nested loops.
Unstructured Loops
• Redesign the program to avoid unstructured loops
Validation Testing
• It succeeds when the software functions in a manner that can be reasonably expected by the
customer.
1) Validation Test Criteria
2) Configuration Review
3) Alpha and Beta Testing
System Testing
• Its primary purpose is to test the complete software.
1) Recovery Testing
2) Security Testing
3) Stress Testing
4) Performance Testing
The Art of Debugging
• Debugging occurs as a consequence of successful testing.
• Debugging strategies:
1) Brute Force Method
2) Backtracking
3) Cause Elimination
4) Automated Debugging
• Brute force
-- Most common and least efficient method.
-- Applied when all else fails.
-- Memory dumps are taken.
-- Tries to find the cause from this load of information.
• Backtracking
-- A common debugging approach.
-- Useful for small programs.
-- Beginning at the site where the symptom has been uncovered, the source code is traced backward until the location of the cause is found.
• Cause elimination
-- Based on the concept of binary partitioning.
-- A list of all possible causes is developed, and tests are conducted to eliminate each.
Software Quality
• Conformance to explicitly stated functional and performance requirements, explicitly documented
development standards, and implicit characteristics that are expected of all professionally developed software.
• Factors that affect software quality can be categorized in two broad groups:
1. Factors that can be directly measured (e.g. defects uncovered during testing)
2. Factors that can be measured only indirectly (e.g. usability or maintainability)
• McCall’s quality factors
1. Product operation
a. Correctness
b. Reliability
c. Efficiency
d. Integrity
e. Usability
2. Product Revision
a. Maintainability
b. Flexibility
c. Testability
3. Product Transition
a. Portability
b. Reusability
c. Interoperability
ISO 9126 Quality Factors
1. Functionality
2. Reliability
3. Usability
4. Efficiency
5. Maintainability
6. Portability
Product Metrics
• Product metrics for computer software help us to assess quality.
• Measure: provides a quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of a product or process.
• Metric (IEEE 93 definition): a quantitative measure of the degree to which a system, component, or process possesses a given attribute.
• Indicator: a metric or a combination of metrics that provides insight into the software process, a software project, or a product itself.
Product Metrics for Analysis, Design, Test, and Maintenance
Product metrics for the analysis model:
Function Point Metric
• First proposed by Albrecht.
• Measures the functionality delivered by the system.
• FP is computed from the following parameters:
1) Number of external inputs (EIs)
2) Number of external outputs (EOs)
3) Number of external inquiries (EQs)
4) Number of internal logical files (ILFs)
5) Number of external interface files (EIFs)
Each parameter is classified as simple, average, or complex, and weights are assigned as follows:
Information Domain Count   Simple   Average   Complex
EIs                        3        4         6
EOs                        4        5         7
EQs                        3        4         6
ILFs                       7        10        15
EIFs                       5        7         10
FP = count total * [0.65 + 0.01 * Σ(Fi)]
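The FP computation can be sketched directly from the weight table. The counts and the value-adjustment answers (Fi) below are made up for illustration:

```python
# Weights per information-domain parameter: (simple, average, complex).
WEIGHTS = {
    "EI":  (3, 4, 6),
    "EO":  (4, 5, 7),
    "EQ":  (3, 4, 6),
    "ILF": (7, 10, 15),
    "EIF": (5, 7, 10),
}

def function_points(counts, fi):
    """counts: {param: (n_simple, n_avg, n_complex)}; fi: the 14 value
    adjustment answers, each rated 0..5."""
    count_total = sum(
        n * w for p, ns in counts.items() for n, w in zip(ns, WEIGHTS[p])
    )
    return count_total * (0.65 + 0.01 * sum(fi))

# Hypothetical project: every parameter rated 'average'.
counts = {"EI": (0, 3, 0), "EO": (0, 2, 0), "EQ": (0, 2, 0),
          "ILF": (0, 1, 0), "EIF": (0, 1, 0)}
fi = [3] * 14                   # sum(Fi) = 42
fp = function_points(counts, fi)
# count total = 12 + 10 + 8 + 10 + 7 = 47; FP = 47 * 1.07 = 50.29
```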
Metrics for Design Model
DSQI (Design Structure Quality Index)
• The US Air Force has designed the DSQI.
• Compute S1 to S7 from data and architectural design:
• S1: total number of modules
• S2: number of modules whose correct function depends on the data input
• S3: number of modules whose correct function depends on prior processing
• S4: number of database items
• S5: number of unique database items
• S6: number of database segments
• S7: number of modules with single entry and single exit
• Calculate D1 to D6 from S1 to S7 as follows:
• D1 = 1 if a standard design method is followed, otherwise D1 = 0
• D2 (module independence) = 1 - (S2/S1)
• D3 (modules not dependent on prior processing) = 1 - (S3/S1)
• D4 (database size) = 1 - (S5/S4)
• D5 (database compartmentalization) = 1 - (S6/S4)
• D6 (module entry/exit characteristics) = 1 - (S7/S1)
• DSQI = Σ wi Di, for i = 1 to 6, where wi is the weight assigned to Di.
• If all Di are weighted equally, then each wi = 0.167.
• The DSQI of the present design can be compared with past DSQI values. If the DSQI is significantly lower than the average, further design work and review are indicated.
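The DSQI computation can be sketched with hypothetical counts S1 to S7 and equal weights:

```python
def dsqi(s1, s2, s3, s4, s5, s6, s7, standard_design=True):
    """Design Structure Quality Index with equal weights wi = 1/6."""
    d = [
        1.0 if standard_design else 0.0,  # D1: standard design method used
        1 - s2 / s1,                      # D2: module independence
        1 - s3 / s1,                      # D3: independence of prior processing
        1 - s5 / s4,                      # D4: database size
        1 - s6 / s4,                      # D5: database compartmentalization
        1 - s7 / s1,                      # D6: entry/exit characteristics
    ]
    w = [1 / 6] * 6                       # equal weighting (~0.167 each)
    return sum(wi * di for wi, di in zip(w, d))

# Hypothetical design: 50 modules, 40 database items, etc.
value = dsqi(s1=50, s2=10, s3=5, s4=40, s5=30, s6=6, s7=45)
# D = [1, 0.8, 0.9, 0.25, 0.85, 0.1] -> DSQI = 3.9 / 6 = 0.65
```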
METRIC FOR SOURCE CODE
• HSS (Halstead Software Science)
• Primitive measures that may be derived after the code is generated, or estimated once design is complete:
• n1 = the number of distinct operators that appear in a program
• n2 = the number of distinct operands that appear in a program
• N1 = the total number of operator occurrences
• N2 = the total number of operand occurrences
• The overall program length N can be computed:
• N = n1 log2 n1 + n2 log2 n2
• The program volume V can be computed:
• V = N log2 (n1 + n2)
METRIC FOR TESTING
• Halstead's measures n1, n2, N1, and N2 (defined above) are also used to derive testing metrics.
• Program level and effort:
• PL = 1/[(n1/2) x (N2/n2)]
• e = V/PL
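The Halstead formulas above translate directly into code. The counts here are assumed values for illustration:

```python
import math

# Assumed primitive measures for a hypothetical program.
n1, n2 = 10, 20      # distinct operators / distinct operands
N1, N2 = 100, 150    # total operator / operand occurrences

N = n1 * math.log2(n1) + n2 * math.log2(n2)   # estimated program length
V = N * math.log2(n1 + n2)                    # program volume
PL = 1 / ((n1 / 2) * (N2 / n2))               # program level
e = V / PL                                    # effort
```

Note that PL is always at most 1, so the effort e is at least as large as the volume V.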
METRICS FOR MAINTENANCE
• Mt = the number of modules in the current release
• Fc = the number of modules in the current release that have been changed
• Fa = the number of modules in the current release that have been added
• Fd = the number of modules from the preceding release that were deleted in the current release
• The Software Maturity Index, SMI, is defined as:
• SMI = [Mt - (Fa + Fc + Fd)] / Mt
METRICS FOR PROCESS AND PROJECTS
1) SOFTWARE MEASUREMENT
Software measurement can be categorized in two ways. (1) Direct measures of the software engineering process include cost and effort applied. Direct
measures of the product include lines of code (LOC) produced, execution speed, memory size, and
defects reported over some set period of time.
(2) Indirect measures of the product include functionality, quality, complexity, efficiency, reliability,
maintainability, and many other "–abilities"
1.1 Size-Oriented Metrics
Size-oriented software metrics are derived by normalizing quality and/or productivity measures by
considering the size of the software that has been produced. To develop metrics that can be assimilated with similar metrics from other projects, we choose lines of code as our normalization value. From the rudimentary data contained in the table, a set of simple size-
oriented metrics can be developed for each project:
Errors per KLOC (thousand lines of code).
Defects per KLOC.
$ per LOC.
Pages of documentation per KLOC.
In addition, other interesting metrics can be computed:
Errors per person-month.
LOC per person-month.
$ per page of documentation.
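These size-oriented metrics can be derived from a single project record. A sketch with illustrative project data (all numbers assumed):

```python
# Hypothetical project record.
project = {
    "loc": 12_100,        # lines of code produced
    "effort_pm": 24,      # effort in person-months
    "cost": 168_000,      # project cost in dollars
    "doc_pages": 365,     # pages of documentation
    "errors": 134,        # errors found before delivery
    "defects": 29,        # defects found after delivery
}

kloc = project["loc"] / 1000
metrics = {
    "errors_per_kloc": project["errors"] / kloc,
    "defects_per_kloc": project["defects"] / kloc,
    "cost_per_loc": project["cost"] / project["loc"],
    "doc_pages_per_kloc": project["doc_pages"] / kloc,
    "loc_per_person_month": project["loc"] / project["effort_pm"],
}
```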
1.2 Function-Oriented Metrics
Function-oriented software metrics use a measure of the functionality delivered by the application
as a normalization value. Since ‘functionality’ cannot be measured directly, it must be derived indirectly
using other direct measures. Function-oriented metrics were first proposed by Albrecht, who suggested a
measure called the function point. Function points are derived using an empirical relationship based on
countable (direct) measures of software's information domain and assessments of software complexity.
Proponents claim that FP is programming language independent, making it ideal for applications
using conventional and nonprocedural languages, and that it is based on data that are more likely
to be known early in the evolution of a project, making FP more attractive as an estimation
approach. Opponents claim that the method requires some "sleight of hand" in that computation is
based on subjective rather than objective data, that counts of the information domain can be difficult
to collect after the fact, and that FP has no direct physical meaning; it's just a number.
Typical function-oriented metrics:
errors per FP
defects per FP
$ per FP
pages of documentation per FP
FP per person-month
1.3) Reconciling Different Metrics Approaches
The relationship between lines of code and function points depends upon the programming
language that is used to implement the software and the quality of the design.
Function points and LOC based metrics have been found to be relatively accurate predictors of software
development effort and cost.
1.4) Object Oriented Metrics:
Conventional software project metrics (LOC or FP) can be used to estimate object oriented
software projects. Lorenz and Kidd suggest the following set of metrics for OO projects:
Number of scenario scripts: A scenario script is a detailed sequence of steps that describes the interaction between the user and the application.
Number of key classes: Key classes are the "highly independent components" that are defined
early in object-oriented analysis.
Number of support classes: Support classes are required to implement the system but are not immediately related to the problem domain.
Average number of support classes per key class: If the average number of support classes per
key class were known for a given problem domain, estimation would be much simplified. Lorenz and Kidd suggest that applications with a GUI have between two and three times the number of
support classes as key classes.
Number of subsystems: A subsystem is an aggregation of classes that support a function that is
visible to the end user of a system. Once subsystems are identified, it is easier to lay out a
reasonable schedule in which work on subsystems is partitioned among project staff.
1.5) Use-Case Oriented Metrics
Use-cases describe user-visible functions and features that are basic requirements for a system. The number of use-cases is directly proportional to the size of the application in LOC and to the number of test cases that will have to be designed to fully exercise the application.
Because use-cases can be created at vastly different levels of abstraction, there is no standard size
for a use-case. Without a standard measure of what a use-case is, its application as a normalization measure
is suspect.
1.6) Web Engineering Project Metrics
The objective of all web engineering projects is to build a Web application that delivers a combination of
content and functionality to the end-user.
Number of static Web pages: These pages represent low relative complexity and generally require
less effort to construct than dynamic pages. This measure provides an indication of the overall
size of the application and the effort required to develop it.
Number of dynamic Web pages: Web pages with dynamic content are essential in all e-commerce
applications, search engines, financial applications, and many other Web App categories. These
pages represent higher relative complexity and require more effort to construct than static pages.
This measure provides an indication of the overall size of the application and the effort required to develop it.
Number of internal page links: Internal page links are pointers that provide an indication of the
degree of architectural coupling within the Web App.
Number of persistent data objects: As the number of persistent data objects grows, the complexity
of the Web App also grows, and effort to implement it increases proportionally.
Number of external systems interfaced: As the requirement for interfacing grows, system
complexity and development effort also increase.
Number of static content objects: Static content objects encompass static text- based, graphical,
video, animation, and audio information that are incorporated within the Web App.
Number of dynamic content objects: Dynamic content objects are generated based on end-user
actions and encompass internally generated text-based, graphical, video, animation, and audio
information that are incorporated within the Web App.
Number of executable functions: An executable function provides some computational service to
the end-user. As the number of executable functions increases, modeling and construction effort also increase.
2) METRICS FOR SOFTWARE QUALITY
The overriding goal of software engineering is to produce a high-quality system, application, or
product within a timeframe that satisfies a market need. To achieve this goal, software engineers must apply effective methods coupled with modern tools within the context of a mature software process.
2.1 Measuring Quality
The measures of software quality are correctness, maintainability, integrity, and usability. These measures
will provide useful indicators for the project team.
Correctness. Correctness is the degree to which the software performs its required function. The most common measure for correctness is defects per KLOC, where a defect is defined as a verified
lack of conformance to requirements.
Maintainability. Maintainability is the ease with which a program can be corrected if an error is
encountered, adapted if its environment changes, or enhanced if the customer desires a change in
requirements. A simple time-oriented metric is mean-time-to-change (MTTC), the time it takes to
analyze the change request, design an appropriate modification, implement the change, test it, and
distribute the change to all users.
Integrity. Attacks can be made on all three components of software: programs, data, and
documents.
To measure integrity, two additional attributes must be defined: threat and security. Threat is the
probability (which can be estimated or derived from empirical evidence) that an attack of a
specific type will occur within a given time. Security is the probability (which can be estimated or
derived from empirical evidence) that the attack of a specific type will be repelled. The integrity of
a system can then be defined as
integrity = Σ [1 - (threat × (1 - security))]
For example, if threat = 0.25 and security = 0.95, the integrity of a component is 1 - (0.25 × 0.05) = 0.9875.
Usability: Usability is an attempt to quantify user-friendliness and can be measured in terms of four characteristics.
2.2 Defect Removal Efficiency
A quality metric that provides benefit at both the project and process level is defect removal
efficiency (DRE). In essence, DRE is a measure of the filtering ability of quality assurance and control
activities as they are applied throughout all process framework activities.
When considered for a project as a whole, DRE is defined in the following manner:
DRE = E/(E + D)
where E is the number of errors found before delivery of the software to the end user and D is the number of defects found after delivery.
Errors that are not found during the review of the analysis model are passed on to the design task (where they may or may not be found). When used in this context, we redefine DRE as
DREi = Ei/(Ei + Ei+1)
where Ei is the number of errors found during software engineering activity i and Ei+1 is the number of errors found during software engineering activity i+1 that are traceable to errors that were not discovered in software engineering activity i.
A quality objective for a software team (or an individual software engineer) is to achieve DRE
that approaches 1. That is, errors should be filtered out before they are passed on to the next activity.
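Both forms of DRE are a single ratio. A sketch with hypothetical error counts:

```python
def dre(errors_before, defects_after):
    """Overall DRE = E / (E + D)."""
    return errors_before / (errors_before + defects_after)

def dre_i(Ei, Ei_plus_1):
    """Per-activity DRE_i = Ei / (Ei + Ei+1), where Ei+1 counts errors
    found in the next activity that are traceable to activity i."""
    return Ei / (Ei + Ei_plus_1)

# Hypothetical project: 134 errors found before delivery, 29 defects after.
overall = dre(134, 29)
# Hypothetical analysis activity: 50 errors caught during analysis,
# 25 analysis errors leaked and were found during design.
analysis = dre_i(50, 25)
```

A DRE close to 1 means almost all errors were filtered out before the software (or the work product of an activity) moved on.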
UNIT-V
RISK MANAGEMENT
1) REACTIVE VS. PROACTIVE RISK STRATEGIES
At best, a reactive strategy monitors the project for likely risks. Resources are set aside to deal with them, should they become actual problems. More commonly, the software team does nothing about risks until something goes wrong. Then, the team flies into action in an attempt to correct the problem
rapidly. This is often called a fire fighting mode.
• The project team reacts to risks when they occur.
• Mitigation: plan for additional resources in anticipation of fire fighting.
• Fix on failure: resources are found and applied when the risk strikes.
• Crisis management: failure does not respond to applied resources and the project is in jeopardy.
A proactive strategy begins long before technical work is initiated. Potential risks are identified, their
probability and impact are assessed, and they are ranked by importance. Then, the software team
establishes a plan for managing risk.
• A formal risk analysis is performed.
• The organization corrects the root causes of risk by:
o examining risk sources that lie beyond the bounds of the software
o developing the skill to manage change
Risk Management Paradigm
SOFTWARE RISK
Risk always involves two characteristics:
• Uncertainty: the risk may or may not happen; that is, there are no 100% probable risks.
• Loss: if the risk becomes a reality, unwanted consequences or losses will occur.
When risks are analyzed, it is important to quantify the level of uncertainty and the degree of loss associated with each risk. To accomplish this, different categories of risks are considered.
Project risks threaten the project plan. That is, if project risks become real, it is likely that project schedule
will slip and that costs will increase.
Technical risks threaten the quality and timeliness of the software to be produced. If a technical risk
becomes a reality, implementation may become difficult or impossible. Technical risks identify potential
design, implementation, interface, verification, and maintenance problems.
Business risks threaten the viability of the software to be built. Business risks often jeopardize the project
or the product. Candidates for the top five business risks are
(1) Building an excellent product or system that no one really wants (market risk),
(2) Building a product that no longer fits into the overall business strategy for the company (strategic risk),
(3) Building a product that the sales force doesn't understand how to sell (sales risk),
(4) Losing the support of senior management due to a change in focus or a change in people (management risk), and
(5) Losing budgetary or personnel commitment (budget risks).
Known risks are those that can be uncovered after careful evaluation of the project plan, the business and technical environment in which the project is being developed, and other reliable information sources.
Predictable risks are extrapolated from past project experience.
Unpredictable risks are the joker in the deck. They can and do occur, but they are extremely
difficult to identify in advance.
2) RISK IDENTIFICATION
Risk identification is a systematic attempt to specify threats to the project plan. There are two distinct types of risks.
1) Generic risks and
2) product-specific risks.
Generic risks are a potential threat to every software project. Product-specific risks can be identified only by those with a clear understanding of the technology, the
people, and the environment that is specific to the project that is to be built.
Known and predictable risks fall into the following generic subcategories:
Product size—risks associated with the overall size of the software to be built or modified.
Business impact—risks associated with constraints imposed by management or the marketplace.
Customer characteristics—risks associated with the sophistication of the customer and the developer's ability to communicate with the customer in a timely manner.
Process definition—risks associated with the degree to which the software process has been
defined and is followed by the development organization.
Development environment—risks associated with the availability and quality of the tools to be
used to build the product.
Technology to be built—risks associated with the complexity of the system to be built and the
"newness" of the technology that is packaged by the system.
Staff size and experience—risks associated with the overall technical and project experience of the software engineers who will do the work.
Assessing Overall Project Risk
The questions are ordered by their relative importance to the success of a project.
1. Have top software and customer managers formally committed to support the project?
2. Are end-users enthusiastically committed to the project and the system/product to be built?
3. Are requirements fully understood by the software engineering team and their customers?
4. Have customers been involved fully in the definition of requirements?
5. Do end-users have realistic expectations?
6. Is project scope stable?
7. Does the software engineering team have the right mix of skills?
8. Are project requirements stable?
9. Does the project team have experience with the technology to be implemented?
10. Is the number of people on the project team adequate to do the job?
11. Do all customer/user constituencies agree on the importance of the project and on the requirements for
the system/product to be built?
3.2 Risk Components and Drivers
The risk components are defined in the following manner:
• Performance risk—the degree of uncertainty that the product will meet its requirements and be fit for its
intended use.
• Cost risk—the degree of uncertainty that the project budget will be maintained.
• Support risk—the degree of uncertainty that the resultant software will be easy to correct, adapt, and
enhance.
• Schedule risk—the degree of uncertainty that the project schedule will be maintained and that the product
will be delivered on time.
The impact of each risk driver on the risk component is divided into one of four impact categories—
negligible, marginal, critical, or catastrophic.
3) RISK PROJECTION
Risk projection, also called risk estimation, attempts to rate each risk in two ways—the likelihood or probability that the risk is real and the consequences of the problems associated with the risk, should it occur.
The project planner, along with other managers and technical staff, performs four risk projection activities:
(1) establish a scale that reflects the perceived likelihood of a risk,
(2) delineate the consequences of the risk,
(3) estimate the impact of the risk on the project and the product, and
(4) note the overall accuracy of the risk projection so that there will be no misunderstandings.
4.1 Developing a Risk Table
Building a Risk Table
A project team begins by listing all risks (no matter how remote) in the first column of the table.
Each risk is categorized, and the probability of occurrence of each risk is entered in the next column. Next, the impact of each risk is assessed.
The categories for each of the four risk components—performance, support, cost, and schedule—
are averaged to determine an overall impact value.
High-probability, high-impact risks percolate to the top of the table, and low-probability risks drop
to the bottom. This accomplishes first-order risk prioritization.
The project manager studies the resultant sorted table and defines a cutoff line. The cutoff line (drawn horizontally at some point in the table) implies that only risks that lie above the line will be given further attention. Risks that fall below the line are re-evaluated to accomplish second-order prioritization.
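The first-order prioritization described above can be sketched in code. This is an illustrative sketch, not part of the notes: the risk names, probabilities, and impact scores are invented, and impact is assumed on a 1–4 scale where 4 is most severe.

```python
# Illustrative sketch: first-order risk prioritization for a risk table.
# Each risk has a probability (0..1) and an impact score per component
# (performance, support, cost, schedule); higher impact = more severe.

def prioritize(risks, cutoff):
    """Sort risks by probability x averaged impact (descending) and
    keep only those above the cutoff line for further attention."""
    def score(risk):
        avg_impact = sum(risk["impact"].values()) / len(risk["impact"])
        return risk["probability"] * avg_impact
    ranked = sorted(risks, key=score, reverse=True)
    return ranked[:cutoff]

risks = [
    {"name": "staff turnover", "probability": 0.60,
     "impact": {"performance": 2, "support": 2, "cost": 3, "schedule": 4}},
    {"name": "size underestimate", "probability": 0.30,
     "impact": {"performance": 3, "support": 2, "cost": 3, "schedule": 3}},
    {"name": "tooling gaps", "probability": 0.10,
     "impact": {"performance": 1, "support": 1, "cost": 2, "schedule": 2}},
]

top = prioritize(risks, cutoff=2)
print([r["name"] for r in top])  # ['staff turnover', 'size underestimate']
```

High-probability, high-impact risks percolate to the top, exactly as the table construction above describes; the cutoff plays the role of the horizontal line drawn by the project manager.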
4.2 Assessing Risk Impact
Three factors affect the consequences that are likely if a risk does occur: its nature, its scope, and its timing.
The nature of the risk indicates the problems that are likely if it occurs. The scope of a risk combines the severity (just how serious is it?) with its overall distribution.
Finally, the timing of a risk considers when and for how long the impact will be felt.
The overall risk exposure, RE, is determined using the following relationship:
RE = P x C
where P is the probability of occurrence for a risk, and C is the cost to the project should the risk occur.
Consider the following example.
Risk identification. Only 70 percent of the software components scheduled for reuse will, in fact, be
integrated into the application. The remaining functionality will have to be custom developed.
Risk probability. 80% (likely).
Risk impact. 60 reusable software components were planned. If only 70 percent can be used, 18 components would have to be developed from scratch, at an estimated cost of $25,200.
Risk exposure. RE = 0.80 x 25,200 ~ $20,200.
The total risk exposure for all risks (above the cutoff in the risk table) can provide a means for adjusting the
final cost estimate for a project etc.
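The RE = P x C relationship above is a one-line computation; the sketch below mirrors the worked example in the notes (80% probability, a $25,200 cost should the risk occur).

```python
# Risk exposure as defined above: RE = P x C.

def risk_exposure(probability, cost):
    """probability: likelihood the risk occurs (0..1); cost: cost if it does."""
    return probability * cost

re = risk_exposure(0.80, 25_200)
print(round(re))  # 20160, which the notes round to about $20,200
```

Summing RE over all risks above the cutoff line gives the adjustment to the project cost estimate mentioned above.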
4) RISK REFINEMENT
One way to refine a risk is to represent it in condition-transition-consequence (CTC) format. A general condition (here, a risk involving planned reusable software components) can be refined in the following manner:
Sub condition 1. Certain reusable components were developed by a third party with no knowledge of
internal design standards.
Sub condition 2. The design standard for component interfaces has not been solidified and may not
conform to certain existing reusable components.
Sub condition 3. Certain reusable components have been implemented in a language that is not supported
on the target environment.
5) RISK MITIGATION, MONITORING, AND MANAGEMENT
An effective strategy must consider three issues:
Risk avoidance
Risk monitoring
Risk management and contingency planning
If a software team adopts a proactive approach to risk, avoidance is always the best strategy. Assume, for example, that high staff turnover is noted as a project risk. To mitigate this risk, project management must develop a strategy for reducing turnover. Among the possible steps to be taken are:
Meet with current staff to determine causes for turnover (e.g., poor working conditions, low pay,
competitive job market).
Mitigate those causes that are under our control before the project starts.
Once the project commences, assume turnover will occur and develop techniques to ensure continuity when people leave.
Organize project teams so that information about each development activity is widely dispersed.
Define documentation standards and establish mechanisms to be sure that documents are
developed in a timely manner.
Conduct peer reviews of all work (so that more than one person is "up to speed").
Assign a backup staff member for every critical technologist.
As the project proceeds, risk monitoring activities commence. The following factors can be monitored:
General attitude of team members based on project pressures.
The degree to which the team has jelled.
Interpersonal relationships among team members.
Potential problems with compensation and benefits
The availability of jobs within the company and outside it.
Software safety and hazard analysis are software quality assurance activities that focus on the
identification and assessment of potential hazards that may affect software negatively and cause an entire
system to fail. If hazards can be identified early in the software engineering process, software design
features can be specified that will either eliminate or control potential hazards.
6) THE RMMM PLAN
A risk management strategy can be included in the software project plan or the risk management steps can be organized into a separate Risk Mitigation, Monitoring and Management Plan.
The RMMM plan documents all work performed as part of risk analysis and is used by the project
manager as part of the overall project plan.
Risk monitoring is a project tracking activity with three primary objectives:
1) to assess whether predicted risks do, in fact, occur;
2) to ensure that risk aversion steps defined for the risk are being properly applied; and
3) to collect information that can be used for future risk analysis.
QUALITY MANAGEMENT
1) QUALITY CONCEPTS:
Quality management encompasses
(1) a quality management approach,
(2) effective software engineering technology (methods and tools),
(3) formal technical reviews that are applied throughout the software process,
(4) a multitiered testing strategy,
(5) control of software documentation and the changes made to it,
(6) a procedure to ensure compliance with software development standards (when applicable), and
(7) measurement and reporting mechanisms.
Variation control is the heart of quality control.
1.1 Quality
The American Heritage Dictionary defines quality as “a characteristic or attribute of something.”
Quality of design refers to the characteristics that designers specify for an item.
Quality of conformance is the degree to which the design specifications are followed during manufacturing.
In software development, quality of design encompasses requirements, specifications, and the design of the
system. Quality of conformance is an issue focused primarily on implementation. If the implementation
follows the design and the resulting system meets its requirements and performance goals, conformance
quality is high.
Robert Glass argues that a more “intuitive” relationship is in order:
User satisfaction = compliant product + good quality + delivery within budget and schedule
1.2 Quality Control
Quality control involves the series of inspections, reviews, and tests used throughout the software process
to ensure each work product meets the requirements placed upon it.
A key concept of quality control is that all work products have defined, measurable specifications to which
we may compare the output of each process. The feedback loop is essential to minimize the defects
produced.
1.3 Quality Assurance
Quality assurance consists of the auditing and reporting functions that assess the effectiveness and
completeness of quality control activities. The goal of quality assurance is to provide management with the
data necessary to be informed about product quality, thereby gaining insight and confidence that product
quality is meeting its goals.
1.4 Cost of Quality
The cost of quality includes all costs incurred in the pursuit of quality or in performing quality-related activities.
Quality costs may be divided into costs associated with prevention, appraisal, and failure.
Prevention costs include quality planning
formal technical reviews
test equipment
training
Appraisal costs include activities to gain insight into product condition the “first time through” each
process. Examples of appraisal costs include
in-process and interprocess inspection
equipment calibration and maintenance
testing
Failure costs are those that would disappear if no defects appeared before shipping a product to customers. Failure costs may be subdivided into internal failure costs and external failure costs.
Internal failure costs are incurred when we detect a defect in our product prior to shipment. Internal failure
costs include
rework
repair
failure mode analysis
External failure costs are associated with defects found after the product has been shipped to the customer. Examples of external failure costs are
complaint resolution
product return and replacement
help line support
warranty work
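The prevention/appraisal/failure breakdown above lends itself to a simple tally. The category names follow the notes; the dollar figures are invented for illustration.

```python
# Illustrative sketch: totaling cost of quality by category.
# Category names follow the notes; the amounts are hypothetical.

costs = {
    "prevention": {"quality planning": 5000, "reviews": 8000, "training": 3000},
    "appraisal": {"inspection": 4000, "calibration": 1000, "testing": 9000},
    "internal_failure": {"rework": 7000, "repair": 2000},
    "external_failure": {"complaint resolution": 6000, "warranty work": 4000},
}

by_category = {cat: sum(items.values()) for cat, items in costs.items()}
total = sum(by_category.values())
print(by_category)  # per-category totals
print(total)        # overall cost of quality
```

Tracking the split this way makes the usual argument visible: money spent on prevention and appraisal is intended to shrink the (typically larger) failure costs.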
2) SOFTWARE QUALITY ASSURANCE
Software quality is defined as conformance to explicitly stated functional and performance requirements,
explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.
The definition serves to emphasize three important points: 1) Software requirements are the foundation from which quality is measured. Lack of conformance to
requirements is lack of quality.
2) Specified standards define a set of development criteria that guide the manner in which software is engineered. If the criteria are not followed, lack of quality will almost surely result.
3) A set of implicit requirements often goes unmentioned (e.g., the desire for ease of use and good
maintainability). If software conforms to its explicit requirements but fails to meet implicit
requirements, software quality is suspect.
2.1 Background Issues
The first formal quality assurance and control function was introduced at Bell Labs in 1916 and
spread rapidly throughout the manufacturing world. During the 1940s, more formal approaches to quality
control were suggested. These relied on measurement and continuous process improvement as key
elements of quality management. Today, every company has mechanisms to ensure quality in its products.
During the early days of computing (1950s and 1960s), quality was the sole responsibility of the
programmer. Standards for quality assurance for software were introduced in military contract software
development during the 1970s.
Extending the definition presented earlier, software quality assurance is a "planned and systematic
pattern of actions" that are required to ensure high quality in software. The scope of quality assurance
responsibility might best be characterized by paraphrasing a once-popular automobile commercial: "Quality Is Job #1." The implication for software is that many different constituencies have software quality
assurance responsibility—software engineers, project managers, customers, salespeople, and the
individuals who serve within an SQA group.
The SQA group serves as the customer's in-house representative. That is, the people who perform
SQA must look at the software from the customer's point of view
2.2 SQA Activities
Software quality assurance is composed of a variety of tasks associated with two different constituencies: the software engineers who do technical work and an SQA group that has responsibility for quality assurance planning, oversight, record keeping,
analysis, and reporting.
The Software Engineering Institute recommends a set of SQA activities that address quality assurance
planning, oversight, record keeping, analysis, and reporting. These activities are performed (or facilitated)
by an independent SQA group that conducts the following activities.
Prepares an SQA plan for a project. The plan is developed during project planning and is reviewed by all
interested parties. Quality assurance activities performed by the software engineering team and the SQA
group are governed by the plan. The plan identifies
evaluations to be performed
audits and reviews to be performed
standards that are applicable to the project
procedures for error reporting and tracking
documents to be produced by the SQA group
amount of feedback provided to the software project team
Participates in the development of the project’s software process description. The software team selects a process for the work to be performed. The SQA group reviews the process description for compliance with organizational policy, internal software standards, externally imposed standards (e.g., ISO-9001), and other parts of the software project plan.
Reviews software engineering activities to verify compliance with the defined software process. The SQA group identifies, documents, and tracks deviations from the process and verifies that corrections have been made.
Audits designated software work products to verify compliance with those defined as part of the software process. The SQA group reviews selected work products; identifies, documents, and tracks deviations; verifies that corrections have been made; and periodically reports the results of its work to the project manager.
Ensures that deviations in software work and work products are documented and handled according to a documented procedure. Deviations may be encountered in the project plan, process description, applicable standards, or technical work products.
Records any noncompliance and reports to senior management. Noncompliance items are tracked until they are resolved.
3) SOFTWARE REVIEWS
Software reviews are a "filter" for the software engineering process. That is, reviews are applied at various points during software development and serve to uncover errors and defects that can then be removed.
Software reviews "purify" the software engineering activities that we have called analysis, design, and
coding. Many different types of reviews can be conducted as part of software engineering. Each has its
place. An informal meeting around the coffee machine is a form of review, if technical problems are
discussed. A formal presentation of software design to an audience of customers, management, and
technical staff is also a form of review.
A formal technical review is the most effective filter from a quality assurance standpoint. Conducted by software engineers (and others) for software engineers, the FTR is an effective means for improving
software quality.
3.1 Cost Impact of Software Defects:
The primary objective of formal technical reviews is to find errors during the process so that they
do not become defects after release of the software.
A number of industry studies indicate that design activities introduce between 50 and 65 percent
of all errors during the software process. However, formal review techniques have been shown to be up to
75 percent effective] in uncovering design errors. By detecting and removing a large percentage of these
errors, the review process substantially reduces the cost of subsequent steps in the development and support
phases.
To illustrate the cost impact of early error detection, we consider a series of relative costs that are based on actual cost data collected for large software projects. Assume that an error uncovered during design will cost 1.0 monetary unit to correct. Relative to this cost, the same error uncovered
just before testing commences will cost 6.5 units;
during testing, 15 units;
and after release, between 60 and 100 units.
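The relative costs above can be applied to a hypothetical defect profile to show why early detection pays. The unit costs come from the notes; the error counts below are invented for illustration.

```python
# Relative cost of correcting the same errors at different points in the
# process (unit costs from the notes; error counts are hypothetical).

unit_cost = {"design": 1.0, "before_test": 6.5,
             "during_test": 15.0, "after_release": 60.0}

def total_cost(errors_found):
    """errors_found: {phase: number of errors corrected in that phase}"""
    return sum(unit_cost[phase] * n for phase, n in errors_found.items())

early = total_cost({"design": 80, "before_test": 20})         # mostly caught early
late = total_cost({"during_test": 50, "after_release": 50})   # mostly caught late
print(early, late)  # 210.0 3750.0
```

Both scenarios correct 100 errors, but catching them late costs over an order of magnitude more, which is the argument for formal technical reviews made in the next section.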
3.2) Defect Amplification and Removal:
(This topic is covered later in these notes; see Defect Amplification and Removal.)
4) FORMAL TECHNICAL REVIEWS
A formal technical review is a software quality assurance activity performed by software engineers
(and others). The objectives of the FTR are
(1) to uncover errors in function, logic, or implementation for any representation of the software;
(2) to verify that the software under review meets its requirements;
(3) to ensure that the software has been represented according to predefined standards;
(4) to achieve software that is developed in a uniform manner; and
(5) to make projects more manageable.
4.1 The Review Meeting
Every review meeting should abide by the following constraints:
Between three and five people (typically) should be involved in the review.
Advance preparation should occur but should require no more than two hours of work for each
person.
The duration of the review meeting should be less than two hours.
The focus of the FTR is on a work product.
The individual who has developed the work product—the producer—informs the project leader that the
work product is complete and that a review is required.
The project leader contacts a review leader, who evaluates the product for readiness, generates
copies of product materials, and distributes them to two or three reviewers for advance
preparation.
Each reviewer is expected to spend between one and two hours reviewing the product, making
notes, and otherwise becoming familiar with the work.
The review meeting is attended by the review leader, all reviewers, and the producer. One of the
reviewers takes on the role of the recorder; that is, the individual who records (in writing) all
important issues raised during the review.
At the end of the review, all attendees of the FTR must decide whether to
(1) accept the product without further modification,
(2) reject the product due to severe errors (once corrected, another review must be performed), or
(3) accept the product provisionally.
Once the decision is made, all FTR attendees complete a sign-off, indicating their participation in the review and
their concurrence with the review team's findings.
4.2 Review Reporting and Record Keeping
At the end of the review meeting, a review issues list is produced. In addition, a formal technical review summary report is completed. A review summary report answers three questions:
1. What was reviewed?
2. Who reviewed it?
3. What were the findings and conclusions?
The review summary report is a single-page form.
It is important to establish a follow-up procedure to ensure that items on the issues list have been properly
corrected.
4.3 Review Guidelines
The following represents a minimum set of guidelines for formal technical reviews:
1. Review the product, not the producer. An FTR involves people and egos. Conducted properly, the
FTR should leave all participants with a warm feeling of accomplishment.
2. Set an agenda and maintain it. An FTR must be kept on track and on schedule. The review leader
is chartered with the responsibility for maintaining the meeting schedule and should not be afraid
to nudge people when drift sets in.
3. Limit debate and rebuttal. When an issue is raised by a reviewer, there may not be universal
agreement on its impact.
4. Enunciate problem areas, but don't attempt to solve every problem noted. A review is not a
problem-solving session. The solution of a problem can often be accomplished by the producer
alone or with the help of only one other individual. Problem solving should be postponed until
after the review meeting.
5. Take written notes. It is sometimes a good idea for the recorder to make notes on a wall board, so
that wording and priorities can be assessed by other reviewers as information is recorded.
6. Limit the number of participants and insist upon advance preparation. Keep the number of
people involved to the necessary minimum.
7. Develop a checklist for each product that is likely to be reviewed. A checklist helps the review leader to structure the FTR meeting and helps each reviewer to focus on important issues.
Checklists should be developed for analysis, design, code, and even test documents.
8. Allocate resources and schedule time for FTRs. For reviews to be effective, they should be
scheduled as a task during the software engineering process
9. Conduct meaningful training for all reviewers. To be effective all review participants should
receive some formal training.
10. Review your early reviews. Debriefing can be beneficial in uncovering problems with the review
process itself.
4.4 Sample-Driven Reviews (SDRs):
SDRs attempt to quantify those work products that are primary targets for full FTRs. To accomplish this, the following steps are suggested:
• Inspect a fraction a_i of each software work product i. Record the number of faults f_i found within a_i.
• Develop a gross estimate of the number of faults within work product i by multiplying f_i by 1/a_i.
• Sort the work products in descending order according to the gross estimate of the number of faults
in each.
• Focus available review resources on those work products that have the highest estimated number
of faults.
The fraction of the work product that is sampled must
be representative of the work product as a whole and
large enough to be meaningful to the reviewer(s) who does the sampling.
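The SDR steps above reduce to a small computation: inspect a fraction a_i of each work product, count faults f_i, estimate total faults as f_i / a_i, and sort. The work-product names and numbers below are hypothetical.

```python
# Sketch of sample-driven review (SDR) fault estimation.
# samples maps a work product to (fraction inspected a_i, faults found f_i).

def estimated_faults(samples):
    """Gross-estimate total faults per work product as f_i * (1 / a_i),
    sorted in descending order so review effort goes to the worst first."""
    est = {name: f / a for name, (a, f) in samples.items()}
    return sorted(est.items(), key=lambda kv: kv[1], reverse=True)

ranking = estimated_faults({
    "requirements spec": (0.20, 4),   # estimate about 20 faults
    "design model":      (0.25, 10),  # estimate about 40 faults
    "source code":       (0.10, 3),   # estimate about 30 faults
})
print(ranking[0][0])  # design model
```

Here the design model gets the full FTR first, even though the source code sample found fewer raw faults: the smaller inspected fraction inflates its estimate less than the design model's count does.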
5) STATISTICAL SOFTWARE QUALITY ASSURANCE
For software, statistical quality assurance implies the following steps:
1. Information about software defects is collected and categorized.
2. An attempt is made to trace each defect to its underlying cause (e.g., non-conformance to specifications, design error, violation of standards, poor communication with the customer).
3. Using the Pareto principle (80 percent of the defects can be traced to 20 percent of all possible causes), isolate the 20 percent (the "vital few").
4. Once the vital few causes have been identified, move to correct the problems that have caused the defects.
The application of statistical SQA and the Pareto principle can be summarized in a single sentence: spend your time focusing on things that really matter, but first be sure that you understand what really matters.
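The Pareto step above can be sketched directly: group defects by underlying cause and keep adding the most common causes until roughly 80 percent of all defects are covered. The cause names and counts below are invented.

```python
# Sketch of isolating the "vital few" causes per the Pareto principle.
from collections import Counter

# Hypothetical defect log: one entry per defect, labeled by root cause.
defect_causes = (["incomplete spec"] * 40 + ["design error"] * 25 +
                 ["standards violation"] * 15 + ["misc"] * 20)

counts = Counter(defect_causes)
total = sum(counts.values())

vital_few, covered = [], 0
for cause, n in counts.most_common():
    if covered >= 0.80 * total:
        break  # the remaining causes are the "trivial many"
    vital_few.append(cause)
    covered += n
print(vital_few)  # ['incomplete spec', 'design error', 'misc']
```

Correction effort then concentrates on the causes in `vital_few`, which is the "focus on things that really matter" advice made concrete.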
5.1 Six Sigma for software Engineering:
Six Sigma is the most widely used strategy for statistical quality assurance in industry today.
The term “six sigma” is derived from six standard deviations—3.4 instances (defects) per million
occurrences—implying an extremely high quality standard. The Six Sigma methodology defines three core
steps:
Define customer requirements and deliverables and project goals via well-defined methods of
customer communication
Measure the existing process and its output to determine current quality performance (collect defect metrics)
Analyze defect metrics and determine the vital few causes.
If an existing software process is in place, but improvement is required, Six Sigma suggests two additional
steps.
Improve the process by eliminating the root causes of defects.
Control the process to ensure that future work does not reintroduce the causes of defects
These core and additional steps are sometimes referred to as the DMAIC (define, measure, analyze, improve, and control) method.
If an organization is developing a software process (rather than improving an existing process),
the core steps are augmented as follows:
Design the process to
o avoid the root causes of defects and
o to meet customer requirements
Verify that the process model will, in fact, avoid defects and meet customer requirements.
This variation is sometimes called the DMADV (define, measure, analyze, design and verify) method.
6) THE ISO 9000 QUALITY STANDARDS
A quality assurance system may be defined as the organizational structure, responsibilities, procedures,
processes, and resources for implementing quality management.
ISO 9000 describes quality assurance elements in generic terms that can be applied to any business
regardless of the products or services offered.
ISO 9001:2000 is the quality assurance standard that applies to software engineering. The standard
contains 20 requirements that must be present for an effective quality assurance system. Because the ISO
9001:2000 standard is applicable to all engineering disciplines, a special set of ISO guidelines have been
developed to help interpret the standard for use in the software process.
The requirements delineated by ISO 9001 address topics such as
- management responsibility,
- quality system,
- contract review,
- design control,
- document and data control,
- product identification and traceability,
- process control,
- inspection and testing,
- corrective and preventive action,
- control of quality records,
- internal quality audits,
- training,
- servicing, and
- statistical techniques.
In order for a software organization to become registered to ISO 9001, it must establish policies and
procedures to address each of the requirements just noted (and others) and then be able to demonstrate that
these policies and procedures are being followed.
7) SOFTWARE RELIABILITY
Software reliability is defined in statistical terms as "the probability of failure-free operation of a
computer program in a specified environment for a specified time".
7.1 Measures of Reliability and Availability
Most hardware-related reliability models are predicated on failure due to wear rather than failure due to
design defects. In hardware, failures due to physical wear (e.g., the effects of temperature, corrosion,
shock) are more likely than a design-related failure. Unfortunately, the opposite is true for software. In fact,
all software failures can be traced to design or implementation problems; wear does not enter into the
picture.
A simple measure of reliability is mean-time-between-failure (MTBF), where
MTBF = MTTF + MTTR
The acronyms MTTF and MTTR are mean-time-to-failure and mean-time-to-repair, respectively.
In addition to a reliability measure, we must develop a measure of availability. Software availability is the
probability that a program is operating according to requirements at a given point in time and is defined as
Availability = [MTTF/(MTTF + MTTR)] x 100%
The MTBF reliability measure is equally sensitive to MTTF and MTTR. The availability measure is
somewhat more sensitive to MTTR, an indirect measure of the maintainability of software.
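The two measures above are simple enough to compute directly. The hour figures below are hypothetical.

```python
# Reliability measures as defined above:
#   MTBF = MTTF + MTTR
#   availability = MTTF / (MTTF + MTTR) * 100%

def mtbf(mttf, mttr):
    """Mean time between failures, in the same units as mttf/mttr."""
    return mttf + mttr

def availability(mttf, mttr):
    """Percentage of time the software is operating per requirements."""
    return mttf / (mttf + mttr) * 100

print(mtbf(68, 2))          # 70 hours between failures
print(availability(68, 2))  # about 97.1% available
```

The example also shows why availability is more sensitive to MTTR: halving repair time to 1 hour barely changes MTBF (69 vs. 70) but lifts availability noticeably, since MTTR appears only in the denominator's repair share.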
7.2) Software Safety
Software safety is a software quality assurance activity that focuses on the identification and
assessment of potential hazards that may affect software negatively and cause an entire system to fail. If
hazards can be identified early in the software engineering process, software design features can be
specified that will either eliminate or control potential hazards.
For example, some of the hazards associated with a computer-based cruise control for an automobile might
be
causes uncontrolled acceleration that cannot be stopped
does not respond to depression of brake pedal (by turning off)
does not engage when switch is activated
slowly loses or gains speed
Once these system-level hazards are identified, analysis techniques are used to assign severity and
probability of occurrence.To be effective, software must be analyzed in the context of the entire system.
If a set of external environmental conditions are met (and only if they are met), the improper position of the
mechanical device will cause a disastrous failure. Analysis techniques such as fault tree analysis [VES81],
real-time logic [JAN86], or petri net models [LEV87] can be used to predict the chain of events that can
cause hazards and the probability that each of the events will occur to create the chain.
Once hazards are identified and analyzed, safety-related requirements can be specified for the
software. That is, the specification can contain a list of undesirable events and the desired system responses
to these events. The role of software in managing undesirable events is then indicated.
Although software reliability and software safety are closely related to one another, it is important
to understand the subtle difference between them. Software reliability uses statistical analysis to determine
the likelihood that a software failure will occur. However, the occurrence of a failure does not necessarily
result in a hazard or mishap. Software safety examines the ways in which failures result in conditions that
can lead to a mishap.
Defect Amplification and Removal:
Defect Amplification Model
A defect amplification model can be used to illustrate the generation and detection of errors during
the preliminary design, detail design, and coding steps of the software engineering process.
A box represents a software development step. During the step, errors may be inadvertently generated.
Review may fail to uncover newly generated errors and errors from previous steps, resulting in some
number of errors that are passed through. In some cases, errors passed through from previous steps are
amplified (amplification factor, x) by current work. The box subdivisions represent each of these
characteristics and the percent of efficiency for detecting errors, a function of the thoroughness of the
review.
Referring to Figure 8.3, each test step is assumed to uncover and correct 50 percent of all incoming errors without introducing any new errors (an optimistic assumption). Ten preliminary design defects are amplified to 94 errors before testing commences. Twelve latent errors are released to the field.
Figure 8.4 considers the same conditions except that design and code reviews are conducted as part
of each development step. In this case, ten initial preliminary design errors are amplified to 24 errors before
testing commences. Only three latent errors exist.
Recalling the relative costs associated with the discovery and correction of errors, overall cost
(with and without review for our hypothetical example) can be established. The number of errors
uncovered during each of the steps noted in Figures 8.3 and 8.4 is multiplied by the cost to remove an error
(1.5 cost units for design, 6.5 cost units before test, 15 cost units during test, and 67 cost units after release).
Using these data, the total cost for development and maintenance when reviews are
conducted is 783 cost units.
When no reviews are conducted, total cost is 2177 units—nearly three times more
costly. To conduct reviews, a software engineer must expend time and effort and the development
organization must spend money. Formal technical reviews (for design and other technical activities) provide a demonstrable cost benefit. They should be conducted.
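The cost comparison above is just "errors found at each step times the unit cost to remove them there." The sketch below uses the unit costs quoted above; the error distributions are invented for illustration and do not reproduce the exact figures from the notes' example.

```python
# Sketch of the defect-amplification cost comparison. Unit removal costs
# follow the notes (1.5 design, 6.5 before test, 15 during test, 67 after
# release); the error counts per step are hypothetical.

unit_cost = {"design": 1.5, "before_test": 6.5,
             "during_test": 15, "after_release": 67}

def removal_cost(errors):
    """errors: {step: number of errors found and removed at that step}"""
    return sum(unit_cost[step] * n for step, n in errors.items())

# With reviews, most errors are removed early, where removal is cheap.
with_reviews = removal_cost({"design": 22, "before_test": 2,
                             "during_test": 10, "after_release": 3})
# Without reviews, the same errors surface in test or in the field.
no_reviews = removal_cost({"during_test": 47, "after_release": 12})
print(with_reviews, no_reviews)  # 397.0 1509
```

Even with invented numbers, the no-reviews total is several times larger, mirroring the 783-vs-2177 comparison in the notes: review effort is repaid by the errors that never reach the expensive steps.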
FIGURE 8.3
Defect amplification, no reviews
FIGURE 8.4
Defect amplification, reviews conducted