TRANSCRIPT
Software Development Life Cycle Process
SDLC is a process which defines the various stages involved in the development of software for
delivering a high-quality product. The SDLC stages cover the complete life cycle of software, i.e.,
from inception to retirement of the product.
Adhering to the SDLC process leads to the development of the software in a systematic and
disciplined manner.
Purpose: The purpose of SDLC is to deliver a high-quality product that meets the customer's requirements.
SDLC has defined phases: Requirement Gathering, Designing, Coding, Testing, and
Maintenance. It is important to adhere to these phases to deliver the product in a systematic
manner.
For Example, suppose software has to be developed and a team is divided to work on a feature of the
product, with each member allowed to work however they want. One of the developers decides to design first,
another decides to code first, and a third starts on the documentation.
This will lead to project failure, which is why good knowledge and
understanding among the team members is necessary to deliver the expected product.
SDLC Cycle
SDLC Cycle represents the process of developing software.
SDLC Phases
Given below are the various phases:
Requirement gathering and analysis
Design
Implementation or coding
Testing
Deployment
Maintenance
#1) Requirement Gathering and Analysis
During this phase, all the relevant information is collected from the customer to develop a
product as per their expectations. Any ambiguities must be resolved in this phase itself.
The business analyst and project manager set up a meeting with the customer to gather all the
information: what the customer wants to build, who the end users will be, and what the purpose
of the product is. Before building a product, a core understanding or knowledge of the product is
very important.
For Example, a customer wants an application which involves money transactions. In
this case, the requirements have to be clear: what kinds of transactions will be done, how they will
be done, in which currency, and so on.
Once the requirement gathering is done, an analysis is done to check the feasibility of the
development of a product. In case of any ambiguity, a call is set up for further discussion.
Once the requirement is clearly understood, the SRS (Software Requirement Specification)
document is created. This document should be thoroughly understood by the developers and also
should be reviewed by the customer for future reference.
#2) Design
In this phase, the requirements gathered in the SRS document are used as input, and the software
architecture that will be used for implementing system development is derived.
#3) Implementation or Coding
Implementation/Coding starts once the developer gets the Design document. The Software
design is translated into source code. All the components of the software are implemented in this
phase.
#4) Testing
Testing starts once the coding is complete and the modules are released for testing. In this phase,
the developed software is tested thoroughly and any defects found are assigned to developers to
get them fixed.
Retesting and regression testing are done until the software meets the customer's
expectations. Testers refer to the SRS document to make sure the software meets the customer's
standards.
#5) Deployment
Once the product is tested, it is deployed in the production environment, or UAT (User
Acceptance Testing) is done first, depending on the customer's expectation.
In the case of UAT, a replica of the production environment is created, and the customer, along
with the developers, does the testing. If the customer finds that the application works as expected,
sign-off to go live is provided by the customer.
#6) Maintenance
After the product is deployed in the production environment, maintenance of the product
(fixing any issues that come up and making any enhancements) is taken care of by
the developers.
Software Development Life Cycle Models
A software life cycle model is a descriptive representation of the software development cycle.
SDLC models may take different approaches, but the basic phases and activities remain the same
for all the models.
#1) Waterfall Model
The Waterfall model is the very first model used in SDLC. It is also known as the linear
sequential model.
In this model, the outcome of one phase is the input for the next phase. Development of the next
phase starts only when the previous phase is complete.
First, requirement gathering and analysis is done. Only once the requirements are frozen
can the System Design start. Herein, the SRS document created is the output of the
Requirement phase, and it acts as an input for the System Design.
In System Design, the software architecture and design documents are created; these act as input for
the next phase, i.e., Implementation and Coding.
In the Implementation phase, coding is done and the software developed is the input for
the next phase i.e. testing.
In the testing phase, the developed code is tested thoroughly to detect defects in the
software. Defects are logged into the defect tracking tool and are retested once fixed. Bug
logging, retesting, and regression testing go on until the software is in a go-live state.
In the Deployment phase, the developed code is moved into production after the sign off
is given by the customer.
Any issues in the production environment are resolved by the developers; this comes
under maintenance.
Advantages of the Waterfall Model:
The Waterfall model is a simple model which can be easily understood, and one in
which all the phases are done step by step.
Deliverables of each phase are well defined, and this leads to no complexity and makes
the project easily manageable.
Disadvantages of the Waterfall Model:
The Waterfall model is time-consuming and cannot be used for short-duration projects, as in
this model a new phase cannot be started until the ongoing phase is completed.
The Waterfall model cannot be used for projects which have uncertain requirements or
wherein the requirements keep changing, as this model expects the requirements to be
clear in the requirement gathering and analysis phase itself; any change in the later
stages would cost more, as the changes would be required in all the phases.
#2) V-Shaped Model
The V-model is also known as the Verification and Validation model. In this model, verification and
validation go hand in hand, i.e., development and testing go on in parallel. The V-model and the Waterfall
model are the same, except that test planning and testing start at an early stage in the V-model.
a) Verification Phase:
(i) Requirement Analysis:
In this phase, all the required information is gathered & analyzed. Verification activities include
reviewing the requirements.
(ii) System Design:
Once the requirements are clear, the system is designed, i.e., the architecture and components of the product
are created and documented in a design document.
(iii) High-Level Design:
High-level design defines the architecture/design of the modules. It defines the functionality between
two modules.
(iv) Low-Level Design:
Low-level design defines the architecture/design of individual components.
(v) Coding:
Code development is done in this phase.
b) Validation Phase:
(i) Unit Testing:
Unit testing is performed using the unit test cases that are designed in the Low-Level
Design phase. Unit testing is performed by the developers themselves. It is performed on individual
components, which leads to early defect detection.
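As a sketch, a unit test for a single component might look like the following; the `transfer` function and its rules are hypothetical, invented only to illustrate the phase:

```python
import unittest

def transfer(balance, amount):
    """Debit `amount` from `balance`. Hypothetical component under test."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class TransferUnitTest(unittest.TestCase):
    def test_valid_transfer(self):
        self.assertEqual(transfer(100, 40), 60)

    def test_rejects_overdraft(self):
        # Testing the component in isolation catches this defect early.
        with self.assertRaises(ValueError):
            transfer(100, 150)
```

Because the component is exercised in isolation, a defect in the overdraft check would surface here, long before integration.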
(ii) Integration Testing:
Integration testing is performed using integration test cases in High-level Design phase.
Integration testing is the testing that is done on integrated modules. It is performed by testers.
(iii) System Testing:
System testing is performed using the system test cases designed in the System Design phase. In this phase, the complete system is
tested, i.e., the entire system functionality is tested.
(iv) Acceptance Testing:
Acceptance testing is associated with the Requirement Analysis phase and is done in the
customer’s environment.
Advantages of V-Model:
It is a simple and easily understandable model.
The V-model approach is good for smaller projects wherein the requirements are defined and
frozen at an early stage.
It is a systematic and disciplined model which results in a high-quality product.
Disadvantages of V-Model:
V-shaped model is not good for ongoing projects.
A requirement change at a later stage would cost too much.
#3) Prototype Model
The prototype model is a model in which the prototype is developed prior to the actual software.
Prototype models have limited functional capabilities and inefficient performance when
compared to the actual software. Dummy functions are used to create prototypes. This is a
valuable mechanism for understanding the customers’ needs.
Software prototypes are built prior to the actual software to get valuable feedback from the
customer. Feedback is implemented, and the prototype is again reviewed by the customer for
any changes. This process goes on until the model is accepted by the customer.
Once requirement gathering is done, a quick design is created and a prototype is built, which is
presented to the customer for evaluation.
Customer feedback and the refined requirements are used to modify the prototype, which is again
presented to the customer for evaluation. Once the customer approves the prototype, it is used as
a requirement for building the actual software. The actual software is built using the Waterfall
model approach.
Advantages of Prototype Model:
The prototype model reduces the cost and time of development, as defects are found much
earlier.
A missing feature or functionality or a change in requirements can be identified in the
evaluation phase and implemented in the refined prototype.
Involvement of a customer from the initial stage reduces any confusion in the
requirement or understanding of any functionality.
Disadvantages of Prototype Model:
Since the customer is involved in every phase, the customer can change the requirement
of the end product which increases the complexity of the scope and may increase the
delivery time of the product.
#4) Spiral Model
The Spiral model includes iterative and prototyping approaches.
The Spiral model phases are followed in iterations. The loops in the model represent the phases of
the SDLC process: the innermost loop is requirement gathering and analysis, which is followed by
planning, risk analysis, development, and evaluation. The next loop is designing, followed by
implementation and then testing.
Spiral Model has four phases:
Planning
Risk Analysis
Engineering
Evaluation
(i) Planning:
The planning phase includes requirement gathering, wherein all the required information is
gathered from the customer and documented. A software requirement specification document is
created for the next phase.
(ii) Risk Analysis:
In this phase, the best solution for the risks involved is selected, and analysis is done by building
a prototype.
For Example, the risk involved in accessing the data from a remote database can be that the data
access rate might be too slow. The risk can be resolved by building a prototype of the data access
subsystem.
(iii) Engineering:
Once the risk analysis is done, coding and testing are done.
(iv) Evaluation:
The customer evaluates the developed system and plans for the next iteration.
Advantages of Spiral Model:
Risk analysis is done extensively using the prototype models.
Any enhancement or change in the functionality can be done in the next iteration.
Disadvantages of Spiral Model:
The spiral model is best suited for large projects only.
The cost can be high, as it might take a large number of iterations, which can lead to a long
time to reach the final product.
#5) Iterative Incremental Model
The iterative incremental model divides the product into small chunks.
For Example, the feature to be developed in an iteration is decided and implemented. Each
iteration goes through the phases of Requirement Analysis, Designing, Coding, and
Testing. Detailed planning is not required in iterations.
Once the iteration is completed, a product is verified and is delivered to the customer for their
evaluation and feedback. Customer’s feedback is implemented in the next iteration along with
the newly added feature.
Hence, the product grows incrementally in terms of features, and once all the iterations are completed, the final
build holds all the features of the product.
Phases of Iterative & Incremental Development Model:
Inception phase
Elaboration Phase
Construction Phase
Transition Phase
(i) Inception Phase:
Inception phase includes the requirement and scope of the Project.
(ii) Elaboration Phase:
In the elaboration phase, the working architecture of the product is delivered; it covers the risks
identified in the inception phase and also fulfills the non-functional requirements.
(iii) Construction Phase:
In the Construction phase, the architecture is filled in with code that is ready to be
deployed; the code is created through analysis, design, implementation, and testing of the
functional requirements.
(iv) Transition Phase:
In the Transition Phase, the product is deployed in the Production environment.
Advantages of Iterative & Incremental Model:
Any change in the requirements can be done easily and would not cost much, as there is scope
for incorporating the new requirement in the next iteration.
Risk is analyzed & identified in the iterations.
Defects are detected at an early stage.
As the product is divided into smaller chunks it is easy to manage the product.
Disadvantages of Iterative & Incremental Model:
A complete requirement and understanding of the product are required in order to break it down
and build it incrementally.
#6) Big Bang Model
The Big Bang model does not have any defined process. Money and effort are put in as the
input, and the output is a developed product, which might or might not be what
the customer needs.
The Big Bang model does not require much planning and scheduling. The developer does the
requirement analysis and coding and develops the product as per their understanding. This model is
used for small projects only. There is no testing team and no formal testing is done, and this
could be a cause of project failure.
Advantages of Big Bang Model:
It’s a very simple Model.
Less Planning and scheduling is required.
The developers have the flexibility to build the software in their own way.
Disadvantages of the Big Bang Model:
Big Bang models cannot be used for large, ongoing & complex projects.
High risk and uncertainty.
#7) Agile Model
The Agile model is a combination of the iterative and incremental models. This model focuses more
on flexibility while developing a product than on the requirements.
In Agile, a product is broken into small incremental builds. It is not developed as a complete
product in one go. Each build increments in terms of features. The next build is built on previous
functionality.
In Agile, iterations are termed sprints. Each sprint lasts for 2 to 4 weeks. At the end of each sprint,
the product owner verifies the product, and after his approval, it is delivered to the customer.
Customer feedback is taken for improvement, and the customer's suggestions and enhancements are worked
on in the next sprint. Testing is done in each sprint to minimize the risk of failures.
Advantages of Agile Model:
It allows more flexibility to adapt to the changes.
The new feature can be added easily.
Customer satisfaction is high, as feedback and suggestions are taken at every stage.
Disadvantages of Agile Model:
Lack of documentation.
Agile needs experienced and highly skilled resources.
If a customer is not clear about how exactly they want the product to be, then the project
would fail.
Conclusion
Adherence to a suitable life cycle is very important for the successful completion of the project.
This, in turn, makes management easier.
Different Software Development Life Cycle models have their own Pros and Cons. The best
model for any Project can be determined by the factors like Requirement (whether it is clear or
unclear), System Complexity, Size of the Project, Cost, Skill limitation, etc.
For Example, in the case of unclear requirements, the Spiral and Agile models are best used, as
required changes can be accommodated easily at any stage.
The Waterfall model is the basic model, and all the other SDLC models are based on it.
Software Testing Life Cycle (STLC)
Software Testing Life Cycle (STLC) is a sequence of different activities performed during the
software testing process.
Characteristics of STLC:
STLC is a fundamental part of the Software Development Life Cycle (SDLC), but STLC
consists of only the testing phases.
STLC starts as soon as the requirements are defined or the software requirement document is
shared by stakeholders.
STLC yields a step-by-step process to ensure quality software.
In the initial stages of STLC, while the software product or application is being
developed, the testing team analyzes and defines the scope of testing, the entry and exit
criteria, and also the test cases. This helps to reduce the test cycle time and also enhance
product quality.
As soon as the development phase is over, the testing team is ready with test cases and starts
the execution. This helps in finding bugs in the early phase.
Phases of STLC:
Requirement Analysis:
Requirement Analysis is the first step of the Software Testing Life Cycle (STLC). In this phase,
the quality assurance team understands the requirements, such as what is to be tested. If anything is
missing or not understandable, the quality assurance team meets with the stakeholders to better
understand the requirements in detail.
Test Planning:
Test Planning is the phase of the software testing life cycle where all the testing plans are
defined. In this phase, the manager of the testing team calculates the estimated effort and cost for the
testing work. This phase starts once the requirement analysis phase is completed.
Test Case Development:
The test case development phase starts once the test planning phase is completed. In this
phase, the testing team writes the detailed test cases. The testing team also prepares the required test
data for the testing. Once the test cases are prepared, they are reviewed by the quality assurance
team.
Test Environment Setup:
Test environment setup is a vital part of the STLC. Basically, the test environment decides the
conditions under which the software is tested. This is an independent activity and can be started along with
test case development. The testing team is not involved in this process; either the developer or the
customer creates the testing environment.
Test Execution:
After test case development and test environment setup, the test execution phase starts. In
this phase, the testing team starts executing the test cases prepared in the earlier step.
Test Closure:
This is the last stage of STLC in which the process of testing is analyzed.
Software Testing Terms and Definitions
These terms describe fundamental concepts regarding the software development process and
software testing.
Precision and Accuracy
As a software tester, it’s important to know the difference between precision and accuracy.
Suppose that you’re testing a calculator. Should you test that the answers it returns are precise or
accurate? Both? If the project schedule forced you to make a risk-based decision to focus on only
one of these, which one would you choose?
What if the software you’re testing is a simulation game such as baseball or a flight simulator?
Should you primarily test its precision or its accuracy?
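The distinction can be made concrete with repeated readings against a known true value; the numbers below are invented for illustration. Accuracy is closeness to the true value; precision is the tightness of the cluster:

```python
import statistics

TRUE_VALUE = 10.0  # the answer the calculator should return

def accuracy_error(readings):
    """Accuracy: how close the average reading is to the true value."""
    return abs(statistics.mean(readings) - TRUE_VALUE)

def precision_spread(readings):
    """Precision: how tightly the readings cluster, true value ignored."""
    return statistics.stdev(readings)

precise_but_inaccurate = [12.01, 12.02, 11.99, 12.00]  # tight, but wrong
accurate_but_imprecise = [9.5, 10.6, 9.4, 10.5]        # centered, but scattered
```

The first data set is highly precise yet badly inaccurate; the second is the reverse. A risk-based decision would weigh which failure mode matters more for the product.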
Verification and Validation
Verification and validation are often used interchangeably but have different definitions. These
differences are important to software testing.
Verification is the process of confirming that something—software—meets its specification.
Validation is the process of confirming that it meets the user’s requirements. These may sound very
similar, but an explanation of the Hubble space telescope problems will help show the difference.
Quality and Reliability
Merriam-Webster’s Collegiate Dictionary defines quality as “a degree of excellence” or
“superiority in kind.” If a software product is of high quality, it will meet the customer’s needs.
The customer will feel that the product is excellent and superior to his other choices.
Software testers often fall into the trap of believing that quality and reliability are the same thing.
They feel that if they can test a program until it’s stable, dependable, and reliable, they are
assuring a high-quality product. Unfortunately, that isn’t necessarily true. Reliability is just one
aspect of quality.
Testing and Quality Assurance (QA)
The last pair of definitions is testing and quality assurance (sometimes shortened to QA). These
two terms are the ones most often used to describe either the group or the process that’s verifying
and validating the software.
The goal of a software tester is to find bugs, find them as early as possible, and make sure they
get fixed.
A software quality assurance person’s main responsibility is to create and enforce standards and
methods to improve the development process and to prevent bugs from ever occurring.
Dynamic Black-Box Testing
Testing software without having insight into the details of the underlying code is dynamic black-box
testing. It’s dynamic because the program is running—you’re using it as a customer would.
And, it’s black-box because you’re testing it without knowing exactly how it works— with
blinders on. You’re entering inputs, receiving outputs, and checking the results. Another name
commonly used for dynamic black-box testing is behavioral testing because you’re testing how
the software actually behaves when it’s used.
To do this effectively requires some definition of what the software does—namely, a
requirements document or product specification. You don’t need to be told what happens inside
the software “box”—you just need to know that inputting A outputs B or that performing
operation C results in D. A good product spec will provide you with these details.
Once you know the ins and outs of the software you’re about to test, your next step is to start
defining the test cases. Test cases are the specific inputs that you’ll try and the procedures that
you’ll follow when you test the software.
Test-to-Pass and Test-to-Fail
There are two fundamental approaches to testing software: test-to-pass and test-to-fail. When
you test-to-pass, you really assure only that the software minimally works. You don’t push its
capabilities. You don’t see what you can do to break it. You treat it with kid gloves, applying the
simplest and most straightforward test cases.
After you assure yourself that the software does what it’s specified to do in ordinary
circumstances, it’s time to put on your sneaky, conniving, devious hat and attempt to find bugs
by trying things that should force them out. Designing and running test cases with the sole
purpose of breaking the software is called testing-to-fail or error-forcing. You’ll learn later in
this chapter that test-to-fail cases often don’t appear intimidating. They often look like test-to-
pass cases, but they’re strategically chosen to probe for common weaknesses in the software.
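The two approaches can be sketched against a hypothetical input parser; the 0-to-130 age rule is an assumption made up for this example:

```python
def parse_age(text):
    """Hypothetical function under test: parse an age in the range 0..130."""
    value = int(text)              # garbage input raises ValueError here
    if not 0 <= value <= 130:
        raise ValueError("age out of range")
    return value

# Test-to-pass: ordinary, straightforward inputs that should just work.
for ok in ["25", "0", "130"]:
    parse_age(ok)

# Test-to-fail: inputs strategically chosen to force errors out.
for bad in ["-1", "131", "abc", ""]:
    try:
        parse_age(bad)
        raise AssertionError(f"expected {bad!r} to be rejected")
    except ValueError:
        pass                       # a controlled rejection is correct behavior
```

Notice that "131" looks almost identical to the test-to-pass case "130"; the difference is that it was chosen to sit just past a weakness in the software.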
Equivalence Partitioning
Selecting test cases is the single most important task that software testers do and equivalence
partitioning, sometimes called equivalence classing, is the means by which they do it.
Equivalence partitioning is the process of methodically reducing the huge (infinite) set of
possible test cases into a much smaller, but still equally effective, set.
For example, without knowing anything more about equivalence partitioning, would you think
that if you tested 1+1, 1+2, 1+3, and 1+4 that you’d need to test 1+5 and 1+6? Do you think you
could safely assume that they’d work?
Look at a few examples:
• In the case of adding two numbers together, there seemed to be a distinct difference between
testing 1+13 and 1+99999999999999999999999999999999. Call it a gut feeling, but one seemed
to be normal addition and the other seemed to be risky. That gut feeling is right. A program
would have to handle the addition of 1 to a maxed-out number differently than the addition of
two small numbers. It would need to handle an overflow condition. These two cases, because the
software most likely operates on them differently, are in different equivalence partitions.
If you have some programming experience, you might be thinking of several more “special”
numbers that could cause the software to operate differently. If you’re not a programmer, don’t
worry—you’ll learn the techniques very shortly and be able to apply them without having to
understand the code in detail.
• Figure 5.3 shows the Calculator’s Edit menu selected to display the copy and paste commands.
There are five ways to perform each function. For copy, you click the Copy menu item, type c or
C, or press Ctrl+c or Ctrl+Shift+c. Each input path copies the current number onto the
Clipboard—they take different routes but produce the same output and the same result.
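A minimal sketch of the idea, assuming the software treats negative, ordinary, and overflow-prone addends differently; the 10**8 ceiling is an invented stand-in for the real limit:

```python
MAX_SAFE = 10 ** 8   # assumed overflow threshold, for illustration only

def partition(addend):
    """Assign an input to the equivalence class the software most likely
    handles with the same code path."""
    if addend < 0:
        return "negative"
    if addend < MAX_SAFE:
        return "normal"
    return "overflow-risk"

# 1+5 and 1+6 fall in the same class as 1+1, so one representative
# per class is enough:
representatives = {partition(v): v for v in [1, -3, 10 ** 20]}
```

Three test cases now stand in for the infinite input space, which is exactly the reduction equivalence partitioning aims for.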
Data Testing
The simplest view of software is to divide its world into two parts: the data (or its domain) and
the program. The data is the keyboard input, mouse clicks, disk files, printouts, and so on. The
program is the executable flow, transitions, logic, and computations. A common approach to
software testing is to divide up the test work along the same lines.
When you perform software testing on the data, you’re checking that the information the user inputs,
the results he receives, and any interim results internal to the software are handled correctly.
Examples of data would be
• The words you type into a word processor
• The numbers entered into a spreadsheet
• The number of shots you have remaining in your space game
• The picture printed by your photo software
• The backup files stored on your floppy disk
• The data being sent by your modem over the phone lines
The amount of data handled by even the simplest programs can be overwhelming. Remember all
the possibilities for simple addition on a calculator? Consider a word processor, a missile
guidance system, or a stock trading program. The trick (if you can call it that) to making any of
these testable is to intelligently reduce the test cases by equivalence partitioning based on a few
key concepts: boundary conditions, sub-boundary conditions, nulls, and bad data.
Boundary Conditions
Boundary conditions are special because programming, by its nature, is susceptible to problems
at its edges. Software is very binary—something is either true or it isn’t. If an operation is
performed on a range of numbers, odds are the programmer got it right for the vast majority of
the numbers in the middle, but maybe made a mistake at the edges.
Testing the Boundary Edges
What you’ve learned so far is that you need to create equivalence partitions of the different data
sets that your software operates on. Since software is susceptible to bugs at the boundaries, if
you’re choosing what data to include in your equivalence partition, you’ll find more bugs if you
choose data from the boundaries.
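Choosing boundary data can be mechanized: for a valid range, test each edge plus the values just inside and just outside it. A sketch:

```python
def boundary_values(low, high):
    """Classic boundary test inputs for a valid range [low, high]:
    each edge, one step inside, and one step outside."""
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

# For a field specified to accept 1 through 100:
cases = boundary_values(1, 100)   # [0, 1, 2, 99, 100, 101]
```

Six values probe both partitions (valid and invalid) exactly where the programmer is most likely to have slipped.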
Sub-Boundary Conditions
The normal boundary conditions just discussed are the most obvious to find. They’re the ones
defined in the specification or evident when using the software. Some boundaries, though, that
are internal to the software aren’t necessarily apparent to an end user but still need to be checked
by the software tester. These are known as sub-boundary conditions or internal boundary
conditions.
Powers-of-Two
Computers and software are based on binary numbers—bits representing 0s and 1s, bytes made
up of 8 bits, words made up of 4 bytes, and so on. Table 5.1 shows the common powers-of-two
units and their equivalent values.
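These internal edges can be generated directly. A sketch that pairs each power-of-two unit with the values straddling it:

```python
# Common power-of-two boundaries (the units Table 5.1 describes):
# bit, nibble, byte, 16-bit word, 32-bit value.
POWERS_OF_TWO = [2 ** 1, 2 ** 4, 2 ** 8, 2 ** 16, 2 ** 32]

def sub_boundary_cases(n):
    """Test inputs straddling an internal boundary n."""
    return [n - 1, n, n + 1]

# e.g. for a byte boundary: [255, 256, 257]
cases = [v for n in POWERS_OF_TWO for v in sub_boundary_cases(n)]
```

Even though no specification mentions 256 or 65,536, these values mark where the software may switch internal representations, so they deserve test cases of their own.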
ASCII Table
Default, Empty, Blank, Null, Zero, and None
Another source of bugs that may seem obvious is when the software requests an entry—say, in a
text box—but rather than type the correct information, the user types nothing. He may just press
Enter. This situation is often overlooked in the specification or forgotten by the programmer but
is a case that typically happens in real life.
Well-behaved software will handle this situation. It will usually default to the lowest valid
boundary limit or to some reasonable value in the middle of the valid partition, or it will return an error.
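Well-behaved handling of an empty entry might look like this sketch, where a blank field falls back to an assumed default instead of crashing (the default of 1 is invented for the example):

```python
def parse_quantity(text, default=1):
    """Return the entered quantity; treat None, empty, or blank input
    as the (assumed) default rather than failing."""
    if text is None or text.strip() == "":
        return default
    return int(text)
```

The tester's job is to press Enter on the empty box and confirm the software takes this path rather than an unhandled one.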
Invalid, Wrong, Incorrect, and Garbage Data
The final type of data testing is garbage data. This is where you test-to-fail. You’ve already
proven that the software works as it should by testing-to-pass with boundary testing, sub-boundary
testing, and default testing. Now it’s time to throw the trash at it.
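A garbage-data pass simply throws malformed input at the software and checks that every rejection is controlled. A sketch, using Python's built-in `int` as the stand-in function under test:

```python
garbage_inputs = ["", "   ", "12abc", "NaN", "\x00", "12 34", "0x_"]

def survives_garbage(fn, inputs):
    """Return True if every garbage input is either handled or rejected
    with a controlled ValueError; any other exception propagates, which
    counts as a crash and fails the check."""
    for item in inputs:
        try:
            fn(item)
        except ValueError:
            pass     # a clean rejection is acceptable behavior
    return True
```

The point is not what the software returns for trash, only that it never crashes or corrupts state while rejecting it.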
State Testing
So far what you’ve been testing is the data—the numbers, words, inputs, and outputs of the
software. The other side of software testing is to verify the program’s logic flow through its
various states.
Testing the Software’s Logic Flow
Testing the software’s states and logic flow presents the same problem of scale. It’s usually possible to visit
all the states (after all, if you can’t get to them, why have them?). The difficulty is that, except for
the simplest programs, it’s often impossible to traverse all paths to all states.
the software, especially due to the richness of today’s user interfaces, provides so many choices
and options that the number of paths grows exponentially.
Creating a State Transition Map
The first step is to create your own state transition map of the software. Such a map may be
provided as part of the product specification. There are several different diagramming techniques
for state transition diagrams. One uses boxes and arrows and the other uses circles (bubbles) and
arrows. The technique you use to draw your map isn’t important as long as you and the other
members of your project team can read and understand it.
• Each unique state that the software can be in. A good rule of thumb is that if you’re unsure
whether something is a separate state, it probably is. You can always collapse it into another state
if you find out later that it isn’t.
• The input or condition that takes it from one state to the next. This might be a key press, a menu
selection, a sensor input, a telephone ring, and so on. A state can’t be exited without some
reason. The specific reason is what you’re looking for here.
• The set conditions and produced output when a state is entered or exited. This would include a menu
and buttons being displayed, a flag being set, a printout occurring, a calculation being performed,
and so on. It’s anything and everything that happens on the transition from one state to the next.
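The three kinds of information above fit naturally into a dictionary-based transition map. A sketch for a hypothetical media player, with states and inputs invented for illustration:

```python
# {current_state: {input: next_state}}, the same information a
# bubble-and-arrow state transition diagram records.
TRANSITIONS = {
    "stopped": {"play": "playing"},
    "playing": {"pause": "paused", "stop": "stopped"},
    "paused":  {"play": "playing", "stop": "stopped"},
}

def run(inputs, start="stopped"):
    """Walk a sequence of inputs through the map; a KeyError means the
    test attempted a transition the map says is impossible."""
    state = start
    for event in inputs:
        state = TRANSITIONS[state][event]
    return state
```

Once the map is in this form, enumerating every single transition (each line in the diagram) is a straightforward loop, which is the baseline coverage discussed next.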
Reducing the Number of States and Transitions to Test
Creating a map for a large software product is a huge undertaking. Hopefully, you’ll be testing
only a portion of the overall software so that making the map is a more reasonable task. Once
you complete the map, you’ll be able to stand back and see all the states and all the ways to and
from those states. If you’ve done your job right, it’ll be a scary picture! If you had infinite time,
you would want to test every path through the software—not just each line connecting two states,
but each set of lines, back to front, round and round. As in the traveling salesman problem, it
would be impossible to hit them all.