
SIKKIM MANIPAL UNIVERSITY

SOFTWARE ENGINEERING

SUBJECT CODE – MI0033

Assignment Set- 1

Q1. Quality and reliability are related concepts but are fundamentally

different in a number of ways. Discuss them.

Answer:

One of the challenges of software quality is that "everyone feels they

understand it".

In addition to more software-specific definitions given below, there are

several applicable definitions of quality which are used in business.

Software quality may be defined as conformance to explicitly stated

functional and performance requirements, explicitly documented development

standards and implicit characteristics that are expected of all professionally

developed software.

The three key points in this definition:

1. Software requirements are the foundations from which quality is

measured.

Lack of conformance to requirement is lack of quality.

2. Specified standards define a set of development criteria that guide the

manner in which software is engineered.

If criteria are not followed lack of quality will usually result.

3. A set of implicit requirements often goes unmentioned, for example ease

of use, maintainability etc.

If software conforms to its explicit requirements but fails to meet implicit

requirements, software quality is suspect.

A definition in Steve McConnell's Code Complete divides software into

two pieces: internal and external quality characteristics. External quality

characteristics are those parts of a product that face its users, where internal

quality characteristics are those that do not.[4]

Another definition by Dr. Tom De Marco says "a product's quality is a

function of how much it changes the world for the better." This can be interpreted

as meaning that user satisfaction is more important than anything in determining

software quality.

Another definition, coined by Gerald Weinberg in Quality Software

Management: Systems Thinking, is "Quality is value to some person." This

definition stresses that quality is inherently subjective - different people will

experience the quality of the same software very differently. One strength of this

definition is the questions it invites software teams to consider, such as "Who are

the people we want to value our software?" and "What will be valuable to them?"

Software product quality

Product quality

conformance to requirements or program specification; related to

Reliability

Scalability

Correctness

Completeness

Absence of bugs

Fault-tolerance

Extensibility

Maintainability

Documentation

The Consortium for IT Software Quality (CISQ) was launched in 2009 to

standardize the measurement of software product quality. The Consortium's goal

is to bring together industry executives from Global 2000 IT organizations,

system integrators, outsourcers, and package vendors to jointly address the

challenge of standardizing the measurement of IT software quality and to promote

a market-based ecosystem to support its deployment.

It is essential to supplement traditional testing – functional, non-

functional, and run-time – with measures of application structural quality.

Structural quality is the quality of the application’s architecture and the degree to

which its implementation accords with software engineering best practices.

Industry data demonstrate that poor application structural quality results in cost

and schedule overruns and creates waste in the form of rework (up to 45% of

development time in some organizations). Moreover, poor structural quality is

strongly correlated with high-impact business disruptions due to corrupted data,

application outages, security breaches, and performance problems. As in any

other field of engineering, an application with good structural software quality

costs less to maintain and is easier to understand and change in response to

pressing business needs.

Source code quality

A computer has no concept of "well-written" source code. However, from

a human point of view source code can be written in a way that has an effect on

the effort needed to comprehend its behavior. Many source code programming

style guides, which often stress readability and usually language-specific

conventions, are aimed at reducing the cost of source code maintenance. Some of

the issues that affect code quality include:

Readability

Ease of maintenance, testing, debugging, fixing, modification and

portability

Low complexity

Low resource consumption: memory, CPU

Number of compilation or lint warnings

Robust input validation and error handling, established by software fault

injection

Methods to improve the quality:

Refactoring

Code Inspection or software review

Documenting the code

Software reliability

Software reliability is an important facet of software quality. It is defined

as "the probability of failure-free operation of a computer program in a specified

environment for a specified time".

One of reliability's distinguishing characteristics is that it is objective,

measurable, and can be estimated, whereas much of software quality rests on

subjective criteria.[7] This distinction is especially important in the discipline of Software

Quality Assurance. These measured criteria are typically called software metrics.

Goal of reliability

The need for a means to objectively determine software reliability comes

from the desire to apply the techniques of contemporary engineering fields to the

development of software. That desire is a result of the common observation, by

both lay-persons and specialists, that computer software does not work the way it

ought to. In other words, software is seen to exhibit undesirable behavior, up to

and including outright failure, with consequences for the data which is processed,

the machinery on which the software runs, and by extension the people and

materials which those machines might negatively affect. The more critical the

application of the software to economic and production processes, or to life-

sustaining systems, the more important is the need to assess the software's

reliability.

Regardless of the criticality of any single software application, it is also

more and more frequently observed that software has penetrated deeply into

almost every aspect of modern life through the technology we use. It is only

expected that this infiltration will continue, along with an accompanying

dependency on the software by the systems which maintain our society. As

software becomes more and more crucial to the operation of the systems on which

we depend, the argument goes, it only follows that the software should offer a

concomitant level of dependability. In other words, the software should behave in

the way it is intended, or even better, in the way it should.

Challenge of reliability

The circular logic of the preceding sentence is not accidental—it is meant

to illustrate a fundamental problem in the issue of measuring software reliability,

which is the difficulty of determining, in advance, exactly how the software is

intended to operate. The problem seems to stem from a common conceptual error

in the consideration of software, which is that software in some sense takes on a

role which would otherwise be filled by a human being. This is a problem on two

levels. Firstly, most modern software performs work which a human could never

perform, especially at the high level of reliability that is often expected from

software in comparison to humans. Secondly, software is fundamentally incapable

of most of the mental capabilities of humans which separate them from mere

mechanisms: qualities such as adaptability, general-purpose knowledge, a sense

of conceptual and functional context, and common sense.

Nevertheless, most software programs could safely be considered to have a

particular, even singular purpose. If the possibility can be allowed that said

purpose can be well or even completely defined, it should present a means for at

least considering objectively whether the software is, in fact, reliable, by

comparing the expected outcome to the actual outcome of running the software in

a given environment, with given data. Unfortunately, it is still not known whether

it is possible to exhaustively determine either the expected outcome or the actual

outcome of the entire set of possible environment and input data to a given

program, without which it is probably impossible to determine the program's

reliability with any certainty.

However, various attempts are in the works to rein in the

vastness of the space of software's environmental and input variables, both for

actual programs and theoretical descriptions of programs. Such attempts to

improve software reliability can be applied at different stages of a program's

development, in the case of real software. These stages principally include:

requirements, design, programming, testing, and runtime evaluation. The study of

theoretical software reliability is predominantly concerned with the concept of

correctness, a mathematical field of computer science which is an outgrowth of

language and automata theory.


Reliability in program development

Requirements

A program cannot be expected to work as desired if the developers of the

program do not, in fact, know the program's desired behaviour in advance, or if

they cannot at least determine its desired behaviour in parallel with development,

in sufficient detail. What level of detail is considered sufficient is hotly debated.

The idea of perfect detail is attractive, but may be impractical, if not actually

impossible. This is because the desired behaviour tends to change as the possible

range of the behaviour is determined through actual attempts, or more accurately,

failed attempts, to achieve it.

Whether a program's desired behaviour can be successfully specified in

advance is a moot point if the behaviour cannot be specified at all, and this is the

focus of attempts to formalize the process of creating requirements for new

software projects. In situ with the formalization effort is an attempt to help inform

non-specialists, particularly non-programmers, who commission software projects

without sufficient knowledge of what computer software is in fact capable.

Communicating this knowledge is made more difficult by the fact that, as hinted

above, even programmers cannot always know what is actually

possible for software in advance of trying.

Design

While requirements are meant to specify what a program should do,

design is meant, at least at a high level, to specify how the program should do it.

The usefulness of design is also questioned by some, but those who look to

formalize the process of ensuring reliability often offer good software design

processes as the most significant means to accomplish it. Software design usually

involves the use of more abstract and general means of specifying the parts of the

software and what they do. As such, it can be seen as a way to break a large

program down into many smaller programs, such that those smaller pieces

together do the work of the whole program.

The purposes of high-level design are as follows. It separates what are

considered to be problems of architecture, or overall program concept and

structure, from problems of actual coding, which solve problems of actual data

processing. It applies additional constraints to the development process by

narrowing the scope of the smaller software components, and thereby—it is hoped

—removing variables which could increase the likelihood of programming errors.

It provides a program template, including the specification of interfaces, which

can be shared by different teams of developers working on disparate parts, such

that they can know in advance how each of their contributions will interface with

those of the other teams. Finally, and perhaps most controversially, it specifies the

program independently of the implementation language or languages, thereby

removing language-specific biases and limitations which would otherwise creep

into the design, perhaps unwittingly on the part of programmer-designers.

Programming

The history of computer programming language development can often be

best understood in the light of attempts to master the complexity of computer

programs, which otherwise becomes more difficult to understand in proportion

(perhaps exponentially) to the size of the programs. (Another way of looking at

the evolution of programming languages is simply as a way of getting the

computer to do more and more of the work, but this may be a different way of

saying the same thing). Lack of understanding of a program's overall structure and

functionality is a sure way to fail to detect errors in the program, and thus the use

of better languages should, conversely, reduce the number of errors by enabling a

better understanding.

Improvements in languages tend to provide incrementally what software

design has attempted to do in one fell swoop: consider the software at ever greater

levels of abstraction. Such inventions as statement, sub-routine, file, class,

template, library, component and more have allowed the arrangement of a

program's parts to be specified using abstractions such as layers, hierarchies and

modules, which provide structure at different granularities, so that from any point

of view the program's code can be imagined to be orderly and comprehensible.

In addition, improvements in languages have enabled more exact control

over the shape and use of data elements, culminating in the abstract data type.

These data types can be specified to a very fine degree, including how and when

they are accessed, and even the state of the data before and after it is accessed.

Software Build and Deployment

Many programming languages such as C and Java require the program

"source code" to be translated in to a form that can be executed by a computer.

This translation is done by a program called a compiler. Additional operations

may be involved to associate, bind, link or package files together in order to

create a usable runtime configuration of the software application. The totality of

the compiling and assembly process is generically called "building" the software.

The software build is critical to software quality because if any of the generated

files are incorrect the software build is likely to fail. And, if the incorrect version

of a program is inadvertently used, then testing can lead to false results.

Software builds are typically done in a work area unrelated to the runtime

area, such as the application server. For this reason, a deployment step is needed

to physically transfer the software build products to the runtime area. The

deployment procedure may also involve technical parameters, which, if set

incorrectly, can also prevent software testing from beginning. For example, a Java

application server may have options for parent-first or parent-last class loading.

Using the incorrect parameter can cause the application to fail to execute on the

application server.

The technical activities supporting software quality, including build,

deployment, change control and reporting, are collectively known as software

configuration management. A number of software tools have arisen to help meet

the challenges of configuration management, including file control tools and build

control tools.
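
As a rough sketch of the build-then-deploy sequence described above, the Python fragment below compiles one C source file in a work area and then copies the resulting binary into a separate runtime directory. The file names, directory names, and the use of gcc are assumptions made for this example; they are not taken from the text.

    import shutil
    import subprocess
    from pathlib import Path

    BUILD_DIR = Path("build")      # work area where the build happens (assumed name)
    RUNTIME_DIR = Path("runtime")  # stand-in for the runtime area, e.g. an application server

    def build(source_file: str = "main.c") -> Path:
        """Compile the source into an executable inside the build work area."""
        BUILD_DIR.mkdir(exist_ok=True)
        output = BUILD_DIR / "app"
        # A compile error raises CalledProcessError -- i.e. the build fails.
        subprocess.run(["gcc", source_file, "-o", str(output)], check=True)
        return output

    def deploy(artifact: Path) -> Path:
        """Physically transfer the build product from the work area to the runtime area."""
        RUNTIME_DIR.mkdir(exist_ok=True)
        return Path(shutil.copy2(artifact, RUNTIME_DIR / artifact.name))

    if __name__ == "__main__":
        deploy(build())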

Testing

Software testing, when done correctly, can increase overall software

quality of conformance by testing that the product conforms to its requirements.

Testing includes, but is not limited to:

Unit Testing

Functional Testing

Regression Testing

Performance Testing

Failover Testing

Usability Testing

A number of agile methodologies use testing early in the development cycle to

ensure quality in their products. For example, the test-driven development

practice, where tests are written before the code they will test, is used in Extreme

Programming to ensure quality.
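
A minimal sketch of the test-first idea mentioned above, using Python's built-in unittest module. The add function and its test are invented purely for illustration; they do not come from the text.

    import unittest

    # Step 1: the test is written before the code it will exercise.
    class TestAdd(unittest.TestCase):
        def test_add_two_numbers(self):
            self.assertEqual(add(2, 3), 5)

    # Step 2: just enough code is written to make the test pass.
    def add(a, b):
        return a + b

    if __name__ == "__main__":
        unittest.main()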

Runtime

Runtime reliability determinations are similar to tests, but go beyond

simple confirmation of behavior to the evaluation of qualities such as

performance and interoperability with other code or particular hardware

configurations.

Software quality factors

A software quality factor is a non-functional requirement for a software

program which is not called up by the customer's contract, but nevertheless is a

desirable requirement which enhances the quality of the software program. Note

that none of these factors are binary; that is, they are not “either you have it or you

don’t” traits. Rather, they are characteristics that one seeks to maximize in one’s

software to optimize its quality. So rather than asking whether a software product

“has” factor x, ask instead the degree to which it does (or does not).

Some software quality factors are listed here:

Understandability

Clarity of purpose: This goes further than just a statement of purpose; all of the

design and user documentation must be clearly written so that it is easily

understandable. This is obviously subjective in that the user context must be taken

into account: for instance, if the software product is to be used by software

engineers it is not required to be understandable to the layman.

Completeness

Presence of all constituent parts, with each part fully developed. This

means that if the code calls a subroutine from an external library, the software

package must provide reference to that library and all required parameters must be

passed. All required input data must also be available.

Conciseness

Minimization of excessive or redundant information or processing. This is

important where memory capacity is limited, and it is generally considered good

practice to keep lines of code to a minimum. It can be improved by replacing

repeated functionality by one subroutine or function which achieves that

functionality. It also applies to documents.

Portability

Ability to be run well and easily on multiple computer configurations.

Portability can mean both between different hardware—such as running on a PC

as well as a Smartphone—and between different operating systems—such as

running on both Mac OS X and GNU/Linux.

Consistency

Uniformity in notation, symbology, appearance, and terminology within

itself.

Maintainability

Propensity to facilitate updates to satisfy new requirements. Thus the

software product that is maintainable should be well-documented, should not be

complex, and should have spare capacity for memory, storage and processor

utilization and other resources.

Testability

Disposition to support acceptance criteria and evaluation of performance.

Such a characteristic must be built-in during the design phase if the product is to

be easily testable; a complex design leads to poor testability.

Usability

Convenience and practicality of use. This is affected by such things as the

human-computer interface. The component of the software that has most impact

on this is the user interface (UI), which for best usability is usually graphical (i.e.

a GUI).

Reliability

Ability to be expected to perform its intended functions satisfactorily. This

implies a time factor in that a reliable product is expected to perform correctly

over a period of time. It also encompasses environmental considerations in that

the product is required to perform correctly in whatever conditions it finds itself

(sometimes termed robustness).

Efficiency

Fulfillment of purpose without waste of resources, such as memory, space

and processor utilization, network bandwidth, time, etc.

Security

Ability to protect data against unauthorized access and to withstand

malicious or inadvertent interference with its operations. Besides the presence of

appropriate security mechanisms such as authentication, access control and

encryption, security also implies resilience in the face of malicious, intelligent and

adaptive attackers.

Measurement of software quality factors

There are varied perspectives within the field on measurement. There are a

great many measures that are valued by some professionals—or in some contexts,

that are decried as harmful by others. Some believe that quantitative measures of

software quality are essential. Others believe that contexts where quantitative

measures are useful are quite rare, and so prefer qualitative measures. Several

leaders in the field of software testing have written about the difficulty of

measuring what we truly want to measure well.[8][9]

One example of a popular metric is the number of faults encountered in

the software. Software that contains few faults is considered by some to have

higher quality than software that contains many faults. Questions that can help

determine the usefulness of this metric in a particular context include:

a) What constitutes “many faults?” Does this differ depending upon the

purpose of the software (e.g., blogging software vs. navigational

software)? Does this take into account the size and complexity of the

software?

b) Does this account for the importance of the bugs (and the importance to

the stakeholders of the people those bugs bug)? Does one try to weight

this metric by the severity of the fault, or the incidence of users it affects?

If so, how? And if not, how does one know that 100 faults discovered is

better than 1000?

c) If the count of faults being discovered is shrinking, how do I know what

that means? For example, does that mean that the product is now higher

quality than it was before? Or that this is a smaller/less ambitious change

than before? Or that fewer tester-hours have gone into the project than

before? Or that this project was tested by less skilled testers than before?

Or that the team has discovered that fewer faults reported is in their

interest?

This last question points to an especially difficult one to manage. All

software quality metrics are in some sense measures of human behavior, since

humans create software.[8] If a team discovers that they will benefit from a drop in

the number of reported bugs, there is a strong tendency for the team to start

reporting fewer defects. That may mean that email begins to circumvent the bug

tracking system, or that four or five bugs get lumped into one bug report, or that

testers learn not to report minor annoyances. The difficulty is measuring what we

mean to measure, without creating incentives for software programmers and

testers to consciously or unconsciously “game” the measurements.

Software quality factors cannot be measured directly because of their vague

definitions. It is necessary to find measurements, or metrics, which can be used to

quantify them as non-functional requirements. For example, reliability is a

software quality factor, but cannot be evaluated in its own right. However, there

are related attributes to reliability, which can indeed be measured. Some such

attributes are mean time to failure, rate of failure occurrence, and availability of

the system. Similarly, an attribute of portability is the number of target-dependent

statements in a program.
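
As an illustrative sketch only, the snippet below derives two of those measurable attributes, mean time to failure and rate of failure occurrence, from a list of observed failure-free operating intervals, and adds an availability figure using an assumed mean time to repair. All numbers are invented sample data, not taken from the text.

    # Hours of failure-free operation observed between successive failures (invented data).
    uptimes_hours = [120.0, 95.5, 150.2, 80.0, 132.3]

    mttf = sum(uptimes_hours) / len(uptimes_hours)  # mean time to failure
    rocof = 1.0 / mttf                              # rate of failure occurrence (failures/hour)

    mttr = 2.0                                      # assumed mean time to repair (not in the text)
    availability = mttf / (mttf + mttr)

    print(f"MTTF = {mttf:.1f} h, ROCOF = {rocof:.4f}/h, availability = {availability:.3f}")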

A scheme that could be used for evaluating software quality factors is

given below. For every characteristic, there is a set of questions which are

relevant to that characteristic. Some type of scoring formula could be developed

based on the answers to these questions, from which a measurement of the

characteristic can be obtained.
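
One possible shape for such a scoring formula is sketched below, under the assumptions that every question is answered on a 0-to-5 scale and that all questions carry equal weight; neither assumption comes from the text.

    def characteristic_score(answers):
        """Average the 0-5 answers for one characteristic and normalise the result to 0-1."""
        return sum(answers) / (5 * len(answers))

    # Invented example: answers to four questions about maintainability.
    print(characteristic_score([4, 3, 5, 4]))   # 0.8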

Q.3. Discuss the CMM 5 Levels for Software Process.

Answer.

The Software Process:

In recent years, there has been a significant emphasis on “process

maturity”. The Software Engineering Institute (SEI) has developed a

comprehensive model predicated on a set of software engineering capabilities that

should be present as organizations reach different levels of process maturity. To

determine an organization’s current state of process maturity, the SEI uses an

assessment that results in a five-point grading scheme. The grading scheme

determines compliance with a capability maturity model (CMM) [PAU93] that

defines key activities required at different levels of process maturity. The SEI

approach provides a measure of the global effectiveness of a company’s software

engineering practices, and establishes five process maturity levels that are defined

in the following manner:

Level 1: Initial – The software process is characterized as ad hoc and

occasionally even chaotic. Few processes are defined, and success depends on

individual effort.

Level 2: Repeatable – Basic project management processes are established to

track cost, schedule, and functionality. The necessary process discipline is in

place to repeat earlier successes on projects with similar applications.

Level 3: Defined – The software process for both management and engineering

activities is documented, standardized, and integrated into an organization-wide

software process. All projects use a documented and approved version of the

organization's process for developing and supporting software. This level includes

all characteristics defined for level 2.

Level 4: Managed – Detailed measures of the software process and product

quality are collected. Both the software process and products are quantitatively

understood and controlled using detailed measures. This level includes all

characteristics defined for level 3.

Level 5: Optimizing – Continuous process improvement is enabled by

quantitative feedback from the process and from testing innovative ideas and

technologies. This level includes all characteristics defined for level 4. The five

levels defined by the SEI were derived as a consequence of evaluating responses

to the SEI assessment questionnaire that is based on the CMM. The results of the

questionnaire are distilled to a single numerical grade that provides an indication

of an organization’s process maturity.

The SEI has associated key process areas (KPAs) with each of the

maturity levels. The KPAs describe those software engineering functions (e.g.,

software project planning, requirements management) that must be present to

satisfy good practice at a particular level. Each KPA is described by identifying

the following characteristics:

- Goals – the overall objectives that the KPA must achieve.

- Commitments – requirements (imposed on the organization) that must be

met to achieve the goals, or provide proof of intent to comply with the

goals.

- Abilities – those things that must be in place (organizationally and technically)

to enable the organization to meet the commitments.

- Activities – the specific tasks required to achieve the KPA function.

- Methods for monitoring implementation – the manner in which the

activities are monitored as they are put into place.

- Methods for verifying implementation – the manner in which proper

practice for the KPA can be verified.

Q.4. Discuss the Water Fall Model for Software Development.

Answer.

The Linear Sequential Model:

Sometimes called the classic life cycle or the waterfall model, the linear

sequential model suggests a systematic, sequential approach to software

development that begins at the system level and progresses through analysis,

design, coding, testing, and support. Modeled after a conventional engineering

cycle, the linear sequential model encompasses the following activities:

System / information engineering and modeling – Because software is always

part of a larger system (or business), work begins by establishing requirements for

all system elements and then allocating some subset of these requirements to

software. This system view is essential when software must interact with other

elements such as hardware, people, and databases. System engineering and

analysis encompass requirements gathering at the system level, with a small

amount of top level design and analysis. Information engineering encompasses

requirements gathering at the strategic business level and at the business area

level.

Software requirements analysis - The requirements gathering process is

intensified and focused specifically on software. To understand the nature of the

program(s) to be built, the software engineer (“analyst”) must understand the

information domain for the software, as well as required function, behavior,

performance, and interface. Requirements for both the system and the software

are documented and reviewed with the customer.

Design – Software design is actually a multistep process that focuses on four

distinct attributes of a program : data structure, software architecture, interface

representations, and procedural (algorithmic) detail. The design process translates

requirements into a representation of the software that can be assessed for quality

before coding begins. Like requirements, the design is documented and becomes

part of the software configuration.

Code generation – The design must be translated into a machine-readable form.

The code generation step performs this task. If design is performed in a detailed

manner, code generation can be accomplished mechanistically.

Test – Once the code has been generated, program testing begins. The testing

process focuses on the logical internals of the software, ensuring that all

statements have been tested, and on the functional externals; that is, conducting

tests to uncover errors and ensure that defined input will produce actual results

that agree with the required results.

Support – Software will undoubtedly undergo change after it is delivered to the

customer (a possible exception is embedded software). Change will occur because

errors have been encountered, because the software must be adapted to

accommodate changes in its external environment (e.g. a change required because

of a new operating system or peripheral device), or because the customer requires

functional or performance enhancements. Software support / maintenance

reapplies each of the preceding phases to an existing program rather than a new

one. The linear sequential model is the oldest and the most widely used paradigm

for software engineering. However, criticism of the paradigm has caused even

active supporters to question its efficacy [HAN95]. Among the problems that are

sometimes encountered when the linear sequential model is applied are:

1. Real projects rarely follow the sequential flow that the model proposes.

Although the linear model can accommodate iteration, it does so

indirectly. As a result, changes can cause confusion as the project team

proceeds.

2. It is often difficult for the customer to state all requirements explicitly.

The linear sequential model requires this and has difficulty

accommodating the natural uncertainty that exists at the beginning of

many projects.

3. The customer must have patience. A working version of the program(s)

will not be available until late in the project time-span. A major blunder, if

undetected until the working program is reviewed, can be disastrous.

In an interesting analysis of actual projects, Bradac [BRA94] found that

the linear nature of the classic life cycle leads to “blocking states” in which some

project team members must wait for other members of the team to complete

dependent tasks. In fact, the time spent waiting can exceed the time spent on

productive work! The blocking state tends to be more prevalent at the beginning

and end of a linear sequential process. Each of these problems is real. However,

the classic life cycle paradigm has a definite and important place in software

engineering work. It provides a template into which methods for analysis, design,

coding, testing, and support can be placed. The classic life cycle remains a widely

used procedural model for software engineering. While it does have weaknesses,

it is significantly better than a haphazard approach to software development.


Q.5. Explain the Advantages of Prototype Model, & Spiral Model in

Contrast to Water Fall model.

Answer:

Prototype Model Advantages

Creating software using the prototype model also has its benefits. One of

the key advantages of prototype-modeled software is the time frame of

development. Instead of concentrating on documentation, more effort is placed in

creating the actual software. This way, the actual software could be released in

advance. The work on prototype models could also be spread among several people, since there

are practically no rigid stages of work in this model. Everyone works on the same

thing at the same time, reducing the man-hours needed to create the software. The work

will be even faster and more efficient if developers collaborate closely on the

status of a specific function and make the necessary adjustments in time for the

integration.

Another advantage of having a prototype modeled software is that the

software is created using lots of user feedback. In every prototype created, users

could give their honest opinion about the software. If something is unfavorable, it

can be changed. Gradually, the program is created with the customer in mind.

The waterfall model is a sequential design process, often used in software

development processes, in which progress is seen as flowing steadily downwards

(like a waterfall) through the phases of Conception, Initiation, Analysis, Design,

Construction, Testing, Production/Implementation and Maintenance.

The unmodified "waterfall model". Progress flows from the top to the bottom, like

a waterfall.

The waterfall development model originates in the manufacturing and

construction industries: highly structured physical environments in which after-

the-fact changes are prohibitively costly, if not impossible. Since no formal

software development methodologies existed at the time, this hardware-oriented

model was simply adapted for software development.

The first known presentation describing use of similar phases in software

engineering was held by Herbert D. Benington at Symposium on advanced

programming methods for digital computers on 29 June 1956.[1] This presentation

was about the development of software for SAGE. In 1983 the paper was

republished[2] with a foreword by Benington pointing out that the process was not

in fact performed in a strict top-down fashion, but depended on a prototype.

The first formal description of the waterfall model is often cited as a 1970

article by Winston W. Royce,[3] though Royce did not use the term "waterfall" in

this article. Royce presented this model as an example of a flawed, non-working

model (Royce 1970). This, in fact, is how the term is generally used in writing

about software development—to describe a critical view of a commonly used

software practice.[4]

Q.6. Explain the COCOMO Model & Software Estimation Technique.

Answer:

The COCOMO cost estimation model is used by thousands of software

project managers, and is based on a study of hundreds of software projects.

Unlike other cost estimation models, COCOMO is an open model, so all of the

details are published, including:

The underlying cost estimation equations

Every assumption made in the model (e.g. "the project will enjoy good

management")

Every definition (e.g. the precise definition of the Product Design phase of

a project)

The costs included in an estimate are explicitly stated (e.g. project

managers are included, secretaries aren't)

Because COCOMO is well defined, and because it doesn't rely upon

proprietary estimation algorithms, Costar offers these advantages to its users:

COCOMO estimates are more objective and repeatable than estimates

made by methods relying on proprietary models

COCOMO can be calibrated to reflect your software development

environment, and to produce more accurate estimates

Costar is a faithful implementation of the COCOMO model that is easy to

use on small projects, and yet powerful enough to plan and control large projects.

Typically, you'll start with only a rough description of the software system

that you'll be developing, and you'll use Costar to give you early estimates about

the proper schedule and staffing levels. As you refine your knowledge of the

problem, and as you design more of the system, you can use Costar to produce

more and more refined estimates.

Costar allows you to define a software structure to meet your needs. Your

initial estimate might be made on the basis of a system containing 3,000 lines of

code. Your second estimate might be more refined so that you now understand

that your system will consist of two subsystems (and you'll have a more accurate

idea about how many lines of code will be in each of the subsystems). Your next

estimate will continue the process -- you can use Costar to define the components

of each subsystem. Costar permits you to continue this process until you arrive at

the level of detail that suits your needs.

One word of warning: it is so easy to use Costar to make software cost estimates

that it's possible to misuse it -- every Costar user should spend the time to learn

the underlying COCOMO assumptions and definitions from Software

Engineering Economics and Software Cost Estimation with COCOMO II.

Introduction to the COCOMO Model

The most fundamental calculation in the COCOMO model is the use of

the Effort Equation to estimate the number of Person-Months required to develop

a project. Most of the other COCOMO results, including the estimates for

Requirements and Maintenance, are derived from this quantity.

Source Lines of Code

The COCOMO calculations are based on your estimates of a project's size

in Source Lines of Code (SLOC). SLOC is defined such that:

Only Source lines that are DELIVERED as part of the product are

included -- test drivers and other support software is excluded

SOURCE lines are created by the project staff -- code created by

applications generators is excluded

One SLOC is one logical line of code

Declarations are counted as SLOC

Comments are not counted as SLOC

The original COCOMO 81 model was defined in terms of Delivered

Source Instructions, which are very similar to SLOC.  The major difference

between DSI and SLOC is that a single Source Line of Code may be several

physical lines.  For example, an "if-then-else" statement would be counted as one

SLOC, but might be counted as several DSI.
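
To make the distinction concrete, the invented Python fragment below contains a single logical statement that occupies three physical lines: a logical-line count (SLOC, as defined above) records one line, while a physical-line count in the spirit of DSI records three.

    # One logical line of code spread across three physical lines.
    total = (1 +
             2 +
             3)
    print(total)   # 6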

The Scale Drivers

In the COCOMO II model, some of the most important factors

contributing to a project's duration and cost are the Scale Drivers. You set each

Scale Driver to describe your project; these Scale Drivers determine the exponent

used in the Effort Equation.

The 5 Scale Drivers are:

Precedentedness

Development Flexibility

Architecture / Risk Resolution

Team Cohesion

Process Maturity

Note that the Scale Drivers have replaced the Development Mode of

COCOMO 81.  The first two Scale Drivers, Precedentedness and Development

Flexibility, actually describe much the same influences that the original

Development Mode did.

Cost Drivers

COCOMO II has 17 cost drivers; you assess your project, development

environment, and team to set each cost driver. The cost drivers are multiplicative

factors that determine the effort required to complete your software project. For

example, if your project will develop software that controls an airplane's flight,

you would set the Required Software Reliability (RELY) cost driver to Very

High. That rating corresponds to an effort multiplier of 1.26, meaning that your

project will require 26% more effort than a typical software project.

COCOMO II defines each of the cost drivers, and the Effort Multiplier

associated with each rating. Check the Costar help for details about the definitions

and how to set the cost drivers.

COCOMO II Effort Equation

The COCOMO II model makes its estimates of required effort (measured in

Person-Months, PM) based primarily on your estimate of the software

project's size (as measured in thousands of SLOC, KSLOC):

Effort = 2.94 * EAF * (KSLOC)^E

Where

EAF is the Effort Adjustment Factor derived from the Cost Drivers.

E is an exponent derived from the five Scale Drivers.

As an example, a project with all Nominal Cost Drivers and Scale Drivers

would have an EAF of 1.00 and exponent, E, of 1.0997. Assuming that the project

is projected to consist of 8,000 source lines of code, COCOMO II estimates that

28.9 Person-Months of effort is required to complete it:

Effort = 2.94 * (1.0) * (8)^1.0997 = 28.9 Person-Months

Effort Adjustment Factor

The Effort Adjustment Factor in the effort equation is simply the product

of the effort multipliers corresponding to each of the cost drivers for your project.

For example, if your project is rated Very High for Complexity (effort

multiplier of 1.34), and Low for Language & Tools Experience (effort multiplier

of 1.09), and all of the other cost drivers are rated to be Nominal (effort multiplier

of 1.00), the EAF is the product of 1.34 and 1.09.

Effort Adjustment Factor = EAF = 1.34 * 1.09 = 1.46

Effort = 2.94 * (1.46) * (8)^1.0997 = 42.3 Person-Months
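
The worked example above can be reproduced with a few lines of Python; the 2.94 coefficient, the exponent 1.0997, and the effort multipliers 1.34 and 1.09 are the values quoted in the text.

    def cocomo_effort(ksloc, eaf, exponent):
        """COCOMO II effort equation: Effort = 2.94 * EAF * KSLOC**E, in Person-Months."""
        return 2.94 * eaf * ksloc ** exponent

    # Nominal project: EAF = 1.00, E = 1.0997, 8 KSLOC.
    print(round(cocomo_effort(8, 1.00, 1.0997), 1))   # 28.9 Person-Months

    # Very High Complexity (1.34) and Low Language & Tools Experience (1.09).
    eaf = 1.34 * 1.09                                  # about 1.46
    print(round(cocomo_effort(8, eaf, 1.0997), 1))     # about 42.3 Person-Months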

COCOMO II Schedule Equation

The COCOMO II schedule equation predicts the number of months

required to complete your software project. The duration of a project is based on

the effort predicted by the effort equation:

Duration = 3.67 * (Effort)^SE

Where

Effort is the effort from the COCOMO II effort equation.

SE is the schedule equation exponent derived from the five Scale Drivers.

Continuing the example, and substituting the exponent of 0.3179 that is

calculated from the scale drivers, yields an estimate of just over a year, and an

average staffing of between 3 and 4 people:

Duration = 3.67 * (42.3)^0.3179 = 12.1 months

Average staffing = (42.3 Person-Months) / (12.1 Months) = 3.5 people
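
Continuing the same worked example in Python; the 3.67 coefficient and the schedule exponent 0.3179 are taken from the text.

    def cocomo_duration(effort_pm, sched_exponent):
        """COCOMO II schedule equation: Duration = 3.67 * Effort**SE, in months."""
        return 3.67 * effort_pm ** sched_exponent

    effort = 42.3                               # Person-Months, from the effort equation
    duration = cocomo_duration(effort, 0.3179)  # about 12.1 months
    staffing = effort / duration                # about 3.5 people on average
    print(round(duration, 1), round(staffing, 1))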

The SCED Cost Driver

The COCOMO cost driver for Required Development Schedule (SCED) is

unique, and requires a special explanation.

The SCED cost driver is used to account for the observation that a project

developed on an accelerated schedule will require more effort than a project

developed on its optimum schedule. A SCED rating of Very Low corresponds to

an Effort Multiplier of 1.43 (in the COCOMO II.2000 model) and means that you

intend to finish your project in 75% of the optimum schedule (as determined by a

previous COCOMO estimate). Continuing the example used earlier, but assuming

that SCED has a rating of Very Low, COCOMO produces these estimates:

Duration = 75% * 12.1 Months = 9.1 Months

Effort Adjustment Factor = EAF = 1.34 * 1.09 * 1.43 = 2.09

Effort = 2.94 * (2.09) * (8)^1.0997 = 60.4 Person-Months

Average staffing = (60.4 Person-Months) / (9.1 Months) = 6.7 people
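
A short sketch of the SCED adjustment shown above: the 1.43 multiplier inflates the effort, while the duration is simply 75% of the previously estimated nominal schedule rather than being recomputed from the inflated effort. All figures are the ones quoted in the text.

    nominal_duration = 12.1             # months, from the nominal-schedule estimate above
    eaf = 1.34 * 1.09 * 1.43            # cost drivers including SCED = Very Low (about 2.09)
    effort = 2.94 * eaf * 8 ** 1.0997   # about 60.4 Person-Months

    duration = 0.75 * nominal_duration  # SCED Very Low: 75% of the optimum schedule, about 9.1 months
    staffing = effort / duration        # about 6.7 people on average
    print(round(effort, 1), round(duration, 1), round(staffing, 1))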


Notice that the calculation of duration isn't based directly on the effort

(number of Person-Months); instead it's based on the schedule that would have

been required for the project assuming it had been developed on the nominal

schedule. Remember that the SCED cost driver means "accelerated from the

nominal schedule".


SIKKIM MANIPAL UNIVERSITY

SOFTWARE ENGINEERING

SUBJECT CODE – MI0033

Assignment Set- 2

Q.1. Write a note on myths of Software.

Answer:

Software Myths - beliefs about software and the process used to build it - can be

traced to the earliest days of computing. Myths have a number of attributes that

have made them insidious. For instance, myths appear to be reasonable statements

of fact, they have an intuitive feel, and they are often promulgated by experienced

practitioners who "know the score".

Management Myths - Managers with software responsibility, like managers in

most disciplines, are often under pressure to maintain budgets, keep schedules

from slipping, and improve quality. Like a drowning person who grasps at a

straw, a software manager often grasps at belief in a software myth, if that belief

will lessen the pressure.

Myth: We already have a book that's full of standards and procedures for building

software. Won't that provide my people with everything they need to know?

Reality: The book of standards may very well exist, but is it used?

- Are software practitioners aware of its existence?

- Does it reflect modern software engineering practice?

- Is it complete? Is it adaptable?

- Is it streamlined to improve time to delivery while still maintaining a focus on

quality?

In many cases, the answer to all of these questions is no.

Myth: If we get behind schedule, we can add more programmers and catch up

(sometimes called the Mongolian horde concept).

Reality: Software development is not a mechanistic process like manufacturing.

In the words of Brooks [BRO75]: "Adding people to a late software project makes

it later." At first, this statement may seem counterintuitive. However, as new

people are added, people who were working must spend time educating the

newcomers, thereby reducing the amount of time spent on productive

development effort.

Myth: If we decide to outsource the software project to a third party, I can just

relax and let that firm build it.

Reality: If an organization does not understand how to manage and control

software projects internally, it will invariably struggle when it outsources software

projects.

Customer Myths: A customer who requests computer software may be a person

at the next desk, a technical group down the hall, the marketing /sales department,

or an outside company that has requested software under contract. In many cases,

the customer believes myths about software because software managers and

practitioners do little to correct misinformation. Myths lead to false expectations

and, ultimately, dissatisfaction with the developers.

Myth: A general statement of objectives is sufficient to begin writing programs;

we can fill in the details later.

Reality: Although a comprehensive and stable statement of requirements is not

always possible, an ambiguous statement of objectives is a recipe for disaster.

Unambiguous requirements are developed only through effective and continuous

communication between customer and developer.

Myth: Project requirements continually change, but change can be easily

accommodated because software is flexible.

Reality: It's true that software requirements change, but the impact of change

varies with the time at which it is introduced. When requirement changes are

requested early, cost impact is relatively small. However, as time passes, cost

impact grows rapidly - resources have been committed, a design framework has

been established, and change can cause upheaval that requires additional

resources and major design modification.


Q.2. Explain Version Control & Change Control.

Answer:

Change control within Quality management systems (QMS) and

Information Technology (IT) systems is a formal process used to ensure that

changes to a product or system are introduced in a controlled and coordinated

manner. It reduces the possibility that unnecessary changes will be introduced to a

system without forethought, introducing faults into the system or undoing changes

made by other users of software. The goals of a change control procedure usually

include minimal disruption to services, reduction in back-out activities, and cost-

effective utilization of resources involved in implementing change.

Change control is currently used in a wide variety of products and

systems. For Information Technology (IT) systems it is a major aspect of the

broader discipline of change management. Typical examples from the computer

and network environments are patches to software products, installation of new

operating systems, upgrades to network routing tables, or changes to the electrical

power systems supporting such infrastructure.

Certain portions of the Information Technology Infrastructure Library

cover change control.

The process

There is considerable overlap and confusion between change management,

configuration management and change control. The definition below is not yet

integrated with definitions of the others.

Certain experts describe change control as a set of six steps:

Record / Classify

Assess

Plan

Build / Test

Implement

Close / Gain Acceptance

Record/classify:

The client initiates change by making a formal request for something to be

changed. The change control team then records and categorizes that request. This

categorization would include estimates of importance, impact, and complexity.

Assess:

The impact assessor or assessors then make their risk analysis typically by

answering a set of questions concerning risk, both to the business and to the

process, and follow this by making a judgment on who should carry out the

change. If the change requires more than one type of assessment, the head of the

change control team will consolidate these. Everyone with a stake in the change

then must meet to determine whether there is a business or technical justification

for the change. The change is then sent to the delivery team for planning.

Plan:

Management will assign the change to a specific delivery team, usually

one with the specific role of carrying out this particular type of change. The

team's first job is to plan the change in detail as well as construct a regression plan

in case the change needs to be backed out.

Build/test:

If all stakeholders agree with the plan, the delivery team will build the

solution, which will then be tested. They will then seek approval and request a

time and date to carry out the implementation phase.

Implement:

All stakeholders must agree to a time, date and cost of implementation.

Following implementation, it is usual to carry out a post-implementation review

which would take place at another stakeholder meeting.

Close/gain acceptance:

When the client agrees that the change was implemented correctly, the

change can be closed.

Regulatory environment:

In a Good Manufacturing Practice regulated industry, the topic is

frequently encountered by its users. Various industrial guidance and

commentaries are available for people to comprehend this concept. As a common

practice, the activity is usually directed by one or more SOPs.[4] From the

information technology perspective for clinical trials, it has been guided by

another USFDA document.

Revision control, also known as version control or source control (and an

aspect of software configuration management or SCM), is the management of

changes to documents, programs, and other information stored as computer files.

It is most commonly used in software development, where a team of people may

change the same files. Changes are usually identified by a number or letter code,

termed the "revision number", "revision level", or simply "revision". For example,

an initial set of files is "revision 1". When the first change is made, the resulting

set is "revision 2", and so on. Each revision is associated with a timestamp and the

person making the change. Revisions can be compared, restored, and with some

types of files, merged.
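As a rough illustration of the metadata described above, a revision can be thought of as a record holding a number, an author, a timestamp, and the file contents at that point. The sketch below is an illustrative assumption, not the data model of any particular system:

# Illustrative sketch of a revision record: number, author, timestamp, files.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Revision:
    number: int                                # "revision 1", "revision 2", ...
    author: str                                # the person making the change
    timestamp: datetime                        # when the change was committed
    files: dict = field(default_factory=dict)  # file name -> contents

history = [
    Revision(1, "alice", datetime(2011, 12, 1, tzinfo=timezone.utc),
             {"main.c": "int main(void) { return 0; }"}),
    Revision(2, "bob", datetime(2011, 12, 2, tzinfo=timezone.utc),
             {"main.c": "int main(void) { return 1; }"}),
]

# Comparing two revisions amounts to comparing the stored file contents.
changed = [f for f in history[1].files
           if history[0].files.get(f) != history[1].files[f]]
print(changed)  # ['main.c']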

Version control systems (VCSs – singular VCS) most commonly run as stand-

alone applications, but revision control is also embedded in various types of

software such as word processors (e.g., Microsoft Word, OpenOffice.org Writer,

KWord, Pages, etc.), spreadsheets (e.g., Microsoft Excel, OpenOffice.org Calc,

KSpread, Numbers, etc.), and in various content management systems (e.g.,

Drupal, Joomla, WordPress). Integrated revision control is a key feature of wiki

software packages such as MediaWiki, DokuWiki, TWiki etc. In wikis, revision

control allows for the ability to revert a page to a previous revision, which is

critical for allowing editors to track each other's edits, correct mistakes, and

defend public wikis against vandalism and spam.

Software tools for revision control are essential for the organization of multi-

developer projects.[1]

Source-management models

Traditional revision control systems use a centralized model where all the

revision control functions take place on a shared server. If two developers try to

change the same file at the same time, without some method of managing access

the developers may end up overwriting each other's work. Centralized revision


control systems solve this problem in one of two different "source management

models": file locking and version merging.

Atomic operations

Computer scientists speak of atomic operations if the system is left in a

consistent state even if the operation is interrupted. The commit operation is

usually the most critical in this sense. Commits are operations which tell the

revision control system you want to make a group of changes you have been

making final and available to all users. Not all revision control systems have

atomic commits; notably, the widely-used CVS lacks this feature.
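To make the idea of an atomic commit concrete, the following toy sketch (an assumption, not how any real revision control system is implemented) stages all changes on a private copy and only publishes them if every change succeeds:

# Toy sketch of an "atomic" commit: apply every change or none at all.
import copy

def atomic_commit(repository, changes):
    """Apply all changes, or leave the repository untouched on failure."""
    working = copy.deepcopy(repository)   # stage changes on a private copy
    for filename, new_contents in changes.items():
        if new_contents is None:
            raise ValueError(f"invalid change for {filename}")
        working[filename] = new_contents
    repository.clear()                    # only now publish the new state
    repository.update(working)

repo = {"a.txt": "old"}
try:
    atomic_commit(repo, {"a.txt": "new", "b.txt": None})  # second change fails
except ValueError:
    pass
print(repo)  # {'a.txt': 'old'} -- nothing was partially committed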

File locking

The simplest method of preventing "concurrent access" problems involves

locking files so that only one developer at a time has write access to the central

"repository" copies of those files. Once one developer "checks out" a file, others

can read that file, but no one else may change that file until that developer

"checks in" the updated version (or cancels the checkout).

File locking has both merits and drawbacks. It can provide some protection

against difficult merge conflicts when a user is making radical changes to many

sections of a large file (or group of files). However, if the files are left exclusively

locked for too long, other developers may be tempted to bypass the revision

control software and change the files locally, leading to more serious problems.
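A minimal sketch of the locking model, assuming a simple in-memory lock table (all names here are illustrative), might look like this:

# Sketch of file locking: only the developer holding the lock may check in.
locks = {}  # file name -> developer currently holding the write lock

def checkout(filename, developer):
    if filename in locks and locks[filename] != developer:
        raise PermissionError(f"{filename} is locked by {locks[filename]}")
    locks[filename] = developer          # only this developer may check in

def checkin(filename, developer):
    if locks.get(filename) != developer:
        raise PermissionError(f"{developer} does not hold the lock on {filename}")
    del locks[filename]                  # release the lock for others

checkout("report.doc", "alice")
try:
    checkout("report.doc", "bob")        # rejected until alice checks in
except PermissionError as e:
    print(e)
checkin("report.doc", "alice")
checkout("report.doc", "bob")            # now succeeds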

Version merging

Most version control systems allow multiple developers to edit the same

file at the same time. The first developer to "check in" changes to the central

repository always succeeds. The system may provide facilities to merge further

changes into the central repository, and preserve the changes from the first

developer when other developers check in.

Merging two files can be a very delicate operation, and usually possible

only if the data structure is simple, as in text files. The result of a merge of two

image files might not result in an image file at all. The second developer checking

in code will need to take care with the merge, to make sure that the changes are


compatible and that the merge operation does not introduce its own logic errors

within the files. These problems limit the availability of automatic or semi-

automatic merge operations mainly to simple text based documents, unless a

specific merge plugin is available for the file types.
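The reason text files merge well is that they can be compared line by line. The sketch below uses Python's standard difflib module only to show the change sets two developers produce; whether a merge can proceed automatically depends on whether those change sets overlap:

# Line-based differencing is what makes text merges tractable. A real VCS
# would try to combine non-overlapping change sets automatically.
import difflib

base = ["def area(r):\n", "    return 3.14 * r * r\n"]
alice = ["def area(r):\n", "    return 3.14159 * r * r\n"]             # changed line 2
bob = ["def area(radius):\n", "    return 3.14 * radius * radius\n"]   # changed both lines

print("".join(difflib.unified_diff(base, alice, "base", "alice")))
print("".join(difflib.unified_diff(base, bob, "base", "bob")))
# Alice's and Bob's edits touch the same lines, so an automatic merge would
# report a conflict and ask the second developer to resolve it by hand.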

The concept of a reserved edit can provide an optional means to explicitly

lock a file for exclusive write access, even when a merging capability exists.

Baselines, labels and tags

Most revision control tools will use only one of these similar terms

(baseline, label, tag) to refer to the action of identifying a snapshot ("label the

project") or the record of the snapshot ("try it with baseline X"). Typically only

one of the terms baseline, label, or tag is used in documentation or discussion;
they can be considered synonyms.

In most projects some snapshots are more significant than others, such as

those used to indicate published releases, branches, or milestones.

When both the term baseline and either of label or tag are used together in

the same context, label and tag usually refer to the mechanism within the tool of

identifying or making the record of the snapshot, and baseline indicates the

increased significance of any given label or tag.

Most formal discussion of configuration management uses the term baseline.

Distributed revision control

Distributed revision control (DRCS) takes a peer-to-peer approach, as

opposed to the client-server approach of centralized systems. Rather than a single,

central repository on which clients synchronize, each peer's working copy of the

codebase is a bona-fide repository.[2] Distributed revision control conducts

synchronization by exchanging patches (change-sets) from peer to peer. This

results in some important differences from a centralized system:

No canonical, reference copy of the codebase exists by default; only

working copies.


Common operations (such as commits, viewing history, and reverting

changes) are fast, because there is no need to communicate with a central

server.

Rather, communication is only necessary when pushing or pulling changes to or

from other peers.

Each working copy effectively functions as a remote backup of the code

base and of its change-history, providing natural protection against data

loss.
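The peer-to-peer exchange described above can be sketched as follows; this is a toy model (all names are assumptions), not the behaviour of any specific distributed tool:

# Toy sketch of peer-to-peer exchange: every peer holds a full copy of the
# history, and synchronisation is just exchanging missing change-sets.
class Peer:
    def __init__(self, name):
        self.name = name
        self.changesets = []                  # full local history

    def commit(self, description):
        self.changesets.append(description)   # no server round-trip needed

    def pull(self, other):
        for cs in other.changesets:
            if cs not in self.changesets:
                self.changesets.append(cs)

alice, bob = Peer("alice"), Peer("bob")
alice.commit("add parser")
bob.commit("fix typo in docs")
alice.pull(bob)
bob.pull(alice)                               # peers exchange change-sets directly
print(sorted(alice.changesets) == sorted(bob.changesets))  # True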

Integration

Some of the more advanced revision-control tools offer many other

facilities, allowing deeper integration with other tools and software-engineering

processes. Plugins are often available for IDEs such as Oracle JDeveloper,

IntelliJ IDEA, Eclipse and Visual Studio. NetBeans IDE and Xcode come with

integrated version control support.

Common vocabulary

Terminology can vary from system to system, but some terms in common

usage include:

Baseline: 

An approved revision of a document or source file from which subsequent

changes can be made. See baselines, labels and tags.

Branch:

A set of files under version control may be branched or forked at a point

in time so that, from that time forward, two copies of those files may develop at

different speeds or in different ways independently of each other.

Change:

A change (or diff, or delta) represents a specific modification to a

document under version control. The granularity of the modification considered a

change varies between version control systems.

Change list:


On many version control systems with atomic multi-change commits, a

changelist, change set, or patch identifies the set of changes made in a single

commit. This can also represent a sequential view of the source code, allowing the

examination of source "as of" any particular changelist ID.

Checkout:

A check-out (or co) is the act of creating a local working copy from the

repository. A user may specify a specific revision or obtain the latest. The term

'checkout' can also be used as a noun to describe the working copy.

Commit:

A commit (checkin, ci or, more rarely, install, submit or record) is the

action of writing or merging the changes made in the working copy back to the

repository. The terms 'commit' and 'checkin' can also be used in noun form to

describe the new revision that is created as a result of committing.

Conflict:

A conflict occurs when different parties make changes to the same

document, and the system is unable to reconcile the changes. A user must resolve

the conflict by combining the changes, or by selecting one change in favour of the

other.

Delta compression:

Most revision control software uses delta compression, which retains only

the differences between successive versions of files. This allows for more

efficient storage of many different versions of files.
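As a rough sketch of the idea behind delta compression, one full version can be kept together with only the operations needed to derive the next version from it. The code below uses Python's standard difflib; the delta representation itself is an illustrative assumption:

# Keep v1 in full; for v2, store only the operations needed to derive it.
import difflib

v1 = "line one\nline two\nline three\n".splitlines(keepends=True)
v2 = "line one\nline 2\nline three\nline four\n".splitlines(keepends=True)

sm = difflib.SequenceMatcher(a=v1, b=v2)
# Store only the regions of v2 that differ from v1 (plus the opcodes).
delta = [(tag, i1, i2, v2[j1:j2]) for tag, i1, i2, j1, j2 in sm.get_opcodes()
         if tag != "equal"]

def apply_delta(base, delta):
    """Rebuild the newer version from the base version plus the stored delta."""
    result, cursor = [], 0
    for tag, i1, i2, new_lines in delta:
        result.extend(base[cursor:i1])   # copy the unchanged region
        result.extend(new_lines)         # splice in the changed/added lines
        cursor = i2
    result.extend(base[cursor:])
    return result

assert apply_delta(v1, delta) == v2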

Dynamic stream:

A stream in which some or all file versions are mirrors of the parent

stream's versions.

Export:

Exporting is the act of obtaining the files from the repository. It is similar

to checking-out except that it creates a clean directory tree without the version-

control metadata used in a working copy. This is often used prior to publishing

the contents, for example.

Head:


Also sometimes called tip, this refers to the most recent commit.

Import: 

Importing is the act of copying a local directory tree (that is not currently a

working copy) into the repository for the first time.

Mainline:

Similar to trunk, but there can be a mainline for each branch.

Merge:

A merge or integration is an operation in which two sets of changes are

applied to a file or set of files. Some sample scenarios are as follows:

A user, working on a set of files, updates or syncs their working copy with

changes made, and checked into the repository, by other users.

A user tries to check-in files that have been updated by others since the

files were checked out, and the revision control software automatically

merges the files (typically, after prompting the user if it should proceed

with the automatic merge, and in some cases only doing so if the merge

can be clearly and reasonably resolved).

A set of files is branched, a problem that existed before the branching is

fixed in one branch, and the fix is then merged into the other branch.

A branch is created, the code in the files is independently edited, and the

updated branch is later incorporated into a single, unified trunk.

Promote:

The act of copying file content from a less controlled location into a more

controlled location. For example, from a user's workspace into a repository, or

from a stream to its parent.[7]

Repository:

The repository is where files' current and historical data are stored, often

on a server. Sometimes also called a depot (for example, by SVK, AccuRev and

Perforce).

Resolve:

The act of user intervention to address a conflict between different

changes to the same document.

Reverse integration:


The process of merging different team branches into the main trunk of the

versioning system.

Revision:

Also version: A version is any change in form. In SVK, a Revision is the

state at a point in time of the entire tree in the repository.

Share:

The act of making one file or folder available in multiple branches at the

same time. When a shared file is changed in one branch, it is changed in other

branches.

Stream:

A container for branched files that has a known relationship to other such

containers. Streams form a hierarchy; each stream can inherit various properties

(like versions, namespace, workflow rules, subscribers, etc.) from its parent

stream.

Tag:

A tag or label refers to an important snapshot in time, consistent across

many files. These files at that point may all be tagged with a user-friendly,

meaningful name or revision number. See baselines, labels and tags.

Trunk:

The unique line of development that is not a branch (sometimes also called

Baseline or Mainline).

Update:

An update (or sync) merges changes made in the repository (by other

people, for example) into the local working copy.

Working copy:

The working copy is the local copy of files from a repository, at a specific time or

revision.

All work done to the files in a repository is initially done on a working copy,

hence the name.

Q.3. Discuss the SCM Process.


Answer:

Traditional Software Configuration Management Process

The traditional SCM process is looked upon as the best-fit solution for handling changes in software projects. It identifies the functional and physical attributes of the software at various points in time and performs systematic control of changes to the identified attributes for the purpose of maintaining software integrity and traceability throughout the software development life cycle.

The SCM process further defines the need to trace the changes and the

ability to verify that the final delivered software has all the planned enhancements

that are supposed to be part of the release.

The traditional SCM identifies four procedures that must be defined for

each software project to ensure a good SCM process is implemented. They are:

Configuration Identification

Configuration Control

Configuration Status Accounting

Configuration Authentication

Most of this section will cover traditional SCM theory. Do not consider

this a boring subject, since this section defines and explains the terms that will be

used throughout this document.

3.1. Configuration Identification

Software is usually made up of several programs. Each program, its related documentation, and data can be called a "configuration item" (CI). The number of CIs in any software project and the grouping of artifacts that make up a CI is a decision made by the project. The end product is made up of a set of CIs.

The status of the CIs at a given point in time is called a baseline. The

baseline serves as a reference point in the software development life cycle. Each

new baseline is the sum total of an older baseline plus a series of approved

changes made on the CI


A baseline is considered to have the following attributes:

1. Functionally complete

A baseline will have a defined functionality. The features and

functions of this particular baseline will be documented and available for

reference. Thus the capabilities of the software at a particular baseline are well

known.

2. Known Quality

The quality of a baseline will be well defined, i.e., all known bugs will be documented and the software will have undergone a complete round of testing before being defined as the baseline.

3. Immutable and completely recreatable

A baseline, once defined, cannot be changed. The list of the CIs and

their versions are set in stone. Also, all the CIs will be under version control

so the baseline can be recreated at any point in time.
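A baseline can therefore be pictured as a frozen mapping of CIs to the exact versions that make it up; a new baseline is the old mapping plus the approved changes. The names below are purely illustrative:

# Illustrative sketch: a baseline as a mapping of CIs to exact versions,
# so the same build can be recreated later.
BASELINE_1 = {
    "payroll.c":          "1.4",
    "payroll_design.doc": "1.2",
    "tax_tables.dat":     "1.1",
}

# "Immutable": new work produces a new baseline; the old one is never edited.
approved_changes = {"payroll.c": "1.5"}          # approved changes on the CIs
BASELINE_2 = {**BASELINE_1, **approved_changes}  # old baseline + approved changes

print(BASELINE_2)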

3.2. Configuration Control

The process of deciding on and coordinating the approved changes for the proposed CIs, and implementing those changes on the appropriate baseline, is called configuration control.

It should be kept in mind that configuration control only addresses the

process after changes are approved. The act of evaluating and approving changes

to software comes under the purview of an entirely different process called

change control.

3.3. Configuration Status Accounting

Configuration status accounting is the bookkeeping process of each

release. This procedure involves tracking what is in each version of software and

the changes that lead to this version.

Configuration status accounting keeps a record of all the changes made to

the previous baseline to reach the new baseline.

3.4. Configuration Authentication


Configuration authentication (CA) is the process of assuring that the new

baseline has all the planned and approved changes incorporated. The process

involves verifying that all the functional aspects of the software are complete, and also that the delivery is complete in terms of the right programs, documentation, and data being delivered.

The configuration authentication is an audit performed on the delivery

before it is opened to the entire world.

3.5. Tools that aid Software Configuration Management

Free Software Tools

Free software tools that help in SCM are:

Concurrent Versions System (CVS)

Revision Control System (RCS)

Source Code Control System (SCCS)

Commercial Tools

Rational ClearCase

PVCS

Microsoft Visual SourceSafe

3.6. SCM and SEI Capability Maturity Model

The Capability Maturity Model defined by the Software Engineering

Institute (SEI) for Software describes the principles and practices to achieve a

certain level of software process maturity. The model is intended to help software

organizations improve the maturity of their software processes in terms of an

evolutionary path from ad hoc, chaotic processes to mature, disciplined software

processes. The CMM is designed to guide organizations in improving their software processes so that they can build better software faster and at a lower cost.

The Software Engineering Institute (SEI) defines five levels of maturity of

a software development process: Initial, Repeatable, Defined, Managed, and Optimizing.


Associated with each level from level two onwards are key areas which an

organization is required to focus on to move on to the next level. Such focus areas

are called Key Process Areas (KPAs) in CMM parlance. As part of level 2

maturity, one of the KPAs that has been identified is SCM.

Q.4. Explain:

I. Software doesn’t wear out.

Answer:

In 1970, less than 1% of the public could have intelligently described what

"computer software" meant. Today, most personal and many members of the

public at large feel that they understand software. But do they?

A textbook description of software might take the following form:

Software is

(1) Instructions (computer programs) that when executed provide

desired function and performance,

(2) Data structures that enable the programs to adequately manipulate

information, and


(3) Documents that describe the operation and use of the programs.

There is no question that other, more complete definitions could be

offered. But we need more than a formal definition.

Software Characteristics

To gain an understanding of software, it is important to

examine the characteristics of software that make it different from other things

that human beings build. When hardware is built, the human creative process

(analysis, design, construction, testing) is ultimately translated into a physical

form. If we build a new computer, our initial sketches, formal design drawings,

and breadboarded prototype evolve into a physical product (chips, circuit boards, power supplies, etc.).

Software is a logical rather than a physical system element. Therefore, software has characteristics that are considerably different from those of hardware:

1. Software is developed or engineered; it is not manufactured in the

classical sense.

Although some similarities exist between software development and hardware manufacture, the two activities are fundamentally different. In both activities, high quality is achieved through good design, but the manufacturing phase for hardware can introduce quality problems that are nonexistent (or easily corrected) for software. Both activities are dependent on people, but the relationship between people applied and work accomplished is entirely different. Both activities require the construction of a "product", but the approaches are different. Software costs are concentrated in engineering. This means that software projects cannot be managed as if they were manufacturing projects.

2.  Software doesn't "wear out."

Bath tub curve


Figure above depicts failure rate as a function of time for hardware. The

relationship often called the "bath tub curve" indicates that hardware exhibits

relatively high failure rates early in its life (these failures are often attributable to

design or manufacturing defects); defects are corrected and the failure rate drops

to a steady-state level (ideally, quite low) for some period of time. As time passes,

however, the failure rate rises again as hardware components suffer from the

cumulative effects of dust, vibration, abuse, temperature extremes, and any other

environmental maladies. Stated simply, the hardware begins to wear out.

Software is not susceptible to the environmental maladies that cause hardware to wear out. In theory, therefore, the failure rate curve for software should take the form of the "idealized curve". Undiscovered defects will cause high failure rates early in the life of a program. However, these are corrected (ideally, without introducing other errors) and the curve flattens. The implication is clear: software doesn't wear out. But it does deteriorate!

Idealized curve

3.  Although the industry is moving towards component-based assembly,

most software continues to be custom built.

Consider the manner in which the control hardware for a computer-based

product is designed and built. The design engineer draws a simple schematic of

the digital circuitry, does some fundamental analysis to assure that proper function

will be achieved, and then goes to the shelf where catalogs of digital components

exist. Each integrated circuit (called an IC or a chip) has a part number, a defined


and validated function, a well-defined interface, and a standard set of integration

guidelines. After each component is selected, it can be ordered off the shelf.

As an engineering discipline evolves, a collection of standard design

components is created. Standard screws and off-the-shelf integrated circuits are only two of thousands of standard components that are used by mechanical and electrical engineers as they design new systems. The reusable components have been created so that the engineer can concentrate on the truly innovative elements of a design, that is, the parts of the design that represent something new. In the hardware world, component reuse is a natural part of the engineering process. In the software world, it is something that has only begun to be achieved on a broad scale.

II. Software is engineered and not manufactured.

Answer:

The roadmap to building high quality software products is software process.

Software processes are adapted to meet the needs of software engineers and

managers as they undertake the development of a software product.

A software process provides a framework for managing activities that can very

easily get out of control.

Different projects require different software processes.

The software engineer's work products (programs, documentation, data) are

produced as consequences of the activities defined by the software process.

The best indicators of how well a software process has worked are the quality,

timeliness, and long-term viability of the resulting software product.

Software Engineering

Software engineering encompasses a process, management techniques,

technical methods, and the use of tools.

Generic Software Engineering Phases

Definition phase - focuses on what (information engineering, software project

planning, and requirements analysis).

Development phase - focuses on how (software design, code generation, software

testing).


Support phase - focuses on change (corrective maintenance, adaptive

maintenance, perfective maintenance, preventative maintenance).

Software Engineering Activities

Software project tracking and control

Formal technical reviews

Software quality assurance

Software configuration management

Document preparation and production

Reusability management

Measurement

Risk management

Q.5. Explain the Different types of Software Measurement Techniques

Answer:

Software Measurement Techniques:

Measurements in the physical world can be categorized in two ways:

direct measures (e.g. the length of a bolt) and indirect measures (e.g. the “quality”

of bolts produced, measured by counting rejects). Software metrics can be

categorized similarly. Direct measures of the software engineering process

include cost and effort applied. Direct measures of the product include lines of

code (LOC) produced, execution speed, memory size, and defects reported over

some set period of time. Indirect measures of the product include functionality,

quality, complexity, efficiency, reliability, maintainability, and many other “-

abilities”.

1. Size Oriented Metrics:

Size-oriented software metrics are derived by normalizing quality and / or

productivity measures by considering the size of the software that has been

produced. If a software organization maintains simple records, a table of size-

oriented measures can be created. The table lists each software development


project that has been completed over the past few years and corresponding

measures for that project. For project alpha, 12,100 lines of code were developed with 24 person-months of effort at a cost of $168,000. It should be noted that the effort and cost

recorded in the table represent all software engineering activities (analysis,

design, code, and test), not just coding. Further information for project alpha

indicates that 365 pages of documentation were developed, 134 errors were

recorded before the software was released, and 29 defects were encountered after

release to the customer within the first year of operation. Three people worked on

the development of software for project alpha.
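From these figures the usual size-oriented metrics can be derived by normalizing on KLOC (thousands of lines of code). The short calculation below simply reworks the project alpha numbers quoted above; the derived ratios are shown purely for illustration:

# Size-oriented metrics for project alpha, computed from the figures above.
loc = 12_100
effort_pm = 24          # person-months
cost = 168_000          # dollars
doc_pages = 365
errors = 134            # found before release
defects = 29            # found in the first year after release

kloc = loc / 1000
print(f"productivity   = {kloc / effort_pm:.2f} KLOC / person-month")   # ~0.50
print(f"quality        = {errors / kloc:.1f} errors / KLOC")            # ~11.1
print(f"defect density = {defects / kloc:.1f} defects / KLOC")          # ~2.4
print(f"cost           = ${cost / kloc:,.0f} per KLOC")                 # ~$13,884
print(f"documentation  = {doc_pages / kloc:.1f} pages / KLOC")          # ~30.2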

2. Function Oriented Metrics:

Function-oriented software metrics use a measure of the functionality

delivered by the application as a normalization value. Since ‘functionality’ cannot

be measured directly, it must be derived indirectly using other direct measures.

Function-oriented metrics were first proposed by Albrecht [ALB79], who

suggested a measure called the function point. Function points are derived using

an empirical relationship based on countable (direct) measures of software’s

information domain and assessments of software complexity.
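As a hedged sketch of the calculation: the weights and the 0.65 + 0.01 * (sum of the fourteen Fi ratings) adjustment below are the commonly published "average" values for the Albrecht-style function point count, while the counts and ratings themselves are invented for illustration only:

# Illustrative function point calculation (average complexity weights).
average_weights = {
    "external inputs": 4,
    "external outputs": 5,
    "external inquiries": 4,
    "internal logical files": 10,
    "external interface files": 7,
}
counts = {                      # made-up measurements of one application
    "external inputs": 32,
    "external outputs": 60,
    "external inquiries": 24,
    "internal logical files": 8,
    "external interface files": 2,
}
count_total = sum(counts[k] * average_weights[k] for k in counts)

# F_i: fourteen complexity adjustment questions, each rated 0 (no influence)
# to 5 (essential). The ratings below are illustrative.
f_ratings = [3] * 14
fp = count_total * (0.65 + 0.01 * sum(f_ratings))
print(count_total, round(fp, 1))   # 618 unadjusted, ~661.3 adjusted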

3. Extended Function Point Metrics:

The function point measure was originally designed to be applied to

business information systems applications. To accommodate these applications,

the data dimension (the information domain values discussed previously) was

emphasized to the exclusion of the functional and behavioral (control)

dimensions. For this reason, the function point measure was inadequate for many

engineering and embedded systems (which emphasize function and control). A

number of extensions to the basic function point measure have been proposed to

remedy this situation.

Q.6. Write a Note on Spiral Model.

Answer:

The spiral model is a software development process combining elements

of both design and prototyping-in-stages, in an effort to combine advantages of


top-down and bottom-up concepts. Also known as the spiral lifecycle model (or

spiral development), it is a systems development method (SDM) used in

information technology (IT). This model of development combines the features of

the prototyping model and the waterfall model. The spiral model is intended for

large, expensive and complicated projects.

This should not be confused with the Helical model of modern systems

architecture, which uses a dynamic programming approach (in the mathematical, not the software, sense) to optimise the system's architecture before design decisions are made that would cause problems.

The spiral model was defined by Barry Boehm in his 1986 article "A

Spiral Model of Software Development and Enhancement".[1] This model was not

the first model to discuss iterative development.

As originally envisioned, the iterations were typically 6 months to 2 years

long. Each phase starts with a design goal and ends with the client (who may be

internal) reviewing the progress thus far. Analysis and engineering efforts are

applied at each phase of the project, with an eye toward the end goal of the project
