Journal of Information and Communication Technologies, ISSN 2047-3168, Volume 2, Issue 8, September 2012 http://www.jict.co.uk

© 2012 JICT

A Metric Suite for Requirement Based Test Effort Estimation using the role of IRBC and TCF

Ganesh Kumar and Dr. Pradeep Kumar Bhatia

Abstract - Software testing is one of the most important and critical activities of the software development life cycle; it ensures software quality and directly influences the development cost and the success of the software. This paper presents a systematic, integrated approach for estimating software development and testing effort on the basis of the improved requirement based complexity (IRBC) and the technical complexity factor (TCF) of the proposed software. The IRBC measure, obtained from the software requirement specification (SRS) of the proposed software, serves as the basis for these estimates and enables developers and practitioners to predict critical information about the intricacies of software development. The computation of the proposed test effort estimate involves less overhead than other approaches.

Index Terms - SDLC, SRS, Improved Requirement Based Complexity (IRBC), Technical Complexity Factor (TCF), Requirement Based Test Function Point (RBTFP), Number of Requirement Based Test Cases (NRBTC), Requirement Based Test Effort Estimation (RBTEE).

   

1. Introduction

Software testing is a vital activity that directly influences the quality of software. To carry out systematic testing, it is imperative to predict the effort required to test the software. This paper therefore proposes a metric for estimating software testing effort from the software requirements. Most methodologies proposed in the past are code based, but once the code exists, it is too late: the testing effort can be minimized only if it is computed in the early phases of the software development life cycle (SDLC). In this direction, we capture attributes from the SRS of the proposed software to compute the improved requirement based complexity [10]. The obtained IRBC then serves as the basis for the estimation and analysis of the test effort. Because the SRS is a verifiable document, estimation based on it provides an early warning and is systematic, less complex, faster and comprehensive. To demonstrate its effectiveness, the proposed test metric is also compared with prevalent test effort estimation practices, which can be classified into three broad categories: use case based, code based and complexity value based test effort estimation.

Literature Review for Software Testing Effort

During the last few decades, various models, methods and techniques have been developed to estimate the test effort for software to be developed. This section surveys prevalent testing practices, categorized into code based, requirement based and complexity based methods for estimating software testing effort. Kuo Chang Tai [19] explores the testing complexity of several classes of programs, based on testing paths obtained from test data. Muthu Ramachandran [5] proposes a model for the test process and investigates the possibility of deriving test cases from system models and requirement analysis techniques. Johannes Ryser et al. [14] describe the validation and classification of software requirements based on heuristic and solution based strategies. Suresh Nageshwaran [10] discusses a use case based approach to test effort estimation based on use case weights, use case points and complexity factors. Boris Vaysburg et al. [17] discuss an approach for reducing requirement based test suites using Extended Finite State Machine (EFSM) dependence analysis; the technique supports test case generation from EFSM system models. Ian Holden and Dave Dalton [6] use Cumulative Test Analysis (CTA) for test


selection and produce an objective measure by identifying and assigning the impact of risk on test effectiveness. Antonio Bertilino [13] presents a testing roadmap covering achievements and challenges in software testing, and discusses four dreams: efficacy-maximized test engineering, 100% automatic testing, test based modelling and a universal test theory for testing any software. Aranha and Borba [3] use a tool to convert test specifications into natural language for estimating the test effort, and also compute test size and test execution complexity measures. Ajitha Rajan et al. [20] propose an approach to automate the generation of requirement based tests for model validation, formalizing the requirements as linear temporal logic (LTL) properties for test case generation. Aranha et al. [16] discuss a test execution and test automation effort estimation model for test case selection on the basis of a controlled natural language, using manual coverage and automated test case generation for effort estimation. Harry Sneed [15] uses a test strategy that automatically performs requirement analysis by identifying keywords in the requirement analysis phase and generates the test cases. Uusitalo et al. [8] provide a set of practices for linking requirements with testing, based on their interdependencies and on linking people with the requirement documentation. Zhu Xiaochun et al. [30] present an experience based approach for estimating software test suite size. Zhou and Xiaochun [25] present an empirical study on early test execution effort estimation, predicting the number of test cases from use cases and estimating the test effort using test execution complexity. Tibor Repasi [28] models the development process using V and W models for automated test case generation and test execution. Daniel et al. [22] consider an effort estimation model based on data analysis, hypothesis formulation, evaluation and accumulated efficiency, and finally model the test effort. Erika R. et al. [24] describe a method for test effort estimation based on the information contained in use cases; it also considers parameters such as actor ranking and the technical environment factor to arrive at the final test effort estimate. Veenendaal and Dekkers [33] provide a method called test point analysis (TPA) that uses function points for the estimation of the final result.

2. Proposed Approach for Requirement Based Test Effort Estimation (RBTEE)

The Requirement Based Test Effort Estimation (RBTEE) method provides the ability to estimate the man-hours a software project requires from its requirements. Building on the work in [4], the RBTEE method analyzes the attributes IRBC, NRBTC and RBTFP and abstracts them into an equation. Readers familiar with [4] will recognize its influence on RBTEE and IRBC. The RBTEE equation is composed of two variables:

- Number of Requirement Based Test Cases (NRBTC).

- Requirement Based Test Team Productivity (RBTTP).

Each variable is defined and computed separately. The proposed measure empirically estimates the software testing effort so that the testing process can be properly planned, the testing cost reduced, and the effort computed in the early phases of the SDLC. The complete equation estimates the number of man-hours needed to complete a project:

RBTEE = (NRBTC * RBTTP) man-hours

The steps needed to generate an estimate with the RBTEE method are:

(i) Determine and compute NRBTC in terms of requirement based test function points.
(ii) Determine and compute RBTTP in terms of requirement based test team productivity.

(i) Number of Requirement Based Test Cases

The estimated number of requirement based test cases (NRBTC) is a function of the requirement based test function points (RBTFP), because the number of function points dictates the number of test cases to be designed [25]. Like function points, acceptance test cases should be independent of technology and implementation techniques. Hence:

NRBTC = (RBTFP)^1.2

The number of test cases is closely related to the amount of required testing effort, so NRBTC plays a very significant role in estimating the required test effort in man-hours for the proposed software.

Requirement Based Test Function Points (RBTFP)


 

RBTFP is computed from two components:

a. The Technical Complexity Factor (TCF)
b. The Improved Requirement Based Complexity (IRBC)

(a) The Technical Complexity Factor (TCF)

Seventeen standard technical factors [1] exist to estimate the impact that various technical issues have on the productivity of a project; they are shown in Table 1. Each factor is weighted according to its relative impact.

Table 1: Technical Complexity Factors

Technical Factor | Description                                        | Weight (Wi) | Perceived Complexity (Ci)
T1               | Distributed System                                 | 2           | C1
T2               | Performance                                        | 1           | C2
T3               | End User Efficiency / Efficiency of interface      | 1           | C3
T4               | Complex Internal Processing / Complex interfacing  | 1           | C4
T5               | Reusability / Testware reuse / Reusable code       | 1           | C5
T6               | Easy to Install / Installability                   | 0.5         | C6
T7               | Easy to Use / Operability                          | 0.5         | C7
T8               | Portability                                        | 2           | C8
T9               | Easy to Change / Maintainability                   | 1           | C9
T10              | Concurrency                                        | 1           | C10
T11              | Special Security Features                          | 1           | C11
T12              | Provides Direct Access for Third Parties           | 1           | C12
T13              | Special User Training Facilities Are Required      | 1           | C13
T14              | Test Tools                                         | 2           | C14
T15              | Documented inputs                                  | 2           | C15
T16              | Development Environment                            | 1           | C16
T17              | Test Environment                                   | 1           | C17
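Given these weights, the Technical Total Factor and the TCF (whose formula is given below) can be sketched in Python. This is an illustrative sketch, not the authors' tool; the perceived complexity values used in the example are the ones assigned later in the empirical study of Section 3:

```python
# Weights Wi for the 17 technical factors T1..T17, as listed in Table 1.
WEIGHTS = [2, 1, 1, 1, 1, 0.5, 0.5, 2, 1, 1, 1, 1, 1, 2, 2, 1, 1]

def technical_total_factor(complexities):
    """Sum of Wi * Ci over the 17 factors; each Ci must be in 0..5."""
    if len(complexities) != 17:
        raise ValueError("expected 17 perceived complexity values")
    if any(not 0 <= c <= 5 for c in complexities):
        raise ValueError("each perceived complexity must be in 0..5")
    return sum(w * c for w, c in zip(WEIGHTS, complexities))

def tcf(complexities):
    """Technical Complexity Factor: TCF = 0.6 + 0.01 * Technical Total Factor."""
    return 0.6 + 0.01 * technical_total_factor(complexities)

# Perceived complexities Ci for T1..T17 from the Section 3 case study:
case_study = [1, 3, 2, 3, 0, 0, 3, 0, 3, 0, 0, 3, 0, 1, 0, 1, 1]
print(technical_total_factor(case_study))  # 21.5
print(round(tcf(case_study), 3))           # 0.815
```

Setting all perceived complexities to zero gives the lower bound TCF = 0.60.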

For each project, the technical factors are evaluated by the development team and assigned a perceived complexity value between zero and five. The perceived complexity is determined subjectively by the development team's perception of the project's complexity; concurrent applications, for example, require more skill and time than single-threaded applications. A perceived complexity of 0 means the technical factor is irrelevant for the project, 3 is average, and 5 indicates strong influence. When in doubt, use 3. Each factor's weight is multiplied by its perceived complexity to produce the calculated factor, and the calculated factors are summed to produce the Technical Total Factor:

Technical Total Factor = Σ (Wi * Ci), for i = 1 to 17

Two constants are then applied to produce the TCF:

TCF = 0.6 + (0.01 * Technical Total Factor)

In the original 13-factor use case point (UCP) method, these constants constrain the TCF to a range from 0.60 (perceived complexities all zero) to a maximum of 1.30 (perceived complexities all five); with the 17 factors used here the upper bound is correspondingly higher. TCF values less than one reduce the point count, because any positive value multiplied by a positive fraction decreases in magnitude: 100 * 0.60 = 60, a reduction of 40 percent. TCF values greater than one increase it: 100 * 1.30 = 130, an increase of 30 percent. Thus, in the original formulation, the TCF can impact the equation from -40 percent (0.60) to +30 percent (1.30).

(b) The Improved Requirement Based Complexity (IRBC)

IRBC has a bearing on two basic parameters, functionality and input, and these parameters are


sufficient to decide or generate the test cases in both black box and white box scenarios. The entire estimation is carried out on the basis of the IRBC [31, 32], which is derived from the elicited customer requirements documented as per the IEEE 830:1998 standard [12] for the SRS of the proposed software.

Procedure for the computation of IRBC

The complete equation to compute RBTFP is:

RBTFP = TCF * IRBC

(ii) Requirement Based Test Team Productivity

Productivity is defined as the accomplishment of an objective in a given unit of time. Test team productivity therefore depends on the number of staff and the talent available to test the software. To estimate test team productivity, we consider the rank and proficiency of the testers. A model [30] estimates tester rank on the basis of two dimensions, experience in testing and knowledge of the target application, as represented in Figure 1.
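The RBTFP relation just given, together with the earlier NRBTC relation, can be sketched as follows. This is an illustrative sketch (the function names are ours); the example values are the TCF and IRBC derived in the empirical study of Section 3:

```python
def rbtfp(tcf_value, irbc_value):
    """Requirement Based Test Function Points: RBTFP = TCF * IRBC."""
    return tcf_value * irbc_value

def nrbtc(rbtfp_value):
    """Number of Requirement Based Test Cases: NRBTC = RBTFP ** 1.2."""
    return rbtfp_value ** 1.2

# Values from the Section 3 case study: TCF = 0.815, IRBC = 29.25.
print(round(rbtfp(0.815, 29.25), 5))  # 23.83875
print(round(nrbtc(23.83875)))         # about 45 test cases
```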

Figure 1. Tester Rank Model

The tester rank helps in understanding tester behaviour during test execution, because the higher the rank of the test team, the lower the number of testers needed. Therefore, to derive the requirement based test team productivity (RBTTP), the number of testers and their relative ranks are considered, expressed as:

RBTTP = T * R

where T denotes the testers and R the respective rank of each tester from the tester rank model.

(iii) Requirement Based Test Effort Estimation

To compute the software testing effort for the proposed software, two significant parameters must be known: first, the number of test cases, and second, the productivity of the test team. We have already derived the contributing measures: NRBTC, for the number of test cases, and RBTTP, for the test team productivity. These measures are multiplied to obtain the final requirement based test effort estimate (RBTEE) in man-hours:

RBTEE = NRBTC * RBTTP man-hours

Early estimation of software testing effort using requirement based complexity can save a tremendous amount of time, cost and manpower for yet-to-be-developed software.

3. Empirical Study

In this section we analyze the requirement based test effort estimation on [4].

3(i). NRBTC Calculation

i(a). TCF


Technical Factor | Description                                        | Weight (Wi) | Perceived Complexity (Ci) | Calculated Factor (Wi * Ci)
T1               | Distributed System                                 | 2           | 1                         | 2
T2               | Performance                                        | 1           | 3                         | 3
T3               | End User Efficiency / Efficiency of interface      | 1           | 2                         | 2
T4               | Complex Internal Processing / Complex interfacing  | 1           | 3                         | 3
T5               | Reusability / Testware reuse / Reusable code       | 1           | 0                         | 0
T6               | Easy to Install / Installability                   | 0.5         | 0                         | 0
T7               | Easy to Use / Operability                          | 0.5         | 3                         | 1.5
T8               | Portability                                        | 2           | 0                         | 0
T9               | Easy to Change / Maintainability                   | 1           | 3                         | 3
T10              | Concurrency                                        | 1           | 0                         | 0
T11              | Special Security Features                          | 1           | 0                         | 0
T12              | Provides Direct Access for Third Parties           | 1           | 3                         | 3
T13              | Special User Training Facilities Are Required      | 1           | 0                         | 0
T14              | Test Tools                                         | 2           | 1                         | 2
T15              | Documented inputs                                  | 2           | 0                         | 0
T16              | Development Environment                            | 1           | 1                         | 1
T17              | Test Environment                                   | 1           | 1                         | 1

Total Technical Factor: 21.5

Now TCF = 0.6 + (0.01 * Total Technical Factor) = 0.6 + (0.01 * 21.5) = 0.815

i(b). IRBC

Step 1: Calculation of Input Output Complexity
a. Input Complexity: number of inputs: 2; type of input: integer; source of input: keyboard. IC = 2 * 1 * 1 = 2
b. Output Complexity: number of outputs: 2; type of output: integer; source of output: screen. OC = 2 * 1 * 1 = 2
c. Storage Complexity: SC = 1
IOC = IC + OC + SC = 2 + 2 + 1 = 5

Step 2: Calculation of Functional Requirements
Functionality to be performed: FCFS. Decomposed sub-processes: input of execution time, computation, display. FR = 1 * 3 = 3

Step 3: Calculation of Non-Functional Requirements (based on the ISO 9126 model)
Attributes considered: Functionality, Usability. Sub-attributes considered: Accuracy, Operability. NFR = 1 * 1 + 1 * 1 = 2

Step 4: Requirement Complexity: RC = FR + NFR = 3 + 2 = 5
Step 5: Product Complexity: PC = IOC * RC = 5 * 5 = 25
Step 6: Personal Complexity Attributes: PCA = 1.17
Step 7: Design Constraints Imposed: DCI = 0
Step 8: Interface Complexity: IFC = 0
Step 9: User Class Complexity: UCC = 1; user class considered: casual end user. SDLC = 1 * 1 = 1
Step 10: System Feature Complexity: SFC = 0
Step 11: IRBC = ((PC * PCA) + DCI + IFC + SFC) * SDLC = ((25 * 1.17) + 0 + 0 + 0) * 1 = 29.25

Now, RBTFP = TCF * IRBC = 0.815 * 29.25 = 23.83875
NRBTC = (RBTFP)^1.2 = (23.83875)^1.2 = 44.99108

3(ii). RBTTP

RBTTP = 1 * 1 = 1

3(iii). Effort Estimation in terms of man-hours

RBTEE = NRBTC * RBTTP
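The entire empirical computation can be reproduced with a short script. This is a sketch, not the authors' implementation; note that direct evaluation of (23.83875)^1.2 gives approximately 44.95, slightly below the reported 44.99108, so small rounding differences in intermediate values are to be expected:

```python
# End-to-end reproduction of the Section 3 empirical study.
total_technical_factor = 21.5              # sum of Wi * Ci from the table above
tcf = 0.6 + 0.01 * total_technical_factor  # 0.815
irbc = ((25 * 1.17) + 0 + 0 + 0) * 1       # Step 11: ((PC*PCA)+DCI+IFC+SFC)*SDLC = 29.25
rbtfp = tcf * irbc                         # 23.83875
nrbtc = rbtfp ** 1.2                       # number of requirement based test cases
rbttp = 1 * 1                              # one tester of rank 1
rbtee = nrbtc * rbttp                      # effort estimate in man-hours

print(f"TCF={tcf:.3f} IRBC={irbc:.2f} RBTFP={rbtfp:.5f}")
print(f"RBTEE = {rbtee:.2f} man-hours")    # roughly 45 man-hours
```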


= 44.99108 * 1 = 44.99108 man-hours.

4. Industrial Case Study

Various models, methods and techniques for the estimation of software test effort have been developed over the last few decades. This section also surveys some leading papers describing the work carried out so far. Eduardo Aranha [13] uses a controlled natural language (CNL) tool to convert test specifications into natural language and estimate test effort; it also computes test size and a test execution complexity measure. Zhou Xiaochun [11] presents an experience based approach for test suite size estimation. Aranha and Borba [9] discuss a test execution and test automation effort estimation model for test selection; the model is based on test specifications written in a controlled natural language and uses manual coverage and automated test generation techniques. Nageshwaran [7] presents a use case based approach to test effort estimation, considering weight and environmental factors with a constant conversion factor. Zhu Xiaochun et al. [29] present an empirical study on early test execution effort estimation based on test case number prediction and test execution complexity. Erika Almeida et al. [24] discuss a method for test effort estimation based on use cases; it uses parameters such as actor ranking and technical and environment factors related to testing (test tools, input, environment, distributed system, interfaces, etc.). The work presented in this paper is based on the SRS; the Software Engineering Standards Committee of the IEEE Computer Society [18] presents guidelines for documenting software requirements in the IEEE 830:1998 format. Antonio Bertilino [13] discusses a software testing roadmap covering achievements, challenges and dreams for software testing. Symons [27] discusses the computation of function points for estimating the size and cost of software, also considering technical and environmental factors. Boehm [21] discusses the constructive cost model (COCOMO) and its various categories for estimating software cost using cost driver attributes. We put forth a comparative study of the test effort estimates, as follows:

Variables            | Sharma & Kushwaha (2011) | This Paper
TCF factors          | 13 factors               | 17 factors
TCF                  | 0.95                     | 0.815
RBTFP                | 27.78                    | 23.83875
NRBTC                | 54.028                   | 44.99108
RBTTP                | 1                        | 1
RBT estimated effort | 54.028 man-hours         | 44.99108 man-hours

Sharma and Kushwaha [31, 32] discuss the improved requirement based complexity (IRBC) based on the SRS of the proposed software and also present an object based semi-automated model for the tagging and categorization of software requirements.

5. Conclusion

The requirement based test effort proposed in this work can be estimated from the IEEE 830:1998 standard requirement engineering document soon after the requirements are frozen, which makes the estimation timely and precise. The proposed measure is reliable, as it is derived from the SRS of the software to be developed, and it has been validated by comparison with different categories of test effort estimation. Although there have been various proposals for test effort estimation, important contributing factors such as inputs, outputs, interfaces, data storage, functional requirement decomposition and, most importantly, non-functional requirements were not taken into consideration in existing measures, including other requirement based measures such as use case based ones, even though these factors play a very significant role in complexity computation and test effort estimation. On the basis of this validation, it is observed that the proposed measures follow the trend of the other established measures in a comprehensive fashion. The proposed work successfully establishes a linear relationship between improved requirement based complexity and requirement based test effort.

6. References

[1] Pradeep Kumar Bhatia, Ganesh Kumar, "Role of

Technical Complexity Factors in Test Effort Estimation Using Use Case Points", International Journal of Software Engineering Research & Practices, Rohtak (India), Vol. 1, No. 3, pp. 5-12, July 2011.


[2] Roy K. Clemmons, "Project Estimation with Use Case Points", February 2006.
[3] Eduardo Aranha, Filipe de Almeida, Thiago Diniz, Vitor Fontes, Paulo Borba, "Automated Test Execution Effort Estimation Based on Functional Test Specification", Proceedings of Testing: Academic and Industrial Conference Practice and Research Techniques, MUTATION 07, 2007, pp. 67-71.
[4] Ashish Sharma, Dharmender Singh Kushwaha, "A Metric Suite for Early Estimation of Software Testing Effort using Requirement Engineering Document and its Validation", International Conference on Computer & Communication Technology (ICCCT) 2011, pp. 373-378.
[5] Muthu Ramachandran, "Requirements-Driven Software Test: A Process Oriented Approach", ACM SIGSOFT Software Engineering Notes, Vol. 21, No. 4, July 1996, pp. 66-70.
[6] Ian Holden, Dave Dalton, "Improving Test Efficiency Using Cumulative Test Analysis", Proceedings of Testing: Academic and Industrial Conference Practice and Research Techniques (TAIC PART'06), 2006, pp. 152-158.
[7] Suresh Nageshwaran, "Test Effort Estimation Using Use Case Points", Quality Week 2001, San Francisco, California, USA, 2001, pp. 1-6.
[8] Eero J. Uusitalo, Marko Komssi, Marjo Kauppinen, Alan M. Davis, "Linking Requirements and Testing in Practice", 16th IEEE International Requirements Engineering Conference, IEEE Computer Society, 2008, pp. 265-270.
[9] Aranha E., Borba P., "Test Effort Estimation Model Based on Test Specifications", Testing: Academic and Industrial Conference Practice and Research Techniques, IEEE Computer Society, 2007.
[10] Sharma Ashish, Kushwaha D.S., "NLP based component extraction and its complexity analysis", ACM SIGSOFT Software Engineering Notes, Issue 36, January 2011.
[11] Zhu Xiaochun, Zhou Bo, Wang Fan, Qu Yi, Chen Lu, "Estimate Test Execution Effort at an Early Stage: An Empirical Study", International Conference on CyberWorlds, IEEE Computer Society, 2008.
[12] IEEE Computer Society, "IEEE Recommended Practice for Software Requirements Specifications", IEEE Std 830-1998.
[13] Antonio Bertilino, "Software Testing Research: Achievements, Challenges and Dreams", IEEE Future of Software Engineering (FOSE), 2007, pp. 85-103.
[14] Johannes Ryser, Stefan Bernaer, Martin Glinz, "On the State of the Art in Requirements-Based Validation and Test of Software", University of Zurich, 1999, pp. 1-16.
[15] Harry M. Sneed, "Testing Against Natural Language Requirements", 7th International Conference on Quality Software (QSIC 2007), pp. 380-387.
[16] Aranha E., Borba P., "Test Effort Estimation Model Based on Test Specifications", Testing: Academic and Industrial Conference Practice and Research Techniques, IEEE Computer Society, 2007, pp. 67-71.
[17] Boris Vaysburg, Luay H. Tahat, Bogdan Korel, "Dependence Analysis in Reduction of Requirement Based Test Suites", ACM, 2002, pp. 107-111.
[18] Software Engineering Standards Committee of the IEEE Computer Society, "IEEE Recommended Practice for Software Requirements Specifications", IEEE Inc., NY, USA, 1998.
[19] Muthu Ramachandran, "Requirements-Driven Software Test: A Process Oriented Approach", ACM SIGSOFT Software Engineering Notes, Vol. 21, No. 4, pp. 66-70, July 1996.
[20] Ajitha Rajan, Michael W. Whalen, Mats P. E. Heimdahl, "Model Validation Using Automatically Generated Requirements-Based Tests", 10th IEEE High Assurance Systems Engineering Symposium, 2007, pp. 95-104.
[21] Barry Boehm, "Cost Models for Future Software Life Cycle Processes", Annals of Software Engineering, Special Volume on Software Process and Product Measurement, Netherlands, 1985.
[22] Daniel Guerreiro e Silva, Bruno T. de Abreu, Mario Jino, "A Simple Approach for Estimation of Execution Effort of Functional Test Cases", IEEE International Conference on Software Testing Verification and Validation, 2009, pp. 289-298.
[23] Edward R. Carroll, "Estimating Software Based on Use Case Points", Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA) Conference, San Diego, CA, 2005.
[24] Erika R. C. de Almeida, Bruno T. de Abreu, Regina Moraes, "An Alternative Approach to Test Effort Estimation Based on Use Cases", IEEE International Conference on Software Testing Verification and Validation, 2009, pp. 279-288.
[25] Qu Yi, Zhou Bo, Zhu Xiaochun, "Early Estimate the Size of Test Suites from Use Cases", 15th Asia-Pacific Software Engineering Conference, IEEE Computer


Society, 2008, pp. 487-492.
[26] Parastoo Mohagheghi, "Effort Estimation of Use Cases for Incremental Large-Scale Software Development", IEEE International Conference on Software Engineering (ICSE '05), May 15-21, 2005, St. Louis, MO, USA, pp. 303-311.
[27] Charles R. Symons, "Function Point: Difficulties and Improvements", IEEE Transactions on Software Engineering, Vol. 14, No. 1, January 1988.
[28] Tibor Repasi, "Software Testing: State of the Art and Current Research Challenges", 5th IEEE International Symposium on Applied Computational Intelligence and Informatics, May 28-29, 2009, Romania, pp. 47-50.
[29] Qu Yi, Zhou Bo, Zhu Xiaochun, "Early Estimate the Size of Test Suites from Use Cases", 15th Asia-Pacific Software Engineering Conference, IEEE Computer Society, 2008.
[30] Zhu Xiaochun, Zhou Bo, Wang Fan, Qu Yi, Chen Lu, "Estimate Test Execution Effort at an Early Stage: An Empirical Study", International Conference on CyberWorlds, IEEE Computer Society, 2008, pp. 195-200.
[31] Sharma Ashish, Kushwaha D.S., "NLP/POS Tagger Based Requirement Analysis and its Complexity Analysis", ACM SIGSOFT Software Engineering Notes, Vol. 36, No. 1, January 2011, pp. 1-14.
[32] Sharma Ashish, Kushwaha D.S., "Complexity Measure Based on Requirement Engineering Document and its Validation", IEEE International Conference on Computer and Communication Technology (ICCCT 2010), pp. 608-615.
[33] Veenendaal E., Dekkers T., "Test Point Analysis: A Method for Test Estimation", in Project Control for Software Quality, edited by Rob Kusters, Arian Cowderoy, Fred Heemstra and Erik van Veenendaal, Shaker Publishing, pp. 1-16.
[34] Gustav Karner, "Resource Estimation for Objectory Projects", Objective Systems SF AB, 1993.
[35] Albrecht A.J., "Measuring Application Development Productivity", Proc. of IBM Applications Development Symposium, Monterey, CA, 14-17 Oct. 1979, p. 83.

Ganesh Kumar holds an MCA and is pursuing his Ph.D. in Computer Science at Manav Bharti University, Solan (HP). He is currently working as a Programme In-Charge in a PSC of IGNOU. He has over 10 years of combined teaching experience in undergraduate and postgraduate courses and has published more than 2 research papers in journals.

Dr. Pradeep Kumar Bhatia holds a Ph.D. in Computer Science and Engineering and is currently working as an Associate Professor in the Department of Computer Science & Engineering of GJU&ST, Hisar (HR). He has over 18 years of combined teaching experience in undergraduate and postgraduate courses, has published more than 49 research papers in journals and more than 9 in conferences, and has published one book. The UGC has sanctioned minor and major research projects for his research work.