
DESIGNS FOR MAKING TEST TREATMENTS VS CONTROL TREATMENT(S) COMPARISONS – AN OVERVIEW

Rajender Parsad I.A.S.R.I., Library Avenue, New Delhi-110 012

1. Introduction

Experimentation is an integral component of any research investigation. The designs used for experimentation generally require making all the possible paired comparisons among the treatments. But this is not always the case. There exist situations in which the interest of the experimenter lies only in a subset of all the possible paired comparisons. We illustrate this through examples studied in the literature.

Experimental Situation 1: (Federer, 1956). In a sugarcane breeding trial conducted in Hawaii, four sugarcane varieties, A, B, C and D, and eleven test seedlings, e, f, g, h, i, j, k, l, m, n and o, were tried to evaluate the seedlings. Replicated plots for the individual seedlings were not possible because of the scarcity of seed cane and because of the large plots required. One of the objectives of the trial was to make comparisons between members of the two groups, the sugarcane varieties and the seedlings. For obtaining the experimental error, the sugarcane varieties were replicated. The trial was conducted in four blocks (b = 4) with seven plots in three of the blocks (k1 = k2 = k3 = 7) and six plots in the fourth block (k4 = 6). The layout of the design, without randomization, is the following:

Blocks
I:   A B C D e f g
II:  A B C D h i j
III: A B C D k l m
IV:  A B C D n o

Experimental Situation 2: (Pearce, 1960). In a strawberry weed-killer trial, it was intended to find out whether the application of any of the weed killers A, B, C or D, all of which were apparently suitable for controlling weeds in strawberry fields, would harm the growth of fruiting strawberry plants. There were four blocks (b = 4), each of seven plots (k = 7), and there were four test treatments comprising four kinds of herbicide (w = 4), besides an untreated control (O). The layout, without randomization, is given as:



Block I:   O O A A B C D
Block II:  O O A B B C D
Block III: O O A B C C D
Block IV:  O O A B C D D

Interestingly, the design initially planned was an RCB design with five test treatments (weed killers A, B, C, D and E), modified by adding two control replications in each block. But at the last moment it was observed that the supply of herbicide E had not arrived, and a decision had to be made quickly. Merging treatment E with one of the treatments A, B, C or D in each block, thus doubling the number of plots assigned to one of the other substances, was considered, but sufficient supplies were not available for any of them (i.e. merging of treatments was not possible). Hence, in desperation, it was decided that in each of the four blocks treatment E would be exchanged with a distinct treatment: A, B, C and D in blocks I, II, III and IV, respectively. As a result, a new class of designs, described above, was discovered.

Experimental Situation 3: In a trial, seven treatments (test treatments) were tried along with three controls. The purpose of the experiment was to make comparisons between the treatments in the two groups. The trial was laid out in ten blocks (b = 10), the first seven blocks having six plots each (k1 = k2 = … = k7 = 6) and the last three blocks having ten plots each (k8 = k9 = k10 = 10). The design adopted is the following:

Blocks
 1: 1 2 4 A B C
 2: 2 3 5 A B C
 3: 3 4 6 A B C
 4: 4 5 7 A B C
 5: 5 6 1 A B C
 6: 6 7 2 A B C
 7: 7 1 3 A B C
 8: 1 2 3 4 5 6 7 A B C
 9: 1 2 3 4 5 6 7 A B C
10: 1 2 3 4 5 6 7 A B C

Experimental Situation 4: (Ture, 1994). A certain type of synthetic fiber is used in the production of various consumer goods. The research team of the manufacturer of this fiber has developed three new types of synthetic fibers that can be used for the same purpose. Each of these alternative fibers is more cost-efficient than the present one and can replace it if any one of them is proved to be stronger. An experiment was conducted to compare the breaking strengths of all these synthetic fibers. Suppose that 4 testing


machines and 5 operators are available for the experiment. Because variability between the machines and the operators is suspected, the experiment must be designed to control such variability. Assuming that each operator can work on each testing machine once only, the following design may be used which is efficient for making tests vs controls comparisons.

                 Operator
Machine    A   B   C   D   E
  I        0   3   0   1   2
  II       1   2   0   0   3
  III      3   1   2   0   0
  IV       2   0   1   3   0

(Here 0 denotes the present fiber, the control, and 1, 2, 3 denote the three new fibers.)

Such experimental settings are also popular in the Indian agricultural sciences research system. We now describe some such experimental situations.

Experimental Situation 5: A trial was conducted by the National Bureau of Plant Genetic Resources, New Delhi for identifying promising accessions of cluster bean at its Jodhpur (Rajasthan) location. The trial included 6 checks (control treatments), viz. Suvidha, Maru, P. Navbahar, IC-11388, PLG-85 and D.P. Safed, and 200 new (test) treatments (accessions). These were sown in 10 blocks, each of size 26. Each of the checks was replicated 10 times and each accession only once.

Experimental Situation 6: Coordinated Advanced Trial - I (1992) was conducted by the National Bureau of Plant Genetic Resources, New Delhi on horsegram at 18 locations, with the objective of comparing 10 new strains (test treatments), K-42, HPK-2, PHG-62, PHG-9, KS-2, PHG-13, JND-2, BGM-1, CODB-6 and CODB-8, with one local strain, Maru Kulthi-1 (control treatment). The trial was conducted to investigate whether any of these 10 new strains could replace the local variety under cultivation. The design used for the experiment was a Randomized Complete Block (RCB) design.

Experimental Situation 7: Coordinated Advanced Trial - I (1992) was conducted by the National Bureau of Plant Genetic Resources, New Delhi to evaluate 15 genotypes of moth bean with three checks. The 15 genotypes (test treatments) were RMO-96, RMO-173, RMO-224, RMO-225, RMO-226, RMO-256, JMS-1, JMS-2, JMS-3, JMS-4, JMM-259, IPCMO-371, IPCMO-481, IPCMO-526 and IPCMO-880. The checks (control treatments) were Jadia, Jawala and Maru Moth-I. The trial was conducted as an RCB design to compare each of the new genotypes with the checks.

For the experimental settings considered here, all the possible paired comparisons among treatments are not of equal interest to the experimenter.
In fact, the experimenter is interested only in a subset of comparisons, comprising the comparisons among treatments belonging to the two groups. The comparisons among tests and among controls may be of little or no consequence to the experimenter. For this experimental setting, the variance-balanced designs for making all possible paired comparisons may not be useful. We illustrate this fact through examples.

Example 1: An experimenter wants to compare 11 tests with a control treatment. 132 experimental units are available, which can be grouped into 22 blocks of size 6 each. Among the large number of treatment-connected block designs for 12 treatments arranged in 22 blocks of size 6 each, two are the following:

DI: A Balanced Incomplete Block (BIB) design with parameters v = 12, b = 22, r = 11, k = 6, λ = 5, n = 132. This is a variance-balanced block design. The layout of the design is given below with columns as blocks:

12 12 12 12 12 12 12 12 12 12 12
 2  3  4  5  6  7  8  9 10 11  1
 6  7  8  9 10 11  1  2  3  4  5
 7  8  9 10 11  1  2  3  4  5  6
 8  9 10 11  1  2  3  4  5  6  7
10 11  1  2  3  4  5  6  7  8  9

 1  2  3  4  5  6  7  8  9 10 11
 3  4  5  6  7  8  9 10 11  1  2
 4  5  6  7  8  9 10 11  1  2  3
 5  6  7  8  9 10 11  1  2  3  4
 9 10 11  1  2  3  4  5  6  7  8
11  1  2  3  4  5  6  7  8  9 10
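One can check mechanically that DI really is a BIB design (every treatment replicated r = 11 times, every pair of treatments concurring in λ = 5 blocks). A minimal sketch, reading the two arrays above as cyclic developments mod 11 (treatment 12 is the control row):

```python
from collections import Counter
from itertools import combinations

# DI with columns as blocks: first array has initial column {12, 2, 6, 7, 8, 10},
# second array has initial column {1, 3, 4, 5, 9, 11}, both developed mod 11
set1 = [[12] + [(x + s) % 11 + 1 for x in (1, 5, 6, 7, 9)] for s in range(11)]
set2 = [[(x + s) % 11 + 1 for x in (0, 2, 3, 4, 8, 10)] for s in range(11)]
blocks = set1 + set2  # b = 22 blocks of size k = 6

reps = Counter(t for blk in blocks for t in blk)
pairs = Counter(frozenset(p) for blk in blocks for p in combinations(blk, 2))
print(set(reps.values()), set(pairs.values()))  # {11} {5}
```

All 12 replication counts equal 11 and all 66 pair concurrences equal 5, confirming the stated parameters.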

DII: A reinforced design with parameters v = 11 + 1, b = 22, r′ = (10·1′11, 22)′, k = 6, λ = 4, λ0 = 10, n = 132, obtained by taking 2 copies of a BIB design with v* = 11 = b*, r* = 5 = k*, λ* = 2 and adding a control treatment once to each of the 22 blocks. Here r′ represents the replication vector: each test treatment is replicated 10 times and the control treatment 22 times; λ is the number of blocks in which any two test treatments concur, and λ0 is the number of blocks in which a test treatment and the control treatment concur. The design, with columns as blocks, is given below (treatment 12 is the control; the array of 11 blocks is taken twice, giving the 22 blocks):

 1  2  3  4  5  6  7  8  9 10 11
 3  4  5  6  7  8  9 10 11  1  2
 4  5  6  7  8  9 10 11  1  2  3
 5  6  7  8  9 10 11  1  2  3  4
 9 10 11  1  2  3  4  5  6  7  8
12 12 12 12 12 12 12 12 12 12 12

In DI all the 66 elementary contrasts are estimated with the same variance, 0.20σ², and the average variance of the 66 contrasts is 0.20σ². In DII the 55 elementary contrasts among test treatments are estimated with variance (2/9)σ² ≅ 0.22σ², while the remaining 11 tests vs control contrasts are estimated with variance (7/45)σ² ≅ 0.16σ². The average variance of the 66 contrasts for DII is approximately 0.21σ². Thus one can see that if the interest of the experimenter is in making all the possible paired comparisons, then DI is better than DII. However, if the experimenter is interested in making only tests vs control comparisons and the comparisons among tests are of no interest, then the average variance of all the 66 elementary contrasts is not a meaningful criterion. In this situation a meaningful criterion is the average variance of the 11 estimated elementary contrasts of treatment 12 with each of the first 11 treatments. On this criterion, DII is a better design than DI for making tests vs control comparisons.

Example 2: Consider another experimental situation where an experimenter wants to compare 6 tests with 3 controls. There are 72 experimental units available, which can be grouped into 12 blocks of size 6 each. Among the large number of treatment-connected block designs for 9 (= 6 + 3) treatments arranged in 12 blocks of size 6 each, two are the following, with columns as blocks:

DI: A variance-balanced design (a BIB design with v = 9 (w = 6, u = 3), b = 12, r = 8, k = 6, λ = 5):

4 1 1 1 1 2 2 1 1 2 1 1
5 2 2 3 2 3 3 3 2 3 3 2
6 3 3 4 4 4 6 5 4 4 4 5
7 7 4 6 5 5 5 6 6 6 5 6
8 8 5 7 7 8 8 7 7 7 8 7
9 9 6 9 8 9 9 9 8 8 9 9

DII: A reinforced design obtained by adding all the three controls to all the blocks of a BIB design with w = 6, b* = 10, r* = 5, k* = 3, λ* = 2, and then adding two more blocks containing each of the 6 tests. The design so obtained has parameters v = 9 (w = 6, u = 3), b = 12 (= 10 + 2), r′ = (7·1′6, 10·1′3)′, k = 6, λhh′ = 4, λgh = 5, λgg′ = 10. Here λhh′ (λgg′) is the number of blocks in which any two tests (controls) concur, and λgh is the number of blocks in which a test and a control concur.


Blocks:  I  II III IV   V  VI VII VIII  IX   X  XI XII
         1   2   2   1   2   3   4    1   1   1   1   1
         2   3   3   3   4   5   5    2   3   4   2   2
         5   6   4   4   5   6   6    6   5   6   3   3
         7   7   7   7   7   7   7    7   7   7   4   4
         8   8   8   8   8   8   8    8   8   8   5   5
         9   9   9   9   9   9   9    9   9   9   6   6

In DI all the 36 elementary treatment contrasts are estimated with variance (4/15)σ² ≅ 0.267σ². In DII the 15 elementary contrasts among tests are estimated with variance (4/13)σ², while the 3 elementary contrasts among controls are estimated with variance (1/5)σ² = 0.20σ². Each of the elementary contrasts of interest, between a test and a control, is estimated with variance (17/65)σ² ≅ 0.261σ². The total variance of all the 36 estimated elementary contrasts for DI is (624/65)σ², while for DII it is (645/65)σ². It is known that DI is the most efficient design according to a family of efficiency criteria so far as the problem of estimating all the paired differences is concerned. This is also evident from the total variance of the estimated elementary contrasts. On the other hand, the total variance of the 18 estimated elementary contrasts between tests and controls is (312/65)σ² for DI and (306/65)σ² for DII. Hence DII performs better than DI for comparing tests with controls.

The above examples clearly emphasize that the usual variance-balanced designs are not useful for the experimental setting under consideration. In fact, the experimenter may fail miserably by using traditional designs, like BIB designs, as these are not appropriate for the present experimental setting, in which the controls play a distinguished role. Indeed, Cox (1958, p. 238) noted that BIB designs are not appropriate because of the role played by a control treatment. Sinha (1980, 1982) demonstrated that in the class of equireplicated designs with v treatments (w tests and a control), b blocks and a common block size k, a BIB design, whenever existent, is the most efficient design for making tests vs control comparisons according to various efficiency criteria. But in the unrestricted class of all connected designs, the limiting efficiency of a BIB design with respect to the most efficient design is only 50% for tests vs control comparisons. Cox (1958) suggested a design in which the control appears an equal number of times in each block and the tests form a BIB design in the remaining experimental units. Cox, however, did not provide any analytical details.
It is easy to prove that the reinforced design suggested by Cox is always more efficient than a BIB design for tests vs control comparisons. It may be important to mention here that for the more-than-one-control situation also, it has been established empirically that this reinforcement provides an efficient design for number of tests (w) ≤ 200, number of controls (u) ≤ 10 and block size (k) ≤ 20. It may well be that the result holds for other parametric combinations also.
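The contrast variances quoted in Examples 1 and 2 can be verified numerically from the information matrix C = R − N·diag(1/k)·N′ of a block design, using a generalized inverse. A sketch, assuming the DII designs are as reconstructed above (the cyclic initial block {1, 3, 4, 5, 9} mod 11 for Example 1, the ten triples for Example 2; the helper names are ours):

```python
import numpy as np

def c_matrix(blocks, v):
    # Information matrix C = R - N diag(1/k) N' of a block design,
    # where blocks are given as lists of treatment labels 1..v
    N = np.zeros((v, len(blocks)))
    for j, blk in enumerate(blocks):
        for t in blk:
            N[t - 1, j] += 1
    return np.diag(N.sum(axis=1)) - (N / N.sum(axis=0)) @ N.T

def contrast_var(C, i, j):
    # Variance (in units of sigma^2) of the BLUE of tau_i - tau_j
    l = np.zeros(C.shape[0])
    l[i - 1], l[j - 1] = 1.0, -1.0
    return float(l @ np.linalg.pinv(C) @ l)

# DII of Example 1: two copies of the cyclic BIB(11, 11, 5, 5, 2) with
# initial block {1, 3, 4, 5, 9} mod 11, plus the control (12) in every block
base = [1, 3, 4, 5, 9]
bib = [[(x + s - 1) % 11 + 1 for x in base] for s in range(11)]
C1 = c_matrix(2 * [blk + [12] for blk in bib], 12)
print(round(contrast_var(C1, 1, 2), 4))   # test vs test:    2/9  ~ 0.2222
print(round(contrast_var(C1, 1, 12), 4))  # test vs control: 7/45 ~ 0.1556

# DII of Example 2: a BIB(6, 10, 5, 3, 2) in the tests, all three controls
# (7, 8, 9) added to each block, plus two extra blocks of all six tests
bib2 = [[1, 2, 5], [2, 3, 6], [2, 3, 4], [1, 3, 4], [2, 4, 5],
        [3, 5, 6], [4, 5, 6], [1, 2, 6], [1, 3, 5], [1, 4, 6]]
C2 = c_matrix([blk + [7, 8, 9] for blk in bib2] + [[1, 2, 3, 4, 5, 6]] * 2, 9)
print(round(contrast_var(C2, 1, 2), 4))  # test vs test:       4/13  ~ 0.3077
print(round(contrast_var(C2, 8, 9), 4))  # control vs control: 1/5   = 0.2
print(round(contrast_var(C2, 1, 7), 4))  # test vs control:    17/65 ~ 0.2615
```

The printed values agree with the variances (2/9)σ², (7/45)σ², (4/13)σ², (1/5)σ² and (17/65)σ² quoted in the text.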


From the above examples it is clear that, for the experimental situations described here, the problem of choosing an appropriate design is important and needs attention. This talk addresses this problem. The experimental situations described above can be classified into two broad categories:

CATEGORY A: experiments where the TESTS are UNREPLICATED in the design.
CATEGORY B: experiments where the TESTS are REPLICATED in the design.

The replication of controls, however, is possible in both the situations. We now describe in brief the historical development of the designs in the two categories of experiments.

2. Category A: Experiments with Single Replication of Tests

Category A designs are essentially augmented designs. In the genetic resources environment, a field at the forefront of biological research, an essential activity is to test or evaluate new germplasm / provenances / superior selections (test treatments), etc. against the existing provenances or released varieties (control treatments). A problem in these evaluation studies is that the quantity of genetic material collected from exploration trips is very limited, or cannot all be made available since a part of it is to be deposited in the Gene Bank. The available quantity of seed is often not sufficient for replicated trials. Moreover, the number of new germplasm lines or provenances to be tested is very high (usually about 1000-2000 and sometimes up to 3000 accessions), and it is very difficult to maintain within-block homogeneity. These experimental situations may also occur in entomology, pathology, chemistry, physiology, agronomy and perhaps other fields, for screening experiments on new material and preliminary testing of promising material. In some other cases (e.g. physics), a single observation on new material may be adequate because of the relatively low variability in the experimental material. These types of situations became known to Federer around 1955 in screening new strains of sugarcane and soil fumigants used in pineapples. Augmented (Hoonuiaku) designs were introduced by Federer (1956) to fill a need arising in screening new strains of sugarcane at the Experiment Station of the Hawaiian Sugar Planters' Association, on the basis of agronomic characters other than yield. Thus, we have to design an experiment in which the experimental material for the new (test) treatments is just enough for a single replication.

However, the connectedness property of the design is ensured by augmenting any standard connected design in the control treatments with the new (test) treatments, and the replications of the controls provide the estimate of error. More precisely, an augmented experimental design is any standard experimental design in standard treatments to which additional (new) treatments have been added. The additional treatments require enlargement of the complete block, incomplete block, row-column designs, etc. The groupings in an augmented design may be of unequal sizes. Augmented designs are available for 0-way, 1-way, 2-way, etc. heterogeneity settings. Augmented designs eliminating heterogeneity in one direction are called augmented


block designs, and augmented designs eliminating heterogeneity in two directions are called augmented row-column designs. Federer (1956, 1961) gave the analysis, randomization procedure and construction of these designs by adding the new treatments to the blocks of an RCB design and of balanced lattice designs. Federer (1963) gave procedures and designs useful for screening material, inspection and allocation, with a bibliography. Federer and Raghavarao (1975) gave a general theory of augmented designs and obtained augmented designs using RCB designs and linked block designs for the one-way heterogeneity setting. They also gave a method of construction of augmented row-column designs using a Youden square design, and provided formulae for the standard errors of estimable treatment contrasts. Federer, Nair and Raghavarao (1975) gave systematic methods of construction of augmented row-column designs, along with an analytic procedure for these designs. The estimable contrasts in such designs may be (i) among new varieties (test treatments), (ii) among check varieties (control treatments), and (iii) among all check and new varieties simultaneously. In particular, it is possible to estimate the contrasts between check and new varieties. We shall concentrate on augmented designs for the 1-way elimination of heterogeneity setting. In general, the randomization procedure for an augmented block design is:
1. Follow the standard randomization procedure for the known design in the control treatments or check varieties.
2. Randomly allot the test treatments or new varieties to the remaining experimental units.
3. If a new treatment appears more than once, assign the different entries of the treatment to a block at random, with the provision that no treatment appears more than once in a block until that treatment appears once in each of the blocks.

The analysis of variance of the data generated from an augmented block design with v = u + w treatments, comprising w tests and u controls, arranged in b blocks having k1 plots in block 1, k2 plots in block 2, and so on, and kb plots in block b, such that k1 + k2 + … + kb = n, the total number of plots in the design, is sketched below:


ANOVA
Source                            D.F.           S.S.   M.S.
Blocks (eliminating treatments)   b - 1          ASSB   MSSB
Treatments (eliminating blocks)   v - 1          ASST
  Among Tests                     w - 1          SST    MSST
  Among Controls                  u - 1          SSC    MSSC
  Tests vs Controls               1              SSTC   MSSTC
Error                             n - v - b + 1  SSE    MSE

Total                             n - 1          TSS

For simplicity and ease of understanding, we shall consider the augmented randomized complete block design. Let us consider the experimental situation where w test treatments (the tth test denoted by Nt, t = 1(1)w) are to be compared with u control treatments (the sth control denoted by Cs, s = 1(1)u) via n experimental units arranged in b blocks, such that the jth block is of size kj (> u), j = 1(1)b. For an augmented randomized complete block design, each of the control treatments is replicated b times, occurring once in every block, and each test treatment occurs once in one of the blocks. Therefore, it can easily be seen that in the jth block there are kj - u = nj test treatments, j = 1(1)b. The randomization procedure is as follows:
1. Randomly allot the u controls to u of the kj experimental units in each block.
2. Randomly allot the w test treatments to the remaining experimental units.
3. If a new treatment appears more than once, assign the different entries of the treatment to a complete block at random, with the provision that no treatment appears more than once in a complete block until that treatment occurs once in each of the complete blocks.
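Steps 1 and 2 above can be sketched in a few lines of code for the usual case where every test occurs only once (the function name and the way the tests are split over blocks are illustrative assumptions, not part of the original procedure):

```python
import random

def randomize_augmented_rcbd(controls, tests, b, seed=None):
    """Sketch of augmented RCB randomization: every control in every block,
    the unreplicated tests spread over the remaining plots."""
    rng = random.Random(seed)
    tests = tests[:]
    rng.shuffle(tests)
    # split the shuffled tests over b blocks (block sizes may differ)
    cut = [round(i * len(tests) / b) for i in range(b + 1)]
    layout = []
    for j in range(b):
        block = controls[:] + tests[cut[j]:cut[j + 1]]
        rng.shuffle(block)  # random plot positions within the block
        layout.append(block)
    return layout

layout = randomize_augmented_rcbd(
    ["C1", "C2", "C3", "C4"], [f"N{t}" for t in range(1, 9)], b=3, seed=1)
for j, blk in enumerate(layout, 1):
    print(j, blk)
```

With u = 4 controls, w = 8 tests and b = 3 blocks this produces block sizes 7, 6 and 7, matching the setting of Exercise 1 below.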

For an augmented randomized complete block design, the standard errors for comparing mean differences are as follows:

Standard Errors
(i) Between two control treatment means: SE(1) = √(2MSE/b)
(ii) Between two test treatments in the same block: SE(2) = √(2MSE)
(iii) Between two test treatments not in the same block: SE(3) = √(2MSE(1 + 1/u))
(iv) Between a test treatment and a control treatment: SE(4) = √(MSE(1 + 1/b + 1/u - 1/(bu)))

Kindly note that the corresponding expression in Federer (1956) is SE(4) = √(MSE(1 + 1/b + 1/u + 1/(bu))).


Exercise 1: An experiment was conducted with w = 8 new accessions (to be tested), denoted by Nt, t = 1, …, 8, and u = 4 control treatments, denoted by Cs, s = 1, …, 4, of a genotype. There are 20 plots (experimental units) that can be arranged in three blocks (b = 3). There are 7 plots (4 for control treatments and 3 for new accessions) in the first and third blocks, and 6 plots (4 for control treatments and 2 for new accessions) in the second block, i.e., k1 = k3 = 7, k2 = 6. For the random allocation of these treatments in the experiment, we proceed as follows:
(i) Allot the 4 control treatments to each block randomly. In this process, say, the following is the arrangement (the 7th experimental unit occurs only in blocks 1 and 3):

Blocks   Controls allotted
1        C3 C4 C1 C2
2        C4 C2 C1 C3
3        C3 C1 C4 C2

(ii) The 8 new accessions are allotted randomly to the remaining experimental units of the 3 blocks. This way the 4 controls and 8 new accessions randomly occupy the 20 experimental units. The final arrangement looks like:

Blocks   Experimental units 1 - 7
1        N8 (74)  C3 (78)  C4 (78)  N3 (70)  C1 (83)  C2 (77)  N7 (75)
2        C4 (91)  C2 (81)  C1 (79)  C3 (81)  N1 (79)  N5 (78)
3        N4 (96)  C3 (87)  C1 (92)  N2 (89)  C4 (81)  C2 (79)  N6 (82)

The figures in parentheses represent the observed values of the character under study from an experiment conducted in the above layout. The source of these data is Federer (1956). The analysis of the data has been carried out and the ANOVA table is given below:

ANOVA
SOURCE                            D.F.  S.S.      M.S.     F       Pr > F
Blocks (eliminating treatments)   2     69.5001   34.7500  1.2884  0.3424
Treatments (eliminating blocks)   11    285.0952  25.9177  0.9609  0.5499
  Among Tests                     7     215.1686  30.7384  1.1396  0.4447
  Among Controls                  3     52.9167   17.6389  0.6540  0.6092
  Tests vs Controls               1     15.0417   15.0417  0.5577  0.4834
Error                             6     161.8332  26.9722

R-Square   C.V.     Root MSE   Yield Mean
0.7995     6.3724   5.1935     81.5000

Standard errors of differences:
Estimated standard error of the difference between two controls: 4.24.
Estimated standard error of the difference between two tests in the same block: 7.34.
Estimated standard error of the difference between two tests in different blocks: 8.21.
Estimated standard error of the difference between a control and a test: 6.36.
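As a check, these four standard errors can be reproduced from the error mean square (MSE = 26.9722, with b = 3 blocks and u = 4 controls) using the formulae given earlier; a small illustrative sketch (the function name is ours):

```python
import math

def augmented_rcbd_ses(mse, b, u):
    """Standard errors of mean differences in an augmented RCB design with
    b blocks and u controls, each test treatment unreplicated."""
    return {
        "two controls": math.sqrt(2 * mse / b),
        "two tests, same block": math.sqrt(2 * mse),
        "two tests, different blocks": math.sqrt(2 * mse * (1 + 1 / u)),
        "test vs control": math.sqrt(mse * (1 + 1/b + 1/u - 1/(b * u))),
    }

for name, se in augmented_rcbd_ses(26.9722, b=3, u=4).items():
    print(f"{name}: {se:.2f}")
# two controls: 4.24
# two tests, same block: 7.34
# two tests, different blocks: 8.21
# test vs control: 6.36
```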

We can use the adjusted values of the test treatments for comparison purposes. All those treatments whose yield levels are satisfactory to the breeder can be selected for


further national level trials. In these kinds of experiments, multiple characteristics are generally observed. It may, therefore, be desirable to perform multivariate analysis of variance and to use related multivariate analytic techniques like cluster analysis, discriminant analysis, etc. Keeping in view the importance of this design, and for the ease of biological research workers, Agarwal and Sapra (1995) developed a user-friendly program, AUGMENT1, at the Documentation Unit of the National Bureau of Plant Genetic Resources, New Delhi, to analyze data from the augmented RCB design. It is interesting to note that the augmented RCB design is variance-balanced with respect to tests vs controls comparisons. As mentioned earlier, augmented designs can also be obtained by augmenting test treatments in the blocks of incomplete block designs, row-column designs, etc. in the control treatments. At IASRI, New Delhi, attempts have been made to extend the scope of this software to the general situation, covering all augmented designs generated by augmenting tests in the blocks of incomplete block designs in controls, and also the case when data on some covariates are available.

Note: A survey of the literature reveals that the experiments described above are generally conducted using an augmented randomized complete block design. However, the experimenters would often like to know how many times the control treatments should be replicated in each of the blocks so as to maximize the efficiency per observation for making test treatments vs control treatment(s) comparisons. The answer to this question was obtained by Parsad and Gupta (2000) and is as follows: assuming that there are w test treatments, each occurring only once in the design, and that each of the u controls occurs in each of the b blocks, then to maximize the efficiency per observation the number of times each control appears in each of the blocks is

r = √(w(b + u - 1)) / (ub),

provided b + u - 1 ≤ w. For example, consider the problem of obtaining the optimum number of replications of the controls in an experiment with w = 24, u = 3, b = 4. We have

r = √(24 × (4 + 3 - 1)) / (3 × 4) = √144 / 12 = 1.

Similarly, for w = 98, u = 2, b = 7, we have r = √(98 × (7 + 2 - 1)) / (2 × 7) = √784 / 14 = 2.

Remark 2.1: For the single-control situation, i.e. u = 1, the above expression reduces to r = √(w/b), and it can easily be seen that for u = 1 the condition becomes b ≤ w, which is always true.
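The optimum-replication formula r = √(w(b + u − 1))/(ub), together with the empirical integer-rounding rule described in the text (threshold 0.42 on the fractional part, or 0.45 when u = 1), can be put into a small routine; a sketch, with the function name ours:

```python
import math

def optimal_control_reps(w, u, b):
    """Optimum replications r of each control in every block of an augmented
    block design (Parsad and Gupta, 2000), with the empirical rounding rule."""
    if b + u - 1 > w:
        raise ValueError("formula requires b + u - 1 <= w")
    r = math.sqrt(w * (b + u - 1)) / (u * b)
    threshold = 0.45 if u == 1 else 0.42
    r_star = math.floor(r) + 1 if r - math.floor(r) > threshold else max(math.floor(r), 1)
    return r, r_star

print(optimal_control_reps(24, 3, 4))  # (1.0, 1)
print(optimal_control_reps(98, 2, 7))  # (2.0, 2)
```

The two calls reproduce the worked examples above (r = 1 for w = 24, u = 3, b = 4 and r = 2 for w = 98, u = 2, b = 7).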

There may, however, arise many combinations of w, u and b for which the above expression of r does not yield a positive integer value of r. In such situations, a question that arises is as to what integer value of r should be taken? To answer this question, the efficiency per observation has been calculated for w ≤ 100, b ≤ 25 and u ≤ 10 such that b + u –1 ≤ w and r has been taken as r* = int (r) and int (r) + 1 besides taking r=1. A close scrutiny reveals that if value of r > #. 42 then take r* = int (r) + 1 and for values of r

267

Page 12: DESIGNS FOR MAKING TEST TREATMENTS VS CONTROL TREATMENT … · DESIGNS FOR MAKING TEST TREATMENTS VS CONTROL TREATMENT (S) COMPARISONS – AN OVERVIEW ... Experimental Situation 1:

Test(s) vs Control(s) Comparisons – An Overview

smaller than or equal to #. 42 take r* = int (r) for u ≥ 2. For u = 1, the same rule applies but the value of r is taken as #. 45 instead of #. 42. 3. Category B experiments with many replications of the tests Category B experiments are in fact a follow up of the Category A experiments. In Category A experiments, the material on tests is scarce and it is not possible to replicate the tests, though the controls are replicated and it is the replications of the controls alone that provides us the experimental error. These types of experiments are generally the screening experiments wherein the purpose of the experiment is to identify the promising genotypes or lines or test treatments. Once these are identified, then a further experimentation is carried out with the tests identified as promising. In these experiments, it is possible to replicate the tests also besides replicating the controls. For these experiments, attempts have been made by various research workers to investigate the optimality and construction aspects of designs used for making tests vs controls comparisons. For a critical and excellent review on the subject, reference may be made to Hedayat, Jacroux and Majumdar (1988), Majumdar (1996) and Gupta and Parsad (2001). Methods of construction and the catalogues of these designs along with their efficiencies can be used by experimenters for choosing an experimental design. The most appropriate criteria for searching most efficient (optimal) design for making tests vs control(s) comparisons are A- and MV-optimality criteria. A design d* belonging to a certain class of competing designs D is said to be A-optimal if d* minimizes the average variance of BLUE of elementary contrasts between tests and controls over all designs . 
A design d* belonging to a certain class of competing designs D is said to be MV-optimal if d* has the least value of the maximum variance of the BLUEs of elementary contrasts between tests and controls as compared to any other design d ∈ D. This category of experiments may involve a single control treatment or more than one control treatment. We describe both types of experiments in the sequel.

3.1. Single control treatment
3.1.1. Multiple comparison procedure
The problem of multiple comparisons involving a single control was first addressed by Roessler (1946), who assumed that the comparisons are independent, though they are not, since all the comparisons have the control in common. Paulson (1952) gave a multiple decision procedure for comparing several experimental categories with a control. The earliest attempts to develop efficient designs for test treatments vs control comparisons were made by Dunnett (1955, 1964). He posed the problem of optimum allocation of experimental units to control and tests so as to maximize the probability associated with the joint confidence statement concerning many-to-one comparisons between the mean of the control and the means of the tests. The problem of optimum allocation was solved by Bechhofer and his co-workers (1969, 1970, 1971). In all these papers it was assumed that a completely randomized design was used. In many situations, however, blocking of experimental units is required to improve the precision of the experiment, and the situation commonly encountered is that the block size is smaller than the total number of treatments. Robson (1961) pointed out that Dunnett's procedure could be used when all
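The A- and MV-criteria can be evaluated numerically from the information matrix of a design. The following sketch (the three-treatment block design and all numbers are illustrative, not taken from the text) computes the variances of the BLUEs of the test vs control contrasts via a generalized inverse of C = R - N K⁻¹ N′ and reports the average (A-criterion) and maximum (MV-criterion) values:

```python
import numpy as np

# A small illustrative block design: control 0 and tests 1, 2
# arranged in three blocks of size 2.
blocks = [(0, 1), (0, 2), (1, 2)]
v, b = 3, len(blocks)

# Incidence matrix N (v x b) and information matrix C = R - N K^{-1} N'
N = np.zeros((v, b))
for j, blk in enumerate(blocks):
    for t in blk:
        N[t, j] += 1
r = N.sum(axis=1)          # treatment replications
k = N.sum(axis=0)          # block sizes
C = np.diag(r) - N @ np.diag(1.0 / k) @ N.T

# Variance (in units of sigma^2) of the BLUE of each test-vs-control
# elementary contrast, using the Moore-Penrose inverse of C.
Cinv = np.linalg.pinv(C)
variances = []
for test in (1, 2):
    c = np.zeros(v)
    c[test], c[0] = 1.0, -1.0
    variances.append(float(c @ Cinv @ c))

avg_var = float(np.mean(variances))   # A-criterion value
max_var = float(np.max(variances))    # MV-criterion value
print(avg_var, max_var)
```

An A-optimal design minimizes avg_var over the class D of competing designs, and an MV-optimal design minimizes max_var; in this symmetric example the two coincide.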


the treatments including the control are arranged in a BIB design. As discussed earlier, Cox (1958, p. 238) noted that a design with the control occurring an equal number of times in each block and the tests forming a BIB design in the remaining experimental units is efficient. The efforts made by Bechhofer and his co-workers led to a new class of block designs that are appropriate and efficient for this purpose. In multiple comparison problems it is desired to make a confidence statement that applies simultaneously to all the w differences of the form of elementary contrasts between tests and control, the problem being regarded as symmetric in these differences. In the class of designs they introduced, all elementary contrasts between tests and control are estimated with the same variance and covariance, and the variances of the estimates of elementary contrasts between tests are also equal. These designs were termed Balanced Treatment Incomplete Block (BTIB) designs by Bechhofer and Tamhane (1981), who initiated the study of the problem of obtaining optimal block designs for these situations. The optimality criterion was to obtain simultaneous confidence intervals for many-to-one comparisons under a two-way classified, fixed effects, additive linear model. Dunnett's multiple comparison procedure can be performed using options available under the MEANS statement in PROC ANOVA and PROC GLM of the SAS system for linear models. These options are DUNNETT, DUNNETTL and DUNNETTU, giving respectively a two-tailed, a left-tailed and a right-tailed test of whether the effect of any test treatment is significantly different from that of the control.

3.1.2. Supplemented balance
The problem of comparing several tests with the control has also been studied using the concept of supplemented balance (type S designs).
This device was suggested by Hoblyn, Pearce and Freeman (1954) in connection with the changing of treatments in successive experiments, in cases where it can be assumed that the new treatments will not interact with residual effects from those previously applied. According to them, a design has supplemented balance if all test treatments are replicated the same number of times and the concurrence of each test treatment with the control treatment is the same. Pearce (1960) provided real-life examples of supplemented balanced designs conducted at East Malling Research Station in 1953; Experimental Situation 2 in fact refers to this type of design. The basic approach in these designs is to supplement a standard design in the test treatments with replications of the control treatment. Nigam, Gupta and Narain (1979) have written a monograph, "Supplemented Balanced Block Designs". It can easily be seen that type S designs and BTIB designs are similar.

Both the above approaches are concerned mainly with comparing several test treatments with a single control treatment. It is possible, though, to extend these approaches to comparing a set of test treatments with a set of control treatments. In fact, Hoover (1991) developed tests for simultaneous comparisons of multiple treatments with two controls.

3.2. More than one control treatment
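The supplemented balance property is easy to verify computationally. The sketch below uses an assumed layout in the spirit of Experimental Situation 2 (four herbicides A-D plus an untreated control O in four blocks of seven plots; the allocation of three control plots per block is an assumption, not given in the text) and checks equal replication of the tests and equal test-control concurrence:

```python
# Supplemented balance check: every test treatment should have equal
# replication and an equal number of concurrences with the control.
# The layout is an assumed one (3 control plots + 4 tests per block).
blocks = [["O", "O", "O", "A", "B", "C", "D"]] * 4
tests, control = ["A", "B", "C", "D"], "O"

replication = {t: sum(blk.count(t) for blk in blocks) for t in tests}
concurrence = {t: sum(blk.count(t) * blk.count(control) for blk in blocks)
               for t in tests}

equal_replication = len(set(replication.values())) == 1
equal_concurrence = len(set(concurrence.values())) == 1
print(equal_replication, equal_concurrence)
```

The same two checks apply to any candidate type S design, whatever the block contents.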


The problem of comparing two disjoint sets of treatments with more than one replication of the tests has been studied under different names in the literature. Nair and Rao (1942) termed such designs inter- and intra-group balanced designs, and Rao (1947) gave their analysis. Corsten (1962) investigated the combinatorial aspects of these designs, terming them balanced block designs with two different numbers of replicates. Adhikary (1965) also studied these designs under the name balanced block designs with variable replications. Kageyama and Sinha (1988) and Sinha and Kageyama (1990) gave systematic methods of construction of these designs along with catalogues, calling them balanced bipartite block (BBPB) designs.

3.2.1. Reinforced designs
The technique of reinforcement was initiated to control large intra-block variances but, looked at from another angle, it may also be used to compare a set of tests with a set of controls. Reinforced designs were developed to meet the needs of experimental situations where it may not be possible to use a BIB design, a lattice design or a partially balanced incomplete block (PBIB) design, since inadequate experimental resources may not allow equal replication of all the treatments. To deal with this situation, Das (1954) suggested a design obtainable by adding some extra treatments to each of the blocks of a BIB design. Some extra blocks were also added containing all the treatments, i.e., the treatments of the BIB design as well as the treatments added to each of its blocks. With a given number of treatments v, the u extra treatments can always be so adjusted that the w (= v - u) treatments form a BIB design. Owing to the limitation on w, it may not always be possible in such designs to keep the total number of plots at any desired level while at the same time retaining smaller blocks.
This problem was solved to some extent by Giri (1957), who used as the base design in w treatments a PBIB design and termed the resulting design a reinforced partially balanced incomplete block design. Das (1958) gave a more general definition of a reinforced incomplete block design as the design obtained by reinforcing each block of an incomplete block design in w treatments, b blocks and block size k with a single replication of each of u (> 0) extra treatments; t (≥ 0) new blocks containing each of the u + w treatments are added to the b blocks of the incomplete block design. The parameters of the reinforced design are

v* = u + w, b* = b + t, k*_1 = k*_2 = ... = k*_b = k + u, k*_(b+1) = ... = k*_(b+t) = v*.

It is obvious that no general criterion can be set for the choice of u. It is also clear that reinforced incomplete block designs can be used for making comparisons between a set of tests and a set of controls.
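The reinforcement construction can be sketched programmatically. The base design below is an assumed one (the unreduced BIB design in w = 4 test treatments with blocks of size 2, so b = 6, r = 3, λ = 1); a single control is added to every block and t = 1 extra block carries all u + w treatments, after which the parameters v*, b* and the block sizes are verified:

```python
from itertools import combinations

# Assumed base design: unreduced BIB in w = 4 tests, blocks of size k = 2.
w, u, t = 4, 1, 1
tests = list(range(1, w + 1))
control = 0
base_blocks = [list(pair) for pair in combinations(tests, 2)]  # b = 6
b = len(base_blocks)

# Reinforce: control added once to each block, plus t complete blocks.
reinforced = [blk + [control] for blk in base_blocks]
reinforced += [[control] + tests] * t

v_star = w + u                  # v* = u + w
b_star = len(reinforced)        # b* = b + t
block_sizes = [len(blk) for blk in reinforced]
print(v_star, b_star, block_sizes)
```

The first b blocks have size k + u = 3 and the last t blocks have size v* = 5, matching the parameter relations displayed above.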

Kulkarni (1960) has shown that in a reinforced incomplete block design the mean variance of the differences between all pairs of treatments is a decreasing function of the block size k. In other words, by adding a few extra treatments to an incomplete block design, the mean variance over all pairs of treatments belonging to the original design is reduced. It can easily be seen that the information matrices of reinforced balanced incomplete block designs, inter- and intra-group balanced block designs, balanced block designs with two
different replications and balanced bipartite block designs are similar. When u = 1 in a reinforced BIB design, and when each of the other groups of designs contains a single control, these designs are useful for comparing several tests with a control. Pearce (1983, pp. 128-129) also showed through a discussion that the reinforcement and supplementation techniques are related. It can easily be seen that with t = 0 and u = 1, the reinforced BIB designs of Das (1954), the supplemented balance designs of Pearce (1960) and the BTIB designs of Bechhofer and Tamhane (1981) are the same. We now give the analysis of data generated from an experiment laid out as a reinforced block design.

Exercise 2: Consider an experimental situation where 6 tests (1, ..., 6) are to be compared with 3 controls (7, 8, 9), denoted A, B and C in the layout below. 78 experimental units are available with the experimenter, arranged in 10 blocks of size 6 each and 2 blocks of size 9 each. The experiment is laid out as a reinforced BIB design. The layout of the experiment and the data collected (yields in parentheses) are given below:

Block 1:  1(18) 2(21) 5(53) A(12) B(13) C(16)
Block 2:  2(23) 3(33) 6(65) A(8)  B(10) C(8)
Block 3:  2(20) 3(30) 4(40) A(7)  B(5)  C(8)
Block 4:  1(13) 3(32) 4(43) A(6)  B(5)  C(10)
Block 5:  2(25) 4(46) 5(57) A(8)  B(7)  C(8)
Block 6:  3(33) 5(58) 6(63) A(7)  B(5)  C(9)
Block 7:  4(41) 5(55) 6(63) A(10) B(8)  C(8)
Block 8:  1(12) 2(24) 6(64) A(6)  B(8)  C(5)
Block 9:  1(13) 3(32) 5(43) A(6)  B(8)  C(9)
Block 10: 1(14) 4(44) 6(67) A(8)  B(5)  C(6)
Block 11: 1(15) 2(26) 3(37) 4(44) 5(58) 6(69) A(9) B(7) C(6)
Block 12: 1(16) 2(23) 3(36) 4(43) 5(55) 6(67) A(8) B(9) C(6)

The analysis of the data was carried out using PROC GLM in SAS. The steps involved, along with several options, are given below:

data reinforc;
input block treat yield;
cards;
: : :
: : :
;
proc glm;
class block treat;
model yield = block treat/ss2;
lsmeans treat/stderr pdiff;
contrast 'Among tests' treat 1 -1, treat 1 1 -2, treat 1 1 1 -3,
         treat 1 1 1 1 -4, treat 1 1 1 1 1 -5;
contrast 'Among controls' treat 0 0 0 0 0 0 1 -1, treat 0 0 0 0 0 0 1 1 -2;
contrast 'Tests vs controls' treat 3 3 3 3 3 3 -6 -6 -6;
run;

The analysis of variance table is given below:

ANOVA
Source    DF   Sum of Squares   Mean Square   F Value   Pr > F
Model     19   31643.23         1665.433      294.82    0.0001
Error     58   327.64           5.649
Total     77   31970.87

R-Square   C.V.     Root MSE   YIELD Mean
0.9898     9.6657   2.3768     24.5897

Source                         DF   Type II SS   Mean Square   F Value   Pr > F
Blocks (adjusted for
  treatments)                  11   171.0964     15.5542       2.75      0.0061
Treatments (adjusted for
  blocks)                       8   29936.1361   3742.0170     662.42    0.0001
  Among tests                   5   11859.2735   2371.8547     419.87    0.0001
  Among controls                2   3.3889       1.6944        0.30      0.7420
  Tests vs Controls             1   18073.4737   18073.4737    3199.41   0.0001

Treatment   YIELD LSMEAN   Std Err LSMEAN   Pr > |T| H0:LSMEAN=0   LSMEAN Number
1           14.3207        0.9299           0.0001                 1
2           22.5771        0.9299           0.0001                 2
3           33.7054        0.9299           0.0001                 3
4           42.9618        0.9299           0.0001                 4
5           53.5259        0.9299           0.0001                 5
6           65.2951        0.9299           0.0001                 6
7           7.9167         0.6861           0.0001                 7
8           7.5000         0.6861           0.0001                 8
9           8.2500         0.6861           0.0001                 9

Pr > |T|  H0: LSMEAN(i)=LSMEAN(j)
i/j   1        2        3        4        5        6        7        8        9
1     .        0.0001   0.0001   0.0001   0.0001   0.0001   0.0001   0.0001   0.0001
2     0.0001   .        0.0001   0.0001   0.0001   0.0001   0.0001   0.0001   0.0001
3     0.0001   0.0001   .        0.0001   0.0001   0.0001   0.0001   0.0001   0.0001
4     0.0001   0.0001   0.0001   .        0.0001   0.0001   0.0001   0.0001   0.0001
5     0.0001   0.0001   0.0001   0.0001   .        0.0001   0.0001   0.0001   0.0001
6     0.0001   0.0001   0.0001   0.0001   0.0001   .        0.0001   0.0001   0.0001
7     0.0001   0.0001   0.0001   0.0001   0.0001   0.0001   .        0.6692   0.7324
8     0.0001   0.0001   0.0001   0.0001   0.0001   0.0001   0.6692   .        0.4427
9     0.0001   0.0001   0.0001   0.0001   0.0001   0.0001   0.7324   0.4427   .

4. Optimality Results
The experimental situation demands that every test treatment be compared with every control treatment. It therefore seems logical that, in the choice of an appropriate design, more emphasis be put on the allocation of the control treatment(s) to the plots within a block. Since every test treatment is compared with every control, it intuitively appears logical to have more replications of the control treatments than of the test treatments. The statistical problem, therefore, is to obtain a suitable arrangement of treatments in the plots within blocks such that the tests vs control(s) comparisons are estimated with as high a precision as possible. Before discussing the optimality results, it will not be out of place to define some characterization properties of these designs.

Definition 1: A design is said to be variance balanced with respect to test treatments vs control treatment(s) comparisons if all the elementary contrasts between tests and control(s) are estimated with the same variance and the covariance between the BLUEs of any two such elementary contrasts is also the same. In a 0-way heterogeneity setting, the above is achieved when each of the test treatments has equal replication, say 'a', and each of the control treatments has equal replication, say 'b'.
In 1-way elimination of heterogeneity settings, i.e., the block design set-up, for orthogonal designs this is achieved when each of the tests is replicated in each of the blocks, say 'a' times, and each of the controls, say 'b' times. In an orthogonal row-column design set-up, it is achieved when each of the tests occurs in each row and each column an equal number of times, say 'a', and each of the control treatments occurs in each row and each column an equal number of times, say 'b'. In the non-orthogonal block design set-up, BTIB designs have been defined and are recommended for the single-control situation, and BBPB designs for situations with more than one control. To be specific, consider an experimental situation where w test treatments, denoted by 1, ..., w and belonging to a set H of cardinality w, are to be compared with each of the u (≥ 1) control treatments, denoted by w+1, ..., w+u = v and belonging to a set G of cardinality u, such that G ∩ H = φ. Let τi represent the effect of treatment i, i = 1, ..., v. The treatment comparisons of interest are τg - τh, ∀ h = 1, ..., w; g = w+1, ..., v.

Definition 2: An incomplete binary block design with one set of w treatments occurring r1 times and another set of u treatments occurring r2 times (r1 ≠ r2), arranged in b blocks of constant block size k, is said to be a BBPB design if (i) any two distinct treatments in the ith set occur together in λii blocks, i = 1, 2; and (ii) any two treatments from different sets occur together in λ12 = λ21 (> 0) blocks. Under the standard homoscedastic linear additive model (intra-block analysis), the three different variances of estimated differences between two treatment effects are V1σ², for two treatments both from the first set; V2σ², for two treatments both from the second set; and V3σ², for two treatments from different sets, where

V1 = 2k/[r1(k-1) + λ11];  V2 = 2k/[r2(k-1) + λ22];
V3 = V1/2 + V2/2 + [u{r2/(bλ12) - k/{r1(k-1) + λ11}} + w{r1/(bλ12) - k/{r2(k-1) + λ22}}]/(uw)

and σ² is the error variance of the model. In a variance balanced design V1 = V2 = V3.
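The three variance multipliers can be coded directly as a small function. The sketch below transcribes the formulas as printed, with the grouping of terms in V3 following the displayed expression; the parameter values passed in are purely illustrative and do not correspond to a worked BBPB design from the text:

```python
# Intra-block variance multipliers of a BBPB design (Definition 2).
def bbpb_variances(r1, r2, k, b, lam11, lam22, lam12, u, w):
    """Return (V1, V2, V3) so that the variances are Vi * sigma^2."""
    V1 = 2.0 * k / (r1 * (k - 1) + lam11)
    V2 = 2.0 * k / (r2 * (k - 1) + lam22)
    V3 = (V1 / 2 + V2 / 2
          + (u * (r2 / (b * lam12) - k / (r1 * (k - 1) + lam11))
             + w * (r1 / (b * lam12) - k / (r2 * (k - 1) + lam22))) / (u * w))
    return V1, V2, V3

# Illustrative parameter values only (not a verified BBPB design).
V1, V2, V3 = bbpb_variances(r1=3, r2=6, k=4, b=6, lam11=2, lam22=10,
                            lam12=4, u=2, w=4)
print(V1, V2, V3)
```

For a variance balanced design the function would return three equal values.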

Definition 3: For given (w, b, k), where the w test treatments indexed 1, 2, ..., w and a control indexed 0 are arranged in b blocks of size k each, consider a design with incidence matrix N = ((nij)), where nij is the number of replications of the ith (i = 0, 1, ..., w) treatment in the jth (j = 1, ..., b) block. Let

λii' = Σ_{j=1}^{b} nij ni'j

denote the total number of times that the ith treatment appears with the i'th treatment in the same block over the whole design (i ≠ i'; 0 ≤ i, i' ≤ w). Then the necessary and sufficient condition for a design to be a BTIB design is that λ01 = λ02 = ... = λ0w = λ0 (say) and λ12 = λ13 = ... = λw-1,w = λ1 (say). In other words, each test treatment must appear with the control treatment the same number of times (λ0) over the design, and each test treatment must appear with every other test treatment the same number of times (λ1) over the design. In a BTIB design, all test treatments vs control comparisons are made with the same variance, and the variances of the elementary contrasts between test treatments are also equal. Similar designs in the row-column set-up have been termed BTRC designs for a single control treatment (Ture, 1994) and BBPBRC designs for more than one control treatment (Parsad and Gupta, 2001).
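The BTIB condition of Definition 3 is straightforward to check from the block contents. The sketch below uses a small illustrative design (not from the text): the control 0 added once to each block of the BIB design {1,2}, {1,3}, {2,3} in w = 3 tests, for which λ0 = 2 and λ1 = 1:

```python
# BTIB check: compute lambda_ii' = sum_j n_ij * n_i'j from the blocks and
# verify the two constancy conditions of Definition 3.
blocks = [(0, 1, 2), (0, 1, 3), (0, 2, 3)]   # illustrative design
w = 3

def concurrence(i, ip):
    # number of times treatments i and i' appear together over all blocks
    return sum(blk.count(i) * blk.count(ip) for blk in blocks)

lam0_values = {concurrence(0, i) for i in range(1, w + 1)}
lam1_values = {concurrence(i, ip) for i in range(1, w + 1)
               for ip in range(i + 1, w + 1)}

is_btib = len(lam0_values) == 1 and len(lam1_values) == 1
print(is_btib, lam0_values, lam1_values)
```

The same function applies unchanged to larger candidate designs, including those with the control occurring more than once per block.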

We now give some important optimality results for orthogonal designs.

Result 1: In a 0-way heterogeneity setting, for comparing w tests with u controls via n experimental units, a design is A-optimal if uw is a perfect square, n ≡ 0 mod(w + √(uw)) and n ≡ 0 mod(u + √(uw)), each control being replicated √(w/u) times as often as each test. For an A-optimal design

r1 = ... = rw = n/(w + √(uw))  and  rw+1 = ... = rv = √(w/u) · n/(w + √(uw)) = n/(u + √(uw)),

where rt is the replication number of the tth treatment. Thus, if w = 8 tests are to be compared with u = 2 controls using 36 experimental units, the A-optimal design assigns 36/(8 + 4) = 3 experimental units to each of the tests and 2 × 3 = 6 to each of the controls.
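The allocation rule of Result 1 (as reconstructed above) can be sketched as a small function; the worked example with w = 8, u = 2 and n = 36 from the text serves as a check:

```python
import math

# A-optimal 0-way allocation: each test gets n/(w + sqrt(uw)) units and
# each control n/(u + sqrt(uw)) units, i.e., sqrt(w/u) times as many.
def a_optimal_allocation(w, u, n):
    s = math.isqrt(u * w)
    assert s * s == u * w, "uw must be a perfect square"
    assert n % (w + s) == 0 and n % (u + s) == 0
    return n // (w + s), n // (u + s)

# Worked example from the text: w = 8 tests, u = 2 controls, n = 36 units.
r_test, r_control = a_optimal_allocation(8, 2, 36)
print(r_test, r_control)   # 3 units per test, 6 per control
```

Note that w·r_test + u·r_control = 8·3 + 2·6 = 36 exhausts the available units, and r_control/r_test = 2 = √(w/u).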

Result 2: In a 1-way heterogeneity setting, for comparing w tests with u controls via n = bk experimental units arranged in b blocks of size k each, a design is A-optimal if uw is a perfect square, k ≡ 0 mod(w + √(uw)) and k ≡ 0 mod(u + √(uw)), each control being replicated √(w/u) times as often as each test in each of the b blocks. For example, if the experimenter is interested in comparing 4 tests with one control using a complete block design in 4 blocks, then the most efficient orthogonal block design is Block-1: (0,0,1,2,3,4); Block-2: (0,0,1,2,3,4); Block-3: (0,0,1,2,3,4); Block-4: (0,0,1,2,3,4).

Result 3: In a 2-way elimination of heterogeneity setting, for comparing w tests with u controls via n = bk experimental units arranged in b rows and k columns, a design is A-optimal if k ≡ 0 mod(w + √(uw)), b ≡ 0 mod(w + √(uw)) and uw is a perfect square, each control being replicated √(w/u) times as often as each test in each of the b rows and also in each of the k columns. For example, if the experimenter is interested in comparing 4 tests with 1 control using a row-column design, the most efficient design is obtained by writing a Latin square of order 6 (= 4 + 2) and merging treatments 5 and 6 to form the control, say 0. The A-optimal orthogonal row-column design is

1 2 3 4 0 0
2 3 4 0 0 1
3 4 0 0 1 2
4 0 0 1 2 3
0 0 1 2 3 4
0 1 2 3 4 0
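The merged-Latin-square construction of Result 3 can be verified directly: in the 6 × 6 layout, the control 0 should occur twice in every row and every column and each test 1-4 exactly once:

```python
# Row-column design obtained by merging symbols 5 and 6 of a Latin square
# of order 6 into the control 0 (layout as displayed above).
layout = [
    [1, 2, 3, 4, 0, 0],
    [2, 3, 4, 0, 0, 1],
    [3, 4, 0, 0, 1, 2],
    [4, 0, 0, 1, 2, 3],
    [0, 0, 1, 2, 3, 4],
    [0, 1, 2, 3, 4, 0],
]
cols = list(zip(*layout))

rows_ok = all(row.count(0) == 2 and
              all(row.count(t) == 1 for t in (1, 2, 3, 4))
              for row in layout)
cols_ok = all(col.count(0) == 2 and
              all(col.count(t) == 1 for t in (1, 2, 3, 4))
              for col in cols)
print(rows_ok, cols_ok)
```

Because the parent square is Latin, merging any two of its symbols automatically yields this doubled replication of the control in every row and column.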

For non-orthogonal designs the following optimality results have been obtained.

Result 4: A reinforced BIB design obtained by adding a control once to every block of a BIB design is A-optimal in the restricted class of block designs having a single replication of the control in each block.

Result 5: A design obtained by adding a control treatment once to each block of a most balanced group divisible design is A-optimal in the restricted class of block designs with a single replication of the control in each block.


All such designs, in which a standard treatment is reinforced in each block of the design, have been termed Standard Reinforced (SR-) designs.

Result 6: (Stufken, 1987). A BTIB design obtained by adding a control treatment t times to each of the blocks of a BIB design with parameters w, b, r, k - t, λ in the tests is A-optimal whenever

(k - t - 1)² + 1 ≤ t²w ≤ (k - t)².

For example, the following design is most efficient (A-optimal) for comparing 7 tests (1, 2, ..., 7) with one control (0) when 28 experimental units are arranged in 7 blocks of size 4 each: Block-1: (0,1,2,4); Block-2: (0,2,3,5); Block-3: (0,3,4,6); Block-4: (0,4,5,7); Block-5: (0,5,6,1); Block-6: (0,6,7,2); Block-7: (0,7,1,3).

We now give a general result in terms of the off-diagonal elements of the information matrix pertaining to treatment effects.

Result 7: A BBPB design is A-optimal for comparing w tests with each of the u controls in the class of designs binary in tests and controls if

λ0 = λ1 [1 + (w/u)^(1/2)],

where λ0 denotes the number of concurrences of a test treatment with a control treatment and λ1 that of two test treatments.

This is a fairly general condition, and the conditions for other designs useful for tests vs controls comparisons fall out as particular cases of it. For experimental situations in which this condition does not hold, a procedure to obtain an efficient design has been suggested using the concept of a lower bound to the average variance.

In on-farm experiments, generally the farmers' practice is to be compared with treatments identified at research stations. Which treatment is to be taken as the farmers' practice is a problem, as this varies from farmer to farmer. One possible solution is to take as many controls as there are farmers in the trial and to add the farmers' practice to each of the blocks of one replication of a resolvable incomplete block design. The problem still needs attention.

References
Adhikary, B. (1965). On properties and construction of balanced block designs with variable replications. Calcutta Statist. Assoc. Bull., 14, 36-64.
Agrawal, R.C. and Sapra, R.L. (1995). Augment 1: A microcomputer based program to analyze augmented randomized complete block designs. Indian Journal of Plant Genetic Resources, 8(1), 63-69.
Bechhofer, R.E. (1969). Optimal allocation of observations when comparing several treatments with a control. In Multivariate Analysis II, ed. P.R. Krishnaiah, New York: Academic Press, 465-473.
Bechhofer, R.E. and Nocturne, D.J.M. (1970). Optimal allocation of observations when comparing several treatments with a control (II): 2-sided comparisons. Technometrics, 23, 423-436.
Bechhofer, R.E. and Turnbull, B.W. (1971). Optimal allocation of observations when comparing several treatments with a control (III): Globally best one-sided intervals for unequal variances. In Statistical Decision Theory and Related Topics, eds. S.S. Gupta and J. Yackel, New York: Academic Press, 41-78.
Bechhofer, R.E. and Tamhane, A.C. (1981). Incomplete block designs for comparing treatments with a control: General theory. Technometrics, 23, 45-57.
Corsten, L.C.A. (1962). Balanced block designs with two different numbers of replicates. Biometrics, 18, 490-519.
Cox, D.R. (1958). Planning of Experiments. John Wiley & Sons, New York.
Das, M.N. (1954). On reinforced balanced incomplete block designs. Jour. Indian Soc. Agri. Statist., 6, 58-76.
Das, M.N. (1958). On reinforced incomplete block designs. Jour. Indian Soc. Agri. Statist., 10, 73-77.
Dunnett, C.W. (1955). A multiple comparison procedure for comparing several treatments with a control. Jour. Amer. Statist. Assoc., 50, 1096-1121.
Dunnett, C.W. (1964). New tables for multiple comparisons with a control. Biometrics, 20, 482-491.
Federer, W.T. (1956). Augmented designs. Hawaiian Planter Record, 55, 191-208.
Federer, W.T. (1961). Augmented designs with one-way elimination of heterogeneity. Biometrics, 17, 447-473.
Federer, W.T. (1963). Procedures and designs useful for screening material in selection and allocation with a bibliography. Biometrics, 19, 553-587.
Federer, W.T. and Raghavarao, D. (1975). On augmented designs. Biometrics, 31, 29-35.
Federer, W.T., Nair, R.C. and Raghavarao, D. (1975). Some augmented row-column designs. Biometrics, 31, 361-374.
Giri, N.C. (1957). On a reinforced partially balanced incomplete block design. Jour. Indian Soc. Agri. Statist., 9, 41-51.
Gupta, V.K. and Parsad, R. (2001). Block designs for comparing test treatments with control treatments - an overview. Statistics and Applications (special issue to felicitate the 80th birthday of Dr. M.N. Das), 3(1 & 2), 133-146.
Hedayat, A.S., Jacroux, M. and Majumdar, D. (1988). Optimal designs for comparing test treatments with a control. Statist. Sci., 3, 363-370.
Hoblyn, T.N., Pearce, S.C. and Freeman, G.H. (1954). Some considerations in the design of successive experiments in fruit plantations. Biometrics, 10, 503-515.
Hoover, D.R. (1991). Simultaneous comparisons of multiple treatments to two (or more) controls. Biom. J., 33(8), 913-921.
Kageyama, S. and Sinha, K. (1988). Some constructions of balanced bipartite block designs. Utilitas Mathematica, 33, 137-162.
Kulkarni, G.A. (1960). A problem in reinforced incomplete block designs. Jour. Indian Soc. Agri. Statist., 12, 143-145.
Majumdar, D. (1996). Optimal and efficient treatment-control designs. In Handbook of Statistics, 13, eds. S. Ghosh and C.R. Rao, Elsevier Science B.V.
Nair, K.R. and Rao, C.R. (1942). Incomplete block designs for experiments involving several groups of varieties. Science & Culture, 7, 615-616.
Nigam, A.K., Gupta, V.K. and Narain, P. (1979). Supplemented Balanced Block Designs. Monograph, I.A.S.R.I., New Delhi.
Parsad, R. and Gupta, V.K. (2000). A note on augmented designs. Indian J. Pl. Genet. Resources, 13(1), 53-58.
Parsad, R. and Gupta, V.K. (2001). Balanced bipartite row-column designs. Ars Combinatoria, 61, 301-312.
Paulson, E. (1952). On the comparison of several experimental categories with a control. Ann. Math. Statist., 23, 239-246.
Pearce, S.C. (1960). Supplemented balance. Biometrika, 47, 263-271.
Pearce, S.C. (1983). The Agricultural Field Experiment: A Statistical Examination of Theory and Practice. Wiley, New York.
Rao, C.R. (1947). General methods of analysis for incomplete block designs. Jour. Amer. Statist. Assoc., 42, 541-561.
Robson, D.S. (1961). Multiple comparisons with a control in balanced incomplete block designs. Technometrics, 3, 103-105.
Roessler, E.B. (1946). Testing the significance of observations compared with a control. Proceedings of the American Society of Horticultural Sciences, 47, 249-251.
Sinha, B.K. (1980). Optimal block designs. Unpublished seminar notes, Indian Statistical Institute, Calcutta.
Sinha, B.K. (1982). On complete classes of experiments for certain invariant problems of linear inference. J. Statist. Plann. Inf., 7, 171-180.
Sinha, K. and Kageyama, S. (1990). Further contributions to balanced bipartite block designs. Utilitas Math., 38, 155-160.
Stufken, J. (1987). A-optimal block designs for comparing test treatments with a control. Ann. Statist., 15, 1629-1638.
Ture, T.E. (1994). Optimal row-column designs for multiple comparisons with a control: A complete catalog. Technometrics, 36, 292-299.
