Gender differences in computerised and conventional educational tests

J. Horne
Department of Psychology, University of Hull, Hull, UK

Abstract

Research has demonstrated that girls outperform boys on conventional literacy tests. The present studies concern gender differences on computerised educational tests. Seventy-one children were tested using LASS Secondary and a set of seven conventional measures. No significant gender differences were found on any of the LASS Secondary modules, although females did outperform males on a conventional spelling test. A further 126 pupils were tested on computerised and paper versions of the LASS Secondary reading, spelling and reasoning modules. No gender differences were found on the computerised versions, but there were significant differences on the paper versions of the reading and spelling modules, favouring females. In a third study, 45 children were administered computerised and paper versions of the LASS Junior reading and spelling modules. There were no significant differences on the computerised modules, but girls performed significantly higher than boys on the paper version of the spelling module. It is possible that computerised assessment does not detect the established gender effect due to differences between males and females in motivation, computer experience and competitiveness. Further large-scale studies are necessary to confirm these findings.

Keywords: assessment, children, computer, education, gender.

Introduction

Several researchers have shown girls to score higher than boys on literacy tests at school entry at age 4 or 5 (Denton & West 2002; Justice et al. 2005) and even on pre-school cognitive tasks (Sammons et al. 2004; Sylva et al. 2004). Strand (1999) maintains that girls are typically found to score higher than boys in teacher ratings of reading, writing and mathematics throughout Key Stage 1. Indeed, a number of studies using baseline assessment schemes (mostly based on teacher rating scales) have shown girls to achieve higher scores than boys (Lindsay 1980; Strand 1997; Lindsay & Desforges 1999). In addition, the Annual Report on sample data from the QCA Baseline Assessment Scales for the Autumn 1998 and Spring and Summer 1999 periods (Qualifications and Curriculum Authority 1999) also shows that, in a nationally representative sample of 6953 children, girls scored higher than boys on all items (which are based on teacher ratings). It has also been reported that girls make more progress than boys in reading and writing during Key Stage 1 and so increase the gender gap (Strand 1997, 1999). Strand (1999) reports that girls make less progress in mathematics but still score slightly higher than boys at age 7. However, Tizard et al. (1988) indicate that, on objective tests, there are no significant differences between boys and girls at the end of their nursery education (average age 4 years 7 months) but that girls are significantly ahead of boys in reading by age 7. Tymms (1999) reports significant gender differences, favouring girls, on all subsections of the PIPS baseline test. However, he argues that 'none was large enough to be regarded as educationally important' (p. 43). The computer-based version of the PIPS baseline test shows girls performing slightly better than boys, although the effect sizes (0.1–0.2) are small (CEM 2001).

Correspondence: J. Horne, Department of Psychology, University of Hull, Hull HU6 7RX, UK. E-mail: [email protected]
Accepted: 10 July 2006
© 2007 The Author. Journal compilation © 2007 Blackwell Publishing Ltd. Journal of Computer Assisted Learning 23, pp. 47–55

Gender differences in education, particularly girls scoring higher than boys on literacy, continue to be evident throughout the school years (Frome & Eccles 1998; Pajares et al. 1999; Swiatek et al. 2000; Herbert & Stipek 2005). A number of studies in recent years have demonstrated a female advantage over males on reading tests (MacMillan 2000; Diamond & Onwuegbuzie 2001) and writing tests (Malecki & Jewell 2003). Tizard et al. (1988) report that girls in Britain outperform boys on standardised tests of reading and writing by the age of 7, but that there is only a small gender difference in mathematics. According to Fergusson and Horwood (1997), the traditional educational disadvantage of girls has been replaced by a male disadvantage. They studied a group of children from school entry to age 18 and found that males achieved less well than females. These differences were not due to differences in intelligence, as boys and girls had similar IQ scores. Fergusson and Horwood suggest that the gender differences were explained by boys being more disruptive and inattentive in class, which impeded their learning. Male (1994) states that, in one LEA, two-thirds of the children receiving a statement of special educational needs are boys. There is also a disproportionate number of boys identified as having reading difficulties in the United States (Allred 1990). Furthermore, Vardill (1996) reports that more boys than girls are identified as having special educational needs and that, in one LEA, almost twice as many boys as girls were referred. He suggests that one of the reasons for this is that boys experiencing difficulties are more likely to be a problem to the teacher because of associated conduct problems. Alternatively, Malecki and Jewell (2003) suggest that boys may be over-identified for difficulties because girls have an advantage on tests and normative data may not account for gender.

Plewis (1997) suggests that the reasons for gender differences in educational attainment are not well understood. It has sometimes been argued that teachers have lower expectations for boys. He suggests, however, that to conclude that teachers are biased against boys it is necessary to show both that these low expectations are inaccurate and that teachers behave differently towards male pupils. Plewis did not find teachers' assessments to be less accurate for this group, although he did find teachers' expectations to be biased against boys. A number of researchers suggest that expectations could be affected by the way boys behave (Tizard et al. 1988; Bennett et al. 1993; Delap 1995).

Despite the huge growth in expenditure on information and communication technology in education over recent years, the majority of assessment taking place in schools is pen-and-paper based (Cowan & Morrison 2001). According to Hargreaves et al. (2004), English schools have one of the highest rates of computer use in the world, with one computer for every 9.7 pupils in 2002. Computerised assessment has a number of advantages (see Greenwood & Rieth 1994; Singleton 1994, 1997; Singleton et al. 1995, 1999; Woodward & Rieth 1997; British Psychological Society 1999), including that such tests are labour- and time-saving, and more standardised and controlled. Another principal advantage is the possibility of adaptive testing. In an adaptive test, individuals are moved quickly to the test items that will most efficiently discriminate their capabilities, making assessment shorter, more reliable, more efficient and often more acceptable to the person being tested. A number of studies have indicated that children (particularly if they feel they might perform badly) often prefer computer-based assessment to traditional assessment by a human assessor. However, the effect of utilising computerised assessments on gender differences is not clear. It has been suggested that females may perform more poorly than males on computerised tests due to computer anxiety (Brosnan 1999; Todman 2000), although it has also been argued that males under-perform on such tests because they practise less. A number of studies have shown females to obtain lower scores on computerised assessments (Gallagher et al. 2002; Volman et al. 2005), while others have shown no gender differences on computerised assessment (Singleton 2001; Fill & Brailsford 2005).
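The adaptive principle described above can be illustrated with a minimal sketch. This is not the LASS item-selection algorithm (which is not specified here); it is a generic staircase procedure over a difficulty-ordered item bank, and all names are hypothetical:

```python
# Minimal sketch of adaptive item selection (illustrative only).
# Items are ordered by difficulty; a correct answer moves the pupil to a
# harder item, an incorrect answer to an easier one, so testing converges
# on the pupil's ability level with fewer items than a fixed-form test.
def run_adaptive_test(item_bank, answers_correctly, n_items=10):
    """item_bank: list of items sorted easiest to hardest.
    answers_correctly: callable item -> bool simulating the pupil."""
    index = len(item_bank) // 2          # start at medium difficulty
    step = max(1, len(item_bank) // 4)   # jump size shrinks after each item
    administered = []
    for _ in range(n_items):
        item = item_bank[index]
        administered.append(item)
        if answers_correctly(item):
            index = min(index + step, len(item_bank) - 1)
        else:
            index = max(index - step, 0)
        step = max(1, step // 2)
    return administered

# Simulated pupil who can answer items up to difficulty 40 in a 70-item bank
items = list(range(70))
seen = run_adaptive_test(items, lambda d: d <= 40)
print(seen[0], seen[-1])  # starts at 35, converges near 40
```

The point of the sketch is the convergence behaviour: after ten items the procedure is probing close to the simulated pupil's true threshold, which is why adaptive tests can be shorter than fixed-form equivalents.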

The present study seeks to ascertain whether gender differences exist in computerised educational tests. It consists of three small-scale studies involving two computerised tests: one for secondary schools (LASS Secondary) and one for junior schools (LASS Junior).

Method

Participants

The participants in the three studies attended schools in different regions of England and Wales, selected to give a representative range across the socio-economic spectrum. Details of the participants are given in Table 1. The pupils in each school were randomly selected from the school register; they were not selected on the basis of ability or for being considered at risk of having any special educational need.

Instruments

Study 1: LASS Secondary versus conventional tests

LASS Secondary (Horne et al. 1999) is a computerised test consisting of seven main assessment modules. It is standardised for the age range 11 years 0 months to 15 years 11 months.

1. Sentence reading is an adaptive test that involves finding the missing word in a sentence. Items are selected from a bank of 70 items.
2. Spelling is an adaptive test that involves spelling single words. Items are selected from a bank of 98 items.
3. Reasoning is an adaptive test involving logical matrix puzzles. Items are selected from a bank of 58 items.
4. Visual spatial short-term memory has a scenario of a 'cave' with eight 'hollows'. Different 'phantoms' appear in different hollows one at a time and then disappear. The pupils must select from an array the phantoms that were presented and place them in the correct hollows. This is an adaptive test that starts with three phantoms and progresses to a maximum of eight phantoms, depending on the pupil's performance.
5. Auditory-sequential short-term memory is a forward digit span task. Pupils are given a telephone number to remember, which they then enter on to a representation of a mobile phone using the mouse or the computer keyboard. Pupils start with two items of three-digit numbers and, if they answer one or both correctly, move on to two items of four-digit numbers, and so on up to a maximum of nine digits.
6. Non-word reading is a test of phonic decoding skills and has 25 items. For each item, pupils are presented with a non-word visually and hear four different spoken versions of the word; they must select the version that they think is correct.
7. Syllable segmentation is a test of syllable and phoneme deletion, which identifies poor phonological processing ability. Pupils are presented with 32 words and asked what each word would sound like if part of it were removed. They listen to four alternative answers before selecting the one that they think is correct.
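The progression rule for the digit span task (module 5) can be sketched as follows. The two-items-per-span structure and the three-to-nine-digit range come from the description above; the stopping and scoring details are assumptions:

```python
import random

# Sketch of the forward digit span progression described for module 5:
# two items per span length, starting at three digits; answering at least
# one of the two correctly advances the pupil to the next span length,
# up to a maximum of nine digits. (Illustrative reconstruction; the exact
# stopping and scoring rules are assumptions.)
def digit_span_task(recall, max_digits=9):
    """recall: callable taking a digit string, returning the pupil's answer."""
    span = 3
    while span <= max_digits:
        correct = 0
        for _ in range(2):  # two items at each span length
            number = "".join(random.choice("0123456789") for _ in range(span))
            if recall(number) == number:
                correct += 1
        if correct == 0:    # neither item recalled: stop
            break
        span += 1
    return span - 1  # longest span length passed

# Simulated pupil with a perfect memory up to 6 digits
print(digit_span_task(lambda s: s if len(s) <= 6 else ""))  # 6
```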

Commensurate conventional tests were used as comparison measures for the seven LASS Secondary modules (see Table 2). It is important to note that these tests are not exact equivalents of the LASS Secondary modules.

Study 2: LASS Secondary computerised versus pen-and-paper administration

The second study utilised the LASS Secondary (Horne et al. 1999) sentence reading, spelling and reasoning modules as outlined above. These were administered in their standard computerised format (as adaptive tests) and also in a pen-and-paper format (in which all items were administered).

Study 3: LASS Junior computerised versus pen-and-paper administration

The final study utilised the LASS Junior (Thomas et al. 2001) sentence reading and spelling modules. These were administered in their standard computerised format (as adaptive tests) and also in a pen-and-paper format (in which all items were administered).

Table 1. Participant details.

Study                                         Schools   n     Males   Females   Age range    Mean age (SD)
1. LASS Secondary versus conventional tests   5         71    44      27        11:6–15:11   13:6 (17.02 months)
2. LASS Secondary computer versus paper       3         126   59      67        11:1–13:2    12:2 (7.90 months)
3. LASS Junior computer versus paper          1         45    25      20        9:5–11:5     10:6 (7.32 months)

1. Sentence reading is an adaptive test that involves finding the missing word in a sentence. Items are selected from a bank of 94 items.
2. Spelling is an adaptive test that involves spelling single words. Items are selected from a bank of 122 items.

Procedure

Study 1: LASS Secondary versus conventional tests

Pupils completed the seven LASS Secondary modules, with each school completing the tests in a different order. Pupils were administered the equivalent conventional tests, in the same order, with a 4-week gap in between. Half the schools completed the LASS Secondary tests first and half completed the conventional tests first.

Study 2: LASS Secondary computerised versus pen-and-paper administration

Pupils completed the three LASS Secondary computerised modules, with each school completing the tests in a different order. Pupils were administered the equivalent pen-and-paper versions, in the same order as the comparable computerised tests, with an 8-week gap in between.

Study 3: LASS Junior computerised versus pen-and-paper administration

Pupils completed the two LASS Junior computerised modules and the equivalent pen-and-paper versions, with a 4-week gap in between. Half the pupils completed the computerised tests first and half completed the pen-and-paper tests first.

Results

Study 1: LASS Secondary versus conventional tests

Analysis by gender (see Table 3) revealed no significant differences on any of the LASS Secondary modules.

Analysis by gender revealed a significant difference on only one of the conventional measures (the BSTS 3 spelling test), with girls outperforming boys on this test (see Table 4).

Fifty-one of the 71 pupils preferred the computer tests, while 17 preferred the paper-and-pencil tests. There was no significant gender difference in the preference for computerised or paper-and-pencil tests (χ² = 0.34, df = 1, not significant; see Fig 1).
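The preference comparison is a 2×2 chi-square test of independence (gender × preferred test type). A self-contained sketch follows; note that only the marginal totals (51 versus 17) and the chi-square value are reported above, so the per-gender cell counts used here are purely illustrative:

```python
# Pearson chi-square (no continuity correction) for a 2x2 table
# [[a, b], [c, d]]: n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)).
def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

males = (33, 10)    # (prefer computer, prefer paper) - illustrative counts
females = (18, 7)   # illustrative counts, consistent with the 51/17 totals
chi2 = chi_square_2x2(*males, *females)
# df = 1 for a 2x2 table; the critical value at alpha = 0.05 is 3.84
print(chi2 < 3.84)  # True: no significant association
```

With cell counts this close to the expected frequencies, the statistic falls well below the df = 1 critical value, matching the non-significant result reported.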

Study 2: LASS Secondary computerised versus pen-and-paper administration

Analysis by gender (see Table 5) revealed no significant differences on any of the LASS Secondary computerised modules, but significant differences on two of the pen-and-paper versions of the tests (reading and spelling), with girls outperforming boys on these tests. Correlations between the computerised and pen-and-paper versions of the tests were 0.90 (P < 0.001) for reading, 0.89 (P < 0.001) for spelling and 0.79 (P < 0.001) for reasoning.
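The effect sizes (r) reported alongside the t statistics in Tables 3–6 can be reproduced from t and df with the standard conversion. A minimal check, assuming r = sqrt(t² / (t² + df)) is the formula used (the text does not state it explicitly):

```python
import math

def effect_size_r(t, df):
    """Correlation-style effect size from a t statistic:
    r = sqrt(t^2 / (t^2 + df))."""
    return math.sqrt(t ** 2 / (t ** 2 + df))

# Values reported for the pen-and-paper modules in Study 2 (Table 5):
# reading t = 3.14, df = 121 -> r = 0.27; spelling t = 3.37, df = 121 -> r = 0.29
print(round(effect_size_r(3.14, 121), 2))  # 0.27
print(round(effect_size_r(3.37, 121), 2))  # 0.29
```

The same conversion reproduces the other tabled values (e.g. t = 1.25, df = 69 gives r = 0.15 for LASS sentence reading in Table 3), which supports the assumption.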

Study 3: LASS Junior computerised versus pen-and-paper administration

Analysis by gender (see Table 6) revealed no significant differences on either of the LASS Junior computerised modules, but a significant difference on the pen-and-paper version of the spelling test, with girls outperforming boys on this test. Correlations between the computerised and pen-and-paper versions of the tests were 0.92 (P < 0.001) for reading and 0.96 (P < 0.001) for spelling.

Table 2. Conventional tests and their comparative LASS Secondary modules.

Conventional test                                                                LASS module
NFER-Nelson group reading sentence completion test (Macmillan Test Unit 1990)    Sentence reading
British spelling test series Level 3 (Vincent & Crumpler 1997)                   Spelling
Matrix analogies test – short form (Naglieri 1985)                               Reasoning
Wechsler Memory Scale-III Spatial Span Test (Wechsler 1997)                      Visual memory
Wechsler Intelligence Scale for Children-III Digit Span Test (Wechsler 1991)     Auditory memory
Phonological Assessment Battery (Frederickson et al. 1997) non-word reading      Non-word reading
Phonological Assessment Battery (Frederickson et al. 1997) spoonerisms           Syllable segmentation

Discussion

No significant gender differences were found on any of the computerised LASS Secondary or LASS Junior modules. The conventional measures utilised in Study 1 showed a significant gender difference on spelling, with females outperforming males. Additionally, girls also scored higher than boys on the pen-and-paper versions of the LASS Secondary reading and spelling modules and the LASS Junior spelling module. The gender difference in spelling ability on the three pen-and-paper spelling tests is concordant with the finding of Hyde and Linn (1988) that a gender difference favouring females exists in spelling, even at undergraduate level (Coren 1989). The results of the first LASS Secondary study suggest that there is no significant effect of gender in pupils' preference for computerised or conventional tests.

Table 3. Gender differences on the seven LASS Secondary modules (z-scores).

Module                       Gender   n    Mean   SD     t     df   Significance   Effect size (r)
LASS sentence reading        Female   27   0.64   0.70   1.25  69   NS             0.15
                             Male     44   0.36   0.99
LASS spelling                Female   26   0.69   0.76   1.93  68   NS             0.22
                             Male     44   0.25   1.00
LASS reasoning               Female   27   0.51   0.70   0.93  69   NS             0.11
                             Male     44   0.33   0.80
LASS visual memory           Female   27   0.02   0.93   0.44  69   NS             0.05
                             Male     44   0.13   1.03
LASS auditory memory         Female   27   0.54   0.74   1.07  69   NS             0.13
                             Male     44   0.29   1.17
LASS non-word reading        Female   27   0.49   0.97   0.55  69   NS             0.07
                             Male     44   0.35   1.10
LASS syllable segmentation   Female   27   0.49   0.86   1.23  69   NS             0.14
                             Male     44   0.23   0.84

Table 4. Gender differences on the seven conventional measures (z-scores).

Measure                        Gender   n    Mean    SD     t     df   Significance   Effect size (r)
NFER group reading test        Female   27   −0.29   0.85   1.29  69   NS             0.15
                               Male     44   −0.63   1.19
BSTS 3                         Female   27   −0.06   0.87   2.36  68   P < 0.05       0.27
                               Male     44   −0.66   1.27
MAT                            Female   27   −0.22   0.83   0.77  69   NS             0.09
                               Male     44   −0.41   1.11
WMS-III spatial span (total)   Female   27   −0.11   1.10   1.21  68   NS             0.14
                               Male     43   −0.42   1.00
WISC-III Digit Span (total)    Female   27   0.08    0.96   1.87  69   NS             0.22
                               Male     44   −0.40   1.08
PhAB non-word reading          Female   27   0.06    0.77   0.71  67   NS             0.09
                               Male     42   −0.10   1.15
PhAB spoonerisms               Female   27   −0.22   0.81   0.56  68   NS             0.07
                               Male     43   −0.35   1.05

Fig 1. Gender differences in preferences for different test types. [Bar chart, not reproduced: frequency of pupils preferring the computer tests versus the pen-and-paper tests, shown separately for males and females.]

Table 5. Gender differences on the LASS Secondary computerised and paper-administered modules.

Module                    Gender   n    Mean    SD      t     df    Significance   Effect size (r)
Computerised reading      Female   66   53.95   18.93   0.21  113   NS             0.02
                          Male     57   53.19   21.28
Computerised spelling     Female   65   59.85   20.64   0.90  121   NS             0.08
                          Male     58   56.55   19.63
Computerised reasoning    Female   51   33.90   17.76   0.69  93    NS             0.07
                          Male     46   36.13   14.00
Pen-and-paper reading     Female   66   55.20   16.45   3.14  121   P < 0.01       0.27
                          Male     57   45.32   18.47
Pen-and-paper spelling    Female   65   61.05   20.21   3.37  121   P < 0.01       0.29
                          Male     58   48.78   20.07
Pen-and-paper reasoning   Female   51   35.63   13.48   0.16  95    NS             0.02
                          Male     46   35.24   10.04

Table 6. Gender differences on the LASS Junior computerised and paper-administered modules.

Module                   Gender   n    Mean    SD      t     df   Significance   Effect size (r)
Computerised reading     Female   20   64.05   18.66   1.16  43   NS             0.17
                         Male     25   56.48   24.03
Computerised spelling    Female   20   75.20   19.76   1.60  43   NS             0.24
                         Male     25   65.00   22.27
Pen-and-paper reading    Female   20   61.85   15.43   1.43  43   NS             0.22
                         Male     25   54.12   19.85
Pen-and-paper spelling   Female   20   79.25   18.14   2.19  43   P < 0.05       0.32
                         Male     25   66.12   21.30

The findings of the present study, that there are no significant gender differences in the computerised reading and spelling tests, are in contrast to the results of previous studies, which suggested that girls outperform boys in conventional pen-and-paper literacy tests. It could be suggested that fully computerised tests are more objective than conventional assessment and less susceptible to gender bias. However, it is more likely that the interaction with a computer in the studies enhanced the motivation of the boys more than that of the girls, and this could have compensated for (or masked) gender differences in performance. Several studies have shown that boys globally use ICT more, both at home and at school (Scott et al. 1992; Bannert & Arbinger 1996; Makrakis & Sawada 1996; Janssen & Plomp 1997; Kadijevich 2000). Further, it is reported that girls do not view computers as positively as boys do (Durndell 1991; Comber et al. 1997; Durndell & Thomson 1997; Huber & Schofield 1998), and Crook (1996) suggests that girls' attitudes towards technology may become more negative as they go through school. Previous research has suggested that females experience higher levels of computer anxiety (Brosnan 1999; Todman 2000), which may negatively affect their performance on computerised tests. This is consistent with the finding of Newton and Beck (1993) that, in the early 1990s, the percentage of women studying for computer science degrees was falling. Additionally, Volman and van Eck (2001) state that girls report fewer ICT skills and less ICT knowledge than boys, and Janssen and Plomp (1997) suggest that boys score better than girls on tests of ICT skills.

However, Crook and Steele (1987) found no significant gender differences among reception class pupils choosing to engage in computer activities in school, and Essa (1987) obtained similar results among pre-schoolers. Volman and van Eck (2001) and Volman et al. (2005) also found gender differences to be less common in younger pupils than in older pupils. Crook (1996, p. 25) concludes that '. . . gender-based attitude differences are not convincingly present at the start of schooling: they must somehow be cultivated within the early school years'. Nevertheless, if gender differences in computer interest do exist, then these could bias the results of a computerised assessment. However, Taylor et al. (1999) found no relationship between computer familiarity and level of performance on a computerised test of English as a foreign language, after controlling for English language ability. Furthermore, Hargreaves et al. (2004) found no relationship between low performance on a computerised mathematics test and unfamiliarity or nervousness with computers. This has been supported in adult samples by Parshall and Kromrey (1993) and Clariana and Wallace (2002). Fill and Brailsford (2005) found no evidence of higher computer anxiety among female university students.

Alternatively, it has been suggested that males are more competitive than females, while females are more collaborative (Maccoby & Jacklin 1974; Fiore 1999). According to Hayes and Waller (1994), this may interfere with accurate performance in situations such as group testing. The pen-and-paper reading and spelling tests used in the studies were administered as group tests, while the computerised tests were administered individually in most cases. It is therefore possible that the male students attempted to finish the conventional tests quickly and so made more mistakes, whereas during the individual computerised tests boys were under less pressure to complete the test quickly. Both LASS Junior and LASS Secondary have network versions, which can be installed on several machines within a computer room so that pupils can be tested simultaneously (wearing headphones). In such situations, it may be a necessary precaution to advise that students being tested at the same time be administered the tests in different orders. Furthermore, the paper versions of the LASS tests in the current studies incorporated all of the test items, while the computerised versions utilised an adaptive algorithm. It is possible that the extended length of the paper tests impacted more negatively on the boys.

Conclusions

Gender differences favouring females that are evident in conventional literacy tests are not evident in the LASS Secondary and LASS Junior reading and spelling tests. It may be that computerised assessment does not detect this established gender effect due to differences between males and females in motivation, computer experience and competitiveness. However, these results are from small-scale studies, and further large-scale studies looking at the effect of test mode on gender differences in educational tests are necessary to establish the generalisability of these findings.

References

Allred R.A. (1990) Gender differences in spelling achieve-

ment in grades 1 through 6. Journal of Educational

Research 83, 187–193.

Bannert M. & Arbinger P.R. (1996) Gender-related differ-

ences in exposure to and use of computers: results of a

survey of secondary schools. European Journal of Psy-

chology of Education 11, 269–282.

Bennett R.E., Gottesman R.L., Rock D.A. & Cerullo F.

(1993) Influence of behaviour perceptions and gender on

teachers’ judgements of students’ academic skills. Jour-

nal of Educational Psychology 85, 347–356.

British Psychological Society (1999) Dyslexia, Literacy and

Psychological Assessment. British Psychological Society,

Leicester.

Brosnan M. (1999) Computer anxiety in students. In Com-

puter-Assisted Assessment in Higher Education (eds

S. Brown, J. Bull & P. Race), pp. 47–54. Kogan Page,

London.

CEM Centre (2001) Performance Indicators in Primary

Schools: Technical Report 2001: CD-ROM Version.

CEM Centre, Durham.

Clariana R. & Wallace P. (2002) Paper-based versus com-

puter-based assessment: key factors associated with the

test mode effect. British Journal of Educational Tech-

nology 33, 593–602.

Comber Ch., Colley A., Hargreaves D.J. & Dorn L. (1997)

The effects of age, gender and computer experience upon

computer attitudes. Educational Research 9, 123–134.

Gender differences in educational tests 53

& 2007 The Author. Journal compilation & 2007 Blackwell Publishing Ltd

Page 8: Gender differences in computerised and conventional ... · Gender differences in computerised and conventional educational tests J. Horne Department of Psychology, University of Hull,

Coren S. (1989) The spelling component test: psychometric

evaluation and norms. Educational and Psychological

Measurement 49, 961–971.

Cowan P.C. & Morrison H. (2001) Assessment in the 21st

century: role of computerised adaptive testing in National

Curriculum subjects. Teacher Development 5, 241–257.

Crook C. (1996) Computers and the Collaborative Experi-

ence of Learning: A Psychological Perspective. Routle-

dge, London.

Crook C. & Steele J. (1987) Self-selection of simple com-

puter activities by infant school pupils. Educational

Psychology 7, 23–32.

Delap M.R. (1995) Teachers’ estimates of candidates’ per-

formance in public examinations. Assessment in Educa-

tion 2, 75–92.

Denton K. & West J. (2002) Children’s Reading and

Mathematics Achievement in Kindergarten and First

Grade. National Center for Education Statistics, Wa-

shington, DC.

Diamond P.J. & Onwuegbuzie A.J. (2001) Factors asso-

ciated with reading achievement and attitudes among

elementary school-aged students. Research in the

Schools 8, 1–11.

Durndell A. (1991) The persistence of the gender gap in

computing. Computers and Education 16, 283–287.

Durndell A. & Thomson K. (1997) Gender and computing: a

decade of change? Computers and Education 28, 1–9.

Essa E.L. (1987) The effect of a computer on pre-school

children’s activities. Early Childhood Research Quar-

terly 2, 377–382.

Fergusson D.M. & Horwood L.J. (1997) Gender differences

in educational achievement in a New Zealand birth co-

hort. New Zealand Journal of Educational Studies 32,

83–96.

Fill K. & Brailsford S. (2005) Investigating gender bias in

formative and summative CAA. Available at: http://

eprints.soton.ac.uk/16256, last accessed 8 May 2006.

Fiore C. (1999) Awakening the tech bug in girls. Learning

and Leading with Technology 26, 10–17.

Frederickson N., Frith U. & Reason R. (1997) Phonological

Assessment Battery. NFER-Nelson, Windsor.

Frome P.M. & Eccles J.S. (1998) Parents’ influence on

children’s achievement-related perceptions. Journal of

Personality and Social Psychology 74, 435–452.

Gallagher A., Bridgeman B. & Cahalan C. (2002) The effect

of computer-based tests on racial-ethnic and gender

groups. Journal of Educational Measurement 39, 133–147.

Greenwood C. & Rieth H. (1994) Current dimensions of technology-based assessment in special education. Exceptional Children 61, 105–113.

Hargreaves M., Shorrocks-Taylor D., Swinnerton K.T. & Threlfall J. (2004) Computer or paper? That is the question: does the medium in which assessment questions are presented affect children's performance in mathematics? Educational Research 46, 29–42.

Hayes Z.L. & Waller T.G. (1994) Gender differences in adult readers: a process perspective. Canadian Journal of Behavioural Science 26, 421–437.

Herbert J. & Stipek D. (2005) The emergence of gender differences in children's perceptions of their academic competence. Applied Developmental Psychology 26, 276–295.

Horne J.K., Singleton C.H. & Thomas K.V. (1999) Lucid Assessment Systems for Schools – Secondary. Lucid Creative Limited, Beverley.

Huber B. & Schofield J.W. (1998) I like computers, but many girls don't: gender and the sociocultural context of computing. In Education/Technology/Power: Educational Computing as a Social Practice (eds H. Bromley & M.W. Apple), pp. 103–132. State University of New York Press, Albany.

Hyde J.S. & Linn M.C. (1988) Gender differences in verbal ability: a meta-analysis. Psychological Bulletin 104, 53–69.

Janssen R.I. & Plomp T.J. (1997) Information technology and gender equality: a contradiction in terminis. Computers and Education 28, 65–78.

Justice L.M., Invernizzi M., Geller K., Sullivan A. & Welsch J. (2005) Descriptive developmental performance of at-risk preschoolers on early literacy tasks. Reading Psychology 26, 1–25.

Kadijevich D. (2000) Gender differences in computer attitude among ninth-grade pupils. Journal of Educational Computing Research 22, 145–154.

Lindsay G. (1980) The Infant Rating Scale. British Journal of Educational Psychology 50, 97–104.

Lindsay G. & Desforges M. (1999) The use of the Infant Index/Baseline-PLUS as a baseline assessment measure of literacy. Journal of Research in Reading 22, 55–66.

Maccoby E.E. & Jacklin C.N. (1974) The Psychology of Sex Differences. Stanford University Press, Stanford.

MacMillan P. (2000) Simultaneous measurement of reading growth, gender and relative-age effects: many-faceted Rasch applied to CBM reading scores. Journal of Applied Measurement 1, 393–408.

Macmillan Test Unit (1990) Group Reading Test. NFER-Nelson, Windsor.

Makrakis V. & Sawada T. (1996) Gender, computers and other school subjects among Japanese and Swedish pupils. Computers and Education 26, 225–231.

© 2007 The Author. Journal compilation © 2007 Blackwell Publishing Ltd

Male D.B. (1994) Time taken to complete assessments and make statements: does the nature of SEN matter and will new legislation make a difference? British Journal of Special Education 21, 147–151.

Malecki C.K. & Jewell J. (2003) Developmental, gender and practical considerations in scoring curriculum-based measurement writing probes. Psychology in the Schools 40, 379–390.

Naglieri J.A. (1985) Matrix Analogies Test – Short Form. The Psychological Corporation, London.

Newton P. & Beck E. (1993) Computing: an ideal occupation for women? In Computers into Classrooms (eds J. Benyon & H. Mackay), pp. 130–146. The Falmer Press, London.

Pajares F., Miller M.D. & Johnson M.J. (1999) Gender differences in writing self-beliefs of elementary school students. Journal of Educational Psychology 91, 50–61.

Parshall C.G. & Kromrey J.D. (1993) Computer testing versus paper-and-pencil: an analysis of examinee characteristics associated with mode effect. Paper presented at the Annual Meeting of the American Educational Research Association, Atlanta, GA, April (Educational Resources Document Reproduction Service ED363272).

Plewis I. (1997) Inferences about teacher expectations from national assessment at key stage one. British Journal of Educational Psychology 67, 235–247.

Qualifications and Curriculum Authority (1999) Annual Report of the Evaluation of the First Year of Statutory Baseline Assessment 1998/1999. QCA, London.

Sammons P., Elliot K., Sylva K., Melhuish E., Siraj-Blatchford I. & Taggart B. (2004) The impact of pre-school on young children's cognitive attainments at entry to reception. British Educational Research Journal 30, 691–712.

Scott T., Cole M. & Engel M. (1992) Computers and education: a cultural constructivist perspective. Review of Research in Education 18, 191–251.

Singleton C.H. (1994) Computer applications in the identification and remediation of dyslexia. In Literacy and Computers: Insights from Research (ed. D. Wray), pp. 55–61. United Kingdom Reading Association, Widnes.

Singleton C.H. (1997) Computer-based assessment of reading. In The Psychological Assessment of Reading (eds J.R. Beech & C.H. Singleton), pp. 257–278. Routledge, London.

Singleton C.H. (2001) Computer-based assessment in education. Educational and Child Psychology 18, 58–74.

Singleton C.H., Horne J.K. & Vincent D. (1995) Implementation of a Computer-Based System for the Diagnostic Assessment of Reading. National Council for Educational Technology, Coventry.

Singleton C.H., Horne J.K. & Thomas K.V. (1999) Computerised baseline assessment of literacy. Journal of Research in Reading 22, 67–80.

Strand S. (1997) Pupil progress during key stage 1: a value added analysis of school effects. British Educational Research Journal 23, 471–487.

Strand S. (1999) Baseline assessment results at age 4: associations with pupil background factors. Journal of Research in Reading 22, 14–26.

Swiatek M.A., Lupkowski-Shoplik A. & O'Donoghue C.C. (2000) Gender differences in above-level EXPLORE scores of gifted third through sixth graders. Journal of Educational Psychology 92, 718–723.

Sylva K., Melhuish E., Sammons P., Siraj-Blatchford I. & Taggart B. (2004) The Effective Provision of Pre-School Education (EPPE) Project: Findings from Pre-School to End of Key Stage 1. DfES Publications, Nottingham.

Taylor C., Kirsch I., Eignor D. & Jamieson J. (1999) Examining the relationship between computer familiarity and performance on computer-based language tasks. Language Learning 49, 219–274.

Thomas K.V., Singleton C.H. & Horne J.K. (2001) Lucid Assessment Systems for Schools – Junior. Lucid Creative Ltd, Beverley.

Tizard B., Blatchford P., Burke J., Farquhar C. & Plewis I. (1988) Young Children at School in the Inner City. Lawrence Erlbaum, Hove.

Todman J. (2000) Gender differences in computer anxiety among university entrants since 1992. Computers and Education 34, 27–35.

Tymms P.B. (1999) Baseline Assessment and Monitoring in Primary Schools. David Fulton, London.

Vardill R. (1996) Imbalance in the numbers of boys and girls identified for referral to educational psychologists: some issues. Support for Learning 11, 123–129.

Vincent D. & Crumpler M. (1997) British Spelling Test Series. NFER-Nelson, Windsor.

Volman M. & van Eck E. (2001) Gender equity and information technology in education: the second decade. Review of Educational Research 71, 613–631.

Volman M., van Eck E., Heemskerk I. & Kuiper E. (2005) New technologies, new differences: gender and ethnic differences in pupils' use of ICT in primary and secondary education. Computers and Education 45, 35–55.

Wechsler D. (1991) Wechsler Intelligence Scale for Children – Third Edition. The Psychological Corporation, San Antonio.

Wechsler D. (1997) Wechsler Memory Scale – Third Edition. The Psychological Corporation, London.

Woodward J. & Rieth H. (1997) A historical review of technology research in special education. Review of Educational Research 67, 503–536.


