
THE WORKPLACE EMPLOYEE RELATIONS SURVEY (WERS) 1997-8

TECHNICAL REPORT

(CROSS-SECTION AND PANEL SURVEYS)

Colin Airey, Jon Hales, Rosemary Hamilton, Christos Korovessis, Anthony McKernan, Susan Purdon

P1700/P1699 September 1999

CONTENTS

SECTION ONE    INTRODUCTION                                                      1

SECTION TWO    SAMPLE DESIGN & SELECTION                                         4
    2.1  Design & selection of the sample for the cross-section survey           4
    2.2  Design & selection of the panel sample                                 10

SECTION THREE  DEVELOPMENT WORK                                                 12
    3.1  Introduction                                                           12
    3.2  The pilot surveys                                                      13
    3.3  Qualitative work for the Survey of Employees                           16
    3.4  Design of paper questionnaires                                         17
    3.5  Telephone screening for the panel survey                               17

SECTION FOUR   CONDUCT OF FIELDWORK                                             22
    4.1  Briefing & interviewer numbers                                         22
    4.2  Sifting the samples                                                    22
    4.3  Fieldwork progress                                                     26
    4.4  Interviewer workload                                                   28
    4.5  The SEQ                                                                32
    4.6  Fieldwork quality control procedures                                   34
    4.7  Trawl of establishments excluded from the panel survey                 36
    4.8  Computer Aided Personal Interviews (CAPI)                              38
    4.9  Retrieval of Paper Forms                                               41

SECTION FIVE   RESPONSE                                                         43
    5.1  Cross-section survey: response among management respondents            43
    5.2  Cross-section survey: response among worker representatives            53
    5.3  Cross-section survey: response to the Survey of Employees (SEQ)        57
    5.4  Panel Survey: overall response among management respondents            64
    5.5  Response to the trawl of establishments excluded from the panel survey 72

SECTION SIX    CODING & EDITING OF DATA                                         78
    6.1  Introduction                                                           78
    6.2  The Fact Sheets                                                        78
    6.3  Editing the questionnaires                                             83
    6.4  Standard Industrial Classification (SIC) & Standard Occupational
         Classification (SOC)                                                   83
    6.5  Coding of open questions                                               84
    6.6  Issues concerning interviews with worker representatives               85
    6.7  Overcodes                                                              86

SECTION SEVEN  WEIGHTING THE CROSS-SECTION AND PANEL SAMPLES                    88
    7.1  Weighting the cross-section sample                                     88
    7.2  Weighting the panel sample                                             92

SECTION EIGHT  OTHER ISSUES                                                     94
    8.1  Sampling errors (cross-section & panel surveys)                        94
    8.2  Archiving of data & confidentiality restrictions                      118

SECTION NINE   PAPER DOCUMENTS: FIELDWORK AND OTHER
    9.1  Cross-section survey
    9.2  Panel survey and Trawl of excluded establishments
    9.3  General

SECTION ONE: INTRODUCTION

This report documents the conduct of the British Workplace Employee Relations Survey (WERS) 1998. It is the fourth in a series of surveys carried out for central government[1] and other funders[2]. The previous surveys, known as the Workplace Industrial Relations Surveys (WIRS), were conducted in 1980, 1984 and 1990. For all four surveys, the National Centre for Social Research[3] has been responsible for sampling and statistical consultancy, the conduct of the fieldwork, coding and preparation of the final data.

For the first three surveys in the series, the survey was conducted among a cross-section of establishments in Great Britain with 25 or more employees. The scope of the fourth survey was widened to include establishments with 10 or more employees. The sample in 1997 was drawn from the Inter-Departmental Business Register (IDBR). This consists of a register of businesses operating in the UK, maintained by the Office for National Statistics. The register covers all sectors of employment: manufacturing, service industries, banking and finance, public sector (including the NHS and education) and private sector.

The achieved sample size for each of the three earlier cross-section surveys was just over 2,000 establishments. In 1997-8[4], the total number of establishments at which interviews were achieved was somewhat greater - just under 2,200 - but this number includes over 250 establishments classified, at the time of the interview, as having between 10 and 24 employees. Differential sampling fractions were used according to the size (ie number of employees) of the establishment on the IDBR, with the data being weighted before analysis so as to make the sample properly representative of the designated population.
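
These differential sampling fractions translate directly into design weights at the analysis stage. The following is a minimal sketch of that relationship, assuming the stratum sampling fractions tabulated in Table 2B (Section Two); it is not the survey’s own weighting code, which is described in Section Seven.

    # Minimal sketch: design weights from differential sampling fractions.
    # Under stratified sampling, a unit's design weight is the reciprocal
    # of the sampling fraction for its SIC-by-size stratum.

    # Two example strata, with fractions taken from Table 2B:
    sampling_fractions = {
        ("D", "10-24"): 0.00111,   # Manufacturing, 10-24 employees
        ("E", "500+"): 0.62069,    # Electricity/gas/water, 500+ employees
    }

    def design_weight(sic_group: str, size_band: str) -> float:
        """Design weight = 1 / probability of selection for the stratum."""
        return 1.0 / sampling_fractions[(sic_group, size_band)]

    print(design_weight("D", "10-24"))  # ~901: each sampled unit stands for ~900 others
    print(design_weight("E", "500+"))   # ~1.6: the largest utilities are nearly a census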

One or more respondents were interviewed at the selected workplace, each being interviewed as a role holder with specific responsibilities. The management respondent was defined as ‘the senior manager dealing with personnel, staff or employee relations’ at the establishment. In the great majority of cases this person was identified and interviewed at the sampled establishment; in the remainder of cases there was no appropriate respondent at the establishment and the interview therefore took place elsewhere in the parent organisation - though still focused on the sampled establishment.

Interviews were also sought with worker representatives at each of the establishments at which a management interview took place. An establishment’s eligibility for the worker representative interview was defined during the course of the management interview. It was derived from the answers to a number of questions. Interviewers sought interviews with the representative of the largest (in terms of number of members at the site) recognised trade union or staff association. If an establishment did not recognise unions for the purpose of negotiating pay and conditions for any section of the workforce, but did operate a formal consultative committee of employees and managers, then the senior employee representative of the committee was sought. Frequently such consultative committees were found to exist at workplaces where there were no recognised unions. However this was not always so. In many workplaces consultative committees were found to co-exist with recognised trade unions. And of course over half of workplaces had no worker representative (of either sort) to be interviewed.

[1] The Department of Trade & Industry on this occasion. The previous surveys were funded by the (former) Employment Department. Employment relations were among those areas of responsibility transferred from the ED to the DTI in 1995.
[2] The Economic and Social Research Council, the Policy Studies Institute and (but not in 1980) the Advisory, Conciliation and Arbitration Service.
[3] For 30 years following its registration as a charitable trust in April 1969, the National Centre for Social Research operated under the name Social and Community Planning Research (SCPR). The change of name took place in May 1999. The conduct of WERS 1997-8 was therefore under the former name, which appears passim throughout the documentation.
[4] Unlike the previous surveys, the nine-month fieldwork period covered successive calendar years. The greater part of the fieldwork took place in 1998.

For the first time in the series employees were also included in the remit of the survey. A random selection of 25 employees was made at each establishment (provided that management agreed to this further survey) and self-completion questionnaires, along with freepost reply envelopes, were left for the selected employees. At establishments with fewer than 25 employees, all employees were included in the scope of the survey. A detailed account of the methodology and procedures relating to the Survey of Employees is given in Sections Two and Four. Approximately 28,250 questionnaires were completed and returned - an average of 15 from each of the 1,880 co-operating establishments.

In 1984 and 1990 reinterviews had been carried out with establishments which had taken part in the previous surveys (ie 1980 and 1984 respectively). This ‘panel’ element of the series was repeated in 1997-8 on a larger than ever scale. The issued sample size was 1,301 (randomly selected from the 2,061 productive interviews in 1990). Nearly 900 interviews[5] were achieved. A fundamental difference between the 1997-8 panel survey and its predecessors was in the design and content of the questionnaire. For the first time a quite distinct questionnaire was developed specifically for use in the panel survey. The reason for this was that the 1997-8 cross-section questionnaire included relatively few questions from 1990. The panel questionnaire therefore comprised mostly questions repeated from 1990. The remainder of the questions focused on changes that had occurred in the intervening period. Only the management respondent, defined in the same terms as above, was interviewed in the panel survey; there were no interviews with worker representatives; and there was no employee survey.

All three face-to-face interviews (cross-section: management and worker representative respondents; panel: management respondents) were conducted, for the first time in the series, as Computer Aided Personal Interviews (CAPI). There were considerable gains in efficiency from this mode of interviewing, compared with the use of paper questionnaires. In particular it led to great improvements in the data editing and cleaning stages of the survey. Thus it facilitated the availability of interim working data files for early analysis and report drafting, whilst fieldwork was still in progress.

A joint steering committee was again established by the WIRS funding organisations to initiate and supervise the project. The planning of the 1997-8 survey began in 1996. A research team from the funding organisations[6] was responsible to this committee for the conduct of the survey. From the commissioning of the survey (May 1997) through to the final handover of data (September 1998), there was a parallel team consisting of researchers from the National Centre.

The research team at the National Centre comprised the authors of this report, with Colin Airey and Jon Hales as co-directors of the project. Susan Purdon and Christos Korovessis, from the Survey Methods Centre at the National Centre, provided statistical advice on design, sampling and weighting issues. Anthony McKernan’s areas of responsibility were the development of the CAPI questionnaires and the editing and coding of data after fieldwork. Rosemary Hamilton had a pivotal role in briefing interviewers, monitoring fieldwork progress and negotiating agreement with large organisations where agreement had to be obtained from Head Offices before establishments could take part in the survey.

[5] But see Sections Five and Eight.
[6] There were seven members. Three were from the DTI (Mark Cully, Stephen Woodland, Andrew O’Reilly), three from the Policy Studies Institute (Dr Neil Millward, Alex Bryson, John Forth), one from ACAS (Gill Dix). In July 1998 Dr Millward and John Forth joined the staff of the National Institute of Economic and Social Research.

There was a degree of continuity within these research teams from previous surveys. Dr Neil Millward and Colin Airey had both worked as principal researchers for their organisations on the previous three surveys. Rosemary Hamilton had also worked on all of the surveys as an interviewer, senior fieldwork co-ordinator and as a researcher.

The survey data were lodged with the ESRC Data Archive in January 1999 (cross-section) and July 1999 (panel). They are generally available to all bona fide researchers, subject to a number of limitations:

- there will be no locational identifiers;
- industrial classification will be limited to SIC(92) Major groups;
- full text answers to open and ‘other specify’ questions will be excluded.

Access to the full data file (without the limitations listed above) will be granted only with the consent of the WERS steering committee, working through a sub-committee consisting of representatives of the DTI, ESRC and the National Centre. For further details see Section Eight.

    The initial results of the survey are reported in:

• First Findings from the 1998 Workplace Employee Relations Survey, by Mark Cully, Stephen Woodland, Andrew O’Reilly, Gill Dix, Neil Millward, Alex Bryson, and John Forth (published by the DTI on behalf of the DTI, ESRC, ACAS and PSI, October 1998).

    Further publications include:

• Britain at Work: as depicted by the 1998 Workplace Employee Relations Survey, by Mark Cully, Stephen Woodland, Andrew O’Reilly, and Gill Dix, to be published by Routledge in September 1999;

• All Change at Work? British employee relations 1980-98, portrayed by the Workplace Industrial Relations Survey series, by Neil Millward, Alex Bryson and John Forth, to be published by Routledge in March 2000.

This Technical Report is Volume Two in a series of documents describing the output and methodology of the survey. They have been prepared jointly by the funding research teams and the National Centre’s researchers and are available from the ESRC Data Archive. For further details see Section Eight (8.2).

SECTION TWO: SAMPLE DESIGN AND SELECTION

    2.1 Design and selection of the sample for the cross-section survey

The 1997-8 WERS cross-section sample comprises a sample of establishments and a sample of employees at those establishments. The selection of each of these samples is described below.

    2.1.1 Selection of the cross-section sample of establishments

The sampling frame used for the 1997-8 WERS is the Inter-Departmental Business Register, which is maintained by the Office for National Statistics[7]. This is undoubtedly the highest quality sample frame of organisations and establishments in Britain. In particular, the sample frame is believed to be the most complete and the most accurate, respectively reflecting the continuous updating of the frame from VAT and PAYE sources and the removal of establishments which no longer exist. However, it has to be recognised that the primary purpose of the IDBR is to provide a basis for ONS statistical inquiries. It is not maintained for sampling purposes, and has some deficiencies in this respect.

The data now held in the IDBR derive in large part from the Census of Employment, and this relationship is readily apparent in the way address information is organised. There is also considerable evidence that the Census data have been updated, both in terms of the names of organisations and in the details of addresses, such as the postcode. Records have in many cases retained the local unit identifier which they had on the Census of Employment.

The IDBR is used as the basis for various statistical inquiries which contribute to national accounts and other purposes. It is maintained by reference to PAYE and VAT records. A feature of these sources is that they do not provide a direct indication of employment size; estimation procedures are used to infer the workforce from financial turnover. However, a record is treated as ‘unproven’ until it has been covered in the Annual Business Inquiry. This is a successor to the triennial Census of Employment and, like the latter, is based on a census of establishments with 25 employees or over, and a sample of smaller establishments. The consequence of this is that smaller establishments may remain unproven for some time, but larger ones are generally proven within a year or so.

In order to ensure that the sample for the survey would be representative of all establishments with 10 employees or more (cross-section), it was necessary to include unproven units in the sampling process. In the event, of the 3,192 addresses issued to interviewers (see below) only 58 were unproven.

The sampling unit used for the survey was the IDBR’s ‘local unit’, which in most instances corresponds with the definition of an establishment used in the survey[8]. The sample was restricted to local units with 10 or more employees and with a Standard Industrial Classification (SIC) major group between D and O. The sample therefore excludes major groups A-C (ie agriculture, hunting, forestry and fishing; and mining and quarrying) and P, Q (private households with employed persons and extra-territorial bodies). Previously in the series, mining and quarrying in general had not been excluded, only deep coal mining. Local units located in Northern Ireland were also excluded from the sample.

[7] The National Centre was granted access to IDBR data under the WERS contract, with strict limitations on the use to which they might be put.
[8] See Interviewer Handbook pp. 7-14 for discussion of the definition of establishment.

To avoid overlaps with the panel sample, all eligible/in-scope addresses from the 1990 WIRS (a total of 2,492 cases) were, whenever a match could be found, excluded from the IDBR sampling frame. In practice just 1,036 of the 2,492 were successfully matched to and excluded from the sampling frame, which consisted of some 341,411 local units.

There were, however, limitations in the matching process, which involved comparing units identified in the 1987 Census of Employment with units recorded on the IDBR. These limitations were evidenced by the fact that during the course of fieldwork a further 47 ‘overlap’ units were identified in the cross-section sample. 38 of these were withdrawn from the issued cross-section sample and no interview was attempted. In the remaining 9 cases the cross-section interview had been completed before the duplication was identified, and therefore a panel interview could not take place (see Section Five: 5.1.1).

All remaining local units on the sampling frame were divided into strata, the strata being defined in terms of SIC major groups and employee numbers. The distribution of local units by these strata is shown in Table 2A[9]. The lower row of figures for each group (shown in italics in the original) are the IDBR counts before exclusion of the 1,036 units from 1990 matched in the IDBR.

[9] The figures of Table 2A are aggregated across two IDBR databases: proven units and unproven units.

TABLE 2A: IDBR COUNTS AFTER AND BEFORE EXCLUSION OF THE 1990 WIRS SAMPLE

Upper row for each group: number of local units in the sampling frame.
Lower row: total number of local units on the IDBR.

SIC92                      Number of employees in each unit
Major Group    10-24    25-49    50-99  100-199  200-499    500+     Total

D             25,195   11,502    6,742    4,402    2,653     801    51,295
              25,199   11,534    6,783    4,459    2,725     926    51,626

E                340      228      176      144      135      58     1,081
                 340      228      178      149      141      62     1,098

F              8,811    3,100    1,448      659      274      64    14,356
               8,813    3,115    1,452      667      278      66    14,391

G             45,179   13,031    5,317    2,601    1,537     212    67,877
              45,183   13,060    5,349    2,635    1,572     230    68,029

H             23,741    5,928    1,890      762      213      56    32,590
              23,744    5,939    1,901      768      216      56    32,624

I              8,577    3,748    2,248    1,285      686     250    16,794
               8,582    3,766    2,260    1,297      697     259    16,861

J              9,072    2,960    1,589      750      454     192    15,017
               9,080    2,966    1,597      760      459     197    15,059

K             23,544    8,271    4,235    2,572    1,304     415    40,341
              23,549    8,288    4,249    2,585    1,317     437    40,425

L              6,224    3,744    2,612    1,559      931     306    15,376
               6,228    3,755    2,624    1,575      945     325    15,452

M             13,299   10,096    4,163    1,824      509     262    30,153
              13,304   10,117    4,186    1,835      511     275    30,228

N             21,019    9,519    3,778    1,295      704     469    36,784
              21,022    9,527    3,784    1,307      715     523    36,878

O             12,312    3,785    1,636      660      254      64    18,711
              12,314    3,792    1,641      664      256      73    18,740

Total        197,313   75,912   35,834   18,513    9,654   3,149   340,375
             197,358   76,087   36,004   18,701    9,832   3,429   341,411

Within each cell of Table 2A a simple random sample of local units was selected. Table 2B shows the sample size within each cell; the lower row of figures for each group (in italics in the original) gives the sampling fractions.

TABLE 2B: SAMPLE SIZES AND SAMPLING FRACTIONS BY SIC92 MAJOR GROUP AND EMPLOYMENT SIZE

Upper row for each group: number selected. Lower row: sampling fraction.

SIC92                 Number of employees in each selected unit
Major Group     10-24     25-49     50-99   100-199   200-499      500+   Total

D                  28        56        66        85       111        80     426
              0.00111   0.00487   0.00979   0.01931   0.04184   0.09988

E                   2         7        11        19        38        36     113
              0.00588   0.03070   0.06250   0.13194   0.28148   0.62069

F                  27        40        37        33        30        15     182
              0.00306   0.01290   0.02555   0.05008   0.10949   0.23438

G                  78       100        83        79       100        31     471
              0.00173   0.00767   0.01561   0.03037   0.06506   0.14623

H                  50        53        34        27        16         9     189
              0.00211   0.00894   0.01799   0.03543   0.07512   0.16071

I                  15        29        35        39        45        36     199
              0.00175   0.00774   0.01557   0.03035   0.06560   0.14400

J                  18        27        29        27        35        33     169
              0.00198   0.00912   0.01825   0.03600   0.07709   0.17188

K                  44        65        68        80        86        61     404
              0.00187   0.00786   0.01606   0.03110   0.06595   0.14699

L                  10        29        41        48        61        46     235
              0.00161   0.00775   0.01570   0.03079   0.06552   0.15033

M                  22        78        65        55        33        38     291
              0.00165   0.00773   0.01561   0.03015   0.06483   0.14504

N                  35        74        59        39        46        72     325
              0.00167   0.00777   0.01562   0.03012   0.06534   0.15352

O                  33        45        38        31        25        16     188
              0.0027    0.0119    0.0232    0.0470    0.0984    0.25000

Total             362       603       566       562       626       473   3,192

    Thus the main features of the design are as follows:

• the sampling fractions used increase with employment size, in part to give sufficient numbers within each size band for separate analyses, but also to allow reasonably efficient employee-based estimates to be derived;

• the sampling fractions were increased for SIC major groups E (Electricity, gas and water supply), F (Construction), H (Hotels and restaurants), J (Financial intermediation), and O (Other community, social and personal service activities). The aim was to achieve increased sample sizes of 100-150 for groups E, F, H, J, and O[10]. This ‘over-sampling’ was accommodated by decreasing the sampling fraction within SIC major group D (Manufacturing). This is a change from the sample design of previous surveys. In 1990 under-sampling (by a factor of 1 in 4) had been limited to units classified as Public Administration, Education and Health Services. In 1980 and 1984 there was no under- or over-sampling in terms of industrial classification. A schematic illustration of the stratified draw follows the footnote below.

[10] Owing to a higher than anticipated level of out-of-scope establishments, the achieved sample sizes in these SIC major groups were below the target levels (see Section Five, Table 5E).
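
As an illustration of the design just described, the sketch below draws an independent simple random sample within each SIC-by-size cell at that cell’s sampling fraction. It is a schematic illustration only, not the production sampling code; the frame identifiers are dummies, and the two cells shown are taken from Tables 2A and 2B.

    # Schematic stratified selection: an independent simple random sample
    # within each stratum, at the stratum's sampling fraction.
    import random

    # (frame count, number to select) for two illustrative cells:
    strata = {
        ("D", "10-24"): (25195, 28),
        ("E", "500+"): (58, 36),
    }

    rng = random.Random(1)
    sample = {}
    for (sic, size), (count, n) in strata.items():
        frame_ids = list(range(count))       # stand-ins for IDBR local units
        sample[(sic, size)] = rng.sample(frame_ids, n)
        print(sic, size, "fraction:", round(n / count, 5))
    # D 10-24 fraction: 0.00111; E 500+ fraction: 0.62069 (as in Table 2B)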


The 1987 Census of Employment, from which the WIRS 1990 sample was drawn, comprised a similar number of units. There were 142,283 units with 25 or more employees, compared with 144,053 on the IDBR in 1997. The intervening decade saw an increase in the number of establishments with under 200 employees and a decline in establishments over that size. The overall sampling fractions for each establishment size band generated by the matrix that comprises Table 2B were very similar to those employed in 1990, with the exception of units with 1,000 or more employees, as Table 2C indicates.

The relatively small number of large units is the major difference in the 1997-8 WERS sample design compared with previous surveys in the series. In 1990, 1,007 units with more than 500 employees were selected, compared with under 500 such units in 1997.

TABLE 2C: COMPARISON OF SAMPLING FRACTIONS WITH 1990

                          1987                        1997
No of employees   No of      Average         No of      Average
in unit           units      sampling        units      sampling
                             fraction                   fraction
                             (1 in n)                   (1 in n)

10-24                n/a        n/a         197,358        545
25-49             74,956        111          76,087        126
50-99             35,215         63          36,004         64
100-199           18,178         36          18,701         33
200-499            9,921         21           9,832         16
500-999            2,693          7   )
1000-1999            960          2   )       3,429          7
2000+                360          2   )

TOTAL            142,283       [44]         144,053*      [51]*

* excluding establishments with 10-24 employees

In addition to the number of units sampled, as indicated in the above table, a reserve pool of 500 units was selected, to be used should the number of establishments at which interviews were achieved fall appreciably below the anticipated level. The selection of such a reserve pool had been the practice in the previous WIRS surveys. However, on no occasion so far in this series has the reserve sample ever been drawn upon.

    2.1.2 Selection of employees for the Survey of Employees

Within each establishment taking part in the survey a sample of 25 employees was selected (or all employees were selected if the establishment had between 10 and 25 employees). The sample was drawn by interviewers using random number sheets designed specifically for the survey. The process is fully described in the Interviewer Handbook. Copies of the documents used in the sampling procedures are included in Section Nine.
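
In effect this is a fixed-size random draw. The sketch below illustrates the equivalent selection under that reading; it is a stand-in for the printed random number sheets, with a hypothetical staff list and seed.

    # Fixed-size employee sample: all staff if N <= 25, otherwise a
    # simple random sample of 25 (mirroring the rule described above).
    import random

    def select_employees(staff_list, max_sample=25, seed=42):
        if len(staff_list) <= max_sample:
            return list(staff_list)
        rng = random.Random(seed)
        return rng.sample(staff_list, max_sample)

    # Hypothetical establishment with 180 employees:
    staff = [f"employee_{i}" for i in range(180)]
    print(len(select_employees(staff)))  # 25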

The decision to select a fixed number of employees from each establishment, rather than sample using a variable sampling fraction, was influenced by several practical considerations. Firstly, the fixed sample-size approach was simple for interviewers to handle because they did not have to vary the sampling method from one establishment to the next. With a variable sampling fraction this consistency would be lost. A second major consideration was that the interviewer knew in advance, and importantly, could tell the establishment in advance, what the sample size would be. Thirdly, the fixed sample approach gives control over the final sample size, both overall and within individual establishments. With a variable sample-size approach, establishments with more employees than expected could potentially be asked to provide a very large sample of employees.

The main disadvantage of using a fixed sample size is that the probability of selection of employees differs from establishment to establishment, and this has to be compensated for by weighting the data when the data are aggregated across establishments (see Section Seven: 7.1.1). Weighting the data leads to some loss of precision in survey estimates. Table 2D shows the range of sampling fractions for employees under the (simplistic) assumption that IDBR employee numbers were accurate at the time of the WERS interview. The figures demonstrate that, with the exception of employees from the very largest and the very smallest establishments, within SIC major groups there is relatively little variation in the probabilities of selection of employees. In practice the number of employees at the time of interview differed from the IDBR count in a considerable proportion of cases. The impact of this was to widen the ranges of sampling fractions for employees quite considerably.

TABLE 2D: MINIMUM & MAXIMUM PROBABILITIES OF SELECTION FOR EMPLOYEES IF IDBR EMPLOYEE NUMBERS ARE ACCURATE AT TIME OF WERS INTERVIEW

SIC92                        Number of employees in unit
Major Group         10-24     25-49     50-99   100-199   200-499      500+

D      Minimum    0.00111   0.00248   0.00247   0.00243   0.00105   0.00000
       Maximum    0.00111   0.00487   0.00489   0.00483   0.00523   0.00499

E      Minimum    0.00588   0.01566   0.01578   0.01658   0.00704   0.00000
       Maximum    0.00588   0.03070   0.03125   0.03299   0.03519   0.03103

F      Minimum    0.00306   0.00658   0.00645   0.00629   0.00274   0.00000
       Maximum    0.00306   0.01290   0.01278   0.01252   0.01369   0.01172

G      Minimum    0.00173   0.00392   0.00394   0.00382   0.00163   0.00000
       Maximum    0.00173   0.00767   0.00781   0.00759   0.00813   0.00731

H      Minimum    0.00211   0.00456   0.00454   0.00445   0.00188   0.00000
       Maximum    0.00211   0.00894   0.00899   0.00886   0.00939   0.00804

I      Minimum    0.00175   0.00395   0.00393   0.00381   0.00164   0.00000
       Maximum    0.00175   0.00774   0.00778   0.00759   0.00820   0.00720

J      Minimum    0.00198   0.00465   0.00461   0.00452   0.00193   0.00000
       Maximum    0.00198   0.00912   0.00913   0.00900   0.00964   0.00859

K      Minimum    0.00187   0.00401   0.00405   0.00391   0.00165   0.00000
       Maximum    0.00187   0.00786   0.00803   0.00778   0.00824   0.00735

L      Minimum    0.00161   0.00395   0.00396   0.00387   0.00164   0.00000
       Maximum    0.00161   0.00775   0.00785   0.00770   0.00819   0.00752

M      Minimum    0.00165   0.00394   0.00394   0.00379   0.00162   0.00000
       Maximum    0.00165   0.00773   0.00781   0.00754   0.00810   0.00725

N      Minimum    0.00167   0.00397   0.00394   0.00378   0.00164   0.00000
       Maximum    0.00167   0.00777   0.00781   0.00753   0.00817   0.00768

O      Minimum    0.00268   0.00607   0.00587   0.00590   0.00246   0.00000
       Maximum    0.00268   0.01189   0.01161   0.01174   0.01230   0.01250
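
Most entries in Table 2D can be reproduced from the establishment fractions in Table 2B: an employee’s overall probability of selection is the establishment’s probability multiplied by min(25/N, 1), where N is the number of employees, and the extremes of N within a band give the minimum and maximum. The sketch below is a reconstruction on that assumption, checked against one cell of the table; it is not code from the survey itself.

    # P(employee selected) = P(establishment selected) * P(employee | establishment)
    def employee_selection_prob(estab_fraction: float, n_employees: int) -> float:
        return estab_fraction * min(25 / n_employees, 1.0)

    # SIC group D, band 25-49 (establishment fraction 0.00487 from Table 2B):
    print(round(employee_selection_prob(0.00487, 49), 5))  # 0.00248, Table 2D minimum
    print(round(employee_selection_prob(0.00487, 25), 5))  # 0.00487, Table 2D maximum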

2.2 Design and selection of the panel sample

The fundamental requirement for the WERS 1997-8 panel sample was that it should be representative of surviving establishments at which interviews were conducted in 1990.

As a preliminary step, a file of local and reporting unit identifiers from the 1987 Census of Employment was sent to ONS. Their analysis showed that only 1,130 of the 2,061 cases (55%) were still held in their database. Among the other 931, there were 407 cases which were identified as having been on the IDBR, but flagged as no longer operating. The other 524 cases had been identified as no longer operating at the time when the Census of Employment was incorporated into the IDBR, in 1993-94. This was part of the basis for assuming that an achieved sample of about 1,000 was a realistic target in 1997, although it was recognised that factors such as changes of ownership, address, and organisational structure might have led to establishments being allocated a new local unit identifier.

As part of the understanding of changes in the population of establishments with 25 employees and over, it would be of considerable value to know the fate of the 1990 establishment. For example, where the parent organisation still existed, it might be possible to establish when the 1990 establishment had ceased to operate and what had happened to the activities which had been conducted in 1990.

The National Centre’s initial work on the sample of apparent deaths (see Section Three: 3.5) showed that the surviving establishments were likely to exceed 55% by some margin. For example, some of the establishments which appeared to have ceased operating were major hospitals and industrial establishments. It was apparent that changes associated with privatisation of former public utilities and the changes in the organisation of health services had been the basis for old records to be dropped, rather than the records being updated with current details.

It was apparent, therefore, that there would be little advantage in using the IDBR’s records as a basis for sampling. Instead, it was decided to draw a random sub-sample from the 2,061 cases, knowing that this included establishments which would have ceased to operate. Owing to uncertainty about the extent of survival of workplaces from 1990, it was also considered prudent to draw a reserve sample. With this precaution, a sample of 1,301 cases was drawn to achieve a total of 1,000 interviews. The reserve sample of 132 was in addition to this.

The panel sample was thus drawn as a stratified random sample from the 2,061 productive interviews from the 1990 Workplace Industrial Relations Survey. Establishments were stratified into groups defined in terms of the number of employees at the time of the 1990 interview, and a 63% sample of establishments was selected within each group.

In 1990 the respondents at 5% of the productive sample (n=108) had not positively said that they were willing to be recontacted for further questioning. Some of these had refused; some had given an unclear answer or not answered at all. On the grounds that these answers had been the personal views of respondents who were unlikely still to be in post, it was decided to include the establishments in the survey[11].

[11] In the event, the issue was not raised during the course of fieldwork by any respondent. The same proportion (5%) of the panel interviews achieved in 1997-8 were at establishments at which non-positive agreement had been recorded in 1990.

The issued sample was distributed as follows:

TABLE 2E: PANEL SAMPLE SIZE, BY ESTABLISHMENT SIZE IN 1990

Number of employees in 1990    Sample size

25-49                                  221
50-99                                  228
100-199                                221
200-499                                206
500-999                                175
1000-1999                              186
2000+                                   64

Total                                1,301

The decision to take a random sub-sample was based on the requirement to increase the range of weights as little as possible. Since the ‘target sample’ was to be representative of surviving establishments, each establishment would carry the same weight as it had in 1990, and the overall result would be estimates which were representative of the population of surviving establishments.

As with the cross-section sample, a reserve of about 10% of addresses (n = 132) was selected in case the number of interviews achieved should prove to be too low for adequate analysis. In the event neither the cross-section nor the panel reserve samples were used in the fieldwork.

SECTION THREE: DEVELOPMENT WORK

    3.1 Introduction

    3.1.1 Scope of development work

The piloting and development stages of WERS 97-8 took place during a five-month period, May to September 1997. The requirement was to cover the final stages of development and design of a number of different data collection instruments and procedures. They comprised:

    • three face-to-face questionnaires for:

- the management respondent (cross-section) - identified throughout fieldwork and data processing as the MQ;

    - the worker representative (cross-section) - the WRQ;

    - the management respondent (panel) - the PQ.

These were to be conducted by interviewers using laptop PCs in CAPI (Computer Aided Personal Interviewing);

    • three paper self-completion questionnaires:

    - the Employee Profile Questionnaire (cross-section) – EPQ;

    - the Basic Workforce Data Sheet (panel) - BWDS

The intention was that these would be filled in by management respondents (or their nominees) before the face-to-face interview, thus facilitating any necessary reference to staff records and also reducing the length of the interview;

    - the Survey of Employees Questionnaire (cross-section) - SEQ

These were to be distributed to a sample of employees (maximum 25) at co-operating establishments, subject to management agreement. They were to be returned individually, generally by post, directly to the National Centre’s office;

• contact procedures for the cross-section sample. It was anticipated that these would largely replicate the procedures used on previous surveys in the series, in spite of the difference in sampling frame (Inter-Departmental Business Register rather than the Census of Employment). However this presupposition needed to be checked;

• contact procedures for the panel sample. The particular requirement was to develop criteria for determining whether the establishment contacted in 1997-8 had been in continuous existence since the previous WIRS interviews were last carried out (1990).

3.1.2 Programme of development work

Accordingly the programme of work that was devised comprised a number of discrete activities:

• two pilot surveys, each one including both main sample and panel addresses. The management and worker representative interviews for both the pilot surveys were conducted in CAPI. The DTI had conducted some piloting with paper questionnaires prior to commissioning. Conducting further development work on paper was not considered worthwhile. Pilot versions of the EPQ, BWDS and SEQ were also tested. It was, however, always the intention that the material from the EPQ and the BWDS would be keyed by the interviewer at the start of the interview and would thus form part of the CAPI datafile;

• qualitative work on the content and question wording of the SEQ. This questionnaire had not been as thoroughly tested prior to commissioning as had the other data collection instruments. This work was carried out in tandem with the pilot surveys;

    • design work on the layout of the three paper questionnaires;

• telephone screening of a sample of the 1990 productive interviews, the overall aim being to identify the extent to which establishments interviewed seven years previously might have changed but still retained essential continuity.

In the following sections we give an account of each of these development stages.

    3.2 The pilot surveys

    3.2.1 Cross-section: Management Questionnaire (MQ)

A range of sources was used to provide addresses for the two pilot surveys. Mostly these were addresses of businesses and organisations that were ‘unused’ from samples drawn from recent National Centre surveys. They did not include public sector workplaces because these would require Wave 2 access procedures (see Section Four: 4.2.2). An attempt, therefore, was made to include private educational establishments and private hospitals, since these might give some indication of public sector problems.

Six interviewers (four of whom had 1990 WIRS experience) participated in Pilot 1 in July; nine interviewers (four of whom had worked on Pilot 1) participated in Pilot 2 in August. Day-long personal briefing and debriefing sessions were held for each pilot.

62 interviews were achieved in total (22 in Pilot 1, 40 in Pilot 2). There were, however, no indications of likely response rates; interviewers were asked to obtain quotas from their ‘pool’ of addresses. They were asked to interview at larger establishments, where the option was available, and to include as wide a range of workplace activities as possible.

Interviewers were questioned closely at the debriefing sessions about the contact procedures that they had used and asked for their recommendations for improvement. The combination of telephone and postal contact used in previous surveys still seemed to be appropriate. However the formal system instigating 4 or 5 stages of contact (described in Section Four: 4.2.1 and the Interviewer Handbook, Section Six) was based on material drawn from these pilot debriefings.

The target interview duration was ‘90 minutes average - with nearly all cases ranging between 60 and 120 minutes’. The average durations established in the pilots were in excess of the target figure. They varied by size of establishment, but not as much as had been the case in 1990, when smaller establishments were filtered out of large sections of the questionnaire.

Weighting the average durations to reflect the likely distribution (by number of employees) of the achieved sample in the survey proper made some difference in Pilot 1 but little difference in Pilot 2, where the unweighted mean of 102 minutes increased to 103. In Pilot 2, 80% of interviews lasted between one and two hours.

The questionnaire comprised 13 sections (A-M) of varying lengths. Detailed information on the duration of each section was provided to the research team. Interviewers were asked at the debriefing whether any particular sections had seemed to cause the respondent difficulties or irritation. None was mentioned. The research team had a particular interest in the EPQ, the content of which was appreciably greater than its equivalent in previous WIRS surveys. Nonetheless none of the interviewers reported problems arising from the document per se. It had not always been completed in advance - but that was to be expected. The general view was that staff records in nearly all workplaces are now computerised, making the EPQ data that much more accessible than in previous surveys.

Following each of the pilot surveys, the research team made modifications to the wording, ordering and routing of the questions. A substantial number of deletions was made after the second pilot with the aim of reducing the interview length to the target of 90 minutes. On completion of fieldwork, however, the mean duration proved still to be in excess of 100 minutes (Section Four: 4.4).

    3.2.2 Cross-section: Worker Representative Questionnaire (WRQ)

In total 16 worker representative interviews were achieved at the 62 workplaces in both pilots (8/22 in Pilot 1; 8/40 in Pilot 2).

Of these, 2 interviews were not with union representatives, but with committee representatives.

Approximately 60% of workplaces had no eligible representatives. The response rate achieved among the remainder was 70%.

There was a marked difference in the interview duration between Pilot 1 and Pilot 2. In the first pilot the average length was 62.5 minutes. In the second pilot a number of deletions were made, substantially reducing the length. In the main fieldwork, however, the mean duration was 47 minutes, well above the target level of 30 minutes.

    3.2.3 Panel: Management Questionnaire (PQ)

An essential feature of the panel questionnaire was the ‘feeding forward’ of data from the 1990 questionnaires. In order to replicate this feature in the pilot surveys, the only possible source of addresses was the 1990 (/1984) panel of some 540 or so ‘trading sector’ cases. The development work for the 1998 panel study, therefore, also excluded public sector cases.

It proved difficult from this source to provide an adequate number of addresses - located sufficiently near to the selected interviewers and allowing for the proportion that would be closed down and so on - to achieve the target number of interviews (20 in each of the pilots).

16 interviews were achieved in Pilot 1. The average duration was 65 minutes, in spite of deficiencies in the CAPI programming that meant that some questions were routinely, but wrongly, omitted.

In Pilot 2, in which 19 interviews were achieved, the average length had been reduced only marginally to 62 minutes - in spite of a number of deletions that were intended to have a more substantial effect.

After the second pilot further deletions were made from the questionnaire. It was also decided, for the main fieldwork, to reduce substantially the amount of data from the BWDS that interviewers were required to key in during the early part of the interview. Only the total employee numbers (male/female, full-/part-time and each of the main occupational groups) would be keyed. These were integral to the routing throughout the interview and hence to the CAPI programming.

In spite of these reductions the mean interview length for the main fieldwork proved to be in excess of 60 minutes.

    3.2.4 The Survey of Employees (SEQ)

In all cases in Pilot 1 (cross-section and panel) an attempt was made to select a sample of employees and distribute a questionnaire, although this was not scheduled to take place at panel addresses in the main fieldwork.

    The attempt was successful at 28 establishments.

Of the 10 establishments which declined to co-operate, two stated that they had just conducted an employee survey as part of the Investors in People accreditation process, and were not prepared to repeat a similar exercise. One manager gave as a reason the presence of ethnic minorities in the workplace(!); one the fact that redundancies were looming. Two or three cases could probably have been conversions if it had been possible to return at a later date. In two cases the request was referred to a Board of Directors who eventually turned the idea down.

In total 640 questionnaires were placed; 20 workplaces took the maximum of 25 questionnaires. The remaining 8 workplaces took questionnaires for all the employees they had, which amounted to 140 in total.

    Two different reminder strategies were adopted:

(A) A reminder letter after 2-3 weeks or so to the management respondent, with additional questionnaires for the (named) non-responders.

(B) A similar reminder letter for the management respondent without additional questionnaires, followed a further 3 weeks later by a second reminder letter with additional questionnaires for the non-responders.

Response from the two strategies was as follows:

                                Strategy A     Strategy B
Establishments:                     13             15
Questionnaires placed:             296            344
Response pre 1st reminder:     185 (62.5%)    211 (61.3%)
Final response:                235 (79.4%)    271 (78.8%)
Questionnaires used:               405            440
Used/placed:                      1.375          1.279

The additional reminder incorporated into Strategy B did not therefore appear to increase response. Additionally, there were some indications from interviewers and from contacts directly between respondents and the DTI that two reminders following initial agreement to co-operate might be seen as verging on harassment.

Strategy A was therefore adopted for the main fieldwork. It was not anticipated that response in the main fieldwork (for which see Section Five: 5.3) would reach the very high levels achieved in the pilot, and this proved, in the event, to be so.
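
The percentages in the table above follow directly from the counts; the small helper below (hypothetical, not from the report) reproduces the arithmetic.

    def response_rate(returned: int, placed: int) -> float:
        # Completed questionnaires as a percentage of those placed.
        return 100.0 * returned / placed

    # Final response, Strategy A vs Strategy B:
    print(round(response_rate(235, 296), 1))  # 79.4
    print(round(response_rate(271, 344), 1))  # 78.8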

SEQ placements were also made in Pilot 2. In order to reduce costs, interviewers were asked to place at only 20 of the 40 workplaces at which management interviews were achieved. The level of co-operation received from management was as in Pilot 1. Just over 60% of questionnaires placed were returned completed. No reminders were issued owing to the lack of time before the start of fieldwork.

A random subset of 100 of the 506 questionnaires received from Pilot 1 was keyed. The marginal totals from this dataset constituted an important tool for the research team in assessing the wording of questions and the range of answers offered, and in determining priorities for the necessary cutting of the questionnaire length.

3.3 Qualitative Work for the Survey of Employees

    3.3.1 Cognitive testing

Cognitive interviewing highlights where respondents misunderstand survey questions or key concepts, do not know or cannot recall the needed information from memory, use an inappropriate strategy for making a judgement, or prefer to hide certain information or provide an ‘acceptable’ answer.

During the last week of July two interviewers were briefed to conduct cognitive interviews in three workplaces (in London and the North of England). Agreement from the participating workplaces had been gained by the DTI. With the employers’ permission, between 7 and 10 employees were selected and invited to see the interviewer at different intervals throughout the day. Each respondent spent at least half an hour, sometimes rather longer, with the interviewer. During the first part of this period respondents were asked to complete a copy of the pilot version of the SEQ. The interviewer then spent the rest of the time going back over the questions, ascertaining how the respondents went about giving the answers they did.

    Approximately 25 people participated in the cognitive exercise.

Researchers from the National Centre and the funding organisations briefed and debriefed the interviewers. The debriefing took place on the 12th August 1997.

3.3.2 Expert Panel

The research team also made use of an in-house ‘expert panel’ to evaluate the draft version of the SEQ. The panel comprised a number of National Centre research directors and researchers; it meets on an occasional basis with the specific purpose of evaluating and improving draft questionnaires. The procedure draws on the expertise and fresh perspectives of persons outside any particular project team. The National Centre has found such panels to be a fast, economic and effective method of identifying potential problems with questionnaires at an early stage in their development.

On this occasion, the panel met with representatives from the funding organisations on the 12th August, the same day as the debriefing of the interviewers engaged on the cognitive pilot. A large number of points were raised and discussed. Complete consensus was rare, but the research team took away a number of issues for consideration.

    Among the changes suggested, considered and implemented were:

• using terms and language more consistently, eg unions/staff associations;
• standardising response categories;
• coping with ‘don’t know’ or ‘can’t decide’ consistently;
• re-ordering some questions to improve clarity;
• considering difficulties in definition, eg ‘your workplace’ and ‘manager’;
• the wording and use of 3 and 5 point scales;
• identifying questions that might be deleted, since the general view was that the questionnaires needed to be reduced by about a quarter.

    3.4 Design of paper questionnaires

The versions of the paper questionnaires used in the pilots were documents typed onto standard A4 white paper. While they served their purpose well, as the high response rate achieved in the pilot survey showed, it was always the intention that the final layout of these documents would be created by a professional graphic designer.

The design work, which was subcontracted by the National Centre to Davenport Associates, was carried out during the months of July and August 1997, in parallel with the pilot work. The style of all three documents was harmonised: each was colour-washed in a different colour, with white boxes for respondents to enter their answers. The BWDS was similar in content and arrangement to the one used in 1990. The EPQ, although also on 1 x A3 folded to 4 x A4, was a somewhat more extensive document than its predecessors. A particular design concern for the SEQ, which was 2 x A3 folded to 8 x A4, was to facilitate optical scanning of the completed sheets, this being the proposed mode of data capture. The final version of the SEQ also incorporated typeset paragraphs of information in six Asian languages.

    Copies of the final documents are included in Section Nine.

    3.5 Telephone screening for the panel

As has been described in Section Two: 2.2, the DTI had submitted the names and addresses of the 2,061 establishments that were productive in 1990 to the Office for National Statistics in order to ascertain which of them were, according to IDBR records, still in existence. The result of this analysis was that just over half (n = 1,130) were classified as ‘Live’. 524 were classified as having closed down or gone out of business. ONS were unable to trace the remaining 407.

There were some surprising features of the classification. For example, the proportion of ‘live’ establishments did not vary according to the size of establishment (ie the number of employees) in 1990. Consequently the research team decided to recontact a substantial proportion of the 2,061, across all three IDBR classifications, in order to check the validity of the classification.

It was agreed at the outset that the two key characteristics of a continuing establishment should be:

• some continuity of activity (ie some activity at all times between 1990 and 1998);
• some continuity of employment (ie some employment at all times between 1990 and 1998).

There were, however, no absolute criteria, with a decision on each case being based on the balance of evidence overall. This meant that there would inevitably be some cases where the assessment was that the establishment was likely to be continuing but where, once the data were analysed, it would become evident that there was insufficient continuity for meaningful comparisons to be made.

A subset of approximately 400 establishments was drawn. The selection was random apart from the exclusion of ‘Wave 2’ addresses, which could not be approached at establishment level (see Section Four: 4.2.2). Those establishments which had not agreed in 1990 to be recontacted (approximately 5% of the 2,061, n = 108) were included on the grounds that there was a high probability that the person interviewed seven years previously would no longer be present. These establishments were also included at all subsequent stages of the panel survey.

A short (4-side) questionnaire was devised for use over the telephone (known during the conduct of the whole survey as the TQ). To the questionnaire was attached a copy of the 1990 Address Record Form containing the address, telephone number and other details of the contacts in 1990. Four interviewers were briefed to conduct this preliminary sift, which took place in August 1997.

It was eventually used as part of the standard contact procedure for all panel establishments. The topics covered by the TQ are listed below:

• presence of the 1990 respondent: This question was intended to establish the interviewer’s credentials, although where the person was still at the establishment they would be an appropriate respondent for the checklist questions. However, interviewers were instructed to ask anyone who was available. For this reason, it was decided that questions needed for the interview itself would be repeated with the survey respondent;

• change of name: This would never be a critical factor, even when it indicated a change of ownership, but was important as background information;

• change of address: A move to a different address, even outside the locality, would not be a critical factor. However, a move associated with a change in the structure of the establishment might be significant;

• amalgamation or separation of establishments or departments: The test of continuity in this case would depend largely on what happened to the establishment’s workforce. If a majority of the employees (50% or more) had stayed together as an entity, then the establishment to be covered in 1997-8 would be where these people were, whether that was the original location or elsewhere. In particular, if the 1990 establishment had been closed and the workforce absorbed into another site, then an interview would be conducted about that other site. This rule is different from that applied for the cross-section sample. In that case, a further factor was whether the establishment to which the move was made had existed at the time of the move, since it had a separate chance of selection if it already existed;

• change of activity: The nature of the activity could change quite appreciably and have no impact on continuity for the survey’s purposes. As an extreme example, a manufacturing site could change into a distribution centre and still qualify, as indeed occurred in the pilot survey. In this case the distribution involved the same products as had been manufactured previously, and there was no change of ownership. If both the products and ownership had changed, then the critical factor would have been the continuity of employment;

• change of ownership: A change of ownership, in itself, would not be a critical factor;

• number of employees: A major change in the size of the workforce would not, in itself, represent a break in continuity. For example, a dairy might have employed delivery staff in 1990, but have changed to a franchise operation in 1997-8, in which case the same individuals, still working as self-employed milkmen, would be excluded from the establishment’s employed workforce.

It can be seen that no simple criterion was available which would always result in an establishment being treated as continuing or not. In practice, for the survey to serve its purpose of charting changes in British employee relations, it needed to be able to cope with quite substantial changes, as well as obvious continuity.

It was thus decided that interviewers should be required to contact the research team to outline what they had discovered about each establishment, and the researcher would then reach a decision, either at the time or based on consultation with others.

From this pre-fieldwork screening and the results of the telephone screening, the research team devised a definition of a ‘continuing’ establishment. This definition was used throughout fieldwork by the research teams at the National Centre, the DTI and PSI in advising interviewers who had problems with establishment definition. It is set out below:

RULES FOR CONTINUING ESTABLISHMENTS

• CHANGE OF NAME
• CHANGE OF OWNERSHIP
• MOVE TO A DIFFERENT ADDRESS

None of the above, in themselves, destroys continuity of existence.

• CHANGE OF ACTIVITY
• ACTIVITIES ADDITIONAL TO 1990 ACTIVITY

There must be continuity of activity of some sort between 1990 and now. If there has been a break in which there was no activity, then the establishment is dead.

• NUMBERS OF EMPLOYEES

There can be more (many more) or fewer (many fewer) employees in 1997-8 than in 1990. The tasks they do can be widely different. But at no stage can there have been ZERO employees.

• SPLITS (WITHOUT CHANGE OF OWNERSHIP)

A 1990 establishment may have split into a number of parts:

- if any part is still at the 1990 address then interview there, provided there are 25 or more employees;

- if all parts are at different addresses then follow the largest part, provided there are more than 25 employees.

• SPLITS INVOLVING A CHANGE OF OWNERSHIP

A 1990 establishment may have been split among two or more employers by the original employer selling off part of the business:

- the part still belonging to the original employer counts as the continuing establishment, providing the basic test of continuity of employment is met and there are 25 or more employees at the time of interview;

- the part that was sold off is a new establishment and therefore out-of-scope;

- if none of the original 1990 establishment remains with the original employer (or another employer who took them over) it counts as ‘Closed Down’.

• AMALGAMATIONS

- If the amalgamated unit is at the 1990 address then interview there, even if those who have moved in outnumber the pre-amalgamation staff;

- If the 1990 establishment has been amalgamated with one (or more) units at (a) different address(es), then carry out the interview at the address which houses the largest number of 1990 employees (or their replacements), provided that the amalgamated unit has 25 or more employees.
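
The split rules lend themselves to a schematic encoding, shown below. This is a hypothetical illustration only, not a tool used on the survey; in practice each case was decided by a researcher on the balance of evidence.

    # Schematic encoding of the split rules above (hypothetical helper).
    # parts: list of (address, n_employees, still_with_original_owner).
    def continuing_after_split(parts, ownership_changed):
        if not ownership_changed:
            # Prefer a part still at the 1990 address (25+ employees);
            # otherwise follow the largest part (more than 25 employees).
            at_1990 = [p for p in parts if p[0] == "1990 address" and p[1] >= 25]
            if at_1990:
                return at_1990[0]
            largest = max(parts, key=lambda p: p[1])
            return largest if largest[1] > 25 else None  # None: 'Closed Down'
        # With a change of ownership, only the part retained by the original
        # employer counts as continuing; sold-off parts are new establishments
        # and out of scope.
        retained = [p for p in parts if p[2] and p[1] >= 25]
        return retained[0] if retained else None

    parts = [("1990 address", 40, True), ("new site", 120, False)]
    print(continuing_after_split(parts, ownership_changed=False))
    # -> ('1990 address', 40, True)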

    It became evident early in the course of the work that a substantial number of the addresses classified by the IDBR as closed down or untraceable were in truth still in existence and could be relatively easily identified and contacted by phone. The interviewers working on this screening operation did not have access to the sophisticated means of tracing telephone numbers that were employed later in the survey (see Section Four: 4.7). Nonetheless the overall conclusion - that the IDBR classification was of very limited value for this particular purpose - was clear. Consequently, in selecting the panel sample of 1,301 establishments, the research team took no account of the IDBR classification, including, pro rata, establishments classified as closed down or untraceable.

  SECTION FOUR: CONDUCT OF FIELDWORK

    4.1 Briefing and Interviewer numbers

    A series of 12 two-day briefing conferences was held between 3 October and 6 November 1997. At these conferences interviewers were briefed on both the cross-section and panel questionnaires. The briefings involved a description of the sample design and methodology, a full discussion of the problems of establishment definition, a summary of current employee relations structures in the workplace and procedures for contacting establishments and selecting respondents. There was considerable emphasis on the procedures to be adopted and the techniques required for gaining co-operation at the different stages of the survey process. A major section of the briefing was devoted to procedures relating to the SEQ. Time was also spent working through dummy schedules on the laptop PCs. A copy of the briefing agenda is annexed to this report.

    The briefings were conducted by the National Centre's researchers working in conjunction with researchers from the funding organisations.

    Six of the conferences took place at the National Centre's London offices. The remainder were in Glasgow, Liverpool, Leeds, Bristol and Birmingham (2).

    In total 156 interviewers were briefed. All of them were trained and experienced members of the National Centre's interviewing panel. Efforts were made to maximise the number of interviewers who had worked on previous WIRS surveys. In the event the number proved to be about 20.

    4.2 Sifting the Sample

    An essential part of the WIRS survey process, developed during previous surveys in the series, is the sub-division of the sampled addresses prior to fieldwork into what have come to be described as 'Waves' 1 and 2.

    Wave 1 addresses are those which, in the view of the research teams, could safely be approached by interviewers at establishment level; Wave 2 addresses are those which belong to organisations which needed to be approached at corporate (ie Head Office or the equivalent) level, in order to gain agreement for a subsequent approach to the establishments.

    Once approval has been obtained centrally, it is usually found that managers at the establishment are extremely co-operative. In many cases, their Head Office will have identified the individual best placed to act as the respondent, and will have copied their correspondence to this individual in advance of an interviewer's approach. However, it is not always easy to establish the organisation to which a sampled unit belongs (or the structure of an organisation), even with reference to the IDBR reporting unit. Problems of this nature can lead to Head Offices being contacted on more than one occasion seeking permission to contact a succession of establishments. This can be a source of embarrassment (to the asker) and irritation (to the asked).

    Part of the rationale for this strategy is that it seems extremely important to avoid a Head Office receiving a number of separate referrals from their branches. It also recognises that a limited number of major employers, accounting for a substantial part of British employment, are constantly being asked to take part in research studies. Their branch network may be so extensive, as with banks and retail organisations, that virtually every national study of employment practices is bound to involve selecting a number of their branches. There is, therefore, a special requirement in these cases to manage the survey in a way which ensures as favourable an impression as possible. A refusal to participate from the Head Office of an organisation of this sort can have a very detrimental effect on the representativeness of a national sample.

    Generally Wave 2 cases are those where there are two or more units belonging to the same organisation in the sample. This is not invariably so, however. Conversely, it was considered, on the basis of the research teams' previous experience with the survey series, that some organisations represented by more than one unit in the sample could safely be approached at establishment level.

    For the purposes of this sift, cross-section and panel addresses were combined. The sifting process mainly comprised computer searches, with the aim of grouping together units belonging to the same organisation, or which needed similar treatment, eg fire stations, police stations, ambulance services. The sift focused on the Standard Industrial Classification (SIC92) codes allocated by ONS (or, in the case of panel addresses, the Census of Employment) and within that on the reporting unit for each of the sampled units. The lists were also scanned visually with the aim of discovering links between units that were disguised by inconsistent keying, titling and so on. The panel reserve sample (but not the cross-section reserve) was included in the sift. Before fieldwork began, it was thought possible that it might be necessary to draw upon this reserve, in which case it would have been inefficient to seek Head Office approval for additional interviews at a later stage. It was not thought likely that the cross-section reserve would be drawn upon.
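    The grouping step of this computer search can be illustrated in code. The sketch below is hypothetical (the field names are invented) and shows only the mechanical part of the sift, assuming each sampled unit carries an IDBR reporting-unit identifier; the 'two or more units' rule was, as noted above, applied with judgement rather than invariably.

        # Hypothetical sketch of the mechanical grouping step of the sift.
        # Units sharing a reporting unit are grouped; organisations represented
        # by two or more sampled units are flagged as Wave 2 candidates.
        from collections import defaultdict

        def sift(units):
            by_org = defaultdict(list)
            for unit in units:
                by_org[unit["reporting_unit"]].append(unit)

            wave1, wave2 = [], []
            for members in by_org.values():
                (wave2 if len(members) >= 2 else wave1).extend(members)
            return wave1, wave2

        sample = [{"serial": "C-001", "reporting_unit": "ORG-A"},
                  {"serial": "C-002", "reporting_unit": "ORG-A"},
                  {"serial": "P-003", "reporting_unit": "ORG-B"}]
        wave1, wave2 = sift(sample)
        print(len(wave1), len(wave2))   # 1 2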

    4.2.1 Wave 1 addresses

    Of the total selected sample of 4,623 addresses (cross-section, panel and panel reserve), over three quarters (3,500-3,600) were classified as Wave 1.

    Included with Wave 1 addresses were those classified as Education, Health and Local Authorities. Together these accounted for approximately 25% of the total addresses (ie approximately 1,150). Consideration was given at an early stage to separating these from other Wave 1 addresses and sending a 'courtesy' letter to an appropriate 'Head Office' representative before proceeding to an interviewer contact at establishment level. Early work on this approach was relatively unfruitful. It proved surprisingly difficult to identify, prior to interviewer contact, the appropriate names (and sometimes even the addresses) of Head Offices. The approach was eventually abandoned and these addresses were issued to interviewers as Wave 1 - with hardly any cases necessitating subsequent correspondence with Head Offices.

    The contact procedures followed by interviewers for Wave 1 addresses are described in detail in the Interviewer Handbook (Section Six). In essence they comprised:

    Stage 1: a telephone contact with the establishment by the interviewer to identify the name and job title of the appropriate management respondent. Previous experience had indicated the importance of writing to a named person, rather than a post holder;

    Stage 2: the sending (by the interviewer) of a letter from the DTI to the respondent identified at Stage 1, to explain the nature of the survey and to ask for co-operation;

    Stage 3: a further telephone call to make an appointment for the interview with the management respondent;

    Stage 4: the sending, in advance of the interview, of the Employee Profile Questionnaire (cross-section sample) or the Basic Workforce Data Sheet (panel sample) and the Statement of Anonymity Procedures, accompanied by a letter confirming the date and time of the appointment.

    The materials provided for cross-section and panel contact stages differed in detail, but the procedures were the same - except in one respect. For panel interviews the telephone contact (Stage 1) was formalised into a brief telephone questionnaire12, the purpose of which was to enable the interviewer to determine whether the establishment had truly 'continued' in existence since 1990. The rules developed for determining continuity of existence are discussed in Section Three: 3.5.

    During these contact stages, interviewers were required to refer to the research team all panel cases where answers to the telephone questionnaire indicated some doubt about whether the establishment had been in continuous existence since 1990 and was, essentially, the 'same' establishment as had been interviewed in 1990.

    In Section Three: 3.1.2, we describe how interviewers working on the pre-fieldwork screening were provided with relevant material from the 1990 fieldwork in order to help with the identification of panel addresses. These procedures and routines continued into main fieldwork for those addresses not covered in the pre-fieldwork exercise. In more difficult cases, interviewers worked in liaison with office staff who had access to the BT Phone Disk and other software of use in address identification. (These procedures are described in detail in paragraph 7 of this section.)

    More generally, interviewers were required to refer back to the research teams all cases where they encountered difficulties in gaining co-operation at the contact stages. Initially these referrals were to National Centre researchers. A substantial number were then passed on to members of the DTI research team, who took over the dialogue with the (potential) respondent, either by phone or letter, and reported back through the National Centre team when co-operation had been gained (or otherwise).

    The DTI estimates that about half of the 400 or so workplaces referred to them during the process eventually agreed to take part in the survey.

    In addition to dealing with queries initiated by interviewers, the DTI research team operated throughout the course of fieldwork a freephone Helpline13, to deal with queries direct from (potential) respondents. Approximately 400 calls were dealt with by this Helpline during the survey.

    4.2.2 Wave 2 addresses

    Of the total issued sample, between 1,000 and 1,100 addresses (approximately 25%) were categorised in the initial sift as Wave 2 - requiring access to be negotiated at a higher level in the organisation prior to any contact at establishment level.

    12 Referred to throughout the course of the survey as the TQ. It was the same document as was used in the pre-fieldwork screening (see Section Three).
    13 The freephone number was included in the Stage 1 DTI letter.

    In broad terms these subdivided as follows:

    Central Government (including major Departments, Prisons,
    MoD establishments, Benefit Offices, Job Centres) ............ 190

    Police, Fire, Ambulance Services ............................. 70

    Utilities (Electricity, Gas, Water, Nuclear Fuels) ........... 150

    Telecommunications, Postal Services .......................... 130

    Finance, Banking ............................................. 125

    Retail, High streets ......................................... 270

    Transport .................................................... 50

    Other (inc major multinationals, TV, broadcasting) ........... 90

    The number of separate organisations classified as Wave 2 was approximately 160.

    The fundamental approach for Wave 2 addresses was for a letter (on DTI heading) to be sent to the Personnel/Employee Director, or other similar post holder, at the Head Office of each organisation, explaining the purpose of the survey, listing the addresses of the selected establishments (separately for cross-section and panel) and asking, provided there was agreement to co-operate, for the name and telephone number of the appropriate respondent and/or a contact person at each site. A copy of the standard letter used (which was subject to minor variations during the course of the survey) is appended. These letters were not sent until the name of the post holder had been identified in advance from directories. In some cases these letters were sent out by the DTI research team. In particular, all agreements with Civil Service and Government Departments were negotiated by the DTI research team. Mostly, however, the letters were sent out by the National Centre. There were a number of outcomes:

    • the organisation would ask for more detailed information about the survey and what the interview process would involve. To deal with this, a short (two-side) summary of the scope of the survey and the content of the questionnaires was prepared. Sometimes a further organisation-specific letter would be sent; sometimes it was considered that a personal visit and presentation by the DTI research team would be effective - and this was done;

    • there would be no response to the letter. These cases (of which there were a substantial number - in excess of 50) were passed to the National Centre Telephone Unit, where a small number of interviewers were briefed on the necessary follow-up procedures. A common reason for non-response was that no trace of the letter could be found, in which case duplicates were sent. Very frequently it was found that the name of the person originally specified was no longer appropriate, in which case the correct name was ascertained. It was also found that the letter had been addressed to a post holder at an inappropriately high level in the organisation, in which case the interviewer was directed by a secretary/PA to the level most likely to be productive.

    The work generated from the stages outlined above was considerable. It lasted from the end of 1997 through to April/May 1998, involving a considerable amount of time from the research teams at the DTI and the National Centre. Both at the DTI and at the National Centre there was the equivalent of one researcher working full-time for a period of 6 months on the task of getting agreement to participate. Once the right level of contact had been reached in an organisation, repeated telephone calls were needed before there was final agreement to release the establishment addresses. (Generally for Wave 2 establishments a contact name, not always that of the proposed respondent, was provided.)

    4.2.3 Worker representatives

    Contact with the Worker Representative at cross-section addresses was only achieved with the consent of the management respondent. The identification of the appropriate employee, whether Union or Committee representative, was made by the CAPI program. The request to carry out a second interview at the establishment was raised at the end of the management interview. The procedures to be followed and the material prepared for the Worker Representatives are detailed in the Interviewer Handbook (Section Six).

    4.3 Fieldwork Progress

    Interviewing for the cross-section survey began in mid-October 1997, immediately after the start of the briefing conferences. Late alterations to the CAPI programs meant that the panel interviews did not start until November. Interviewing finished in July 1998. Table 4A below sets out the month by which interviews were completed, for each of the two samples.

    The table shows that approximately 25% of cross-section interviews were completed in 1997. By the end of February 1998, 50% were completed, and by the end of March, 75%. Fewer than 10% of panel interviews had been completed by the end of 1997, but by the end of February 50% of this sample had also been completed. However, a relatively high proportion of panel interviews was carried out during the last four months (April to July 1998) of fieldwork.

    Determined efforts were made by the research teams to reduce the long 'tail' of fieldwork which had been a characteristic of previous WIRS. The difficulty of making speedy progress with Wave 2 addresses proved, yet again, insuperable. The last addresses were not issued to interviewers until May 1998. The median month of interview for both cross-section and panel surveys was February 1998.

    TABLE 4A: DATE OF LAST VISIT BY INTERVIEWER TO ESTABLISHMENT

    Interviews                   Cross-Section                    Panel
    completed by
    end of ......           No.      %   Cumulative %     No.      %   Cumulative %

    1997: October            15    0.7        0.7           0    0.0        0.0
          November          315   14.4       15.1           6    0.7        0.7
          December          230   10.5       25.6          62    7.0        7.7

    1998: January           278   12.7       38.3         174   19.7       27.4
          February          264   12.0       50.3         210   23.8       51.2
          March             534   24.4       74.7         154   17.5       68.7
          April             307   14.0       88.7         129   14.6       83.3
          May               169    7.7       96.4          90   10.2       93.5
          June               71    3.2       99.6          42    4.8       98.3
          July               10    0.4      100.0          15    1.7      100.0

    Base: All productives  2193  100.0                    882  100.0

    Once contact with an establishment had been completed, the final output relating to that address was transmitted to the National Centre's Brentwood office by the interviewers via telephone modem. The outcome code for each address was integrated into one of three databases, created prior to fieldwork, comprising the issued sample for each of the surveys. Thus fieldwork progress information was updated daily, the information being available for printing out, as requested, on the National Centre's internal network.
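    As a minimal illustration of this daily update (the serial numbers, outcome codes and field names below are invented; the real databases were created prior to fieldwork, one per survey):

        # Hypothetical sketch of the daily progress update: outcome codes
        # transmitted by interviewers are keyed, by serial number, into the
        # issued-sample database for the relevant survey.
        issued_sample = {
            "C-0001": {"survey": "cross-section", "outcome": None},
            "P-0002": {"survey": "panel",         "outcome": None},
        }

        def apply_transmissions(database, transmissions):
            for serial, outcome in transmissions:
                database[serial]["outcome"] = outcome

        apply_transmissions(issued_sample,
                            [("C-0001", "productive"), ("P-0002", "refusal")])
        print(issued_sample["P-0002"]["outcome"])   # refusal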

    A framework for reporting responses was agreed by the research teams prior to fieldwork. It comprised:

    Cross-section and Panel:

    • level of 'cover' (ie interviews achieved, addresses still with interviewers, addresses not yet issued to interviewers);

    • response (ie out of scope, non-contact, refusal etc).

    Cross-section only:

    • presence of worker representative (eligibility and response);

    • agreement to participate in SEQ procedure.

    SEQ only:

    • number placed, number received (by date of arrival), refusals, establishments due for reminder mailings.

    Detailed tables analysing the response to date by Size of Establishment and Area were printed out weekly and sent to the research teams at DTI and PSI. Examples of the output are included in Section Nine.

    4.4 Interviewer Workload

    Of the 156 interviewers who were briefed for the survey, 5 did not in the event achieve any productive interviews. 142 interviewers worked on both the panel and cross-section surveys; 9 worked only on the cross-section survey.

    The mean number of interviews carried out by contributing interviewers was therefore 20.4 (cross-section and panel). In 1990, the number of interviewers working on the survey was virtually identical (n = 147), although the total number of establishments was somewhat lower (n = 2,550) and consequently the average number of establishments per interviewer was lower (n = 17.4). It has been the National Centre's policy throughout the WIRS series to be highly selective in allocating interviewers to the survey, with the aim of maximising the volume of each interviewer's work.
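    The quoted 1997-8 mean can be checked from figures already given: 156 interviewers briefed, less the 5 with no productive interviews, leaves 151 contributing interviewers, and the 2,193 cross-section plus 882 panel productive interviews divided by 151 gives 20.4. A one-line check (the variable names are ours):

        # Check of the quoted 1997-8 mean workload from figures given in the text.
        contributing = 156 - 5        # briefed, less those with no productive work
        interviews = 2193 + 882       # cross-section + panel productive interviews
        print(round(interviews / contributing, 1))   # 20.4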

    The distribution of work is summarised in Table 4B below.

    TABLE 4B: DISTRIBUTION OF INTERVIEWS AMONG THE INTERVIEWER PANEL

    Interviews at ....                    No. of Interviewers

    Fewer than 10 establishments                   17
    Between 10 & 19 establishments                 63
    Between 20 & 29 establishments                 51
    30 or more establishments                      20

    Interviewers working on the survey (excluding those who carried out fewer than 10 interviews) were ranked in order of achieved response rate, as a standard fieldwork quality control procedure. The quartile threshold figures from this listing (taking both cross-section and panel samples into account) were:

    Response rate from                    No. of interviewers
    Upper quartile           100 - 92%             34
    Upper middle quartile     91 - 87%             33
    Lower middle quartile     86 - 79%             34
    Lowest quartile           79 - 67%             26
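    As an illustration of this ranking exercise (the rates below are invented; the thresholds shown above came from the actual listing of interviewers with 10 or more interviews):

        # Hypothetical illustration of the quality-control ranking: response
        # rates (one per interviewer) ranked and cut into quartiles.
        import statistics

        rates = [92, 88, 79, 95, 85, 90, 73, 81, 99, 87, 76, 94]   # invented
        q1, q2, q3 = statistics.quantiles(rates, n=4)   # the three quartile cuts
        print(sorted(rates, reverse=True))
        print(q1, q2, q3)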

    Overall response, analysed by interviewer grade, was:

    Grade           Average response    No. of interviewers
    A                     n/a                    0
    B                     83%                   12
    C                     80%                   41
    D                     84%                   32
    Supervisors           85%                   49

    The response rates indicated in these paragraphs are on average one or two percentage points higher than the true survey response rates shown in Section Five. The calculations on which they are based discount addresses that were never issued to interviewers, being those where the Head Office of an organisation refused directly to the DTI.

    The average durations of the interviews were as set out below:

                                      Mean           Median
    Cross-section: Management         108 minutes    102 minutes
                   Worker Rep          47 minutes     45 minutes
    Panel:         Management          66 minutes     60 minutes

    29% of management interviews at cross-section establishments lasted 2 hours or longer; 30% of panel interviews lasted 75 minutes or longer. The length of the task varied, as would be expected, according to the size of the establishment. Tables 4C and 4D set out the extent of the variations.

    There was some difference in duration between interviews carried out with worker representatives of recognised unions (Mean: 48 minutes) and those carried out with worker representatives of consultative committees (Mean: 41 minutes).

    Data relating to the number of visits required by an interviewer in order to complete all the necessary work at an establishment are only available for productive interviews. In 46% of cross-section interviews one visit was required; in a further 39%, two visits. In 4% of cases more than three visits were necessary. The distribution is very close to that of 1990, in spite of the fact that 1997-8 fieldwork procedures incorporated the added complication of the SEQ.

    90% of successful panel interviews were the product of a single visit; 2% required three visits or more.

    At 95% (n=2,079) of productive cross-section establishments, the interview was conducted with a single management respondent. This is a higher proportion than in 1990, when no more than 89% of management interviews were conducted with a single respondent - quite apart from the requirement in that year to conduct separate interviews with financial managers. 96% of panel interviews were conducted with a single management respondent.


    TABLE 4C: CROSS-SECTION - LENGTH OF INTERVIEWS BY SIZE OF ESTABLISHMENT

    SIZE OF ESTABLISHMENT (NUMBER OF EMPLOYEES)

    Base: All Productive    Total  10-24  25-49  50-99  100-199  200-499  500-999  1000+
                                    emps   emps   emps     emps     emps     emps   emps
    Workplaces               2193    263    394    393      386      453      185    119

    Management Interview:       %      %      %      %        %        %        %      %
    No information              3      *      2      3        3        3        5      5
    0 - 59 minutes              1      2      1      2        *        1        -      1
    60 - 89 minutes            24     40     31     24       21       18       17     14
    90 - 119 minutes           43     46     45     46       42       39       45     32
    120 minutes plus           29     12     21     25       34       39       33     48

    Mean duration (mins)      108     93    102    106      109      114      118    120
    Median duration (mins)    102     90     95    100      105      110      100    120

    Worker Rep Interview:
    Mean duration (mins)       47     42     41     46       45       48       49     53
    Median duration (mins)     45     38     40     40       45       45       45     52


    TABLE 4D: PANEL - LENGTH OF INTERVIEW BY SIZE OF ESTABLISHMENT

    SIZE OF ESTABLISHMENT (NUMBER OF EMPLOYEES)

    Base: All Productive    Total  25-49  50-99  100-199  200-499  500-999  1000-1999  2000+
    Workplaces                882    113    159      146      210       98         82     74

                                %      %      %        %        %        %          %      %
    No information              1      -      -        -        1        1          4      3
    0 - 29 minutes              *      3      1        -        -        -          -      -
    30 - 44 minutes             9     19     13       10        6        6          1      3
    45 - 59 minutes            29     35     26       30       24       31         20     24
    60 - 74 minutes            32     25     25       34       38       36         37     29
    75 - 89 minutes            17     12     16       18       16       12         23     25
    90 minutes plus            13      7      9        9       16       14         19     19

    Mean duration (mins)       66     59     61       64       69       67         76     74
    Median duration (mins)     60      -      -        -        -        -          -      -

    Relatively few interviews took place away from the site of the sampled establishments. The location of the management interview is as set out in Table 4E below:

    TABLE 4E: LOCATION OF INTERVIEW

                               Cross-Section    Panel
                                      %            %
    At establishment               89.4         92.0
    At Head Office                  6.0          4.5
    At Regional Office              3.7          3.5
    At more than 1 site             0.9          0.0

    Base: All productives          2193          882

    In the 1990 cross-section survey, 18% of interviews took place, wholly or partly, away from the sampled establishment.

    4.5 The SEQ

    The Survey of Employees Questionnaire (SEQ) comprised a short paper questionnaire (8 A4 sides) which was left, after the completion of the management interview, for a sample of employees to fill in and return by post.

    This part of WERS 97-8 could also only proceed with the agreement of management. Not only did management have to agree in principle, but they had to make it practicable for the interviewer to draw a sample of employees from staff records. In the event management refused permission for this exercise in 14.4% (n=316) of establishments at which an interview was given. The range and distribution of reasons for refusing permission is shown in Section Five: 5.3, Table 5K.

    The aim was to select an equal probability sample of 25 employees at each establishment that employed 25 or more persons; at establishments with fewer than 25 employees all employees were to be included in the survey. For the purpose of the sampling exercise, employees of the establishment were defined as for the EPQ - persons with a contract of employment, even though it might be for a fixed period, not open-ended. Freelancers, casual workers, and temporary or agency personnel who did not have such a contract at the selected establishment were excluded.

    Interviewers were provided with written instructions for the sampling operation, which took place at the workplace. In most cases the sample was drawn at the same visit at which the management interview took place; in a minority of cases a second visit was necessary.

    The instructions covered two types of situation:

    • where a list or printout of staff names was available for the interviewer to carry out the sampling him/herself;

    • where there were no paper documents available but the information was available on computer screen, and the interviewer had to instruct a member of staff in the sampling procedures.

    These were the only situations that were identified during development work. During the actual fieldwork, interviewers had to cope with some minor variations in the above model (which might necessitate repeat visits to a workplace) - but there were relatively few such cases. In some cases we were told that employee statistics were held only at Head Office, with the result that Head Office staff drew the sample.

    The sampling procedure itself required the interviewer or establishment staff member to refer to look-up tables which set out 25 random numbers for different sizes of establishment (from 26 to 7000 employees). Copies of these look-up tables and other documents used in the sampling process are included in Section Nine.
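    In effect the look-up tables implement a simple random sample, without replacement, of 25 positions from a staff list of n names. A minimal sketch of the equivalent selection (the function and its parameters are ours, not the survey's own procedure documents):

        # Hypothetical equivalent of the look-up tables: an equal-probability
        # sample of 25 positions (1-based, without replacement) on a staff list
        # of n names; with fewer than 25 employees, everyone is included.
        import random

        def select_positions(n, sample_size=25, seed=None):
            if n <= sample_size:
                return list(range(1, n + 1))          # take all employees
            rng = random.Random(seed)
            return sorted(rng.sample(range(1, n + 1), sample_size))

        print(select_positions(120, seed=1))   # 25 positions between 1 and 120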

    Once the sample had been selected, the interviewer prepared packs to be handed out to the sampled employees. The packs consisted of a large envelope, overprinted with instructions and containing:

    • a questionnaire;

    • an explanatory leaflet, designed by the DTI particularly for the Survey of Employees;

    • a Business Return Envelope (Freepost).

    The interviewers were required to fix identification labels to the pack envelopes and the questionnaires before handing out the packs. The label attached to the pack envelope included the serial numbers allocated to the establishment and the selected employee, together with the employee's name; the label attached to the questionnaire contained only the serial number (along with a bar code).

    The envelope packs were then handed to the management respondent (or in some cases a different 'SEQ contact' person nominated by management) for distribution.

    The questionnaires, once completed by staff members, were put into the return envelopes and either posted directly to the National Centre's offices by the respondents or left at a central collection point at the workplace. Questionnaires from the collection point were subsequently returned by management to the National Centre in a large envelope provided by the interviewer, or (in a minority of cases) picked up by the interviewer on a subsequent visit.

    A detailed description of the procedures at the workplace is included in the Interviewer Handbook for the survey (Section Eight).

    Reminders were sent within 12-15 working days of placement to all respondents whose questionnaires had not been received at the National Centre's Brentwood offices. The reminder consisted of fresh envelope packs, personally addressed to the selected staff member (by means of additional labels completed by interviewers after the initial placement and returned to the office) but enclosing an additional document explaining to the respondent the need for a high response rate and appealing for his/her co-operation. The reminders were not sent directly to the employees but via the manager or SEQ contact person, who was asked to ensure the pack reached its destination. Questionnaires used in the reminder operation were separately identified so that the effect of the operation could be monitored. In the event 10% (n = 2,864) of the questionnaires received were from those sent out at the reminder stage.

    This reminder operation had been part of the initial design of the survey operations. In the event, during the course of fieldwork, a further problem was identified: for a significant proportion of workplaces no questionnaires at all were received from the initial distribution, in spite of management having agreed to co-operate and the interviewer having selected the sample and left the packs. Accordingly, all cases where the response from an establishment was below 40% at the reminder date (ie replies had been received from fewer than 10 employees) were handed over to the staff of the National Centre's Telephone Unit at the Brentwood office, where procedures for further contacts with the management at the workplace were developed. Where practicable, further contact by the original interviewer was instituted; where this was not possible or was considered unlikely to be effective, calls were made by the Telephone Unit interviewers to the SEQ contact person with the purpose of clearing the bottleneck. A copy of the form used for this process is in Section Nine.
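    Taken together, the two follow-up triggers described above reduce to a simple rule, sketched below with invented field names (the 40% threshold at a placement of 25 corresponds to the 'fewer than 10 employees' test in the text):

        # Hypothetical sketch of the two SEQ follow-up triggers.
        REMINDER_AFTER_WORKING_DAYS = 12    # reminders went out at 12-15 working days
        TELEPHONE_UNIT_THRESHOLD = 0.40     # below 40% returned at the reminder date

        def follow_up(establishment):
            placed = establishment["packs_placed"]
            received = establishment["received"]
            days = establishment["working_days_since_placement"]

            actions = []
            if days >= REMINDER_AFTER_WORKING_DAYS and received < placed:
                actions.append("send reminder packs to non-responders")
            if days >= REMINDER_AFTER_WORKING_DAYS and received / placed < TELEPHONE_UNIT_THRESHOLD:
                actions.append("refer to Telephone Unit")
            return actions

        print(follow_up({"packs_placed": 25, "received": 6,
                         "working_days_since_placement": 14}))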

    In excess of 200 cases (more than 10% of establishments that agreed to co-operate) were passed to the Telephone Unit during the course of fieldwork. Despite this additional effort there remained, when fieldwork had been completed, a residue of 14.4% (n=316) of establishments from which no employee questionnaires had been received, in spite of management having agreed to co-operate on the occasion of the interviewer's visit.

    4.6 Fieldwork quality control procedures

    In keeping with previous surveys in the WIRS series, where it had been considered inappropriate for interviewers to be accompanied and supervised during the conduct of the interview, postal metho