
The Science of the Total Environment, 130/131 (1993) 485-496. Elsevier Science Publishers B.V., Amsterdam

Quality control in radiochemical analysis

R.S. Grieve and V. Barnes

Research and Development Department, British Nuclear Fuels plc, Sellafield, Cumbria CA20 1PG, UK

ABSTRACT

This paper attempts to define those systems of operational control employed in laboratory procedures which are mandatory to ensure control within a total quality assurance scheme. Use of standards and tracers, calibration of instruments and validation of methods are discussed in terms of laboratory experience and interactions with other laboratories. In particular, the handling and assessment of data prior to presentation of results receives special attention.

Key words: quality control; statistical process control; Total Quality Management (TQM)

INTRODUCTION

The assurance of high quality operations is now of paramount importance in many companies and international specifications have been agreed which set out the standards for this assurance. A number of specified bodies provide a certification service aimed at ensuring that firms have attained and are maintaining the standards of quality assurance specified in the recognised British Standard BS 5750. Such a service, specifically for testing laboratories, is the National Measurement and Accreditation Service (NAMAS) operated by the National Physical Laboratory, and the Chemistry Group of BNFL Sellafield has applied for accreditation under this scheme.

An ongoing quality improvement programme is particularly relevant to any laboratory engaged in the analysis of large numbers of samples by a variety of methods. In these situations it is impossible to guarantee the same degree of scientific accuracy and precision which can be attained in a 'one-off analysis'. For example, resources do not generally allow for large scale replication, multiple measurements at various stages of the method are usually not possible and the large number of operators normally used means that their standards of performance will introduce variations, and occasionally accidental errors.


These facts are particularly relevant to laboratories involved in analyses associated with environmental programmes and workers' safety. This is so because of the increased public awareness and concern in these areas, with the associated continual reduction in statutory and self-imposed levels for effluent discharges.

This paper describes some aspects of the Chemistry Group's Quality Improvement Programme.

QUALITY IMPROVEMENT PROGRAMME

A basic feature of any Quality Improvement Programme is that it must be managed from the highest level. It has many facets and these have led to a concept of Total Quality Management (TQM) within our group, the main features of which are as follows:

(i) Creation of a Quality Management Structure.
(ii) Instigation of a programme for updating and extending the documentation of methods.
(iii) Creating management driven Quality Improvement Groups.
(iv) Carrying out a staff training programme.
(v) Implementing controls to check the quality of methods used.
(vi) Reviewing the present formats in which results of analyses are presented to 'customers'.

This paper deals with the last two of these items only.

IMPLEMENTING CONTROL METHODS

The following factors each affect the quality of the result from an analysis:

(i) Accuracy of result.
(ii) Precision of result.
(iii) Time taken to obtain and report the result.
(iv) Format of reported result.

Methods aimed at improving estimates of 'reporting time', as well as their reduction, are outside the scope of this report, though they are currently under investigation. Formatting of reported results is covered in a later section of this report.

In the present section we discuss control of the accuracy and precision of results from analyses.

The need to use standards which are traceable to nationally and internationally approved sources is well known and accepted. When a new method is developed, and periodically thereafter, it is common to carry out inter-laboratory comparisons in attempts to check for any possible bias (Bates, 1988).

Whilst both these techniques improve confidence in the ability of a method to produce accurate results, in day to day running a number of factors can violate this position. Amongst these are:

(i) Lack of homogeneity within a sample.
(ii) Variations in the sample matrix between samples.
(iii) Operator and instrument bias.

A number of techniques are available which attempt to overcome these problems and the following are some of the common ones used at Sellafield.

Internal standards

It is frequently possible to find an isotope of the element being measured which is not present in the sample. Injecting an accurate amount of this 'tracer isotope' at the earliest possible point in the analysis allows any bias to be checked by noting its percentage recovery. If this recovery deviates from 100% then, in theory, it ought to be possible to apply a 'recovery factor' to the result for the element being measured thus correcting for any possible bias (Norton, 1966).
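By way of illustration only (not the procedure used at Sellafield), the short Python sketch below shows how a chemical recovery factor derived from a tracer might be applied to correct an analyte result; the function name, variable names and the simple ratio correction are assumptions made for this example.

```python
def recovery_corrected_result(measured_analyte, tracer_added, tracer_recovered):
    """Apply a tracer-derived recovery factor to a measured analyte value.

    measured_analyte : analyte activity (or mass) found at the end of the method
    tracer_added     : known amount of tracer isotope injected at the start
    tracer_recovered : amount of that tracer found in the final measurement
    """
    recovery_factor = tracer_recovered / tracer_added   # ideally close to 1 (100%)
    if not 0.0 < recovery_factor <= 1.2:
        raise ValueError(f"implausible recovery factor: {recovery_factor:.2f}")
    # Scale the analyte result up to compensate for material lost during the chemistry.
    return measured_analyte / recovery_factor, recovery_factor


# Example: an 85% tracer recovery scales the analyte result by 1/0.85.
corrected, recovery = recovery_corrected_result(42.0, tracer_added=100.0, tracer_recovered=85.0)
print(f"recovery = {recovery:.0%}, corrected result = {corrected:.1f}")
```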

Considering each of the three 'violating factors' (i), (ii) and (iii) mentioned above it should be noted that:

(i) The tracer method will not necessarily correct for lack of homogeneity of a sample because the tracer may not have the same distribution in the sample as the element being measured.

(ii) The tracer method ought to be able to cope with variations in sample matrix but note (iii) below.

(iii) Introduction of a tracer should successfully cope with operator bias and indeed can be used to note where operator training may be necessary (e.g. if a particular operator obtains repeatedly low recovery factors). When it comes to instrumental bias the ability of a tracer to accurately predict bias is less certain.

When tracer additions are made to sample matrices and are subsequently analysed to determine chemical yields, care must be taken that differential biases do not exist in the measurement of the tracer and analyte. Often the tracer and analyte measurements might be performed on a gamma spectrometer. For the tracer to accurately correct for any bias in the sample, any fractional bias in making a measurement at the energy of the tracer gammas must be the same as any fractional bias at the energy of the gammas from the main constituent. These two may be different and may be affected by sample matrix. Similarly, a bias might be introduced when determining recovery factors from gravimetric or spectrophotometric determination of an added carrier in a radiochemical method.

One method of trying to combat these problems is to perform duplicate analyses with the tracer/carrier and compare the two 'recovery factors'. This comparison should be done statistically as described under Duplication (below).

Quality control source (QCS)

If it can be assumed that samples are homogeneous and have a reasonably constant matrix it is possible to use a QCS to check that no bias has been introduced into a method which has been shown to be originally without bias. A QCS provides a repeatable source of the analyte in a matrix as close to that of the samples as possible. It may be necessary to alter the matrix somewhat in order to obtain a stable source, which is essential. The actual concentration of the analyte in the QCS need not be known accurately but it should be of the same order as that in the samples. If the range of concentration in the samples is large it may be necessary to have more than one QCS.

Periodic measurement of the QCS will then indicate any change in the mean value of the measurements and thus give an indication of the introduction of a bias into the method. The measurements on the QCS will also indicate any changes in variability of the method and hence a change in precision. It should be noted however that the variability associated with the results from the QCS will not include any components introduced by heterogeneity of the sample or matrix effects if these are present.
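As a minimal sketch of how periodic QCS results might be screened for a possible bias, the following assumes a list of historical in-control QCS values and flags a new result lying more than three SDs from their mean; the names and the 3-SD criterion are assumptions, and a full Shewhart chart (see below) is the more complete treatment.

```python
import statistics

def qcs_check(history, new_value, k=3.0):
    """Compare a new QCS result with the mean and SD of past in-control results.

    history   : previous QCS results obtained while the method was believed in control
    new_value : the latest QCS measurement
    k         : multiple of the SD treated as the action criterion (3 assumed here)
    """
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    suspect_bias = abs(new_value - mean) > k * sd
    return mean, sd, suspect_bias


history = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3]
mean, sd, flag = qcs_check(history, new_value=11.6)
print(f"QCS mean {mean:.2f}, SD {sd:.2f}, possible bias introduced: {flag}")
```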

Duplication

Duplicating analyses is common practice in many laboratories. The advantages of this approach are two-fold.

(i) By taking a mean of the two results the precision is improved by a factor of √2.
(ii) Comparison of the two results allows any disagreement to be observed and hence a measure of confidence in the analyses to be established.

The main disadvantage is the added cost. The value of the comparison, as a measure of confidence in the result, depends on how the comparison is made. In too many cases it is purely subjective, being based on the experience of the operator with reference to past results from the method. This is bad practice for two main reasons:

(i) It does not provide information which can be used as a measure of quality.

(ii) It can introduce a biased result by the rejection of statistically valid results. For example the operator may try to reject a duplicate and perform a third analysis to confirm one or the other of the first pair. There may then be a tendency to reject the lowest result on the grounds that some of the sample was lost. Alternatively, if contamination was deemed to be the probable cause the tendency could be to reject high results.

If, then, duplicates are to be used to advantage, any comparisons must be non-subjective. This can only be done by the application of statistical methods which require a knowledge of the standard deviation (SD) of the method and its relationship to the level of analyte.

From this knowledge and the mean value of the duplicates the SD of the difference can be found. Application of the t-test then gives the probability of observing the measured difference by chance assuming no change has occurred in the way the results are distributed.
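A hedged sketch of such a comparison is given below, assuming the SD of a single analysis at the level of the duplicates is already known from control data; a Normal approximation is used in place of the t-test, which is reasonable when that SD is well established.

```python
import math

def duplicate_difference_test(x1, x2, sd_single, alpha=0.05):
    """Test whether two duplicate results differ by more than chance alone.

    sd_single : SD of a single analysis at the level of the duplicates' mean,
                assumed known from historical control data
    Returns the two-sided p-value and whether the difference is significant at alpha.
    """
    sd_difference = math.sqrt(2.0) * sd_single        # SD of (x1 - x2)
    z = (x1 - x2) / sd_difference
    p_value = math.erfc(abs(z) / math.sqrt(2.0))      # two-sided Normal tail area
    return p_value, p_value < alpha


p, disagree = duplicate_difference_test(x1=12.4, x2=9.5, sd_single=0.9)
print(f"p = {p:.3f}; duplicates disagree beyond chance: {disagree}")
```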

Sometimes the SD of the difference is independent of level or can be made so by a suitable transformation (Box, 1978). In these cases control chart methods (see below) can be advantageously used to test measured differences.

It is possible for duplication to detect abnormal heterogeneity of samples but it cannot detect bias introduced by matrix effects.

STATISTICAL PROCESS CONTROL (SPC)

SPC owes its origin to Dr Walter A. Shewhart in the USA during the 1920s, who noted that all manufacturing processes showed variation with two components (Shewhart, 1931). One component was a steady one which gave a fairly constant spread about the process average and the other was an intermittent or sporadic component which gave sudden changes in the magnitude of the variation or the process average. Shewhart called the factors producing these two components of variation Common Causes and Assignable Causes, respectively.

In order to analyse a process and distinguish between the two components of variation, Shewhart devised a pictorial representation of the measurements taken on the process. This was the Control Chart, sometimes called the 'Shewhart Chart' or 'X-R Chart'. A number of different types of control charts (e.g. Cusum, Moving Averages etc.) have been developed over the years (Grant, 1988).

Routine chemical analyses suffer from variations due to both Common and Assignable causes and as such can be usefully analysed using SPC.

SPC and quality improvement

The modern use of SPC is to assist Quality Improvement by first identifying the components of variation due to Assignable Causes and allowing these to be removed.

When this has been achieved the variation about the mean due to Common Causes is statistically predictable and the process is then said to be in a state of 'Statistical Control'.

Until this state is reached it is not possible to assign a meaningful value to the precision of the method.

Common and assignable causes in the laboratory

Common causes are those which cannot be readily identified or controlled and are often the result of natural phenomena (e.g. changes in ambient temperature, humidity or atmospheric pressure). The reduction of the effect of common causes is often difficult and expensive and therefore usually requires intervention by laboratory management.

On the other hand Assignable Causes are identified much more easily and often the operator is in a much better position than the manager to diagnose the source (e.g. a change in reagent, a bottle left open).

SPC in the laboratory

Any SPC system consists of the following components:

(i) A characteristic component of the process whose measurements can be used as a means of control.
(ii) A system of measurement.
(iii) A visual display of the measurements in the form of a chart.
(iv) An organisation for analysing the chart and taking action on the results of this analysis.

Choice of characteristic

The choice of characteristic component to be measured for the control of a chemical analysis is, in general, much more difficult than in the case of a manufacturing process. In the latter case there is usually a fixed target value for the component to be measured and about which variation will be observed (e.g. the diameter of a cylinder, the acidity of a feed liquor).

In a chemical analysis, however, analytes will not be expected to have exactly the same value in each sample received for analysis. It is therefore not possible to use the results on the analyte for control purposes since a central feature of a control characteristic is that it should be expected to have a stable mean value. One way around this problem is to use a QCS as discussed previously. Its measurement at various stages of the method can give useful information on the presence of Assignable Causes and the magnitude of Common Causes.

In the absence of Assignable Causes, a Control Chart (Fig. 1) can give a very good estimate of the precision of the measurements on the QCS. This precision cannot be assumed to be that of the analytical method and the temptation to do so should be avoided unless it can be ascertained that the matrices of the QCS and samples being analysed are identical.

Another method, which goes some way to overcoming the problem of varying sample level, is to use the difference between the measurements on a duplicate pair of samples as the control characteristic. The method relies on the assumption that it is unlikely that an Assignable Cause will affect each duplicate by exactly the same amount and any variation is due entirely to Common Causes whose variability can be assumed constant (by use of a transformation if necessary).

System of measurement

Measurements of the Control Characteristic (e.g. a QCS) are formed into what are known as Rational Subgroups which will typically contain two to five measurements. The mean (X) and range (R) of these measurements are then used to construct a Control Chart. Typically, a Subgroup is chosen from measurements associated with a batch of samples. If this is done, however, it is often found that the variation within a batch is considerably less than between batches. Whilst it can be argued that 'between batch variation' is an Assignable Cause it is often not possible to identify and remove its source. The method cannot then be kept in 'statistical control'.

To overcome this the Moving Range method is used (see below) where each Subgroup consists of two measurements. One measurement is from the current batch and the other is from the preceding batch. A measurement thus appears in two adjacent subgroups. The Range of each Subgroup thus gives an estimate of the combined 'within batch and between batch' variations.
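The following is a minimal sketch of the Moving Range construction described above, assuming one QCS result per batch held in a simple list; the divisor 1.128 (the standard d2 constant for subgroups of two) is an assumption not quoted in the text.

```python
def moving_ranges(results):
    """Form the moving ranges: absolute differences of consecutive results (subgroups of two)."""
    return [abs(b - a) for a, b in zip(results, results[1:])]

def estimate_sigma(results):
    """Estimate the SD of the Common Cause variation from the mean moving range.

    The mean moving range is divided by d2 = 1.128, the usual constant for n = 2.
    """
    ranges = moving_ranges(results)
    return (sum(ranges) / len(ranges)) / 1.128


qcs_results = [10.1, 9.9, 10.3, 10.0, 10.2, 9.8, 10.4]   # one QCS result per batch, in order
print("moving ranges:", [round(r, 2) for r in moving_ranges(qcs_results)])
print(f"estimated sigma: {estimate_sigma(qcs_results):.3f}")
```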

Display of measurements

Whilst other methods are available for the display of measurements (e.g. the Multi Vari Chart and the Cusum Chart) the predominant method is by use of 'Shewhart Type' Control Charts. These can occur in different forms and are described under Control Charts (below).

[Fig. 1. Example of a Shewhart chart: an X-chart and an R-chart plotted in sequence about the grand mean, with upper and lower warning limits (UWL, LWL) and action limits (UAL, LAL); annotated Assignable Causes include a spurious value, a mean shift and reduced variability.]

The Cusum is exceptionally efficient at providing early warning of a change in process mean, which is an Assignable Cause, but is less efficient at highlighting the presence of other causes and thus in assisting overall quality improvement. Its use is more complex than other charts and it has not to date been used on a routine basis at Sellafield.

Organisation for analysing charts

The chart construction is agreed between the Quality Manager (or his technical representative), the Assistant Group Manager and the relevant Laboratory Leader, who is then responsible for the actual construction and analysis of the control chart. All Laboratory Leaders receive a 4-day course on quality methods and statistics which includes an introduction to control chart techniques. Further instruction is given specifically in control chart work by a series of 'surgeries' during the implementing phase.

The system is audited at approximately 3-monthly intervals by the Quality Manager and his technical advisor. Progress in each laboratory is reviewed and where necessary further advice and training are given. A report is then sent to the Chemistry Group Management.

Operators are instructed by Laboratory Leaders and encouraged to construct and update the charts. They are also encouraged to attempt to analyse the charts, correcting obvious errors themselves and noting the cause, as well as bringing to the notice of the Laboratory Leader any suspicious behaviour revealed by the chart.

Control charts

Figure 1 shows a typical layout of a control chart for variables. Control charts for attribute data have not, to date, been used routinely but there may be a case for their application where data are Poisson distributed, such as from measurements on low level radiation sources and detector backgrounds.

With regard to control charts for variable data, the mean of the Ranges of the subgroups allows an estimate to be made of the standard deviation of the Common Causes. In Fig. 1 the upper chart is a plot of the level of the control characteristic (as estimated from the subgroup) against its sequence of occurrence and the lower chart is a plot of the range of each subgroup in sequence.

As can be seen from Fig. 1, the charts allow trends as well as spurious values to be observed and hence the presence of Assignable Causes to be readily discerned. To assist in the analysis of the charts, control lines for upper and lower action limits (UAL and LAL) and also for upper and lower warning limits (UWL and LWL) are drawn.

The lines labelled UAL and LAL are each 3.09σ (σ = 1 SD of a plotted value) from the mean value and the UWL and LWL lines are each 1.96σ from the mean value. Thus, assuming a Normal distribution and the presence of only Common Causes, we would expect about 1 point in 1000 outside either the UAL or the LAL and about 1 point in 40 outside either the UWL or LWL.
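As an illustrative sketch, the control lines quoted above can be computed from the grand mean and an estimate of σ (for example from the moving ranges); the function names and example figures are assumptions.

```python
def control_lines(grand_mean, sigma):
    """Warning and action lines for the X-chart, using the multipliers quoted above.

    Warning lines at +/-1.96 sigma (about 1 in 40 beyond each line by chance alone);
    action lines at +/-3.09 sigma (about 1 in 1000 beyond each line by chance alone).
    """
    return {
        "UAL": grand_mean + 3.09 * sigma, "LAL": grand_mean - 3.09 * sigma,
        "UWL": grand_mean + 1.96 * sigma, "LWL": grand_mean - 1.96 * sigma,
    }

def classify(value, lines):
    """Classify a plotted point against the warning and action lines."""
    if value > lines["UAL"] or value < lines["LAL"]:
        return "beyond action limit"
    if value > lines["UWL"] or value < lines["LWL"]:
        return "beyond warning limit"
    return "in control"


lines = control_lines(grand_mean=10.05, sigma=0.21)
for point in (10.1, 10.5, 10.9):
    print(point, "->", classify(point, lines))
```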

FORMATTING OF ANALYTICAL RESULTS

This work is fully covered in a separate report (Barnes, 1990, unpublished data) from which the following is a brief extract:


As a consequence of a chemical analysis much information becomes available to the analyst. It is not possible, on a routine basis, to pass all this information to the customer and a degree of filtration must be imposed. This must be carefully considered because it can have a marked effect on the value of the transmitted information. In the case of Analytical Results the effect of over-filtration of information is considered to be most acute where the analytical measurement lies at or near the limit of detection.

It is important to realise that any chemical analysis will yield a result which is a member of a population of possible Analytical Results. This population will have a distribution about a mean value which, if the method is without bias, will accurately predict the quantity of analyte in the sample. An individual Analytical Result only provides an estimate of this mean, and depending upon the spread and shape of the distributions, will, in general, show a deviation from this mean up to some statistically limited value.

To completely specify the population to which an analytical result belongs requires three parameters, viz:

(i) Mean
(ii) SD
(iii) Shape of the distribution

These parameters are usually known to the analyst but which ones are passed to the customer often depends upon the relationship between the parameters. For example, if the SD is small compared to the mean it may be possible to ignore the former. The analytical result, which is only an estimate of the mean, can then be taken to reasonably represent the true value of the analyte with little chance of error.

Reported measurement

The reported measurements merit special consideration since the proposed format will report negative results. This should cause no concern because it does not mean that the actual component being measured is 'present in a negative amount' which, of course, is absurd but only that the analytical result for its measurement is negative.

When a sample having a low concentration of analyte has its measured value corrected for a background effect then, due to random variations in the measurements, the result can be negative. This is a valid result and should not be ignored. Reporting a zero value in these cases, or repeating the analysis until a positive (or zero) result is obtained and then reporting that figure, are both equally wrong and unacceptable. These latter procedures both introduce an unacceptable positive bias in the results which can be as high as 0.4 standard deviations. Reporting negative results therefore overcomes any problems from the introduction of bias during use of the results provided the customer maintains the sign of the value. The reporting of a 'detection limit' is also avoided by this procedure and the handling of sets of data is simplified.
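A small simulation, under the assumption of a true analyte level of zero and Normally distributed measurement noise, shows where a positive bias of roughly 0.4 SD comes from when negative results are replaced by zero.

```python
import random

random.seed(1)
sigma = 1.0          # SD of the background-corrected measurement
n = 100_000

# True analyte level taken as zero, so any positive mean in the output is pure bias.
results = [random.gauss(0.0, sigma) for _ in range(n)]

mean_as_reported = sum(results) / n
mean_if_zeroed = sum(max(r, 0.0) for r in results) / n   # negative results replaced by zero

print(f"mean, negatives reported as found: {mean_as_reported:+.3f}  (close to zero, unbiased)")
print(f"mean, negatives reported as zero:  {mean_if_zeroed:+.3f}  (about +0.4 SD of positive bias)")
```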

Confidence limit

Commonly used 95% confidence limits are reported rather than the SDs because, unless the distribution of results is Normal, the number of the latter within a given confidence interval varies with the distribution and also the level of the result. Inclusion of 95% confidence limits allows the customer to set his own limits by accurately interpolating from the 95% values provided he knows the type of distribution involved.

Type of distribution

Three types of distribution are considered which should cover all practical cases. These are the Normal, Poisson and Binomial distributions. The Normal distribution has symmetrical confidence limits whereas the limits are asymmetrical for the latter two. This fact allows the Normal distribution to be recognised. To allow manipulations to be performed on Binomially-distributed data the probability of occurrence of a single event is required. Appending this single extra parameter allows the Poisson distribution to be differentiated from the Binomial.

Form of the reported results

The results will appear to the customer in one of the following formats.

(i) Result = X ± x (Normal distribution)
(ii) Result = X (-Xl; +Xu) (Poisson distribution)
(iii) Result = X (-Xl; +Xu); P (Binomial distribution)

where x is the 95% confidence limit for the Normal case, Xl and Xu are, respectively, the lower and upper limits for the Poisson and Binomial cases and P is the 'Binomial Probability'.
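As a hedged illustration of these formats, the sketch below renders a result in each of the three forms, assuming the confidence limits have already been computed elsewhere; the function and argument names are illustrative only.

```python
def format_result(value, dist, half_width=None, lower=None, upper=None, p=None):
    """Render an analytical result in one of the three reporting formats above.

    dist        : 'normal', 'poisson' or 'binomial'
    half_width  : symmetric 95% confidence limit (Normal case)
    lower, upper: asymmetric 95% limits below and above the result (Poisson/Binomial cases)
    p           : probability of a single event (Binomial case only)
    """
    if dist == "normal":
        return f"Result = {value:.3g} ± {half_width:.3g}"
    if dist == "poisson":
        return f"Result = {value:.3g} (-{lower:.3g}; +{upper:.3g})"
    if dist == "binomial":
        return f"Result = {value:.3g} (-{lower:.3g}; +{upper:.3g}); P = {p:.3g}"
    raise ValueError(f"unknown distribution type: {dist}")


print(format_result(-0.12, "normal", half_width=0.35))          # near-background result, sign kept
print(format_result(14.0, "poisson", lower=6.6, upper=9.5))
print(format_result(3.0, "binomial", lower=2.1, upper=4.7, p=0.05))
```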

REFERENCES

Box, G.E.P., W.G. Hunter and J.S. Hunter, 1978. Statistics for Experimenters, Wiley, New York.


Bates, T.H., 1988. An intercomparison exercise on Tc-99 in seaweed. Environ. Int. USA, 14: 283-288.

Norton, E.F., 1966. Chemical Yield Determinations in Radiochemistry. US National Academy of Sciences, National Research Council, Nuclear Science Series NAS-NS 3111. US Dept of Commerce, Virginia, USA.

Shewhart, W.A., 1931. Economic Control of Quality of Manufactured Product. D Van Nostrand Company Inc.

Grant, E.L. and R.S. Leavenworth, 1988. Statistical Quality Control, 6th edn. McGraw-Hill.