
2376 IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 54, NO. 6, DECEMBER 2005

Propagating Uncertainty in Instrumentation Systems

B. D. Hall

Abstract—An algorithm for propagating measurement uncertainty in a system of interconnected modules is presented. The method adheres strictly to current best-practice in the evaluation and reporting of measurement uncertainty. It allows modular instrumentation systems to be designed that will propagate uncertainty automatically. The algorithm is simple, general, efficient, and can be implemented with little difficulty. It inherently provides the kind of dynamic “plug-and-play” flexibility expected of modern instrumentation.

Index Terms—Measurement uncertainty, modular instrumentation.

I. INTRODUCTION

MEASUREMENTS provide objective information about physical quantities; they are woven into a myriad of processes occurring in daily life, so that our society depends on their reliability. However, no measurement result should be considered exact; some degree of uncertainty is inevitable. It is preferable to know the level of uncertainty associated with a measured value, because the likelihood of a desirable outcome can be enhanced by taking account of uncertainties in the decision process.

This paper addresses the important problem of calculating uncertainty in instrumentation systems. Modern instrumentation systems rarely report uncertainty explicitly (e.g., by displaying it or transmitting it through a communications port). Usually handbooks and calibration records must be examined and interpreted, which is a laborious process requiring specialist knowledge and skills.

Formal evaluation of measurement uncertainty is nevertheless an increasingly common requirement of Quality Assurance processes. A notable example is the ISO-17025 uncertainty requirements that apply to calibration and testing laboratories [1]. International guidelines for handling of measurement uncertainty have been published by the ISO in the “Guide to the Expression of Uncertainty in Measurement” (Guide) [2].¹

However, the recommended mathematical procedure can be difficult to implement when using sophisticated instrumentation [5].

In modern instrumentation, modularity is important. A modular system offers greater flexibility, because its functionality can be altered or enhanced by changing components. In addition, well-designed components generally find uses in a range of applications, resulting in economies of scale through reuse. Modular systems are generally easier to upgrade and maintain by replacing modules over time. Recently, in an effort to promote modularity in instrument systems, the Test and Measurement industry has tried to harmonize industry-wide standards for the communication interfaces of system components [6]–[8].

Manuscript received May 15, 2002; revised July 15, 2004. This work was supported by the New Zealand Government as part of a contract for the provision of national measurement standards.

The author is with the Measurement Standards Laboratory of New Zealand, Industrial Research Ltd., Lower Hutt, New Zealand (e-mail: [email protected]).

Digital Object Identifier 10.1109/TIM.2005.859142

¹There is an equivalent ANSI document [3], as well as guidelines prepared by NIST [4].

While instrumentation technology is continually improving, support for handling uncertainty is lacking. Because of the inherent flexibility of systems, it can be difficult to identify the exact relationship between a measurement result and the various parts of a system that contributed to it. This prevents the explicit formulation of a mathematical expression for measurement uncertainty.

Lack of support for uncertainty in instrumentation contributes to the burden of testing and validation. With modern “intelligent” instruments, it must be possible to do better. Ideally, system modules could be designed to handle uncertainty autonomously: evaluating output uncertainty as a function of input values and uncertainties, taking into account specific module characteristics, calibration history, etc.

This paper asserts that modular systems can indeed be designed to handle uncertainty automatically. It presents a simple technique for maintaining a dynamic and self-consistent view of system uncertainty that even allows a system configuration to change without compromising the evaluation of uncertainty. The technique adheres strictly to international guidelines [2].

The paper begins by presenting an algorithm for propagating uncertainty in Section II; then Section III illustrates the approach with a simple example. Section IV discusses the method and its implementation.

II. ALGORITHM FOR PROPAGATING UNCERTAINTY

This section describes an algorithm for propagating uncertainty in a modular system. The presentation assumes some familiarity with the Guide, which is briefly summarized in Appendix A. To begin, a formal description is given for the familiar process of propagating values in a modular system. This formalism is then extended to allow for propagation of uncertainty.

A measurement can be expressed as a function

$y = f(x_1, x_2, \ldots, x_l) \qquad (1)$

where the input values, $x_1, x_2, \ldots, x_l$, are not exactly known and so contribute some uncertainty to the result. The function $f$ represents a measurement procedure (performed, we assume, by some instrumentation system). Quite generally, this function may be considered as the composition of a set of simpler functions

$y = f_m \circ f_{m-1} \circ \cdots \circ f_{l+1}(x_1, x_2, \ldots, x_l) \qquad (2)$



which must be evaluated in a particular order. We will say that each function $f_j$ is associated with a “module”, labeled $j$, with output value $x_j$ and a set of input values $\lambda_j$. Strictly, the input values to (2), $x_1, \ldots, x_l$, are also associated with modules. These input modules’ functions are of the form $x_i = f_i(\,)$, with the sets $\lambda_i$ empty.

Decomposition of a system is not an unfamiliar process. For example, a system can be decomposed in terms of its various components, resembling a conventional block-schematic representation. A typical example of an input module in that case could be a sensor.

For a system composed of $m$ modules, the output, $y = x_m$, is obtained by evaluating the module functions in order²

$x_j = f_j(\lambda_j), \quad j = l+1, \ldots, m \qquad (3)$

Each step of the algorithm is associated with a distinct module $j$, and those modules that provide its direct inputs. The evaluation of $x_m$ is actually a familiar recursive process: module $m$ calls its direct input modules, which call their inputs, and so on, down to the system inputs, $x_1, \ldots, x_l$.
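For illustration, a minimal Python sketch of this value-propagation step follows; the class and method names are illustrative only (they are not taken from a published interface), and each module simply asks its direct inputs for their values before applying its own function.

```python
class InputModule:
    """A system input: x_i = f_i() with an empty input set (e.g., a sensor)."""
    def __init__(self, x):
        self._x = x
        self.inputs = []          # lambda_i is empty for an input module

    def value(self):
        return self._x


class Module:
    """A processing module evaluating x_j = f_j(lambda_j)."""
    def __init__(self, fn, inputs):
        self._fn = fn             # f_j, a function of the direct input values
        self.inputs = inputs      # the modules supplying lambda_j

    def value(self):
        # Ask the direct input modules for their values, then apply f_j;
        # the recursion terminates at the system inputs x_1, ..., x_l.
        return self._fn(*[m.value() for m in self.inputs])


# Example: y = (a + b) * c, built from three input modules and two modules.
a, b, c = InputModule(1.0), InputModule(2.0), InputModule(3.0)
s = Module(lambda p, q: p + q, [a, b])
y = Module(lambda p, q: p * q, [s, c])
print(y.value())                  # 9.0
```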

This notion of a module, that encapsulates the steps in a value calculation, can be extended to handle uncertainty. The propagated quantity will be called a “component-of-uncertainty” and is defined as the product of a module’s sensitivity coefficient to an input (a partial derivative) and the input standard uncertainty. For example

$u_i(x_j) = \frac{\partial x_j}{\partial x_i}\, u(x_i) \qquad (4)$

is the component-of-uncertainty in $x_j$ due to uncertainty in the input value $x_i$.³ The standard uncertainty of $x_i$ is usually denoted $u(x_i)$, which is equivalent to $u_i(x_i)$ in the notation for a component-of-uncertainty.

Components of uncertainty can be propagated by an algorithm expressed in a similar form to (3).⁴ The component $u_i(x_m)$, for any given $x_i$ among the inputs $x_1, \ldots, x_l$, is obtained by evaluating

$u_i(x_j) = \sum_{x_k \in \lambda_j} \frac{\partial f_j}{\partial x_k}\, u_i(x_k), \quad j = l+1, \ldots, m \qquad (5)$

Equation (5) describes a method for propagating information about uncertainty in a modular system. It is the main result of this paper. Moreover, it suggests how the familiar notion of a module can be extended to handle uncertainty. This is possible because the iteration order is the same in algorithms (3) and (5) and the values needed at each step can be obtained from the modules that are direct inputs to the $j$th module, that is, those modules associated with the values in $\lambda_j$. The two algorithms can be combined in the same iterative loop

$x_j = f_j(\lambda_j)$
$u_i(x_j) = \sum_{x_k \in \lambda_j} \frac{\partial f_j}{\partial x_k}\, u_i(x_k), \quad j = l+1, \ldots, m \qquad (6)$

²To ensure the correct order of evaluation, subscripts are assigned so that $j > k$, where $x_k$ is any member of the set $\lambda_j$.

³Note, components of uncertainty are only used to calculate combined uncertainty (see Appendix A). They are not uncertainties in their own right.

⁴The algorithm can be obtained by applying the chain rule for partial differentiation to the problem (see Appendix B).

To interpret this in terms of system components, each module $j$ should be seen as an entity that makes the following information available to clients:

• a value (i.e., an implementation of $f_j$);
• a component of uncertainty (i.e., an implementation of $u_i(x_j)$, parameterized on an input module $i$).

If these requirements are met, a module can calculate an output value, and the components of uncertainty for that value, from the information available at its inputs. The next section illustrates this with a simple example.
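A minimal Python sketch of such a module interface is given below (the names are illustrative, not a published API): each module exposes a value and a component-of-uncertainty function parameterized on a system input module, implementing (3) and (5) directly.

```python
class UncertainInput:
    """A system input reporting a value and a standard uncertainty."""
    def __init__(self, x, u):
        self._x, self._u = x, u
        self.inputs = []              # lambda_i is empty for a system input

    def value(self):
        return self._x

    def u_component(self, i):
        # u_i(x_i) = u(x_i); any other input contributes nothing here.
        return self._u if i is self else 0.0


class ProcessingModule:
    """A module evaluating x_j = f_j(lambda_j) and its components of uncertainty."""
    def __init__(self, fn, partials, inputs):
        self._fn = fn                 # f_j, a function of the direct input values
        self._partials = partials     # df_j/dx_k, one callable per direct input
        self.inputs = inputs          # the modules supplying lambda_j

    def value(self):
        return self._fn(*[m.value() for m in self.inputs])

    def u_component(self, i):
        # u_i(x_j) = sum_k (df_j/dx_k) u_i(x_k), the iteration step of (5).
        args = [m.value() for m in self.inputs]
        return sum(dfdx(*args) * m.u_component(i)
                   for dfdx, m in zip(self._partials, self.inputs))
```

Because the component-of-uncertainty function is parameterized on the input module $i$, a client can ask any module for the component of uncertainty due to any connected system input, without knowing the system structure in advance.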

Note that the idea of a module is recursive in character: (2) is as much a module function as any in (3). So, the process of decomposition can be used to break down arbitrary mathematical expressions into elementary operations. This is analogous to the mathematical expression-parsing that is done in many programming environments. It can be exploited here to design software frameworks that neatly parse expressions and automatically carry out uncertainty calculations in compliance with the recommendations of the Guide [9]–[13]. This will be discussed further in Section IV.
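As a sketch of how such a framework might decompose expressions automatically, the fragment below overloads two arithmetic operators so that writing an ordinary expression builds the corresponding elementary operations; the class and its methods are hypothetical, not the published implementations of [9]–[13].

```python
class UNumber:
    """An 'uncertain number': a value plus a component-of-uncertainty function."""
    def __init__(self, x, u_component):
        self.x = x
        self.u_component = u_component     # maps a leaf UNumber -> u_i(x)

    @staticmethod
    def leaf(x, u):
        node = UNumber(x, None)
        node.u_component = lambda i: u if i is node else 0.0
        return node

    def __mul__(self, other):
        # Elementary product operation: d(ab)/da = b, d(ab)/db = a.
        return UNumber(self.x * other.x,
                       lambda i: other.x * self.u_component(i)
                                 + self.x * other.u_component(i))

    def __truediv__(self, other):
        # Elementary quotient operation: d(a/b)/da = 1/b, d(a/b)/db = -a/b**2.
        return UNumber(self.x / other.x,
                       lambda i: self.u_component(i) / other.x
                                 - self.x * other.u_component(i) / other.x ** 2)


# P = V*V/R written as an ordinary expression (illustrative values):
V = UNumber.leaf(10.0, 0.05)
R = UNumber.leaf(100.0, 0.2)
P = V * V / R
print(P.x, P.u_component(V), P.u_component(R))   # 1.0 0.01 -0.002
```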

III. EXAMPLE

To illustrate the technique outlined in the previous section, suppose that a system is required to estimate electrical power from a measurement of the potential difference across a known resistor, $R$. If $V$ is the measured voltage then power is given by

$P = \frac{V^2}{R} \qquad (7)$

Further, suppose that the temperature dependence of the resistor has been characterized, so

$R = R_0 \left[ 1 + \alpha (t - t_0) \right] \qquad (8)$

where $t$ is the resistor temperature, $\alpha$ is the temperature coefficient and $R_0$ is the resistance at temperature $t_0$ ($\alpha$, $R_0$ and $t_0$ have negligible uncertainty here).

To realize the measurement system, two modules are available: a voltage sensor and a temperature sensor. These modules are regarded as “black boxes”—their inner workings are hidden. However, both provide a suitable “component of uncertainty” function in their communications interfaces, so that values for $u_i(x_1)$ and $u_i(x_2)$ can be obtained. The discussion focuses on the design of a processing module (e.g., a computer) that communicates with both sensors (Fig. 1).

The processing module can be thought of as the iteration step $j = 3$ in (6). Writing $x_1$ for the measured voltage and $x_2$ for the measured temperature, its (measurement) function is, from (7) and (8),

$x_3 = f_3(x_1, x_2) = \frac{x_1^2}{R_0 \left[ 1 + \alpha (x_2 - t_0) \right]} \qquad (9)$

The expressions for the components of uncertainty can be found from

$u_i(x_3) = \frac{\partial f_3}{\partial x_1}\, u_i(x_1) + \frac{\partial f_3}{\partial x_2}\, u_i(x_2) \qquad (10)$

given that $\lambda_3 = \{x_1, x_2\}$. The two partial derivatives are

$\frac{\partial f_3}{\partial x_1} = \frac{2 x_1}{R_0 \left[ 1 + \alpha (x_2 - t_0) \right]} \qquad (11)$

$\frac{\partial f_3}{\partial x_2} = -\frac{\alpha\, x_1^2}{R_0 \left[ 1 + \alpha (x_2 - t_0) \right]^2} \qquad (12)$


Fig. 1. Two sensor modules are connected to a processing module to realize the power measurement system. The small empty circles represent a common communications interface; the larger labeled circles identify the modules.

So (10) can be expressed as

$u_i(x_3) = \frac{2 x_1}{R_0 \left[ 1 + \alpha (x_2 - t_0) \right]}\, u_i(x_1) - \frac{\alpha\, x_1^2}{R_0 \left[ 1 + \alpha (x_2 - t_0) \right]^2}\, u_i(x_2) \qquad (13)$

When implementing these expressions we would actually interrogate modules 1 and 2 to obtain values for $x_1$, $x_2$, $u_i(x_1)$ and $u_i(x_2)$. It is particularly important that the component-of-uncertainty functions remain parameterized on $i$. Indeed, if the design were to exploit the independence of the original input modules (i.e., that $u_1(x_2) = u_2(x_1) = 0$) it could limit the system's flexibility by rigidly imposing this independence even after future module changes. It is possible to design the processor module software so that the set of system input modules is determined dynamically, which means calculations can iterate over all input modules connected at a given instant (i.e., over all $i$). This ensures that future changes are automatically handled correctly. The more general formulation allows the processing module to behave correctly in spite of substantial changes to the system uncertainty calculation, which is a fundamental feature of the design technique proposed.
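A numerical sketch of this processing module is given below, with illustrative values (not from the paper) standing in for the resistor characterization and the figures reported by the two sensor modules.

```python
R0, alpha, t0 = 100.0, 3.9e-3, 20.0    # illustrative resistor characterization

# Values and standard uncertainties reported by the sensor modules (illustrative):
x1, u_x1 = 10.0, 0.05                   # module 1: voltage, V
x2, u_x2 = 25.0, 0.10                   # module 2: temperature, deg C

def f3(v, t):                           # (9)
    return v ** 2 / (R0 * (1 + alpha * (t - t0)))

def df3_dx1(v, t):                      # (11)
    return 2 * v / (R0 * (1 + alpha * (t - t0)))

def df3_dx2(v, t):                      # (12)
    return -alpha * v ** 2 / (R0 * (1 + alpha * (t - t0)) ** 2)

# Components of uncertainty in x3, per (13). The sensors happen to be
# independent here, so u_1(x2) = u_2(x1) = 0, but the loop does not rely on
# that: it simply iterates over whichever input modules are connected.
components = {1: (u_x1, 0.0), 2: (0.0, u_x2)}    # i -> (u_i(x1), u_i(x2))
x3 = f3(x1, x2)
for i, (u_i_x1, u_i_x2) in components.items():
    u_i_x3 = df3_dx1(x1, x2) * u_i_x1 + df3_dx2(x1, x2) * u_i_x2
    print(f"u_{i}(x3) = {u_i_x3:.4g} W")
```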

IV. DISCUSSION

This paper has shown that the familiar notion of system modules can be extended to handle measurement uncertainty according to the recommendations of the Guide. Externally, a module needs to give clients access to both a value and the associated components of uncertainty. These attributes can be calculated from a module's direct inputs.

A system that uses this technique presents two distinct aspects. One aspect is the set of system modules, which can be considered as self-contained independent entities. The other aspect is a system-specific framework that supports the interconnection of modules and coordinates them. The framework allows a measurement to be made according to a given procedure and ensures that uncertainty is handled correctly. In short, the technique proposed allows a design to be factored in a way that separates system structure and function from individual module characteristics.

The clear distinction between these aspects will simplify validation and testing. On one hand, a module can be tested and validated in isolation. It should be possible to determine that a module's performance is satisfactory and that it produces self-consistent measurement information. On the other hand, a system framework can be tested in terms of the correct handling of a measurement procedure and handling of the information produced by modules.

A system's framework is implemented in software. In essence, this requires a specification for the generic system-module interface and implementations of functions that manipulate modules in the system to perform certain tasks. For example, the calculation of combined uncertainty [Appendix A, (17)] would generally be implemented as part of a framework. As pointed out in Section II, the notion of a module is abstract and can be used to encapsulate an arbitrary degree of complexity. Our own software framework implementations have included module sets supporting arithmetic and simple math functions, so that arbitrary mathematical expressions can incorporate uncertainty. These are useful for manipulating measurement data obtained from instruments. The technique is efficient and simple to implement.⁵ The design of these elementary modules closely resembles the so-called “reverse” method of automatic differentiation, which is an established technique of computational mathematics [14].

There are no commercial instrument modules that support uncertainty available at this time; however, systems can be designed using conventional technology (i.e., without a built-in component-of-uncertainty function). To encapsulate an instrument and present a suitable module interface to a system framework, software must be written to provide the component-of-uncertainty function. This is similar to the idea of a “Role Control Module” (RCM) for instruments, as discussed in [8]. An RCM implements the set of functions required of a generic instrument in a particular system. In this way, the role of an instrument can be quite narrowly defined within the context of a particular system's requirements. The task of writing a component-of-uncertainty function would therefore apply only to the particular modes of operation required of the RCM.

In general, the reader should note that there are several assumptions underlying the Guide's approach to propagating and interpreting measurement uncertainties. First, it is assumed that the measurement function can be approximated by a Taylor series truncated beyond linear terms: in other words, that the measurement function, in the vicinity of the measurement point, is considered to be linear on the scale of variations associated with the uncertainties. One would expect this assumption to be satisfied in the majority of cases. Nevertheless, system software could be designed to test this assumption in situ. Second, it is assumed that the uncertainty associated with a measurement result can be characterized by a normal distribution. If either of these approximations does not hold, then the proposed method may not apply.

⁵Some examples are available, in C++, Python and Visual Basic [13].


Another important consideration is that a measurement procedure should demonstrate “traceability” to primary standards with quantified uncertainties [15]. Traceable measurements are increasingly demanded, because they carry an assurance of quality. The analysis of uncertainty contributions in a particular measurement procedure is a task requiring specialist skills. There are various considerations that apply when assessing uncertainties in measurement, which are covered in the Guide [2]. This paper simply assumes that designers will have this competency.

V. CONCLUSION

A new design technique has been presented that allows measurement systems to handle uncertainty calculations. Analysis of the mathematical procedure involved shows that individual system modules can be designed to evaluate the various steps in a calculation. This means that uncertainty calculations can be distributed over the modules comprising a system and shows that module-design can fully encapsulate measurement information. Modular instrumentation systems can therefore report self-consistent uncertainty values, recognizing the individual contributions of each system component. The technique allows modules to be exchanged (plug-and-play) without compromising system integrity.

It appears that the proposed algorithm is novel and that its application to measurement systems is new. The method strictly follows current best-practice in evaluation and reporting of measurement uncertainty and is straightforward to implement, requiring little more programming than is currently used in instrumentation systems.

APPENDIX

A. STANDARD PRACTICE

Guidelines for the evaluation of measurement uncertainties have been published in the Guide to the Expression of Uncertainty in Measurement [2]. The recommendations of the Guide are currently accepted as best-practice and have been adopted by national measurement institutes and accredited measurement laboratories worldwide.

The measurement of a quantity $Y$, the measurand, is described by a function $Y = f(X_1, X_2, \ldots, X_N)$, interpreted as “that function which contains every quantity, including all corrections and correction factors, that can contribute a significant component of uncertainty to the measurement result” [2, 4.1.2]. The quantities $Y$ and $X_1, X_2, \ldots, X_N$ are associated with random variables; however, the distribution means (and possibly the standard deviations) are not known exactly. Estimates of the means, $x_1, x_2, \ldots, x_N$, are used to estimate the measurand as

$y = f(x_1, x_2, \ldots, x_N) \qquad (14)$

A “standard uncertainty,” $u(x_i)$, is associated with each $x_i$ and is understood to be an estimate of the standard deviation of $X_i$. The quality of this estimate can be characterized by a number of “degrees-of-freedom,” $\nu_i$. If $\nu_i$ is infinite, the standard uncertainty is considered exact; otherwise $\nu_i$ is related to the relative uncertainty of $u(x_i)$ [2, G.4.2]. The degrees-of-freedom has its conventional meaning when associated with a normally distributed random variable.

The uncertainty in $y$ depends on the uncertainties in $x_1, x_2, \ldots, x_N$ and on the form of $f$. This paper uses the notation⁶

$u_i(y) = \frac{\partial f}{\partial x_i}\, u(x_i) \qquad (15)$

where the partial derivative is more formally expressed as

$\frac{\partial f}{\partial x_i} \equiv \left. \frac{\partial f}{\partial X_i} \right|_{X_1 = x_1, \ldots, X_N = x_N} \qquad (16)$

The combined standard uncertainty in $y$ is then evaluated as⁷

$u^2(y) = \sum_{i=1}^{N} \sum_{j=1}^{N} r(x_i, x_j)\, u_i(y)\, u_j(y) \qquad (17)$

where $r(x_i, x_j) = u(x_i, x_j) / [u(x_i)\, u(x_j)]$ is the correlation coefficient and $u(x_i, x_j)$ is the estimated covariance of $x_i$ and $x_j$. When there is no correlation, this simplifies to

$u^2(y) = \sum_{i=1}^{N} u_i(y)^2 \qquad (18)$
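For reference, a direct Python implementation of (17) and (18) might look as follows; the function name and the representation of the correlation matrix are illustrative choices, not part of a published library.

```python
import math

def combined_uncertainty(components, r=None):
    """Combined standard uncertainty u(y) from components u_i(y), per (17)/(18).

    components: sequence of u_i(y); r: optional correlation matrix r(x_i, x_j).
    """
    n = len(components)
    if r is None:                        # uncorrelated inputs: (18)
        return math.sqrt(sum(u * u for u in components))
    total = sum(r[i][j] * components[i] * components[j]
                for i in range(n) for j in range(n))
    return math.sqrt(total)              # general case: (17)
```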

If one or more of the input-quantity uncertainties has finite degrees of freedom, a value of “effective degrees-of-freedom,” $\nu_{\mathrm{eff}}$, for $u(y)$ should be calculated using the Welch–Satterthwaite formula [2, G.4]

$\nu_{\mathrm{eff}} = \frac{u^4(y)}{\sum_{i=1}^{N} u_i(y)^4 / \nu_i} \qquad (19)$

When inputs are correlated, $\nu_{\mathrm{eff}}$ cannot be calculated; the Guide makes no specific recommendations in this case.
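A corresponding sketch of (19) is shown below, again with illustrative naming; an infinite number of degrees of freedom can be represented by math.inf.

```python
import math

def effective_dof(components, dof):
    """Effective degrees of freedom by the Welch-Satterthwaite formula (19).

    components: the u_i(y); dof: the nu_i (math.inf for an exact uncertainty).
    Valid only when the input quantities are uncorrelated.
    """
    u_y = math.sqrt(sum(u * u for u in components))
    denom = sum(u ** 4 / nu for u, nu in zip(components, dof))
    return math.inf if denom == 0 else u_y ** 4 / denom
```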

B. THE CHAIN RULE

The iteration steps of (5) are closely related to the chain rule for partial differentiation. This can be seen more clearly by writing the final iteration of (5), when $j = m$, in the equivalent form

$\frac{\partial x_m}{\partial x_i}\, u(x_i) = \sum_{x_k \in \lambda_m} \frac{\partial f_m}{\partial x_k}\, \frac{\partial x_k}{\partial x_i}\, u(x_i) \qquad (20)$

The common factor $u(x_i)$ can be cancelled, leaving

$\frac{\partial x_m}{\partial x_i} = \sum_{x_k \in \lambda_m} \frac{\partial f_m}{\partial x_k}\, \frac{\partial x_k}{\partial x_i} \qquad (21)$

Because, in general, $f_m$ in this equation may not explicitly depend on $x_i$ (i.e., $x_i$ need not be a member of $\lambda_j$, for any particular $j$), the terms $\partial x_k / \partial x_i$ may be expanded further by applying the chain rule. That is the reason for iterating over $j$ in (5); each step provides intermediate results needed in subsequent iterations.
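As a small worked case (hypothetical, with a single system input $x_1$, so $l = 1$, and two processing modules with $\lambda_2 = \{x_1\}$ and $\lambda_3 = \{x_2\}$), the iteration of (5) gives

$u_1(x_2) = \frac{\partial f_2}{\partial x_1}\, u(x_1), \qquad u_1(x_3) = \frac{\partial f_3}{\partial x_2}\, u_1(x_2) = \frac{\partial f_3}{\partial x_2}\, \frac{\partial f_2}{\partial x_1}\, u(x_1)$

so the second step reproduces the chain-rule expansion of $\partial x_3 / \partial x_1$ without ever forming that derivative explicitly.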

⁶The Guide's notation for uncertainty components defines them as the modulus of the $u_i(y)$ used here.

⁷The Guide's notation for combined uncertainty is $u_c(y)$. We omit the subscript c, because the meaning is clear from the context.


ACKNOWLEDGMENT

The author is very grateful to R. Willink for many fruitful discussions regarding this work and for suggestions that have improved this manuscript. The author would also like to thank L. Christian and D. R. White for their comments and suggestions.

REFERENCES

[1] ISO, General Requirements for the Competence of Testing and Calibration Laboratories, International Organization for Standardization, Geneva, Switzerland, ISO/IEC 17025:1999(E), 1999.

[2] ISO, Guide to the Expression of Uncertainty in Measurement, International Organization for Standardization, Geneva, Switzerland, 1993.

[3] American National Standard for Expressing Uncertainty—U.S. Guide to the Expression of Uncertainty in Measurement, ANSI/NCSL Z540-2-1997.

[4] B. N. Taylor and C. E. Kuyatt, “Guidelines for evaluating and expressing the uncertainty of NIST measurement results,” National Institute of Standards and Technology, NIST Technical Note 1297.

[5] B. J. Anderson. (2001) Keynote Address. NCSL International Symposium, Washington, DC. [Online]. Available: http://metrology-forum.tm.agilent.com/ncsli2001.shtml

[6] K. B. Lee and R. D. Schneeman, “Internet-based distributed measurement and control applications,” IEEE Instrum. Meas. Mag., vol. 2, no. 2, pp. 23–27, Jun. 1999.

[7] T. R. Licht, “The IEEE 1451.4 proposed standard,” IEEE Instrum. Meas. Mag., vol. 4, no. 1, pp. 12–18, Mar. 2001.

[8] J. Mueller and R. Oblad, “Architecture drives test system standards,” IEEE Spectrum, vol. 37, no. 9, pp. 68–73, Sep. 2000.

[9] B. D. Hall, “Automatic uncertainty calculation for smart measurement systems,” in Proc. Sensors for Industry Conf., Rosemount, IL, Nov. 2001, pp. 290–295.

[10] ——, “A design pattern to encapsulate measurement uncertainty in reusable instrumentation modules,” in Proc. AUTOTESTCON'02 Conf., Huntsville, AL, Oct. 2002, pp. 678–686.

[11] ——, “The GUM tree design pattern for uncertainty software,” in Advanced Mathematical and Computational Tools in Metrology VI, ser. Advances in Mathematics for Applied Sciences. Singapore: World Scientific, 2005.

[12] ——, “Calculating uncertainty automatically in instrumentation systems,” Industrial Research Ltd., Wellington, New Zealand, Internal Report no. 1073, Feb. 2002.

[13] Various reports and software implementations of the technique described are available at http://www.irl.cri.nz/msl/mst.

[14] L. B. Rall and G. F. Corliss, “An introduction to automatic differentiation,” in Computational Differentiation: Techniques, Applications, and Tools, M. Berz, C. H. Bischof, G. F. Corliss, and A. Griewank, Eds. Philadelphia, PA: SIAM, Sep. 1996, pp. 1–17.

[15] J. V. Nicholas and D. R. White, Traceable Temperatures, 2nd ed. Chichester, U.K.: Wiley, 2002.

B. D. Hall received the B.Sc.(Hons.) and M.Sc. degrees in physics from Victoria University, Wellington, New Zealand, and the Dr. ès Sci. degree, for research into the atomic structure of nanometer-sized particles, from the Swiss Federal Institute of Technology in Lausanne (EPFL) in 1991.

Before joining the Measurement Standards Laboratory, he held a Postdoctoral Fellowship at the Swiss Federal Office of Metrology, and was a Lecturer in Physics and Electronics at Massey University, New Zealand. His current research interests include measurement uncertainty, radio-frequency and microwave metrology, and nanotechnology.