# Use of information-theory methods in the analysis of measurement devices


USE OF INFORMATION-THEORY METHODS IN THE ANALYSIS OF MEASUREMENT DEVICES

Ya. M. Vyrk

UDC 681.2.001.5 : 519.92

In recent years a number of criteria based on the notion of quantity of information introduced by Shannon [1, 2] have been suggested for evaluating measurement means and processes. The principal concept of the information theory of measurements is the proposition that the performance index of a measurement device is the average quantity of information [3] contained in the instrument readings (z) about the values of the measured quantity (x). The quantity of information is defined as

$$I(x, z) = H(x) - H(x/z),$$

where H(x) and H(x/z) are respectively the entropy and the conditional entropy of the measured discrete quantity. For a continuous x, the value I(x, z) is defined as the difference between the differential and conditional differential entropies. Heretofore, however, such an approach has been applied only to the analysis of the static characteristics of measurement devices.

In analyzing the dynamics of the measurement process by information methods, the basic criteria are the quantity of information per measurement and, defined on this basis, the quantity of information per unit time; the measured process is assumed to be stationary. Such an approach is convenient in analyzing digital and cyclic methods of measurement. In many cases, however, both the input and the output process of the measurement device are continuous stationary random processes [4]. Below we consider the possibilities of applying information theory to estimating the dynamics of the measurement process in this case. The original model is shown in Fig. 1.
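For the discrete case, the defining relation I(x, z) = H(x) - H(x/z) can be illustrated with a small numerical sketch. The joint distribution below is purely illustrative (it is not taken from the article):

```python
import math

# Illustrative joint distribution p(x, z) for a two-level measured quantity x
# and a two-level instrument reading z (assumed numbers, not from the article)
p_xz = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def entropy(probs):
    """Shannon entropy in bits of an iterable of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Marginal distributions of x and z
p_x = [sum(p for (x, z), p in p_xz.items() if x == v) for v in (0, 1)]
p_z = [sum(p for (x, z), p in p_xz.items() if z == v) for v in (0, 1)]

H_x = entropy(p_x)                                   # H(x)
H_x_given_z = entropy(p_xz.values()) - entropy(p_z)  # H(x/z) = H(x, z) - H(z)
I_xz = H_x - H_x_given_z                             # I(x, z) = H(x) - H(x/z)
print(round(I_xz, 4))                                # 0.2781
```

A reading perfectly correlated with x would give I(x, z) = H(x) = 1 bit; the noisy readings assumed here convey about 0.28 bit per measurement.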
In Fig. 1, x(t) is the input (measured) signal, a stationary random process; k(t) is the impulse transient function of the device for the input process; y(t) is the additive error referred to the output of the device, a continuous stationary random process uncorrelated with x(t); and z(t) is the output signal. The following relationships are valid [5]:

$$x_1(t) = \int_{-\infty}^{\infty} x(t - \tau)\, k(\tau)\, d\tau, \qquad z(t) = x_1(t) + y(t).$$

As in the simplest problems of the statistical dynamics of linear automatic control systems, let us assume that x(t) and y(t) have zero mathematical expectations and that their spectral densities S_x(ω) and S_y(ω) are known and may be represented by rational-fraction functions of ω². The transfer function of the device, corresponding to the impulse transient function k(t), is K(jω). The device is linear with constant parameters.

As far as the distributions of the random processes are concerned, we shall restrict the analysis to the normal distribution. This, of course, narrows the range of problems considered; but since in a number of cases one may assume that both the input process and the error of the device have a normal distribution [4], the study of the problem in this setting is justified.

*Translated from Izmeritel'naya Tekhnika, No. 3, pp. 5-7, March, 1971. Original article submitted November 4, 1968. © 1971 Consultants Bureau, a division of Plenum Publishing Corporation, 227 West 17th Street, New York, N. Y. 10011. All rights reserved. This article cannot be reproduced for any purpose whatsoever without permission of the publisher. A copy of this article is available from the publisher for $15.00.*

The basic information characteristic of the device is taken to be the quantity of information on the input process which is contained in the output signal. On the basis of [6] we have

$$I(x, z) = -\frac{1}{2\pi} \int_0^{\infty} \ln \left[ 1 - \frac{|S_{xz}(\omega)|^2}{S_x(\omega)\, S_z(\omega)} \right] d\omega,$$
where S_z(ω) is the spectral density of the output signal; under these conditions, in accordance with [5],

$$S_z(\omega) = S_x(\omega)\, |K(j\omega)|^2 + S_y(\omega),$$

while S_xz(ω) is the mutual spectral density of the processes x(t) and z(t), so that

$$|S_{xz}(\omega)|^2 = |K(j\omega)|^2\, S_x^2(\omega).$$

Finally we obtain

$$I(x, z) = \frac{1}{2\pi} \int_0^{\infty} \ln \left[ 1 + \frac{S_x(\omega)\, |K(j\omega)|^2}{S_y(\omega)} \right] d\omega. \tag{1}$$

From (1) it is evident that the quantity of information obtained from the measurement device depends on its dynamic properties K(jω) and its error S_y(ω), as well as on the measured process S_x(ω).

In order to estimate the accuracy obtained, we use the mean-square error. The output signal with no error is taken to be the signal K(0)x(t), where K(0) is the static transmission factor of the device. The mean-square error [5], averaged over the entire spectrum, is

$$\varepsilon^2 = \frac{1}{2\pi} \int_{-\infty}^{\infty} \left[ |K(0) - K(j\omega)|^2\, S_x(\omega) + S_y(\omega) \right] d\omega. \tag{2}$$

In accordance with [7] we have

$$H_{\varepsilon}(x) = \frac{1}{2\pi} \int_0^{\infty} \max \left[ \ln \frac{K^2(0)\, S_x(\omega)}{\theta^2},\; 0 \right] d\omega, \tag{3}$$

where θ² is determined from the equation

$$\varepsilon^2 = \frac{1}{2\pi} \int_{-\infty}^{\infty} \min \left[ K^2(0)\, S_x(\omega),\; \theta^2 \right] d\omega. \tag{4}$$

By analogy with the analysis of random quantities by the methods of the information theory of measurement devices [8], we define the information efficiency as

$$\eta_I = \frac{H_{\varepsilon}(x)}{I(x, z)}.$$

Low values of η_I indicate that it is expedient to perform further processing of the output signal of the device. Under these conditions the achievable minimum mean-square error and the maximum information efficiency η_I' are of interest. It is well known that for the case considered (stationary signals with a normal distribution) linear filtering is optimal [5]; that is, no other method of processing the output signal, for example nonlinear filtering, can produce a smaller mean-square error. This means that by connecting an optimal Wiener filter to the output of the device we obtain an improved signal at the filter output:

$$z_1(t) = \int_{-\infty}^{\infty} z(t - \tau)\, h(\tau)\, d\tau.$$

[Fig. 1: model of the measurement device.]
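The information characteristic (1) can be evaluated numerically for rational spectra and checked against the general definition from which it was derived. In the sketch below the spectra S_x, |K|², and S_y are illustrative assumptions, not the article's example:

```python
import math
from scipy.integrate import quad

# Illustrative (assumed) rational spectra and transfer function:
# S_x(w) = 1/(1+w^2), |K(jw)|^2 = 1/(1+0.25 w^2), S_y(w) = 0.1/(1+w^2)
S_x = lambda w: 1.0 / (1.0 + w**2)
K2  = lambda w: 1.0 / (1.0 + 0.25 * w**2)   # |K(jw)|^2
S_y = lambda w: 0.1 / (1.0 + w**2)

S_z    = lambda w: S_x(w) * K2(w) + S_y(w)  # output spectral density
Sxz_sq = lambda w: K2(w) * S_x(w)**2        # |S_xz(w)|^2

# General form: I = -(1/2pi) * int_0^inf ln[1 - |S_xz|^2 / (S_x S_z)] dw
I_general = quad(lambda w: -math.log(1.0 - Sxz_sq(w) / (S_x(w) * S_z(w))),
                 0, math.inf)[0] / (2.0 * math.pi)

# Form (1): I = (1/2pi) * int_0^inf ln[1 + S_x |K|^2 / S_y] dw
I_eq1 = quad(lambda w: math.log(1.0 + S_x(w) * K2(w) / S_y(w)),
             0, math.inf)[0] / (2.0 * math.pi)

print(I_general, I_eq1)   # the two forms coincide
```

For these particular spectra the integrand reduces to ln[(ω² + 44)/(ω² + 4)], so the integral also has the closed form √11 − 1 ≈ 2.317, a convenient cross-check on the quadrature.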
[Figs. 2 and 3: graphical material; Fig. 3 plots I(x, z) and the efficiency characteristics against the parameter c (0.1 to 100).]

Here h(τ) is the impulse transient function of the filter, which reproduces the measured process x(t) with the minimum mean-square error ε₁². Knowing ε₁², we use (3) and (4) to determine the corresponding ε-entropy H_ε₁(x) and find the maximum information efficiency

$$\eta_I' = \frac{H_{\varepsilon_1}(x)}{I(x, z)}.$$

Since the minimum quantity of information H_ε(x) required for reproducing the signal x(t) with a stipulated accuracy ε² depends solely on the properties of the signal [see (3) and (4)], while the quantity of information I(x, z) obtained from the measurement device also depends on the dynamic properties and the error of the device, η_I characterizes the degree to which the dynamic properties of the device and the measured process are matched. The index η_I', which interrelates the quantity of information I(x, z) and [via H_ε₁(x)] the minimal mean-square error, is a measure of the value of the information obtained from the device. Recall that one of the problems in the theory of the value of information is finding the extremal value of the mathematical expectation of the penalty function (in our case the minimal mean-square error) for a fixed value of the quantity of information [9].

The value of η_I' determines the portion of the overall quantity of information I(x, z) which may, in the best case, be used for the reproduction of x(t). The question naturally arises: what would the mean-square error be if it were possible to use all of the information optimally, i.e., in the case H_ε(x) = I(x, z), or η_I' = 1? Let us call the corresponding error the information-theory minimal mean-square error, and denote it by ε₀². To determine ε₀² we substitute I(x, z) for H_ε(x) in (3), whence we find θ²; we then find ε₀² from (4).
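This procedure can be sketched numerically. Everything in the block below is illustrative: K(0) = 1 and S_x(ω) = 1/(1 + ω²) are assumptions, as are the stipulated values of ε² and I(x, z); Eq. (4) is evaluated over [0, ∞) using the evenness of its integrand:

```python
import math
from scipy.integrate import quad
from scipy.optimize import brentq

# Illustrative assumptions (not the article's data): K(0) = 1, S_x(w) = 1/(1 + w^2)
S_x = lambda w: 1.0 / (1.0 + w**2)

def cutoff(theta2):
    # Frequency w_n at which S_x(w_n) = theta^2 (S_x decreases monotonically),
    # where the max[., 0] in (3) and the min[., .] in (4) switch branches
    return math.sqrt(1.0 / theta2 - 1.0)

def H_eps(theta2):
    # Eq. (3): the max[...] integrand is nonzero only below w_n
    wn = cutoff(theta2)
    val, _ = quad(lambda w: math.log(S_x(w) / theta2), 0, wn)
    return val / (2.0 * math.pi)

def eps2(theta2):
    # Eq. (4): min[...] equals theta^2 below w_n and S_x above it
    wn = cutoff(theta2)
    tail, _ = quad(S_x, wn, math.inf)
    return (theta2 * wn + tail) / math.pi

# 1) eps-entropy for a stipulated mean-square error (assumed value)
target_eps2 = 0.05
t_eps = brentq(lambda t: eps2(t) - target_eps2, 1e-9, 1.0 - 1e-9)
print(H_eps(t_eps))          # minimum information needed for this accuracy

# 2) information-theory minimal error eps0^2 for an assumed I(x, z)
I_xz = 0.5
t0 = brentq(lambda t: H_eps(t) - I_xz, 1e-9, 1.0 - 1e-9)
eps0_sq = eps2(t0)
print(eps0_sq)               # no processing of z(t) can do better than this
```

The root bracketing works because ε²(θ²) increases and H_ε(θ²) decreases monotonically in θ² for a monotone spectrum.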
Based on ε₀², we define the dispersion efficiency as

$$\eta_T = \frac{\varepsilon_0^2}{\varepsilon^2}$$

and the maximum dispersion efficiency as

$$\eta_T' = \frac{\varepsilon_0^2}{\varepsilon_1^2}.$$

Since in performing measurements we are interested primarily in the error obtained, ε₀², η_T, and η_T' turn out to be clearer characteristics of the measurement device than I(x, z), η_I, and η_I'. Both η_T and η_T' characterize the degree to which the properties of the device and the measured process are matched. The value of η_T', which couples the quantity of information I(x, z) and (via ε₁²) the mean-square error, is an estimate of the value of the information obtained from the device.

The proposed information characteristics, namely the quantity of information I(x, z) and the information efficiencies η_I and η_I' (or the corresponding accuracy characteristics, the information-theory minimal mean-square error ε₀² and the dispersion efficiencies η_T and η_T'), may prove useful in analyzing the dynamics of measurement devices for continuous processes, and of their sections. They may serve as criteria for comparing devices of various types; they make it possible to estimate the degree to which individual devices are matched when used jointly; and they estimate the degree to which the dynamic properties of a device and the measured process are matched. In view of the extremal property of the normal distribution with respect to entropy (for a stipulated mean-square value of the reproduction error), one may also obtain limiting estimates in the form of inequalities for a given device when the distributions are not normal. Since the quantity of information I(x, z) depends on the properties of the device and of the elements used in it via K(jω) and S_y(ω), it is obviously expedient to use the information criteria I(x, z), η_I, and η_I' (or ε₀², η_T, and η_T') for maximization or minimization as early as the design stage.
Under these conditions it is necessary to consider the statistical properties (spectral density, distribution law) of the processes for which the device is basically designed.

In conclusion, let us consider a simple computational example. Let us investigate the effect of the spectral density of the error on the accuracy characteristics of the device, on the assumption that the power of the error (the mean-square value of the error) remains constant. Assume

$$S_x(\omega) = \frac{1}{1 + \omega^2}; \qquad K(j\omega) = \frac{1}{1 + 0.5\, j\omega}; \qquad S_y(\omega) = \frac{c}{10\,(c^2 + \omega^2)}.$$

Using tables of integrals [5], we obtain P_y = 0.05 for the power of the error; this is independent of the parameter c, which determines the spectral density of the error. From (2) we obtain ε² = 0.217. Equation (1) yields the following expression for the quantity of information:

$$I(x, z) = \frac{1}{2\pi} \int_0^{\infty} \ln \frac{\omega^4 + \omega^2 \left( 5 + \dfrac{40}{c} \right) + 4\,(1 + 10c)}{(1 + \omega^2)(4 + \omega^2)}\, d\omega.$$

It may be shown, using tables of integrals [10] and the properties of definite integrals, that in this case

$$I(x, z) = \frac{1}{2} \left( \sqrt{4\sqrt{1 + 10c} + 5 + \frac{40}{c}} - 3 \right). \tag{5}$$

The physically unrealizable transfer function H(jω) of the Wiener filter (its derivation is analogous to that given in [5], the only difference being that here the stipulated frequency properties K(jω) of the device must also be taken into account) is

$$H(j\omega) = \frac{S_x(\omega)\, K(-j\omega)}{S_x(\omega)\, |K(j\omega)|^2 + S_y(\omega)}.$$

From this the mean-square error after filtering is

$$\varepsilon_1^2 = \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{S_x(\omega)\, S_y(\omega)}{S_x(\omega)\, |K(j\omega)|^2 + S_y(\omega)}\, d\omega,$$

whence we obtain

$$\varepsilon_1^2 = \frac{0.5 + \dfrac{1}{\sqrt{1 + 10c}}}{\sqrt{4\sqrt{1 + 10c} + 5 + \dfrac{40}{c}}}.$$

In order to determine ε₀², we find θ² from (3) by replacing H_ε(x) with the I(x, z) found from (5). For the case given (see Fig. 2 and also [2]) we have

$$I(x, z) = \frac{1}{2\pi} \int_0^{\omega_n} \ln \frac{1}{(1 + \omega^2)\, \theta^2}\, d\omega, \tag{6}$$

$$\theta^2 = \frac{1}{1 + \omega_n^2}. \tag{7}$$

We substitute (7) into (6) and calculate the integral by means of Eq. (623) from [10].
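At this point the example's numerical claims can be cross-checked by direct integration. The sketch below uses the spectra given above with the sample value c = 2 (the choice of c is an assumption made only for the check) and verifies P_y, ε², the agreement of (1) with the closed form (5), and ε₁²:

```python
import math
from scipy.integrate import quad

# The example's spectra; c = 2.0 is a sample value chosen for the check
c = 2.0
S_x = lambda w: 1.0 / (1.0 + w**2)
K2  = lambda w: 1.0 / (1.0 + 0.25 * w**2)   # |K(jw)|^2 for K(jw) = 1/(1 + 0.5jw)
S_y = lambda w: c / (10.0 * (c**2 + w**2))

# Power of the error: P_y = (1/pi) * int_0^inf S_y dw = 0.05, independent of c
P_y = quad(S_y, 0, math.inf)[0] / math.pi

# Mean-square error, Eq. (2); here |K(0) - K(jw)|^2 = w^2 / (4 + w^2)
eps_sq = quad(lambda w: w**2 / (4.0 + w**2) * S_x(w) + S_y(w),
              0, math.inf)[0] / math.pi

# Quantity of information: numerical Eq. (1) vs closed form (5)
I_num = quad(lambda w: math.log(1.0 + S_x(w) * K2(w) / S_y(w)),
             0, math.inf)[0] / (2.0 * math.pi)
root = math.sqrt(1.0 + 10.0 * c)
I_closed = 0.5 * (math.sqrt(4.0 * root + 5.0 + 40.0 / c) - 3.0)

# Error after optimal (Wiener) filtering: integral form vs closed form
eps1_num = quad(lambda w: S_x(w) * S_y(w) / (S_x(w) * K2(w) + S_y(w)),
                0, math.inf)[0] / math.pi
eps1_closed = (0.5 + 1.0 / root) / math.sqrt(4.0 * root + 5.0 + 40.0 / c)

print(round(P_y, 3), round(eps_sq, 3))      # 0.05 0.217
print(round(I_num, 4), round(I_closed, 4))  # the two forms agree
print(round(eps1_num, 4), round(eps1_closed, 4))
```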
We obtain

$$I(x, z) = \frac{1}{\pi} \left( \omega_n - \arctan \omega_n \right),$$

whence for each specific I(x, z) one may calculate the corresponding value of ω_n. Under these conditions ε₀² is determined by the shaded area in Fig. 2:

$$\varepsilon_0^2 = \frac{1}{\pi} \left[ \int_0^{\omega_n} \theta^2\, d\omega + \int_{\omega_n}^{\infty} \frac{d\omega}{1 + \omega^2} \right].$$

Applying Eq. (120) from [10] together with (7), we finally obtain

$$\varepsilon_0^2 = \frac{1}{\pi} \left( \frac{\omega_n}{1 + \omega_n^2} + \frac{\pi}{2} - \arctan \omega_n \right).$$

For large ω_n (ω_n ≫ 1) we have

$$\varepsilon_0^2 \approx \frac{2}{\pi^2\, [\,I(x, z) + 0.5\,]}.$$

The calculations carried out on the basis of the equations given above are displayed in Fig. 3. As is evident from Fig. 3, comparatively high values of the dispersion efficiency η_T (1 < c < 10) correspond to the minimum of the quantity of information. Linear filtering is especially effective for c > 10. For c < 1 linear filtering also decreases the mean-square error noticeably, but η_T' nevertheless remains small (the value of the information, in the sense given above, is small), indicating that the dynamic properties of the device are not matched with the measured process.

LITERATURE CITED

1. P. V. Novitskii, Foundations of the Information Theory of Measurement Devices [in Russian], Izd. Énergiya, Leningrad (1968).
2. V. I. Rabinovich, Avtometriya, No. 5 (1967).
3. F. P. Tarasenko, Introduction to a Course in Information Theory [in Russian], Izd. Tomsk Univ., Tomsk (1963).
4. A. S. Nemirovskii, Probabilistic Methods in Instrumentation Engineering [in Russian], Izd. Standartov, Moscow (1964).
5. V. V. Solodovnikov, Statistical Dynamics of Linear Automatic Control Systems [in Russian], Fizmatgiz, Moscow (1960).
6. M. S. Pinsker, Information and Information Stability of Random Quantities and Processes [in Russian], Izd. AN SSSR, Moscow (1960).
7. M. S. Pinsker, Problemy Peredachi Informatsii, No. 14 (1963).
8. V. B. Svechinskii, in: Automation of Chemical Production [in Russian], No. 2, Izd. NIITÉKhIM, Moscow (1967).
9. R. L. Stratonovich, Izv. Akad. Nauk SSSR, Tekhn. Kibernetika, No. 5 (1965).
10. G. B.
Dwight, Tables of Integrals and Other Mathematical Formulas [Russian translation], Izd. Nauka, Moscow (1966).
