
LECTURES 17 TO 21, FEBRUARY 18, MARCH 3, 4 AND 10, 2004

ERRORS AND ADJUSTMENTS

TYPES OF ERRORS

• Mistakes: Must be detected and eliminated.

• Systematic Error: Repeatable and governed by physical laws. The factors contributing to systematic error and the size of their influence must be understood. The magnitude of the systematic error is estimated and eliminated at the stage of data reduction and adjustment.

• Random Error: Represents the precision of a measurement technique and is generally considered to be normally distributed with zero mean. In other words, random errors of smaller magnitude are more frequent than those of larger magnitude, and positive and negative random errors of equal magnitude occur with equal frequency.

DEFINITIONS

Measurements can be independent or conditioned. For instance, if two angles of a plane triangle are considered independent, the third is conditioned. An observation is direct if the quantity of interest is measured directly, or indirect otherwise (e.g., coordinates calculated using a GPS receiver). An observed value is obtained after eliminating all known errors. The true value is one that is free from all errors and is usually not known. The true error is the difference between the true value and the observed value. The most probable value is the one for which the probability of its being the true value is the maximum. The most probable error is the quantity that gives the band about the most probable value within which the true value would lie. The most probable error of the weighted average of n observations is given by:

$$E_R = \pm\,0.6745\sqrt{\frac{\sum_i w_i\, e_{ri}^2}{\sum_i w_i\,(n-1)}} \qquad (1)$$

where $e_{ri}$ is the deviation of a single measurement from the mean. The most probable error of a sum of uncorrelated quantities, $\sum_i \alpha_i$, is the Square Root of the Sum of Squares (SRSS) of the most probable errors of the individual quantities. The average error in a series of measurements of equal weight is defined as the arithmetic mean of the absolute values of the individual errors. The mean square error, on the other hand, is the square root of the arithmetic mean of the squares of the individual errors. The residual error is the difference between the observed value and the most probable value. An observation equation is one that relates several observed quantities. A condition equation or constraint equation is a fundamental relationship that connects several dependent quantities, e.g., the sum of the four angles of a plane quadrilateral is 360°.
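As a quick numerical check of Eq. (1) and the SRSS rule, the following sketch (assuming NumPy; the measurement values and weights are made up purely for illustration) computes the most probable error of a weighted mean and of a sum of uncorrelated quantities.

```python
import numpy as np

def most_probable_error_weighted_mean(values, weights):
    """Probable error of the weighted mean, Eq. (1):
    E_R = +/- 0.6745 * sqrt( sum(w_i * e_ri^2) / (sum(w_i) * (n - 1)) )."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    n = values.size
    mean = np.average(values, weights=weights)   # weighted mean (most probable value)
    e_r = values - mean                          # deviations from the mean
    return 0.6745 * np.sqrt(np.sum(weights * e_r**2) / (np.sum(weights) * (n - 1)))

def srss(errors):
    """Most probable error of a sum of uncorrelated quantities (SRSS rule)."""
    return np.sqrt(np.sum(np.asarray(errors, dtype=float)**2))

# Illustrative values only: four measurements of one angle (degrees) and their weights
print(most_probable_error_weighted_mean([45.002, 45.004, 44.998, 45.001], [1, 2, 1, 2]))
print(srss([0.003, 0.004]))   # probable error of the sum of two uncorrelated quantities: 0.005
```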


THE LAWS OF WEIGHTS

The weight of an observation quantifies the precision and reliability of the measurement. For instance, an observation with a weight of 4 is four times as reliable as another with a weight of 1. The weight of an observation is inversely proportional to the variance of the observation. In surveying, weights are typically assigned as follows:
• The weight of an angle varies directly with the number of observations.
• The weight of a level line varies inversely as the length of the route.
• If an angle is measured a large number of times, its weight is inversely proportional to the square of the most probable error.
• The corrections to be applied to various observed quantities are inversely proportional to their weights.

Since the weight of an observation is proportional to $1/\sigma^2$, where $\sigma^2$ is the variance, and since the variance of the sum $pA + qB$, where A and B are uncorrelated normally distributed random variables and p and q are constants, is given by

$$\sigma^2_{pA+qB} = p^2\sigma^2_A + q^2\sigma^2_B,$$

the following laws governing weights can be derived (a numerical check of the first and third laws follows the list):
• The weight of the arithmetic mean of a number of observations is equal to the number of observations.
• The weight of the sum of quantities is the reciprocal of the sum of the reciprocals of the individual weights.
• The weight of the product of any quantity and a constant is the weight of the quantity divided by the square of the constant.
• The weight of an equation remains unchanged if all the signs of the equation are changed.
• The weight of an equation remains unchanged if it is added to or subtracted from a constant.
• If an equation is multiplied by its weight, the weight of the resulting equation is the reciprocal of the weight of the original equation.
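A minimal Monte Carlo sketch of the first and third laws (assuming NumPy; the sample size, σ and the constant k are arbitrary): the mean of n unit-weight observations has variance σ²/n, i.e. weight n, and multiplying a quantity by a constant k divides its weight by k².

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n, k = 2.0, 4, 3.0                       # arbitrary illustration values
x = rng.normal(0.0, sigma, size=(100_000, n))   # 100 000 sets of n unit-weight observations

var_single = sigma**2                           # variance of one observation (weight 1)
var_mean = x.mean(axis=1).var()                 # ~ sigma^2 / n
var_scaled = (k * x[:, 0]).var()                # ~ k^2 * sigma^2

print(var_single / var_mean)     # ~ n: the mean carries the weight of n observations
print(var_single / var_scaled)   # ~ 1/k^2: weight divided by the square of the constant
```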

THE LEAST SQUARES PRINCIPLE

A good survey is designed with built-in redundancy, i.e., there are more equations for solving the quantities of interest than are strictly necessary. Such a system is referred to as over-determined. Consider a system of n unknowns and $n_o$ independent equations ($n < n_o$). Several subsets of size n can be chosen from the set of $n_o$ equations and solved for the unknowns. Each of these solutions is likely to return a different solution set because of the presence of random error in the data. If it can be assumed that all the observations are uncorrelated, one approach to obtaining the most probable solution in such a situation is to estimate it from all the observations by minimizing the sum of the squares of the residuals.


Uncorrelated Observations of Equal Precision

For instance, if a quantity is measured n times (n > 1) with equal precision, an individual observation $d_i$ relates to the most probable value $\hat d$ according to $d_i + e_{ri} = \hat d$. The least-squares principle calls for minimization of $\sum_i (\hat d - d_i)^2$, i.e., setting its derivative with respect to $\hat d$ to zero, $\sum_i (\hat d - d_i) = 0$. Hence, $\hat d = \sum_i d_i / n$.

That is, the least-squares estimate of the most probable value from repeated uncorrelated measurements is identical to the arithmetic mean of the measurements.

Now consider that all three angles of a plane triangle, $\theta_i$, were measured in a triangulation survey. Let us assume $\sum_i \theta_i = 180° + \Delta$. Our objective is to distribute the closure error Δ and obtain the most probable values of the angles using the least-squares approach. The objective function, φ, to be minimized here is:

$$\varphi = e_{r1}^2 + e_{r2}^2 + (\Delta - e_{r1} - e_{r2})^2 \qquad (2)$$

Equating $\partial\varphi/\partial e_{r1}$ and $\partial\varphi/\partial e_{r2}$ to zero, one gets $e_{r1} = e_{r2} = e_{r3} = \Delta/3$. This means that if the angles of a plane triangle are of equal weight, they are corrected equally for the closure error. This is in agreement with the recommendation that the closure error be equally distributed between backsight and foresight.
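The same result can be checked by direct numerical minimization of Eq. (2); a sketch assuming SciPy, with a closure error of 9 arc-seconds chosen purely for illustration:

```python
from scipy.optimize import minimize

delta = 9.0   # closure error in arc-seconds (arbitrary illustration value)

# Objective of Eq. (2): phi = e1^2 + e2^2 + (delta - e1 - e2)^2
phi = lambda e: e[0]**2 + e[1]**2 + (delta - e[0] - e[1])**2

e1, e2 = minimize(phi, x0=[0.0, 0.0]).x
e3 = delta - e1 - e2
print(e1, e2, e3)   # each ~ delta/3 = 3.0: the closure error is distributed equally
```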

Uncorrelated Observations of Unequal Precision

For adjusting uncorrelated observations of unequal precision, the objective function to be minimized is:

$$\varphi = \sum_i w_i\, e_{ri}^2 \qquad (3)$$

Differentiating partially with respect to the most probable value, $\hat d$,

$$\frac{\partial\varphi}{\partial \hat d} = 2\sum_i w_i(\hat d - d_i) = 0,$$

from which $\hat d = \sum_i w_i d_i \big/ \sum_i w_i$ = weighted mean of the observations.

For the three angles of a plane triangle measured with unequal precision, it can be shown that the closure error must be distributed in inverse proportion to the weights of the three angles (see the method of correlates below).
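A minimal sketch of the weighted-mean result (assuming NumPy; the observations and weights are invented for illustration):

```python
import numpy as np

# Repeated measurements of one distance, with unequal precision (illustrative values)
d = np.array([120.004, 120.010, 120.006])   # observations (m)
w = np.array([2.0, 1.0, 3.0])               # relative weights (proportional to 1/variance)

d_hat = np.sum(w * d) / np.sum(w)           # least-squares estimate = weighted mean
print(d_hat)                                # 120.006 m
```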

Indirect Observations

In matrix notation, the least-squares solution of the observation equations $Ax = b$ can be expressed as:

$$x = \left(A^T P A\right)^{-1} A^T P\, b \qquad (4)$$

where A is the coefficient matrix, x is the solution vector, b is the measurement vector, and P is the diagonal matrix populated with the weights (reciprocals of the variances) of the observation equations.
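Eq. (4) can be evaluated in a few lines of NumPy. The sketch below is one possible implementation (the function name is a choice of this sketch, and it solves the normal equations with np.linalg.solve rather than forming the inverse explicitly, which is algebraically equivalent but better conditioned); Examples 1 and 2 below can be checked by passing their A, b and P to it.

```python
import numpy as np

def weighted_least_squares(A, b, P):
    """Least-squares solution of the observation equations A x = b, Eq. (4):
    x = (A^T P A)^(-1) A^T P b, with P a diagonal matrix of weights."""
    N = A.T @ P @ A               # normal-equation matrix
    t = A.T @ P @ b               # right-hand side
    return np.linalg.solve(N, t)  # equivalent to inv(N) @ t, but avoids the explicit inverse
```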

Example 1. The following readings were obtained at the upper, middle and lower stadia wires to determine the tacheometric constants:


Distance (m)   Upper Stadia Wire   Middle Stadia Wire   Lower Stadia Wire
30.000         1.433               1.283               1.133
55.000         1.710               1.435               1.160
90.000         2.352               1.902               1.452

Solution. For the 30.000 m distance the difference between the upper and lower stadia readings is 0.300, and for the 90.000 m distance it is 0.900. It is clear from these measurements that the multiplicative constant of the tacheometer is 100 and the additive constant is zero. This is an over-determined problem, and the least-squares method can be used to verify this.

Here, taking the staff intercept (upper minus lower reading) as the coefficient of the multiplicative constant and 1 as the coefficient of the additive constant,

$$A = \begin{bmatrix} 0.300 & 1 \\ 0.550 & 1 \\ 0.900 & 1 \end{bmatrix}, \qquad b = \begin{bmatrix} 30.000 \\ 55.000 \\ 90.000 \end{bmatrix} \qquad \text{and} \qquad P = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$

The corresponding solution vector is

$$x = \left(A^T P A\right)^{-1} A^T P\, b = \begin{bmatrix} 100 \\ 0 \end{bmatrix}.$$
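These numbers can be reproduced directly (a sketch assuming NumPy; the staff intercepts are taken from the table in Example 1):

```python
import numpy as np

# Example 1: staff intercepts (upper minus lower wire) and the known distances
s = np.array([1.433 - 1.133, 1.710 - 1.160, 2.352 - 1.452])   # 0.300, 0.550, 0.900
D = np.array([30.000, 55.000, 90.000])

A = np.column_stack([s, np.ones_like(s)])   # unknowns: multiplicative constant, additive constant
P = np.eye(3)                               # observations of equal weight

x = np.linalg.solve(A.T @ P @ A, A.T @ P @ D)
print(x)                                    # ~ [100, 0]
```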

Example 2. Equation (4) is used in the following example involving three angles, measured (in degrees-minutes-seconds) as follows: α = 39-14-15.3, β = 31-15-26.4, γ = 42-18-18.4, α + β = 70-29-45.2, β + γ = 73-33-48.3. The objective is to find the most probable values of α, β and γ. The matrices are formed by adopting the so-called method of differences, in which the simultaneous equations are formulated in terms of the most probable errors and not the quantities themselves. Each entry of b is the observed value minus the value computed from the individually measured angles, e.g., 70-29-45.2 − (39-14-15.3 + 31-15-26.4) = 3.5″:

$$A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix}, \qquad P = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}, \qquad b = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 3.5 \\ 3.5 \end{bmatrix} \;\Rightarrow\; x = \left(A^T P A\right)^{-1} A^T P\, b = \begin{bmatrix} 0.875 \\ 1.75 \\ 0.875 \end{bmatrix}.$$

where x is the vector of the most probable errors (in seconds of arc) of the three angles. The most probable values of the angles are now obtained by adding the most probable errors to the observations as follows: $\hat\alpha = \alpha + e_{r1} = 39\text{-}14\text{-}15.3 + 0.875 = 39\text{-}14\text{-}16.18$, etc.

The above problem can also be solved by first forming the normal equation for each unknown quantity: multiply each observation equation by the product of the algebraic coefficient of that unknown quantity in that equation and the weight of the observation, and add the results. For instance, the normal equation for α is formed from:

$$e_{r1} = 0, \qquad e_{r1} + e_{r2} = 3.5 \;\Rightarrow\; 2e_{r1} + e_{r2} = 3.5.$$

Similarly, for $e_{r2}$ and $e_{r3}$ the normal equations are $e_{r1} + 3e_{r2} + e_{r3} = 7.0$ and $e_{r2} + 2e_{r3} = 3.5$, respectively. Solve the three normal equations simultaneously to obtain the least-squares estimates of the most probable errors and hence the most probable values $\hat\alpha$, $\hat\beta$ and $\hat\gamma$.
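A short numerical check of Example 2 (assuming NumPy; the matrices are exactly those given above):

```python
import numpy as np

# Example 2 by the method of differences: unknowns are the corrections (seconds of arc)
A = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1],
              [1, 1, 0],
              [0, 1, 1]], dtype=float)
b = np.array([0.0, 0.0, 0.0, 3.5, 3.5])
P = np.eye(5)

N = A.T @ P @ A                    # normal-equation matrix: [[2,1,0],[1,3,1],[0,1,2]]
x = np.linalg.solve(N, A.T @ P @ b)
print(x)                           # [0.875, 1.75, 0.875] -> e.g. alpha_hat = 39-14-15.3 + 0.875"
```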


Assignment: Four stations, A, B, C and D, were established (Figure 5) consecutively in that order on a straight line. In a four-station calibration exercise, the observed distances AB, BC, CD, AC, BD and AD were 50.000 m, 50.070 m, 50.050 m, 100.090 m, 100.010 m and 150.080 m, respectively. All measurements are of equal weight. Find the zero error.

Figure 5. Four-Station EDM Calibration

Hint: Considering (a) AB, AC, AD, BC, BD and CD in that sequence and (b) the coordinates of B, C and D and the zero error as the unknowns, the coefficient matrix becomes

$$A = \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ -1 & 1 & 0 & 1 \\ -1 & 0 & 1 & 1 \\ 0 & -1 & 1 & 1 \end{bmatrix}$$

Observation Equations Accompanied by Condition Equations

One approach is to eliminate one or more unknowns using the condition (or constraint) equations. An alternative approach, called the method of correlates, is illustrated in the following.

Consider four angles, $\theta_i$, measured to close the horizon; the corrected angles must satisfy $\sum_i (\theta_i + e_{ri}) = 360°$, i.e., $\sum_i e_{ri} = \Delta$, where Δ is the angle of closure. The objective function $\varphi = \sum_i w_i e_{ri}^2$ is minimized by setting $\delta\varphi = 0 \Rightarrow \sum_i w_i e_{ri}\,\delta e_{ri} = 0$. To enforce the condition equation, its differential form, $\sum_i \delta e_{ri} = 0$, is multiplied by a correlative, λ, and added to the equation above, giving $e_{ri} = \lambda / w_i$. Back-substitution of this result into the condition equation and solving for the correlative gives $\lambda = \Delta \big/ \sum_i (1/w_i)$.
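A minimal sketch of this closed-form result (assuming NumPy; the weights and the 8-arc-second closure are invented for illustration):

```python
import numpy as np

# Closing the horizon by the method of correlates
w = np.array([1.0, 2.0, 2.0, 4.0])   # weights of the four measured angles (illustrative)
delta = 8.0                          # angle of closure, arc-seconds (illustrative)

lam = delta / np.sum(1.0 / w)        # correlative: lambda = delta / sum(1/w_i)
e_r = lam / w                        # corrections: e_ri = lambda / w_i
print(e_r, e_r.sum())                # corrections are inversely proportional to the weights
                                     # and sum to delta = 8.0
```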


Example 3. Following are the observations from a differential leveling survey:

Segment   Elevation Difference (m)   Distance (m)
AB        8.272                      187
BC        7.654                      274
CD        −4.267                     54
DA        −11.750                    98

Here, a positive difference of elevation indicates that the second station of a segment is higher than the first. Using the method of correlates, find the most probable elevations of B, C and D if the elevation of A is 67.432 m a.m.s.l.

Solution. Misclosure = 8.272 + 7.654 − 4.267 − 11.750 = −0.091 m. Denoting the most probable errors in AB, BC, CD and DA by $e_{rAB}$, $e_{rBC}$, $e_{rCD}$ and $e_{rDA}$, respectively, we can write the following constraint equation:

$$e_{rAB} + e_{rBC} + e_{rCD} + e_{rDA} = 0.091 \text{ m} \qquad \text{(A)}$$

Differentiating:

$$\partial e_{rAB} + \partial e_{rBC} + \partial e_{rCD} + \partial e_{rDA} = 0 \qquad \text{(B)}$$

Since the weights of the four observations are inversely proportional to the segment lengths, the objective function can be expressed as

$$\varphi = \frac{e_{rAB}^2}{187} + \frac{e_{rBC}^2}{274} + \frac{e_{rCD}^2}{54} + \frac{e_{rDA}^2}{98}$$

Hence

$$d\varphi = \frac{2\,e_{rAB}}{187}\,\partial e_{rAB} + \frac{2\,e_{rBC}}{274}\,\partial e_{rBC} + \frac{2\,e_{rCD}}{54}\,\partial e_{rCD} + \frac{2\,e_{rDA}}{98}\,\partial e_{rDA} = 0 \qquad \text{(C)}$$

Multiplying (B) by λ and subtracting the result from (C):

$$\left(\frac{2\,e_{rAB}}{187} - \lambda\right)\partial e_{rAB} + \left(\frac{2\,e_{rBC}}{274} - \lambda\right)\partial e_{rBC} + \left(\frac{2\,e_{rCD}}{54} - \lambda\right)\partial e_{rCD} + \left(\frac{2\,e_{rDA}}{98} - \lambda\right)\partial e_{rDA} = 0 \qquad \text{(D)}$$

Hence,

$$\frac{2\,e_{rAB}}{187} - \lambda = 0, \quad \frac{2\,e_{rBC}}{274} - \lambda = 0, \quad \frac{2\,e_{rCD}}{54} - \lambda = 0 \quad \text{and} \quad \frac{2\,e_{rDA}}{98} - \lambda = 0 \qquad \text{(E)}$$

Substituting these results into (A), $\lambda \times \left(\frac{187}{2} + \frac{274}{2} + \frac{54}{2} + \frac{98}{2}\right) = 0.091$, from which $\lambda = 2.969 \times 10^{-4}$ m. Substituting λ into (E): $e_{rAB} = 0.028$ m, $e_{rBC} = 0.041$ m and $e_{rCD} = 0.008$ m. The most probable elevation difference between A and B is thus 8.300 m. Hence, the most probable elevation of B is 75.732 m. Similarly, the most probable elevations of C and D are 83.427 m and 79.168 m, respectively.
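The whole of Example 3 can be reproduced in a few lines (a sketch assuming NumPy; the corrections are rounded to the nearest millimetre, as in the solution above, before the elevations are accumulated):

```python
import numpy as np

# Example 3: levelling loop A-B-C-D-A adjusted by the method of correlates
dh = np.array([8.272, 7.654, -4.267, -11.750])   # observed elevation differences (m)
L = np.array([187.0, 274.0, 54.0, 98.0])         # segment lengths (m); weights w_i = 1/L_i

misclosure = -dh.sum()                           # +0.091 m to be distributed around the loop
lam = misclosure / np.sum(L / 2.0)               # correlative, ~2.969e-4, from (A) and (E)
e_r = np.round(lam * L / 2.0, 3)                 # corrections e_ri = lambda * L_i / 2, to the mm
print(e_r)                                       # [0.028, 0.041, 0.008, 0.015]

elev_A = 67.432
elevations_BCD = elev_A + np.cumsum(dh + e_r)[:3]
print(elevations_BCD)                            # ~ [75.732, 83.427, 79.168]
```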