Vol.13 No.1 JOURNAL OF ELECTRONICS Jan. 1996
CAMERA LOCATION DETERMINATION BASED ON THE PERSPECTIVE PROJECTION GEOMETRY OF THREE POINTS
Cao Jun
(Department of Computer Science & Technology, Tsinghua University, Beijing 100084)
He Zhenya
(Department of Radio Engineering, Southeast University, Nanjing 210018)
Abstract  An approach to camera location determination based on the perspective projection
geometry of three points is proposed, which is simple and straightforward. No rigid restrictions
are placed on the original positional and orientational relationships between the camera and the
mark. Experimental results verify the feasibility of the proposed approach.
Key words  Image processing; Camera location; Perspective projection
I. Introduction
Determining the location of a camera from image-to-space primitive correspondences is
a major research area in computer vision. It can be applied to automatic assembly, aircraft
location, autonomous land vehicle navigation and so on. There exist many techniques for
solving this problem [1-9]. In those approaches, various types of standard marks are used
as the calibration target to determine the camera location under certain initial conditions.
Fukui [1] used a diamond shaped mark placed on the wall to determine the camera location.
In his method, the lens center of the camera must be at the same height as the mark center,
and the optical axis of the camera must pass through the mark center. Courtney [2] also used
a diamond shaped mark similar to Fukui's. His method did not require the lens center of the
camera to be at the same height as the mark center, but assumed the height of the camera was
known. Magee [3] used a spherical mark which contained two circles perpendicular to each
other. His method also required the optical axis of the camera to pass through the center of the
spherical mark. Chou [4] used house corners as the calibration target. The height of the camera
was also assumed to be known in his method. Tsai [5] proposed a nonlinear optimization
based camera location method in which 60 coplanar points were used as control points. The
marks used in Refs.[7-9] contained parallel lines, and the camera location was determined
by vanishing points. The major objections to the previous methods are that either the shape
used as the standard mark is not commonly found in real environments or that restrictions
are placed on the original positional and/or orientational relationships between the camera
and the mark.
In this paper, we consider a camera location scheme using three collinear points as
control points. The determination of the camera location is based on the simple and well
understood geometric relationship of the perspective projection. No prerequisite conditions
on the position and orientation of the camera relative to the mark need to be satisfied. The
proposed method can be used either on its own or as an independent module in the process
of camera calibration.
II. The Camera Model
Fig.1 illustrates the perspective viewing model of three control points. The mark
coordinate system is denoted as O-XYZ and the camera coordinate system as O'-X'Y'Z'.
Let the origin of the camera coordinate system be the lens center, whose coordinates in the
mark system are (Xc, Yc, Zc). Let the Z'-axis correspond to the optical axis of the camera.
The image plane is perpendicular to the Z'-axis, and its intersection point with the Z'-axis,
denoted as C, is the center of the image plane. The distance from C to O' equals the
focal length of the camera. The coordinates of the three collinear points Pi, i = 1, 2, 3, are
given by (Xi, Yi, Zi), and P2 is the midpoint of line P1P3. Their projections on the image
plane are denoted as P'i, i = 1, 2, 3.
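Under this model, a space point expressed in camera coordinates maps to the image plane by the usual ideal pinhole relations. A minimal sketch (the function name is ours, not the paper's):

```python
def project(point_cam, f):
    """Ideal pinhole projection of a point (X', Y', Z'), given in camera
    coordinates, onto the image plane at distance f from the lens center O'."""
    x, y, z = point_cam
    return (f * x / z, f * y / z)
```

For example, a point at (1, 2, 4) with f = 2 projects to (0.5, 1.0).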
Fig.1 The perspective viewing model
III. Calculation of the Projection Angle
In order to find the angle ∠P1O'A between P1 and an arbitrary point A on line P1P3,
with the focal point of the camera as the vertex, we first drop a perpendicular in the image
from the center of the image, C, to line P'1P'3, forming line CP; we then draw line O'C and line
O'P from the lens center to C and P respectively, where the length of O'C equals the
focal length of the camera, f. According to the theorem of three perpendiculars, line O'P is
perpendicular to line P'1P'3. The geometry for the calculation of ∠P1O'A is shown in Fig.2.
Fig.2 Calculation of the projection angle
The length of O'P is given by

    O'P = \sqrt{(CP)^2 + (O'C)^2} = \sqrt{(CP)^2 + f^2}    (1)

where CP can be computed by the point-to-line distance formula.
∠P1O'P and ∠PO'A can be obtained as follows

    \angle P_1 O'P = \arctan(P_1 P / O'P)    (2)

    \angle P O'A = \arctan(PA / O'P)    (3)

where the distances P1P and PA are measured in the image plane. The projection angle
∠P1O'A can then be calculated with the following algorithm:

    if P1A >= P1P then
        if PA >= P1A then
            ∠P1O'A = ∠PO'A - ∠P1O'P
        else
            ∠P1O'A = ∠PO'A + ∠P1O'P
    else
        if PA > P1P then
            ∠P1O'A = ∠PO'A - ∠P1O'P
        else
            ∠P1O'A = ∠P1O'P - ∠PO'A
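The angle computation of Eqs.(1)-(3) and the accompanying sign algorithm can be sketched in Python. The function and point names are ours, and image coordinates and f are assumed to be expressed in the same units:

```python
import math

def foot_of_perpendicular(c, p1, p3):
    """Foot P of the perpendicular dropped from the image center c
    onto the image line through p1 and p3."""
    dx, dy = p3[0] - p1[0], p3[1] - p1[1]
    t = ((c[0] - p1[0]) * dx + (c[1] - p1[1]) * dy) / (dx * dx + dy * dy)
    return (p1[0] + t * dx, p1[1] + t * dy)

def projection_angle(p1, a, c, p3, f):
    """Projection angle P1-O'-A for a point a on the image line through
    p1 and p3, following Eqs.(1)-(3) and the sign algorithm above."""
    dist = lambda u, v: math.hypot(u[0] - v[0], u[1] - v[1])
    p = foot_of_perpendicular(c, p1, p3)
    op = math.hypot(dist(c, p), f)          # Eq.(1): O'P = sqrt(CP^2 + f^2)
    ang_p1 = math.atan(dist(p1, p) / op)    # Eq.(2)
    ang_a = math.atan(dist(a, p) / op)      # Eq.(3)
    # Add the two angles when p1 and a lie on opposite sides of the
    # foot P; otherwise take the (positive) difference.
    if dist(p1, a) >= dist(p1, p):
        if dist(a, p) >= dist(p1, a):
            return ang_a - ang_p1
        return ang_a + ang_p1
    if dist(a, p) > dist(p1, p):
        return ang_a - ang_p1
    return ang_p1 - ang_a
```

As a sanity check, for the image points (-1, 1) and (1, 1) with C at the origin and f = 1, the function returns the visual angle arccos(1/3) between the two corresponding viewing rays.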
IV. Calculation of the Distance from Control Points to the Lens Center
On the plane formed by P1P3 and O', draw a circle passing through P1, P3 and O', and draw a
line perpendicular to P1P3 passing through P2 which intersects the circle at N and I. Since
P2 is the midpoint of the chord P1P3, NI is a diameter of the circle. Draw line NO' from N
to O', which intersects P1P3 at M. Fig.3 shows the geometry of the camera location problem.
Fig.3 The geometry of the camera location problem
In Fig.3, P1P3 is known and P1P2 = P1P3/2. ∠P1O'P3, ∠P1O'P2 and ∠P2O'P3 can be
calculated using the algorithm just described.
In order to determine the length MP2, we consider the cross-ratio, the basic
perspective projection invariant [10]. The cross-ratio of the points P1, M, P2, P3 on the same
line, R_L(P1, M, P2, P3), is defined by

    R_L(P_1, M, P_2, P_3) = \frac{P_1P_2 \cdot MP_3}{MP_2 \cdot P_1P_3}    (4)

The cross-ratio of the lines P1O', MO', P2O', P3O' concurrent at O',
R_C(∠P1O'M, ∠MO'P2, ∠P2O'P3), is defined by

    R_C(\angle P_1O'M, \angle MO'P_2, \angle P_2O'P_3)
        = \frac{\sin(\angle P_1O'P_2)\,\sin(\angle MO'P_3)}{\sin(\angle MO'P_2)\,\sin(\angle P_1O'P_3)}    (5)

The cross-ratio is a perspective projection invariant, namely its value remains
unchanged under any number of perspective transformations.
According to the cross-ratio theorem, we have the following equation

    R_L(P_1, M, P_2, P_3) = R_C(\angle P_1O'M, \angle MO'P_2, \angle P_2O'P_3)    (6)

Substituting MP_3 = MP_2 + P_2P_3 into Eq.(6) and simplifying the resulting equation,
MP_2 is given by

    MP_2 = \frac{\sin(\angle MO'P_2)\,\sin(\angle P_1O'P_3) \cdot P_1P_2 \cdot P_2P_3}
                {\sin(\angle P_1O'P_2)\,\sin(\angle MO'P_3) \cdot P_1P_3
                 - \sin(\angle MO'P_2)\,\sin(\angle P_1O'P_3) \cdot P_1P_2}    (7)

where

    \angle MO'P_2 = \angle MO'P_3 - \angle P_2O'P_3
                  = \angle NIP_3 - \angle P_2O'P_3
                  = \angle P_1O'P_3/2 - \angle P_2O'P_3    (8)
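Eqs.(7) and (8) translate directly into code (a sketch with names of our own choosing; angles in radians):

```python
import math

def mp2_from_angles(a12, a23, a13, p1p2, p2p3):
    """MP2 from Eq.(7). a12 = angle P1O'P2, a23 = angle P2O'P3,
    a13 = angle P1O'P3; p1p2 and p2p3 are the known mark distances.
    The auxiliary angles follow Eq.(8): angle MO'P3 = a13/2 and
    angle MO'P2 = a13/2 - a23."""
    am3 = a13 / 2.0
    am2 = am3 - a23
    p1p3 = p1p2 + p2p3
    num = math.sin(am2) * math.sin(a13) * p1p2 * p2p3
    den = math.sin(a12) * math.sin(am3) * p1p3 - math.sin(am2) * math.sin(a13) * p1p2
    return num / den
```

By construction, the returned value satisfies the cross-ratio equality of Eq.(6), which is how the closed form was derived.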
The length NP2 can be computed as follows

    NP_2 = NI - P_2I = \frac{P_3I}{\cos(\angle NIP_3)} - P_3I \cos(\angle NIP_3)    (9)

where

    P_3I = \frac{P_2P_3}{\sin(\angle NIP_3)}    (10)

MP2 and NP2 can be used to compute the angle ∠O'NI

    \angle O'NI = \arctan\!\left(\frac{MP_2}{NP_2}\right)    (11)

Then, we have

    K = NO' \sin(\angle O'NI)    (12)

    J = NO' \cos(\angle O'NI) - NP_2    (13)

where NO' can be computed by

    NO' = NI \cos(\angle O'NI)    (14)

So the distances from the control points Pi, i = 1, 2, 3, to O', the lens center of the camera, can
be calculated by

    P_1O' = \sqrt{(K - P_1P_2)^2 + J^2}    (15)

    P_2O' = \sqrt{K^2 + J^2}    (16)

    P_3O' = \sqrt{(K + P_2P_3)^2 + J^2}    (17)
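The chain of Eqs.(9)-(17) can be sketched as follows, assuming (as in Fig.3) that M falls between P1 and P2, i.e. the lens center lies toward the P1 side; the names are ours:

```python
import math

def control_point_distances(a13, a23, mp2, p1p2, p2p3):
    """Distances P1O', P2O', P3O' from Eqs.(9)-(17).
    a13 = angle P1O'P3, a23 = angle P2O'P3 (radians); mp2 from Eq.(7);
    p1p2 and p2p3 are the known mark distances."""
    nip3 = a13 / 2.0                    # inscribed angle NIP3 = P1O'P3 / 2
    p3i = p2p3 / math.sin(nip3)         # Eq.(10)
    ni = p3i / math.cos(nip3)           # diameter NI of the circle
    np2 = ni - p3i * math.cos(nip3)     # Eq.(9): NP2 = NI - P2I
    oni = math.atan(mp2 / np2)          # Eq.(11)
    no = ni * math.cos(oni)             # Eq.(14): angle NO'I is a right angle
    k = no * math.sin(oni)              # Eq.(12)
    j = no * math.cos(oni) - np2        # Eq.(13)
    return (math.hypot(k - p1p2, j),    # Eq.(15)
            math.hypot(k, j),           # Eq.(16)
            math.hypot(k + p2p3, j))    # Eq.(17)
```

In a synthetic plane configuration, with the angles and MP2 derived directly from coordinates, the returned values agree with the true Euclidean distances from O' to the three points.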
V. Determination of the Camera Location
The task of camera location determination is to solve for (Xc, Yc, Zc), given a
number of control points and their projections in an image. The distance from Pi to O' in
3-D space satisfies

    (P_iO')^2 = (X_C - X_i)^2 + (Y_C - Y_i)^2 + (Z_C - Z_i)^2, \quad i = 1, 2, 3    (18)

Substituting Eqs.(15), (16) and (17) into Eq.(18) and simplifying the resulting equations,
X_C, Y_C and Z_C can be solved as

    X_C = \frac{a(Y_1 - Y_3) - b(Y_1 - Y_2)}{c}    (19)

    Y_C = \frac{-a(X_1 - X_3) + b(X_1 - X_2)}{c}    (20)

    Z_C = \pm\sqrt{(P_1O')^2 - (X_C - X_1)^2 - (Y_C - Y_1)^2} + Z_1    (21)

where

    a = \frac{(P_2O')^2 - (P_1O')^2 - (X_2^2 - X_1^2) - (Y_2^2 - Y_1^2)}{2}    (22)

    b = \frac{(P_3O')^2 - (P_1O')^2 - (X_3^2 - X_1^2) - (Y_3^2 - Y_1^2)}{2}    (23)

    c = (X_1 - X_2)(Y_1 - Y_3) - (X_1 - X_3)(Y_1 - Y_2)    (24)

If the camera is assumed to be located above the control points, only the positive sign
is chosen for Zc.
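A direct transcription of Eqs.(18)-(24), as a sketch with names of our own choosing. Note that, as printed, Eqs.(19)-(24) use only the X and Y coordinates of the control points, so they presuppose a common Z value and a nonzero denominator c, i.e. control points whose XY projections are not collinear; the points in the usage example below are hypothetical ones chosen to satisfy these conditions.

```python
import math

def camera_location(points, dists, z_sign=1.0):
    """Lens-center coordinates (Xc, Yc, Zc) from Eqs.(18)-(24).
    points: three control points (Xi, Yi, Zi) sharing a common Z value;
    dists: the distances (P1O', P2O', P3O') from Section IV;
    z_sign: +1 if the camera lies above the control points (Eq.(21))."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = points
    d1, d2, d3 = dists
    a = (d2 * d2 - d1 * d1 - (x2 * x2 - x1 * x1) - (y2 * y2 - y1 * y1)) / 2.0  # Eq.(22)
    b = (d3 * d3 - d1 * d1 - (x3 * x3 - x1 * x1) - (y3 * y3 - y1 * y1)) / 2.0  # Eq.(23)
    c = (x1 - x2) * (y1 - y3) - (x1 - x3) * (y1 - y2)                          # Eq.(24)
    xc = (a * (y1 - y3) - b * (y1 - y2)) / c                                   # Eq.(19)
    yc = (-a * (x1 - x3) + b * (x1 - x2)) / c                                  # Eq.(20)
    zc = z_sign * math.sqrt(d1 * d1 - (xc - x1) ** 2 - (yc - y1) ** 2) + z1    # Eq.(21)
    return xc, yc, zc

# Usage: distances measured from a known camera position are recovered exactly.
pts = [(0.0, 0.0, 0.0), (3.0, 1.0, 0.0), (1.0, 4.0, 0.0)]
cam = (2.0, 3.0, 5.0)
d = tuple(math.sqrt(sum((u - v) ** 2 for u, v in zip(cam, q))) for q in pts)
xc, yc, zc = camera_location(pts, d)
```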
VI. Experimental Results and Conclusion
Experiments have been carried out to show the performance of the proposed camera
location method. In the experiments, the vision system consists of a TM-560 CCD camera
mounted on a six degree-of-freedom robot hand. The focal length of the camera, f, is 16
mm. The pixel coordinate ranges along the horizontal and vertical directions on the image
plane are [0, 512] and [0, 582], respectively. The image plane center in pixels is (255.5,
290.5). The mark used in the experiments is a white board containing three black dots
15 cm apart.
The experimental results are shown in Tab.1. The maximal error rate is defined by

    \max\left\{ \frac{|X_C - X'_C|}{|X_C|},\; \frac{|Y_C - Y'_C|}{|Y_C|},\; \frac{|Z_C - Z'_C|}{|Z_C|} \right\}    (25)

where (X_C, Y_C, Z_C) are the real values of the camera location relative to the origin of the mark
system, and (X'_C, Y'_C, Z'_C) are the computational results obtained using the proposed camera
location method. The maximal error rate is less than 2%.
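Eq.(25) is straightforward to evaluate (the helper name is ours); applied to the first test point of Tab.1 it reproduces the listed 0.77%:

```python
def maximal_error_rate(real, computed):
    """Eq.(25): the largest relative coordinate error, as a fraction."""
    return max(abs(r - c) / abs(r) for r, c in zip(real, computed))
```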
[Figure: location error (%) plotted against viewing angle (degree), 0 to 80 degrees]
Fig.4 The impact of the viewing angle on the location precision
The impact of the viewing angle on the location precision is given in Fig.4. The viewing
angle is defined as the angle between the optical axis of the camera and the normal of P1P3.
The real values of Xc, Yc, Zc are 200 cm. It is obvious that the location error becomes larger
when the viewing angle approaches 90 degrees.
Tab.1 Experimental results

    Test     Real location (cm)        Computational results (cm)     Maximal
    point    Xc      Yc      Zc        X'c       Y'c       Z'c        error rate
    1        26      45      210       25.80     45.12     209.63     0.77%
    2        2       -10     160       2.00      -10.02    160.06     0.38%
    3        -90     -30     150       -88.40    -30.41    148.19     1.78%
    4        -72     12      250       -72.98    12.02     251.2      1.36%
    5        90      -60     180       91.80     -59.73    182.15     2.00%
    6        -10     50      120       -9.91     50.41     120.83     0.90%
    7        100     -20     80        98.19     -20.21    79.82      1.81%
    8        50      50      60        50.11     50.29     59.51      0.82%
References
[1] I. Fukui, Pattern Recognition, 14(1981)2, 101-109.
[2] J. W. Courtney, M. J. Magee, J. K. Aggarwal, Pattern Recognition, 17(1984)8, 585-592.
[3] H. L. Chou, W. H. Tsai, Pattern Recognition, 19(1986)6, 439-451.
[4] M. R. Kabuka, A. E. Arenas, IEEE J. of RA, RA-3(1987)4, 505-516.
[5] Z. Chen, D. C. Tseng, J. Y. Lin, Pattern Recognition, 22(1989)2, 173-187.
[6] D. H. Kite, Pattern Recognition, 23(1990)8, 818-831.
[7] T. Echigo, Machine Vision and Applications, 3(1990)3, 159-167.
[8] W. Chen, B. C. Jiang, Pattern Recognition, 24(1991)1, 57-67.
[9] K. Kanatani, IEICE Trans., E-74(1991)10, 3369-3377.
[10] M. A. Penna, R. R. Patterson, Projective Geometry and Its Applications to Computer Graphics, Englewood Cliffs, NJ, Prentice-Hall, 1986, 233-280.
[11] D. Forsyth, IEEE Trans. on PAMI, PAMI-13(1991)10, 971-991.
[12] E. B. Barrett, P. M. Payton, CVGIP: Image Understanding, 53(1991)1, 46-65.