


Laboratory of Intelligent Machines and Systems Electrical Engineering Faculty

Warsaw University of Technology

Camera Calibration

and Measurements

by Witold Czajewski1

May, 2010

Contents

1 Aim of the Exercise

2 Hardware and software

3 Documentation

3.1 Introduction

3.2 Description of the calibration parameters

3.2.1 Intrinsic parameters (camera model)

3.2.2 Extrinsic parameters

3.3 Step-by-step calibration instruction

3.3.1 Taking calibration images

3.3.2 Starting the toolbox

3.3.3 Reading the images

3.3.4 Extracting the grid corners

3.3.5 Main Calibration step

3.3.6 Recomputing corners and increasing the calibration precision

3.3.7 Visualizing distortions

4 Your exercise

1 This instruction is heavily based on the Camera Calibration Toolbox for Matlab tutorial by Jean-Yves

Bouguet, available at http://www.vision.caltech.edu/bouguetj/calib_doc


1 Aim of the Exercise

The purpose of this laboratory exercise is to familiarize the student with the issues regarding

camera calibration and parameter estimation as well as the use of a calibrated camera as a

measurement device.

2 Hardware and software

To do the exercise one will need a computer with a camera attached and Matlab with the additional

camera calibration toolbox and some minor scripts. All the hardware and software is prepared in

the laboratory.

3 Documentation

3.1 Introduction

Camera calibration is an important issue in computer vision especially for applications that

involve localization of objects. The problem of camera calibration is to compute the camera

extrinsic and intrinsic parameters. The extrinsic parameters of a camera indicate the position and

the orientation of the camera with respect to a selected coordinate system, and the intrinsic

parameters characterize the inherent properties of the camera optics, including the focal length,

the image center, the image scaling factor and the lens distortion coefficients. The number of

parameters to be evaluated depends on the camera model being utilized. The problem of finding

these parameters is, in general, a nonlinear problem (owing to lens distortion) and requires good

initial estimates and an iterative solution.

The techniques found in the literature for camera calibration can be broadly divided into three

types: linear methods, nonlinear methods and two-step techniques. Linear methods assume a

simple pinhole camera model and incorporate no distortion effects. The algorithm is non-iterative

and therefore very fast. The limitation in this case, however, is that camera distortion cannot be

incorporated and therefore lens distortion effects cannot be corrected. The problem of lens

distortion is significant in most off-the-shelf cameras. In non-linear techniques, first the

relationship between parameters is established and then an iterative solution is found by

minimizing some error term. However, for this type of an iterative solution, a good initial guess


is essential, otherwise the iterations may not converge to a solution. Two-step techniques involve

a direct solution of some camera parameters and an iterative solution for the other parameters.

The iterative solution is also used to reduce the errors in the direct solution. This is the most common

and current approach to the problem.

3.2 Description of the calibration parameters

After calibration, the list of parameters may be stored in the Matlab file Calib_Results.mat by

clicking on the Save button. The list of variables may be separated into two categories: intrinsic

parameters and extrinsic parameters.

3.2.1 Intrinsic parameters (camera model)

The intrinsic parameters of a camera include:

• Focal length: The focal length in pixels is stored in the 2x1 vector fc.

• Principal point: The principal point coordinates are stored in the 2x1 vector cc.

• Skew coefficient: The skew coefficient defining the angle between the x and y pixel axes

is stored in the scalar alpha_c.

• Distortions: The image distortion coefficients (radial and tangential distortions) are

stored in the 5x1 vector kc.

Definition of the intrinsic parameters:

Let P be a point in space of coordinate vector XXc = [Xc;Yc;Zc] in the camera reference frame.

Let us project now that point on the image plane according to the intrinsic parameters

(fc,cc,alpha_c,kc).

Let xn be the normalized (pinhole) image projection:

xn = [Xc/Zc ; Yc/Zc] = [x ; y]

Let r^2 = x^2 + y^2. After including lens distortion, the new normalized point coordinate xd is

defined as follows:

xd = (1 + kc(1)*r^2 + kc(2)*r^4 + kc(5)*r^6) * xn + dx


where dx is the tangential distortion vector:

dx = [ 2*kc(3)*x*y + kc(4)*(r^2 + 2*x^2)
       kc(3)*(r^2 + 2*y^2) + 2*kc(4)*x*y ]

Therefore, the 5-vector kc contains both radial and tangential distortion coefficients (observe that

the coefficient of the 6th-order radial distortion term is the fifth entry of the vector kc).

Once distortion is applied, the final pixel coordinates x_pixel = [xp;yp] of the projection of P on

the image plane are:

xp = fc(1) * ( xd(1) + alpha_c * xd(2) ) + cc(1)
yp = fc(2) * xd(2) + cc(2)

Therefore, the pixel coordinate vector x_pixel and the normalized (distorted) coordinate vector xd

are related to each other through the linear equation:

[xp ; yp ; 1] = KK * [xd(1) ; xd(2) ; 1]

where KK is known as the camera matrix, and defined as follows:

KK = [ fc(1)   alpha_c*fc(1)   cc(1)
       0       fc(2)           cc(2)
       0       0               1     ]

In Matlab, this matrix is stored in the variable KK after calibration. Observe that fc(1) and fc(2)

are the focal distance (which is in fact a unique value in mm) expressed in units of horizontal and

vertical pixels. Both components of the vector fc are usually very similar. The ratio fc(2)/fc(1),


often called "aspect ratio", is different from 1 if the pixels in the CCD array are not square.

Therefore, the camera model naturally handles non-square pixels. In addition, the coefficient

alpha_c encodes the angle between the x and y sensor axes. Consequently, pixels are even

allowed to be non-rectangular. Some authors refer to that type of model as "affine distortion"

model. Usually, however, alpha_c is equal to zero.
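Putting the pieces of the camera model together (normalized projection, radial and tangential distortion, camera matrix KK), the mapping from a 3D point to a pixel can be sketched in a few lines. The lab itself works in Matlab; the following is a Python/NumPy sketch with made-up intrinsic values, purely for illustration:

```python
import numpy as np

# Made-up intrinsic parameters of the kind the toolbox estimates.
fc = np.array([660.0, 655.0])        # focal length in horizontal/vertical pixels
cc = np.array([320.0, 240.0])        # principal point
alpha_c = 0.0                        # skew coefficient
kc = np.array([-0.25, 0.12, 0.001, -0.0005, 0.0])  # distortion vector

def project(XXc):
    """Project XXc = [Xc, Yc, Zc] (camera frame) to pixel coordinates."""
    x, y = XXc[0] / XXc[2], XXc[1] / XXc[2]       # normalized pinhole projection
    r2 = x * x + y * y
    radial = 1 + kc[0] * r2 + kc[1] * r2**2 + kc[4] * r2**3
    dx = np.array([2 * kc[2] * x * y + kc[3] * (r2 + 2 * x * x),
                   kc[2] * (r2 + 2 * y * y) + 2 * kc[3] * x * y])
    xd = radial * np.array([x, y]) + dx           # distorted normalized point
    KK = np.array([[fc[0], alpha_c * fc[0], cc[0]],
                   [0.0,   fc[1],          cc[1]],
                   [0.0,   0.0,            1.0]])
    return (KK @ np.array([xd[0], xd[1], 1.0]))[:2]

# A point on the optical axis is undistorted (r^2 = 0), so it lands on cc:
print(project(np.array([0.0, 0.0, 1.0])))
```

Off-axis points are pulled inwards by the negative first entry of kc, which is the typical barrel distortion of off-the-shelf lenses.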

3.2.2 Extrinsic parameters

The extrinsic parameters of a camera in relation to observed objects (e.g. calibration grid in

various images) include:

• Rotations: A set of 3x3 rotation matrices Rc_i, where i denotes the image number.

• Translations: A set of 3x1 vectors Tc_i, where i denotes the image number.

Definition of the extrinsic parameters:

Consider the calibration grid #i (attached to the ith calibration image), and concentrate on the

reference frame attached to that grid. Without loss of generality, take i = 1. The following

figure shows the reference frame (O,X,Y,Z) attached to that calibration grid.


Let P be a point in space of coordinate vector XX = [X;Y;Z] in the grid reference frame (reference

frame shown on the previous figure). Let XXc = [Xc;Yc;Zc] be the coordinate vector of P in the

camera reference frame. Then XX and XXc are related to each other through the following rigid

motion equation:

XXc = Rc_1 * XX + Tc_1

In particular, the translation vector Tc_1 is the coordinate vector of the origin of the grid pattern

(O) in the camera reference frame, and the third column of the matrix Rc_1 is the surface normal

vector of the plane containing the planar grid in the camera reference frame.
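The rigid motion equation is easy to check numerically. A Python/NumPy sketch with made-up extrinsics (the lab itself uses Matlab), confirming that Tc_1 is the grid origin expressed in the camera frame:

```python
import numpy as np

# Made-up extrinsics for image #1 (a small rotation about the optical axis).
theta = np.deg2rad(10.0)
Rc_1 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                 [np.sin(theta),  np.cos(theta), 0.0],
                 [0.0,            0.0,           1.0]])
Tc_1 = np.array([0.05, -0.02, 0.60])    # metres

# Rigid motion equation: a grid point expressed in the camera frame.
XX = np.array([0.10, 0.10, 0.0])
XXc = Rc_1 @ XX + Tc_1
print(XXc)

# The grid origin (XX = 0) maps to Tc_1, and the third column of Rc_1
# is the normal of the grid plane, as stated above.
assert np.allclose(Rc_1 @ np.zeros(3) + Tc_1, Tc_1)
assert np.allclose(Rc_1[:, 2], [0.0, 0.0, 1.0])
```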

3.3 Step-by-step calibration instruction

Follow the instruction below to calibrate your camera. Save the initial and the final calibration

results and compare them. Observe the distortions introduced by the camera lens and discuss

them.


3.3.1 Taking calibration images

In order to perform camera calibration a number of calibration images must be taken. Use the

provided calibration target and take around 20 different images as shown below. Use a single

basename and ascending numbers for your images, for example: image_01, image_02… Use

either bmp or tiff image format.

3.3.2 Starting the toolbox

Start the calibration toolbox by typing calib in the directory where your images are located.

3.3.3 Reading the images

Click on the Image names button in the Camera calibration tool window. Enter the basename

of the calibration images and the image format (file extension) symbol as prompted.

All the images are then loaded in memory (through the command Read images that is

automatically executed). The complete set of images is also shown in thumbnail format (these

images can always be regenerated by running mosaic):

3.3.4 Extracting the grid corners

Click on the Extract grid corners button in the Camera calibration tool window.

Press "enter" (with an empty argument) to select all the images. Then, select the default window


size of the corner finder: wintx=winty=5 by pressing "enter" with empty arguments to the wintx

and winty questions. This leads to an effective window of size 11x11 pixels.

The corner extraction engine includes an automatic mechanism for counting the number of

squares in the grid. This tool is especially convenient when working with a large number of

images since the user does not have to manually enter the number of squares in both x and y

directions of the pattern. On some very rare occasions however, this code may not predict the

right number of squares. This would typically happen when calibrating lenses with extreme

distortions. At this point in the corner extraction procedure, the program gives the option to the

user to disable the automatic square counting code. In that special mode, the user would be

prompted for the square count for every image.

Click on the four extreme corners on the rectangular checkerboard pattern. The clicking locations

are shown on the four following figures (WARNING: try to click accurately on the four corners,

at most 5 pixels away from the corners. Otherwise some of the corners might be missed by the

detector).

Ordering rule for clicking: The first clicked point is selected to be associated with the origin point

of the reference frame attached to the grid. The other three points of the rectangular grid can be

clicked in any order. This first-click rule is especially important if you need to calibrate

externally multiple cameras (i.e. compute the relative positions of several cameras in space).

When dealing with multiple cameras, the same grid pattern reference frame needs to be

consistently selected for the different camera images (i.e. grid points need to correspond across

the different camera views).


The boundary of the calibration grid is then shown in Figure below:

Enter the sizes dX and dY in X and Y of each square in the grid. The program automatically

counts the number of squares in both dimensions, and shows the predicted grid corners:


If the predicted corners are close to the real image corners, then the following step may be

skipped (if there is not much image distortion). This is the case in the present image: the

predicted corners are close enough to the real image corners. Therefore, it is not necessary to

"help" the software to detect the image corners by entering a guess for radial distortion

coefficient. Press "enter", and the corners are automatically extracted using those positions as

initial guess. The image corners are then automatically extracted, and displayed:

The corners are extracted to an accuracy of about 0.1 pixel. Follow the same procedure for the

remaining images. Observe that the square dimensions dX, dY are always kept to their original


values. Sometimes, the predicted corners are not quite close enough to the real image corners to

allow for an effective corner extraction (see the image below). In that case, it is necessary to

refine the predicted corners by entering a guess for lens distortion coefficient.

Observe that some of the predicted corners within the grid are far enough from the real grid

corners to result in wrong extractions. The cause is image distortion. In order to help the system

make a better guess of the corner locations, the user is free to manually input a guess for the first

order lens distortion coefficient kc (to be precise, it is the first entry of the full distortion

coefficient vector kc described above). In order to input a guess for the lens distortion coefficient,

enter a non-empty string to the question Need of an initial guess for distortion? (for example

enter: 1). Enter then a distortion coefficient of kc=-0.3 (in practice, this number is typically

between -1 and 1). If the new predicted corners are close enough to the real image corners (this is

the case here), input any non-empty string (such as 1) to the question Satisfied with distortion?.

The subpixel corner locations are then computed using the new predicted locations (with image

distortion) as initial guesses. If we had not been satisfied, we would have entered an empty-string

to the question Satisfied with distortion? (by directly pressing "enter"), and then tried a new

distortion coefficient kc. You may repeat this process as many times as you want until satisfied

with the prediction (side note: the values of distortion used at that stage are only used to help

corner extraction and will not affect the next main calibration step at all. In other words, these


values are neither used as final distortion coefficients, nor used as initial guesses for the true

distortion coefficients estimated through the calibration optimization stage).

After corner extraction, the Matlab data file calib_data.mat is automatically generated. This file

contains all the information gathered throughout the corner extraction stage (image coordinates,

corresponding 3D grid coordinates, grid sizes, ...). This file is mainly useful in case of emergency,

when, for example, Matlab is abruptly terminated before saving. Loading this file would prevent

you from having to click again on the images.

3.3.5 Main Calibration step

After corner extraction, click on the button Calibration of the Camera calibration tool to run

the main camera calibration procedure. Calibration is done in two steps: first initialization, and

then nonlinear optimization. The Calibration parameters are stored in a number of variables.

Notice that the skew coefficient alpha_c and the 6th order radial distortion coefficient (the last

entry of kc) have not been estimated (this is the default mode) and they are equal to zero.

Therefore, the angle between the x and y pixel axes is 90 degrees. In most practical situations,

this is a very good assumption. Click on Show Extrinsic in the Camera calibration tool. The

extrinsic parameters (relative positions of the grids with respect to the camera) are then shown in

the form of a 3D plot:


In this figure, the frame (Oc,Xc,Yc,Zc) is the camera reference frame. The red pyramid

corresponds to the effective field of view of the camera defined by the image plane. To switch

from a "camera-centered" view to a "world-centered" view, just click on the Switch to world-

centered view button located at the bottom-left corner of the figure.

In this new figure, every camera position and orientation is represented by a green pyramid.

3.3.6 Recomputing corners and increasing the calibration precision

Our camera has not been calibrated very precisely. The reason for that is that we have not done a

very careful job at extracting the corners on some highly distorted images (a better job could have

been done by using the predicted distortion option). Nevertheless, we can correct for that now by

recomputing the image corners on all images automatically. Here is the way it is going to be

done: press on the Recomp. corners button in the main Camera calibration tool and select once

again a corner finder window size of wintx = winty = 5 (the default values). To the question

Number(s) of image(s) to process ([] = all images) press "enter" with an empty argument to

recompute the corners on all the images. Enter then the mode of extraction: the automatic mode

(auto) uses the re-projected grid as initial guess locations for the corners; the manual mode lets the

user extract the corners manually (the traditional corner extraction method). In the present case,

the reprojected grid points are very close to the actual image corners. Therefore, we select the

automatic mode: press "enter" with an empty string. The corners on all images are then


recomputed. Run then another calibration optimization by clicking on Calibration. After

optimization, click on Save to save the calibration results (intrinsic and extrinsic) in the matlab

file Calib_Results.mat. Compare these calibration results with the previous results.

3.3.7 Visualizing distortions

In order to make a decision on the appropriate distortion model to use, it is sometimes very useful

to visualize the effect of distortions on the pixel image, and the importance of the radial

component versus the tangential component of distortion. For this purpose, run the script

visualize_distortions at the Matlab prompt (this function is not yet linked to any button in the

GUI window). The three following images are then produced. The first figure shows the impact

of the complete distortion model (radial + tangential) on each pixel of the image. Each arrow

represents the effective displacement of a pixel induced by the lens distortion. Observe that points

at the corners of the image are displaced by as much as 25 pixels. The second figure shows the

impact of the tangential component of distortion. On this plot, the maximum induced

displacement is 0.14 pixel (at the upper left corner of the image). Finally, the third figure shows

the impact of the radial component of distortion. This plot is very similar to the full distortion

plot, showing that the tangential component could very well be discarded in the complete distortion

model for this particular lens. On the three figures, the cross indicates the center of the image,

and the circle the location of the principal point.
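The arrows drawn by visualize_distortions show, for each pixel, the shift induced by the distortion model. A rough Python/NumPy re-implementation of that displacement (with assumed, illustrative intrinsics and the skew ignored; the lab itself uses Matlab):

```python
import numpy as np

# Assumed intrinsics (illustrative, not from a real calibration).
fc = np.array([660.0, 655.0])
cc = np.array([320.0, 240.0])
kc = np.array([-0.25, 0.12, 0.001, -0.0005, 0.0])

def distortion_shift(pixel):
    """Pixel displacement induced by lens distortion at an ideal pixel location."""
    x = (pixel[0] - cc[0]) / fc[0]       # back to normalized coordinates
    y = (pixel[1] - cc[1]) / fc[1]
    r2 = x * x + y * y
    radial = 1 + kc[0] * r2 + kc[1] * r2**2 + kc[4] * r2**3
    dx = np.array([2 * kc[2] * x * y + kc[3] * (r2 + 2 * x * x),
                   kc[2] * (r2 + 2 * y * y) + 2 * kc[3] * x * y])
    xd = radial * np.array([x, y]) + dx
    distorted = np.array([fc[0] * xd[0] + cc[0], fc[1] * xd[1] + cc[1]])
    return distorted - pixel

# No displacement at the principal point, largest near the image corners:
print(np.linalg.norm(distortion_shift(np.array([320.0, 240.0]))))
print(np.linalg.norm(distortion_shift(np.array([0.0, 0.0]))))
```

Evaluating this function over a grid of pixels reproduces the arrow plots described above: near-zero displacement at the principal point, growing towards the corners.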


4 Your exercise

Follow the instructions below, always saving your results, as they will be needed both for

subsequent points and for the final report.

1. Acquire 20 images of your calibration target and perform calibration as described above

and save the results (by pressing the Save button) as well as copy and paste the

calibration results to a wordprocessor file. After that DO NOT change the optical

parameters of your camera (you can move it though, but do not touch the lens).

a. Visualize the distortion model of the camera. Discuss the results.

b. Select the most distorted image from the calibration images and undistort it. Use

the command-line undistort_image_color rather than the Undistort image

button. Verify (how?) and discuss the results.

2. Now use the Add/Suppress Images button to deselect most of the images – leave just one

image (use expressions like [2:20] or [2 3 4 5] when prompted for image numbers).

Afterwards, simply press the Calibration button and the calibration will be performed

(there is no need for corner extraction and recomputation, as this was done in the very first

step for all the images). The calibration results based on a single image will be very

inaccurate. Copy and paste the results to a file. Repeat this procedure for 2, 5, 8, 10, 13,

16 and 19 images. Plot the results: camera intrinsic parameters as a function of the number of

chessboard images.

3. Load the calibration results based on all the images.

4. Use the Comp.Extrinsic button on a calibration image to calculate the extrinsic parameters

of your camera (mark the outer corners of the grid just like during calibration) - the results

yield the location and orientation of the grid with respect to the camera. Combine the

rotation matrix and the translation vector in a homogeneous matrix:

O = [ Rc_ext  Tc_ext
      0 0 0   1     ]
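Assuming Comp. Extrinsic has returned a rotation Rc_ext and translation Tc_ext, stacking them into the 4x4 homogeneous matrix O (and inverting it, as needed in point 5 below) is straightforward. A Python/NumPy sketch with illustrative values:

```python
import numpy as np

# Illustrative extrinsic output (Rc_ext, Tc_ext) of Comp. Extrinsic.
Rc_ext = np.eye(3)
Tc_ext = np.array([0.10, 0.00, 0.50])

# O = [ Rc_ext Tc_ext ; 0 0 0 1 ]
O = np.eye(4)
O[:3, :3] = Rc_ext
O[:3, 3] = Tc_ext
print(O)

# T = inv(O) maps homogeneous camera-frame points back to the grid frame:
T = np.linalg.inv(O)
grid_origin_cam = O @ np.array([0.0, 0.0, 0.0, 1.0])  # grid origin, camera frame
assert np.allclose(T @ grid_origin_cam, [0.0, 0.0, 0.0, 1.0])
```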


Do the same with a single black square – the one to which the local coordinate system of the grid

is attached. Store the result in the homogeneous variable O2. Are O and O2 identical?

Discuss the results.

5. Localize two black squares just like before and store their coordinates in homogeneous

format P1 and P2. In order to find the localization of these two squares in a reference

frame different than the camera’s reference frame, one must transform the measurement

results from one frame to the other. It is done by multiplying the results by the

transformation matrix (left hand multiplication: T*P). In our case, the matrix that would

transform the location of the squares from the camera reference frame to the grid

reference frame is T = O^(-1). Perform the calculation for both squares and discuss the results.

What can you tell about the orientation of the squares (after transformation)? What angles

do you expect? Convert the rotation matrix (denoted here by R) to three angles

representing heading, attitude and bank using the following formulas:

heading = atan2(-R(3,1),R(1,1))

attitude = asin(R(2,1))

bank = atan2(-R(2,3),R(2,2))

except when R(2,1) = 1 or -1, or is “close enough”, in which case:

heading = atan2(R(1,3),R(3,3))

bank = 0
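The formulas above use Matlab's 1-based R(i,j) indexing. A Python/NumPy sketch (0-based indices), with the near-singular case handled as described:

```python
import numpy as np

def to_heading_attitude_bank(R, eps=1e-6):
    """Convert a rotation matrix to heading/attitude/bank with the formulas above."""
    if abs(R[1, 0]) > 1 - eps:          # R(2,1) = 1 or -1, or "close enough"
        heading = np.arctan2(R[0, 2], R[2, 2])
        attitude = np.arcsin(np.clip(R[1, 0], -1.0, 1.0))
        bank = 0.0
    else:
        heading = np.arctan2(-R[2, 0], R[0, 0])
        attitude = np.arcsin(R[1, 0])
        bank = np.arctan2(-R[1, 2], R[1, 1])
    return heading, attitude, bank

# A pure heading rotation of 30 degrees about the vertical (y) axis:
a = np.deg2rad(30.0)
R = np.array([[ np.cos(a), 0.0, np.sin(a)],
              [ 0.0,       1.0, 0.0      ],
              [-np.sin(a), 0.0, np.cos(a)]])
h, at, b = to_heading_attitude_bank(R)
print(np.rad2deg(h), np.rad2deg(at), np.rad2deg(b))
```

For this pure heading rotation the function recovers 30 degrees of heading and zero attitude and bank, which is a handy sanity check before applying it to your measured rotation matrices.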

[Figure: grid reference frame (O, X, Y, Z) overlaid on a calibration image – “Image points (+) and reprojected grid points (o)”]


6. Repeat the above for a square marked in the following way:

[Figure: “Extracted corners” – a single square with origin O, side lengths dX and dY, and axes Xc, Yc shown in the camera frame]

Do not forget to provide the correct length of the square sides! Is the yaw angle correct?

Discuss the results.

7. Repeat the measurements from points 5 and 6 with all distortion coefficients set to 0 (they

are stored in the ‘kc’ variable). Discuss the differences. Do not forget to restore the

original values of ‘kc’ afterwards!

8. When localizing objects on a well-defined plane (such as our calibration grid or any

other plane on which at least 4 points are known), we do not need several known points

per object: a single point can be localized in the local coordinate frame of such a plane.

The problem is to map pixels from one quadrilateral to another and is called planar

homography2.

2 In general homography is the mathematical term for mapping points on one surface to points on another. In

this sense it is a more general term than as used here. In the context of computer vision, homography almost always

refers to mapping between points on two image planes that correspond to the same location on a planar object in the

real world. It can be shown that such a mapping is representable by a single 3-by-3 matrix (defined up to scale).


It is possible to express this mapping in terms of matrix multiplication if we use

homogeneous coordinates to express both the viewed point Q and the point q on the

imager to which Q is mapped. If we define:

Q~ = [X Y Z 1]^T

q~ = [x y 1]^T

then we can express the action of the homography simply as:

q~ = H * Q~

The above equation holds only for a simple perspective model that does not consider

distortions to the image; therefore, one should use an undistorted image for homography

calculations.
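As an illustration, here is a Python/NumPy sketch of estimating H from four plane–image correspondences by the standard DLT method and mapping a pixel back onto the plane (on the calibration plane Z = 0, so Q~ reduces to [X Y 1]^T). The function names and point values below are hypothetical stand-ins, not the lab's calculate_homography and localize_on_plane scripts:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate H with q~ = H * p~ (homogeneous) from 4 correspondences via DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the smallest right singular vector of A solves A h = 0 (up to scale)
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def localize_on_plane(H, pixel):
    """Map an (undistorted) pixel back to plane coordinates with inv(H)."""
    p = np.linalg.inv(H) @ np.array([pixel[0], pixel[1], 1.0])
    return p[:2] / p[2]

# Four known plane points (e.g. grid corners, in mm) and their pixel locations:
plane = [(0, 0), (100, 0), (100, 100), (0, 100)]
image = [(50, 40), (420, 60), (400, 410), (60, 390)]
H = homography_from_points(plane, image)
print(localize_on_plane(H, image[0]))   # should recover the first plane point
```

With exact correspondences the recovered plane coordinates match the known points; with real, noisy clicks the result is only approximate, which is one way to verify your localization calculations.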

Undistort the same image you used previously with undistort_image_color

command and run the calculate_homography script. Having calculated the

homography matrix H, you can find the local coordinates of your planar object by

executing the localize_on_plane script and clicking on the image. Verify if

localization calculations are correct and compare with previous results.

9. Given a set of geometric figures, use one of the above methods to establish their position

in space with respect to a predefined coordinate system. In general, there is no way of

knowing if an object lies on the predefined plane or not – it is up to you to decide. Discuss

cases when a particular method can be applied.


10. Assuming that you are dealing only with flat figures on the reference plane, make the

localization process automatic. Find the following features of the observed objects: color,

number of vertices, centroid and area. Use the find_vertices function to find

vertices of labeled objects (as input use a blank image with a single object). Verify the

results.