I realized that it is more convenient to resize (downsize) the input camera image to a small standard size (in my case 320 x 240) than to adjust the parameters to each camera image size every time. That is, the original large image and its intrinsic parameters are used only for displaying the rendered augmented-reality scene, while all the processing behind it is done with the resized image and the corresponding intrinsic parameters.

Basically, intrinsic camera parameters are needed to calculate the projection of a 3D point onto the image plane. Such projection is an important part of visual SLAM: it predicts the observations of 3D landmarks in the current frame from the camera pose and 3D map estimated in the previous frame.
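As a minimal sketch of that projection with the standard pinhole model (the function name and sample numbers are mine, not from any particular library):

```python
# Pinhole projection of a 3D point (X, Y, Z) in camera coordinates
# onto the image plane, using intrinsics fx, fy, cx, cy.
def project(X, Y, Z, fx, fy, cx, cy):
    x, y = X / Z, Y / Z              # normalized image coordinates
    return fx * x + cx, fy * y + cy  # pixel coordinates

# Example with made-up point and 320 x 240-scale intrinsics:
u, v = project(0.5, -0.2, 2.0, 268.65, 268.42, 162.95, 124.59)
```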

So to get the resized image, all I have to do is call OpenCV's cv::resize() function. For the corresponding intrinsic parameters, however, I have to do something more: create the intrinsic parameters of a virtual camera from a physically existing, calibrated camera.

Intuitively, among the intrinsic parameters, the focal length and principal point of the resized image can be obtained by simply scaling by the resizing ratio. For example, if the original camera image is 1280 x 960 and the resized image is 320 x 240, the ratio is 1/4, and the focal length and principal point are scaled by that factor.
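That scaling step is just a few multiplications; a sketch (scale_intrinsics is a name I made up, and the numbers are rounded from my calibration results below):

```python
# Scale intrinsics for a resized image. The same factor s applies to
# focal length and principal point; e.g. 1280 x 960 -> 320 x 240 gives s = 1/4.
def scale_intrinsics(fx, fy, cx, cy, s):
    return fx * s, fy * s, cx * s, cy * s

# 1280 x 960 camera resized to 320 x 240:
print(scale_intrinsics(1074.35, 1073.84, 653.71, 499.17, 0.25))
```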

What about the distortion factors? Should I scale them like the focal length and principal point? I posted the following question on the visual SLAM community board on Google+, and José Jerónimo Moreira Rodrigues answered that the distortion factors remain the same by definition. Thanks, Rodrigues.

My question:

Let’s say I have chessboard images of width w and height h. (Note that w and h are the size of the images, not the actual size of the chessboard.)

After (Zhang’s) camera calibration, the resulting intrinsic camera parameters are fx (focal length x), fy (focal length y), cx (principal point x), cy (principal point y), and k1 through k5 (the 1st through 5th distortion factors).

If I resize the chessboard images by half, so that the width and height of the resized images are w/2 and h/2 respectively, and conduct camera calibration again, I would theoretically expect to get fx/2, fy/2, cx/2 and cy/2 as the focal length x, focal length y, principal point x and principal point y respectively.

What about the distortion factors? What would I theoretically expect the 1st through 5th distortion factors of the resized chessboard images to be, in terms of k1, k2, k3, k4 and k5?

Rodrigues’ answer:

They don’t change. They are functions of X/Z and Y/Z. Check http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html. Only the intrinsic matrix changes, not the distortion coefficients.
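This can be checked numerically: in Bouguet’s model the distortion is applied in normalized coordinates (x, y) = (X/Z, Y/Z), which do not depend on the image size; only the final mapping to pixels does. A sketch under that model (function names are mine; kc is in the toolbox order [k1 k2 p1 p2 k3]):

```python
# Radial (k1, k2, k3) + tangential (p1, p2) distortion applied in
# normalized coordinates, as in Bouguet's camera model.
def distort(x, y, k1, k2, p1, p2, k3):
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

def to_pixels(xd, yd, fx, fy, cx, cy):
    return fx * xd + cx, fy * yd + cy

# Same normalized point and same distortion coefficients, two image sizes:
kc = (-0.289, 0.108, -0.0008, 0.0002, 0.0)
xd, yd = distort(0.3, -0.1, *kc)
u_full, v_full = to_pixels(xd, yd, 536.94, 536.53, 326.87, 249.25)
u_half, v_half = to_pixels(xd, yd, 536.94 / 2, 536.53 / 2, 326.87 / 2, 249.25 / 2)
# u_half matches u_full / 2 (and likewise for v): scaling the intrinsics
# alone reproduces the half-size image; kc never needs to change.
```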

An experiment using the Camera Calibration Toolbox for Matlab with the chessboard images in the OpenCV example folder also shows that the distortion factors do NOT change.

I did the experiment by making two sets of images (1280 x 960 and 320 x 240) from the original chessboard images (640 x 480) in the OpenCV example folder. The results are:

1280 x 960 (double)

Focal Length: fc = [ 1074.34831 1073.83567 ]

Principal point: cc = [ 653.70628 499.16838 ]

Distortion: kc = [ -0.28805 0.10555 -0.00083 0.00033 0.00000 ]

640 x 480 (original)

Focal Length: fc = [ 536.93593 536.52653 ]

Principal point: cc = [ 326.87081 249.24606 ]

Distortion: kc = [ -0.28990 0.10780 -0.00079 0.00017 0.00000 ]

320 x 240 (half)

Focal Length: fc = [ 268.65196 268.42076 ]

Principal point: cc = [ 162.95051 124.58663 ]

Distortion: kc = [ -0.29024 0.10502 -0.00072 -0.00004 0.00000 ]
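A quick arithmetic check on these numbers (copied from the results above) confirms the pattern: the focal lengths scale by roughly 2 between adjacent sizes, while the distortion vectors agree to within a few thousandths.

```python
# Focal length ratios across the three calibrations above:
fc_double, fc_orig, fc_half = 1074.34831, 536.93593, 268.65196
print(fc_double / fc_orig)  # roughly 2.0
print(fc_orig / fc_half)    # roughly 2.0

# Largest absolute difference between the distortion vectors
# of the original and half-size calibrations:
kc_orig = [-0.28990, 0.10780, -0.00079, 0.00017, 0.00000]
kc_half = [-0.29024, 0.10502, -0.00072, -0.00004, 0.00000]
print(max(abs(a - b) for a, b in zip(kc_orig, kc_half)))  # a few 1e-3 at most
```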

…