Intrinsic camera parameters for resized images of a chessboard

I realized that it is more convenient to resize (downsize) the input camera image to a small standard size (in my case 320 x 240) instead of changing parameters for each camera image size every time.  That is, the original large image and its intrinsic parameters are used only for displaying the rendered augmented-reality scene, and all the processing behind it is done with the resized image and the corresponding intrinsic parameters.

Basically, intrinsic camera parameters are needed for calculating the projection of a 3D point onto the image plane.  Such projection is an important part of visual SLAM, as it predicts the observations of 3D landmarks in the current frame based on the camera pose and the 3D map estimated in the previous frame.
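
For reference, here is a minimal sketch of that projection (pinhole model, distortion ignored; the helper name project() is mine):

// A 3D point (X, Y, Z) in camera coordinates maps to pixel (u, v) as
//   u = fx * X / Z + cx,  v = fy * Y / Z + cy
cv::Point2f project(const cv::Point3f& P, float fx, float fy, float cx, float cy)
{
    return cv::Point2f(fx * P.x / P.z + cx, fy * P.y / P.z + cy);
}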

So to get the resized image, all I have to do is use the cv::resize() function of OpenCV.  For the corresponding intrinsic parameters, I have to do something more: I have to create the intrinsic parameters of a virtual camera from a physically existing, calibrated camera.

Intuitively, among the intrinsic parameters, the focal length and principal point of the resized image can be calculated by simply scaling them by the resizing ratio.  For example, if the original camera image is 1280 x 960 and the resized image is 320 x 240, the ratio is 1/4, and the focal length and principal point are scaled accordingly.

What about the distortion factors? Should I scale them like the focal length and principal point?  I posted the following question on the visual SLAM community board on Google Plus, and José Jerónimo Moreira Rodrigues gave me an answer saying that the distortion factors remain the same due to their definitions.  Thanks, Rodrigues.

My question:

Let’s say I have chessboard images of width w and height h. (Note that w and h are not the physical size of the chessboard but the size of the images of the chessboard.)
After (Zhang’s) camera calibration, the resulting intrinsic camera parameters are fx (focal length x), fy (focal length y), cx (principal point x), cy (principal point y), and k1 to k5 (the 1st to 5th distortion factors).
If I resize the chessboard images by half, so that the width and height of the resized images are w/2 and h/2 respectively, and conduct camera calibration again, I would theoretically expect to get fx/2, fy/2, cx/2 and cy/2 as focal length x, focal length y, principal point x and principal point y respectively.
What about the distortion factors? What would I theoretically expect to get as the 1st, 2nd, 3rd, 4th and 5th distortion factors of the resized chessboard images, in terms of k1, k2, k3, k4 and k5?

Rodrigues:

They don’t change. They are a function of X/Z and Y/Z. Only the intrinsic matrix changes, not the distortion coefficients.


An experiment using the Camera Calibration Toolbox for Matlab with the chessboard images in the OpenCV example folder also shows that the distortion factors do NOT change.
I did the experiment by making two sets of images (1280 x 960 and 320 x 240) from the original chessboard images (640 x 480) in the OpenCV example folder.  The results are:

1280 x 960 (double)
Focal Length:          fc = [ 1074.34831   1073.83567 ]
Principal point:       cc = [ 653.70628   499.16838 ]
Distortion:            kc = [ -0.28805   0.10555   -0.00083   0.00033  0.00000 ]

640 x 480 (original)
Focal Length:          fc = [ 536.93593   536.52653 ]
Principal point:       cc = [ 326.87081   249.24606 ]
Distortion:            kc = [ -0.28990   0.10780   -0.00079   0.00017  0.00000 ]

320 x 240 (half)
Focal Length:          fc = [ 268.65196   268.42076 ]
Principal point:       cc = [ 162.95051   124.58663 ]
Distortion:            kc = [ -0.29024   0.10502   -0.00072   -0.00004  0.00000 ]
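
So in OpenCV terms, adapting a calibration to the resized image boils down to scaling the camera matrix and copying the distortion coefficients unchanged. A minimal sketch (assuming cameraMatrix and distCoeffs of type CV_64F from cv::calibrateCamera(); the variable names are mine):

// Build the intrinsics of the virtual camera for the resized image.
double scale = 320.0 / 1280.0;                 // resizing ratio (here 1/4)
cv::Mat cameraMatrixSmall = cameraMatrix.clone();
cameraMatrixSmall.at<double>(0, 0) *= scale;   // fx
cameraMatrixSmall.at<double>(1, 1) *= scale;   // fy
cameraMatrixSmall.at<double>(0, 2) *= scale;   // cx
cameraMatrixSmall.at<double>(1, 2) *= scale;   // cy
cv::Mat distCoeffsSmall = distCoeffs.clone();  // distortion factors stay the same
cv::resize(imageLarge, imageSmall, cv::Size(320, 240));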



Role of size of calibration pattern in camera calibration

When using the Camera Calibration Toolbox or OpenCV camera calibration, we are asked to provide the width and height of a grid square on the chessboard.  So we measure the length with a ruler (in millimeters or meters) and type it into the program.

When the measured width and height of a grid square are 25 mm and 25 mm, what happens if we provide the width and height as 2500 mm and 2500 mm (which are wrong, a hundred times bigger than the real size)?  The answer is that it makes no difference to the intrinsic parameter values, which implies that the unit of the focal length output by the calibration software above is NOT mm or meters BUT pixels.  (It actually affects only the translation, i.e., part of the extrinsic parameters of each camera frame, which is usually an unimportant part of the calibration results.)  This also implies that if the image size (resolution) is big enough, it is fine to use a small chessboard, such as one printed on an A4 paper sheet.

This also means the provided size only matters up to scale: when the actually measured width and height are 25 mm and 50 mm respectively, it is okay to provide any scaled pair such as (250, 500), (25000, 50000), (1, 2) or (0.001, 0.002), as long as the ratio of width to height is the same as that of the real size (here 1 : 2).

What if we provide width and height values whose ratio is not the same as the measured one?  In other words, if we provide 250 and 750 as the width and height of a grid square (ratio 1 : 3), the program will not run properly; it will complain or produce very strange intrinsic parameters, because with a chessboard of grid ratio 1 : 3 it would be impossible to obtain the images (of a chessboard with ratio 1 : 2) actually taken by the camera being calibrated.
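
To see why the grid size only affects the translation, note that it enters the calibration solely through the 3D object points. A minimal sketch of building them (the 9 x 6 board and the variable names are mine, for illustration):

// Object points of a chessboard with 9 x 6 inner corners. squareSize only
// scales the 3D coordinates: a wrong but consistent value rescales the
// estimated translations while the intrinsics (in pixels) are unaffected.
float squareSize = 25.f;   // 25.f or 2500.f gives the same cameraMatrix
std::vector<cv::Point3f> objectPoints;
for (int row = 0; row < 6; ++row)
    for (int col = 0; col < 9; ++col)
        objectPoints.push_back(cv::Point3f(col * squareSize, row * squareSize, 0.f));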


Posted by on December 16, 2015 in Computer Vision


OpenCV-related tips

  1. To get the same results from PC and Android platforms with the same source code:
    1. For the matrix inverse function inv(), there are several options, namely DECOMP_SVD, DECOMP_EIG, DECOMP_LU and DECOMP_CHOLESKY.
      If you want the inversion results of PC and Android to be the same, you should use either DECOMP_SVD or DECOMP_EIG.
      However, the results of DECOMP_SVD and DECOMP_EIG also differ from each other for the same input.  For my work, DECOMP_SVD seems to give better results than DECOMP_EIG.
      Regarding speed, I checked the processing times of DECOMP_LU, DECOMP_SVD and DECOMP_EIG and found no significant difference in their running times.
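
      For example, a minimal sketch (the matrix values are made up):

      cv::Mat A = (cv::Mat_<double>(3, 3) << 4, 1, 0,
                                             1, 3, 1,
                                             0, 1, 2);
      cv::Mat Ainv = A.inv(cv::DECOMP_SVD);  // reproducible across PC and Android
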
    2. For the arc-tangent, there is an OpenCV function fastAtan2().
      If you want the arc-tangent return values of PC and Android to be the same, you should use fastAtan2() instead of atan2() from “math.h”.
      Regarding speed, I checked the speeds of atan2() and fastAtan2() and got strange results. In Debug mode, fastAtan2() is faster than atan2(), as the name implies.  In Release mode, however, atan2() is much faster than fastAtan2(), taking almost 0 milliseconds.  (I might have done something wrong.)
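
      Note that the two functions also use different conventions, as in this sketch (the inputs are made up):

      float x = -1.f, y = 1.f;                   // example inputs
      float deg = cv::fastAtan2(y, x);           // degrees in [0, 360), here ~135
      double rad = atan2((double)y, (double)x);  // radians in (-pi, pi], here 3*pi/4
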
    3. For random number generation, there are the functions rand() and srand() from “stdlib.h”.  OpenCV also provides the random number generator class cv::RNG.
      Sometimes you have to compare the (intermediate) results of the PC and Android versions of the same source code, so you need to remove the randomness from your code. In other words, if you want the sequences of generated random numbers on PC and Android to be fixed and identical, you should use cv::RNG with a fixed seed, such as RNG(1234567), instead of rand() from “stdlib.h”.  Even if you give a fixed seed to srand(), as in srand(1234567), each platform will reproduce its own fixed sequence on every run, but the PC and Android sequences will not be the same.  To get the same fixed random sequence with the same source code, use cv::RNG as in the following example.

      cv::RNG rng(12378213);                    // fixed seed
      float randomF = rng.uniform(5.f, 45.f);   // uniform float in [5, 45)
      int randomI = rng.uniform(5, 45);         // uniform int in [5, 45)
      double randomD = rng.uniform(5.0, 45.0);  // uniform double in [5, 45)

      In addition, some OpenCV functions, such as cv::solvePnPRansac(), use a random number generator internally.  If you want such a function to give the same fixed result on every run, fix the seed of OpenCV's global RNG, as in the following (the argument names are placeholders):

      cv::theRNG().state = 1234567;  // fix the seed of OpenCV's global RNG
      cv::solvePnPRansac(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
  2. About reading gray image file.
    There are several options for the function cv::imread(), namely IMREAD_COLOR, IMREAD_GRAYSCALE, IMREAD_UNCHANGED, IMREAD_ANYDEPTH and IMREAD_ANYCOLOR, and the default is IMREAD_COLOR.
    If you read a gray image file with the default option IMREAD_COLOR, the resulting Mat (say matCOLOR) has 3 channels, like a color image.  If you split matCOLOR into 3 channels (blue, green and red) with the function cv::split(), the resulting blue, green and red matrices (say matBlueFromCOLOR, matGreenFromCOLOR and matRedFromCOLOR respectively) are all identical.  If you convert matCOLOR to a gray image (say matGrayFromCOLOR) using cv::cvtColor() with CV_BGR2GRAY, matGrayFromCOLOR is also identical to the blue, green and red matrices.
    If you read the gray image file with the option IMREAD_GRAYSCALE or IMREAD_UNCHANGED, the resulting Mat (say matGRAYSCALE or matUNCHANGED respectively) is also identical to the converted Mat matGrayFromCOLOR.
    In short, for a gray image file, the following Mats are all identical:
    matBlueFromCOLOR, matGreenFromCOLOR, matRedFromCOLOR, matGrayFromCOLOR, matGRAYSCALE and matUNCHANGED.
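
    A minimal sketch to verify this (the file name "gray.png" is just an example):

    cv::Mat matCOLOR = cv::imread("gray.png", cv::IMREAD_COLOR);
    std::vector<cv::Mat> bgr;
    cv::split(matCOLOR, bgr);   // bgr[0] = blue, bgr[1] = green, bgr[2] = red
    cv::Mat matGrayFromCOLOR;
    cv::cvtColor(matCOLOR, matGrayFromCOLOR, CV_BGR2GRAY);
    cv::Mat matGRAYSCALE = cv::imread("gray.png", cv::IMREAD_GRAYSCALE);
    // countNonZero() of a difference mask is 0 when two Mats are identical
    CV_Assert(cv::countNonZero(bgr[0] != matGrayFromCOLOR) == 0);
    CV_Assert(cv::countNonZero(matGrayFromCOLOR != matGRAYSCALE) == 0);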

Posted by on December 16, 2015 in OpenCV


The acknowledgement

“This book is dedicated to XXX, without whom it would never have been started, and to YYY, without whom it would never have been finished.”

This is from the acknowledgements of a book I took a look at today.  I have seen acknowledgements with “never have been started” or “never have been finished”, but not both.  It sounds much more beautiful with both than with either alone.  It does not seem likely that I will ever have something to publish a book about under my name.


Posted by on December 14, 2015 in Misc


PointCloud SDK: PointCloud.h File Reference


Posted by on September 16, 2014 in Uncategorized


uniform random number generation

To generate a uniform random number between 0 and 1, I reused the code I had always used and got burned, so I am writing this down.

The code I had been using was

(float)rand() / (float)RAND_MAX;

Since a uniform random number usually means a fraction x with 0 <= x < 1, I simply assumed the code above did exactly that.
It turns out, however, that the code above returns a value in the range 0 <= x <= 1. In other words, the result can actually be 1. I wasted a lot of time without realizing that.
To get a value in 0 <= x < 1 as I wanted, I had to use

(float)rand() / (float)(RAND_MAX + 1);

instead. That is, rand() is a function that returns a value in the range 0 <= rand() <= RAND_MAX.
Then what should I do when I need a random number with 0 < x < 1?

(float)(rand() + 1) / (float)(RAND_MAX + 2);

Wouldn't this do it?

To summarize:

0 <= x <= 1

(float)rand() / (float)(RAND_MAX);

0 <= x < 1

(float)rand() / (float)(RAND_MAX + 1);

0 < x < 1

(float)(rand() + 1) / (float)(RAND_MAX + 2);

I ran it, and the results look about right.
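
A quick check of the three cases can be run with a sketch like the following; note that doing the division in double, e.g. rand() / (RAND_MAX + 1.0), additionally avoids the integer overflow of RAND_MAX + 1 on platforms where RAND_MAX equals INT_MAX.

#include <cstdio>
#include <cstdlib>

int main()
{
    double mn = 1.0, mx = 0.0;
    for (int i = 0; i < 10000000; ++i) {
        double x = rand() / (RAND_MAX + 1.0);  // 0 <= x < 1, computed in double
        if (x < mn) mn = x;
        if (x > mx) mx = x;
    }
    printf("min = %f, max = %f\n", mn, mx);    // max stays strictly below 1
    return 0;
}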

Added on 2015.12.16:

I found code that handles the 0 <= x < 1 case in a different way.

(float)(rand() % RAND_MAX) / (float)RAND_MAX;

Since rand() % RAND_MAX is in the range 0 <= v <= RAND_MAX - 1, the quotient never reaches 1 (though both 0 and RAND_MAX are mapped to 0, so 0 comes out slightly more often than the other values).

Posted by on August 6, 2014 in Programming


A High Schooler’s Comp. Sci Blog: Extended Kalman Filter Example With Code



Posted by on July 28, 2014 in Computer Vision, Programming