
Category Archives: Computer Vision

Intrinsic camera parameters for a resized set of chessboard images

I realized that it is more convenient to resize (downsize) the input camera image to a small standard size (in my case 320 x 240) instead of changing the parameters according to each camera image size every time.  That is, the original large image and its intrinsic parameters are used only for displaying the rendered augmented reality scene, and all the processing behind it is done with the resized image and the corresponding intrinsic parameters.

Basically, intrinsic camera parameters are needed for calculating the projection of a 3D point onto the image plane.  Such a projection is an important part of visual SLAM: it is how the observations of 3D landmarks in the current frame are predicted from the camera pose and 3D map estimated in the previous frame.
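Just to make the idea concrete, here is a minimal sketch of that projection in OpenCV's C++ types (the function name projectToImage is made up for illustration), assuming the 3D point is already expressed in the camera coordinate frame:

#include <opencv2/core.hpp>

// Project a 3D point (already in the camera frame) onto the image plane
// using the intrinsic parameters fx, fy, cx, cy (pinhole model, no distortion).
cv::Point2d projectToImage(const cv::Point3d& P,
                           double fx, double fy, double cx, double cy)
{
    double x = P.x / P.z;   // normalized image coordinates
    double y = P.y / P.z;
    return cv::Point2d(fx * x + cx, fy * y + cy);   // pixel coordinates
}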

So to get the resized image, all I have to do is call the cv::resize() function of OpenCV.  For the corresponding intrinsic parameters, I have to do something more: I have to create the intrinsic parameters of a virtual camera from a physically existing, calibrated camera.

Intuitively, among the intrinsic parameters, the focal length and principal point of the resized image can be obtained by simply scaling with the resizing ratio.  For example, if the original camera image is 1280 x 960 and the resized image is 320 x 240, the ratio is 1/4, and the focal length and principal point are scaled accordingly.
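As a small sketch of what I mean, this is how the 3x3 intrinsic matrix could be scaled (resizeIntrinsics, K_full and K_small are names I made up for illustration, not an OpenCV API):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Scale the calibrated intrinsic matrix by the resizing ratio:
// fx, fy, cx, cy are multiplied by the ratio, the rest of K stays the same.
cv::Mat resizeIntrinsics(const cv::Mat& K_full, double ratio)
{
    cv::Mat K_small = K_full.clone();
    K_small.at<double>(0, 0) *= ratio;   // fx
    K_small.at<double>(1, 1) *= ratio;   // fy
    K_small.at<double>(0, 2) *= ratio;   // cx
    K_small.at<double>(1, 2) *= ratio;   // cy
    return K_small;
}

// e.g. cv::resize(full, small, cv::Size(320, 240));
//      cv::Mat K_small = resizeIntrinsics(K_full, 320.0 / 1280.0);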

What about the distortion factors? Should I scale them like the focal length and principal point?  I posted the following question on the visual SLAM community board on Google Plus, and José Jerónimo Moreira Rodrigues gave me an answer saying that the distortion factors remain the same due to their definitions.  Thanks, Rodrigues.

My question:

Let’s say I have chessboard images of size w (width) and h (height). (Note that w and h are not the actual size of the chessboard but the size of the images of the chessboard.)
After (Zhang’s) camera calibration, the resulting intrinsic camera parameters are fx (focal length x), fy (focal length y), cx (principal point x), cy (principal point y), and k1, k2, k3, k4, k5 (the 1st to 5th distortion factors).
If I resize the chessboard images by half, so that the width and height of the resized images are w/2 and h/2 respectively, and conduct camera calibration again, I would theoretically expect to get fx/2, fy/2, cx/2 and cy/2 as the focal length x, focal length y, principal point x and principal point y respectively.
What about the distortion factors? What would I theoretically expect the 1st, 2nd, 3rd, 4th and 5th distortion factors of the resized chessboard images to be, in terms of k1, k2, k3, k4 and k5?

Rodrigues:

They don’t change. They are a function of X/Z and Y/Z. Check http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html. Only the intrinsic matrix changes, not the distortion coefficients.
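His point becomes clear if you write the distortion model out on the normalized coordinates x = X/Z, y = Y/Z: the coefficients never see the image size in pixels. A rough sketch of the Bouguet/OpenCV model (where the 3rd and 4th entries of kc are the tangential terms, often written p1 and p2) looks like this:

#include <opencv2/core.hpp>

// Apply radial (k1, k2, k3) and tangential (p1, p2) distortion to
// normalized coordinates n = (X/Z, Y/Z).  Pixel coordinates are obtained
// afterwards as u = fx * xd + cx, v = fy * yd + cy, so only the intrinsic
// matrix depends on the image size.
cv::Point2d distortNormalized(const cv::Point2d& n, double k1, double k2,
                              double p1, double p2, double k3)
{
    double r2 = n.x * n.x + n.y * n.y;
    double radial = 1.0 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2;
    double xd = n.x * radial + 2.0 * p1 * n.x * n.y + p2 * (r2 + 2.0 * n.x * n.x);
    double yd = n.y * radial + p1 * (r2 + 2.0 * n.y * n.y) + 2.0 * p2 * n.x * n.y;
    return cv::Point2d(xd, yd);
}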

 

An experiment using the Camera Calibration Toolbox for Matlab with the chessboard images in the OpenCV example folder also shows that the distortion factors do NOT change.
I did the experiment by making two sets of images (1280 x 960 and 320 x 240) from the original chessboard images (640 x 480) in the OpenCV example folder.  The result is:

1280 x 960 (double)
Focal Length:          fc = [ 1074.34831   1073.83567 ]
Principal point:       cc = [ 653.70628   499.16838 ]
Distortion:            kc = [ -0.28805   0.10555   -0.00083   0.00033   0.00000 ]

640 x 480 (original)
Focal Length:          fc = [ 536.93593   536.52653 ]
Principal point:       cc = [ 326.87081   249.24606 ]
Distortion:            kc = [ -0.28990   0.10780   -0.00079   0.00017   0.00000 ]

320 x 240 (half)
Focal Length:          fc = [ 268.65196   268.42076 ]
Principal point:       cc = [ 162.95051   124.58663 ]
Distortion:            kc = [ -0.29024   0.10502   -0.00072   -0.00004   0.00000 ]

(Image: chessboards)


Role of size of calibration pattern in camera calibration

When using the Camera Calibration Toolbox or OpenCV camera calibration, we are asked to provide the width and height of a grid square of the chessboard.  So we measure the length with a ruler (in millimeters or meters) and type it into the program.

When the measured width and height of a grid square are 25 mm and 25 mm, what happens if we instead give the program 2500 mm and 2500 mm (which is wrong, a hundred times bigger than the real size)?  The answer is that it makes no difference to the intrinsic parameter values, which implies that the unit of the focal length output by the calibration software above is NOT mm or meters BUT pixels.  (It actually affects only the translation, i.e., part of the extrinsic parameters of each camera frame, which is an unimportant part of the calibration result here.)  This also implies that if the image size (resolution) is large enough, it is fine to use a small chessboard such as one printed on an A4 sheet of paper.

This also means the grid size is only defined up to scale: when the actually measured width and height are 25 and 50 mm respectively, it is okay to provide any scaled pair such as (250, 500), (25000, 50000), (1, 2) or (0.001, 0.002), as long as the ratio of width to height is the same as that of the real size (here 1 : 2).
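This is easy to see from how calibration tools build the 3D object points of the chessboard from the square size you type in. A rough sketch (the board dimensions and the name chessboardObjectPoints are only for illustration): scaling the square size scales every object point by the same factor, which only rescales the recovered translation vectors and leaves fx, fy, cx, cy and the distortion untouched.

#include <opencv2/core.hpp>
#include <vector>

// 3D object points of the chessboard corners (on the Z = 0 plane), spaced by
// the square size entered by the user.  Multiplying squareSize by 100
// multiplies every point by 100, so only the estimated translations change
// by that factor; the intrinsic parameters (in pixels) stay the same.
std::vector<cv::Point3f> chessboardObjectPoints(int cols, int rows, float squareSize)
{
    std::vector<cv::Point3f> pts;
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            pts.emplace_back(c * squareSize, r * squareSize, 0.0f);
    return pts;
}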

What if we provide width and height values whose ratio is not the same as the measured one?  In other words, if we provide 250 and 750 as the width and height of a grid square (ratio 1 : 3), the program will not run properly: it will either complain or produce very strange intrinsic parameters, because with a chessboard of grid ratio 1 : 3 it would be impossible to obtain such images (of a chessboard with ratio 1 : 2) from the camera being calibrated.

 

Posted on December 16, 2015 in Computer Vision

 
Link

A High Schooler’s Comp. Sci Blog: Extended Kalman Filter Example With Code

 

 

Posted on July 28, 2014 in Computer Vision, Programming

 
Link

Chehra

 

 

Posted on July 25, 2014 in Computer Vision

 
Image

homography?


 

Posted on July 6, 2014 in Computer Vision, Misc

 
Video

Google I/O 2014 – A 3D tablet, an OSCAR, and a little cash. Tango, Spotlight, Ara. ATAP. – YouTube

 

 
 
Link

Perspective 3-Point (P3P) Algorithm

 

Posted on June 18, 2014 in Computer Vision, Programming