How To Get Intrinsic Camera Parameters
Goal
In this section, we will learn about
- types of distortion caused by cameras
- how to find the intrinsic and extrinsic properties of a camera
- how to undistort images based on these properties
Basics
Some pinhole cameras introduce significant distortion to images. Two major kinds of distortion are radial distortion and tangential distortion.
Radial distortion causes straight lines to appear curved. Radial distortion becomes larger the farther points are from the center of the image. For example, one image is shown below in which two edges of a chess board are marked with red lines. But you can see that the border of the chess board is not a straight line and doesn't match with the red line. All the expected straight lines are bulged out. Visit Distortion (optics) for more details.
image
Radial distortion can be represented as follows:
\[x_{distorted} = x( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6) \\ y_{distorted} = y( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6)\]
Similarly, tangential distortion occurs because the image-taking lens is not aligned perfectly parallel to the imaging plane. So, some areas in the image may look nearer than expected. The amount of tangential distortion can be represented as below:
\[x_{distorted} = x + [ 2p_1xy + p_2(r^2+2x^2)] \\ y_{distorted} = y + [ p_1(r^2+ 2y^2)+ 2p_2xy]\]
In short, we need to find five parameters, known as distortion coefficients, given by:
\[Distortion \; coefficients=(k_1 \hspace{10pt} k_2 \hspace{10pt} p_1 \hspace{10pt} p_2 \hspace{10pt} k_3)\]
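Plugging numbers into these formulas makes the model concrete. The sketch below applies both the radial and tangential terms to a normalized image point; the coefficient values are made up purely for illustration, not taken from any real calibration:

```python
# Illustrative distortion coefficients (k1, k2, p1, p2, k3); not from a real camera.
k1, k2, p1, p2, k3 = 0.1, 0.01, 0.001, 0.001, 0.0

def distort(x, y):
    """Apply the radial + tangential distortion model to a normalized point (x, y)."""
    r2 = x * x + y * y                                  # r^2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3      # radial factor 1 + k1 r^2 + k2 r^4 + k3 r^6
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d
```

Note that a point at the optical center (0, 0) is unaffected, while the displacement grows with r, which is exactly why lines far from the image center bulge the most.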
In addition to this, we need some other information, like the intrinsic and extrinsic parameters of the camera. Intrinsic parameters are specific to a camera. They include information like focal length ( \(f_x,f_y\)) and optical centers ( \(c_x, c_y\)). The focal length and optical centers can be used to create a camera matrix, which can be used to remove distortion due to the lenses of a specific camera. The camera matrix is unique to a specific camera, so once calculated, it can be reused on other images taken by the same camera. It is expressed as a 3x3 matrix:
\[camera \; matrix = \left [ \begin{matrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{matrix} \right ]\]
Extrinsic parameters correspond to rotation and translation vectors which translate the coordinates of a 3D point to the camera's coordinate system.
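As a tiny sketch of what the extrinsic parameters do, the snippet below moves one world point into camera coordinates; the rotation and translation here are made up for illustration only:

```python
import numpy as np

# A made-up pose: no rotation, camera 5 units in front of the board plane.
R = np.eye(3)                        # 3x3 rotation matrix
t = np.array([0.0, 0.0, 5.0])        # translation vector

X_world = np.array([1.0, 2.0, 0.0])  # a board corner in world coordinates (Z=0)
X_cam = R @ X_world + t              # the same point in camera coordinates
```

cv.Rodrigues() converts between the compact rotation vectors returned by cv.calibrateCamera() and 3x3 rotation matrices like R above.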
For stereo applications, these distortions need to be corrected first. To find these parameters, we must provide some sample images of a well defined pattern (e.g. a chess board). We find some specific points of which we already know the relative positions (e.g. square corners in the chess board). We know the coordinates of these points in real world space and we know the coordinates in the image, so we can solve for the distortion coefficients. For better results, we need at least 10 test patterns.
Code
As mentioned above, we need at least 10 test patterns for camera calibration. OpenCV comes with some images of a chess board (see samples/data/left01.jpg – left14.jpg), so we will use these. Consider an image of a chess board. The important input data needed for calibration of the camera is the set of 3D real world points and the corresponding 2D coordinates of these points in the image. 2D image points are OK which we can easily find from the image. (These image points are locations where two black squares touch each other on the chess board.)
What about the 3D points from real world space? Those images are taken from a static camera and chess boards are placed at different locations and orientations. So we need to know the \((X,Y,Z)\) values. But for simplicity, we can say the chess board was kept stationary at the XY plane (so Z=0 always) and the camera was moved accordingly. This consideration helps us to find only X,Y values. Now for X,Y values, we can simply pass the points as (0,0), (1,0), (2,0), ... which denotes the location of points. In this case, the results we get will be in the scale of the size of a chess board square. But if we know the square size (say 30 mm), we can pass the values as (0,0), (30,0), (60,0), ... . Thus, we get the results in mm. (In this case, we don't know the square size since we didn't take those images, so we pass in terms of square size.)
3D points are called object points and 2D image points are called image points.
Setup
So to find the pattern in a chess board, we can use the function cv.findChessboardCorners(). We also need to pass what kind of pattern we are looking for, like an 8x8 grid, 5x5 grid etc. In this case, we use a 7x6 grid. (Normally a chess board has 8x8 squares and 7x7 internal corners.) It returns the corner points and retval which will be True if the pattern is found. These corners will be placed in an order (from left-to-right, top-to-bottom).
- Note
- This function may not be able to find the required pattern in all the images. So, one good option is to write the code such that it starts the camera and checks each frame for the required pattern. Once the pattern is found, find the corners and store them in a list. Also, provide some interval before reading the next frame so that we can adjust our chess board in a different direction. Continue this process until the required number of good patterns is obtained. Even in the example provided here, we are not sure how many images out of the 14 given are good. Thus, we must read all the images and take only the good ones.
- Instead of a chess board, we can alternatively use a circular grid. In this case, we must use the function cv.findCirclesGrid() to find the pattern. Fewer images are sufficient to perform camera calibration using a circular grid.
Once we find the corners, we can increase their accuracy using cv.cornerSubPix(). We can also draw the pattern using cv.drawChessboardCorners(). All these steps are included in the code below:
import numpy as np
import cv2 as cv
import glob
# termination criteria for the corner refinement
criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ...., (6,5,0)
objp = np.zeros((6*7,3), np.float32)
objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)
# arrays to store object points and image points from all the images
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane
images = glob.glob('*.jpg')
for fname in images:
    img = cv.imread(fname)
    gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
    # find the chess board corners
    ret, corners = cv.findChessboardCorners(gray, (7,6), None)
    # if found, add object points, image points (after refining them)
    if ret == True:
        objpoints.append(objp)
        corners2 = cv.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        imgpoints.append(corners2)
        # draw and display the corners
        cv.drawChessboardCorners(img, (7,6), corners2, ret)
        cv.imshow('img', img)
        cv.waitKey(500)
cv.destroyAllWindows()
One image with the pattern drawn on it is shown below:
image
Calibration
Now that we have our object points and image points, we are ready to go for calibration. We can use the function cv.calibrateCamera() which returns the camera matrix, distortion coefficients, rotation and translation vectors etc.
ret, mtx, dist, rvecs, tvecs = cv.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
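The returned mtx has exactly the 3x3 form shown earlier, so the individual intrinsics can be read straight off it. The numbers below are hypothetical, just to show the layout:

```python
import numpy as np

# Hypothetical camera matrix in the form returned by cv.calibrateCamera().
mtx = np.array([[534.1,   0.0, 341.5],
                [  0.0, 534.1, 232.9],
                [  0.0,   0.0,   1.0]])

fx, fy = mtx[0, 0], mtx[1, 1]  # focal lengths, in pixels
cx, cy = mtx[0, 2], mtx[1, 2]  # optical center
```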
Undistortion
Now, we can take an image and undistort it. OpenCV comes with two methods for doing this. However, first, we can refine the camera matrix based on a free scaling parameter using cv.getOptimalNewCameraMatrix(). If the scaling parameter alpha=0, it returns the undistorted image with minimum unwanted pixels. So it may even remove some pixels at image corners. If alpha=1, all pixels are retained with some extra black images. This function also returns an image ROI which can be used to crop the result.
So, we take a new image (left12.jpg in this case; that is the first image in this chapter) and refine the camera matrix:
img = cv.imread('left12.jpg')
h, w = img.shape[:2]
newcameramtx, roi = cv.getOptimalNewCameraMatrix(mtx, dist, (w,h), 1, (w,h))
1. Using cv.undistort()
This is the easiest way. Just call the function and use the ROI obtained above to crop the result.
dst = cv.undistort(img, mtx, dist, None, newcameramtx)
x, y, w, h = roi
dst = dst[y:y+h, x:x+w]
2. Using remapping
This way is a little bit more difficult. First, find a mapping function from the distorted image to the undistorted image. Then use the remap function.
mapx, mapy = cv.initUndistortRectifyMap(mtx, dist, None, newcameramtx, (w,h), 5)
dst = cv.remap(img, mapx, mapy, cv.INTER_LINEAR)
x, y, w, h = roi
dst = dst[y:y+h, x:x+w]
Still, both the methods give the same result. See the result below:
image
You can see in the result that all the edges are straight.
Now you can store the camera matrix and distortion coefficients using write functions in NumPy (np.savez, np.savetxt etc.) for future use.
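A minimal save/load round trip with np.savez might look like this; the filename calib.npz and the stand-in arrays are arbitrary choices for the sketch:

```python
import numpy as np

# Stand-in calibration results; in practice use the mtx and dist computed above.
mtx = np.eye(3)
dist = np.zeros(5)

np.savez('calib.npz', mtx=mtx, dist=dist)   # write both arrays to one file

data = np.load('calib.npz')                 # read them back in a later session
mtx_loaded, dist_loaded = data['mtx'], data['dist']
```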
Re-projection Error
Re-projection error gives a good estimation of just how exact the found parameters are. The closer the re-projection error is to zero, the more accurate the parameters we found are. Given the intrinsic, distortion, rotation and translation matrices, we must first transform the object point to an image point using cv.projectPoints(). Then, we can calculate the absolute norm between what we got with our transformation and the corner finding algorithm. To find the average error, we calculate the arithmetical mean of the errors calculated for all the calibration images.
mean_error = 0
for i in range(len(objpoints)):
imgpoints2, _ = cv.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
error = cv.norm(imgpoints[i], imgpoints2, cv.NORM_L2)/len(imgpoints2)
mean_error += error
print( "total error: {}".format(mean_error/len(objpoints)) )
Additional Resources
Exercises
- Try camera calibration with a circular grid.
Source: https://docs.opencv.org/4.x/dc/dbb/tutorial_py_calibration.html
Posted by: shawpuble1956.blogspot.com