This is a somewhat old question, but Yandex/Google finds it when searching for "simple distortion models", which is what I needed. Unfortunately, none of the existing answers is complete and, more importantly, none is simple.
The one-parameter division model by Fitzgibbon looks a bit different:
```python
import cv2
import numpy as np

_division_criteria = (cv2.TermCriteria_COUNT + cv2.TermCriteria_EPS,
                      10,   # 5 for cv2.undistortPoints()
                      0.1)

def division_distorted(p, k, dc=(0, 0)):
    # Shift to the distortion centre, apply the division model, shift back.
    p = p - np.asarray(dc)
    return p/(1 + k*np.dot(p, p)) + dc

def division_undistorted(x, k, dc=(0, 0), criteria=_division_criteria,
                         verbose=False):
    # Fixed-point inversion in coordinates centred on dc:
    # p <- (x - dc)*(1 + k*|p|^2)
    x = x - np.asarray(dc)
    p0 = x
    for i in range(criteria[1] if criteria[0] & cv2.TermCriteria_COUNT
                   else 99999):
        p = x*(1 + k*np.dot(p0, p0))
        if verbose:
            print(f"division_undistorted: {i}: {np.round(p - p0, 3)}")
        if ((criteria[0] & cv2.TermCriteria_EPS) and
                np.linalg.norm(p - p0) < criteria[2]):
            # This check is simplistic; OpenCV compares projections
            break
        p0 = p
    return p + dc
```
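As a quick sanity check, here is a dependency-free roundtrip of the same model (a fixed iteration count instead of the cv2 term criteria; the values of `k`, `dc`, and the test point are illustrative assumptions):

```python
import numpy as np

def division_distorted(p, k, dc=(0.0, 0.0)):
    # Forward model: shift to the distortion centre, divide, shift back.
    p = np.asarray(p, float) - dc
    return p/(1 + k*np.dot(p, p)) + dc

def division_undistorted(x, k, dc=(0.0, 0.0), n_iter=20):
    # Fixed-point inversion: p <- (x - dc)*(1 + k*|p|^2), in centred coords.
    x = np.asarray(x, float) - dc
    p = x.copy()
    for _ in range(n_iter):
        p = x*(1 + k*np.dot(p, p))
    return p + dc

k, dc = 1e-6, (320.0, 240.0)
p = np.array([420.0, 440.0])          # original (undistorted) pixel
x = division_distorted(p, k, dc)      # apply the division model
assert np.allclose(division_undistorted(x, k, dc), p, atol=1e-6)
```

For small `k` the iteration is a strong contraction, so a handful of iterations already recovers the original point to sub-pixel accuracy.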
OpenCV uses the one-parameter division model when the distortion coefficients are set as follows:
```python
dis_und_distCoeffs = np.asarray([0, 0, 0, 0, 0, k, 0, 0])
```
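Why this works: OpenCV's rational model divides the radial polynomial by (1 + k4·r² + k5·r⁴ + k6·r⁶), and in the 8-element coefficient vector [k1, k2, p1, p2, k3, k4, k5, k6] the sixth entry is k4. With only k4 = k set, the forward mapping reduces exactly to the division model. A quick NumPy check (the point and `k` are illustrative):

```python
import numpy as np

def rational_distort(p, dist_coeffs):
    # Forward OpenCV rational model for a centred point p = (x, y).
    # Coefficient layout: [k1, k2, p1, p2, k3, k4, k5, k6].
    k1, k2, p1, p2, k3, k4, k5, k6 = dist_coeffs
    x, y = p
    r2 = x*x + y*y
    radial = ((1 + k1*r2 + k2*r2**2 + k3*r2**3) /
              (1 + k4*r2 + k5*r2**2 + k6*r2**3))
    xd = x*radial + 2*p1*x*y + p2*(r2 + 2*x*x)
    yd = y*radial + p1*(r2 + 2*y*y) + 2*p2*x*y
    return np.array([xd, yd])

k = 1e-6
p = np.array([100.0, 200.0])
coeffs = [0, 0, 0, 0, 0, k, 0, 0]
# Only k4 = k is set, so the model collapses to p/(1 + k*r^2):
assert np.allclose(rational_distort(p, coeffs), p/(1 + k*np.dot(p, p)))
```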
> I have experience with OpenCV (undistort(), initUndistortRectifyMap(), etc.), however, these methods require an estimate of the camera focal properties (fx, fy) which I do not have.
Roughly speaking, (fx, fy) are scale factors along the axes; if you need to process an image pixel-by-pixel, they can simply be set to 1:
```python
dis_und_cameraMatrix = np.array([[1., 0., dc[0]],
                                 [0., 1., dc[1]],
                                 [0., 0., 1.]])
```
> I am wondering what the best way is to process this transform as fast as possible.
In my opinion, `cv2.undistort()` is quite simple and quite efficient: at least 12–19 times faster, depending on the data types, than `scipy.interpolate.RegularGridInterpolator()`.
A more detailed comparison of methods and performance can be found in my Jupyter notebook on Colab: https://github.com/Serge3leo/temp-cola/blob/main/SO/77889635-fast-implementation-of-the-1-parameter-division-distortion-model.ipynb.