You're close, but there's room to make this calibration pipeline a lot more robust, especially across varied lighting, contrast, and resolution conditions. OpenCV's `findCirclesGrid` with `SimpleBlobDetector` is a solid base, but you need some adaptability in preprocessing and parameter tuning to make it reliable. Here's how I'd approach it.
Start by adapting the preprocessing step. Instead of hardcoding an inversion, let the pipeline decide based on image brightness. You can combine this with CLAHE (adaptive histogram equalization) and optional Gaussian blurring to boost contrast and suppress noise:
```python
import cv2
import numpy as np

def preprocess_image(gray):
    # Auto-invert if mean brightness is high (handles dark-on-light vs. light-on-dark grids)
    if np.mean(gray) > 127:
        gray = cv2.bitwise_not(gray)
    # Boost local contrast with CLAHE
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    gray = clahe.apply(gray)
    # Suppress sensor noise before blob detection
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    return gray
```
For the blob detector, don’t use fixed values. Instead, estimate parameters dynamically based on image size. This keeps the detector responsive to different resolutions or dot sizes. Something like this works well:
```python
def create_blob_detector(gray):
    h, w = gray.shape
    estimated_dot_area = (h * w) * 0.0005  # heuristic: each dot covers ~0.05% of the frame

    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea = estimated_dot_area * 0.5
    params.maxArea = estimated_dot_area * 3.0
    params.filterByCircularity = True
    params.minCircularity = 0.7
    params.filterByConvexity = True
    params.minConvexity = 0.85
    params.filterByInertia = False
    return cv2.SimpleBlobDetector_create(params)
```
This adaptive approach is inspired by guides like the one from Longer Vision Technology, which walks through calibration with circle grids using OpenCV: https://longervision.github.io/2017/03/18/ComputerVision/OpenCV/opencv-internal-calibration-circle-grid/
You can then wrap the entire detection and calibration process in a reusable function that works across a wide range of images:
```python
def calibrate_from_image(image_path, pattern_size=(4, 4), spacing=1.0):
    img = cv2.imread(image_path)
    if img is None:
        print("❌ Could not read image:", image_path)
        return None
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    preprocessed = preprocess_image(gray)
    detector = create_blob_detector(preprocessed)

    found, centers = cv2.findCirclesGrid(
        preprocessed, pattern_size,
        flags=cv2.CALIB_CB_SYMMETRIC_GRID | cv2.CALIB_CB_CLUSTERING,
        blobDetector=detector
    )
    if not found:
        print("❌ Grid not found.")
        return None

    # Object points: planar grid scaled by the physical dot spacing
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * spacing

    image_points = [centers]
    object_points = [objp]
    image_size = (img.shape[1], img.shape[0])

    ret, cam_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)

    print("✅ Grid found and calibrated.")
    print("🔹 RMS Error:", ret)
    print("🔹 Camera Matrix:\n", cam_matrix)
    print("🔹 Distortion Coefficients:\n", dist_coeffs)
    return cam_matrix, dist_coeffs
```
For even more robustness, consider running detection with multiple preprocessing strategies in parallel (e.g., with and without inversion, different CLAHE tile sizes), or use entropy/edge density as cues to decide preprocessing strategies dynamically.
Also worth noting: adaptive thresholding techniques can help in poor lighting conditions. Take a look at this StackOverflow discussion for examples using `cv2.adaptiveThreshold`: OpenCV Thresholding adaptive to different lighting conditions
This setup will get you much closer to a reliable, general-purpose camera calibration pipeline, especially when you're dealing with non-uniform images and mixed camera setups. Let me know if you want to expand this to batch processing or video input.