# 5 Hands-On Examples: Stereo Vision Ranging with Python + OpenCV (Complete Code Included)

Stereo vision ranging is becoming a core tool in robot navigation, industrial inspection, autonomous driving, and related fields. Compared with active sensors such as LiDAR, passive camera-based solutions are cheaper, richer in data, and easier to deploy. This article walks you through the core OpenCV stereo-ranging stack from scratch, with complete code for five typical scenarios.

## 1. Stereo Ranging Basics and Environment Setup

The principle behind stereo ranging comes from human binocular vision. With two horizontally mounted cameras, the horizontal offset of the same object between the left and right images (the disparity) is inversely proportional to the object's distance from the cameras. OpenCV provides a complete toolchain for the whole pipeline, from camera calibration to depth-map generation.

### 1.1 Hardware Selection and Mounting

Recommended hardware combinations:

| Component | Recommended model | Key specs | Use case |
|---|---|---|---|
| Camera | Intel RealSense D435 | Global shutter, 1280×720 @ 30 fps | Dynamic scenes |
| Camera | ELP USB stereo module | 2 MP, adjustable baseline | Low-cost setups |
| Compute platform | NVIDIA Jetson Xavier NX | 6-core CPU, 384-core GPU | Embedded deployment |
| Calibration board | 7×9 chessboard | 30 mm squares | Indoor calibration |

When mounting the rig:

- Keep the two optical axes parallel; a rigid bracket is strongly recommended
- The baseline (distance between the cameras) is typically 50-200 mm; adjust it to your ranging distance
- Avoid strong light shining directly into the lenses; add polarizing filters in industrial settings

### 1.2 Python Environment

Miniconda is recommended for an isolated environment:

```bash
conda create -n stereo python=3.8
conda activate stereo
pip install opencv-contrib-python==4.5.5.64 numpy matplotlib scipy
```

Verify the OpenCV installation:

```python
import cv2
print(cv2.__version__)                        # should print 4.5.x
print(cv2.cuda.getCudaEnabledDeviceCount())   # checks GPU acceleration support
```

## 2. Camera Calibration and Stereo Rectification

Accurate calibration is the prerequisite for stereo ranging. We use the chessboard method to obtain the cameras' intrinsic and extrinsic parameters and to remove lens distortion.

### 2.1 Single-Camera Calibration

Capture 20-30 chessboard images from different angles and store them as `calib_imgs/left_*.jpg` and `calib_imgs/right_*.jpg`:

```python
import glob
import cv2
import numpy as np

# Calibration board parameters
pattern_size = (6, 8)   # number of inner corners
square_size = 0.03      # square size in meters

# Prepare object points
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

# Storage for object points and image points
obj_points = []
img_points_left = []
img_points_right = []

images_left = sorted(glob.glob('calib_imgs/left_*.jpg'))
images_right = sorted(glob.glob('calib_imgs/right_*.jpg'))

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

for fname_left, fname_right in zip(images_left, images_right):
    img_left = cv2.imread(fname_left)
    img_right = cv2.imread(fname_right)
    gray_left = cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY)
    gray_right = cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY)

    # Find chessboard corners
    ret_left, corners_left = cv2.findChessboardCorners(gray_left, pattern_size)
    ret_right, corners_right = cv2.findChessboardCorners(gray_right, pattern_size)

    if ret_left and ret_right:
        obj_points.append(objp)
        corners_refined_left = cv2.cornerSubPix(
            gray_left, corners_left, (11, 11), (-1, -1), criteria)
        corners_refined_right = cv2.cornerSubPix(
            gray_right, corners_right, (11, 11), (-1, -1), criteria)
        img_points_left.append(corners_refined_left)
        img_points_right.append(corners_refined_right)

# Calibrate the left camera
ret_left, mtx_left, dist_left, rvecs_left, tvecs_left = cv2.calibrateCamera(
    obj_points, img_points_left, gray_left.shape[::-1], None, None)

# Calibrate the right camera
ret_right, mtx_right, dist_right, rvecs_right, tvecs_right = cv2.calibrateCamera(
    obj_points, img_points_right, gray_right.shape[::-1], None, None)
```

### 2.2 Stereo Calibration and Rectification

Recover the relative pose of the two cameras:

```python
flags = 0
flags |= cv2.CALIB_FIX_INTRINSIC  # reuse the single-camera intrinsics

ret, _, _, _, _, R, T, E, F = cv2.stereoCalibrate(
    obj_points, img_points_left, img_points_right,
    mtx_left, dist_left, mtx_right, dist_right,
    gray_left.shape[::-1], flags=flags)

# Stereo rectification
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(
    mtx_left, dist_left, mtx_right, dist_right,
    gray_left.shape[::-1], R, T)

# Generate rectification maps
left_map1, left_map2 = cv2.initUndistortRectifyMap(
    mtx_left, dist_left, R1, P1, gray_left.shape[::-1], cv2.CV_16SC2)
right_map1, right_map2 = cv2.initUndistortRectifyMap(
    mtx_right, dist_right, R2, P2, gray_right.shape[::-1], cv2.CV_16SC2)
```

Save the calibration results for later use:

```python
np.savez('stereo_calib.npz',
         left_map1=left_map1, left_map2=left_map2,
         right_map1=right_map1, right_map2=right_map2, Q=Q)
```
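With a rectified rig, depth follows directly from disparity via the pinhole triangulation formula Z = f·B/d, where f is the focal length in pixels, B the baseline in meters, and d the disparity in pixels. A minimal sketch (the focal length of 800 px and baseline of 60 mm used below are placeholder values, not output of the calibration above):

```python
import numpy as np

def disparity_to_depth(disp, focal_px, baseline_m):
    """Convert a disparity map (pixels) to metric depth via Z = f * B / d.
    Non-positive disparities are invalid and map to 0."""
    disp = np.asarray(disp, dtype=np.float64)
    depth = np.zeros_like(disp)
    valid = disp > 0
    depth[valid] = focal_px * baseline_m / disp[valid]
    return depth

# With f = 800 px and B = 0.06 m, a 64 px disparity corresponds to 0.75 m
print(disparity_to_depth(np.array([[64.0, 0.0]]), focal_px=800.0, baseline_m=0.06))
```

This doubles as a sanity check on calibration: the rectified focal length appears at `Q[2, 3]`, so you can verify that distances reported by `reprojectImageTo3D` agree with this formula.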
## 3. Disparity Map Generation and Depth Calculation

OpenCV ships two stereo matching algorithms: BM (Block Matching) and SGBM (Semi-Global Block Matching). SGBM produces better results at a somewhat higher computational cost.

### 3.1 SGBM Parameter Configuration

```python
def create_SGBM():
    window_size = 5
    min_disp = 0
    num_disp = 160 - min_disp   # must be divisible by 16
    stereo = cv2.StereoSGBM_create(
        minDisparity=min_disp,
        numDisparities=num_disp,
        blockSize=window_size,
        P1=8 * 3 * window_size ** 2,
        P2=32 * 3 * window_size ** 2,
        disp12MaxDiff=1,
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=32,
        mode=cv2.STEREO_SGBM_MODE_SGBM_3WAY)
    return stereo
```

### 3.2 Real-Time Depth Map Generation

```python
cap_left = cv2.VideoCapture(0)   # left camera
cap_right = cv2.VideoCapture(1)  # right camera

# Load the rectification parameters
calib_data = np.load('stereo_calib.npz')
left_map1 = calib_data['left_map1']
left_map2 = calib_data['left_map2']
right_map1 = calib_data['right_map1']
right_map2 = calib_data['right_map2']
Q = calib_data['Q']

stereo = create_SGBM()
min_disp, num_disp = 0, 160  # keep in sync with create_SGBM()

while True:
    ret_left, frame_left = cap_left.read()
    ret_right, frame_right = cap_right.read()
    if not ret_left or not ret_right:
        break

    # Rectify the images
    left_rect = cv2.remap(frame_left, left_map1, left_map2, cv2.INTER_LINEAR)
    right_rect = cv2.remap(frame_right, right_map1, right_map2, cv2.INTER_LINEAR)

    # Convert to grayscale
    gray_left = cv2.cvtColor(left_rect, cv2.COLOR_BGR2GRAY)
    gray_right = cv2.cvtColor(right_rect, cv2.COLOR_BGR2GRAY)

    # Compute disparity (SGBM returns fixed-point values scaled by 16)
    disp = stereo.compute(gray_left, gray_right).astype(np.float32) / 16.0

    # Convert to depth (Z channel of the reprojected 3D points)
    depth = cv2.reprojectImageTo3D(disp, Q)[:, :, 2]

    # Display (crude normalization for visualization only)
    cv2.imshow('Disparity', (disp - min_disp) / num_disp)
    cv2.imshow('Depth', depth / np.max(depth))

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap_left.release()
cap_right.release()
cv2.destroyAllWindows()
```
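To make concrete what BM and SGBM are doing under the hood, here is a minimal NumPy sketch of the SAD (sum of absolute differences) block cost that block matching minimizes along one rectified scanline. The function name is mine; the real matchers add prefiltering, uniqueness checks, speckle filtering, and sub-pixel refinement on top of this core idea:

```python
import numpy as np

def sad_disparity(left, right, x, y, block=3, max_disp=16):
    """Return the disparity at pixel (x, y) of the left image by minimizing
    the sum of absolute differences over a block x block window, scanning
    candidate shifts 0..max_disp along the same row.
    Valid only for a rectified pair (epipolar lines are horizontal)."""
    h = block // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(np.int32)
    best_d, best_cost = 0, None
    for d in range(max_disp + 1):
        if x - d - h < 0:          # candidate window would leave the image
            break
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(np.int32)
        cost = np.abs(ref - cand).sum()
        if best_cost is None or cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

On a synthetic pair where the right image is the left one shifted by 5 px, the minimum SAD cost lands exactly at disparity 5; real images need the regularization that SGBM's P1/P2 smoothness terms provide.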
## 4. Industrial Part Size Measurement

On automated production lines, stereo vision enables contact-free part measurement. We use bolt diameter measurement as the example.

### 4.1 Measurement Pipeline

1. Segment the bolt region with color thresholding
2. Collect the contour's point cloud from the depth map
3. Fit a circle to obtain the diameter

```python
def measure_bolt_diameter(depth_map, color_img, Q):
    # Extract the bolt in HSV space
    hsv = cv2.cvtColor(color_img, cv2.COLOR_BGR2HSV)
    lower_metal = np.array([20, 50, 50])
    upper_metal = np.array([40, 255, 255])
    mask = cv2.inRange(hsv, lower_metal, upper_metal)

    # Morphological cleanup
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Find the largest contour
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest_contour = max(contours, key=cv2.contourArea)

    # Collect contour points that have a valid depth
    points = []
    for point in largest_contour[:, 0, :]:
        z = depth_map[point[1], point[0]]
        if z > 0:  # filter invalid depths
            points.append([point[0], point[1], z])
    points = np.array(points, dtype=np.float32)

    # Reproject to world coordinates with the Q matrix from rectification
    points_3d = cv2.perspectiveTransform(points.reshape(-1, 1, 3), Q)

    # Fit a circle to the projection onto the XY plane
    center, radius = cv2.minEnclosingCircle(points_3d[:, 0, :2])
    diameter_mm = radius * 2 * 1000  # meters to millimeters
    return diameter_mm
```

### 4.2 Accuracy Tips

- Smooth the depth map with a bilateral filter: `cv2.bilateralFilter(depth, 9, 75, 75)`
- Measure several times and average the results
- Calibrate with the board placed near the actual working distance
- Compensate for ambient light with CLAHE: `cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))`

## 5. Drone Obstacle Avoidance

A stereo-based avoidance system has to compute the distance to obstacles ahead in real time and flag danger zones.

### 5.1 Detection Algorithm

```python
def obstacle_detection(depth_map, safe_distance=3.0):
    # Pseudo-color visualization
    depth_colormap = cv2.applyColorMap(
        cv2.convertScaleAbs(depth_map, alpha=0.03), cv2.COLORMAP_JET)

    # Danger mask: valid pixels closer than the safe distance
    danger_mask = depth_map < safe_distance
    danger_mask = danger_mask & (depth_map > 0)

    # Mark danger zones in red
    depth_colormap[danger_mask] = [0, 0, 255]

    # Distance to the nearest obstacle
    valid_depths = depth_map[depth_map > 0]
    if len(valid_depths) > 0:
        min_depth = np.min(valid_depths)
        cv2.putText(depth_colormap, f'Nearest: {min_depth:.2f}m', (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
    return depth_colormap
```

### 5.2 Performance Strategies

- ROI processing, computing depth only for the region of interest: `roi = depth_map[200:400, 300:500]  # central region`
- Resolution downsampling: `small = cv2.resize(frame, (0, 0), fx=0.5, fy=0.5)`
- GPU acceleration:

```python
stereo = cv2.cuda.createStereoBM(numDisparities=64, blockSize=21)
disp_gpu = stereo.compute(cv2.cuda_GpuMat(left), cv2.cuda_GpuMat(right))
```
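The danger-mask logic from Section 5.1 is easy to unit-test without any cameras attached. Here is a NumPy-only sketch of the same masking and nearest-obstacle computation on a synthetic depth map (the function name `danger_zones` is mine, not OpenCV's):

```python
import numpy as np

def danger_zones(depth_map, safe_distance=3.0):
    """Return (mask, nearest): mask flags valid pixels closer than
    safe_distance (depths <= 0 are treated as invalid), and nearest is
    the smallest valid depth, or None if no pixel is valid."""
    depth_map = np.asarray(depth_map, dtype=np.float64)
    valid = depth_map > 0
    mask = valid & (depth_map < safe_distance)
    nearest = float(depth_map[valid].min()) if valid.any() else None
    return mask, nearest

# A 2x2 toy depth map: 0.0 is invalid, 1.2 and 2.9 are inside the danger zone
mask, nearest = danger_zones(np.array([[0.0, 1.2], [5.0, 2.9]]), safe_distance=3.0)
```

Separating this pure logic from the drawing code (`applyColorMap`, `putText`) makes the flight-critical part testable on the ground.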
## 6. Working with the KITTI Dataset

KITTI is the standard dataset in autonomous driving; its stereo pairs are already rectified and can be used for algorithm testing directly.

### 6.1 Data Loading and Disparity Computation

```python
import os
import cv2
import numpy as np
from matplotlib import pyplot as plt

# Load a KITTI image pair
left_img = cv2.imread('kitti_left.png', 0)
right_img = cv2.imread('kitti_right.png', 0)

# SGBM parameters tuned for KITTI
stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,
    blockSize=7,
    P1=8 * 3 * 7 ** 2,
    P2=32 * 3 * 7 ** 2,
    disp12MaxDiff=1,
    preFilterCap=63,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
    mode=cv2.STEREO_SGBM_MODE_HH)

# Compute disparity
disp = stereo.compute(left_img, right_img).astype(np.float32) / 16.0

# Visualize
plt.figure(figsize=(12, 6))
plt.subplot(121)
plt.imshow(left_img, 'gray')
plt.title('Left Image')
plt.subplot(122)
plt.imshow(disp, 'jet')
plt.title('Disparity Map')
plt.colorbar()
plt.show()
```

### 6.2 Evaluation Metrics

KITTI provides ground-truth disparity maps, so the following metrics can be computed:

```python
def evaluate_disp(disp_pred, disp_gt, max_disp=128):
    mask = disp_gt > 0
    error = np.abs(disp_pred[mask] - disp_gt[mask])

    # Outlier ratio (D1): error above 3 px
    # (simplified; the official KITTI D1 requires > 3 px AND > 5% relative error)
    outlier = error > 3
    d1 = np.mean(outlier) * 100

    # Average end-point error (EPE) over inliers
    epe = np.mean(error[~outlier])

    # Fraction of pixels with relative disparity error above 5%
    error_rel = error[~outlier] / disp_gt[mask][~outlier]
    bad5 = np.mean(error_rel > 0.05) * 100

    return {'D1': d1, 'EPE': epe, 'bad5': bad5}
```
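Before running the evaluation on real KITTI ground truth, note how KITTI encodes it: disparity maps are stored as 16-bit PNGs where the pixel value divided by 256 gives the disparity, and a value of 0 marks pixels without ground truth. A small decoding helper (the function name is mine):

```python
import numpy as np

def decode_kitti_disparity(raw_png):
    """Decode a KITTI ground-truth disparity image: the raw uint16 value
    divided by 256.0 is the disparity in pixels; 0 means 'no ground truth'
    and is marked here as -1 so it fails the disp_gt > 0 mask."""
    raw = np.asarray(raw_png)
    disp = raw.astype(np.float32) / 256.0
    disp[raw == 0] = -1.0
    return disp

# In practice raw_png would come from cv2.imread(path, cv2.IMREAD_UNCHANGED)
```

Reading the PNG with the default `cv2.imread` flags would truncate it to 8 bits, so `cv2.IMREAD_UNCHANGED` (or `-1`) is essential here.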
## 7. Disparity Refinement and Post-Processing

Raw disparity maps usually contain noise and holes; they need refinement before they yield stable results.

### 7.1 Disparity Refinement

Weighted least squares (WLS) filtering:

```python
def wls_filter(stereo, left_img, right_img, disp_left, lambda_=8000, sigma=1.5):
    right_matcher = cv2.ximgproc.createRightMatcher(stereo)
    disp_right = right_matcher.compute(right_img, left_img)

    # Create the WLS filter, guided by the left image
    wls = cv2.ximgproc.createDisparityWLSFilter(stereo)
    wls.setLambda(lambda_)
    wls.setSigmaColor(sigma)

    filtered_disp = wls.filter(disp_left, left_img, None, disp_right)
    return filtered_disp
```

Hole filling:

```python
def fill_holes(disp):
    """Fill invalid (non-positive) disparities: close small holes
    morphologically, then inpaint the rest. cv2.inpaint requires 8-bit
    input, so the disparity is scaled through uint8 and back."""
    holes = (disp <= 0).astype(np.uint8)

    # Close small holes with a morphological closing
    kernel = np.ones((5, 5), np.uint8)
    holes = cv2.morphologyEx(holes, cv2.MORPH_CLOSE, kernel)

    # Inpaint the remaining holes
    disp_max = max(float(disp.max()), 1e-6)
    disp_u8 = np.clip(disp / disp_max * 255.0, 0, 255).astype(np.uint8)
    filled = cv2.inpaint(disp_u8, holes, 3, cv2.INPAINT_NS)

    disp_filled = disp.copy()
    disp_filled[holes > 0] = filled[holes > 0].astype(np.float32) / 255.0 * disp_max
    return disp_filled
```

### 7.2 Point Cloud Generation and Visualization

```python
def disp_to_pointcloud(disp, Q, color_img=None):
    points = cv2.reprojectImageTo3D(disp, Q)
    mask = disp > disp.min()
    if color_img is not None:
        colors = cv2.cvtColor(color_img, cv2.COLOR_BGR2RGB)
        colors = colors[mask]
    else:
        colors = None
    return points[mask], colors

# Visualize with Open3D
import open3d as o3d

def visualize_pointcloud(points, colors=None):
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    if colors is not None:
        pcd.colors = o3d.utility.Vector3dVector(colors / 255.0)
    o3d.visualization.draw_geometries([pcd])
```

## 8. Performance Optimization and Deployment

Real deployments have to balance latency against resource usage.

### 8.1 Acceleration Options

| Method | Implementation | Speedup | Use case |
|---|---|---|---|
| Resolution downsampling | `cv2.resize(..., fx=0.5, fy=0.5)` | 2-4x | Long-range detection |
| ROI processing | compute only the image center | 3-5x | Fixed viewpoint |
| GPU acceleration | `cv2.cuda` stereo matchers | 5-10x | CUDA-capable platforms |
| Multithreading | Python `threading` module | 1.5-2x | Multi-core CPUs |
| Disparity range limiting | tune `numDisparities` per scene | roughly linear | Known depth range |

### 8.2 Embedded Deployment Example

An optimized configuration for the Jetson Xavier NX:

```python
def create_fast_SGBM():
    # Requires an opencv-contrib build with CUDA; in 4.5.x the CUDA
    # semi-global matcher is exposed as cv2.cuda.createStereoSGM
    # (the exact API varies across OpenCV versions)
    stereo = cv2.cuda.createStereoSGM(
        minDisparity=0,
        numDisparities=64,  # reduced disparity range
        P1=24,
        P2=96,
        uniquenessRatio=5)
    return stereo

# Keep the data on the GPU to avoid host-device transfers
cv2.cuda.setDevice(0)
left_gpu = cv2.cuda_GpuMat()
right_gpu = cv2.cuda_GpuMat()
left_gpu.upload(left_img)
right_gpu.upload(right_img)

stereo = create_fast_SGBM()
disp_gpu = stereo.compute(left_gpu, right_gpu)
disp = disp_gpu.download()
```
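The multithreading row in the table deserves a sketch: Python threads only pay off when the per-chunk work releases the GIL, which NumPy and OpenCV calls generally do. Below is a minimal row-strip parallelizer (the helper name `parallel_rows` is mine). It is only valid for operations that are independent per row, such as pointwise filters or color conversions; block matching would need overlapping strips:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_rows(func, image, workers=4):
    """Split an image into horizontal strips, apply func to each strip in
    a thread pool, and re-stack the results. Threads only help when func
    releases the GIL (typical for NumPy/OpenCV kernels), which is where
    the table's rough 1.5-2x figure comes from."""
    strips = np.array_split(image, workers, axis=0)
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return np.vstack(list(ex.map(func, strips)))
```

A process pool would sidestep the GIL entirely but adds serialization overhead for large frames, so threads are usually the better fit for per-frame image work.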
## 9. Troubleshooting Guide

Common problems during development and how to fix them:

**Problem 1: horizontal streaks in the disparity map**
- Check camera synchronization
- Increase the `preFilterCap` parameter
- Capture with an external trigger

**Problem 2: depth values jitter badly**
- Increase `speckleWindowSize` and `speckleRange`
- Apply temporal filtering: `depth = alpha * new_depth + (1 - alpha) * last_depth`
- Check that the cameras are rigidly mounted

**Problem 3: inaccurate close-range measurements**
- Recalibrate, with extra board positions at close range
- Reduce `blockSize`
- Increase `numDisparities`

**Problem 4: the algorithm runs too slowly**
- Enable CUDA acceleration (see Section 8.2)
- Lower the image resolution
- Limit the disparity search range

## 10. Extensions and Further Directions

Stereo vision opens the door to many further applications.

**3D reconstruction**: stitch point clouds from multiple views and register them with ICP:

```python
# Stitch point clouds from multiple views
pcd1 = o3d.geometry.PointCloud()
pcd1.points = o3d.utility.Vector3dVector(points1)
pcd1.colors = o3d.utility.Vector3dVector(colors1)

pcd2 = o3d.geometry.PointCloud()
pcd2.points = o3d.utility.Vector3dVector(points2)
pcd2.colors = o3d.utility.Vector3dVector(colors2)

# ICP registration
trans_init = np.identity(4)
threshold = 0.02
reg_p2p = o3d.pipelines.registration.registration_icp(
    pcd1, pcd2, threshold, trans_init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

pcd1.transform(reg_p2p.transformation)
combined_pcd = pcd1 + pcd2
```

**Deep-learning stereo matching**: run a pretrained network through OpenCV's DNN module:

```python
# Load a pretrained model with OpenCV
model_path = 'stereo_net.onnx'
net = cv2.dnn.readNet(model_path)

# Preprocess the input
blob = cv2.dnn.blobFromImages([left_img, right_img],
                              scalefactor=1.0 / 255.0, size=(1024, 512))

# Inference
net.setInput(blob)
disp = net.forward()

# Postprocess
disp = disp[0, 0, :, :]
disp = cv2.normalize(disp, None, 0, 255, cv2.NORM_MINMAX)
```
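The temporal filter suggested for Problem 2 in the troubleshooting guide can be packaged as a small reusable class; this is a sketch, and the class name and default `alpha` are my choices:

```python
import numpy as np

class DepthSmoother:
    """Exponential moving average over successive depth maps, implementing
    depth = alpha * new_depth + (1 - alpha) * last_depth frame by frame.
    Smaller alpha means stronger smoothing but more lag."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.state = None

    def update(self, depth):
        depth = np.asarray(depth, dtype=np.float64)
        if self.state is None:
            self.state = depth.copy()   # first frame passes through
        else:
            self.state = self.alpha * depth + (1 - self.alpha) * self.state
        return self.state
```

One caveat: blending across frames smears depth at moving object boundaries, so for dynamic scenes consider resetting the state wherever the new depth differs from the smoothed one by more than a threshold.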