Computing X,Y coordinates (3D) from image points

My task is to locate an object in a 3D coordinate system. Since I need to get almost exact X and Y coordinates, I decided to track a color marker with a known Z coordinate, placed on top of the moving object, like the orange ball in this picture (undistorted image):

First, I performed camera calibration to get the intrinsic parameters, and then I used cv::solvePnP to get the rotation and translation vectors, as shown in the following code:

std::vector<cv::Point2f> imagePoints;
std::vector<cv::Point3f> objectPoints;
//img points are green dots in the picture
imagePoints.push_back(cv::Point2f(271.,109.));
imagePoints.push_back(cv::Point2f(65.,208.));
imagePoints.push_back(cv::Point2f(334.,459.));
imagePoints.push_back(cv::Point2f(600.,225.));


//object points are measured in millimeters because calibration is done in mm also
objectPoints.push_back(cv::Point3f(0., 0., 0.));
objectPoints.push_back(cv::Point3f(-511.,2181.,0.));
objectPoints.push_back(cv::Point3f(-3574.,2354.,0.));
objectPoints.push_back(cv::Point3f(-3400.,0.,0.));


cv::Mat rvec(1,3,cv::DataType<double>::type);
cv::Mat tvec(1,3,cv::DataType<double>::type);
cv::Mat rotationMatrix(3,3,cv::DataType<double>::type);


cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
cv::Rodrigues(rvec,rotationMatrix);
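
A quick way to sanity-check the estimated pose (a minimal sketch, not part of the original code) is to reproject the objectPoints with cv::projectPoints and compare against the clicked imagePoints:

std::vector<cv::Point2f> reprojected;
cv::projectPoints(objectPoints, rvec, tvec, cameraMatrix, distCoeffs, reprojected);
for (size_t i = 0; i < reprojected.size(); ++i)
{
    // Per-point reprojection error in pixels; a few pixels or less suggests a
    // reasonable pose, large values point to a bad correspondence.
    cv::Point2f d = reprojected[i] - imagePoints[i];
    std::cout << "reprojection error " << i << ": "
              << std::sqrt(d.x * d.x + d.y * d.y) << " px" << std::endl;
}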

After getting all the matrices, this equation lets me transform an image point into world coordinates:

s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M \left( R \begin{bmatrix} X \\ Y \\ Z_{const} \end{bmatrix} + t \right)

where M is the camera matrix, R the rotation matrix, t the translation vector (tvec), and s is unknown. Zconst represents the height of the plane the orange ball sits on, 285 mm in this example. So, first I need to solve the previous equation to get s, and then I can find the X and Y coordinates of a selected image point:

\begin{bmatrix} X \\ Y \\ Z_{const} \end{bmatrix} = R^{-1} \left( s \, M^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} - t \right)
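
Spelling this out (a worked step that follows from taking only the third row, the one row where everything except s is known):

s = \frac{Z_{const} + \left[ R^{-1} t \right]_{3}}{\left[ R^{-1} M^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \right]_{3}}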

To solve for the variable s, I use the last row of the matrices above, because Zconst is known; here is the code:

cv::Mat uvPoint = (cv::Mat_<double>(3,1) << 363, 222, 1); // u = 363, v = 222, got this point using mouse callback


cv::Mat leftSideMat  = rotationMatrix.inv() * cameraMatrix.inv() * uvPoint;
cv::Mat rightSideMat = rotationMatrix.inv() * tvec;


double s = (285 + rightSideMat.at<double>(2,0)) / leftSideMat.at<double>(2,0);
//285 represents the height Zconst


std::cout << "P = " << rotationMatrix.inv() * (s * cameraMatrix.inv() * uvPoint - tvec) << std::endl;

After this, I got the result: P = [-2629.5, 1272.6, 285.]

When I compare it with the measured values, which are: Preal = [-2629.6, 1269.5, 285.]

The error is very small, which is great, but when I move the box towards the edges of the room the error can be 20-40 mm, and I would like to improve that. Can anyone help me? Do you have any suggestions?


Given your configuration, errors of 20-40 mm at the edges are about average. It looks like you've done everything well.

Without modifying the camera/system configuration, doing better will be hard. You can try to redo the camera calibration and hope for better results, but this will not improve them a lot (and you may eventually get worse results, so don't erase the actual intrinsic parameters).
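
For reference (a minimal sketch of the standard cv::calibrateCamera call, with hypothetical variable names and an assumed image size), redoing the calibration would look roughly like this:

// Sketch: re-running the intrinsic calibration with more chessboard views.
// calibObjectPoints / calibImagePoints are per-view correspondences, e.g.
// collected with cv::findChessboardCorners (hypothetical variables).
std::vector<std::vector<cv::Point3f> > calibObjectPoints;
std::vector<std::vector<cv::Point2f> > calibImagePoints;
cv::Mat newCameraMatrix, newDistCoeffs;
std::vector<cv::Mat> calibRvecs, calibTvecs;
double rms = cv::calibrateCamera(calibObjectPoints, calibImagePoints,
                                 cv::Size(640, 480),          // assumed image size
                                 newCameraMatrix, newDistCoeffs,
                                 calibRvecs, calibTvecs);
// Keep the old cameraMatrix / distCoeffs and switch only if the returned RMS
// reprojection error actually improves.
std::cout << "RMS reprojection error: " << rms << std::endl;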

As said by count0, if you need more precision you should go for multiple measurements.
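
For example (a minimal sketch, assuming a hypothetical mapPixelToWorld() helper that wraps the back-projection shown in the question, plus hypothetical grabFrame()/detectMarker() functions), the estimate could be averaged over several frames:

// Sketch: average the back-projected world point over several frames to
// reduce the effect of pixel noise. mapPixelToWorld(), grabFrame() and
// detectMarker() are hypothetical helpers, not OpenCV functions.
cv::Point3d accumulated(0.0, 0.0, 0.0);
const int numFrames = 30;
for (int i = 0; i < numFrames; ++i)
{
    cv::Point2d pixel = detectMarker(grabFrame());   // locate the orange marker
    accumulated += mapPixelToWorld(pixel, 285.0);    // Zconst = 285 mm
}
cv::Point3d averaged = accumulated * (1.0 / numFrames);
std::cout << "averaged P = " << averaged << std::endl;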

Do you get the green dots (imagePoints) from the distorted or the undistorted image? The function solvePnP already undistorts the imagePoints (unless you pass no distortion coefficients, or pass them as null). If you are taking those imagePoints from the undistorted image, you may be undistorting them twice, and this would end up increasing the error at the corners.
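
One way to keep this consistent (a minimal sketch, assuming the marker pixel is clicked in the raw, distorted frame) is to undistort the point once with cv::undistortPoints, keeping the result in pixel coordinates by passing cameraMatrix as the new projection matrix:

// Sketch: undistort the clicked pixel exactly once. Passing cameraMatrix as
// the P argument keeps the output in pixel coordinates instead of normalized
// image coordinates.
std::vector<cv::Point2f> raw;
raw.push_back(cv::Point2f(363.f, 222.f));   // clicked in the raw, distorted frame
std::vector<cv::Point2f> undist;
cv::undistortPoints(raw, undist, cameraMatrix, distCoeffs, cv::noArray(), cameraMatrix);

cv::Mat uvPoint = (cv::Mat_<double>(3,1) << undist[0].x, undist[0].y, 1.0);
// Conversely, if the green dots were picked in an already-undistorted image,
// pass an empty cv::Mat() instead of distCoeffs to solvePnP so they are not
// undistorted a second time.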

https://github.com/Itseez/opencv/blob/master/modules/calib3d/src/solvepnp.cpp