The second part of this tutorial is deprecated if your OpenCV version is equal to 2.8.0 or more recent. If so, you should rather follow Tutorial: Homography estimation from points.
This tutorial shows how to estimate a homography from points. Two cases are considered in the next sections:
- define a homogeneous transformation aMo from frame a to the object frame o,
- define a similar homogeneous transformation bMo from frame b to the object frame o,
- compute the coordinates of the four 3D points in image planes a and b. These are the matched coordinates (xa,ya) and (xb,yb) that are then used to estimate the homography using either vpHomography::DLT() or vpHomography::HLM().
From these transformations we compute the coordinates of the points in the image plane a and b. For each point we have its coordinates (xa,ya) in frame a and (xb,yb) in frame b:
std::vector<vpPoint> oP(4);
std::vector<double> xa(4), ya(4), xb(4), yb(4);
for (unsigned int i = 0; i < 4; i++)
{
oP[i].project(aMo);
xa[i] = oP[i].get_x();
ya[i] = oP[i].get_y();
oP[i].project(bMo);
xb[i] = oP[i].get_x();
yb[i] = oP[i].get_y();
}
We now have four pairs of matched point coordinates that can be used to estimate a homography between image planes a and b. Two methods are available: the DLT (Direct Linear Transform) algorithm and the HLM algorithm.
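To give an idea of what the DLT does, here is a simplified, self-contained sketch in plain C++. It estimates a homography from exactly four point pairs by fixing h33 = 1 and solving the resulting 8x8 linear system with Gaussian elimination. This is only an illustration of the principle: ViSP's vpHomography::DLT() handles an arbitrary number of points and solves the system in a least-squares sense, and the function names below are invented for the sketch.

```cpp
#include <array>
#include <cmath>
#include <utility>
#include <vector>

// Apply a 3x3 homography H (row-major) to a point of image plane b,
// returning the matching point in image plane a (perspective division).
void applyHomography(const std::array<double, 9> &H, double xb, double yb,
                     double &xa, double &ya) {
  double w = H[6] * xb + H[7] * yb + H[8];
  xa = (H[0] * xb + H[1] * yb + H[2]) / w;
  ya = (H[3] * xb + H[4] * yb + H[5]) / w;
}

// Estimate aHb from exactly four correspondences by fixing h33 = 1:
// each pair contributes two linear equations in the 8 remaining entries.
std::array<double, 9> dltFourPoints(const std::vector<double> &xb,
                                    const std::vector<double> &yb,
                                    const std::vector<double> &xa,
                                    const std::vector<double> &ya) {
  double A[8][9]; // augmented 8x8 system [A | rhs]
  for (int i = 0; i < 4; i++) {
    double r1[9] = {xb[i], yb[i], 1, 0, 0, 0, -xa[i] * xb[i], -xa[i] * yb[i], xa[i]};
    double r2[9] = {0, 0, 0, xb[i], yb[i], 1, -ya[i] * xb[i], -ya[i] * yb[i], ya[i]};
    for (int j = 0; j < 9; j++) {
      A[2 * i][j] = r1[j];
      A[2 * i + 1][j] = r2[j];
    }
  }
  // Gaussian elimination with partial pivoting
  for (int c = 0; c < 8; c++) {
    int p = c;
    for (int r = c + 1; r < 8; r++)
      if (std::fabs(A[r][c]) > std::fabs(A[p][c]))
        p = r;
    for (int j = 0; j < 9; j++)
      std::swap(A[c][j], A[p][j]);
    for (int r = c + 1; r < 8; r++) {
      double f = A[r][c] / A[c][c];
      for (int j = c; j < 9; j++)
        A[r][j] -= f * A[c][j];
    }
  }
  std::array<double, 9> H{};
  H[8] = 1.0; // the fixed entry
  for (int c = 7; c >= 0; c--) { // back substitution
    double s = A[c][8];
    for (int j = c + 1; j < 8; j++)
      s -= A[c][j] * H[j];
    H[c] = s / A[c][c];
  }
  return H;
}
```

With four exact pairs generated from a known homography whose h33 equals 1, this sketch recovers that homography up to numerical precision; with noisy or redundant data the least-squares formulation used by ViSP is required.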
This last part of the code produces the following output:
Ground truth: Point 3 in pixels in frame b: 377.9450564, 193.9928711
Ground truth: Point 3 in pixels in frame a: 353.8501593, 486.1851856
Estimation from homography: Point 3 in pixels in frame a: 353.8501593, 486.1851856
Ransac or robust homography estimation
This section follows the Tutorial: Keypoint matching (deprecated). It explains how to exploit pairs of matched points obtained with the SURF detector in order to estimate a homography while rejecting mismatched pairs of points. The homography is then used to track a postcard from its initial position in the reference image.
The command line allows using either the Ransac algorithm or a robust M-estimator approach:
% ./tutorial-matching-surf-homography-deprecated 0 // to run Ransac
% ./tutorial-matching-surf-homography-deprecated 1 // to run the robust M-estimator
Hereafter is the resulting video. The left image represents the reference image. The right images correspond to the successive images of the input video. The endpoints of the green lines represent the points that are correctly matched and used in the homography estimation process. The red lines represent pairs of matched points that are rejected by the robust estimator.
Now, let us explain the new lines that were introduced to estimate the homography.
First we parse the command line arguments so that we can later use either Ransac or the robust M-estimator:
int method = 0;
if (argc > 1)
method = atoi(argv[1]);
if (method == 0)
std::cout << "Uses Ransac to estimate the homography" << std::endl;
else
std::cout << "Uses a robust scheme to estimate the homography" << std::endl;
We also initialize the coordinates of the pixels in the reference image that correspond to the postcard corners. These coordinates were originally obtained after an initial user click. To simplify the code, we set the coordinates of the points directly:
We allocate new containers used for the homography estimation. The coordinates of the points in the reference image are stored in (mPref_x, mPref_y), while the corresponding coordinates in the current image are stored in (mPcur_x, mPcur_y). We also allocate inliers, a vector of booleans indicating whether each pair of points is an inlier or an outlier:
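The allocation code itself is not reproduced here; a minimal sketch of what these containers could look like, reusing the names from the text, is:

```cpp
#include <vector>

std::vector<double> mPref_x, mPref_y; // matched point coordinates in the reference image
std::vector<double> mPcur_x, mPcur_y; // corresponding coordinates in the current image
std::vector<bool> inliers;            // filled by the estimation: true for an inlier pair
```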
For Ransac we consider that at least 50 percent of the points should be inliers (mPref_x.size()/2) to reach a consensus, and that a pair of points is considered an inlier if the reprojection error is lower than 2 pixels (2.0/cam.get_px()).
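The pixel-to-meter conversion behind that threshold can be illustrated with a small self-contained sketch. The helper name and the focal ratio value px = 600 used below are made-up; in the tutorial px comes from cam.get_px():

```cpp
#include <cmath>

// A matched pair is an inlier when the reprojection error, computed in
// normalized (meter) coordinates, stays below a 2-pixel tolerance; dividing
// by the focal ratio px expresses that tolerance in normalized units.
bool isInlier(double x_proj, double y_proj, double x_meas, double y_meas, double px) {
  double threshold = 2.0 / px;
  double dx = x_proj - x_meas;
  double dy = y_proj - y_meas;
  return std::sqrt(dx * dx + dy * dy) < threshold;
}
```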
Then using the homography, we project the coordinates of the postcard corners in the current image:
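In plain C++, this projection step can be sketched as follows: the corner is mapped through the homography with a perspective division, then brought from normalized coordinates to pixel coordinates with pinhole intrinsics. The intrinsics values (px, py, u0, v0) are illustrative only; ViSP performs the meter-to-pixel step with vpMeterPixelConversion::convertPoint().

```cpp
#include <cmath>

// Map a corner (x, y) through the homography H (3x3, row-major) and convert
// the result from normalized coordinates to pixel coordinates (u, v).
void projectCorner(const double H[9], double x, double y,
                   double px, double py, double u0, double v0,
                   double &u, double &v) {
  double w = H[6] * x + H[7] * y + H[8];        // perspective denominator
  double xn = (H[0] * x + H[1] * y + H[2]) / w; // normalized coordinates
  double yn = (H[3] * x + H[4] * y + H[5]) / w; // in the current image
  u = u0 + px * xn; // pinhole meter-to-pixel conversion, no distortion
  v = v0 + py * yn;
}
```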
Since the homography estimation updates the status of each pair of matched points as inlier or outlier, we are able to draw green lines between matched points when they are inliers, and red lines when they are outliers.