Visual Servoing Platform  version 3.2.0 under development (2019-01-22)
Tutorial: Keypoints matching

Introduction

This tutorial focuses on keypoint detection and matching. You will learn how to detect keypoints on a reference image, considered here as the first image of an MPEG video. Then, in the subsequent images of the video, keypoints that match those detected in the reference image are displayed. To leverage keypoint detection and matching capabilities, ViSP should be built with OpenCV as a 3rd party library.

Note that all the material (source code and video) described in this tutorial is part of the ViSP source code and can be downloaded using the following command:

$ svn export https://github.com/lagadic/visp.git/trunk/tutorial/detection/matching
Note
This tutorial requires an OpenCV version equal to or greater than 2.1.1.
We assume that you are familiar with video frame grabbing, described in Tutorial: Image frame grabbing, and with the way to display an image in a window, described in Tutorial: How to create and build a CMake project that uses ViSP on Unix or Windows.

ORB keypoints detection and matching

Let us consider the following source code also available in tutorial-matching-keypoint.cpp.

#include <visp3/core/vpImage.h>
#include <visp3/gui/vpDisplayOpenCV.h>
#include <visp3/io/vpImageIo.h>
#include <visp3/io/vpVideoReader.h>
#include <visp3/vision/vpKeyPoint.h>

int main()
{
#if (VISP_HAVE_OPENCV_VERSION >= 0x020101)
  vpImage<unsigned char> I;

  vpVideoReader reader;
  reader.setFileName("video-postcard.mpeg");
  reader.acquire(I);

  const std::string detectorName = "ORB";
  const std::string extractorName = "ORB";
  // Hamming distance must be used with ORB
  const std::string matcherName = "BruteForce-Hamming";
  vpKeyPoint::vpFilterMatchingType filterType = vpKeyPoint::ratioDistanceThreshold;
  vpKeyPoint keypoint(detectorName, extractorName, matcherName, filterType);
  std::cout << "Reference keypoints=" << keypoint.buildReference(I) << std::endl;

  vpImage<unsigned char> Idisp;
  Idisp.resize(I.getHeight(), 2 * I.getWidth());
  Idisp.insert(I, vpImagePoint(0, 0));
  Idisp.insert(I, vpImagePoint(0, I.getWidth()));
  vpDisplayOpenCV d(Idisp, 0, 0, "Matching keypoints with ORB keypoints");

  while (!reader.end()) {
    reader.acquire(I);
    Idisp.insert(I, vpImagePoint(0, I.getWidth()));
    vpDisplay::display(Idisp);
    // White vertical line separating the reference image from the current one
    vpDisplay::displayLine(Idisp, vpImagePoint(0, I.getWidth()),
                           vpImagePoint(I.getHeight(), I.getWidth()), vpColor::white, 2);

    unsigned int nbMatch = keypoint.matchPoint(I);
    std::cout << "Matches=" << nbMatch << std::endl;

    vpImagePoint iPref, iPcur;
    for (unsigned int i = 0; i < nbMatch; i++) {
      keypoint.getMatchedPoints(i, iPref, iPcur);
      // Green line between a reference keypoint and its match in the current image
      vpDisplay::displayLine(Idisp, iPref,
                             vpImagePoint(iPcur.get_i(), iPcur.get_j() + I.getWidth()),
                             vpColor::green);
    }
    vpDisplay::flush(Idisp);

    if (vpDisplay::getClick(Idisp, false))
      break;
  }
#endif
  return 0;
}

Hereafter is the resulting video. The left image represents the reference image. The right images correspond to the successive images of the input video. The endpoints of each green line represent a pair of matched points.

Now, let us explain the lines dedicated to ORB keypoint detection and matching.

First we have to include the header of the vpKeyPoint class that is a wrapper over OpenCV classes.

#include <visp3/vision/vpKeyPoint.h>

Note that this class is only available if ViSP was built with OpenCV. This is ensured by the check of the VISP_HAVE_OPENCV_VERSION macro.

#if (VISP_HAVE_OPENCV_VERSION >= 0x020101)

Then we open the MPEG video stream and grab the first image of the video, which is stored in the container I. The vpKeyPoint class is instantiated and keypoints are detected on the first image, which is considered as the reference image:

vpVideoReader reader;
reader.setFileName("video-postcard.mpeg");
reader.acquire(I);

const std::string detectorName = "ORB";
const std::string extractorName = "ORB";
// Hamming distance must be used with ORB
const std::string matcherName = "BruteForce-Hamming";
vpKeyPoint::vpFilterMatchingType filterType = vpKeyPoint::ratioDistanceThreshold;
vpKeyPoint keypoint(detectorName, extractorName, matcherName, filterType);
std::cout << "Reference keypoints=" << keypoint.buildReference(I) << std::endl;

The next lines are used to create image Idisp to render the matching results; left image for the reference image, right image for the current image that is processed:

Idisp.resize(I.getHeight(), 2 * I.getWidth());
Idisp.insert(I, vpImagePoint(0, 0));
Idisp.insert(I, vpImagePoint(0, I.getWidth()));

Then a display using OpenCV is created and image Idisp is rendered:

vpDisplayOpenCV d(Idisp, 0, 0, "Matching keypoints with ORB keypoints");

We then enter the while() loop, where a new image is acquired from the video stream and inserted in the right part of image Idisp, dedicated to the rendering of the matching results.

reader.acquire(I);
Idisp.insert(I, vpImagePoint(0, I.getWidth()));

We start the rendering by displaying the image with vpDisplay::display() and by drawing, with vpDisplay::displayLine(), a white vertical line to separate the reference image from the current one.

Keypoint matches between the reference image and the current image I are detected using:

unsigned int nbMatch = keypoint.matchPoint(I);

Then we parse all the matches to retrieve the coordinates of the points in the reference image (in iPref variable) and in the current image (in iPcur variable):

vpImagePoint iPref, iPcur;
for (unsigned int i = 0; i < nbMatch; i++) {
  keypoint.getMatchedPoints(i, iPref, iPcur);
}

Next we draw, with vpDisplay::displayLine(), a green line between each pair of matched points.

At the end of each iteration, we flush all the previous display operations to the render window with vpDisplay::flush().

Using other types of keypoints

Using other types of detectors / descriptors (SIFT, SURF, etc.) or matchers (brute force, FLANN-based) is also possible. This is easily done by using the correct OpenCV identifier name. For example, to use SIFT as both detector and descriptor with FLANN-based matching, you only have to change the following lines:

const std::string detectorName = "SIFT";
const std::string extractorName = "SIFT";
// Use L2 distance with a matching done using FLANN (Fast Library for
// Approximate Nearest Neighbors)
const std::string matcherName = "FlannBased";
vpKeyPoint keypoint(detectorName, extractorName, matcherName, filterType);

A complete example is given in tutorial-matching-keypoint-SIFT.cpp.

The available types of detectors, extractors and matchers depend on the OpenCV version. Check the OpenCV documentation to know which ones are available.

Due to patents, SIFT and SURF keypoints are available in a separate OpenCV module: the nonfree module for OpenCV versions before 3.0.0, and the xfeatures2d module from OpenCV 3.0.0 on, provided the OpenCV contrib modules are built. If you want to use them, be sure that the nonfree or xfeatures2d module is available.

Usage restrictions may also apply depending on your intended use (e.g. research or commercial).

You can now follow Tutorial: Homography estimation from points to see how to exploit the matched point pairs in order to estimate a homography that allows tracking the position of an object.