Visual Servoing Platform
version 3.6.1 under development (2024-11-15)
This tutorial focuses on keypoint detection and matching. You will learn how to detect keypoints on a reference image, considered here as the first image of an MPEG video. Then, in the following images of the video, keypoints that match those detected in the reference image are displayed. To leverage keypoint detection and matching capabilities, ViSP should be built with OpenCV as a 3rd party library.
Note that all the material (source code and images) described in this tutorial is part of the ViSP source code (in the tutorial/detection/matching folder) and can be found at https://github.com/lagadic/visp/tree/master/tutorial/detection/matching.
Let us consider the following source code also available in tutorial-matching-keypoint.cpp.
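The sketch below outlines the program. It follows the structure of the shipped tutorial-matching-keypoint.cpp, but the exact lines (video file name, window title, OpenCV version threshold) may differ across ViSP versions, so treat it as an outline rather than the authoritative source.

```cpp
#include <visp3/core/vpImage.h>
#include <visp3/gui/vpDisplayOpenCV.h>
#include <visp3/io/vpVideoReader.h>
#include <visp3/vision/vpKeyPoint.h>

int main()
{
#if (VISP_HAVE_OPENCV_VERSION >= 0x020101) // version threshold is an assumption
  vpImage<unsigned char> I;

  // Open the video and grab its first image, used as the reference image
  vpVideoReader reader;
  reader.setFileName("video-postcard.mpeg"); // video shipped with the tutorial; name is an assumption
  reader.acquire(I);

  // ORB detector/extractor with a brute-force Hamming matcher
  vpKeyPoint keypoint("ORB", "ORB", "BruteForce-Hamming");
  keypoint.buildReference(I);

  // Side-by-side rendering: reference image on the left, current image on the right
  vpImage<unsigned char> Idisp;
  Idisp.resize(I.getHeight(), 2 * I.getWidth());
  Idisp.insert(I, vpImagePoint(0, 0));

  vpDisplayOpenCV d(Idisp, 0, 0, "Matching keypoints with ORB keypoints");
  vpDisplay::display(Idisp);
  vpDisplay::flush(Idisp);

  while (!reader.end()) {
    reader.acquire(I);
    Idisp.insert(I, vpImagePoint(0, I.getWidth())); // current image on the right

    vpDisplay::display(Idisp);
    // White vertical line separating the reference image from the current one
    vpDisplay::displayLine(Idisp, vpImagePoint(0, I.getWidth()),
                           vpImagePoint(I.getHeight(), I.getWidth()), vpColor::white, 2);

    // Match current keypoints against the reference ones
    unsigned int nbMatch = keypoint.matchPoint(I);

    vpImagePoint iPref, iPcur;
    for (unsigned int i = 0; i < nbMatch; i++) {
      keypoint.getMatchedPoints(i, iPref, iPcur);
      // Shift the current point by one image width to draw in the right half
      vpDisplay::displayLine(Idisp, iPref,
                             vpImagePoint(iPcur.get_i(), iPcur.get_j() + I.getWidth()), vpColor::green);
    }
    vpDisplay::flush(Idisp);
  }
#endif
  return 0;
}
```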
Hereafter is the resulting video. The left image represents the reference image; the right images correspond to the successive images of the input video. The extremities of the green lines represent the matched points.
Now, let us explain the lines dedicated to ORB keypoint detection and matching.
First we have to include the header of the vpKeyPoint class, which is a wrapper over OpenCV classes. Note that this class is only available if ViSP was built with OpenCV. This is ensured by checking the VISP_HAVE_OPENCV_VERSION macro.
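Assuming the standard ViSP header layout, the include and the guard look roughly like (the version threshold is an assumption):

```cpp
#include <visp3/vision/vpKeyPoint.h>

#if (VISP_HAVE_OPENCV_VERSION >= 0x020101)
// ... code using vpKeyPoint ...
#endif
```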
Then we open the MPEG video stream and grab the first image of the video, which is stored in the I container. The vpKeyPoint class is instantiated and keypoints are detected on this first image, which is considered as the reference image:
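A sketch of this step (the video file name and the detector/extractor/matcher identifiers are assumptions matching the ORB setup described here):

```cpp
vpImage<unsigned char> I;

vpVideoReader reader;
reader.setFileName("video-postcard.mpeg"); // assumed tutorial video name
reader.acquire(I);                         // grab the first (reference) image

vpKeyPoint keypoint("ORB", "ORB", "BruteForce-Hamming");
keypoint.buildReference(I); // detect keypoints on the reference image
```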
The next lines create the image Idisp used to render the matching results: the left half holds the reference image, the right half the current image being processed:
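Roughly, this amounts to:

```cpp
vpImage<unsigned char> Idisp;
Idisp.resize(I.getHeight(), 2 * I.getWidth()); // side-by-side layout
Idisp.insert(I, vpImagePoint(0, 0));           // reference image in the left half
```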
Then a display using OpenCV is created and the image Idisp is rendered:
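This step reads roughly as (the window title is an assumption):

```cpp
vpDisplayOpenCV d(Idisp, 0, 0, "Matching keypoints with ORB keypoints");
vpDisplay::display(Idisp);
vpDisplay::flush(Idisp);
```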
We then enter the while() loop, where a new image is acquired from the video stream and inserted in the right half of Idisp, which is dedicated to rendering the matching results.
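The loop opens roughly as follows (the brace is closed once the rendering steps described next are done):

```cpp
while (!reader.end()) {
  reader.acquire(I);
  Idisp.insert(I, vpImagePoint(0, I.getWidth())); // current image in the right half
```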
We start the rendering by displaying the rendered image and by drawing a white vertical line to separate the reference image from the current one:
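A sketch of this rendering step:

```cpp
  vpDisplay::display(Idisp);
  // White vertical line at the boundary between the two halves
  vpDisplay::displayLine(Idisp, vpImagePoint(0, I.getWidth()),
                         vpImagePoint(I.getHeight(), I.getWidth()), vpColor::white, 2);
```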
Keypoint matches between the reference image and the current image I are detected using:
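That is, a single call returning the number of matches:

```cpp
  unsigned int nbMatch = keypoint.matchPoint(I);
```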
Then we parse all the matches to retrieve the coordinates of the points in the reference image (in the iPref variable) and in the current image (in the iPcur variable):
Next we draw green lines between the matched points:
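The two steps above (retrieving the matched coordinates and drawing the lines) can be sketched together as:

```cpp
  vpImagePoint iPref, iPcur;
  for (unsigned int i = 0; i < nbMatch; i++) {
    keypoint.getMatchedPoints(i, iPref, iPcur);
    // Shift the current point by one image width so the line ends in the right half
    vpDisplay::displayLine(Idisp, iPref,
                           vpImagePoint(iPcur.get_i(), iPcur.get_j() + I.getWidth()), vpColor::green);
  }
```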
At the end of the iteration, we flush all the previous display operations to the render window:
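This closes the loop body:

```cpp
  vpDisplay::flush(Idisp);
}
```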
Using other types of detectors/descriptors (SIFT, SURF, etc.) or matchers (brute force, FLANN-based) is also possible. This can easily be done by using the corresponding OpenCV identifier names. For example, to use the SIFT detector/descriptor pair with FLANN-based matching, you only have to change the following lines:
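With the assumed constructor from the ORB sketch, the change would look roughly like this ("SIFT" and "FlannBased" are the usual OpenCV identifiers, but check your OpenCV version):

```cpp
  vpKeyPoint keypoint("SIFT", "SIFT", "FlannBased");
```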
A complete example is given in tutorial-matching-keypoint-SIFT.cpp
Available types of detectors, extractors or matchers depend on OpenCV version. Check the OpenCV documentation to know which ones are available.
Due to patents, SIFT and SURF keypoints are available in a separate OpenCV module: the nonfree module in OpenCV versions before 3.0.0, and the xfeatures2d module from 3.0.0 onward, provided the OpenCV contrib modules are built. If you want to use them, be sure that the nonfree or xfeatures2d module is available.
Usage restrictions may also apply depending on your intended use (e.g. research or commercial).
You can now follow Tutorial: Homography estimation from points to see how to exploit matched point pairs to estimate a homography that allows tracking the position of an object.