Visual Servoing Platform
version 3.3.0 under development (2020-02-17)
This tutorial focuses on SURF keypoint manipulation. You will learn how to detect SURF keypoints on a reference image, considered here to be the first image of an MPEG video. Then, in the subsequent images of the video, the keypoints that match those detected in the reference image using the SURF descriptor are displayed.
Let us consider the following source code, also available in tutorial-matching-surf-deprecated.cpp.
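The listing itself is not reproduced here. As a substitute, below is a minimal sketch of what the program might look like, reconstructed from the steps described in this tutorial; the video file name video-postcard.mpeg and the exact ViSP calls are assumptions, so refer to tutorial-matching-surf-deprecated.cpp in the ViSP source tree for the authoritative code.

```cpp
#include <visp3/core/vpImage.h>
#include <visp3/core/vpImagePoint.h>
#include <visp3/gui/vpDisplayOpenCV.h>
#include <visp3/io/vpVideoReader.h>
#include <visp3/vision/vpKeyPointSurf.h>

int main()
{
#if defined(VISP_HAVE_OPENCV_NONFREE)
  vpImage<unsigned char> I;

  // Open the video stream and grab the first image (the reference image)
  vpVideoReader reader;
  reader.setFileName("video-postcard.mpeg"); // hypothetical file name
  reader.open(I);

  // Detect SURF keypoints on the reference image
  vpKeyPointSurf surf;
  surf.buildReference(I);

  // Side-by-side rendering image: reference on the left, current on the right
  vpImage<unsigned char> Idisp;
  Idisp.resize(I.getHeight(), 2 * I.getWidth());
  Idisp.insert(I, vpImagePoint(0, 0));

  vpDisplayOpenCV d(Idisp, 0, 0, "Matching keypoints with SURF keypoints");
  vpDisplay::display(Idisp);
  vpDisplay::flush(Idisp);

  while (!reader.end()) {
    reader.acquire(I);
    Idisp.insert(I, vpImagePoint(0, I.getWidth()));

    vpDisplay::display(Idisp);
    // White vertical line separating the reference image from the current one
    vpDisplay::displayLine(Idisp, vpImagePoint(0, I.getWidth()),
                           vpImagePoint(I.getHeight(), I.getWidth()), vpColor::white, 2);

    // Match keypoints of the current image against the reference
    unsigned int nbMatch = surf.matchPoint(I);

    vpImagePoint iPref, iPcur;
    for (unsigned int i = 0; i < nbMatch; i++) {
      surf.getMatchedPoints(i, iPref, iPcur);
      // Green line between matched points (current point shifted to the right half)
      vpDisplay::displayLine(Idisp, iPref, iPcur + vpImagePoint(0, I.getWidth()), vpColor::green);
    }
    vpDisplay::flush(Idisp);
  }
  vpDisplay::getClick(Idisp);
#endif
  return 0;
}
```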
Hereafter is the resulting video. The left image represents the reference image. The right images correspond to the successive images of the input video. The extremities of the green lines represent the matched points.
Now, let us explain the lines dedicated to the SURF keypoint usage.
First we have to include the header of the vpKeyPointSurf class, which is a wrapper over OpenCV classes.
Note that this class is only available if ViSP was built with the OpenCV non-free module. This is ensured by checking the VISP_HAVE_OPENCV_NONFREE macro.
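Based on this description, the corresponding lines might look as follows (a sketch; the header path assumes the ViSP 3.x module layout):

```cpp
#include <visp3/vision/vpKeyPointSurf.h>

#if defined(VISP_HAVE_OPENCV_NONFREE)
// Code relying on vpKeyPointSurf can be compiled
#endif
```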
Then we open the MPEG video stream and grab the first image of the video, which is stored in the I container. A SURF keypoint class is instantiated and keypoints are detected on this first image, which is considered the reference image:
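Assuming the video is read with ViSP's vpVideoReader class, this step might be sketched as follows (the file name is hypothetical):

```cpp
vpVideoReader reader;
reader.setFileName("video-postcard.mpeg"); // hypothetical file name
reader.open(I); // open the stream and grab the first image into I

vpKeyPointSurf surf;
surf.buildReference(I); // detect SURF keypoints on the reference image
```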
The next lines create the image Idisp used to render the matching results; the left part holds the reference image, the right part the current image being processed:
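A possible sketch of this step, assuming Idisp is made twice as wide as I and the reference image is copied into its left half:

```cpp
vpImage<unsigned char> Idisp;
Idisp.resize(I.getHeight(), 2 * I.getWidth());
Idisp.insert(I, vpImagePoint(0, 0)); // reference image in the left half
```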
Then a display using OpenCV is created and image Idisp is rendered:
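This could be written as follows (a sketch, assuming ViSP's vpDisplayOpenCV class and a hypothetical window title):

```cpp
vpDisplayOpenCV d(Idisp, 0, 0, "Matching keypoints with SURF keypoints");
vpDisplay::display(Idisp); // update the internal rendering buffer
vpDisplay::flush(Idisp);   // make the rendering visible on screen
```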
We then enter the while() loop, where a new image is acquired from the video stream and inserted into the right part of image Idisp, which is dedicated to rendering the matching results.
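Under the same assumptions as above, the loop skeleton might be sketched as:

```cpp
while (!reader.end()) {
  reader.acquire(I); // grab the next image of the video
  Idisp.insert(I, vpImagePoint(0, I.getWidth())); // current image in the right half
  // rendering and matching steps described hereafter go here
}
```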
We start the rendering by displaying this image and by drawing a white vertical line to separate the reference image from the current one:
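This step might read (a sketch; the line thickness of 2 pixels is an assumption):

```cpp
vpDisplay::display(Idisp);
// Vertical separator at column I.getWidth(), from top to bottom of Idisp
vpDisplay::displayLine(Idisp, vpImagePoint(0, I.getWidth()),
                       vpImagePoint(I.getHeight(), I.getWidth()), vpColor::white, 2);
```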
Keypoint matches between the reference image and the current image I are detected using:
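A sketch, assuming matchPoint() returns the number of matches found:

```cpp
unsigned int nbMatch = surf.matchPoint(I);
```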
Then we parse all the matches to retrieve the coordinates of the points in the reference image (in the iPref variable) and in the current image (in the iPcur variable):
Next we draw green lines between the matched points:
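These two steps might be sketched together, assuming getMatchedPoints() gives access to the i-th couple of matched points:

```cpp
vpImagePoint iPref, iPcur;
for (unsigned int i = 0; i < nbMatch; i++) {
  surf.getMatchedPoints(i, iPref, iPcur);
  // Shift the current point to the right half of Idisp before drawing
  vpDisplay::displayLine(Idisp, iPref, iPcur + vpImagePoint(0, I.getWidth()), vpColor::green);
}
```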
At the end of the iteration, we flush all the previous display operations to the render window:
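For completeness, this might simply be:

```cpp
vpDisplay::flush(Idisp); // render everything displayed since the last flush
```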
You can now follow Tutorial: Homography estimation from points (deprecated) to see how to exploit the couples of matched points in order to estimate a homography that allows tracking the position of an object.