Tutorial: Markerless generic model-based tracking using AprilTag for initialization (use case)
As stated in Tutorial: Markerless generic model-based tracking using a color camera, ViSP allows tracking a markerless object, using the knowledge of its CAD model, while providing its 3D localization when a calibrated camera is used [8], [49].
In this tutorial we will experiment with live tracking of a cube observed by a USB camera (webcam) or an Intel Realsense RGB-D camera (SR300 or D400 series).
To automate the initialization of the generic model-based tracker and avoid any user click, we use here an AprilTag glued on one face of the cube. The pose of the tag is used to initialize the tracker; see Tutorial: AprilTag marker detection.
Thanks to a confidence indicator associated with the tracker, we are able to detect a tracking failure and then wait for a new AprilTag detection to reinitialize the tracker, in an infinite loop.
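To make this concrete, here is a minimal sketch (not the tutorial's actual source) of how the tag pose could be used to initialize the tracker and how the projection error reported by the tracker could serve as the confidence indicator. The helper names initializeFromTag() and trackingHasFailed(), the 0.08 m tag size and the 40-degree failure threshold are illustrative assumptions:

#include <vector>
#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/detection/vpDetectorAprilTag.h>
#include <visp3/mbt/vpMbGenericTracker.h>

// Detect the AprilTag and, if found, use its pose to initialize the model-based tracker.
// The cube CAD model is assumed to be expressed in the tag frame.
bool initializeFromTag(const vpImage<unsigned char> &I, const vpCameraParameters &cam,
                       vpDetectorAprilTag &detector, vpMbGenericTracker &tracker)
{
  double tag_size = 0.08; // external black square size in meters (tutorial default)
  std::vector<vpHomogeneousMatrix> cMo_vec;
  detector.detect(I, tag_size, cam, cMo_vec); // detect the tags and compute their poses
  if (cMo_vec.empty())
    return false;                      // no tag in sight: keep waiting
  tracker.initFromPose(I, cMo_vec[0]); // tag pose = initial object pose
  return true;
}

// After each call to tracker.track(I), the projection error (in degrees, available when
// setProjectionErrorComputation(true) is set) acts as the confidence indicator.
bool trackingHasFailed(vpMbGenericTracker &tracker)
{
  return tracker.getProjectionError() > 40.0; // threshold chosen for illustration
}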
The object considered in this tutorial is a cube made of cardboard with an AprilTag stuck on one face. The default length of one side of the cube is 12.5 cm, while the default AprilTag external black square is 8 cm wide. As explained in Using an USB camera and in Using an Intel Realsense camera, you can build your own object with other sizes and set your cube and tag dimensions as command-line parameters.
To print your own tag, follow instructions given in Print an AprilTag marker. Of course, you will need a webcam to run the tutorial on your computer.
Here is an image of the cube we built:
This tutorial focuses on the vpMbGenericTracker class, introduced in ViSP 3.1.0, and on the vpDetectorAprilTag class used to initialize the tracker.
In this tutorial, we will show how to use the vpMbGenericTracker class to track an object from images acquired by a monocular color camera, using moving edges and keypoints as visual features, and how to initialize the tracker using the vpDetectorAprilTag class.
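As an illustration of how such a tracker could be configured, here is a hedged sketch. The model file name cube.cao, the camera intrinsics and the moving-edges settings are placeholders with typical values, not the tutorial's exact configuration:

#include <visp3/core/vpCameraParameters.h>
#include <visp3/me/vpMe.h>
#include <visp3/mbt/vpMbGenericTracker.h>

int main()
{
  // Moving edges + KLT keypoints on a single monocular color camera
  // (the KLT part requires ViSP built with OpenCV).
  vpMbGenericTracker tracker(1, vpMbGenericTracker::EDGE_TRACKER | vpMbGenericTracker::KLT_TRACKER);

  // Moving-edges settings (typical values).
  vpMe me;
  me.setMaskSize(5);
  me.setMaskNumber(180);
  me.setRange(8);
  me.setMu1(0.5);
  me.setMu2(0.5);
  me.setSampleStep(4);
  tracker.setMovingEdge(me);

  // Placeholder pinhole camera parameters (px, py, u0, v0).
  vpCameraParameters cam(600, 600, 320, 240);
  tracker.setCameraParameters(cam);

  // CAD model of the cube, expressed in the tag frame (placeholder file name).
  tracker.loadModel("cube.cao");
  tracker.setDisplayFeatures(true);
  // Enable the projection error computation used later as the confidence indicator.
  tracker.setProjectionErrorComputation(true);

  return 0;
}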
Note that all the material (source code and videos) described in this tutorial is part of the ViSP source code (in the tutorial/tracking/model-based/generic-apriltag folder) and can be found at https://github.com/lagadic/visp/tree/master/tutorial/tracking/model-based/generic-apriltag.
Depending on your use case, the following optional third parties may be used by the tracker. Make sure ViSP was built with the appropriate third parties:
OpenCV: essential if you want to acquire images on non-Linux platforms (Windows, OSX) and to use keypoints as visual features that are detected and tracked thanks to the KLT tracker.

The algorithm implemented to detect the cube location and use this location to initialize the model-based tracker is the following:
1. grab a new image
2. detect the AprilTag in the image and compute the pose of the tag
3. initialize the model-based tracker from this pose
4. track the cube in the next images
5. when the confidence indicator reports a tracking failure, go back to step 1 and wait for a new tag detection
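The following sketch illustrates this detect-then-track state machine. The ImageSource type and the initializeFromTag() and trackingHasFailed() helpers (sketched earlier) are hypothetical stand-ins, not ViSP API:

#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpException.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/detection/vpDetectorAprilTag.h>
#include <visp3/mbt/vpMbGenericTracker.h>

// Helpers sketched earlier in this tutorial (hypothetical names, not ViSP API).
bool initializeFromTag(const vpImage<unsigned char> &I, const vpCameraParameters &cam,
                       vpDetectorAprilTag &detector, vpMbGenericTracker &tracker);
bool trackingHasFailed(vpMbGenericTracker &tracker);

// Hypothetical image source standing in for vpV4l2Grabber, cv::VideoCapture or vpRealSense2.
struct ImageSource {
  void acquire(vpImage<unsigned char> &I);
};

void runDetectAndTrackLoop(ImageSource &grabber, const vpCameraParameters &cam,
                           vpDetectorAprilTag &detector, vpMbGenericTracker &tracker)
{
  enum State { WAIT_FOR_TAG, TRACK_OBJECT } state = WAIT_FOR_TAG;
  vpImage<unsigned char> I;

  while (true) {
    grabber.acquire(I); // 1. grab a new image

    if (state == WAIT_FOR_TAG) {
      // 2-3. Detect the tag and initialize the tracker from its pose.
      if (initializeFromTag(I, cam, detector, tracker))
        state = TRACK_OBJECT;
    }
    else {
      // 4-5. Track the cube and monitor the confidence indicator.
      try {
        tracker.track(I);
        vpHomogeneousMatrix cMo;
        tracker.getPose(cMo); // current object pose, usable for display or robot control
        if (trackingHasFailed(tracker))
          state = WAIT_FOR_TAG; // confidence too low: wait for a new tag detection
      }
      catch (const vpException &) {
        state = WAIT_FOR_TAG; // tracking exception: go back to tag detection
      }
    }
  }
}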
If you have a USB camera, this algorithm is implemented in tutorial-mb-generic-tracker-apriltag-webcam.cpp. If you rather have an Intel Realsense RGB-D camera, this algorithm is implemented in tutorial-mb-generic-tracker-apriltag-rs2.cpp.
If you have a USB camera such as a webcam or a laptop camera, you can try to track your cube using the tutorial-mb-generic-tracker-apriltag-webcam.cpp source code that is part of ViSP. This example allows tracking a cubic box with an AprilTag on one face, using moving edges alone or moving edges combined with keypoints as visual features. This example is an extension of the one explained in Tutorial: Markerless generic model-based tracking using a color camera. To grab images from a USB camera we use either the Video 4 Linux driver through the vpV4l2Grabber class (a wrapper over libv4l), or the OpenCV VideoCapture class when vpV4l2Grabber is not available (on Windows or OSX for example). To install the libv4l third party, follow Recommended 3rd parties. To install OpenCV, depending on your platform, follow one of the tutorials given in Installation from source code.
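As a rough sketch of this dual acquisition path (assuming OpenCV 3 or newer for the fallback; the device name and camera index are placeholders):

#include <visp3/core/vpConfig.h>
#include <visp3/core/vpImage.h>
#include <visp3/core/vpImageConvert.h>

#if defined(VISP_HAVE_V4L2)
#include <visp3/sensor/vpV4l2Grabber.h>
#elif defined(VISP_HAVE_OPENCV)
#include <opencv2/videoio.hpp>
#endif

int main()
{
  vpImage<unsigned char> I;

#if defined(VISP_HAVE_V4L2)
  // Linux: grab through the Video 4 Linux driver (libv4l wrapper).
  vpV4l2Grabber g;
  g.setDevice("/dev/video0"); // placeholder device
  g.setScale(1);
  g.open(I);
  while (true) {
    g.acquire(I);
    // ... tag detection / model-based tracking goes here ...
  }
#elif defined(VISP_HAVE_OPENCV)
  // Windows / OSX fallback: grab with OpenCV and convert to a ViSP image.
  cv::VideoCapture cap(0); // placeholder camera index
  cv::Mat frame;
  while (cap.read(frame)) {
    vpImageConvert::convert(frame, I);
    // ... tag detection / model-based tracking goes here ...
  }
#endif
  return 0;
}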
Once built, to see the options that are available, just run:
To test the tracker on a 12.5 cm wide cube that has an AprilTag of size 8 by 8 cm, and enable moving edges and keypoints as visual features, run:
To test the tracker on a 12.5 cm wide cube that has an AprilTag of size 8 by 8 cm, with only the moving edges as visual features, run:
By default, the following settings are used:
- only the moving edges are used as visual features; the keypoints are enabled by adding the --texture option
- the default camera parameters are used, unless a camera.xml parameters file with a camera named myCam in it is found in the current folder; you can create such a camera.xml file by calibrating your camera (a sketch showing how such a file could be loaded is given after the results below)
If you run:
$ ./tutorial-mb-generic-tracker-apriltag-webcam
you should be able to get results similar to these:
Cube tracking result with a USB camera using only moving edges
If you add the --texture option to enable the keypoints:
$ ./tutorial-mb-generic-tracker-apriltag-webcam --texture
the corresponding results would rather look like these:
Cube tracking result with a USB camera using moving edges and keypoints
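As a side note, here is a sketch of how the camera.xml file mentioned above could be loaded, assuming a camera named myCam calibrated for 640x480 images; the fallback intrinsics are purely illustrative:

#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpXmlParserCamera.h>

int main()
{
  vpCameraParameters cam;
  vpXmlParserCamera parser;

  // Look for a camera named "myCam" in camera.xml (perspective model without distortion).
  if (parser.parse(cam, "camera.xml", "myCam",
                   vpCameraParameters::perspectiveProjWithoutDistortion, 640, 480)
      != vpXmlParserCamera::SEQUENCE_OK) {
    // No usable calibration file found: fall back to generic values (illustrative only).
    cam.initPersProjWithoutDistortion(600, 600, 320, 240);
  }
  cam.printParameters();
  return 0;
}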
If you have an Intel Realsense RGB-D camera like an SR300 or a D435, you can try to track your cube using the tutorial-mb-generic-tracker-apriltag-rs2.cpp source code that is part of ViSP. This example allows tracking a cubic box with an AprilTag on one face using either moving edges, keypoints or depth as visual features (a sketch of how these features can be combined in the tracker is given after the results below). This example is an extension of the one explained in Tutorial: Markerless generic model-based tracking using a RGB-D camera. To grab images from an Intel RealSense device we use the vpRealSense2 class. This class needs the librealsense 2.x third party, whose installation instructions are provided in the tutorials available from Installation from source code.
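As an illustration, here is a hedged sketch of how images could be grabbed with vpRealSense2; the stream resolutions and frame rate are assumptions, not necessarily the tutorial's settings:

#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpImage.h>
#include <visp3/sensor/vpRealSense2.h>

#if defined(VISP_HAVE_REALSENSE2)
int main()
{
  vpRealSense2 rs;

  // Assumed stream configuration: 640x480 color and depth at 30 Hz.
  rs2::config config;
  config.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_RGBA8, 30);
  config.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);
  rs.open(config);

  // Color camera intrinsics as reported by librealsense.
  vpCameraParameters cam =
      rs.getCameraParameters(RS2_STREAM_COLOR, vpCameraParameters::perspectiveProjWithoutDistortion);
  cam.printParameters();

  vpImage<unsigned char> I(480, 640);
  while (true) {
    rs.acquire(I); // grayscale view of the color stream, used by the edge/KLT features
    // ... tag detection / model-based tracking goes here ...
  }
  return 0;
}
#else
int main() { return 0; } // ViSP built without librealsense 2.x
#endif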
Once built, to see the options that are available, just run:
To test the tracker on a 12.5 cm wide cube that has an AprilTag of size 8 by 8 cm, and enable moving edges, keypoints and depth as visual features, run:
To test the tracker on a 12.5 cm wide cube that has an AprilTag of size 8 by 8 cm, and enable only moving edges and keypoints as visual features, run:
To test the tracker on a 12.5 cm wide cube that has an AprilTag of size 8 by 8 cm, using only the moving edges as visual features, rather run:
By default, the following settings are used:
- only the moving edges are used as visual features; the keypoints and the depth are enabled by adding the --texture and --depth options respectively
If you run:
$ ./tutorial-mb-generic-tracker-apriltag-rs2
you should be able to get results similar to these:
Cube tracking result with an Intel Realsense camera using only moving edges
If you add the --texture option to enable the keypoints:
$ ./tutorial-mb-generic-tracker-apriltag-rs2 --texture
the corresponding results would rather look like these:
Cube tracking result with an Intel Realsense camera using moving edges and keypoints
If you also add the --depth option to enable the depth features:
$ ./tutorial-mb-generic-tracker-apriltag-rs2 --texture --depth
the corresponding results would rather look like these:
Cube tracking result with an Intel Realsense camera using moving edges, keypoints and depth
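For reference, here is a hedged sketch of how moving edges, keypoints and depth could be combined in a single vpMbGenericTracker, following the generic multi-camera API rather than this tutorial's exact code; the model file name, the depth-to-color transformation and the camera names Camera1/Camera2 are assumptions:

#include <map>
#include <string>
#include <vector>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/mbt/vpMbGenericTracker.h>

int main()
{
  // First "camera": color image tracked with moving edges + KLT keypoints;
  // second "camera": depth map tracked with dense depth features.
  std::vector<int> trackerTypes;
  trackerTypes.push_back(vpMbGenericTracker::EDGE_TRACKER | vpMbGenericTracker::KLT_TRACKER);
  trackerTypes.push_back(vpMbGenericTracker::DEPTH_DENSE_TRACKER);
  vpMbGenericTracker tracker(trackerTypes);

  // Same cube CAD model loaded for both "cameras" (placeholder file name).
  std::string model_file = "cube.cao";
  tracker.loadModel(model_file, model_file);

  // Rigid transformation from the color frame to the depth frame
  // (identity used here as a placeholder; on a real device it comes from the sensor extrinsics).
  vpHomogeneousMatrix depth_M_color;
  std::map<std::string, vpHomogeneousMatrix> mapOfCameraTransformations;
  mapOfCameraTransformations["Camera2"] = depth_M_color;
  tracker.setCameraTransformationMatrix(mapOfCameraTransformations);

  // At each iteration, the grayscale image and the point cloud are passed through maps
  // keyed by the camera names, e.g.:
  //   std::map<std::string, const vpImage<unsigned char> *> mapOfImages;        // "Camera1", "Camera2"
  //   std::map<std::string, const std::vector<vpColVector> *> mapOfPointClouds; // "Camera2"
  //   std::map<std::string, unsigned int> mapOfWidths, mapOfHeights;            // "Camera2"
  //   tracker.track(mapOfImages, mapOfPointClouds, mapOfWidths, mapOfHeights);

  return 0;
}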
Playing with the above binaries, you will see that:
Some recommendations:
Hereafter, you have the tracking result using a textured cube (we just glued images over the cube faces):
Textured cube tracking result with an Intel Realsense camera using moving edges, keypoints and depth
You can follow Tutorial: Object detection and localization to learn how to initialize the tracker without a user click, by learning the object to track using keypoints when the object is textured.
There is also Tutorial: Markerless generic model-based tracking using a stereo camera if you want to know how to extend the tracker to use a stereo camera, as well as Tutorial: Template tracking.