Visual Servoing Platform  version 3.2.0 under development (2019-01-22)
Tutorial: Markerless model-based tracking with stereo cameras (deprecated)

Introduction

Warning
This tutorial can be considered obsolete since ViSP 3.1.0, as we have introduced a generic tracker (vpMbGenericTracker) that replaces the vpMbEdgeMultiTracker, vpMbKltMultiTracker and vpMbEdgeKltMultiTracker classes. The explanations about multi-view model-based tracking remain valid though.

This tutorial describes model-based tracking of an object using multiple camera views simultaneously. It allows tracking the object in the images viewed by a set of cameras while providing its 3D localization. Calibrated cameras (intrinsic parameters, and extrinsic transformations between the reference camera and the other cameras) are required.

The ViSP mbt module allows the tracking of a markerless object using the knowledge of its CAD model. Considered objects have to be modeled by segment, circle or cylinder primitives. The model of the object can be defined in vrml format (except for circles) or in cao format (our own format).
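As an illustration, a minimal cao file describing a simple cuboid could look as follows (a sketch with hypothetical dimensions, not the actual teabox model used later). The sections list, in order, the 3D points, the lines, the faces built from lines, the faces built from points, the cylinders and the circles:

V1
# Number of 3D points
8
0     0     0
0.165 0     0
0.165 0.068 0
0     0.068 0
0     0     0.08
0.165 0     0.08
0.165 0.068 0.08
0     0.068 0.08
# Number of 3D lines
0
# Number of faces from 3D lines
0
# Number of faces from 3D points (each face: vertex count, then point indices)
6
4 0 1 2 3
4 4 5 6 7
4 0 1 5 4
4 1 2 6 5
4 2 3 7 6
4 3 0 4 7
# Number of 3D cylinders
0
# Number of 3D circles
0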

The next section highlights the different versions of the markerless multi-view model-based trackers that have been developed. The multi-view model-based tracker can consider moving-edges (thanks to the vpMbEdgeMultiTracker class). It can also consider KLT features that are detected and tracked on each visible face of the model (thanks to the vpMbKltMultiTracker class). The tracker can also handle moving-edges and KLT features in a hybrid scheme (thanks to the vpMbEdgeKltMultiTracker class).

While the multi-view model-based edges tracker implemented in vpMbEdgeMultiTracker is appropriate to track texture-less objects (with visible edges), the multi-view model-based KLT tracker implemented in vpMbKltMultiTracker is suitable for textured objects. The multi-view model-based hybrid tracker implemented in vpMbEdgeKltMultiTracker is appropriate to track objects with texture and/or visible edges.

These classes allow tracking the same object with two or more cameras. The main advantages of this configuration with respect to the mono-camera case (see Tutorial: Markerless model-based tracking (deprecated)) are:

  • the possibility to extend the application field of view;
  • a more robust tracking, as the stereo rig configuration allows tracking the object under multiple viewpoints and thus with more visual features.

In order to achieve this, the following information is required:

  • the intrinsic parameters of each camera;
  • the transformation matrix between each camera and a reference camera: $ ^{c_{current}}{\bf M}_{c_{reference}} $.

In the following sections, we consider the tracking of a tea box modeled in cao format and seen by a stereo camera. The following video shows the tracking performed with vpMbEdgeMultiTracker. In this example, the images were captured by the fixed cameras located on the Romeo humanoid robot head.

This other video shows the behavior of the hybrid tracking performed with vpMbEdgeKltMultiTracker.

Note
The cameras can move, but the tracking will be effective as long as the transformation matrix between the cameras and the reference camera is known and updated at each iteration.
The newly introduced classes are not restricted to a stereo configuration. They allow the usage of multiple cameras (see How to deal with moving cameras).

The next sections highlight how to easily adapt your code to use multiple cameras with the model-based tracker. As only the new methods dedicated to multiple-view tracking are presented, you are highly encouraged to follow Tutorial: Markerless model-based tracking (deprecated) first, in order to be familiar with the model-based tracking concepts, the different trackers available in ViSP (the edge tracker vpMbEdgeTracker, the KLT feature points tracker vpMbKltTracker and the hybrid tracker vpMbEdgeKltTracker) and with the configuration part.

Note that all the material (source code and video) described in this tutorial is part of ViSP source code and can be downloaded using the following command:

$ svn export https://github.com/lagadic/visp.git/trunk/tutorial/tracking/model-based/old/stereo-deprecated

Getting started

Overview

The model-based trackers available for multiple-view tracking rely on the same trackers as in the monocular case:

  • a vpMbEdgeMultiTracker similar to vpMbEdgeTracker which tracks moving-edges corresponding to the visible lines of the model projected in the image plane at the current pose (suitable for texture-less objects).
  • a vpMbKltMultiTracker similar to vpMbKltTracker which uses the optical flow information to track the object (suitable for textured objects).
  • a vpMbEdgeKltMultiTracker similar to vpMbEdgeKltTracker which merges the two kinds of information (edge and texture) for a more robust tracking (can deal with both types of objects).

The following class diagram offers an overview of the hierarchy between the different classes:

img-mbt-multi-class-diagram-resize.jpeg
Simplified class diagram.

The vpMbEdgeMultiTracker class inherits from the vpMbEdgeTracker class, the vpMbKltMultiTracker class inherits from the vpMbKltTracker class, and the vpMbEdgeKltMultiTracker class inherits from both the vpMbEdgeMultiTracker and vpMbKltMultiTracker classes. This design makes it easy to extend the model-based tracker to multiple cameras while preserving the same behavior as in the monocular configuration (more precisely, only the model-based edge and the model-based KLT trackers have exactly the same behavior; the hybrid multi-view class has a slightly different implementation that leads to minor differences compared to vpMbEdgeKltTracker).

As you will see later, the principal methods of the parent classes remain accessible and are used for single-view tracking. Many new overloaded methods have been introduced to deal with the different camera configurations (single camera, stereo cameras and multiple cameras).

Implementation detail

Each tracker is stored in a map whose key is the name of the camera the tracker processes. By default, the camera names are set to:

  • "Camera" when the tracker is constructed with one camera.
  • "Camera1" to "CameraN" when the tracker is constructed with N cameras.
  • The default reference camera will be "Camera1" in the multiple cameras case.
img-multi-cameras-config.png
Default name convention and reference camera ("Camera1").
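As a sketch, configuring a three-camera edge tracker with the default naming convention could look like the following, assuming the map-based loadConfigFile overload takes camera names mapped to configuration file paths, as described in the table further below (the file names are hypothetical):

#include <map>
#include <string>
#include <visp3/mbt/vpMbEdgeMultiTracker.h>

int main()
{
  // A tracker over three cameras: the internal trackers are keyed
  // "Camera1", "Camera2", "Camera3" by default, "Camera1" being the reference.
  vpMbEdgeMultiTracker tracker(3);

  // One configuration file per camera, keyed by the default camera names.
  std::map<std::string, std::string> mapOfConfigFiles;
  mapOfConfigFiles["Camera1"] = "camera1.xml";
  mapOfConfigFiles["Camera2"] = "camera2.xml";
  mapOfConfigFiles["Camera3"] = "camera3.xml";
  tracker.loadConfigFile(mapOfConfigFiles);

  return 0;
}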

To deal with multiple cameras, in the virtual visual servoing control law we concatenate all the interaction matrices and residual vectors and transform them into a single reference camera frame to compute the reference camera velocity. Thus, we have to know the transformation matrix between each camera and the reference camera.

For example, if the reference camera is "Camera1" ( $ c_1 $), we need the following information: $ ^{c_2}{\bf M}_{c_1}, ^{c_3}{\bf M}_{c_1}, \cdots, ^{c_n}{\bf M}_{c_1} $.
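For illustration, with $ c_1 $ as the reference camera, the resulting control law can be sketched as follows, where $ {\bf L}_i $ and $ {\bf e}_i $ denote the interaction matrix and the residual vector of camera $ c_i $, and $ ^{c_i}{\bf V}_{c_1} $ is the velocity twist matrix built from $ ^{c_i}{\bf M}_{c_1} $ (a standard virtual visual servoing formulation; the robust weighting actually used by the trackers is omitted):

\[
{\bf v}_{c_1} = -\lambda
\begin{bmatrix}
{\bf L}_1 \, ^{c_1}{\bf V}_{c_1} \\
\vdots \\
{\bf L}_n \, ^{c_n}{\bf V}_{c_1}
\end{bmatrix}^{+}
\begin{bmatrix}
{\bf e}_1 \\
\vdots \\
{\bf e}_n
\end{bmatrix}
\]

with $ ^{c_1}{\bf V}_{c_1} $ the identity and $ (\cdot)^{+} $ the Moore-Penrose pseudo-inverse.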

Interfacing with the code

Each essential method used to initialize the tracker and to process the tracking has three signatures, one per working mode:

  • tracking using one camera: the signature remains the same as in the previous classes (vpMbEdgeTracker, vpMbKltTracker, vpMbEdgeKltTracker).
  • tracking using two cameras: all the necessary methods directly accept the corresponding parameter for each camera. By default, the first parameter corresponds to the reference camera.
  • tracking using multiple cameras: you have to supply the different parameters in a map whose key is the name of the camera and whose value is the parameter.

The following table sums up how to call the main methods depending on the camera configuration.

Example of the different method signatures.

Method | Monocular case | Stereo case | Multiple cameras case | Remarks
Construct a model-based edge tracker | vpMbEdgeMultiTracker tracker | vpMbEdgeMultiTracker tracker(2) | vpMbEdgeMultiTracker tracker(5) | The default constructor corresponds to the monocular configuration.
Load a configuration file | tracker.loadConfigFile("config.xml") | tracker.loadConfigFile("config1.xml", "config2.xml") | tracker.loadConfigFile(mapOfConfigFiles) | Each tracker can have different parameters (intrinsic parameters, visibility angles, etc.).
Load a model file | tracker.loadModel("model.cao") | tracker.loadModel("model.cao") | tracker.loadModel("model.cao") | All the trackers must use the same 3D model.
Get the intrinsic camera parameters | tracker.getCameraParameters(cam) | tracker.getCameraParameters(cam1, cam2) | tracker.getCameraParameters(mapOfCam) | -
Set the transformation matrix between each camera and the reference one | - | tracker.setCameraTransformationMatrix(mapOfCamTrans) | tracker.setCameraTransformationMatrix(mapOfCamTrans) | For the reference camera, the identity homogeneous matrix must be set.
Set the display of the features | tracker.setDisplayFeatures(true) | tracker.setDisplayFeatures(true) | tracker.setDisplayFeatures(true) | This is a general parameter.
Initialize the pose by click | tracker.initClick(I, "f_init.init") | tracker.initClick(I1, I2, "f_init1.init", "f_init2.init") | tracker.initClick(mapOfImg, mapOfInitFiles) | If the transformation matrices between the cameras have been set, some init files can be omitted as long as the reference camera has an init file.
Track the object | tracker.track(I) | tracker.track(I1, I2) | tracker.track(mapOfImg) | -
Get the pose | tracker.getPose(cMo) | tracker.getPose(c1Mo, c2Mo) | tracker.getPose(mapOfPoses) | tracker.getPose(cMo) will return the pose of the reference camera in the multiple cameras configuration.
Display the model | tracker.display(I, cMo, cam, ...) | tracker.display(I1, I2, c1Mo, c2Mo, cam1, cam2, ...) | tracker.display(mapOfImg, mapOfPoses, mapOfCam) | -
Note
As the trackers are stored internally in alphabetical order of camera names, in the stereo case you have to match the method parameters to the correct tracker position in the map.
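For the multiple-camera working mode, a minimal sketch of the map-based calls could look as follows, assuming (as in vpMbGenericTracker) that images are passed by pointer in the map-based overloads; configuration and pose initialization are omitted here:

#include <map>
#include <string>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/mbt/vpMbEdgeMultiTracker.h>

int main()
{
  vpMbEdgeMultiTracker tracker(3); // cameras "Camera1".."Camera3"

  vpImage<unsigned char> I1, I2, I3;
  // ... acquire the three images and initialize the tracker here ...

  // Map-based working mode: keys are the camera names.
  std::map<std::string, const vpImage<unsigned char> *> mapOfImg;
  mapOfImg["Camera1"] = &I1;
  mapOfImg["Camera2"] = &I2;
  mapOfImg["Camera3"] = &I3;
  tracker.track(mapOfImg);

  // Poses are returned in a map keyed the same way.
  std::map<std::string, vpHomogeneousMatrix> mapOfPoses;
  tracker.getPose(mapOfPoses);

  return 0;
}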

Example code

The following example comes from tutorial-mb-tracker-stereo.cpp and allows tracking a tea box modeled in cao format using one of the three multi-view markerless trackers implemented in ViSP. In this example we consider a stereo configuration.

Once built, to choose which tracker to use, run the binary with the following argument:

$ ./tutorial-mb-tracker-stereo --tracker <0=edge|1=klt|2=hybrid>

The source code is the following:

#include <fstream>
#include <iostream>
#include <visp3/core/vpConfig.h>
#if defined(VISP_BUILD_DEPRECATED_FUNCTIONS)
#include <visp3/core/vpIoTools.h>
#include <visp3/gui/vpDisplayGDI.h>
#include <visp3/gui/vpDisplayOpenCV.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/io/vpImageIo.h>
#include <visp3/mbt/vpMbEdgeKltMultiTracker.h>
#include <visp3/mbt/vpMbEdgeMultiTracker.h>
#include <visp3/io/vpVideoReader.h>

int main(int argc, char **argv)
{
#if defined(VISP_HAVE_OPENCV) && (VISP_HAVE_OPENCV_VERSION >= 0x020100)
  try {
    std::string opt_videoname_left = "teabox_left.mpg";
    std::string opt_videoname_right = "teabox_right.mpg";
    int opt_tracker = 0;

    for (int i = 0; i < argc; i++) {
      if (std::string(argv[i]) == "--name" && i + 2 < argc) {
        opt_videoname_left = std::string(argv[i + 1]);
        opt_videoname_right = std::string(argv[i + 2]);
      } else if (std::string(argv[i]) == "--tracker" && i + 1 < argc) {
        opt_tracker = atoi(argv[i + 1]);
      } else if (std::string(argv[i]) == "--help") {
        std::cout << "\nUsage: " << argv[0]
                  << " [--name <video name left> <video name right>] "
                     "[--tracker <0=edge|1=klt|2=hybrid>] [--help]\n"
                  << std::endl;
        return 0;
      }
    }

    std::string parentname = vpIoTools::getParent(opt_videoname_left);
    std::string objectname_left = vpIoTools::getNameWE(opt_videoname_left);
    std::string objectname_right = vpIoTools::getNameWE(opt_videoname_right);

    if (!parentname.empty()) {
      objectname_left = parentname + "/" + objectname_left;
    }

    std::cout << "Video name: " << opt_videoname_left << " ; " << opt_videoname_right << std::endl;
    std::cout << "Tracker requested config files: " << objectname_left << ".[init, cao]"
              << " and " << objectname_right << ".[init, cao]" << std::endl;
    std::cout << "Tracker optional config files: " << opt_videoname_left << ".ppm"
              << " and " << opt_videoname_right << ".ppm" << std::endl;

    vpImage<unsigned char> I_left, I_right;
    vpVideoReader g_left, g_right;
    g_left.setFileName(opt_videoname_left);
    g_left.open(I_left);
    g_right.setFileName(opt_videoname_right);
    g_right.open(I_right);

    vpDisplay *display_left = NULL, *display_right = NULL;
#if defined(VISP_HAVE_X11)
    display_left = new vpDisplayX;
    display_right = new vpDisplayX;
#elif defined(VISP_HAVE_GDI)
    display_left = new vpDisplayGDI;
    display_right = new vpDisplayGDI;
#else
    display_left = new vpDisplayOpenCV;
    display_right = new vpDisplayOpenCV;
#endif
    display_right->setDownScalingFactor(vpDisplay::SCALE_AUTO);
    display_left->init(I_left, 100, 100, "Model-based tracker (Left)");
    display_right->init(I_right, 110 + (int)I_left.getWidth(), 100, "Model-based tracker (Right)");

    vpMbTracker *tracker;
    if (opt_tracker == 0)
      tracker = new vpMbEdgeMultiTracker(2);
#ifdef VISP_HAVE_MODULE_KLT
    else if (opt_tracker == 1)
      tracker = new vpMbKltMultiTracker(2);
    else
      tracker = new vpMbEdgeKltMultiTracker(2);
#else
    else {
      std::cout << "klt and hybrid model-based tracker are not available "
                   "since visp_klt module is missing"
                << std::endl;
      return 0;
    }
#endif

    if (opt_tracker == 0)
      dynamic_cast<vpMbEdgeMultiTracker *>(tracker)->loadConfigFile(objectname_left + ".xml",
                                                                    objectname_right + ".xml");
#if defined(VISP_HAVE_MODULE_KLT)
    else if (opt_tracker == 1)
      dynamic_cast<vpMbKltMultiTracker *>(tracker)->loadConfigFile(objectname_left + ".xml", objectname_right + ".xml");
    else
      dynamic_cast<vpMbEdgeKltMultiTracker *>(tracker)->loadConfigFile(objectname_left + ".xml",
                                                                       objectname_right + ".xml");
#endif

    vpCameraParameters cam_left, cam_right;
    if (opt_tracker == 0)
      dynamic_cast<vpMbEdgeMultiTracker *>(tracker)->getCameraParameters(cam_left, cam_right);
#if defined(VISP_HAVE_MODULE_KLT)
    else if (opt_tracker == 1)
      dynamic_cast<vpMbKltMultiTracker *>(tracker)->getCameraParameters(cam_left, cam_right);
    else
      dynamic_cast<vpMbEdgeKltMultiTracker *>(tracker)->getCameraParameters(cam_left, cam_right);
#endif

    tracker->loadModel("teabox.cao");
    tracker->setDisplayFeatures(true);

    vpHomogeneousMatrix cRightMcLeft;
    std::ifstream file_cRightMcLeft("cRightMcLeft.txt");
    cRightMcLeft.load(file_cRightMcLeft);

    std::map<std::string, vpHomogeneousMatrix> mapOfCameraTransformationMatrix;
    mapOfCameraTransformationMatrix["Camera1"] = vpHomogeneousMatrix();
    mapOfCameraTransformationMatrix["Camera2"] = cRightMcLeft;

    if (opt_tracker == 0)
      dynamic_cast<vpMbEdgeMultiTracker *>(tracker)->setCameraTransformationMatrix(mapOfCameraTransformationMatrix);
#if defined(VISP_HAVE_MODULE_KLT)
    else if (opt_tracker == 1)
      dynamic_cast<vpMbKltMultiTracker *>(tracker)->setCameraTransformationMatrix(mapOfCameraTransformationMatrix);
    else
      dynamic_cast<vpMbEdgeKltMultiTracker *>(tracker)->setCameraTransformationMatrix(mapOfCameraTransformationMatrix);
#endif

#ifndef VISP_HAVE_XML2
    std::cout << "\n**********************************************************\n"
              << "Warning: we are not able to load the tracker settings from\n"
              << "the xml config files since ViSP is not built with libxml2\n"
              << "3rd party. As a consequence, the tracking may fail !"
              << "\n**********************************************************\n"
              << std::endl;
#endif

    if (opt_tracker == 0)
      dynamic_cast<vpMbEdgeMultiTracker *>(tracker)->initClick(I_left, I_right, objectname_left + ".init",
                                                               objectname_right + ".init", true);
#if defined(VISP_HAVE_MODULE_KLT)
    else if (opt_tracker == 1)
      dynamic_cast<vpMbKltMultiTracker *>(tracker)->initClick(I_left, I_right, objectname_left + ".init",
                                                              objectname_right + ".init", true);
    else
      dynamic_cast<vpMbEdgeKltMultiTracker *>(tracker)->initClick(I_left, I_right, objectname_left + ".init",
                                                                  objectname_right + ".init", true);
#endif

    vpHomogeneousMatrix cLeftMo, cRightMo;
    while (!g_left.end() && !g_right.end()) {
      g_left.acquire(I_left);
      g_right.acquire(I_right);
      vpDisplay::display(I_left);
      vpDisplay::display(I_right);

      if (opt_tracker == 0)
        dynamic_cast<vpMbEdgeMultiTracker *>(tracker)->track(I_left, I_right);
#if defined(VISP_HAVE_MODULE_KLT)
      else if (opt_tracker == 1)
        dynamic_cast<vpMbKltMultiTracker *>(tracker)->track(I_left, I_right);
      else
        dynamic_cast<vpMbEdgeKltMultiTracker *>(tracker)->track(I_left, I_right);
#endif

      if (opt_tracker == 0)
        dynamic_cast<vpMbEdgeMultiTracker *>(tracker)->getPose(cLeftMo, cRightMo);
#if defined(VISP_HAVE_MODULE_KLT)
      else if (opt_tracker == 1)
        dynamic_cast<vpMbKltMultiTracker *>(tracker)->getPose(cLeftMo, cRightMo);
      else
        dynamic_cast<vpMbEdgeKltMultiTracker *>(tracker)->getPose(cLeftMo, cRightMo);
#endif

      if (opt_tracker == 0)
        dynamic_cast<vpMbEdgeMultiTracker *>(tracker)->display(I_left, I_right, cLeftMo, cRightMo, cam_left, cam_right,
                                                               vpColor::red, 2);
#if defined(VISP_HAVE_MODULE_KLT)
      else if (opt_tracker == 1)
        dynamic_cast<vpMbKltMultiTracker *>(tracker)->display(I_left, I_right, cLeftMo, cRightMo, cam_left, cam_right,
                                                              vpColor::red, 2);
      else
        dynamic_cast<vpMbEdgeKltMultiTracker *>(tracker)->display(I_left, I_right, cLeftMo, cRightMo, cam_left,
                                                                  cam_right, vpColor::red, 2);
#endif

      vpDisplay::displayFrame(I_left, cLeftMo, cam_left, 0.025, vpColor::none, 3);
      vpDisplay::displayFrame(I_right, cRightMo, cam_right, 0.025, vpColor::none, 3);
      vpDisplay::displayText(I_left, 10, 10, "A click to exit...", vpColor::red);
      vpDisplay::flush(I_left);
      vpDisplay::flush(I_right);

      if (vpDisplay::getClick(I_left, false)) {
        break;
      }
    }

    delete display_left;
    delete display_right;
    delete tracker;
  } catch (const vpException &e) {
    std::cerr << "Catch a ViSP exception: " << e.getMessage() << std::endl;
  }
#else
  (void)argc;
  (void)argv;
  std::cout << "Install OpenCV and rebuild ViSP to use this example." << std::endl;
#endif
}
#else
int main()
{
  std::cout << "Nothing to run, deprecated tutorial." << std::endl;
  return 0;
}
#endif // VISP_BUILD_DEPRECATED_FUNCTIONS

Explanation of the code

The previous source code shows how to use model-based tracking on stereo images following the standard procedure to configure the tracker:

  • construct the tracker
  • initialize the tracker by loading a configuration file
  • load a 3D model
  • process the tracking
  • get the pose and display the model in the image
Warning
The xml2 library, used to load the configuration file, is required to build the tutorial example. OpenCV is also required and the KLT module has to be enabled to use the KLT functionality.

Please refer to Tutorial: Markerless model-based tracking (deprecated) for explanations about the configuration parameters (Tracker settings) and how to model an object in a ViSP-compatible format (CAD model in cao format).

To test the three kinds of trackers, only the vpMbEdgeKltMultiTracker.h header is required, as the others (vpMbEdgeMultiTracker.h and vpMbKltMultiTracker.h) are already included by the hybrid tracker header.

#include <visp3/mbt/vpMbEdgeKltMultiTracker.h>
#include <visp3/mbt/vpMbEdgeMultiTracker.h>

We declare two images for the left and right camera views.

vpImage<unsigned char> I_left, I_right;

To construct a stereo tracker, we have to specify the desired number of cameras (in our case 2) as an argument to the tracker constructor:

vpMbTracker *tracker;
if (opt_tracker == 0)
  tracker = new vpMbEdgeMultiTracker(2);
#ifdef VISP_HAVE_MODULE_KLT
else if (opt_tracker == 1)
  tracker = new vpMbKltMultiTracker(2);
else
  tracker = new vpMbEdgeKltMultiTracker(2);
#else
else {
  std::cout << "klt and hybrid model-based tracker are not available "
               "since visp_klt module is missing"
            << std::endl;
  return 0;
}
#endif
Note
We use a pointer to vpMbTracker to be able to construct the tracker according to the desired type (edge, KLT or hybrid), but you could directly declare the desired tracker class in your program.
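For instance, if you only need the edge tracker, declaring the concrete class directly avoids the dynamic casts used in the rest of this example (a minimal sketch reusing the names of this tutorial):

vpMbEdgeMultiTracker tracker(2); // stereo edge tracker declared directly
tracker.loadConfigFile(objectname_left + ".xml", objectname_right + ".xml");
tracker.loadModel("teabox.cao");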

All the configuration parameters for the tracker are stored in xml configuration files. To load the different files, we use:

if (opt_tracker == 0)
  dynamic_cast<vpMbEdgeMultiTracker *>(tracker)->loadConfigFile(objectname_left + ".xml",
                                                                objectname_right + ".xml");
#if defined(VISP_HAVE_MODULE_KLT)
else if (opt_tracker == 1)
  dynamic_cast<vpMbKltMultiTracker *>(tracker)->loadConfigFile(objectname_left + ".xml", objectname_right + ".xml");
else
  dynamic_cast<vpMbEdgeKltMultiTracker *>(tracker)->loadConfigFile(objectname_left + ".xml",
                                                                   objectname_right + ".xml");
#endif
Note
The dynamic cast is necessary to access the specific methods that are not declared in vpMbTracker.

The following code retrieves the intrinsic camera parameters:

vpCameraParameters cam_left, cam_right;
if (opt_tracker == 0)
  dynamic_cast<vpMbEdgeMultiTracker *>(tracker)->getCameraParameters(cam_left, cam_right);
#if defined(VISP_HAVE_MODULE_KLT)
else if (opt_tracker == 1)
  dynamic_cast<vpMbKltMultiTracker *>(tracker)->getCameraParameters(cam_left, cam_right);
else
  dynamic_cast<vpMbEdgeKltMultiTracker *>(tracker)->getCameraParameters(cam_left, cam_right);
#endif

To load the 3D object model, we use:

tracker->loadModel("teabox.cao");

We can also use the following setting that enables the display of the features used during the tracking:

tracker->setDisplayFeatures(true);

We have to set the transformation matrices between the cameras and the reference camera to be able to compute the control law in a reference camera frame. In the code we consider the left camera with the name "Camera1" as the reference camera. For the right camera with the name "Camera2" we have to set the transformation ( $ ^{c_{right}}{\bf M}_{c_{left}} $). This transformation is read from the cRightMcLeft.txt file. Since our left and right cameras are not moving, this transformation is constant and does not have to be updated in the tracking loop:

Note
For the reference camera, the camera transformation matrix has to be specified as an identity homogeneous matrix (no rotation, no translation). By default the vpHomogeneousMatrix constructor builds an identity matrix.
std::map<std::string, vpHomogeneousMatrix> mapOfCameraTransformationMatrix;
mapOfCameraTransformationMatrix["Camera1"] = vpHomogeneousMatrix();
mapOfCameraTransformationMatrix["Camera2"] = cRightMcLeft;
if (opt_tracker == 0)
  dynamic_cast<vpMbEdgeMultiTracker *>(tracker)->setCameraTransformationMatrix(mapOfCameraTransformationMatrix);
#if defined(VISP_HAVE_MODULE_KLT)
else if (opt_tracker == 1)
  dynamic_cast<vpMbKltMultiTracker *>(tracker)->setCameraTransformationMatrix(mapOfCameraTransformationMatrix);
else
  dynamic_cast<vpMbEdgeKltMultiTracker *>(tracker)->setCameraTransformationMatrix(mapOfCameraTransformationMatrix);
#endif

The initial pose is set by clicking on specific points in the image:

if (opt_tracker == 0)
  dynamic_cast<vpMbEdgeMultiTracker *>(tracker)->initClick(I_left, I_right, objectname_left + ".init",
                                                           objectname_right + ".init", true);
#if defined(VISP_HAVE_MODULE_KLT)
else if (opt_tracker == 1)
  dynamic_cast<vpMbKltMultiTracker *>(tracker)->initClick(I_left, I_right, objectname_left + ".init",
                                                          objectname_right + ".init", true);
else
  dynamic_cast<vpMbEdgeKltMultiTracker *>(tracker)->initClick(I_left, I_right, objectname_left + ".init",
                                                              objectname_right + ".init", true);
#endif

The poses for the left and right views have to be declared:

vpHomogeneousMatrix cLeftMo, cRightMo;

The tracking is done by:

if (opt_tracker == 0)
  dynamic_cast<vpMbEdgeMultiTracker *>(tracker)->track(I_left, I_right);
#if defined(VISP_HAVE_MODULE_KLT)
else if (opt_tracker == 1)
  dynamic_cast<vpMbKltMultiTracker *>(tracker)->track(I_left, I_right);
else
  dynamic_cast<vpMbEdgeKltMultiTracker *>(tracker)->track(I_left, I_right);
#endif

The poses for each camera are retrieved with:

if (opt_tracker == 0)
  dynamic_cast<vpMbEdgeMultiTracker *>(tracker)->getPose(cLeftMo, cRightMo);
#if defined(VISP_HAVE_MODULE_KLT)
else if (opt_tracker == 1)
  dynamic_cast<vpMbKltMultiTracker *>(tracker)->getPose(cLeftMo, cRightMo);
else
  dynamic_cast<vpMbEdgeKltMultiTracker *>(tracker)->getPose(cLeftMo, cRightMo);
#endif

To display the model with the estimated pose, we use:

if (opt_tracker == 0)
  dynamic_cast<vpMbEdgeMultiTracker *>(tracker)->display(I_left, I_right, cLeftMo, cRightMo, cam_left, cam_right,
                                                         vpColor::red, 2);
#if defined(VISP_HAVE_MODULE_KLT)
else if (opt_tracker == 1)
  dynamic_cast<vpMbKltMultiTracker *>(tracker)->display(I_left, I_right, cLeftMo, cRightMo, cam_left, cam_right,
                                                        vpColor::red, 2);
else
  dynamic_cast<vpMbEdgeKltMultiTracker *>(tracker)->display(I_left, I_right, cLeftMo, cRightMo, cam_left,
                                                            cam_right, vpColor::red, 2);
#endif

Finally, do not forget to delete the pointers:

delete display_left;
delete display_right;
delete tracker;

Advanced

How to deal with moving cameras

The principle remains the same as with static cameras. You have to supply the camera transformation matrices to the tracker each time the cameras move, before calling the track method:

mapOfCamTrans["Camera1"] = vpHomogeneousMatrix(); //The Camera1 is the reference camera.
mapOfCamTrans["Camera2"] = get_c2Mc1(); //Get the new transformation between the two cameras.
tracker.setCameraTransformationMatrix(mapOfCamTrans);
tracker.track(mapOfImg);

This information can be obtained from the robot kinematics or from different kinds of sensors.
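Schematically, the tracking loop then interleaves the extrinsic update and the tracking, as in the following sketch (end_of_sequence, acquire() and get_c2Mc1() are placeholders for your own grabbing and kinematics code):

while (!end_of_sequence) {
  acquire(mapOfImg); // grab the new image of each camera (placeholder)

  // Update the transformation between the moving cameras before tracking,
  // e.g. from the robot kinematics (get_c2Mc1() is a placeholder).
  mapOfCamTrans["Camera2"] = get_c2Mc1();
  tracker.setCameraTransformationMatrix(mapOfCamTrans);

  tracker.track(mapOfImg);
  tracker.getPose(mapOfPoses);
}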

The following video shows the stereo hybrid model-based tracking based on object edges and KLT features located on visible faces. The result of the tracking is then used to servo the Romeo humanoid robot eyes to gaze toward the object. The images were captured by cameras located in the Romeo eyes.

Next tutorial

You are now ready to see the next Tutorial: Markerless generic model-based tracking using a color camera.