MbKltTracker

class MbKltTracker(self)

Bases: MbTracker

Model based tracker using only KLT.

Warning

This class is deprecated for user usage; you should rather use the high-level vpMbGenericTracker class instead.

Warning

This class is only available if OpenCV is installed and used.

The tutorial-tracking-mb-deprecated is a good starting point to use this class.

The tracker requires knowledge of the 3D model, which can be provided in a VRML (.wrl) or CAO (.cao) file. The CAO format is described in loadCAOModel() . The tracker may also use an XML file to tune its behavior and an init file to compute the pose at the very first image.

The following code shows the simplest way to use the tracker.

#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/io/vpImageIo.h>
#include <visp3/mbt/vpMbKltTracker.h>

int main()
{
#if defined VISP_HAVE_OPENCV
  vpMbKltTracker tracker; // Create a model based tracker via KLT points.
  vpImage<unsigned char> I;
  vpHomogeneousMatrix cMo; // Pose computed using the tracker.
  vpCameraParameters cam;

  // Acquire an image
  vpImageIo::read(I, "cube.pgm");

#if defined(VISP_HAVE_X11)
  vpDisplayX display;
  display.init(I,100,100,"Mb Klt Tracker");
#endif

  tracker.loadConfigFile("cube.xml"); // Load the configuration of the tracker
  tracker.getCameraParameters(cam);   // Get the camera parameters used by the tracker (from the configuration file).
  tracker.loadModel("cube.cao");      // Load the 3d model in cao format. No 3rd party library is required
  // Initialise manually the pose by clicking on the image points associated to the 3d points contained in the
  // cube.init file.
  tracker.initClick(I, "cube.init");

  while(true){
    // Acquire a new image
    vpDisplay::display(I);
    tracker.track(I);     // Track the object on this image
    tracker.getPose(cMo); // Get the pose

    tracker.display(I, cMo, cam, vpColor::darkRed, 1); // Display the model at the computed pose.
    vpDisplay::flush(I);
  }

  return 0;
#endif
}
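
Since this page documents the Python bindings, the same minimal loop can also be sketched in Python. This is an illustrative, untested sketch; it assumes the public modules visp.core, visp.io and visp.mbt mirror the visp._visp paths used in the signatures below, and that the cube.* files are available:

# Minimal Python sketch of the C++ example above (assumed module layout).
from visp.core import CameraParameters, ImageGray
from visp.io import ImageIo
from visp.mbt import MbKltTracker

tracker = MbKltTracker()            # Model based tracker using KLT points
I = ImageGray()
cam = CameraParameters()

ImageIo.read(I, "cube.pgm")         # Acquire an image

tracker.loadConfigFile("cube.xml")  # Load the configuration of the tracker
tracker.getCameraParameters(cam)    # Camera parameters come from the XML file
tracker.loadModel("cube.cao")       # Load the 3D model in cao format
tracker.initClick(I, "cube.init")   # Click the 3D points listed in cube.init

while True:
    # A new image should be acquired here
    tracker.track(I)                # Track the object on this image
    cMo = tracker.getPose()         # Get the estimated pose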

The tracker can also be used without display. In that case the initial pose must be known (for example if the object is always at the same initial pose) or computed using another method:

#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/io/vpImageIo.h>
#include <visp3/mbt/vpMbKltTracker.h>

int main()
{
#if defined VISP_HAVE_OPENCV
  vpMbKltTracker tracker; // Create a model based tracker via KLT points.
  vpImage<unsigned char> I;
  vpHomogeneousMatrix cMo; // Pose used in entry (has to be defined), then computed using the tracker.

  //acquire an image
  vpImageIo::read(I, "cube.pgm"); // Example of acquisition

  tracker.loadConfigFile("cube.xml"); // Load the configuration of the tracker
  // Load the 3D model. Reading a .wrl model requires Coin3D; if Coin3D is not installed, a .cao file can be used.
  tracker.loadModel("cube.cao");
  tracker.initFromPose(I, cMo); // initialize the tracker with the given pose.

  while(true){
    // acquire a new image
    tracker.track(I); // track the object on this image
    tracker.getPose(cMo); // get the pose
  }

  return 0;
#endif
}

Finally, the tracker can also be used not to track an object but simply to display a model at a given pose:

#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/io/vpImageIo.h>
#include <visp3/mbt/vpMbKltTracker.h>

int main()
{
#if defined VISP_HAVE_OPENCV
  vpMbKltTracker tracker; // Create a model based tracker via KLT points.
  vpImage<unsigned char> I;
  vpHomogeneousMatrix cMo; // Pose used to display the model.
  vpCameraParameters cam;

  // Acquire an image
  vpImageIo::read(I, "cube.pgm");

#if defined(VISP_HAVE_X11)
  vpDisplayX display;
  display.init(I,100,100,"Mb Klt Tracker");
#endif

  tracker.loadConfigFile("cube.xml"); // Load the configuration of the tracker
  tracker.getCameraParameters(cam); // Get the camera parameters used by the tracker (from the configuration file).
  // Load the 3D model. Reading a .wrl model requires Coin3D; if Coin3D is not installed, a .cao file can be used.
  tracker.loadModel("cube.cao");

  while(true){
    // acquire a new image
    // Get the pose using any method
    vpDisplay::display(I);
    tracker.display(I, cMo, cam, vpColor::darkRed, 1, true); // Display the model at the computed pose.
    vpDisplay::flush(I);
  }

  return 0;
#endif
}

Methods

__init__

addCircle

Add a circle to the list of circles.

display

Overloaded function.

getError

Return the error vector \((s-s^*)\) reached after the virtual visual servoing process used to estimate the pose.

getFeaturesCircle

Return the address of the circle feature list.

getFeaturesKlt

Return the address of the Klt feature list.

getFeaturesKltCylinder

Return the address of the cylinder feature list.

getKltImagePoints

Get the current list of KLT points.

getKltImagePointsWithId

Get the current list of KLT points and their id.

getKltMaskBorder

Get the erosion of the mask used on the Model faces.

getKltNbPoints

Get the current number of klt points.

getKltOpencv

Get the klt tracker at the current state.

getKltPoints

Get the current list of KLT points.

getKltThresholdAcceptation

Get the threshold for the acceptation of a point.

getMaskBorder

Deprecated. Use getKltMaskBorder() instead.

getModelForDisplay

Return a list of primitives parameters to display the model at a given pose and camera parameters.

getNbKltPoints

Deprecated. Use getKltNbPoints() instead.

getRobustWeights

Return the weights vector \(w_i\) computed by the robust scheme.

getThresholdAcceptation

Deprecated. Use getKltThresholdAcceptation() instead.

loadConfigFile

Overloaded function.

reInitModel

Re-initialize the model used by the tracker.

resetTracker

Reset the tracker.

setCameraParameters

Overloaded function.

setKltMaskBorder

Set the erosion of the mask used on the Model faces.

setKltOpencv

Set the new value of the klt tracker.

setKltThresholdAcceptation

Set the threshold for the acceptation of a point.

setMaskBorder

Set the erosion of the mask used on the Model faces.

setOgreVisibilityTest

Overloaded function.

setPose

Overloaded function.

setProjectionErrorComputation

Overloaded function.

setScanLineVisibilityTest

Overloaded function.

setThresholdAcceptation

Deprecated. Use setKltThresholdAcceptation() instead.

setUseKltTracking

Set if the polygons that have the given name have to be considered during the tracking phase.

testTracking

Test the quality of the tracking.

track

Overloaded function.

Inherited Methods

setClipping

Specify which clipping to use.

setStopCriteriaEpsilon

Set the minimal error (previous / current estimation) to determine if there is convergence or not.

setOgreShowConfigDialog

Enable/Disable the appearance of Ogre config dialog on startup.

getClipping

Get the clipping used and defined in vpPolygon3D::vpMbtPolygonClippingType.

setDisplayFeatures

Enable to display the features.

getPose

Overloaded function.

setLambda

Set the value of the gain used to compute the control law.

getNearClippingDistance

Get the near distance for clipping.

getProjectionError

Get the error angle between the gradient direction of the model features projected at the resulting pose and their normal.

getFarClippingDistance

Get the far distance for clipping.

getPolygonFaces

Get the list of polygons faces (a vpPolygon representing the projection of the face in the image and a list of face corners in 3D), with the possibility to order by distance to the camera or to use the visibility check to consider if the polygon face must be retrieved or not.

setProjectionErrorMovingEdge

Set Moving-Edges parameters for projection error computation.

setNearClippingDistance

Set the near distance for clipping.

LEVENBERG_MARQUARDT_OPT

getOptimizationMethod

Get the optimization method used during the tracking.

setMaxIter

Set the maximum iteration of the virtual visual servoing stage.

getAngleDisappear

Return the angle used to test polygons disappearance.

getMaxIter

Get the maximum number of iterations of the virtual visual servoing stage.

setFarClippingDistance

Set the far distance for clipping.

getNbPolygon

Get the number of polygons (faces) representing the object to track.

initFromPoints

Overloaded function.

setLod

Set the flag to consider if the level of detail (LOD) is used.

setOptimizationMethod

GAUSS_NEWTON_OPT

setMask

getInitialMu

Get the initial value of mu used in the Levenberg Marquardt optimization loop.

setEstimatedDoF

Set a 6-dim column vector representing the degrees of freedom in the object frame that are estimated by the tracker.

setMinPolygonAreaThresh

Set the minimum polygon area to be considered as visible in the LOD case.

computeCurrentProjectionError

Compute the projection error given an input image, a camera pose and camera parameters.

setMinLineLengthThresh

Set the threshold for the minimum line length to be considered as visible in the LOD case.

initFromPose

Overloaded function.

initClick

Overloaded function.

getFaces

Return a reference to the faces structure.

savePose

Save the pose in the given filename.

getLambda

Get the value of the gain used to compute the control law.

setAngleDisappear

Set the angle used to test polygons disappearance.

getCameraParameters

Get the camera parameters.

loadModel

Load a 3D model from the file in parameter.

setProjectionErrorKernelSize

Set kernel size used for projection error computation.

getCovarianceMatrix

Get the covariance matrix.

setProjectionErrorDisplay

Display or not gradient and model orientation when computing the projection error.

getAngleAppear

Return the angle used to test polygons appearance.

setAngleAppear

Set the angle used to test polygons appearance.

getStopCriteriaEpsilon

setProjectionErrorDisplayArrowLength

Arrow length used to display gradient and model orientation for projection error computation.

setProjectionErrorDisplayArrowThickness

Arrow thickness used to display gradient and model orientation for projection error computation.

getEstimatedDoF

Get a 1x6 vpColVector representing the estimated degrees of freedom.

MbtOptimizationMethod

Values:

setPoseSavingFilename

Set the filename used to save the initial pose computed using the initClick() method.

setCovarianceComputation

Set if the covariance matrix has to be computed.

setInitialMu

Set the initial value of mu for the Levenberg Marquardt optimization loop.

Operators

__annotations__

__doc__

__init__

__module__

Attributes

GAUSS_NEWTON_OPT

LEVENBERG_MARQUARDT_OPT

__annotations__

class MbtOptimizationMethod(self, value: int)

Bases: pybind11_object

Values:

  • GAUSS_NEWTON_OPT

  • LEVENBERG_MARQUARDT_OPT

__and__(self, other: object) object
__eq__(self, other: object) bool
__ge__(self, other: object) bool
__getstate__(self) int
__gt__(self, other: object) bool
__hash__(self) int
__index__(self) int
__init__(self, value: int)
__int__(self) int
__invert__(self) object
__le__(self, other: object) bool
__lt__(self, other: object) bool
__ne__(self, other: object) bool
__or__(self, other: object) object
__rand__(self, other: object) object
__ror__(self, other: object) object
__rxor__(self, other: object) object
__setstate__(self, state: int) None
__xor__(self, other: object) object
property name : str
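
As a brief, hedged illustration, these enum values can be passed to setOptimizationMethod() (the import path is assumed from the signatures on this page):

# Select the optimization method used during tracking (sketch).
from visp.mbt import MbKltTracker

tracker = MbKltTracker()
# Switch from the Gauss-Newton to the Levenberg-Marquardt approach
tracker.setOptimizationMethod(MbKltTracker.LEVENBERG_MARQUARDT_OPT)
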
__init__(self)
addCircle(self: visp._visp.mbt.MbKltTracker, P1: visp._visp.core.Point, P2: visp._visp.core.Point, P3: visp._visp.core.Point, r: float, name: str = "") None

Add a circle to the list of circles.

Parameters:
P1

Center of the circle.

P2

One of two points on the plane containing the circle. Together with the center of the circle, the three points define the plane that contains the circle.

P3

One of two points on the plane containing the circle. Together with the center of the circle, the three points define the plane that contains the circle.

r

Radius of the circle.

name

Name of the circle.

computeCurrentProjectionError(self, I: visp._visp.core.ImageGray, _cMo: visp._visp.core.HomogeneousMatrix, _cam: visp._visp.core.CameraParameters) float

Compute the projection error given an input image, a camera pose and camera parameters. This projection error uses locations sampled exactly where the model is projected using the camera pose and intrinsic parameters. You may want to use setProjectionErrorComputation and getProjectionError instead, to get a projection error computed at the ME locations after a call to track() . It works similarly to the vpMbTracker::getProjectionError function: it returns the error angle between the gradient direction of the model features projected at the resulting pose and their normal. The error is expressed in degrees, between 0 and 90.

Note

See setProjectionErrorComputation

Note

See getProjectionError

Parameters:
I: visp._visp.core.ImageGray

Input grayscale image.

_cMo: visp._visp.core.HomogeneousMatrix

Camera pose.

_cam: visp._visp.core.CameraParameters

Camera parameters.
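
For instance, the projection error can be evaluated at the pose estimated by the last call to track() . This hedged Python sketch assumes I, cam and tracker were set up as in the examples above:

# Evaluate the projection error at the estimated pose (sketch).
tracker.track(I)
cMo = tracker.getPose()
error_deg = tracker.computeCurrentProjectionError(I, cMo, cam)
print(f"Projection error: {error_deg:.2f} deg (0 = best, 90 = worst)")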

display(*args, **kwargs)

Overloaded function.

  1. display(self: visp._visp.mbt.MbKltTracker, I: visp._visp.core.ImageGray, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, col: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None

Display the 3D model at a given position using the given camera parameters

Parameters:
I

The image.

cMo

Pose used to project the 3D model into the image.

cam

The camera parameters.

col

The desired color.

thickness

The thickness of the lines.

displayFullModel

Boolean to say if all the model has to be displayed, even the faces that are not visible.

  2. display(self: visp._visp.mbt.MbKltTracker, I: visp._visp.core.ImageRGBa, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, col: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None

Display the 3D model at a given position using the given camera parameters

Parameters:
I

The color image.

cMo

Pose used to project the 3D model into the image.

cam

The camera parameters.

col

The desired color.

thickness

The thickness of the lines.

displayFullModel

Boolean to say if all the model has to be displayed, even the faces that are not visible.

getAngleAppear(self) float

Return the angle used to test polygons appearance.

getAngleDisappear(self) float

Return the angle used to test polygons disappearance.

getCameraParameters(self, cam: visp._visp.core.CameraParameters) None

Get the camera parameters.

Parameters:
cam: visp._visp.core.CameraParameters

copy of the camera parameters used by the tracker.

getClipping(self) int

Get the clipping used and defined in vpPolygon3D::vpMbtPolygonClippingType.

Returns:

Clipping flags.

getCovarianceMatrix(self) visp._visp.core.Matrix

Get the covariance matrix. This matrix is only computed if setCovarianceComputation() is turned on.

Note

See setCovarianceComputation()

getError(self) visp._visp.core.ColVector

Return the error vector \((s-s^*)\) reached after the virtual visual servoing process used to estimate the pose.

The following example shows how to use this function to compute the norm of the residual and the norm of the residual normalized by the number of features that are tracked:

tracker.track(I);
std::cout << "Residual: " << sqrt( (tracker.getError()).sumSquare()) << std::endl;
std::cout << "Residual normalized: "
          << sqrt( (tracker.getError()).sumSquare())/tracker.getError().size() << std::endl;

Note

See getRobustWeights()

getEstimatedDoF(self) visp._visp.core.ColVector

Get a 1x6 vpColVector representing the estimated degrees of freedom.

  • vpColVector [0] = 1 if translation along X is estimated, 0 otherwise;

  • vpColVector [1] = 1 if translation along Y is estimated, 0 otherwise;

  • vpColVector [2] = 1 if translation along Z is estimated, 0 otherwise;

  • vpColVector [3] = 1 if rotation along X is estimated, 0 otherwise;

  • vpColVector [4] = 1 if rotation along Y is estimated, 0 otherwise;

  • vpColVector [5] = 1 if rotation along Z is estimated, 0 otherwise;

Returns:

1x6 vpColVector representing the estimated degrees of freedom.

getFaces(self) vpMbHiddenFaces<vpMbtPolygon>

Return a reference to the faces structure.

getFarClippingDistance(self) float

Get the far distance for clipping.

Returns:

Far clipping value.

getFeaturesCircle(self) list[visp._visp.mbt.MbtDistanceCircle]

Return the address of the circle feature list.

getFeaturesKlt(self) list[visp._visp.mbt.MbtDistanceKltPoints]

Return the address of the Klt feature list.

getFeaturesKltCylinder(self) list[visp._visp.mbt.MbtDistanceKltCylinder]

Return the address of the cylinder feature list.

getInitialMu(self) float

Get the initial value of mu used in the Levenberg Marquardt optimization loop.

Returns:

the initial mu value.

getKltImagePoints(self) list[visp._visp.core.ImagePoint]

Get the current list of KLT points.

Returns:

the list of KLT points through vpKltOpencv .

getKltImagePointsWithId(self) dict[int, visp._visp.core.ImagePoint]

Get the current list of KLT points and their id.

Returns:

the list of KLT points and their id through vpKltOpencv .
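
As a hedged sketch, the returned dictionary can be iterated like any Python dict. This assumes a tracker that has already been initialized and tracked, and that ImagePoint exposes get_i() / get_j() as its C++ counterpart does:

# Inspect the tracked KLT points and their ids (sketch).
points_with_id = tracker.getKltImagePointsWithId()
for klt_id, ip in points_with_id.items():
    # get_i()/get_j() give the pixel coordinates (line, column)
    print(f"KLT point {klt_id}: i={ip.get_i():.1f}, j={ip.get_j():.1f}")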

getKltMaskBorder(self) int

Get the erosion of the mask used on the Model faces.

Returns:

The erosion.

getKltNbPoints(self) int

Get the current number of klt points.

Returns:

the number of features

getKltOpencv(self) visp._visp.klt.KltOpencv

Get the klt tracker at the current state.

Returns:

klt tracker.

getKltPoints(self) list[cv::Point_<float>]

Get the current list of KLT points.

Returns:

the list of KLT points through vpKltOpencv .

getKltThresholdAcceptation(self) float

Get the threshold for the acceptation of a point.

Returns:

threshold_outlier : Threshold for the weight below which a point is rejected.

getLambda(self) float

Get the value of the gain used to compute the control law.

Returns:

the value for the gain.

getMaskBorder(self) int

Deprecated. Use getKltMaskBorder() instead. Get the erosion of the mask used on the Model faces.

Returns:

The erosion.

getMaxIter(self) int

Get the maximum number of iterations of the virtual visual servoing stage.

Returns:

the number of iteration

getModelForDisplay(self, width: int, height: int, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, displayFullModel: bool = false) list[list[float]]

Return a list of primitives parameters to display the model at a given pose and camera parameters.

  • Line parameters are: <primitive id (here 0 for line)> , <pt_start.i()> , <pt_start.j()> , <pt_end.i()> , <pt_end.j()>

  • Ellipse parameters are: <primitive id (here 1 for ellipse)> , <pt_center.i()> , <pt_center.j()> , <n_20> , <n_11> , <n_02> where <n_ij> are the second order centered moments of the ellipse normalized by its area (i.e., such that \(n_{ij} = \mu_{ij}/a\) where \(\mu_{ij}\) are the centered moments and a the area).

Parameters:
width: int

Image width.

height: int

Image height.

cMo: visp._visp.core.HomogeneousMatrix

Pose used to project the 3D model into the image.

cam: visp._visp.core.CameraParameters

The camera parameters.

displayFullModel: bool = false

If true, the model is displayed even if it is not visible.
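
A hedged sketch of how the returned primitive lists might be decoded, following the parameter conventions described above (assumes cMo and cam are available):

# Decode the primitives returned by getModelForDisplay() (sketch).
primitives = tracker.getModelForDisplay(640, 480, cMo, cam)
for p in primitives:
    if p[0] == 0:    # line: <0, i_start, j_start, i_end, j_end>
        print(f"line ({p[1]:.0f}, {p[2]:.0f}) -> ({p[3]:.0f}, {p[4]:.0f})")
    elif p[0] == 1:  # ellipse: <1, i_center, j_center, n_20, n_11, n_02>
        print(f"ellipse centered at ({p[1]:.0f}, {p[2]:.0f})")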

getNbKltPoints(self) int

Deprecated. Use getKltNbPoints() instead. Get the current number of KLT points.

Returns:

the number of features

getNbPolygon(self) int

Get the number of polygons (faces) representing the object to track.

Returns:

Number of polygons.

getNearClippingDistance(self) float

Get the near distance for clipping.

Returns:

Near clipping value.

getOptimizationMethod(self) visp._visp.mbt.MbTracker.MbtOptimizationMethod

Get the optimization method used during the tracking. 0 = Gauss-Newton approach. 1 = Levenberg-Marquardt approach.

Returns:

Optimization method.

getPolygonFaces(self, orderPolygons: bool = true, useVisibility: bool = true, clipPolygon: bool = false) tuple[list[visp._visp.core.Polygon], list[list[visp._visp.core.Point]]]

Get the list of polygons faces (a vpPolygon representing the projection of the face in the image and a list of face corners in 3D), with the possibility to order by distance to the camera or to use the visibility check to consider if the polygon face must be retrieved or not.

Parameters:
orderPolygons: bool = true

If true, the resulting list is ordered from the nearest polygon faces to the farther.

useVisibility: bool = true

If true, only visible faces will be retrieved.

clipPolygon: bool = false

If true, the polygons will be clipped according to the clipping flags set in vpMbTracker .

Returns:

A pair object containing the list of vpPolygon and the list of face corners.
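
A hedged Python sketch of retrieving the visible faces (keyword names taken from the signature above):

# Retrieve visible faces ordered from nearest to farthest (sketch).
polygons, faces_3d = tracker.getPolygonFaces(orderPolygons=True, useVisibility=True)
print(f"{len(polygons)} visible faces")
for polygon, corners in zip(polygons, faces_3d):
    print(f"face with {len(corners)} corners in 3D")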

getPose(*args, **kwargs)

Overloaded function.

  1. getPose(self: visp._visp.mbt.MbTracker, cMo: visp._visp.core.HomogeneousMatrix) -> None

Get the current pose between the object and the camera. cMo is the matrix which can be used to express coordinates from the object frame to camera frame.

Parameters:
cMo

the pose

  2. getPose(self: visp._visp.mbt.MbTracker) -> visp._visp.core.HomogeneousMatrix

Get the current pose between the object and the camera. cMo is the matrix which can be used to express coordinates from the object frame to camera frame.

Returns:

the current pose

getProjectionError(self) float

Get the error angle between the gradient direction of the model features projected at the resulting pose and their normal. The error is expressed in degree between 0 and 90. This value is computed if setProjectionErrorComputation() is turned on.

Note

See setProjectionErrorComputation()

Returns:

the value for the error.

getRobustWeights(self) visp._visp.core.ColVector

Return the weights vector \(w_i\) computed by the robust scheme.

The following example shows how to use this function to compute the norm of the weighted residual and the norm of the weighted residual normalized by the sum of the weights associated to the features that are tracked:

tracker.track(I);
vpColVector w = tracker.getRobustWeights();
vpColVector e = tracker.getError();
vpColVector we(w.size());
for(unsigned int i=0; i<w.size(); i++)
  we[i] = w[i]*e[i];

std::cout << "Weighted residual: " << sqrt( (we).sumSquare() ) << std::endl;
std::cout << "Weighted residual normalized: " << sqrt( (we).sumSquare() ) / w.sum() << std::endl;

Note

See getError()

getStopCriteriaEpsilon(self) float
getThresholdAcceptation(self) float

Deprecated. Use getKltThresholdAcceptation() instead. Get the threshold for the acceptation of a point.

Returns:

threshold_outlier : Threshold for the weight below which a point is rejected.

initClick(*args, **kwargs)

Overloaded function.

  1. initClick(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, initFile: str, displayHelp: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None

Initialise the tracker by clicking in the image on the pixels that correspond to the 3D points whose coordinates are extracted from a file. In this file, comments starting with the # character are allowed. Notice that 3D point coordinates are expressed in meters in the object frame with their X, Y and Z values.

The structure of this file is the following:

# 3D point coordinates
4                 # Number of points in the file (minimum is four)
0.01 0.01 0.01    # \
...               #  | 3D coordinates in the object frame (X, Y, Z)
0.01 -0.01 -0.01  # /
Parameters:
I

Input grayscale image where the user has to click.

initFile

File containing the coordinates of at least 4 3D points the user has to click in the image. This file should have the .init extension (e.g. teabox.init).

displayHelp

Optional display of an image (.ppm, .pgm, .jpg, .jpeg, .png) that should have the same generic name as the init file (e.g. teabox.ppm or teabox.png). This image may be used to show where to click. This functionality is only available if the visp_io module is used.

T

optional transformation matrix to transform 3D points expressed in the original object frame to the desired object frame.

  2. initClick(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, initFile: str, displayHelp: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None

Initialise the tracker by clicking in the image on the pixels that correspond to the 3D points whose coordinates are extracted from a file. In this file, comments starting with the # character are allowed. Notice that 3D point coordinates are expressed in meters in the object frame with their X, Y and Z values.

The structure of this file is the following:

# 3D point coordinates
4                 # Number of points in the file (minimum is four)
0.01 0.01 0.01    # \
...               #  | 3D coordinates in the object frame (X, Y, Z)
0.01 -0.01 -0.01  # /
Parameters:
I_color

Input color image where the user has to click.

initFile

File containing the coordinates of at least 4 3D points the user has to click in the image. This file should have the .init extension (e.g. teabox.init).

displayHelp

Optional display of an image (.ppm, .pgm, .jpg, .jpeg, .png) that should have the same generic name as the init file (e.g. teabox.ppm or teabox.png). This image may be used to show where to click. This functionality is only available if the visp_io module is used.

T

optional transformation matrix to transform 3D points expressed in the original object frame to the desired object frame.

  3. initClick(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, points3D_list: list[visp._visp.core.Point], displayFile: str = "") -> None

Initialise the tracker by clicking in the image on the pixels that correspond to the 3D points whose coordinates are given in points3D_list .

Parameters:
I

Input grayscale image where the user has to click.

points3D_list

List of at least 4 3D points with coordinates expressed in meters in the object frame.

displayFile

Path to the image used to display the help. This image may be used to show where to click. This functionality is only available if visp_io module is used.

  4. initClick(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, points3D_list: list[visp._visp.core.Point], displayFile: str = "") -> None

Initialise the tracker by clicking in the image on the pixels that correspond to the 3D points whose coordinates are given in points3D_list .

Parameters:
I_color

Input color image where the user has to click.

points3D_list

List of at least 4 3D points with coordinates expressed in meters in the object frame.

displayFile

Path to the image used to display the help. This image may be used to show where to click. This functionality is only available if visp_io module is used.
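
A hedged Python sketch of the points3D_list overloads, with hypothetical coordinates and assuming Point exposes setWorldCoordinates() as in C++:

# Initialize by clicking 4 known 3D points given programmatically (sketch).
from visp.core import Point

points3D = []
for X, Y, Z in [(0.01, 0.01, 0.01), (0.01, -0.01, 0.01),
                (-0.01, -0.01, 0.01), (-0.01, 0.01, 0.01)]:
    p = Point()
    p.setWorldCoordinates(X, Y, Z)  # meters, object frame
    points3D.append(p)

tracker.initClick(I, points3D)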

initFromPoints(*args, **kwargs)

Overloaded function.

  1. initFromPoints(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, initFile: str) -> None

Initialise the tracker by reading 3D point coordinates and the corresponding 2D image point coordinates from a file. Comments starting with the # character are allowed. 3D point coordinates are expressed in meters in the object frame with X, Y and Z values. 2D point coordinates are expressed in pixel coordinates, giving first the line and then the column of the pixel in the image. The structure of this file is the following.

# 3D point coordinates
4                 # Number of 3D points in the file (minimum is four)
0.01 0.01 0.01    #  \
...               #  | 3D coordinates in meters in the object frame
0.01 -0.01 -0.01  # /
# corresponding 2D point coordinates
4                 # Number of image points in the file (has to be the same as the number of 3D points)
100 200           #  \
...               #  | 2D coordinates in pixel in the image
50 10             #  /
Parameters:
I

Input grayscale image

initFile

Path to the file containing all the points.

  2. initFromPoints(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, initFile: str) -> None

Initialise the tracker by reading 3D point coordinates and the corresponding 2D image point coordinates from a file. Comments starting with the # character are allowed. 3D point coordinates are expressed in meters in the object frame with X, Y and Z values. 2D point coordinates are expressed in pixel coordinates, giving first the line and then the column of the pixel in the image. The structure of this file is the following.

# 3D point coordinates
4                 # Number of 3D points in the file (minimum is four)
0.01 0.01 0.01    #  \
...               #  | 3D coordinates in meters in the object frame
0.01 -0.01 -0.01  # /
# corresponding 2D point coordinates
4                 # Number of image points in the file (has to be the same as the number of 3D points)
100 200           #  \
...               #  | 2D coordinates in pixel in the image
50 10             #  /
Parameters:
I_color

Input color image

initFile

Path to the file containing all the points.

  3. initFromPoints(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, points2D_list: list[visp._visp.core.ImagePoint], points3D_list: list[visp._visp.core.Point]) -> None

Initialise the tracking with the list of image points (points2D_list) and the list of corresponding 3D points (object frame) (points3D_list).

Parameters:
I

Input grayscale image

points2D_list

List of image points.

points3D_list

List of 3D points (object frame).

  4. initFromPoints(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, points2D_list: list[visp._visp.core.ImagePoint], points3D_list: list[visp._visp.core.Point]) -> None

Initialise the tracking with the list of image points (points2D_list) and the list of corresponding 3D points (object frame) (points3D_list).

Parameters:
I_color

Input color image

points2D_list

List of image points.

points3D_list

List of 3D points (object frame).
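
A hedged Python sketch of the list-based overloads, with hypothetical correspondences (ImagePoint takes the line i first, then the column j):

# Initialize from known 2D-3D correspondences (sketch).
from visp.core import ImagePoint, Point

points2D = [ImagePoint(i, j) for i, j in
            [(100, 200), (120, 400), (300, 400), (280, 210)]]
points3D = []
for X, Y, Z in [(0.01, 0.01, 0.01), (0.01, -0.01, 0.01),
                (-0.01, -0.01, 0.01), (-0.01, 0.01, 0.01)]:
    p = Point()
    p.setWorldCoordinates(X, Y, Z)  # meters, object frame
    points3D.append(p)

tracker.initFromPoints(I, points2D, points3D)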

initFromPose(*args, **kwargs)

Overloaded function.

  1. initFromPose(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, initFile: str) -> None

Initialise the tracking from a pose given in vpPoseVector format and read from the file initFile. The structure of this file is (without the comments):

// The six values of the pose vector
0.0000    //  \
0.0000    //  |
1.0000    //  | Example of value for the pose vector where Z = 1 meter
0.0000    //  |
0.0000    //  |
0.0000    //  /

where the first three lines refer to the translation and the last three to the rotation in thetaU parametrisation (see vpThetaUVector ).

Parameters:
I

Input grayscale image

initFile

Path to the file containing the pose.

  2. initFromPose(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, initFile: str) -> None

Initialise the tracking from a pose given in vpPoseVector format and read from the file initFile. The structure of this file is (without the comments):

// The six values of the pose vector
0.0000    //  \
0.0000    //  |
1.0000    //  | Example of value for the pose vector where Z = 1 meter
0.0000    //  |
0.0000    //  |
0.0000    //  /

where the first three lines refer to the translation and the last three to the rotation in thetaU parametrisation (see vpThetaUVector ).

Parameters:
I_color

Input color image

initFile

Path to the file containing the pose.

  3. initFromPose(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, cMo: visp._visp.core.HomogeneousMatrix) -> None

Initialise the tracking thanks to the pose.

Parameters:
I

Input grayscale image

cMo

Pose matrix.

  4. initFromPose(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, cMo: visp._visp.core.HomogeneousMatrix) -> None

Initialise the tracking thanks to the pose.

Parameters:
I_color

Input color image

cMo

Pose matrix.

  5. initFromPose(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, cPo: visp._visp.core.PoseVector) -> None

Initialise the tracking thanks to the pose vector.

Parameters:
I

Input grayscale image

cPo

Pose vector.

  6. initFromPose(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, cPo: visp._visp.core.PoseVector) -> None

Initialise the tracking thanks to the pose vector.

Parameters:
I_color

Input color image

cPo

Pose vector.

loadConfigFile(*args, **kwargs)

Overloaded function.

  1. loadConfigFile(self: visp._visp.mbt.MbKltTracker, configFile: str, verbose: bool = true) -> None

Load the xml configuration file. From the configuration file initialize the parameters corresponding to the objects: KLT, camera.

The XML configuration file has the following form:

<?xml version="1.0"?>
<conf>
  <camera>
    <width>640</width>
    <height>480</height>
    <u0>320</u0>
    <v0>240</v0>
    <px>686.24</px>
    <py>686.24</py>
  </camera>
  <face>
    <angle_appear>65</angle_appear>
    <angle_disappear>85</angle_disappear>
    <near_clipping>0.01</near_clipping>
    <far_clipping>0.90</far_clipping>
    <fov_clipping>1</fov_clipping>
  </face>
  <klt>
    <mask_border>10</mask_border>
    <max_features>10000</max_features>
    <window_size>5</window_size>
    <quality>0.02</quality>
    <min_distance>10</min_distance>
    <harris>0.02</harris>
    <size_block>3</size_block>
    <pyramid_lvl>3</pyramid_lvl>
  </klt>
</conf>
Parameters:
configFile

full name of the xml file.

verbose

Set true to activate the verbose mode, false otherwise.

  2. loadConfigFile(self: visp._visp.mbt.MbTracker, configFile: str, verbose: bool = true) -> None

Load a config file to parameterise the behavior of the tracker.

Virtual method to adapt to each tracker.

Parameters:
configFile

An xml config file to parse.

verbose

verbose flag.

loadModel(self, modelFile: str, verbose: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) None

Load a 3D model from the file in parameter. This file must either be a vrml file (.wrl) or a CAO file (.cao). CAO format is described in the loadCAOModel() method.

Parameters:
modelFile: str

the file containing the 3D model description. The extension of this file is either .wrl or .cao.

verbose: bool = false

verbose option to print additional information when loading CAO model files which include other CAO model files.

T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()

optional transformation matrix (currently only for .cao) to transform 3D points expressed in the original object frame to the desired object frame.

reInitModel(self, I: visp._visp.core.ImageGray, cad_name: str, cMo: visp._visp.core.HomogeneousMatrix, verbose: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) None

Re-initialize the model used by the tracker.

Parameters:
I: visp._visp.core.ImageGray

The image containing the object to initialize.

cad_name: str

Path to the file containing the 3D model description.

cMo: visp._visp.core.HomogeneousMatrix

The new vpHomogeneousMatrix between the camera and the new model

verbose: bool = false

verbose option to print additional information when loading CAO model files which include other CAO model files.

T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()

optional transformation matrix (currently only for .cao) to transform 3D points expressed in the original object frame to the desired object frame.

resetTracker(self) None

Reset the tracker. The model is removed and the pose is set to identity. The tracker needs to be initialized with a new model and a new pose.

savePose(self, filename: str) None

Save the pose in the given filename.

Parameters:
filename: str

Path to the file used to save the pose.

setAngleAppear(self, a: float) None

Set the angle used to test polygons appearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value lower than this parameter, the polygon is considered as appearing. The polygon will then be tracked.

Parameters:
a: float

new angle in radians.

setAngleDisappear(self, a: float) None

Set the angle used to test polygons disappearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value greater than this parameter, the polygon is considered as disappearing. The tracking of the polygon will then be stopped.

Parameters:
a: float

new angle in radians.

setCameraParameters(*args, **kwargs)

Overloaded function.

  1. setCameraParameters(self: visp._visp.mbt.MbKltTracker, cam: visp._visp.core.CameraParameters) -> None

Set the camera parameters.

Parameters:
cam

the new camera parameters.

  2. setCameraParameters(self: visp._visp.mbt.MbTracker, cam: visp._visp.core.CameraParameters) -> None

Set the camera parameters.

Parameters:
cam

The new camera parameters.

setClipping(self, flags: int) None

Specify which clipping to use.

Note

See vpMbtPolygonClipping

Parameters:
flags: int

New clipping flags.

setCovarianceComputation(self, flag: bool) None

Set if the covariance matrix has to be computed.

Note

See getCovarianceMatrix()

Parameters:
flag: bool

True if the covariance has to be computed, false otherwise. If computed, its value is available with getCovarianceMatrix()

setDisplayFeatures(self, displayF: bool) None

Enable to display the features. By features, we mean the moving edges (ME) and the KLT points if used.

Note that if present, the moving edges can be displayed with different colors:

  • If green : The ME is a good point.

  • If blue : The ME is removed because of a contrast problem during the tracking phase.

  • If purple : The ME is removed because of a threshold problem during the tracking phase.

  • If red : The ME is removed because it is rejected by the robust approach in the virtual visual servoing scheme.

Parameters:
displayF: bool

set it to true to display the features.

setEstimatedDoF(self, v: visp._visp.core.ColVector) None

Set a 6-dim column vector representing the degrees of freedom in the object frame that are estimated by the tracker. When all components are set to 1, all 6 dof are estimated. A short sketch is given after the list below.

Below we give the correspondence between the index of the vector and the considered dof:

  • v[0] = 1 if translation along X is estimated, 0 otherwise;

  • v[1] = 1 if translation along Y is estimated, 0 otherwise;

  • v[2] = 1 if translation along Z is estimated, 0 otherwise;

  • v[3] = 1 if rotation along X is estimated, 0 otherwise;

  • v[4] = 1 if rotation along Y is estimated, 0 otherwise;

  • v[5] = 1 if rotation along Z is estimated, 0 otherwise;
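
A hedged Python sketch that freezes the three rotational dof (assumes ColVector supports (size, fill_value) construction and item assignment as its C++ counterpart does):

# Estimate only the translational degrees of freedom (sketch).
from visp.core import ColVector

dof = ColVector(6, 1.0)  # start with all 6 dof estimated
dof[3] = 0.0             # freeze rotation along X
dof[4] = 0.0             # freeze rotation along Y
dof[5] = 0.0             # freeze rotation along Z
tracker.setEstimatedDoF(dof)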

setFarClippingDistance(self, dist: float) None

Set the far distance for clipping.

Parameters:
dist: float

Far clipping value.

setInitialMu(self, mu: float) None

Set the initial value of mu for the Levenberg Marquardt optimization loop.

Parameters:
mu: float

initial mu.

setKltMaskBorder(self, e: int) None

Set the erosion of the mask used on the Model faces.

Parameters:
e: int

The desired erosion.

setKltOpencv(self, t: visp._visp.klt.KltOpencv) None

Set the new value of the klt tracker.

Parameters:
t: visp._visp.klt.KltOpencv

Klt tracker containing the new values.

setKltThresholdAcceptation(self, th: float) None

Set the threshold for the acceptation of a point.

Parameters:
th: float

Threshold for the weight below which a point is rejected.

setLambda(self, gain: float) None

Set the value of the gain used to compute the control law.

Parameters:
gain: float

the desired value for the gain.

setLod(self: visp._visp.mbt.MbTracker, useLod: bool, name: str = "") None

Set the flag to consider if the level of detail (LOD) is used.

Note

See setMinLineLengthThresh() , setMinPolygonAreaThresh()

Parameters:
useLod

true if the level of detail must be used, false otherwise. When true, two parameters can be set, see setMinLineLengthThresh() and setMinPolygonAreaThresh() .

name

name of the face we want to modify the LOD parameter.

setMask(self: visp._visp.mbt.MbTracker, mask: vpImage<bool>) None
setMaskBorder(self, e: int) None

Set the erosion of the mask used on the Model faces.

Parameters:
e: int

The desired erosion.

setMaxIter(self, max: int) None

Set the maximum iteration of the virtual visual servoing stage.

Parameters:
max: int

the desired number of iteration

setMinLineLengthThresh(self: visp._visp.mbt.MbTracker, minLineLengthThresh: float, name: str = "") None

Set the threshold for the minimum line length to be considered as visible in the LOD case.

Note

See setLod() , setMinPolygonAreaThresh()

Parameters:
minLineLengthThresh

threshold for the minimum line length in pixel.

name

name of the face we want to modify the LOD threshold.

setMinPolygonAreaThresh(self: visp._visp.mbt.MbTracker, minPolygonAreaThresh: float, name: str = "") None

Set the minimum polygon area to be considered as visible in the LOD case.

Note

See setLod() , setMinLineLengthThresh()

Parameters:
minPolygonAreaThresh

threshold for the minimum polygon area in pixel.

name

name of the face we want to modify the LOD threshold.

setNearClippingDistance(self, dist: float) None

Set the near distance for clipping.

Parameters:
dist: float

Near clipping value.

setOgreShowConfigDialog(self, showConfigDialog: bool) None

Enable/Disable the appearance of Ogre config dialog on startup.

Warning

This method has only effect when Ogre is used and Ogre visibility test is enabled using setOgreVisibilityTest() with true parameter.

Parameters:
showConfigDialog: bool

if true, shows Ogre dialog window (used to set Ogre rendering options) when Ogre visibility is enabled. By default, this functionality is turned off.

setOgreVisibilityTest(*args, **kwargs)

Overloaded function.

  1. setOgreVisibilityTest(self: visp._visp.mbt.MbKltTracker, v: bool) -> None

Use Ogre3D for visibility tests

Warning

This function has to be called before the initialization of the tracker.

Parameters:
v

True to use it, False otherwise

  2. setOgreVisibilityTest(self: visp._visp.mbt.MbTracker, v: bool) -> None

Use Ogre3D for visibility tests

Warning

This function has to be called before the initialization of the tracker.

Parameters:
v

True to use it, False otherwise

setOptimizationMethod(self, opt: visp._visp.mbt.MbTracker.MbtOptimizationMethod) None

Set the optimization method used during the tracking.
setPose(*args, **kwargs)

Overloaded function.

  1. setPose(self: visp._visp.mbt.MbKltTracker, I: visp._visp.core.ImageGray, cdMo: visp._visp.core.HomogeneousMatrix) -> None

Set the pose to be used in entry (as guess) of the next call to the track() function. This pose will be used just once.

Warning

This functionality is not available when tracking cylinders.

Parameters:
I

grayscale image corresponding to the desired pose.

cdMo

Pose to set.

  2. setPose(self: visp._visp.mbt.MbKltTracker, I_color: visp._visp.core.ImageRGBa, cdMo: visp._visp.core.HomogeneousMatrix) -> None

Set the pose to be used in entry (as guess) of the next call to the track() function. This pose will be used just once.

Warning

This functionality is not available when tracking cylinders.

Parameters:
I_color

color image corresponding to the desired pose.

cdMo

Pose to set.

setPoseSavingFilename(self, filename: str) None

Set the filename used to save the initial pose computed using the initClick() method. It is also used to read a previous pose in the same method. If the file is not set, the initClick() method will create a .0.pos file in the same directory as the init file given to initClick() (the file providing the point coordinates in the object frame).

Parameters:
filename: str

The new filename.

setProjectionErrorComputation(*args, **kwargs)

Overloaded function.

  1. setProjectionErrorComputation(self: visp._visp.mbt.MbKltTracker, flag: bool) -> None

Set if the projection error criteria has to be computed.

Parameters:
flag

True if the projection error criteria has to be computed, false otherwise

  2. setProjectionErrorComputation(self: visp._visp.mbt.MbTracker, flag: bool) -> None

Set if the projection error criteria has to be computed. This criteria could be used to detect the quality of the tracking. It computes an angle between 0 and 90 degrees that is available with getProjectionError() . The closer the value is to 0, the better the tracking.

Note

See getProjectionError()

Parameters:
flag

True if the projection error criteria has to be computed, false otherwise.

setProjectionErrorDisplay(self, display: bool) None

Display or not gradient and model orientation when computing the projection error.

setProjectionErrorDisplayArrowLength(self, length: int) None

Arrow length used to display gradient and model orientation for projection error computation.

setProjectionErrorDisplayArrowThickness(self, thickness: int) None

Arrow thickness used to display gradient and model orientation for projection error computation.

setProjectionErrorKernelSize(self, size: int) None

Set kernel size used for projection error computation.

Parameters:
size: int

Kernel size computed as kernel_size = size*2 + 1.

setProjectionErrorMovingEdge(self, me: visp._visp.me.Me) None

Set Moving-Edges parameters for projection error computation.

Parameters:
me: visp._visp.me.Me

Moving-Edges parameters.

setScanLineVisibilityTest(*args, **kwargs)

Overloaded function.

  1. setScanLineVisibilityTest(self: visp._visp.mbt.MbKltTracker, v: bool) -> None

Use Scanline algorithm for visibility tests

Parameters:
v

True to use it, False otherwise

  2. setScanLineVisibilityTest(self: visp._visp.mbt.MbTracker, v: bool) -> None

setStopCriteriaEpsilon(self, eps: float) None

Set the minimal error (previous / current estimation) to determine if there is convergence or not.

Parameters:
eps: float

Epsilon threshold.

setThresholdAcceptation(self, th: float) None

Deprecated. Use setKltThresholdAcceptation() instead. Set the threshold for the acceptation of a point.

Parameters:
th: float

Threshold for the weight below which a point is rejected.

setUseKltTracking(self, name: str, useKltTracking: bool) None

Set if the polygons that have the given name have to be considered during the tracking phase.

Parameters:
name: str

name of the polygon(s).

useKltTracking: bool

True if it has to be considered, False otherwise.

testTracking(self) None

Test the quality of the tracking. The tracking is supposed to fail if less than 10 points are tracked.

track(*args, **kwargs)

Overloaded function.

  1. track(self: visp._visp.mbt.MbKltTracker, I: visp._visp.core.ImageGray) -> None

Perform the tracking of the object in the image.

Parameters:
I

the input grayscale image

  2. track(self: visp._visp.mbt.MbKltTracker, I_color: visp._visp.core.ImageRGBa) -> None

Perform the tracking of the object in the image.

Parameters:
I_color

the input color image