MbGenericTracker

class MbGenericTracker(*args, **kwargs)

Bases: MbTracker

Real-time 6D object pose tracking using its CAD model.

The tracker requires knowledge of the object's 3D model, which can be provided in a VRML or CAO file. The CAO format is described in loadCAOModel() . It may also use an XML file to tune the behavior of the tracker and an init file used to compute the pose at the very first image.

This class allows tracking an object or a scene given its 3D model. More information can be found in [41] . Many videos are available on the YouTube VispTeam channel.


The tutorial-tracking-mb-generic tutorial is a good starting point for using this class. If you want to track an object with a stereo camera, refer to tutorial-tracking-mb-generic-stereo. If you would rather use an RGB-D camera and exploit the depth information, see tutorial-tracking-mb-generic-rgbd. There is also tutorial-detection-object that shows how to initialize the tracker from an initial pose provided by a detection algorithm.
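To make the typical workflow concrete, here is a minimal Python sketch of a monocular tracking loop using this class. It is only an illustration: the import paths, the Color.red constant and the cube.xml / cube.cao / cube.init file names are assumptions, and image acquisition is left out; the methods used (loadConfigFile, loadModel, initClick, track, getPose, getCameraParameters, display) are documented below.

from visp.core import CameraParameters, Color, HomogeneousMatrix, ImageGray
from visp.mbt import MbGenericTracker

# Single-camera tracker using moving-edge features
tracker = MbGenericTracker(1, MbGenericTracker.EDGE_TRACKER)
tracker.loadConfigFile("cube.xml")   # tracker settings (xml or json)
tracker.loadModel("cube.cao")        # CAD model in cao format
tracker.setDisplayFeatures(True)

I = ImageGray()
# ... acquire the first grayscale image into I (not shown) ...
tracker.initClick(I, "cube.init")    # click the 3D points listed in cube.init

cMo = HomogeneousMatrix()
cam = CameraParameters()
tracker.getCameraParameters(cam)

for _ in range(100):
    # ... acquire the next image into I (not shown) ...
    tracker.track(I)
    tracker.getPose(cMo)
    tracker.display(I, cMo, cam, Color.red)  # overlay the projected model if a display is attached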

JSON serialization

Since ViSP 3.6.0, if ViSP is built with the nlohmann_json 3rd-party library, JSON serialization capabilities are available for vpMbGenericTracker . The following sample code shows how to save model-based tracker settings in a file named ` mbt-generic.json ` and reload the values from this JSON file.

#include <iostream>
#include <string>

#include <visp3/mbt/vpMbGenericTracker.h>

int main()
{
#if defined(VISP_HAVE_NLOHMANN_JSON)
  std::string filename = "mbt-generic.json";
  {
    // Save the default model-based tracker settings to a JSON file
    vpMbGenericTracker mbt;
    mbt.saveConfigFile(filename);
  }
  {
    // Reload the settings from the JSON file
    vpMbGenericTracker mbt;
    bool verbose = false;
    std::cout << "Read model-based tracker settings from " << filename << std::endl;
    mbt.loadConfigFile(filename, verbose);
  }
#endif
}

If you build and execute the sample code, it will produce the following output:

Read model-based tracker settings from mbt-generic.json

The content of the ` mbt-generic.json ` file is the following:

$ cat mbt-generic.json
{
  "referenceCameraName": "Camera",
  "trackers": {
      "Camera": {
          "angleAppear": 89.0,
          "angleDisappear": 89.0,
          "camTref": {
              "cols": 4,
              "data": [
                  1.0,
                  0.0,
                  0.0,
                  0.0,
                  0.0,
                  1.0,
                  0.0,
                  0.0,
                  0.0,
                  0.0,
                  1.0,
                  0.0,
                  0.0,
                  0.0,
                  0.0,
                  1.0
              ],
              "rows": 4,
              "type": "vpHomogeneousMatrix"
          },
          "camera": {
              "model": "perspectiveWithoutDistortion",
              "px": 600.0,
              "py": 600.0,
              "u0": 192.0,
              "v0": 144.0
          },
          "clipping": {
              "far": 100.0,
              "flags": [
                  "none"
              ],
              "near": 0.001
          },
          "display": {
              "features": false,
              "projectionError": false
          },
          "edge": {
              "maskSign": 0,
              "maskSize": 5,
              "minSampleStep": 4.0,
              "mu": [
                  0.5,
                  0.5
              ],
              "nMask": 180,
              "ntotalSample": 0,
              "pointsToTrack": 500,
              "range": 4,
              "sampleStep": 10.0,
              "strip": 2,
              "thresholdType": "normalized",
              "threshold": 20.0
          },
          "lod": {
              "minLineLengthThresholdGeneral": 50.0,
              "minPolygonAreaThresholdGeneral": 2500.0,
              "useLod": false
          },
          "type": [
              "edge"
          ],
          "visibilityTest": {
              "ogre": false,
              "scanline": false
          }
      }
  },
  "version": "1.0"
}
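A Python transliteration of this snippet might look as follows; it is a hedged sketch that assumes the bindings expose saveConfigFile() in addition to loadConfigFile() (only the latter appears in the method list below).

from visp.mbt import MbGenericTracker

filename = "mbt-generic.json"

# Save the default model-based tracker settings to a JSON file
mbt = MbGenericTracker()
mbt.saveConfigFile(filename)

# Reload the settings from the JSON file
mbt2 = MbGenericTracker()
print("Read model-based tracker settings from", filename)
mbt2.loadConfigFile(filename, False)  # second argument disables verbose output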

Overloaded function.

  1. __init__(self: visp._visp.mbt.MbGenericTracker) -> None

  2. __init__(self: visp._visp.mbt.MbGenericTracker, nbCameras: int, trackerType: int = EDGE_TRACKER) -> None

  3. __init__(self: visp._visp.mbt.MbGenericTracker, trackerTypes: list[int]) -> None

  4. __init__(self: visp._visp.mbt.MbGenericTracker, cameraNames: list[str], trackerTypes: list[int]) -> None

Methods

__init__

Overloaded function.

computeCurrentProjectionError

Overloaded function.

display

Overloaded function.

getCameraNames

Get the camera names.

getCameraParameters

Overloaded function.

getCameraTrackerTypes

Get the camera tracker types.

getClipping

Overloaded function.

getError

Return the error vector \((s-s^*)\) reached after the virtual visual servoing process used to estimate the pose.

getFaces

Overloaded function.

getFeaturesCircle

Return the address of the circle feature list for the reference camera.

getFeaturesForDisplay

Overloaded function.

getFeaturesKlt

Return the address of the Klt feature list for the reference camera.

getFeaturesKltCylinder

Return the address of the cylinder feature list for the reference camera.

getGoodMovingEdgesRatioThreshold

Note

See setGoodMovingEdgesRatioThreshold()

getKltImagePoints

Get the current list of KLT points for the reference camera.

getKltImagePointsWithId

Get the current list of KLT points and their id for the reference camera.

getKltMaskBorder

Get the erosion of the mask used on the Model faces.

getKltNbPoints

Get number of KLT points for the reference camera.

getKltOpencv

Overloaded function.

getKltPoints

Get the current list of KLT points for the reference camera.

getKltThresholdAcceptation

Get the threshold for the acceptance of a point.

getLcircle

Overloaded function.

getLcylinder

Overloaded function.

getLline

Overloaded function.

getModelForDisplay

Overloaded function.

getMovingEdge

Overloaded function.

getNbFeaturesDepthDense

Return the number of depth dense features taken into account in the virtual visual-servoing scheme.

getNbFeaturesDepthNormal

Return the number of depth normal features taken into account in the virtual visual-servoing scheme.

getNbFeaturesEdge

Return the number of moving-edges features taken into account in the virtual visual-servoing scheme.

getNbFeaturesKlt

Return the number of klt keypoints features taken into account in the virtual visual-servoing scheme.

getNbPoints

Overloaded function.

getNbPolygon

Overloaded function.

getPolygonFaces

Overloaded function.

getPose

Overloaded function.

getReferenceCameraName

Get name of the reference camera.

getRobustWeights

Return the weights vector \(w_i\) computed by the robust scheme.

getTrackerType

The tracker type for the reference camera.

init

Initialise the tracking.

initClick

Overloaded function.

initFromPoints

Overloaded function.

initFromPose

Overloaded function.

loadConfigFile

Overloaded function.

loadModel

Overloaded function.

reInitModel

Overloaded function.

resetTracker

Reset the tracker.

setAngleAppear

Overloaded function.

setAngleDisappear

Overloaded function.

setCameraParameters

Overloaded function.

setCameraTransformationMatrix

Overloaded function.

setClipping

Overloaded function.

setDepthDenseFilteringMaxDistance

Set maximum distance to consider a face.

setDepthDenseFilteringMethod

Set the method to discard a face, e.g. if it is outside of the depth range.

setDepthDenseFilteringMinDistance

Set minimum distance to consider a face.

setDepthDenseFilteringOccupancyRatio

Set depth occupancy ratio to consider a face, used to discard faces where the depth map is not well reconstructed.

setDepthDenseSamplingStep

Set depth dense sampling step.

setDepthNormalFaceCentroidMethod

Set the method used to compute the face centroid for display in the depth tracker.

setDepthNormalFeatureEstimationMethod

Set depth feature estimation method.

setDepthNormalPclPlaneEstimationMethod

Set depth PCL plane estimation method.

setDepthNormalPclPlaneEstimationRansacMaxIter

Set depth PCL RANSAC maximum number of iterations.

setDepthNormalPclPlaneEstimationRansacThreshold

Set depth PCL RANSAC threshold.

setDepthNormalSamplingStep

Set depth sampling step.

setDisplayFeatures

Overloaded function.

setFarClippingDistance

Overloaded function.

setFeatureFactors

Set the feature factors used in the VVS stage (weighting between the feature types).

setGoodMovingEdgesRatioThreshold

Set the threshold value between 0 and 1 over good moving edges ratio.

setKltMaskBorder

Overloaded function.

setKltOpencv

Overloaded function.

setKltThresholdAcceptation

Set the threshold for the acceptance of a point.

setLod

Overloaded function.

setMask

Overloaded function.

setMinLineLengthThresh

Overloaded function.

setMinPolygonAreaThresh

Overloaded function.

setMovingEdge

Overloaded function.

setNearClippingDistance

Overloaded function.

setOgreShowConfigDialog

Overloaded function.

setOgreVisibilityTest

Overloaded function.

setOptimizationMethod

Overloaded function.

setPose

Overloaded function.

setProjectionErrorComputation

Overloaded function.

setProjectionErrorDisplay

Overloaded function.

setProjectionErrorDisplayArrowLength

Overloaded function.

setProjectionErrorDisplayArrowThickness

Overloaded function.

setReferenceCameraName

Set the reference camera name.

setScanLineVisibilityTest

Overloaded function.

setTrackerType

Overloaded function.

setUseDepthDenseTracking

Set if the polygon that has the given name has to be considered during the tracking phase.

setUseDepthNormalTracking

Set if the polygon that has the given name has to be considered during the tracking phase.

setUseEdgeTracking

Set if the polygon that has the given name has to be considered during the tracking phase.

setUseKltTracking

Set if the polygon that has the given name has to be considered during the tracking phase.

testTracking

Test the quality of the tracking.

track

Overloaded function.

Inherited Methods

setStopCriteriaEpsilon

Set the minimal error (previous / current estimation) to determine if there is convergence or not.

setLambda

Set the value of the gain used to compute the control law.

getNearClippingDistance

Get the near distance for clipping.

getProjectionError

Get the error angle between the gradient direction of the model features projected at the resulting pose and their normal.

getFarClippingDistance

Get the far distance for clipping.

setProjectionErrorMovingEdge

Set Moving-Edges parameters for projection error computation.

LEVENBERG_MARQUARDT_OPT

getOptimizationMethod

Get the optimization method used during the tracking.

setMaxIter

Set the maximum iteration of the virtual visual servoing stage.

getAngleDisappear

Return the angle used to test polygons disappearance.

getMaxIter

Get the maximum number of iterations of the virtual visual servoing stage.

GAUSS_NEWTON_OPT

getInitialMu

Get the initial value of mu used in the Levenberg Marquardt optimization loop.

setEstimatedDoF

Set a 6-dim column vector representing the degrees of freedom in the object frame that are estimated by the tracker.

savePose

Save the pose in the given filename

getLambda

Get the value of the gain used to compute the control law.

getCovarianceMatrix

Get the covariance matrix.

setProjectionErrorKernelSize

Set kernel size used for projection error computation.

getAngleAppear

Return the angle used to test polygons appearance.

getStopCriteriaEpsilon

getEstimatedDoF

Get a 1x6 vpColVector representing the estimated degrees of freedom.

MbtOptimizationMethod

Values:

setPoseSavingFilename

Set the filename used to save the initial pose computed using the initClick() method.

setCovarianceComputation

Set if the covariance matrix has to be computed.

setInitialMu

Set the initial value of mu for the Levenberg Marquardt optimization loop.

Operators

__doc__

__init__

Overloaded function.

__module__

Attributes

DEPTH_DENSE_TRACKER

DEPTH_NORMAL_TRACKER

EDGE_TRACKER

GAUSS_NEWTON_OPT

KLT_TRACKER

LEVENBERG_MARQUARDT_OPT

__annotations__

class MbtOptimizationMethod(self, value: int)

Bases: pybind11_object

Values:

  • GAUSS_NEWTON_OPT

  • LEVENBERG_MARQUARDT_OPT

__and__(self, other: object) object
__eq__(self, other: object) bool
__ge__(self, other: object) bool
__getstate__(self) int
__gt__(self, other: object) bool
__hash__(self) int
__index__(self) int
__init__(self, value: int)
__int__(self) int
__invert__(self) object
__le__(self, other: object) bool
__lt__(self, other: object) bool
__ne__(self, other: object) bool
__or__(self, other: object) object
__rand__(self, other: object) object
__ror__(self, other: object) object
__rxor__(self, other: object) object
__setstate__(self, state: int) None
__xor__(self, other: object) object
property name : str
class TrackerType(self, value: int)

Bases: pybind11_object

Values:

  • EDGE_TRACKER: Model-based tracking using moving edges features.

  • KLT_TRACKER: Model-based tracking using KLT features.

  • DEPTH_NORMAL_TRACKER: Model-based tracking using depth normal features.

  • DEPTH_DENSE_TRACKER: Model-based tracking using depth dense features.

__and__(self, other: object) object
__eq__(self, other: object) bool
__ge__(self, other: object) bool
__getstate__(self) int
__gt__(self, other: object) bool
__hash__(self) int
__index__(self) int
__init__(self, value: int)
__int__(self) int
__invert__(self) object
__le__(self, other: object) bool
__lt__(self, other: object) bool
__ne__(self, other: object) bool
__or__(self, other: object) object
__rand__(self, other: object) object
__ror__(self, other: object) object
__rxor__(self, other: object) object
__setstate__(self, state: int) None
__xor__(self, other: object) object
property name : str
__init__(*args, **kwargs)

Overloaded function.

  1. __init__(self: visp._visp.mbt.MbGenericTracker) -> None

  2. __init__(self: visp._visp.mbt.MbGenericTracker, nbCameras: int, trackerType: int = EDGE_TRACKER) -> None

  3. __init__(self: visp._visp.mbt.MbGenericTracker, trackerTypes: list[int]) -> None

  4. __init__(self: visp._visp.mbt.MbGenericTracker, cameraNames: list[str], trackerTypes: list[int]) -> None
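As a hedged illustration of these overloads in Python (the tracker type constants are class attributes; int() is used when combining or passing them, since the signatures expect plain integers):

from visp.mbt import MbGenericTracker

# 1. Default: a single camera using moving-edge features
t1 = MbGenericTracker()

# 2. Two cameras, each combining edge and KLT features
t2 = MbGenericTracker(2, int(MbGenericTracker.EDGE_TRACKER) | int(MbGenericTracker.KLT_TRACKER))

# 3. Per-camera tracker types given as a list
t3 = MbGenericTracker([int(MbGenericTracker.EDGE_TRACKER),
                       int(MbGenericTracker.DEPTH_DENSE_TRACKER)])

# 4. Named cameras with their respective tracker types
t4 = MbGenericTracker(["Camera1", "Camera2"],
                      [int(MbGenericTracker.EDGE_TRACKER),
                       int(MbGenericTracker.DEPTH_DENSE_TRACKER)])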

computeCurrentProjectionError(*args, **kwargs)

Overloaded function.

  1. computeCurrentProjectionError(self: visp._visp.mbt.MbGenericTracker, I: visp._visp.core.ImageGray, _cMo: visp._visp.core.HomogeneousMatrix, _cam: visp._visp.core.CameraParameters) -> float

Compute the projection error given an input image, a camera pose and camera parameters. This projection error uses locations sampled exactly where the model is projected using the camera pose and intrinsic parameters. To get a projection error computed at the ME locations after a call to track() , use getProjectionError() instead. It works similarly to the vpMbTracker::getProjectionError function: it returns the error angle between the gradient direction of the model features projected at the resulting pose and their normal. The error is expressed in degrees, between 0 and 90.

Note

See setProjectionErrorComputation

Note

See getProjectionError

Parameters:
I

Input grayscale image.

_cMo

Camera pose.

_cam

Camera parameters.

  2. computeCurrentProjectionError(self: visp._visp.mbt.MbGenericTracker, I: visp._visp.core.ImageRGBa, _cMo: visp._visp.core.HomogeneousMatrix, _cam: visp._visp.core.CameraParameters) -> float

Compute the projection error given an input image, a camera pose and camera parameters. This projection error uses locations sampled exactly where the model is projected using the camera pose and intrinsic parameters. To get a projection error computed at the ME locations after a call to track() , use getProjectionError() instead. It works similarly to the vpMbTracker::getProjectionError function: it returns the error angle between the gradient direction of the model features projected at the resulting pose and their normal. The error is expressed in degrees, between 0 and 90.

Note

See setProjectionErrorComputation

Note

See getProjectionError

Parameters:
I

Input color image.

_cMo

Camera pose.

_cam

Camera parameters.

  3. computeCurrentProjectionError(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, _cMo: visp._visp.core.HomogeneousMatrix, _cam: visp._visp.core.CameraParameters) -> float

Compute the projection error given an input image, a camera pose and camera parameters. This projection error uses locations sampled exactly where the model is projected using the camera pose and intrinsic parameters. To get a projection error computed at the ME locations after a call to track() , use getProjectionError() instead. It works similarly to the vpMbTracker::getProjectionError function: it returns the error angle between the gradient direction of the model features projected at the resulting pose and their normal. The error is expressed in degrees, between 0 and 90.

Note

See setProjectionErrorComputation

Note

See getProjectionError

Parameters:
I

Input grayscale image.

_cMo

Camera pose.

_cam

Camera parameters.
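A hedged Python sketch of the first overload, reusing the example intrinsics from the JSON settings above; the image, pose and tracker are placeholders that would in practice come from your own acquisition, calibration and configuration code, and the four-argument CameraParameters constructor is an assumption.

from visp.core import CameraParameters, HomogeneousMatrix, ImageGray
from visp.mbt import MbGenericTracker

tracker = MbGenericTracker()                # in practice: configured and tracking an object
I = ImageGray()                             # current grayscale image (acquired elsewhere)
cMo = HomogeneousMatrix()                   # candidate camera pose
cam = CameraParameters(600, 600, 192, 144)  # px, py, u0, v0

error_deg = tracker.computeCurrentProjectionError(I, cMo, cam)
print(f"Projection error: {error_deg:.2f} deg (0 is best, 90 is worst)")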

display(*args, **kwargs)

Overloaded function.

  1. display(self: visp._visp.mbt.MbGenericTracker, I: visp._visp.core.ImageGray, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, col: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None

Display the 3D model from a given position of the camera.

Note

This function will display the model only for the reference camera.

Parameters:
I

The grayscale image.

cMo

Pose used to project the 3D model into the image.

cam

The camera parameters.

col

The desired color.

thickness

The thickness of the lines.

displayFullModel

If true, the full model is displayed (even the non visible faces).

  2. display(self: visp._visp.mbt.MbGenericTracker, I: visp._visp.core.ImageRGBa, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, col: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None

Display the 3D model from a given position of the camera.

Note

This function will display the model only for the reference camera.

Parameters:
I

The color image.

cMo

Pose used to project the 3D model into the image.

cam

The camera parameters.

col

The desired color.

thickness

The thickness of the lines.

displayFullModel

If true, the full model is displayed (even the non visible faces).

  3. display(self: visp._visp.mbt.MbGenericTracker, I1: visp._visp.core.ImageGray, I2: visp._visp.core.ImageGray, c1Mo: visp._visp.core.HomogeneousMatrix, c2Mo: visp._visp.core.HomogeneousMatrix, cam1: visp._visp.core.CameraParameters, cam2: visp._visp.core.CameraParameters, color: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None

Display the 3D model from a given position of the camera.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
I1

The first grayscale image.

I2

The second grayscale image.

c1Mo

Pose used to project the 3D model into the first image.

c2Mo

Pose used to project the 3D model into the second image.

cam1

The first camera parameters.

cam2

The second camera parameters.

color

The desired color.

thickness

The thickness of the lines.

displayFullModel

If true, the full model is displayed (even the non visible faces).

  4. display(self: visp._visp.mbt.MbGenericTracker, I1: visp._visp.core.ImageRGBa, I2: visp._visp.core.ImageRGBa, c1Mo: visp._visp.core.HomogeneousMatrix, c2Mo: visp._visp.core.HomogeneousMatrix, cam1: visp._visp.core.CameraParameters, cam2: visp._visp.core.CameraParameters, color: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None

Display the 3D model from a given position of the camera.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
I1

The first color image.

I2

The second color image.

c1Mo

Pose used to project the 3D model into the first image.

c2Mo

Pose used to project the 3D model into the second image.

cam1

The first camera parameters.

cam2

The second camera parameters.

color

The desired color.

thickness

The thickness of the lines.

displayFullModel

If true, the full model is displayed (even the non visible faces).

  5. display(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray], mapOfCameraPoses: dict[str, visp._visp.core.HomogeneousMatrix], mapOfCameraParameters: dict[str, visp._visp.core.CameraParameters], col: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None

Display the 3D model from a given position of the camera.

Parameters:
mapOfImages

Map of grayscale images.

mapOfCameraPoses

Map of camera poses.

mapOfCameraParameters

Map of camera parameters.

col

The desired color.

thickness

The thickness of the lines.

displayFullModel

If true, the full model is displayed (even the non visible faces).

  6. display(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageRGBa], mapOfCameraPoses: dict[str, visp._visp.core.HomogeneousMatrix], mapOfCameraParameters: dict[str, visp._visp.core.CameraParameters], col: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None

Display the 3D model from a given position of the camera.

Parameters:
mapOfImages

Map of color images.

mapOfCameraPoses

Map of camera poses.

mapOfCameraParameters

Map of camera parameters.

col

The desired color.

thickness

The thickness of the lines.

displayFullModel

If true, the full model is displayed (even the non visible faces).
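For the map-based overloads, a hedged Python sketch (camera names, images and poses are placeholders, and Color.green is assumed to mirror the C++ vpColor constants):

from visp.core import CameraParameters, Color, HomogeneousMatrix, ImageGray
from visp.mbt import MbGenericTracker

tracker = MbGenericTracker(["Camera1", "Camera2"],
                           [int(MbGenericTracker.EDGE_TRACKER),
                            int(MbGenericTracker.EDGE_TRACKER)])

# Per-camera data; in practice filled by acquisition and tracking
images = {"Camera1": ImageGray(), "Camera2": ImageGray()}
poses = {"Camera1": HomogeneousMatrix(), "Camera2": HomogeneousMatrix()}
cams = {"Camera1": CameraParameters(), "Camera2": CameraParameters()}

# Overlay the projected model in every camera image
tracker.display(images, poses, cams, Color.green, 2, False)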

getAngleAppear(self) float

Return the angle used to test polygons appearance.

getAngleDisappear(self) float

Return the angle used to test polygons disappearance.

getCameraNames(self) list[str]

Get the camera names.

Returns:

The vector of camera names.

getCameraParameters(*args, **kwargs)

Overloaded function.

  1. getCameraParameters(self: visp._visp.mbt.MbGenericTracker, camera: visp._visp.core.CameraParameters) -> None

Get the camera parameters.

  2. getCameraParameters(self: visp._visp.mbt.MbGenericTracker, cam1: visp._visp.core.CameraParameters, cam2: visp._visp.core.CameraParameters) -> None

Get all the camera parameters.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
cam1

Copy of the camera parameters for the first camera.

cam2

Copy of the camera parameters for the second camera.

  3. getCameraParameters(self: visp._visp.mbt.MbGenericTracker, mapOfCameraParameters: dict[str, visp._visp.core.CameraParameters]) -> None

Get all the camera parameters.

Parameters:
mapOfCameraParameters

Map of camera parameters.

  4. getCameraParameters(self: visp._visp.mbt.MbTracker, cam: visp._visp.core.CameraParameters) -> None

Get the camera parameters.

Parameters:
cam

copy of the camera parameters used by the tracker.

getCameraTrackerTypes(self) dict[str, int]

Get the camera tracker types.

Note

See vpTrackerType

Returns:

The map of camera tracker types.

getClipping(*args, **kwargs)

Overloaded function.

  1. getClipping(self: visp._visp.mbt.MbGenericTracker, clippingFlag1: int, clippingFlag2: int) -> tuple[int, int]

Get the clipping used and defined in vpPolygon3D::vpMbtPolygonClippingType.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
clippingFlag1

Clipping flags for the first camera.

clippingFlag2

Clipping flags for the second camera.

Returns:

A tuple containing:

  • clippingFlag1: Clipping flags for the first camera.

  • clippingFlag2: Clipping flags for the second camera.

  2. getClipping(self: visp._visp.mbt.MbGenericTracker, mapOfClippingFlags: dict[str, int]) -> None

Get the clipping used and defined in vpPolygon3D::vpMbtPolygonClippingType.

Parameters:
mapOfClippingFlags

Map of clipping flags.

  3. getClipping(self: visp._visp.mbt.MbTracker) -> int

Get the clipping used and defined in vpPolygon3D::vpMbtPolygonClippingType.

Returns:

Clipping flags.

getCovarianceMatrix(self) visp._visp.core.Matrix

Get the covariance matrix. This matrix is only computed if setCovarianceComputation() is turned on.

Note

See setCovarianceComputation()

getError(self) visp._visp.core.ColVector

Return the error vector \((s-s^*)\) reached after the virtual visual servoing process used to estimate the pose.

The following example shows how to use this function to compute the norm of the residual and the norm of the residual normalized by the number of features that are tracked:

tracker.track(I);
std::cout << "Residual: " << sqrt( (tracker.getError()).sumSquare()) << std::endl;
std::cout << "Residual normalized: "
          << sqrt( (tracker.getError()).sumSquare())/tracker.getError().size() << std::endl;

Note

See getRobustWeights()

getEstimatedDoF(self) visp._visp.core.ColVector

Get a 1x6 vpColVector representing the estimated degrees of freedom:

  • vpColVector [0] = 1 if translation on X is estimated, 0 otherwise;

  • vpColVector [1] = 1 if translation on Y is estimated, 0 otherwise;

  • vpColVector [2] = 1 if translation on Z is estimated, 0 otherwise;

  • vpColVector [3] = 1 if rotation on X is estimated, 0 otherwise;

  • vpColVector [4] = 1 if rotation on Y is estimated, 0 otherwise;

  • vpColVector [5] = 1 if rotation on Z is estimated, 0 otherwise.

Returns:

1x6 vpColVector representing the estimated degrees of freedom.
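A hedged sketch of how this vector is used together with setEstimatedDoF() (ColVector construction and indexing are assumed to mirror the C++ vpColVector):

from visp.core import ColVector
from visp.mbt import MbGenericTracker

tracker = MbGenericTracker()

# Estimate all degrees of freedom except the rotation around Z
dof = ColVector(6, 1.0)   # [tx, ty, tz, rx, ry, rz]
dof[5] = 0.0              # freeze rotation around Z
tracker.setEstimatedDoF(dof)

print(tracker.getEstimatedDoF())  # 1x6 vector of 0/1 flags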

getFaces(*args, **kwargs)

Overloaded function.

  1. getFaces(self: visp._visp.mbt.MbGenericTracker) -> vpMbHiddenFaces<vpMbtPolygon>

Return a reference to the faces structure.

Returns:

Reference to the face structure.

  2. getFaces(self: visp._visp.mbt.MbGenericTracker, cameraName: str) -> vpMbHiddenFaces<vpMbtPolygon>

Return a reference to the faces structure for the given camera name.

Returns:

Reference to the face structure.

  3. getFaces(self: visp._visp.mbt.MbTracker) -> vpMbHiddenFaces<vpMbtPolygon>

Return a reference to the faces structure.

getFarClippingDistance(self) float

Get the far distance for clipping.

Returns:

Far clipping value.

getFeaturesCircle(self) list[visp._visp.mbt.MbtDistanceCircle]

Return the address of the circle feature list for the reference camera.

getFeaturesForDisplay(*args, **kwargs)

Overloaded function.

  1. getFeaturesForDisplay(self: visp._visp.mbt.MbGenericTracker) -> list[list[float]]

Returns a list of visual features parameters for the reference camera. The first element of each vector indicates the feature type: a moving-edge (ME) when the value is 0, or a keypoint (KLT) when the value is 1. The following elements give the corresponding feature parameters.

  • Moving-edges parameters are: <feature id (here 0 for ME)> , <pt.i()> , <pt.j()> , <state> where pt.i(), pt.j() are the coordinates of the moving-edge point feature, and state with values in range [0,4] indicates the state of the ME

    • 0 for vpMeSite::NO_SUPPRESSION

    • 1 for vpMeSite::CONTRAST

    • 2 for vpMeSite::THRESHOLD

    • 3 for vpMeSite::M_ESTIMATOR

    • 4 for vpMeSite::TOO_NEAR

  • KLT parameters are: <feature id (here 1 for KLT)> , <pt.i()> , <pt.j()> , <klt_id.i()> , <klt_id.j()> , <klt_id.id>

When the tracking is achieved with features from multiple cameras, you should rather use getFeaturesForDisplay(std::map<std::string, std::vector<std::vector<double> > > &) .

It can be used to display the 3D model with a render engine of your choice.

Note

It returns the visual features for the reference camera.

Note

See getModelForDisplay(unsigned int, unsigned int, const vpHomogeneousMatrix &, const vpCameraParameters &, bool)

  2. getFeaturesForDisplay(self: visp._visp.mbt.MbGenericTracker, mapOfFeatures: dict[str, list[list[float]]]) -> None

Get a list of visual features parameters for multiple cameras. The considered camera name is the first element of the map. The second element of the map contains the visual features parameters, where the first element of each vector indicates the feature type: a moving-edge (ME) when the value is 0, or a keypoint (KLT) when the value is 1. The following elements give the corresponding feature parameters.

  • Moving-edges parameters are: <feature id (here 0 for ME)> , <pt.i()> , <pt.j()> , <state> where pt.i(), pt.j() are the coordinates of the moving-edge point feature, and state with values in range [0,4] indicates the state of the ME

    • 0 for vpMeSite::NO_SUPPRESSION

    • 1 for vpMeSite::CONTRAST

    • 2 for vpMeSite::THRESHOLD

    • 3 for vpMeSite::M_ESTIMATOR

    • 4 for vpMeSite::TOO_NEAR

  • KLT parameters are: <feature id (here 1 for KLT)> , <pt.i()> , <pt.j()> , <klt_id.i()> , <klt_id.j()> , <klt_id.id> It can be used to display the 3D model with a render engine of your choice.

When the tracking is achieved with features from a single camera, you should rather use getFeaturesForDisplay() .

Note

See getModelForDisplay (std::map<std::string, std::vector<std::vector<double> > > &, const std::map<std::string, unsigned int> &, const std::map<std::string, unsigned int> &, const std::map<std::string, vpHomogeneousMatrix> &, const std::map<std::string, vpCameraParameters> &, bool)
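As a hedged example of consuming this list in Python, the sketch below counts the moving-edge sites per state for the reference camera. It assumes the tracker has already been configured, initialized and that track() has been called, as in the first example; here only a freshly constructed tracker is shown so the snippet stays self-contained.

from collections import Counter

from visp.mbt import MbGenericTracker

tracker = MbGenericTracker()   # in practice: configured, initialized and tracking

states = Counter()
for feat in tracker.getFeaturesForDisplay():
    if feat[0] == 0:           # moving-edge feature: [0, i, j, state]
        states[int(feat[3])] += 1

# 0: NO_SUPPRESSION, 1: CONTRAST, 2: THRESHOLD, 3: M_ESTIMATOR, 4: TOO_NEAR
print(states)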

getFeaturesKlt(self) list[visp._visp.mbt.MbtDistanceKltPoints]

Return the address of the Klt feature list for the reference camera.

getFeaturesKltCylinder(self) list[visp._visp.mbt.MbtDistanceKltCylinder]

Return the address of the cylinder feature list for the reference camera.

getGoodMovingEdgesRatioThreshold(self) float

Note

See setGoodMovingEdgesRatioThreshold()

Returns:

The threshold value between 0 and 1 over the good moving edges ratio. It is used to decide whether the tracker has enough valid moving edges to compute a pose. 1 means that all moving edges must be good to have a valid pose, while 0.1 means that 10% of the moving edges are enough to declare a pose valid.

getInitialMu(self) float

Get the initial value of mu used in the Levenberg Marquardt optimization loop.

Returns:

the initial mu value.

getKltImagePoints(self) list[visp._visp.core.ImagePoint]

Get the current list of KLT points for the reference camera.

Returns:

the list of KLT points through vpKltOpencv .

getKltImagePointsWithId(self) dict[int, visp._visp.core.ImagePoint]

Get the current list of KLT points and their id for the reference camera.

Returns:

the list of KLT points and their id through vpKltOpencv .

getKltMaskBorder(self) int

Get the erosion of the mask used on the Model faces.

Returns:

The erosion for the reference camera.

getKltNbPoints(self) int

Get number of KLT points for the reference camera.

Returns:

Number of KLT points for the reference camera.

getKltOpencv(*args, **kwargs)

Overloaded function.

  1. getKltOpencv(self: visp._visp.mbt.MbGenericTracker) -> visp._visp.klt.KltOpencv

Get the klt tracker at the current state for the reference camera.

Returns:

klt tracker.

  2. getKltOpencv(self: visp._visp.mbt.MbGenericTracker, klt1: visp._visp.klt.KltOpencv, klt2: visp._visp.klt.KltOpencv) -> None

Get the klt tracker at the current state.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
klt1

Klt tracker for the first camera.

klt2

Klt tracker for the second camera.

  3. getKltOpencv(self: visp._visp.mbt.MbGenericTracker, mapOfKlts: dict[str, visp._visp.klt.KltOpencv]) -> None

Get the klt tracker at the current state.

Parameters:
mapOfKlts

Map of klt trackers.

getKltPoints(self) list[cv::Point_<float>]

Get the current list of KLT points for the reference camera.

Returns:

the list of KLT points through vpKltOpencv .

getKltThresholdAcceptation(self) float

Get the threshold for the acceptance of a point.

Returns:

threshold_outlier : Threshold for the weight below which a point is rejected.

getLambda(self) float

Get the value of the gain used to compute the control law.

Returns:

the value for the gain.

getLcircle(*args, **kwargs)

Overloaded function.

  1. getLcircle(self: visp._visp.mbt.MbGenericTracker, circlesList: list[visp._visp.mbt.MbtDistanceCircle], level: int = 0) -> list[visp._visp.mbt.MbtDistanceCircle]

Get the list of the circles tracked for the specified level. Each circle contains the list of the vpMeSite .

Note

Multi-scale moving edge tracking is not possible, scale level=0 must be used.

Parameters:
circlesList

The list of the circles of the model.

level

Level corresponding to the list to return.

Returns:

A tuple containing:

  • circlesList: The list of the circles of the model.

  2. getLcircle(self: visp._visp.mbt.MbGenericTracker, cameraName: str, circlesList: list[visp._visp.mbt.MbtDistanceCircle], level: int = 0) -> list[visp._visp.mbt.MbtDistanceCircle]

Get the list of the circles tracked for the specified level. Each circle contains the list of the vpMeSite .

Note

Multi-scale moving edge tracking is not possible, scale level=0 must be used.

Parameters:
cameraName

Camera name for which we want to get the list of vpMbtDistanceCircle .

circlesList

The list of the circles of the model.

level

Level corresponding to the list to return.

Returns:

A tuple containing:

  • circlesList: The list of the circles of the model.

getLcylinder(*args, **kwargs)

Overloaded function.

  1. getLcylinder(self: visp._visp.mbt.MbGenericTracker, cylindersList: list[visp._visp.mbt.MbtDistanceCylinder], level: int = 0) -> list[visp._visp.mbt.MbtDistanceCylinder]

Get the list of the cylinders tracked for the specified level. Each cylinder contains the list of the vpMeSite .

Note

Multi-scale moving edge tracking is not possible, scale level=0 must be used.

Parameters:
cylindersList

The list of the cylinders of the model.

level

Level corresponding to the list to return.

Returns:

A tuple containing:

  • cylindersList: The list of the cylinders of the model.

  2. getLcylinder(self: visp._visp.mbt.MbGenericTracker, cameraName: str, cylindersList: list[visp._visp.mbt.MbtDistanceCylinder], level: int = 0) -> list[visp._visp.mbt.MbtDistanceCylinder]

Get the list of the cylinders tracked for the specified level. Each cylinder contains the list of the vpMeSite .

Note

Multi-scale moving edge tracking is not possible, scale level=0 must be used.

Parameters:
cameraName

Camera name for which we want to get the list of vpMbtDistanceCylinder .

cylindersList

The list of the cylinders of the model.

level

Level corresponding to the list to return.

Returns:

A tuple containing:

  • cylindersList: The list of the cylinders of the model.

getLline(*args, **kwargs)

Overloaded function.

  1. getLline(self: visp._visp.mbt.MbGenericTracker, linesList: list[visp._visp.mbt.MbtDistanceLine], level: int = 0) -> list[visp._visp.mbt.MbtDistanceLine]

Get the list of the lines tracked for the specified level. Each line contains the list of the vpMeSite .

Note

Multi-scale moving edge tracking is not possible, scale level=0 must be used.

Parameters:
linesList

The list of the lines of the model.

level

Level corresponding to the list to return.

Returns:

A tuple containing:

  • linesList: The list of the lines of the model.

  2. getLline(self: visp._visp.mbt.MbGenericTracker, cameraName: str, linesList: list[visp._visp.mbt.MbtDistanceLine], level: int = 0) -> list[visp._visp.mbt.MbtDistanceLine]

Get the list of the lines tracked for the specified level. Each line contains the list of the vpMeSite .

Note

Multi-scale moving edge tracking is not possible, scale level=0 must be used.

Parameters:
cameraName

Camera name for which we want to get the list of vpMbtDistanceLine .

linesList

The list of the lines of the model.

level

Level corresponding to the list to return.

Returns:

A tuple containing:

  • linesList: The list of the lines of the model.

getMaxIter(self) int

Get the maximum number of iterations of the virtual visual servoing stage.

Returns:

the number of iterations

getModelForDisplay(*args, **kwargs)

Overloaded function.

  1. getModelForDisplay(self: visp._visp.mbt.MbGenericTracker, width: int, height: int, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, displayFullModel: bool = false) -> list[list[float]]

Get primitive parameters to display the object CAD model for the reference camera.

It can be used to display the 3D model with a render engine of your choice.

When tracking is performed using multiple cameras, you should rather use getModelForDisplay(std::map<std::string, std::vector<std::vector<double> > > &, const std::map<std::string, unsigned int> &, const std::map<std::string, unsigned int> &, const std::map<std::string, vpHomogeneousMatrix> &, const std::map<std::string, vpCameraParameters> &, bool)

Note

See getFeaturesForDisplay()

Parameters:
width

Image width.

height

Image height.

cMo

Pose used to project the 3D model into the image.

cam

The camera parameters.

displayFullModel

If true, the line is displayed even if it is not visible.

Returns:

List of primitives parameters corresponding to the reference camera in order to display the model to a given pose with camera parameters. The first element of the vector indicates the type of parameters: 0 for a line and 1 for an ellipse. Then the second element gives the corresponding parameters.

  • Line parameters are: <primitive id (here 0 for line)> , <pt_start.i()> , <pt_start.j()> , <pt_end.i()> , <pt_end.j()> .

  • Ellipse parameters are: <primitive id (here 1 for ellipse)> , <pt_center.i()> , <pt_center.j()> , <n_20> , <n_11> , <n_02> where <n_ij> are the second order centered moments of the ellipse normalized by its area (i.e., such that \(n_{ij} = \mu_{ij}/a\) where \(\mu_{ij}\) are the centered moments and a the area).

  2. getModelForDisplay(self: visp._visp.mbt.MbGenericTracker, mapOfModels: dict[str, list[list[float]]], mapOfwidths: dict[str, int], mapOfheights: dict[str, int], mapOfcMos: dict[str, visp._visp.core.HomogeneousMatrix], mapOfCams: dict[str, visp._visp.core.CameraParameters], displayFullModel: bool = false) -> None

Get primitive parameters to display the object CAD model for the multiple cameras.

It can be used to display the 3D model with a render engine of your choice.

Each first element of the map corresponds to the camera name.

If you are using a single camera you should rather use getModelForDisplay(unsigned int, unsigned int, const vpHomogeneousMatrix &, const vpCameraParameters &, bool)

Note

See getFeaturesForDisplay(std::map<std::string, std::vector<std::vector<double> > > &)

Parameters:
mapOfModels

Map of models. The second element of the map contains a list of primitives parameters to display the model at a given pose with corresponding camera parameters. The first element of the vector indicates the type of parameters: 0 for a line and 1 for an ellipse. Then the second element gives the corresponding parameters.

  • Line parameters are: <primitive id (here 0 for line)> , <pt_start.i()> , <pt_start.j()> , <pt_end.i()> , <pt_end.j()> .

  • Ellipse parameters are: <primitive id (here 1 for ellipse)> , <pt_center.i()> , <pt_center.j()> , <n_20> , <n_11> , <n_02> where <n_ij> are the second order centered moments of the ellipse normalized by its area (i.e., such that \(n_{ij} = \mu_{ij}/a\) where \(\mu_{ij}\) are the centered moments and a the area).

mapOfwidths

Map of images width.

mapOfheights

Map of images height.

mapOfcMos

Map of poses used to project the 3D model into the images.

mapOfCams

The camera parameters.

displayFullModel

If true, the line is displayed even if it is not visible.
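A hedged sketch showing how the returned primitives can be consumed to draw the model with a rendering backend of your choice (the image size, pose and intrinsics are placeholders, and the tracker would in practice already have a model loaded and a pose estimated):

from visp.core import CameraParameters, HomogeneousMatrix
from visp.mbt import MbGenericTracker

tracker = MbGenericTracker()   # in practice: model loaded and pose estimated
cMo = HomogeneousMatrix()
cam = CameraParameters()

for prim in tracker.getModelForDisplay(640, 480, cMo, cam, False):
    if prim[0] == 0:
        # line: start and end point image coordinates (i, j)
        i1, j1, i2, j2 = prim[1:5]
        # draw the segment with your own renderer
    elif prim[0] == 1:
        # ellipse: center (i, j) and normalized centered moments n20, n11, n02
        ic, jc, n20, n11, n02 = prim[1:6]
        # draw the ellipse with your own renderer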

getMovingEdge(*args, **kwargs)

Overloaded function.

  1. getMovingEdge(self: visp._visp.mbt.MbGenericTracker) -> visp._visp.me.Me

Get the moving edge parameters for the reference camera.

Returns:

an instance of the moving edge parameters used by the tracker.

  2. getMovingEdge(self: visp._visp.mbt.MbGenericTracker, me1: visp._visp.me.Me, me2: visp._visp.me.Me) -> None

Get the moving edge parameters.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
me1

Moving edge parameters for the first camera.

me2

Moving edge parameters for the second camera.

  3. getMovingEdge(self: visp._visp.mbt.MbGenericTracker, mapOfMovingEdges: dict[str, visp._visp.me.Me]) -> None

Get the moving edge parameters for all the cameras

Parameters:
mapOfMovingEdges

Map of moving edge parameters for all the cameras.

getNbFeaturesDepthDense(self) int

Return the number of depth dense features taken into account in the virtual visual-servoing scheme.

getNbFeaturesDepthNormal(self) int

Return the number of depth normal features taken into account in the virtual visual-servoing scheme.

getNbFeaturesEdge(self) int

Return the number of moving-edges features taken into account in the virtual visual-servoing scheme.

This function is similar to getNbPoints() .

getNbFeaturesKlt(self) int

Return the number of klt keypoints features taken into account in the virtual visual-servoing scheme.

getNbPoints(*args, **kwargs)

Overloaded function.

  1. getNbPoints(self: visp._visp.mbt.MbGenericTracker, level: int = 0) -> int

Return the number of good points ( vpMeSite ) tracked. A good point is a vpMeSite with its flag “state” equal to 0. Only these points are used during the virtual visual servoing stage.

Note

Multi-scale moving edge tracking is not possible, scale level=0 must be used.

Note

See getNbFeaturesEdge()

Parameters:
level

Pyramid level to consider.

Returns:

the number of good points for the reference camera.

  2. getNbPoints(self: visp._visp.mbt.MbGenericTracker, mapOfNbPoints: dict[str, int], level: int = 0) -> None

Return the number of good points ( vpMeSite ) tracked. A good point is a vpMeSite with its flag “state” equal to 0. Only these points are used during the virtual visual servoing stage.

Note

Multi-scale moving edge tracking is not possible, scale level=0 must be used.

Parameters:
mapOfNbPoints

Map of number of good points ( vpMeSite ) tracked for all the cameras.

level

Pyramid level to consider.

getNbPolygon(*args, **kwargs)

Overloaded function.

  1. getNbPolygon(self: visp._visp.mbt.MbGenericTracker) -> int

Get the number of polygons (faces) representing the object to track.

Returns:

Number of polygons for the reference camera.

  2. getNbPolygon(self: visp._visp.mbt.MbGenericTracker, mapOfNbPolygons: dict[str, int]) -> None

Get the number of polygons (faces) representing the object to track.

Parameters:
mapOfNbPolygons

Map that contains the number of polygons for all the cameras.

  3. getNbPolygon(self: visp._visp.mbt.MbTracker) -> int

Get the number of polygons (faces) representing the object to track.

Returns:

Number of polygons.

getNearClippingDistance(self) float

Get the near distance for clipping.

Returns:

Near clipping value.

getOptimizationMethod(self) visp._visp.mbt.MbTracker.MbtOptimizationMethod

Get the optimization method used during the tracking. 0 = Gauss-Newton approach. 1 = Levenberg-Marquardt approach.

Returns:

Optimization method.

getPolygonFaces(*args, **kwargs)

Overloaded function.

  1. getPolygonFaces(self: visp._visp.mbt.MbGenericTracker, orderPolygons: bool = true, useVisibility: bool = true, clipPolygon: bool = false) -> tuple[list[visp._visp.core.Polygon], list[list[visp._visp.core.Point]]]

Get the list of polygons faces (a vpPolygon representing the projection of the face in the image and a list of face corners in 3D), with the possibility to order by distance to the camera or to use the visibility check to consider if the polygon face must be retrieved or not.

Note

This function will return the 2D polygons faces and 3D face points only for the reference camera.

Parameters:
orderPolygons

If true, the resulting list is ordered from the nearest polygon faces to the farthest.

useVisibility

If true, only visible faces will be retrieved.

clipPolygon

If true, the polygons will be clipped according to the clipping flags set in vpMbTracker .

Returns:

A pair object containing the list of vpPolygon and the list of face corners.

  2. getPolygonFaces(self: visp._visp.mbt.MbGenericTracker, mapOfPolygons: dict[str, list[visp._visp.core.Polygon]], mapOfPoints: dict[str, list[list[visp._visp.core.Point]]], orderPolygons: bool = true, useVisibility: bool = true, clipPolygon: bool = false) -> None

Get the list of polygons faces (a vpPolygon representing the projection of the face in the image and a list of face corners in 3D), with the possibility to order by distance to the camera or to use the visibility check to consider if the polygon face must be retrieved or not.

Note

This function will return the 2D polygon faces and 3D face points for all the cameras.

Parameters:
mapOfPolygons

Map of 2D polygon faces.

mapOfPoints

Map of face 3D points.

orderPolygons

If true, the resulting list is ordered from the nearest polygon faces to the farthest.

useVisibility

If true, only visible faces will be retrieved.

clipPolygon

If true, the polygons will be clipped according to the clipping flags set in vpMbTracker .

  3. getPolygonFaces(self: visp._visp.mbt.MbTracker, orderPolygons: bool = true, useVisibility: bool = true, clipPolygon: bool = false) -> tuple[list[visp._visp.core.Polygon], list[list[visp._visp.core.Point]]]

Get the list of polygons faces (a vpPolygon representing the projection of the face in the image and a list of face corners in 3D), with the possibility to order by distance to the camera or to use the visibility check to consider if the polygon face must be retrieved or not.

Parameters:
orderPolygons

If true, the resulting list is ordered from the nearest polygon faces to the farthest.

useVisibility

If true, only visible faces will be retrieved.

clipPolygon

If true, the polygons will be clipped according to the clipping flags set in vpMbTracker .

Returns:

A pair object containing the list of vpPolygon and the list of face corners.

getPose(*args, **kwargs)

Overloaded function.

  1. getPose(self: visp._visp.mbt.MbGenericTracker, cMo: visp._visp.core.HomogeneousMatrix) -> None

Get the current pose between the object and the camera. cMo is the matrix which can be used to express coordinates from the object frame to camera frame.

Parameters:
cMo

the pose

  2. getPose(self: visp._visp.mbt.MbGenericTracker, c1Mo: visp._visp.core.HomogeneousMatrix, c2Mo: visp._visp.core.HomogeneousMatrix) -> None

Get the current pose between the object and the cameras.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
c1Mo

The camera pose for the first camera.

c2Mo

The camera pose for the second camera.

  3. getPose(self: visp._visp.mbt.MbGenericTracker, mapOfCameraPoses: dict[str, visp._visp.core.HomogeneousMatrix]) -> None

Get the current pose between the object and the cameras.

Parameters:
mapOfCameraPoses

The map of camera poses for all the cameras.

  4. getPose(self: visp._visp.mbt.MbTracker, cMo: visp._visp.core.HomogeneousMatrix) -> None

Get the current pose between the object and the camera. cMo is the matrix which can be used to express coordinates from the object frame to camera frame.

Parameters:
cMo

the pose

  5. getPose(self: visp._visp.mbt.MbTracker) -> visp._visp.core.HomogeneousMatrix

Get the current pose between the object and the camera. cMo is the matrix which can be used to express coordinates from the object frame to camera frame.

Returns:

the current pose

getProjectionError(self) float

Get the error angle between the gradient direction of the model features projected at the resulting pose and their normal. The error is expressed in degrees, between 0 and 90. This value is computed if setProjectionErrorComputation() is turned on.

Note

See setProjectionErrorComputation()

Returns:

the value for the error.

getReferenceCameraName(self) str

Get name of the reference camera.

getRobustWeights(self) visp._visp.core.ColVector

Return the weights vector \(w_i\) computed by the robust scheme.

The following example shows how to use this function to compute the norm of the weighted residual and the norm of the weighted residual normalized by the sum of the weights associated to the features that are tracked:

tracker.track(I);
vpColVector w = tracker.getRobustWeights();
vpColVector e = tracker.getError();
vpColVector we(w.size());
for(unsigned int i=0; i<w.size(); i++)
  we[i] = w[i]*e[i];

std::cout << "Weighted residual: " << sqrt( (we).sumSquare() ) << std::endl;
std::cout << "Weighted residual normalized: " << sqrt( (we).sumSquare() ) / w.sum() << std::endl;

Note

See getError()

getStopCriteriaEpsilon(self) float
getTrackerType(self) int

The tracker type for the reference camera.

init(self, I: visp._visp.core.ImageGray) None

Initialise the tracking.

Parameters:
I: visp._visp.core.ImageGray

Input image.

initClick(*args, **kwargs)

Overloaded function.

  1. initClick(self: visp._visp.mbt.MbGenericTracker, I1: visp._visp.core.ImageGray, I2: visp._visp.core.ImageGray, initFile1: str, initFile2: str, displayHelp: bool = false, T1: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix(), T2: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None

Initialise the tracker by clicking in the reference image on the pixels that correspond to the 3D points whose coordinates are extracted from a file. In this file, comments starting with # character are allowed. Notice that 3D point coordinates are expressed in meter in the object frame with their X, Y and Z values.

The structure of this file is the following:

# 3D point coordinates
4                 # Number of points in the file (minimum is four)
0.01 0.01 0.01    # \
...               #  | 3D coordinates in the object frame (X, Y, Z)
0.01 -0.01 -0.01  # /

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
I1

Input grayscale image for the first camera.

I2

Input grayscale image for the second camera.

initFile1

File containing the coordinates of at least 4 3D points the user has to click in the image acquired by the first camera. This file should have .init extension (ie teabox.init).

initFile2

File containing the coordinates of at least 4 3D points the user has to click in the image acquired by the second camera. This file should have .init extension.

displayHelp

Optional display of an image that should have the same generic name as the init file (ie teabox.ppm, or teabox.png). This image may be used to show where to click. This functionality is only available if visp_io module is used. Supported image formats are .pgm, .ppm, .png, .jpeg.

T1

optional transformation matrix to transform 3D points in initFile1 expressed in the original object frame to the desired object frame.

T2

optional transformation matrix to transform 3D points in initFile2 expressed in the original object frame to the desired object frame (T2==T1 if the init points are expressed in the same object frame which should be the case most of the time).

  2. initClick(self: visp._visp.mbt.MbGenericTracker, I_color1: visp._visp.core.ImageRGBa, I_color2: visp._visp.core.ImageRGBa, initFile1: str, initFile2: str, displayHelp: bool = false, T1: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix(), T2: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None

Initialise the tracker by clicking in the reference image on the pixels that correspond to the 3D points whose coordinates are extracted from a file. In this file, comments starting with # character are allowed. Notice that 3D point coordinates are expressed in meter in the object frame with their X, Y and Z values.

The structure of this file is the following:

# 3D point coordinates
4                 # Number of points in the file (minimum is four)
0.01 0.01 0.01    # \
...               #  | 3D coordinates in the object frame (X, Y, Z)
0.01 -0.01 -0.01  # /

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
I_color1

Input color image for the first camera.

I_color2

Input color image for the second camera.

initFile1

File containing the coordinates of at least 4 3D points the user has to click in the image acquired by the first camera. This file should have .init extension (ie teabox.init).

initFile2

File containing the coordinates of at least 4 3D points the user has to click in the image acquired by the second camera. This file should have .init extension.

displayHelp

Optional display of an image that should have the same generic name as the init file (ie teabox.ppm, or teabox.png). This image may be used to show where to click. This functionality is only available if visp_io module is used. Supported image formats are .pgm, .ppm, .png, .jpeg.

T1

optional transformation matrix to transform 3D points in initFile1 expressed in the original object frame to the desired object frame.

T2

optional transformation matrix to transform 3D points in initFile2 expressed in the original object frame to the desired object frame (T2==T1 if the init points are expressed in the same object frame which should be the case most of the time).

  3. initClick(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray], mapOfInitFiles: dict[str, str], displayHelp: bool = false, mapOfT: dict[str, visp._visp.core.HomogeneousMatrix] = std::map<std::string,vpHomogeneousMatrix>()) -> None

Initialise the tracker by clicking in the reference image on the pixels that correspond to the 3D points whose coordinates are extracted from a file. In this file, comments starting with # character are allowed. Notice that 3D point coordinates are expressed in meter in the object frame with their X, Y and Z values.

The structure of this file is the following:

# 3D point coordinates
4                 # Number of points in the file (minimum is four)
0.01 0.01 0.01    # \
...               #  | 3D coordinates in the object frame (X, Y, Z)
0.01 -0.01 -0.01  # /

The cameras that do not have an init file will be automatically initialized, but the camera transformation matrices have to be set beforehand.

Note

Image and init file must be supplied for the reference camera. The images for all the cameras must be supplied to correctly initialize the trackers but some init files can be omitted. In this case, they will be initialized using the pose computed from the reference camera pose and using the known geometric transformation between each camera (see setCameraTransformationMatrix() ).

Parameters:
mapOfImages

Map of grayscale images.

mapOfInitFiles

Map of files containing the points where to click for each camera.

displayHelp

Optional display of an image that should have the same generic name as the init file (ie teabox.ppm, or teabox.png). This image may be used to show where to click. This functionality is only available if visp_io module is used. Supported image formats are .pgm, .ppm, .png, .jpeg.

mapOfT

optional map of transformation matrices to transform 3D points in mapOfInitFiles expressed in the original object frame to the desired object frame (if the init points are expressed in the same object frame which should be the case most of the time, all the transformation matrices are identical).

  4. initClick(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageRGBa], mapOfInitFiles: dict[str, str], displayHelp: bool = false, mapOfT: dict[str, visp._visp.core.HomogeneousMatrix] = std::map<std::string,vpHomogeneousMatrix>()) -> None

Initialise the tracker by clicking in the reference image on the pixels that correspond to the 3D points whose coordinates are extracted from a file. In this file, comments starting with # character are allowed. Notice that 3D point coordinates are expressed in meter in the object frame with their X, Y and Z values.

The structure of this file is the following:

# 3D point coordinates
4                 # Number of points in the file (minimum is four)
0.01 0.01 0.01    # \
...               #  | 3D coordinates in the object frame (X, Y, Z)
0.01 -0.01 -0.01  # /

The cameras that do not have an init file will be automatically initialized, but the camera transformation matrices have to be set beforehand.

Note

Image and init file must be supplied for the reference camera. The images for all the cameras must be supplied to correctly initialize the trackers but some init files can be omitted. In this case, they will be initialized using the pose computed from the reference camera pose and using the known geometric transformation between each camera (see setCameraTransformationMatrix() ).

Parameters:
mapOfImages

Map of color images.

mapOfInitFiles

Map of files containing the points where to click for each camera.

displayHelp

Optional display of an image that should have the same generic name as the init file (ie teabox.ppm, or teabox.png). This image may be used to show where to click. This functionality is only available if visp_io module is used. Supported image formats are .pgm, .ppm, .png, .jpeg.

mapOfT

optional map of transformation matrices to transform 3D points in mapOfInitFiles expressed in the original object frame to the desired object frame (if the init points are expressed in the same object frame which should be the case most of the time, all the transformation matrices are identical).

  1. initClick(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, initFile: str, displayHelp: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None

Initialise the tracker by clicking in the image on the pixels that correspond to the 3D points whose coordinates are extracted from a file. In this file, comments starting with # character are allowed. Notice that 3D point coordinates are expressed in meters in the object frame with their X, Y and Z values.

The structure of this file is the following:

# 3D point coordinates
4                 # Number of points in the file (minimum is four)
0.01 0.01 0.01    # \
...               #  | 3D coordinates in the object frame (X, Y, Z)
0.01 -0.01 -0.01  # /
Parameters:
I

Input grayscale image where the user has to click.

initFile

File containing the coordinates of at least 4 3D points the user has to click in the image. This file should have .init extension (ie teabox.init).

displayHelp

Optional display of an image (.ppm, .pgm, .jpg, .jpeg, .png) that should have the same generic name as the init file (ie teabox.ppm or teabox.png). This image may be used to show where to click. This functionality is only available if visp_io module is used.

T

optional transformation matrix to transform 3D points expressed in the original object frame to the desired object frame.

  1. initClick(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, initFile: str, displayHelp: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None

Initialise the tracker by clicking in the image on the pixels that correspond to the 3D points whose coordinates are extracted from a file. In this file, comments starting with # character are allowed. Notice that 3D point coordinates are expressed in meters in the object frame with their X, Y and Z values.

The structure of this file is the following:

# 3D point coordinates
4                 # Number of points in the file (minimum is four)
0.01 0.01 0.01    # \
...               #  | 3D coordinates in the object frame (X, Y, Z)
0.01 -0.01 -0.01  # /
Parameters:
I_color

Input color image where the user has to click.

initFile

File containing the coordinates of at least 4 3D points the user has to click in the image. This file should have .init extension (ie teabox.init).

displayHelp

Optional display of an image (.ppm, .pgm, .jpg, .jpeg, .png) that should have the same generic name as the init file (ie teabox.ppm or teabox.png). This image may be used to show where to click. This functionality is only available if visp_io module is used.

T

optional transformation matrix to transform 3D points expressed in the original object frame to the desired object frame.

  1. initClick(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, points3D_list: list[visp._visp.core.Point], displayFile: str = ) -> None

Initialise the tracker by clicking in the image on the pixels that correspond to the 3D points whose coordinates are given in points3D_list .

Parameters:
I

Input grayscale image where the user has to click.

points3D_list

List of at least 4 3D points with coordinates expressed in meters in the object frame.

displayFile

Path to the image used to display the help. This image may be used to show where to click. This functionality is only available if visp_io module is used.

  1. initClick(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, points3D_list: list[visp._visp.core.Point], displayFile: str = ) -> None

Initialise the tracker by clicking in the image on the pixels that correspond to the 3D points whose coordinates are given in points3D_list .

Parameters:
I_color

Input color image where the user has to click.

points3D_list

List of at least 4 3D points with coordinates expressed in meters in the object frame.

displayFile

Path to the image used to display the help. This image may be used to show where to click. This functionality is only available if visp_io module is used.

initFromPoints(*args, **kwargs)

Overloaded function.

  1. initFromPoints(self: visp._visp.mbt.MbGenericTracker, I1: visp._visp.core.ImageGray, I2: visp._visp.core.ImageGray, initFile1: str, initFile2: str) -> None

Initialise the tracker by reading 3D point coordinates and the corresponding 2D image point coordinates from a file. Comments starting with # character are allowed. 3D point coordinates are expressed in meters in the object frame with X, Y and Z values. 2D point coordinates are expressed in pixels, giving first the line (row) and then the column of the pixel in the image. The structure of this file is the following.

# 3D point coordinates
4                 # Number of 3D points in the file (minimum is four)
0.01 0.01 0.01    #  \
...               #  | 3D coordinates in meters in the object frame
0.01 -0.01 -0.01  # /
# corresponding 2D point coordinates
4                 # Number of image points in the file (has to be the same as the number of 3D points)
100 200           #  \
...               #  | 2D coordinates in pixel in the image
50 10             #  /

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
I1

Input grayscale image for the first camera.

I2

Input grayscale image for the second camera.

initFile1

Path to the file containing all the points for the first camera.

initFile2

Path to the file containing all the points for the second camera.

  1. initFromPoints(self: visp._visp.mbt.MbGenericTracker, I_color1: visp._visp.core.ImageRGBa, I_color2: visp._visp.core.ImageRGBa, initFile1: str, initFile2: str) -> None

Initialise the tracker by reading 3D point coordinates and the corresponding 2D image point coordinates from a file. Comments starting with # character are allowed. 3D point coordinates are expressed in meters in the object frame with X, Y and Z values. 2D point coordinates are expressed in pixels, giving first the line (row) and then the column of the pixel in the image. The structure of this file is the following.

# 3D point coordinates
4                 # Number of 3D points in the file (minimum is four)
0.01 0.01 0.01    #  \
...               #  | 3D coordinates in meters in the object frame
0.01 -0.01 -0.01  # /
# corresponding 2D point coordinates
4                 # Number of image points in the file (has to be the same as the number of 3D points)
100 200           #  \
...               #  | 2D coordinates in pixel in the image
50 10              #  /

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
I_color1

Input color image for the first camera.

I_color2

Input color image for the second camera.

initFile1

Path to the file containing all the points for the first camera.

initFile2

Path to the file containing all the points for the second camera.

  1. initFromPoints(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray], mapOfInitPoints: dict[str, str]) -> None

  2. initFromPoints(self: visp._visp.mbt.MbGenericTracker, mapOfColorImages: dict[str, visp._visp.core.ImageRGBa], mapOfInitPoints: dict[str, str]) -> None

  3. initFromPoints(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, initFile: str) -> None

Initialise the tracker by reading 3D point coordinates and the corresponding 2D image point coordinates from a file. Comments starting with # character are allowed. 3D point coordinates are expressed in meters in the object frame with X, Y and Z values. 2D point coordinates are expressed in pixels, giving first the line (row) and then the column of the pixel in the image. The structure of this file is the following.

# 3D point coordinates
4                 # Number of 3D points in the file (minimum is four)
0.01 0.01 0.01    #  \
...               #  | 3D coordinates in meters in the object frame
0.01 -0.01 -0.01  # /
# corresponding 2D point coordinates
4                 # Number of image points in the file (has to be the same as the number of 3D points)
100 200           #  \
...               #  | 2D coordinates in pixel in the image
50 10             #  /
Parameters:
I

Input grayscale image

initFile

Path to the file containing all the points.

  1. initFromPoints(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, initFile: str) -> None

Initialise the tracker by reading 3D point coordinates and the corresponding 2D image point coordinates from a file. Comments starting with # character are allowed. 3D point coordinates are expressed in meters in the object frame with X, Y and Z values. 2D point coordinates are expressed in pixels, giving first the line (row) and then the column of the pixel in the image. The structure of this file is the following.

# 3D point coordinates
4                 # Number of 3D points in the file (minimum is four)
0.01 0.01 0.01    #  \
...               #  | 3D coordinates in meters in the object frame
0.01 -0.01 -0.01  # /
# corresponding 2D point coordinates
4                 # Number of image points in the file (has to be the same as the number of 3D points)
100 200           #  \
...               #  | 2D coordinates in pixel in the image
50 10             #  /
Parameters:
I_color

Input color image

initFile

Path to the file containing all the points.

  1. initFromPoints(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, points2D_list: list[visp._visp.core.ImagePoint], points3D_list: list[visp._visp.core.Point]) -> None

Initialise the tracking with the list of image points (points2D_list) and the list of corresponding 3D points (object frame) (points3D_list).

Parameters:
I

Input grayscale image

points2D_list

List of image points.

points3D_list

List of 3D points (object frame).

  1. initFromPoints(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, points2D_list: list[visp._visp.core.ImagePoint], points3D_list: list[visp._visp.core.Point]) -> None

Initialise the tracking with the list of image points (points2D_list) and the list of corresponding 3D points (object frame) (points3D_list).

Parameters:
I_color

Input color image

points2D_list

List of image points.

points3D_list

List of 3D points (object frame).
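
When the 2D/3D correspondences are already available in the program, the list-based initFromPoints() overloads above avoid the intermediate file. The sketch below is only illustrative: the import paths, the ImagePoint(i, j) constructor and the Point.setWorldCoordinates() method are assumed to mirror the C++ API, and the coordinates are placeholders.

from visp.mbt import MbGenericTracker
from visp.core import ImageGray, ImagePoint, Point

tracker = MbGenericTracker()
I = ImageGray()  # image acquisition omitted

# 2D pixel coordinates (i = line/row, j = column) of the known image points.
points2D = [ImagePoint(100, 200), ImagePoint(110, 400),
            ImagePoint(300, 400), ImagePoint(300, 200)]

# Corresponding 3D points expressed in meters in the object frame.
points3D = []
for X, Y, Z in [(-0.05, -0.05, 0.0), (-0.05, 0.05, 0.0),
                (0.05, 0.05, 0.0), (0.05, -0.05, 0.0)]:
    p = Point()
    p.setWorldCoordinates(X, Y, Z)  # assumed to mirror vpPoint::setWorldCoordinates()
    points3D.append(p)

tracker.initFromPoints(I, points2D, points3D)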

initFromPose(*args, **kwargs)

Overloaded function.

  1. initFromPose(self: visp._visp.mbt.MbGenericTracker, I: visp._visp.core.ImageGray, cMo: visp._visp.core.HomogeneousMatrix) -> None

Initialise the tracking thanks to the pose.

Parameters:
I

Input grayscale image

cMo

Pose matrix.

  1. initFromPose(self: visp._visp.mbt.MbGenericTracker, I1: visp._visp.core.ImageGray, I2: visp._visp.core.ImageGray, initFile1: str, initFile2: str) -> None

Initialise the tracking thanks to the pose, given in vpPoseVector format and read from the files initFile1 and initFile2.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
I1

Input grayscale image for the first camera.

I2

Input grayscale image for the second camera.

initFile1

Init pose file for the first camera.

initFile2

Init pose file for the second camera.

  1. initFromPose(self: visp._visp.mbt.MbGenericTracker, I_color1: visp._visp.core.ImageRGBa, I_color2: visp._visp.core.ImageRGBa, initFile1: str, initFile2: str) -> None

Initialise the tracking thanks to the pose, given in vpPoseVector format and read from the files initFile1 and initFile2.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
I_color1

Input color image for the first camera.

I_color2

Input color image for the second camera.

initFile1

Init pose file for the first camera.

initFile2

Init pose file for the second camera.

  1. initFromPose(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray], mapOfInitPoses: dict[str, str]) -> None

Initialise the tracking thanks to the pose, given in vpPoseVector format and read from the init pose files in mapOfInitPoses.

Note

Image and init pose file must be supplied for the reference camera. The images for all the cameras must be supplied to correctly initialize the trackers but some init pose files can be omitted. In this case, they will be initialized using the pose computed from the reference camera pose and using the known geometric transformation between each camera (see setCameraTransformationMatrix() ).

Parameters:
mapOfImages

Map of grayscale images.

mapOfInitPoses

Map of init pose files.

  1. initFromPose(self: visp._visp.mbt.MbGenericTracker, mapOfColorImages: dict[str, visp._visp.core.ImageRGBa], mapOfInitPoses: dict[str, str]) -> None

Initialise the tracking thanks to the pose, given in vpPoseVector format and read from the init pose files in mapOfInitPoses.

Note

Image and init pose file must be supplied for the reference camera. The images for all the cameras must be supplied to correctly initialize the trackers but some init pose files can be omitted. In this case, they will be initialized using the pose computed from the reference camera pose and using the known geometric transformation between each camera (see setCameraTransformationMatrix() ).

Parameters:
mapOfColorImages

Map of color images.

mapOfInitPoses

Map of init pose files.

  1. initFromPose(self: visp._visp.mbt.MbGenericTracker, I1: visp._visp.core.ImageGray, I2: visp._visp.core.ImageGray, c1Mo: visp._visp.core.HomogeneousMatrix, c2Mo: visp._visp.core.HomogeneousMatrix) -> None

Initialize the tracking thanks to the pose.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
I1

Input grayscale image for the first camera.

I2

Input grayscale image for the second camera.

c1Mo

Pose matrix for the first camera.

c2Mo

Pose matrix for the second camera.

  1. initFromPose(self: visp._visp.mbt.MbGenericTracker, I_color1: visp._visp.core.ImageRGBa, I_color2: visp._visp.core.ImageRGBa, c1Mo: visp._visp.core.HomogeneousMatrix, c2Mo: visp._visp.core.HomogeneousMatrix) -> None

Initialize the tracking thanks to the pose.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
I_color1

Input color image for the first camera.

I_color2

Input color image for the second camera.

c1Mo

Pose matrix for the first camera.

c2Mo

Pose matrix for the second camera.

  1. initFromPose(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray], mapOfCameraPoses: dict[str, visp._visp.core.HomogeneousMatrix]) -> None

Initialize the tracking thanks to the pose.

Note

Image and camera pose must be supplied for the reference camera. The images for all the cameras must be supplied to correctly initialize the trackers but some camera poses can be omitted. In this case, they will be initialized using the pose computed from the reference camera pose and using the known geometric transformation between each camera (see setCameraTransformationMatrix() ).

Parameters:
mapOfImages

Map of grayscale images.

mapOfCameraPoses

Map of pose matrix.

  1. initFromPose(self: visp._visp.mbt.MbGenericTracker, mapOfColorImages: dict[str, visp._visp.core.ImageRGBa], mapOfCameraPoses: dict[str, visp._visp.core.HomogeneousMatrix]) -> None

Initialize the tracking thanks to the pose.

Note

Image and camera pose must be supplied for the reference camera. The images for all the cameras must be supplied to correctly initialize the trackers but some camera poses can be omitted. In this case, they will be initialized using the pose computed from the reference camera pose and using the known geometric transformation between each camera (see setCameraTransformationMatrix() ).

Parameters:
mapOfColorImages

Map of color images.

mapOfCameraPoses

Map of pose matrix.

  1. initFromPose(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, initFile: str) -> None

Initialise the tracking thanks to the pose, given in vpPoseVector format and read from the file initFile. The structure of this file is the following (without the comments):

// The six values of the pose vector
0.0000    //  \
0.0000    //  |
1.0000    //  | Example of value for the pose vector where Z = 1 meter
0.0000    //  |
0.0000    //  |
0.0000    //  /

The first three values correspond to the translation and the last three to the rotation, expressed with the thetaU parametrisation (see vpThetaUVector ).

Parameters:
I

Input grayscale image

initFile

Path to the file containing the pose.

  1. initFromPose(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, initFile: str) -> None

Initialise the tracking thanks to the pose, given in vpPoseVector format and read from the file initFile. The structure of this file is the following (without the comments):

// The six values of the pose vector
0.0000    //  \
0.0000    //  |
1.0000    //  | Example of value for the pose vector where Z = 1 meter
0.0000    //  |
0.0000    //  |
0.0000    //  /

The first three values correspond to the translation and the last three to the rotation, expressed with the thetaU parametrisation (see vpThetaUVector ).

Parameters:
I_color

Input color image

initFile

Path to the file containing the pose.

  1. initFromPose(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, cMo: visp._visp.core.HomogeneousMatrix) -> None

Initialise the tracking thanks to the pose.

Parameters:
I

Input grayscale image

cMo

Pose matrix.

  1. initFromPose(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, cMo: visp._visp.core.HomogeneousMatrix) -> None

Initialise the tracking thanks to the pose.

Parameters:
I_color

Input color image

cMo

Pose matrix.

  1. initFromPose(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, cPo: visp._visp.core.PoseVector) -> None

Initialise the tracking thanks to the pose vector.

Parameters:
I

Input grayscale image

cPo

Pose vector.

  1. initFromPose(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, cPo: visp._visp.core.PoseVector) -> None

Initialise the tracking thanks to the pose vector.

Parameters:
I_color

Input color image

cPo

Pose vector.
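
A minimal sketch of pose-based initialization using the PoseVector overload above (translation in meters, rotation as a thetaU vector in radians). The import paths, the 6-value PoseVector constructor and the pose values are assumptions made for the example.

from visp.mbt import MbGenericTracker
from visp.core import ImageGray, PoseVector

tracker = MbGenericTracker()
I = ImageGray()  # image acquisition omitted

# Object placed 0.5 m in front of the camera, no rotation: same layout as the
# pose file shown above (tx, ty, tz, tux, tuy, tuz).
cPo = PoseVector(0.0, 0.0, 0.5, 0.0, 0.0, 0.0)
tracker.initFromPose(I, cPo)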

loadConfigFile(*args, **kwargs)

Overloaded function.

  1. loadConfigFile(self: visp._visp.mbt.MbGenericTracker, configFile: str, verbose: bool = true) -> None

Load the configuration file. This file can be in XML format (.xml) or in JSON format (.json) if ViSP is compiled with the JSON option. From the configuration file, the parameters corresponding to the objects are initialized: tracking parameters and camera intrinsic parameters.

Parameters:
configFile

full name of the xml or json file.

verbose

verbose flag. Ignored when parsing JSON.

  1. loadConfigFile(self: visp._visp.mbt.MbGenericTracker, configFile1: str, configFile2: str, verbose: bool = true) -> None

Load the xml configuration files. From the configuration files, the parameters corresponding to the objects are initialized: tracking parameters and camera intrinsic parameters.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
configFile1

Full name of the xml file for the first camera.

configFile2

Full name of the xml file for the second camera.

verbose

verbose flag.

  1. loadConfigFile(self: visp._visp.mbt.MbGenericTracker, mapOfConfigFiles: dict[str, str], verbose: bool = true) -> None

Load the xml configuration files. From the configuration files, the parameters corresponding to the objects are initialized: tracking parameters and camera intrinsic parameters.

Note

Configuration files must be supplied for all the cameras.

Parameters:
mapOfConfigFiles

Map of xml files.

verbose

verbose flag.

  1. loadConfigFile(self: visp._visp.mbt.MbTracker, configFile: str, verbose: bool = true) -> None

Load a config file to parameterise the behavior of the tracker.

Virtual method to adapt to each tracker.

Parameters:
configFile

An xml config file to parse.

verbose

verbose flag.
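
A short sketch of per-camera configuration loading for a stereo setup, using the two-file overload above. The import paths, the (nbCameras, trackerType) constructor, the default camera names and the file names are assumptions.

from visp.mbt import MbGenericTracker

tracker = MbGenericTracker(2, MbGenericTracker.EDGE_TRACKER)

# One xml settings file per camera, in the camera order used at construction.
tracker.loadConfigFile("cam_left.xml", "cam_right.xml", verbose=True)

# Equivalent with the map-based overload (assumed default camera names):
# tracker.loadConfigFile({"Camera1": "cam_left.xml", "Camera2": "cam_right.xml"})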

loadModel(*args, **kwargs)

Overloaded function.

  1. loadModel(self: visp._visp.mbt.MbGenericTracker, modelFile: str, verbose: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None

Load a 3D model from the file given as parameter. This file must be either a vrml file (.wrl) or a CAO file (.cao). The CAO format is described in the loadCAOModel() method.

Note

All the trackers will use the same model in case of stereo / multiple cameras configuration.

Parameters:
modelFile

the file containing the 3D model description. The extension of this file is either .wrl or .cao.

verbose

verbose option to print additional information when loading CAO model files which include other CAO model files.

T

optional transformation matrix (currently only for .cao) to transform 3D points expressed in the original object frame to the desired object frame.

  1. loadModel(self: visp._visp.mbt.MbGenericTracker, modelFile1: str, modelFile2: str, verbose: bool = false, T1: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix(), T2: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None

Load a 3D model from the file given as parameter. This file must be either a vrml file (.wrl) or a CAO file (.cao). The CAO format is described in the loadCAOModel() method.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
modelFile1

the file containing the 3D model description for the first camera. The extension of this file is either .wrl or .cao.

modelFile2

the file containing the 3D model description for the second camera. The extension of this file is either .wrl or .cao.

verbose

verbose option to print additional information when loading CAO model files which include other CAO model files.

T1

optional transformation matrix (currently only for .cao) to transform 3D points in modelFile1 expressed in the original object frame to the desired object frame.

T2

optional transformation matrix (currently only for .cao) to transform 3D points in modelFile2 expressed in the original object frame to the desired object frame ( T2==T1 if the two models have the same object frame which should be the case most of the time).

  1. loadModel(self: visp._visp.mbt.MbGenericTracker, mapOfModelFiles: dict[str, str], verbose: bool = false, mapOfT: dict[str, visp._visp.core.HomogeneousMatrix] = std::map<std::string,vpHomogeneousMatrix>()) -> None

Load a 3D model from the file given as parameter. This file must be either a vrml file (.wrl) or a CAO file (.cao). The CAO format is described in the loadCAOModel() method.

Note

Each camera must have a model file.

Parameters:
mapOfModelFiles

map of files containing the 3D model description. The extension of these files is either .wrl or .cao.

verbose

verbose option to print additional information when loading CAO model files which include other CAO model files.

mapOfT

optional map of transformation matrices (currently only for .cao) to transform 3D points in mapOfModelFiles expressed in the original object frame to the desired object frame (if the models have the same object frame which should be the case most of the time, all the transformation matrices are identical).

  1. loadModel(self: visp._visp.mbt.MbTracker, modelFile: str, verbose: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None

Load a 3D model from the file given as parameter. This file must be either a vrml file (.wrl) or a CAO file (.cao). The CAO format is described in the loadCAOModel() method.

Parameters:
modelFile

the file containing the 3D model description. The extension of this file is either .wrl or .cao.

verbose

verbose option to print additional information when loading CAO model files which include other CAO model files.
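
The sketch below shows loadModel() with the optional transformation T that moves the model points from their original object frame to the desired one (here a small translation along Z, honoured only for .cao files). The HomogeneousMatrix constructor taking a translation and a thetaU rotation is assumed from the C++ API, and the file name is a placeholder.

from visp.mbt import MbGenericTracker
from visp.core import HomogeneousMatrix

tracker = MbGenericTracker()

# Shift the object frame 2 cm along Z of the original CAD frame.
T = HomogeneousMatrix(0.0, 0.0, 0.02, 0.0, 0.0, 0.0)
tracker.loadModel("teabox.cao", verbose=True, T=T)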

reInitModel(*args, **kwargs)

Overloaded function.

  1. reInitModel(self: visp._visp.mbt.MbGenericTracker, I: visp._visp.core.ImageGray, cad_name: str, cMo: visp._visp.core.HomogeneousMatrix, verbose: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None

Re-initialize the model used by the tracker.

Parameters:
I

The grayscale image containing the object to initialize.

cad_name

Path to the file containing the 3D model description.

cMo

The new vpHomogeneousMatrix between the camera and the new model.

verbose

verbose option to print additional information when loading CAO model files which include other CAO model files.

T

optional transformation matrix (currently only for .cao).

  1. reInitModel(self: visp._visp.mbt.MbGenericTracker, I_color: visp._visp.core.ImageRGBa, cad_name: str, cMo: visp._visp.core.HomogeneousMatrix, verbose: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None

Re-initialize the model used by the tracker.

Parameters:
I_color

The color image containing the object to initialize.

cad_name

Path to the file containing the 3D model description.

cMo

The new vpHomogeneousMatrix between the camera and the new model.

verbose

verbose option to print additional information when loading CAO model files which include other CAO model files.

T

optional transformation matrix (currently only for .cao).

  1. reInitModel(self: visp._visp.mbt.MbGenericTracker, I1: visp._visp.core.ImageGray, I2: visp._visp.core.ImageGray, cad_name1: str, cad_name2: str, c1Mo: visp._visp.core.HomogeneousMatrix, c2Mo: visp._visp.core.HomogeneousMatrix, verbose: bool = false, T1: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix(), T2: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None

Re-initialize the model used by the tracker.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
I1

The grayscale image containing the object to initialize for the first camera.

I2

The grayscale image containing the object to initialize for the second camera.

cad_name1

Path to the file containing the 3D model description for the first camera.

cad_name2

Path to the file containing the 3D model description for the second camera.

c1Mo

The new vpHomogeneousMatrix between the first camera and the new model.

c2Mo

The new vpHomogeneousMatrix between the second camera and the new model.

verbose

verbose option to print additional information when loading CAO model files which include other CAO model files.

T1

optional transformation matrix (currently only for .cao) to transform 3D points in cad_name1 expressed in the original object frame to the desired object frame.

T2

optional transformation matrix (currently only for .cao) to transform 3D points in cad_name2 expressed in the original object frame to the desired object frame ( T2==T1 if the two models have the same object frame which should be the case most of the time).

  1. reInitModel(self: visp._visp.mbt.MbGenericTracker, I_color1: visp._visp.core.ImageRGBa, I_color2: visp._visp.core.ImageRGBa, cad_name1: str, cad_name2: str, c1Mo: visp._visp.core.HomogeneousMatrix, c2Mo: visp._visp.core.HomogeneousMatrix, verbose: bool = false, T1: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix(), T2: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None

Re-initialize the model used by the tracker.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
I_color1

The color image containing the object to initialize for the first camera.

I_color2

The color image containing the object to initialize for the second camera.

cad_name1

Path to the file containing the 3D model description for the first camera.

cad_name2

Path to the file containing the 3D model description for the second camera.

c1Mo

The new vpHomogeneousMatrix between the first camera and the new model.

c2Mo

The new vpHomogeneousMatrix between the second camera and the new model.

verbose

verbose option to print additional information when loading CAO model files which include other CAO model files.

T1

optional transformation matrix (currently only for .cao) to transform 3D points in cad_name1 expressed in the original object frame to the desired object frame.

T2

optional transformation matrix (currently only for .cao) to transform 3D points in cad_name2 expressed in the original object frame to the desired object frame ( T2==T1 if the two models have the same object frame which should be the case most of the time).

  1. reInitModel(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray], mapOfModelFiles: dict[str, str], mapOfCameraPoses: dict[str, visp._visp.core.HomogeneousMatrix], verbose: bool = false, mapOfT: dict[str, visp._visp.core.HomogeneousMatrix] = std::map<std::string,vpHomogeneousMatrix>()) -> None

Re-initialize the model used by the tracker.

Parameters:
mapOfImages

Map of grayscale images.

mapOfModelFiles

Map of model files.

mapOfCameraPoses

The new vpHomogeneousMatrix between the cameras and the current object position.

verbose

Verbose option to print additional information when loading CAO model files which include other CAO model files.

mapOfT

optional map of transformation matrices (currently only for .cao) to transform 3D points in mapOfModelFiles expressed in the original object frame to the desired object frame (if the models have the same object frame which should be the case most of the time, all the transformation matrices are identical).

  1. reInitModel(self: visp._visp.mbt.MbGenericTracker, mapOfColorImages: dict[str, visp._visp.core.ImageRGBa], mapOfModelFiles: dict[str, str], mapOfCameraPoses: dict[str, visp._visp.core.HomogeneousMatrix], verbose: bool = false, mapOfT: dict[str, visp._visp.core.HomogeneousMatrix] = std::map<std::string,vpHomogeneousMatrix>()) -> None

Re-initialize the model used by the tracker.

Parameters:
mapOfColorImages

Map of color images.

mapOfModelFiles

Map of model files.

mapOfCameraPoses

The new vpHomogeneousMatrix between the cameras and the current object position.

verbose

Verbose option to print additional information when loading CAO model files which include other CAO model files.

mapOfT

optional map of transformation matrices (currently only for .cao) to transform 3D points in mapOfModelFiles expressed in the original object frame to the desired object frame (if the models have the same object frame which should be the case most of the time, all the transformation matrices are identical).
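
A sketch of switching the tracked model at run time with reInitModel(), e.g. when a different object has to be tracked. The import paths and the file name are placeholders; the new pose cMo would usually come from the current tracker estimate or from a detection step, the identity is used here only as a stand-in.

from visp.mbt import MbGenericTracker
from visp.core import ImageGray, HomogeneousMatrix

tracker = MbGenericTracker()
I = ImageGray()  # current image, acquisition omitted

# Pose of the new model in the camera frame (placeholder value).
cMo = HomogeneousMatrix()

tracker.reInitModel(I, "new_object.cao", cMo, verbose=False)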

resetTracker(self) None

Reset the tracker. The model is removed and the pose is set to identity. The tracker needs to be initialized with a new model and a new pose.

savePose(self, filename: str) None

Save the pose in the given filename

Parameters:
filename: str

Path to the file used to save the pose.

setAngleAppear(*args, **kwargs)

Overloaded function.

  1. setAngleAppear(self: visp._visp.mbt.MbGenericTracker, a: float) -> None

Set the angle used to test polygons appearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value lower than this parameter, the polygon is considered as appearing. The polygon will then be tracked.

Parameters:
a

new angle in radian.

  1. setAngleAppear(self: visp._visp.mbt.MbGenericTracker, a1: float, a2: float) -> None

Set the angle used to test polygons appearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value lower than this parameter, the polygon is considered as appearing. The polygon will then be tracked.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
a1

new angle in radian for the first camera.

a2

new angle in radian for the second camera.

  1. setAngleAppear(self: visp._visp.mbt.MbGenericTracker, mapOfAngles: dict[str, float]) -> None

Set the angle used to test polygons appearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value lower than this parameter, the polygon is considered as appearing. The polygon will then be tracked.

Parameters:
mapOfAngles

Map of new angles in radian.

  1. setAngleAppear(self: visp._visp.mbt.MbTracker, a: float) -> None

Set the angle used to test polygons appearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value lower than this parameter, the polygon is considered as appearing. The polygon will then be tracked.

Parameters:
a

new angle in radian.

setAngleDisappear(*args, **kwargs)

Overloaded function.

  1. setAngleDisappear(self: visp._visp.mbt.MbGenericTracker, a: float) -> None

Set the angle used to test polygons disappearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value greater than this parameter, the polygon is considered as disappearing. The tracking of the polygon will then be stopped.

Parameters:
a

new angle in radian.

  1. setAngleDisappear(self: visp._visp.mbt.MbGenericTracker, a1: float, a2: float) -> None

Set the angle used to test polygons disappearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value greater than this parameter, the polygon is considered as disappearing. The tracking of the polygon will then be stopped.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
a1

new angle in radian for the first camera.

a2

new angle in radian for the second camera.

  1. setAngleDisappear(self: visp._visp.mbt.MbGenericTracker, mapOfAngles: dict[str, float]) -> None

Set the angle used to test polygons disappearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value greater than this parameter, the polygon is considered as disappearing. The tracking of the polygon will then be stopped.

Parameters:
mapOfAngles

Map of new angles in radian.

  1. setAngleDisappear(self: visp._visp.mbt.MbTracker, a: float) -> None

Set the angle used to test polygons disappearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value greater than this parameter, the polygon is considered as disappearing. The tracking of the polygon will then be stopped.

Parameters:
a

new angle in radian.

setCameraParameters(*args, **kwargs)

Overloaded function.

  1. setCameraParameters(self: visp._visp.mbt.MbGenericTracker, camera: visp._visp.core.CameraParameters) -> None

Set the camera parameters.

Parameters:
camera

the new camera parameters.

  1. setCameraParameters(self: visp._visp.mbt.MbGenericTracker, camera1: visp._visp.core.CameraParameters, camera2: visp._visp.core.CameraParameters) -> None

Set the camera parameters.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
camera1

the new camera parameters for the first camera.

camera2

the new camera parameters for the second camera.

  1. setCameraParameters(self: visp._visp.mbt.MbGenericTracker, mapOfCameraParameters: dict[str, visp._visp.core.CameraParameters]) -> None

Set the camera parameters.

Note

This function will set the camera parameters only for the supplied camera names.

Parameters:
mapOfCameraParameters

map of new camera parameters.

  1. setCameraParameters(self: visp._visp.mbt.MbTracker, cam: visp._visp.core.CameraParameters) -> None

Set the camera parameters.

Parameters:
cam

The new camera parameters.

setCameraTransformationMatrix(*args, **kwargs)

Overloaded function.

  1. setCameraTransformationMatrix(self: visp._visp.mbt.MbGenericTracker, cameraName: str, cameraTransformationMatrix: visp._visp.core.HomogeneousMatrix) -> None

Set the camera transformation matrix for the specified camera ( \(_{}^{c_{current}}\textrm{M}_{c_{reference}}\) ).

Parameters:
cameraName

Camera name.

cameraTransformationMatrix

Camera transformation matrix between the current and the reference camera.

  1. setCameraTransformationMatrix(self: visp._visp.mbt.MbGenericTracker, mapOfTransformationMatrix: dict[str, visp._visp.core.HomogeneousMatrix]) -> None

Set the map of camera transformation matrices ( \(_{}^{c_1}\textrm{M}_{c_1}, _{}^{c_2}\textrm{M}_{c_1}, _{}^{c_3}\textrm{M}_{c_1}, \cdots, _{}^{c_n}\textrm{M}_{c_1}\) ).

Parameters:
mapOfTransformationMatrix

map of camera transformation matrices.
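
Sketch of a typical multi-camera setup: the reference camera gets the identity and the second camera gets its transformation with respect to the reference. The camera names, the 6-value HomogeneousMatrix constructor and the baseline value are assumptions for the example.

from visp.mbt import MbGenericTracker
from visp.core import HomogeneousMatrix

tracker = MbGenericTracker(2, MbGenericTracker.EDGE_TRACKER)

# c2Mc1: pose of the reference camera expressed in the second camera frame
# (placeholder: 10 cm horizontal baseline, no rotation).
c2Mc1 = HomogeneousMatrix(0.10, 0.0, 0.0, 0.0, 0.0, 0.0)

tracker.setCameraTransformationMatrix({
    "Camera1": HomogeneousMatrix(),  # reference camera: identity
    "Camera2": c2Mc1,
})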

setClipping(*args, **kwargs)

Overloaded function.

  1. setClipping(self: visp._visp.mbt.MbGenericTracker, flags: int) -> None

Specify which clipping to use.

Note

See vpMbtPolygonClipping

Note

This function will set the new parameter for all the cameras.

Parameters:
flags

New clipping flags.

  1. setClipping(self: visp._visp.mbt.MbGenericTracker, flags1: int, flags2: int) -> None

Specify which clipping to use.

Note

See vpMbtPolygonClipping

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
flags1

New clipping flags for the first camera.

flags2

New clipping flags for the second camera.

  1. setClipping(self: visp._visp.mbt.MbGenericTracker, mapOfClippingFlags: dict[str, int]) -> None

Specify which clipping to use.

Note

See vpMbtPolygonClipping

Parameters:
mapOfClippingFlags

Map of new clipping flags.

  1. setClipping(self: visp._visp.mbt.MbTracker, flags: int) -> None

Specify which clipping to use.

Note

See vpMbtPolygonClipping

Parameters:
flags

New clipping flags.

setCovarianceComputation(self, flag: bool) None

Set if the covariance matrix has to be computed.

Note

See getCovarianceMatrix()

Parameters:
flag: bool

True if the covariance has to be computed, false otherwise. If computed its value is available with getCovarianceMatrix()

setDepthDenseFilteringMaxDistance(self, maxDistance: float) None

Set maximum distance to consider a face. You should use the maximum depth range of the sensor used.

Note

See setDepthDenseFilteringMethod

Note

This function will set the new parameter for all the cameras.

Parameters:
maxDistance: float

Maximum distance to the face.

setDepthDenseFilteringMethod(self, method: int) None

Set the method to discard a face, e.g. if it is outside of the depth range.

Note

See vpMbtFaceDepthDense::vpDepthDenseFilteringType

Note

This function will set the new parameter for all the cameras.

Parameters:
method: int

Depth dense filtering method.

setDepthDenseFilteringMinDistance(self, minDistance: float) None

Set minimum distance to consider a face. You should use the minimum depth range of the sensor used.

Note

See setDepthDenseFilteringMethod

Note

This function will set the new parameter for all the cameras.

Parameters:
minDistance: float

Minimum distance to the face.

setDepthDenseFilteringOccupancyRatio(self, occupancyRatio: float) None

Set depth occupancy ratio to consider a face, used to discard faces where the depth map is not well reconstructed.

Note

See setDepthDenseFilteringMethod

Note

This function will set the new parameter for all the cameras.

Parameters:
occupancyRatio: float

Occupancy ratio, between [0 ; 1].

setDepthDenseSamplingStep(self, stepX: int, stepY: int) None

Set depth dense sampling step.

Note

This function will set the new parameter for all the cameras.

Parameters:
stepX: int

Sampling step in x-direction.

stepY: int

Sampling step in y-direction.
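
The depth-dense settings above are usually tuned together for the RGB-D sensor at hand; a minimal sketch with placeholder values, assuming the DEPTH_DENSE_TRACKER constant and the (nbCameras, trackerType) constructor mirror the C++ API.

from visp.mbt import MbGenericTracker

# Depth-dense tracker on a single RGB-D camera.
tracker = MbGenericTracker(1, MbGenericTracker.DEPTH_DENSE_TRACKER)

tracker.setDepthDenseFilteringMinDistance(0.2)     # meters, sensor minimum range
tracker.setDepthDenseFilteringMaxDistance(3.0)     # meters, sensor maximum range
tracker.setDepthDenseFilteringOccupancyRatio(0.3)  # discard poorly reconstructed faces
tracker.setDepthDenseSamplingStep(4, 4)            # keep one depth point out of 4 in x and y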

setDepthNormalFaceCentroidMethod(self, method: visp._visp.mbt.MbtFaceDepthNormal.FaceCentroidType) None

Set method to compute the centroid for display for depth tracker.

Note

This function will set the new parameter for all the cameras.

Parameters:
method: visp._visp.mbt.MbtFaceDepthNormal.FaceCentroidType

Centroid computation method.

setDepthNormalFeatureEstimationMethod(self, method: visp._visp.mbt.MbtFaceDepthNormal.FeatureEstimationType) None

Set depth feature estimation method.

Note

This function will set the new parameter for all the cameras.

Parameters:
method: visp._visp.mbt.MbtFaceDepthNormal.FeatureEstimationType

Depth feature estimation method.

setDepthNormalPclPlaneEstimationMethod(self, method: int) None

Set depth PCL plane estimation method.

Note

This function will set the new parameter for all the cameras.

Parameters:
method: int

Depth PCL plane estimation method.

setDepthNormalPclPlaneEstimationRansacMaxIter(self, maxIter: int) None

Set depth PCL RANSAC maximum number of iterations.

Note

This function will set the new parameter for all the cameras.

Parameters:
maxIter: int

Depth PCL RANSAC maximum number of iterations.

setDepthNormalPclPlaneEstimationRansacThreshold(self, threshold: float) → None

Set depth PCL RANSAC threshold.

Note

This function will set the new parameter for all the cameras.

Parameters:
threshold: float

Depth PCL RANSAC threshold.

setDepthNormalSamplingStep(self, stepX: int, stepY: int) None

Set depth sampling step.

Note

This function will set the new parameter for all the cameras.

Parameters:
stepX: int

Sampling step in x-direction.

stepY: int

Sampling step in y-direction.

setDisplayFeatures(*args, **kwargs)

Overloaded function.

  1. setDisplayFeatures(self: visp._visp.mbt.MbGenericTracker, displayF: bool) -> None

Enable the display of the features. By features, we mean the moving edges (ME) and the KLT points if used.

Note that if present, the moving edges can be displayed with different colors:

  • If green : The ME is a good point.

  • If blue : The ME is removed because of a contrast problem during the tracking phase.

  • If purple : The ME is removed because of a threshold problem during the tracking phase.

  • If red : The ME is removed because it is rejected by the robust approach in the virtual visual servoing scheme.

Note

This function will set the new parameter for all the cameras.

Parameters:
displayF

set it to true to display the features.

  1. setDisplayFeatures(self: visp._visp.mbt.MbTracker, displayF: bool) -> None

Enable the display of the features. By features, we mean the moving edges (ME) and the KLT points if used.

Note that if present, the moving edges can be displayed with different colors:

  • If green : The ME is a good point.

  • If blue : The ME is removed because of a contrast problem during the tracking phase.

  • If purple : The ME is removed because of a threshold problem during the tracking phase.

  • If red : The ME is removed because it is rejected by the robust approach in the virtual visual servoing scheme.

Parameters:
displayF

set it to true to display the features.

setEstimatedDoF(self, v: visp._visp.core.ColVector) None

Set a 6-dim column vector indicating which degrees of freedom in the object frame are estimated by the tracker. When a component is set to 1, the corresponding dof is estimated; when all components are set to 1, all the 6 dof are estimated. A usage sketch is given after the list below.

Below we give the correspondence between the index of the vector and the considered dof:

  • v[0] = 1 if translation along X is estimated, 0 otherwise;

  • v[1] = 1 if translation along Y is estimated, 0 otherwise;

  • v[2] = 1 if translation along Z is estimated, 0 otherwise;

  • v[3] = 1 if rotation along X is estimated, 0 otherwise;

  • v[4] = 1 if rotation along Y is estimated, 0 otherwise;

  • v[5] = 1 if rotation along Z is estimated, 0 otherwise;
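
A minimal sketch, assuming the import paths, the ColVector(size, value) constructor and element assignment with [] mirror the C++ vpColVector API. It freezes the rotations around X and Y while the other dof remain estimated.

from visp.mbt import MbGenericTracker
from visp.core import ColVector

tracker = MbGenericTracker()

v = ColVector(6, 1.0)  # start with all 6 dof estimated
v[3] = 0.0             # do not estimate the rotation around X
v[4] = 0.0             # do not estimate the rotation around Y
tracker.setEstimatedDoF(v)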

setFarClippingDistance(*args, **kwargs)

Overloaded function.

  1. setFarClippingDistance(self: visp._visp.mbt.MbGenericTracker, dist: float) -> None

Set the far distance for clipping.

Note

This function will set the new parameter for all the cameras.

Parameters:
dist

Far clipping value.

  1. setFarClippingDistance(self: visp._visp.mbt.MbGenericTracker, dist1: float, dist2: float) -> None

Set the far distance for clipping.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
dist1

Far clipping value for the first camera.

dist2

Far clipping value for the second camera.

  1. setFarClippingDistance(self: visp._visp.mbt.MbGenericTracker, mapOfClippingDists: dict[str, float]) -> None

Set the far distance for clipping.

Parameters:
mapOfClippingDists

Map of far clipping values.

  1. setFarClippingDistance(self: visp._visp.mbt.MbTracker, dist: float) -> None

Set the far distance for clipping.

Parameters:
dist

Far clipping value.

setFeatureFactors(self, mapOfFeatureFactors: dict[visp._visp.mbt.MbGenericTracker.TrackerType, float]) None

Set the feature factors used in the VVS stage (weighting between the feature types).

Parameters:
mapOfFeatureFactors: dict[visp._visp.mbt.MbGenericTracker.TrackerType, float]

Map of feature factors.
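
For a hybrid tracker, these factors rebalance the contribution of each feature type in the VVS optimization. A sketch with arbitrary example weights; the EDGE_TRACKER and KLT_TRACKER constants, their combination into a hybrid tracker type and the import path are assumed to mirror the C++ enum.

from visp.mbt import MbGenericTracker

# Hybrid edge + KLT tracker on a single camera (tracker types combined as ints,
# assuming the same bitmask convention as in C++).
hybrid_type = int(MbGenericTracker.EDGE_TRACKER) | int(MbGenericTracker.KLT_TRACKER)
tracker = MbGenericTracker(1, hybrid_type)

tracker.setFeatureFactors({
    MbGenericTracker.EDGE_TRACKER: 1.0,  # keep full weight on the edges
    MbGenericTracker.KLT_TRACKER: 0.5,   # down-weight the KLT points
})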

setGoodMovingEdgesRatioThreshold(self, threshold: float) None

Set the threshold value, between 0 and 1, over the good moving edges ratio. It is used to decide whether the tracker has enough valid moving edges to compute a pose. A value of 1 means that all the moving edges must be good to obtain a valid pose, while 0.1 means that 10% of good moving edges are enough to declare the pose valid.

Note

See getGoodMovingEdgesRatioThreshold()

Note

This function will set the new parameter for all the cameras.

Parameters:
threshold: float

Value between 0 and 1 that corresponds to the ratio of good moving edges that is necessary to consider that the estimated pose is valid. Default value is 0.4.

setInitialMu(self, mu: float) None

Set the initial value of mu for the Levenberg-Marquardt optimization loop.

Parameters:
mu: float

initial mu.

setKltMaskBorder(*args, **kwargs)

Overloaded function.

  1. setKltMaskBorder(self: visp._visp.mbt.MbGenericTracker, e: int) -> None

Set the erosion of the mask used on the Model faces.

Note

This function will set the new parameter for all the cameras.

Parameters:
e

The desired erosion.

  1. setKltMaskBorder(self: visp._visp.mbt.MbGenericTracker, e1: int, e2: int) -> None

Set the erosion of the mask used on the Model faces.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
e1

The desired erosion for the first camera.

e2

The desired erosion for the second camera.

  1. setKltMaskBorder(self: visp._visp.mbt.MbGenericTracker, mapOfErosions: dict[str, int]) -> None

Set the erosion of the mask used on the Model faces.

Parameters:
mapOfErosions

Map of desired erosions.

setKltOpencv(*args, **kwargs)

Overloaded function.

  1. setKltOpencv(self: visp._visp.mbt.MbGenericTracker, t: visp._visp.klt.KltOpencv) -> None

Set the new value of the klt tracker.

Note

This function will set the new parameter for all the cameras.

Parameters:
t

Klt tracker containing the new values.

  1. setKltOpencv(self: visp._visp.mbt.MbGenericTracker, t1: visp._visp.klt.KltOpencv, t2: visp._visp.klt.KltOpencv) -> None

Set the new value of the klt tracker.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
t1

Klt tracker containing the new values for the first camera.

t2

Klt tracker containing the new values for the second camera.

  1. setKltOpencv(self: visp._visp.mbt.MbGenericTracker, mapOfKlts: dict[str, visp._visp.klt.KltOpencv]) -> None

Set the new value of the klt tracker.

Parameters:
mapOfKlts

Map of klt tracker containing the new values.

setKltThresholdAcceptation(self, th: float) None

Set the threshold for the acceptance of a point.

Note

This function will set the new parameter for all the cameras.

Parameters:
th: float

Threshold for the weight below which a point is rejected.

setLambda(self, gain: float) None

Set the value of the gain used to compute the control law.

Parameters:
gain: float

the desired value for the gain.

setLod(*args, **kwargs)

Overloaded function.

  1. setLod(self: visp._visp.mbt.MbGenericTracker, useLod: bool, name: str = ) -> None

Set the flag to consider if the level of detail (LOD) is used.

Note

See setMinLineLengthThresh() , setMinPolygonAreaThresh()

Note

This function will set the new parameter for all the cameras.

Parameters:
useLod

true if the level of detail must be used, false otherwise. When true, two parameters can be set, see setMinLineLengthThresh() and setMinPolygonAreaThresh() .

name

name of the face we want to modify the LOD parameter.

  1. setLod(self: visp._visp.mbt.MbTracker, useLod: bool, name: str = ) -> None

Set the flag to consider if the level of detail (LOD) is used.

Note

See setMinLineLengthThresh() , setMinPolygonAreaThresh()

Parameters:
useLod

true if the level of detail must be used, false otherwise. When true, two parameters can be set, see setMinLineLengthThresh() and setMinPolygonAreaThresh() .

name

name of the face we want to modify the LOD parameter.

setMask(*args, **kwargs)

Overloaded function.

  1. setMask(self: visp._visp.mbt.MbGenericTracker, mask: vpImage<bool>) -> None

Set the visibility mask.

Parameters:
mask

visibility mask.

  1. setMask(self: visp._visp.mbt.MbTracker, mask: vpImage<bool>) -> None

setMaxIter(self, max: int) None

Set the maximum iteration of the virtual visual servoing stage.

Parameters:
max: int

the desired number of iterations

setMinLineLengthThresh(*args, **kwargs)

Overloaded function.

  1. setMinLineLengthThresh(self: visp._visp.mbt.MbGenericTracker, minLineLengthThresh: float, name: str = ) -> None

Set the threshold for the minimum line length to be considered as visible in the LOD case.

Note

See setLod() , setMinPolygonAreaThresh()

Note

This function will set the new parameter for all the cameras.

Parameters:
minLineLengthThresh

threshold for the minimum line length in pixel.

name

name of the face we want to modify the LOD threshold.

  1. setMinLineLengthThresh(self: visp._visp.mbt.MbTracker, minLineLengthThresh: float, name: str = ) -> None

Set the threshold for the minimum line length to be considered as visible in the LOD case.

Note

See setLod() , setMinPolygonAreaThresh()

Parameters:
minLineLengthThresh

threshold for the minimum line length in pixel.

name

name of the face we want to modify the LOD threshold.

setMinPolygonAreaThresh(*args, **kwargs)

Overloaded function.

  1. setMinPolygonAreaThresh(self: visp._visp.mbt.MbGenericTracker, minPolygonAreaThresh: float, name: str = ) -> None

Set the minimum polygon area to be considered as visible in the LOD case.

Note

See setLod() , setMinLineLengthThresh()

Note

This function will set the new parameter for all the cameras.

Parameters:
minPolygonAreaThresh

threshold for the minimum polygon area in pixel.

name

name of the face we want to modify the LOD threshold.

  1. setMinPolygonAreaThresh(self: visp._visp.mbt.MbTracker, minPolygonAreaThresh: float, name: str = ) -> None

Set the minimum polygon area to be considered as visible in the LOD case.

Note

See setLod() , setMinLineLengthThresh()

Parameters:
minPolygonAreaThresh

threshold for the minimum polygon area in pixel.

name

name of the face we want to modify the LOD threshold.
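
The three LOD-related setters above are typically used together; a minimal sketch with placeholder thresholds (line length in pixels, polygon area in pixels). The import path is an assumption.

from visp.mbt import MbGenericTracker

tracker = MbGenericTracker()

tracker.setLod(True)                     # enable level of detail for all the faces
tracker.setMinLineLengthThresh(50.0)     # ignore lines shorter than 50 pixels
tracker.setMinPolygonAreaThresh(2500.0)  # ignore faces smaller than 2500 pixels

# For per-face settings, pass the face name as second argument, e.g.:
# tracker.setLod(False, "back_face")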

setMovingEdge(*args, **kwargs)

Overloaded function.

  1. setMovingEdge(self: visp._visp.mbt.MbGenericTracker, me: visp._visp.me.Me) -> None

Set the moving edge parameters.

Note

This function will set the new parameter for all the cameras.

Parameters:
me

an instance of vpMe containing all the desired parameters.

  1. setMovingEdge(self: visp._visp.mbt.MbGenericTracker, me1: visp._visp.me.Me, me2: visp._visp.me.Me) -> None

Set the moving edge parameters.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
me1

an instance of vpMe containing all the desired parameters for the first camera.

me2

an instance of vpMe containing all the desired parameters for the second camera.

  1. setMovingEdge(self: visp._visp.mbt.MbGenericTracker, mapOfMe: dict[str, visp._visp.me.Me]) -> None

Set the moving edge parameters.

Parameters:
mapOfMe

Map of vpMe containing all the desired parameters.
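
A sketch of tuning the moving-edge parameters through a Me instance before passing it to the tracker; the setter names are assumed to mirror the C++ vpMe API and the values are placeholders.

from visp.mbt import MbGenericTracker
from visp.me import Me

tracker = MbGenericTracker()

me = Me()
me.setMaskSize(5)      # size of the convolution masks
me.setRange(8)         # search range on both sides of the edge, in pixels
me.setSampleStep(4.0)  # distance between two discretized points, in pixels

tracker.setMovingEdge(me)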

setNearClippingDistance(*args, **kwargs)

Overloaded function.

  1. setNearClippingDistance(self: visp._visp.mbt.MbGenericTracker, dist: float) -> None

Set the near distance for clipping.

Note

This function will set the new parameter for all the cameras.

Parameters:
dist

Near clipping value.

  1. setNearClippingDistance(self: visp._visp.mbt.MbGenericTracker, dist1: float, dist2: float) -> None

Set the near distance for clipping.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
dist1

Near clipping value for the first camera.

dist2

Near clipping value for the second camera.

  1. setNearClippingDistance(self: visp._visp.mbt.MbGenericTracker, mapOfDists: dict[str, float]) -> None

Set the near distance for clipping.

Parameters:
mapOfDists

Map of near clipping values.

  1. setNearClippingDistance(self: visp._visp.mbt.MbTracker, dist: float) -> None

Set the near distance for clipping.

Parameters:
dist

Near clipping value.

setOgreShowConfigDialog(*args, **kwargs)

Overloaded function.

  1. setOgreShowConfigDialog(self: visp._visp.mbt.MbGenericTracker, showConfigDialog: bool) -> None

Enable/Disable the appearance of Ogre config dialog on startup.

Warning

This method only has an effect when Ogre is used and the Ogre visibility test is enabled using setOgreVisibilityTest() with a true parameter.

Note

This function will set the new parameter for all the cameras.

Parameters:
showConfigDialog

if true, shows Ogre dialog window (used to set Ogre rendering options) when Ogre visibility is enabled. By default, this functionality is turned off.

  1. setOgreShowConfigDialog(self: visp._visp.mbt.MbTracker, showConfigDialog: bool) -> None

Enable/Disable the appearance of Ogre config dialog on startup.

Warning

This method only has an effect when Ogre is used and the Ogre visibility test is enabled using setOgreVisibilityTest() with a true parameter.

Parameters:
showConfigDialog

if true, shows Ogre dialog window (used to set Ogre rendering options) when Ogre visibility is enabled. By default, this functionality is turned off.

setOgreVisibilityTest(*args, **kwargs)

Overloaded function.

  1. setOgreVisibilityTest(self: visp._visp.mbt.MbGenericTracker, v: bool) -> None

Use Ogre3D for visibility tests

Warning

This function has to be called before the initialization of the tracker.

Note

This function will set the new parameter for all the cameras.

Parameters:
v

True to use it, False otherwise

  1. setOgreVisibilityTest(self: visp._visp.mbt.MbTracker, v: bool) -> None

Use Ogre3D for visibility tests

Warning

This function has to be called before the initialization of the tracker.

Parameters:
v

True to use it, False otherwise

setOptimizationMethod(*args, **kwargs)

Overloaded function.

  1. setOptimizationMethod(self: visp._visp.mbt.MbGenericTracker, opt: visp._visp.mbt.MbTracker.MbtOptimizationMethod) -> None

  2. setOptimizationMethod(self: visp._visp.mbt.MbTracker, opt: visp._visp.mbt.MbTracker.MbtOptimizationMethod) -> None

setPose(*args, **kwargs)

Overloaded function.

  1. setPose(self: visp._visp.mbt.MbGenericTracker, I: visp._visp.core.ImageGray, cdMo: visp._visp.core.HomogeneousMatrix) -> None

Set the pose to be used as initial guess for the next call to the track() function. This pose will be used only once.

Warning

This functionality is not available when tracking cylinders with the KLT tracking.

Note

This function will set the new parameter for all the cameras.

Parameters:
I

grayscale image corresponding to the desired pose.

cdMo

Pose to affect.

  1. setPose(self: visp._visp.mbt.MbGenericTracker, I_color: visp._visp.core.ImageRGBa, cdMo: visp._visp.core.HomogeneousMatrix) -> None

Set the pose to be used as initial guess for the next call to the track() function. This pose will be used only once.

Warning

This functionality is not available when tracking cylinders with the KLT tracking.

Note

This function will set the new parameter for all the cameras.

Parameters:
I_color

color image corresponding to the desired pose.

cdMo

Pose to affect.

  1. setPose(self: visp._visp.mbt.MbGenericTracker, I1: visp._visp.core.ImageGray, I2: visp._visp.core.ImageGray, c1Mo: visp._visp.core.HomogeneousMatrix, c2Mo: visp._visp.core.HomogeneousMatrix) -> None

Set the pose to be used as input for the next call to the track() function. This pose will be used only once.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
I1

First grayscale image corresponding to the desired pose.

I2

Second grayscale image corresponding to the desired pose.

c1Mo

First pose to affect.

c2Mo

Second pose to affect.

  1. setPose(self: visp._visp.mbt.MbGenericTracker, I_color1: visp._visp.core.ImageRGBa, I_color2: visp._visp.core.ImageRGBa, c1Mo: visp._visp.core.HomogeneousMatrix, c2Mo: visp._visp.core.HomogeneousMatrix) -> None

Set the pose to be used as input for the next call to the track() function. This pose will be used only once.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
I_color1

First color image corresponding to the desired pose.

I_color2

Second color image corresponding to the desired pose.

c1Mo

First pose to affect.

c2Mo

Second pose to affect.

  1. setPose(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray], mapOfCameraPoses: dict[str, visp._visp.core.HomogeneousMatrix]) -> None

Set the pose to be used as input for the next call to the track() function. This pose will be used only once. The camera transformation matrices have to be set beforehand.

Note

Image and camera pose must be supplied for the reference camera. The images for all the cameras must be supplied to correctly initialize the trackers but some camera poses can be omitted. In this case, they will be initialized using the pose computed from the reference camera pose and using the known geometric transformation between each camera (see setCameraTransformationMatrix() ).

Parameters:
mapOfImages

Map of grayscale images.

mapOfCameraPoses

Map of poses to set for the cameras.

  6. setPose(self: visp._visp.mbt.MbGenericTracker, mapOfColorImages: dict[str, visp._visp.core.ImageRGBa], mapOfCameraPoses: dict[str, visp._visp.core.HomogeneousMatrix]) -> None

Set the pose to be used as input for the next call to the track() function. This pose will only be used once. The camera transformation matrices have to be set before.

Note

Image and camera pose must be supplied for the reference camera. The images for all the cameras must be supplied to correctly initialize the trackers, but some camera poses can be omitted. In this case, they will be initialized using the pose computed from the reference camera pose and using the known geometric transformation between each camera (see setCameraTransformationMatrix() ).

Parameters:
mapOfColorImages

Map of color images.

mapOfCameraPoses

Map of poses to set for the cameras.
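
As an illustration, a minimal sketch of re-seeding a monocular tracker with a pose guess obtained elsewhere (for instance from a detection step); the helper function name is hypothetical:

from visp.core import ImageGray, HomogeneousMatrix
from visp.mbt import MbGenericTracker

def reseed_pose(tracker: MbGenericTracker, I: ImageGray, cdMo: HomogeneousMatrix) -> None:
    # The pose guess is only used once, as the starting point of the next track() call
    tracker.setPose(I, cdMo)
    tracker.track(I)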

setPoseSavingFilename(self, filename: str) None

Set the filename used to save the initial pose computed by the initClick() method. It is also used to read a previously saved pose in the same method. If the filename is not set, the initClick() method will create a .0.pos file in the root directory, that is, the directory of the file given to initClick() that contains the coordinates in the object frame.

Parameters:
filename: str

The new filename.
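
A hedged sketch combining this setter with initClick(); the file names are hypothetical and the initClick() argument order is assumed from its own documentation:

from visp.core import ImageGray
from visp.mbt import MbGenericTracker

def init_by_click(tracker: MbGenericTracker, I: ImageGray) -> None:
    # Hypothetical file names, for illustration only
    tracker.setPoseSavingFilename("cube_initial_pose.0.pos")
    # initClick() saves the computed pose to the file set above and can re-read it on a later run
    tracker.initClick(I, "cube.init", True)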

setProjectionErrorComputation(*args, **kwargs)

Overloaded function.

  1. setProjectionErrorComputation(self: visp._visp.mbt.MbGenericTracker, flag: bool) -> None

Set whether the projection error criterion has to be computed. This criterion can be used to assess the quality of the tracking. It computes an angle between 0 and 90 degrees that is available with getProjectionError() . The closer the value is to 0, the better the tracking.

Note

See getProjectionError()

Note

Available only if edge features are used (e.g. edge tracking or edge + KLT tracking). Otherwise, a value of 90 degrees will be returned.

Parameters:
flag

True if the projection error criterion has to be computed, false otherwise.

  2. setProjectionErrorComputation(self: visp._visp.mbt.MbTracker, flag: bool) -> None

Set whether the projection error criterion has to be computed. This criterion can be used to assess the quality of the tracking. It computes an angle between 0 and 90 degrees that is available with getProjectionError() . The closer the value is to 0, the better the tracking.

Note

See getProjectionError()

Parameters:
flag

True if the projection error criterion has to be computed, false otherwise.
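
A minimal sketch of using the projection error as a tracking quality check; the acceptance threshold is an arbitrary example value:

from visp.core import ImageGray
from visp.mbt import MbGenericTracker

def track_with_quality_check(tracker: MbGenericTracker, I: ImageGray) -> bool:
    # Ask the tracker to evaluate the projection error criterion
    tracker.setProjectionErrorComputation(True)
    tracker.track(I)
    # Angle in [0, 90] degrees; the closer to 0, the better the tracking
    error_deg = tracker.getProjectionError()
    return error_deg < 30.0  # hypothetical acceptance threshold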

setProjectionErrorDisplay(*args, **kwargs)

Overloaded function.

  1. setProjectionErrorDisplay(self: visp._visp.mbt.MbGenericTracker, display: bool) -> None

Enable or disable the display of the gradient and model orientation when computing the projection error.

  2. setProjectionErrorDisplay(self: visp._visp.mbt.MbTracker, display: bool) -> None

Enable or disable the display of the gradient and model orientation when computing the projection error.

setProjectionErrorDisplayArrowLength(*args, **kwargs)

Overloaded function.

  1. setProjectionErrorDisplayArrowLength(self: visp._visp.mbt.MbGenericTracker, length: int) -> None

Arrow length used to display gradient and model orientation for projection error computation.

  2. setProjectionErrorDisplayArrowLength(self: visp._visp.mbt.MbTracker, length: int) -> None

Arrow length used to display gradient and model orientation for projection error computation.

setProjectionErrorDisplayArrowThickness(*args, **kwargs)

Overloaded function.

  1. setProjectionErrorDisplayArrowThickness(self: visp._visp.mbt.MbGenericTracker, thickness: int) -> None

Arrow thickness used to display gradient and model orientation for projection error computation.

  2. setProjectionErrorDisplayArrowThickness(self: visp._visp.mbt.MbTracker, thickness: int) -> None

Arrow thickness used to display gradient and model orientation for projection error computation.

setProjectionErrorKernelSize(self, size: int) None

Set kernel size used for projection error computation.

Parameters:
size: int

Kernel size computed as kernel_size = size*2 + 1.

setProjectionErrorMovingEdge(self, me: visp._visp.me.Me) None

Set Moving-Edges parameters for projection error computation.

Parameters:
me: visp._visp.me.Me

Moving-Edges parameters.
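
The projection-error related setters above can be combined as in the following sketch; the Me setter names are assumptions mirroring the C++ vpMe API:

from visp.me import Me
from visp.mbt import MbGenericTracker

tracker = MbGenericTracker()
# Display gradient and model orientation arrows when the projection error is computed
tracker.setProjectionErrorDisplay(True)
tracker.setProjectionErrorDisplayArrowLength(10)
tracker.setProjectionErrorDisplayArrowThickness(2)
# Derivative kernel of size 2*2 + 1 = 5 used for the projection error computation
tracker.setProjectionErrorKernelSize(2)

# Dedicated Moving-Edges settings for the projection error computation
me = Me()
me.setMaskSize(5)       # assumed setter, mirrors vpMe::setMaskSize()
me.setSampleStep(4.0)   # assumed setter, mirrors vpMe::setSampleStep()
tracker.setProjectionErrorMovingEdge(me)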

setReferenceCameraName(self, referenceCameraName: str) None

Set the reference camera name.

Parameters:
referenceCameraName: str

Name of the reference camera.

setScanLineVisibilityTest(*args, **kwargs)

Overloaded function.

  1. setScanLineVisibilityTest(self: visp._visp.mbt.MbGenericTracker, v: bool) -> None

  2. setScanLineVisibilityTest(self: visp._visp.mbt.MbTracker, v: bool) -> None
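
A one-line usage sketch enabling the scanline visibility test:

from visp.mbt import MbGenericTracker

tracker = MbGenericTracker()
# Use the scanline rendering based visibility test
tracker.setScanLineVisibilityTest(True)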

setStopCriteriaEpsilon(self, eps: float) None

Set the minimal error (between the previous and the current estimation) used to determine whether convergence has been reached.

Parameters:
eps: float

Epsilon threshold.

setTrackerType(*args, **kwargs)

Overloaded function.

  1. setTrackerType(self: visp._visp.mbt.MbGenericTracker, type: int) -> None

Set the tracker type.

Note

This function will set the new parameter for all the cameras.

Warning

This function has to be called before the loading of the CAD model.

Parameters:
type

Type of features to use, see vpTrackerType (e.g. vpMbGenericTracker::EDGE_TRACKER or vpMbGenericTracker::EDGE_TRACKER | vpMbGenericTracker::KLT_TRACKER ).

  2. setTrackerType(self: visp._visp.mbt.MbGenericTracker, mapOfTrackerTypes: dict[str, int]) -> None

Set the tracker types.

Warning

This function has to be called before the loading of the CAD model.

Parameters:
mapOfTrackerTypes

Map of feature types to use, see vpTrackerType (e.g. vpMbGenericTracker::EDGE_TRACKER or vpMbGenericTracker::EDGE_TRACKER | vpMbGenericTracker::KLT_TRACKER ).
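
A hedged sketch of selecting the features for a single-camera tracker; the enum member access is an assumption mirroring the C++ vpMbGenericTracker::vpTrackerType values cited above:

from visp.mbt import MbGenericTracker

tracker = MbGenericTracker()
# Combine moving-edge and KLT keypoint features; must be called before loading the CAD model.
# The enum access path is assumed; it may also be exposed as MbGenericTracker.TrackerType.EDGE_TRACKER.
tracker.setTrackerType(int(MbGenericTracker.EDGE_TRACKER) | int(MbGenericTracker.KLT_TRACKER))

For a multi-camera tracker, the map overload takes one entry per camera name, e.g. {"Camera1": ..., "Camera2": ...}.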

setUseDepthDenseTracking(self, name: str, useDepthDenseTracking: bool) None

Set whether the polygon with the given name has to be considered during the tracking phase when using dense depth features.

Note

This function will set the new parameter for all the cameras.

Parameters:
name: str

name of the polygon.

useDepthDenseTracking: bool

True if it has to be considered, False otherwise.

setUseDepthNormalTracking(self, name: str, useDepthNormalTracking: bool) None

Set whether the polygon with the given name has to be considered during the tracking phase when using depth-normal features.

Note

This function will set the new parameter for all the cameras.

Parameters:
name: str

name of the polygon.

useDepthNormalTracking: bool

True if it has to be considered, False otherwise.

setUseEdgeTracking(self, name: str, useEdgeTracking: bool) None

Set whether the polygon with the given name has to be considered during the tracking phase when using edge features.

Note

This function will set the new parameter for all the cameras.

Parameters:
name: str

name of the polygon.

useEdgeTracking: bool

True if it has to be considered, False otherwise.

setUseKltTracking(self, name: str, useKltTracking: bool) None

Set whether the polygon with the given name has to be considered during the tracking phase when using KLT keypoint features.

Note

This function will set the new parameter for all the cameras.

Parameters:
name: str

name of the polygon.

useKltTracking: bool

True if it has to be considered, False otherwise.
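
A short sketch combining the four setUse*Tracking setters above to exclude one model polygon from every feature type; the polygon name is hypothetical:

from visp.mbt import MbGenericTracker

def ignore_polygon(tracker: MbGenericTracker, name: str = "occluded_face") -> None:
    # Exclude the named polygon from every feature type, for all the cameras
    tracker.setUseEdgeTracking(name, False)
    tracker.setUseKltTracking(name, False)
    tracker.setUseDepthNormalTracking(name, False)
    tracker.setUseDepthDenseTracking(name, False)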

testTracking(self) None

Test the quality of the tracking.

track(*args, **kwargs)

Overloaded function.

  1. track(self: visp._visp.mbt.MbGenericTracker, I: visp._visp.core.ImageGray) -> None

Perform the tracking of the object in the image.

Note

This function will track only for the reference camera.

Parameters:
I

The current grayscale image.

  2. track(self: visp._visp.mbt.MbGenericTracker, I_color: visp._visp.core.ImageRGBa) -> None

Perform the tracking of the object in the image.

Note

This function will track only for the reference camera.

Parameters:
I_color

The current color image.

  3. track(self: visp._visp.mbt.MbGenericTracker, I1: visp._visp.core.ImageGray, I2: visp._visp.core.ImageGray) -> None

Perform the tracking of the object in the image.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
I1

The first grayscale image.

I2

The second grayscale image.

  4. track(self: visp._visp.mbt.MbGenericTracker, I_color1: visp._visp.core.ImageRGBa, I_color2: visp._visp.core.ImageRGBa) -> None

Perform the tracking of the object in the image.

Note

This function assumes a stereo configuration of the generic tracker.

Parameters:
I_color1

The first color image.

I_color2

The second color image.

  5. track(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray]) -> None

Perform the tracking of the object in the image.

Parameters:
mapOfImages

Map of images.

  6. track(self: visp._visp.mbt.MbGenericTracker, mapOfColorImages: dict[str, visp._visp.core.ImageRGBa]) -> None

Perform the tracking of the object in the image.

Parameters:
mapOfColorImages

Map of color images.

  7. track(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray], mapOfPointClouds: dict[str, pcl::PointCloud<pcl::PointXYZ>]) -> None

  8. track(self: visp._visp.mbt.MbGenericTracker, mapOfColorImages: dict[str, visp._visp.core.ImageRGBa], mapOfPointClouds: dict[str, pcl::PointCloud<pcl::PointXYZ>]) -> None

  9. track(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray], mapOfPointClouds: dict[str, list[visp._visp.core.ColVector]], mapOfPointCloudWidths: dict[str, int], mapOfPointCloudHeights: dict[str, int]) -> None

Perform the tracking of the object in the image.

Parameters:
mapOfImages

Map of images.

mapOfPointClouds

Map of pointclouds.

mapOfPointCloudWidths

Map of pointcloud widths.

mapOfPointCloudHeights

Map of pointcloud heights.

  10. track(self: visp._visp.mbt.MbGenericTracker, mapOfColorImages: dict[str, visp._visp.core.ImageRGBa], mapOfPointClouds: dict[str, list[visp._visp.core.ColVector]], mapOfPointCloudWidths: dict[str, int], mapOfPointCloudHeights: dict[str, int]) -> None

Perform the tracking of the object in the image.

Parameters:
mapOfColorImages

Map of color images.

mapOfPointClouds

Map of pointclouds.

mapOfPointCloudWidths

Map of pointcloud widths.

mapOfPointCloudHeights

Map of pointcloud heights.

  11. track(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray], mapOfPointClouds: dict[str, visp._visp.core.Matrix], mapOfPointCloudWidths: dict[str, int], mapOfPointCloudHeights: dict[str, int]) -> None

  12. track(self: visp._visp.mbt.MbGenericTracker, mapOfColorImages: dict[str, visp._visp.core.ImageRGBa], mapOfPointClouds: dict[str, visp._visp.core.Matrix], mapOfPointCloudWidths: dict[str, int], mapOfPointCloudHeights: dict[str, int]) -> None

Perform the tracking of the object in the image.

Parameters:
mapOfColorImages

Map of grayscale or color images, depending on the overload.

mapOfPointClouds

Map of pointclouds.

mapOfPointCloudWidths

Map of pointcloud widths.

mapOfPointCloudHeights

Map of pointcloud heights.

  13. track(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray], mapOfPointclouds: dict[str, numpy.ndarray[numpy.float64]]) -> None

Perform tracking, with point clouds represented as NumPy arrays.

Parameters:
mapOfImages

Dictionary mapping from a camera name to a grayscale image.

mapOfPointclouds

Dictionary mapping from a camera name to a point cloud. A point cloud is represented as an H x W x 3 double NumPy array.
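
A minimal monocular tracking loop built on the first overload; getPose() is assumed to return the current pose as in the C++ API, and frame acquisition is left out:

from visp.core import ImageGray, HomogeneousMatrix
from visp.mbt import MbGenericTracker

def tracking_loop(tracker: MbGenericTracker, frames: list) -> list:
    # Track the object over a sequence of grayscale frames and collect the estimated poses
    poses = []
    for I in frames:          # each frame is expected to be an ImageGray
        tracker.track(I)      # monocular overload: tracks for the reference camera only
        cMo = tracker.getPose()
        poses.append(cMo)
    return poses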