MbDepthNormalTracker¶
- class MbDepthNormalTracker(self)¶
Bases:
MbTracker
Methods
Overloaded function.
Return the error vector \((s-s^*)\) reached after the virtual visual servoing process used to estimate the pose.
Return a list of primitives parameters to display the model at a given pose and camera parameters.
Return the weights vector \(w_i\) computed by the robust scheme.
Initialise the tracking.
Overloaded function.
Overloaded function.
Reset the tracker.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Test the quality of the tracking.
Overloaded function.
Inherited Methods
Set the arrow length used to display the gradient and model orientation for projection error computation.
Set the near distance for clipping.
Get the near distance for clipping.
Enable the display of the features.
Set if the covariance matrix has to be computed.
Save the pose in the given filename.
Set the value of the gain used to compute the control law.
Get the value of the gain used to compute the control law.
Get the far distance for clipping.
Enable/Disable the appearance of Ogre config dialog on startup.
Set the filename used to save the initial pose computed using the initClick() method.
Return the angle used to test polygons disappearance.
Overloaded function.
Set the minimal error (previous / current estimation) to determine if there is convergence or not.
Overloaded function.
Overloaded function.
Set the far distance for clipping.
Set the angle used to test polygons appearance.
Set Moving-Edges parameters for projection error computation.
Get the covariance matrix.
Set the initial value of mu for the Levenberg Marquardt optimization loop.
Set the angle used to test polygons disappearance.
Set a 6-dim column vector representing the degrees of freedom in the object frame that are estimated by the tracker.
Load a 3D model from the file in parameter.
Get the camera parameters.
Set the arrow thickness used to display the gradient and model orientation for projection error computation.
Get the number of polygons (faces) representing the object to track.
Get the clipping used and defined in vpPolygon3D::vpMbtPolygonClippingType.
Compute the projection error given an input image, a camera pose and camera parameters.
Get the optimization method used during the tracking.
Get the initial value of mu used in the Levenberg Marquardt optimization loop.
Return the angle used to test polygons appearance.
Get the error angle between the gradient direction of the model features projected at the resulting pose and their normal.
Set the flag to consider if the level of detail (LOD) is used.
Specify which clipping to use.
Set the minimum polygon area to be considered as visible in the LOD case.
Overloaded function.
Return a reference to the faces structure.
Set kernel size used for projection error computation.
Display or not gradient and model orientation when computing the projection error.
Get the maximum number of iterations of the virtual visual servoing stage.
Get the list of polygons faces (a vpPolygon representing the projection of the face in the image and a list of face corners in 3D), with the possibility to order by distance to the camera or to use the visibility check to consider if the polygon face must be retrieved or not.
Get a 1x6 vpColVector representing the estimated degrees of freedom.
Set the threshold for the minimum line length to be considered as visible in the LOD case.
Set the maximum iteration of the virtual visual servoing stage.
Set if the projection error criteria has to be computed.
Operators
__doc__
__module__
Attributes
GAUSS_NEWTON_OPT
LEVENBERG_MARQUARDT_OPT
__annotations__
- class MbtOptimizationMethod(self, value: int)¶
Bases:
pybind11_object
Values:
GAUSS_NEWTON_OPT
LEVENBERG_MARQUARDT_OPT
- __init__(self)¶
- computeCurrentProjectionError(self, I: visp._visp.core.ImageGray, _cMo: visp._visp.core.HomogeneousMatrix, _cam: visp._visp.core.CameraParameters) float ¶
Compute the projection error given an input image, a camera pose and camera parameters. This projection error uses locations sampled exactly where the model is projected using the camera pose and intrinsic parameters. You may want to use getProjectionError() instead, to get a projection error computed at the ME locations after a call to track() . It works similarly to the vpMbTracker::getProjectionError function: it returns the error angle between the gradient direction of the model features projected at the resulting pose and their normal. The error is expressed in degrees, between 0 and 90.
Note
See setProjectionErrorComputation
Note
See getProjectionError
- Parameters:
- I: visp._visp.core.ImageGray¶
Input grayscale image.
- _cMo: visp._visp.core.HomogeneousMatrix¶
Camera pose.
- _cam: visp._visp.core.CameraParameters¶
Camera parameters.
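A minimal usage sketch in Python (assuming the public visp.core / visp.mbt module layout; the tracker is assumed to be already configured with a model, and the image, pose and intrinsics below are placeholders to be filled from your application):
from visp.core import CameraParameters, HomogeneousMatrix, ImageGray
from visp.mbt import MbDepthNormalTracker

tracker = MbDepthNormalTracker()   # assumed configured with a loaded model
I = ImageGray()                    # placeholder: current grayscale frame
cMo = HomogeneousMatrix()          # placeholder: candidate camera pose
cam = CameraParameters(600.0, 600.0, 320.0, 240.0)  # placeholder intrinsics
error_deg = tracker.computeCurrentProjectionError(I, cMo, cam)
print(f"Projection error: {error_deg:.1f} deg (0 is best, 90 is worst)")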
- display(*args, **kwargs)¶
Overloaded function.
display(self: visp._visp.mbt.MbDepthNormalTracker, I: visp._visp.core.ImageGray, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, col: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = False) -> None
Display the 3D model at a given position using the given camera parameters on a grey level image.
- Parameters:
- I
The image.
- cMo
Pose used to project the 3D model into the image.
- cam
The camera parameters.
- col
The desired color.
- thickness
The thickness of the lines.
- displayFullModel
If true, the full model is displayed (even the non visible surfaces).
display(self: visp._visp.mbt.MbDepthNormalTracker, I: visp._visp.core.ImageRGBa, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, col: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = False) -> None
Display the 3D model at a given position using the given camera parameters on a color (RGBa) image.
- Parameters:
- I
The image.
- cMo
Pose used to project the 3D model into the image.
- cam
The camera parameters.
- col
The desired color.
- thickness
The thickness of the lines.
- displayFullModel
If true, the full model is displayed (even the non visible surfaces).
- getCameraParameters(self, cam: visp._visp.core.CameraParameters) None ¶
Get the camera parameters.
- Parameters:
- cam: visp._visp.core.CameraParameters¶
Copy of the camera parameters used by the tracker.
- getClipping(self) int ¶
Get the clipping used and defined in vpPolygon3D::vpMbtPolygonClippingType.
- Returns:
Clipping flags.
- getCovarianceMatrix(self) visp._visp.core.Matrix ¶
Get the covariance matrix. This matrix is only computed if setCovarianceComputation() is turned on.
Note
See setCovarianceComputation()
- getDepthFeatureEstimationMethod(self) visp._visp.mbt.MbtFaceDepthNormal.FeatureEstimationType ¶
- getError(self) visp._visp.core.ColVector ¶
Return the error vector \((s-s^*)\) reached after the virtual visual servoing process used to estimate the pose.
The following example shows how to use this function to compute the norm of the residual and the norm of the residual normalized by the number of features that are tracked:
tracker.track(I);
std::cout << "Residual: " << sqrt((tracker.getError()).sumSquare()) << std::endl;
std::cout << "Residual normalized: " << sqrt((tracker.getError()).sumSquare()) / tracker.getError().size() << std::endl;
Note
See getRobustWeights()
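The equivalent computation in Python might look like the following sketch (assuming the public visp.core / visp.mbt module layout and a tracker that has already been configured and initialised; the image is a placeholder):
import math
from visp.core import ImageGray
from visp.mbt import MbDepthNormalTracker

tracker = MbDepthNormalTracker()   # assumed configured and initialised
I = ImageGray()                    # placeholder: current grayscale frame
tracker.track(I)
e = tracker.getError()             # error vector (s - s*)
print("Residual:", math.sqrt(e.sumSquare()))
print("Residual normalized:", math.sqrt(e.sumSquare()) / e.size())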
- getEstimatedDoF(self) visp._visp.core.ColVector ¶
Get a 1x6 vpColVector representing the estimated degrees of freedom.
vpColVector[0] = 1 if translation on X is estimated, 0 otherwise;
vpColVector[1] = 1 if translation on Y is estimated, 0 otherwise;
vpColVector[2] = 1 if translation on Z is estimated, 0 otherwise;
vpColVector[3] = 1 if rotation on X is estimated, 0 otherwise;
vpColVector[4] = 1 if rotation on Y is estimated, 0 otherwise;
vpColVector[5] = 1 if rotation on Z is estimated, 0 otherwise;
- Returns:
1x6 vpColVector representing the estimated degrees of freedom.
- getFaces(self) vpMbHiddenFaces<vpMbtPolygon> ¶
Return a reference to the faces structure.
- getFarClippingDistance(self) float ¶
Get the far distance for clipping.
- Returns:
Far clipping value.
- getInitialMu(self) float ¶
Get the initial value of mu used in the Levenberg Marquardt optimization loop.
- Returns:
the initial mu value.
- getLambda(self) float ¶
Get the value of the gain used to compute the control law.
- Returns:
the value for the gain.
- getMaxIter(self) int ¶
Get the maximum number of iterations of the virtual visual servoing stage.
- Returns:
the number of iteration
- getModelForDisplay(self, width: int, height: int, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, displayFullModel: bool = False) list[list[float]] ¶
Return a list of primitives parameters to display the model at a given pose and camera parameters.
Line parameters are: <primitive id (here 0 for line)> , <pt_start.i()> , <pt_start.j()> , <pt_end.i()> , <pt_end.j()> .
Ellipse parameters are: <primitive id (here 1 for ellipse)> , <pt_center.i()> , <pt_center.j()> , <n_20> , <n_11> , <n_02> where <n_ij> are the second order centered moments of the ellipse normalized by its area (i.e., such that \(n_{ij} = \mu_{ij}/a\) where \(\mu_{ij}\) are the centered moments and a the area).
- Parameters:
- width: int¶
Image width.
- height: int¶
Image height.
- cMo: visp._visp.core.HomogeneousMatrix¶
Pose used to project the 3D model into the image.
- cam: visp._visp.core.CameraParameters¶
The camera parameters.
- displayFullModel: bool = False¶
If true, the line is displayed even if it is not visible.
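A sketch of how the returned list can be parsed in Python (assuming the public visp.core / visp.mbt module layout; the pose, intrinsics and image size are placeholders):
from visp.core import CameraParameters, HomogeneousMatrix
from visp.mbt import MbDepthNormalTracker

tracker = MbDepthNormalTracker()   # assumed configured with a loaded model
cMo = HomogeneousMatrix()          # placeholder: pose used to project the model
cam = CameraParameters(600.0, 600.0, 320.0, 240.0)  # placeholder intrinsics
for params in tracker.getModelForDisplay(640, 480, cMo, cam):
    if params[0] == 0:    # line: <0, i1, j1, i2, j2>
        _, i1, j1, i2, j2 = params
    elif params[0] == 1:  # ellipse: <1, i_center, j_center, n20, n11, n02>
        _, ic, jc, n20, n11, n02 = params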
- getNbPolygon(self) int ¶
Get the number of polygons (faces) representing the object to track.
- Returns:
Number of polygons.
- getNearClippingDistance(self) float ¶
Get the near distance for clipping.
- Returns:
Near clipping value.
- getOptimizationMethod(self) visp._visp.mbt.MbTracker.MbtOptimizationMethod ¶
Get the optimization method used during the tracking. 0 = Gauss-Newton approach. 1 = Levenberg-Marquardt approach.
- Returns:
Optimization method.
- getPolygonFaces(self, orderPolygons: bool = True, useVisibility: bool = True, clipPolygon: bool = False) tuple[list[visp._visp.core.Polygon], list[list[visp._visp.core.Point]]] ¶
Get the list of polygons faces (a vpPolygon representing the projection of the face in the image and a list of face corners in 3D), with the possibility to order by distance to the camera or to use the visibility check to consider if the polygon face must be retrieved or not.
- Parameters:
- orderPolygons: bool = true¶
If true, the resulting list is ordered from the nearest polygon faces to the farthest.
- useVisibility: bool = true¶
If true, only visible faces will be retrieved.
- clipPolygon: bool = false¶
If true, the polygons will be clipped according to the clipping flags set in vpMbTracker .
- Returns:
A pair object containing the list of vpPolygon and the list of face corners.
- getPose(*args, **kwargs)¶
Overloaded function.
getPose(self: visp._visp.mbt.MbTracker, cMo: visp._visp.core.HomogeneousMatrix) -> None
Get the current pose between the object and the camera. cMo is the matrix which can be used to express coordinates from the object frame to camera frame.
- Parameters:
- cMo
the pose
getPose(self: visp._visp.mbt.MbTracker) -> visp._visp.core.HomogeneousMatrix
Get the current pose between the object and the camera. cMo is the matrix which can be used to express coordinates from the object frame to camera frame.
- Returns:
the current pose
- getProjectionError(self) float ¶
Get the error angle between the gradient direction of the model features projected at the resulting pose and their normal. The error is expressed in degrees, between 0 and 90. This value is computed if setProjectionErrorComputation() is turned on.
Note
See setProjectionErrorComputation()
- Returns:
the value for the error.
- getRobustWeights(self) visp._visp.core.ColVector ¶
Return the weights vector \(w_i\) computed by the robust scheme.
The following example shows how to use this function to compute the norm of the weighted residual and the norm of the weighted residual normalized by the sum of the weights associated to the features that are tracked:
tracker.track(I);
vpColVector w = tracker.getRobustWeights();
vpColVector e = tracker.getError();
vpColVector we(w.size());
for (unsigned int i = 0; i < w.size(); i++)
  we[i] = w[i] * e[i];
std::cout << "Weighted residual: " << sqrt(we.sumSquare()) << std::endl;
std::cout << "Weighted residual normalized: " << sqrt(we.sumSquare()) / w.sum() << std::endl;
Note
See getError()
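In Python, the same weighted residual can be computed with NumPy (a sketch; the .numpy() conversion of vpColVector is assumed to be available in the bindings):
import math
import numpy as np
from visp.core import ImageGray
from visp.mbt import MbDepthNormalTracker

tracker = MbDepthNormalTracker()   # assumed configured and initialised
I = ImageGray()                    # placeholder: current grayscale frame
tracker.track(I)
w = np.array(tracker.getRobustWeights().numpy())  # robust weights w_i
e = np.array(tracker.getError().numpy())          # error vector (s - s*)
we = w * e                                        # weighted residual terms
print("Weighted residual:", math.sqrt((we ** 2).sum()))
print("Weighted residual normalized:", math.sqrt((we ** 2).sum()) / w.sum())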
- init(self, I: visp._visp.core.ImageGray) None ¶
Initialise the tracking.
- Parameters:
- I: visp._visp.core.ImageGray¶
Input image.
- initClick(*args, **kwargs)¶
Overloaded function.
initClick(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, initFile: str, displayHelp: bool = False, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None
initClick(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, initFile: str, displayHelp: bool = False, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None
initClick(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, points3D_list: list[visp._visp.core.Point], displayFile: str = '') -> None
initClick(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, points3D_list: list[visp._visp.core.Point], displayFile: str = '') -> None
- initFromPoints(*args, **kwargs)¶
Overloaded function.
initFromPoints(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, initFile: str) -> None
Initialise the tracker by reading 3D point coordinates and the corresponding 2D image point coordinates from a file. Comments starting with the # character are allowed. 3D point coordinates are expressed in meters in the object frame with X, Y and Z values. 2D point coordinates are expressed in pixel coordinates, with first the line and then the column of the pixel in the image. The structure of this file is the following:
# 3D point coordinates
4                 # Number of 3D points in the file (minimum is four)
0.01 0.01 0.01    # \
...               # | 3D coordinates in meters in the object frame
0.01 -0.01 -0.01  # /
# corresponding 2D point coordinates
4                 # Number of image points in the file (has to be the same as the number of 3D points)
100 200           # \
...               # | 2D coordinates in pixel in the image
50 10             # /
- Parameters:
- I
Input grayscale image
- initFile
Path to the file containing all the points.
initFromPoints(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, initFile: str) -> None
Initialise the tracker by reading 3D point coordinates and the corresponding 2D image point coordinates from a file. Comments starting with the # character are allowed. 3D point coordinates are expressed in meters in the object frame with X, Y and Z values. 2D point coordinates are expressed in pixel coordinates, with first the line and then the column of the pixel in the image. The structure of this file is the following:
# 3D point coordinates
4                 # Number of 3D points in the file (minimum is four)
0.01 0.01 0.01    # \
...               # | 3D coordinates in meters in the object frame
0.01 -0.01 -0.01  # /
# corresponding 2D point coordinates
4                 # Number of image points in the file (has to be the same as the number of 3D points)
100 200           # \
...               # | 2D coordinates in pixel in the image
50 10             # /
- Parameters:
- I_color
Input color image
- initFile
Path to the file containing all the points.
initFromPoints(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, points2D_list: list[visp._visp.core.ImagePoint], points3D_list: list[visp._visp.core.Point]) -> None
Initialise the tracking with the list of image points (points2D_list) and the list of corresponding 3D points (object frame) (points3D_list).
- Parameters:
- I
Input grayscale image
- points2D_list
List of image points.
- points3D_list
List of 3D points (object frame).
initFromPoints(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, points2D_list: list[visp._visp.core.ImagePoint], points3D_list: list[visp._visp.core.Point]) -> None
Initialise the tracking with the list of image points (points2D_list) and the list of corresponding 3D points (object frame) (points3D_list).
- Parameters:
- I_color
Input color image
- points2D_list
List of image points.
- points3D_list
List of 3D points (object frame).
- initFromPose(*args, **kwargs)¶
Overloaded function.
initFromPose(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, initFile: str) -> None
Initialise the tracking from a pose given in vpPoseVector format and read from the file initFile. The structure of this file is (without the comments):
// The six values of the pose vector
0.0000    // \
0.0000    // |
1.0000    // | Example of values for a pose vector where Z = 1 meter
0.0000    // |
0.0000    // |
0.0000    // /
The first three values give the translation and the last three the rotation in thetaU parametrisation (see vpThetaUVector ).
- Parameters:
- I
Input grayscale image
- initFile
Path to the file containing the pose.
initFromPose(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, initFile: str) -> None
Initialise the tracking from a pose given in vpPoseVector format and read from the file initFile. The structure of this file is (without the comments):
// The six values of the pose vector
0.0000    // \
0.0000    // |
1.0000    // | Example of values for a pose vector where Z = 1 meter
0.0000    // |
0.0000    // |
0.0000    // /
The first three values give the translation and the last three the rotation in thetaU parametrisation (see vpThetaUVector ).
- Parameters:
- I_color
Input color image
- initFile
Path to the file containing the pose.
initFromPose(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, cMo: visp._visp.core.HomogeneousMatrix) -> None
Initialise the tracking from the given pose.
- Parameters:
- I
Input grayscale image
- cMo
Pose matrix.
initFromPose(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, cMo: visp._visp.core.HomogeneousMatrix) -> None
Initialise the tracking from the given pose.
- Parameters:
- I_color
Input color image
- cMo
Pose matrix.
initFromPose(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, cPo: visp._visp.core.PoseVector) -> None
Initialise the tracking from the given pose vector.
- Parameters:
- I
Input grayscale image
- cPo
Pose vector.
initFromPose(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, cPo: visp._visp.core.PoseVector) -> None
Initialise the tracking from the given pose vector.
- Parameters:
- I_color
Input color image
- cPo
Pose vector.
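For example, initialising from a pose vector in Python (a sketch assuming the public visp.core / visp.mbt module layout and a tracker already configured with a model and camera parameters):
from visp.core import ImageGray, PoseVector
from visp.mbt import MbDepthNormalTracker

tracker = MbDepthNormalTracker()   # assumed configured with model and camera parameters
I = ImageGray()                    # placeholder: current grayscale frame
# tx, ty, tz in meters followed by a thetaU rotation in radians:
cPo = PoseVector(0.0, 0.0, 1.0, 0.0, 0.0, 0.0)  # object 1 m in front of the camera
tracker.initFromPose(I, cPo)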
- loadConfigFile(*args, **kwargs)¶
Overloaded function.
loadConfigFile(self: visp._visp.mbt.MbDepthNormalTracker, configFile: str, verbose: bool = True) -> None
Load a config file to parameterise the behavior of the tracker.
Virtual method to adapt to each tracker.
- Parameters:
- configFile
An xml config file to parse.
- verbose
verbose flag.
loadConfigFile(self: visp._visp.mbt.MbTracker, configFile: str, verbose: bool = True) -> None
Load a config file to parameterise the behavior of the tracker.
Virtual method to adapt to each tracker.
- Parameters:
- configFile
An xml config file to parse.
- verbose
verbose flag.
- loadModel(self, modelFile: str, verbose: bool = False, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) None ¶
Load a 3D model from the file given in parameter. This file must either be a VRML file (.wrl) or a CAO file (.cao). The CAO format is described in the loadCAOModel() method.
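Typical usage in Python (a sketch; "model.cao" is a placeholder path):
from visp.mbt import MbDepthNormalTracker

tracker = MbDepthNormalTracker()
# "model.cao" is a placeholder path to a CAO model file; a VRML .wrl file
# is also accepted when ViSP is built with Coin3D support.
tracker.loadModel("model.cao", verbose=True)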
- reInitModel(*args, **kwargs)¶
Overloaded function.
reInitModel(self: visp._visp.mbt.MbDepthNormalTracker, I: visp._visp.core.ImageGray, cad_name: str, cMo: visp._visp.core.HomogeneousMatrix, verbose: bool = False) -> None
reInitModel(self: visp._visp.mbt.MbDepthNormalTracker, point_cloud: pcl::PointCloud<pcl::PointXYZ>, cad_name: str, cMo: visp._visp.core.HomogeneousMatrix, verbose: bool = False) -> None
- setAngleAppear(self, a: float) None ¶
Set the angle used to test polygons appearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value lower than this parameter, the polygon is considered as appearing. The polygon will then be tracked.
- setAngleDisappear(self, a: float) None ¶
Set the angle used to test polygons disappearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value greater than this parameter, the polygon is considered as disappearing. The tracking of the polygon will then be stopped.
- setCameraParameters(*args, **kwargs)¶
Overloaded function.
setCameraParameters(self: visp._visp.mbt.MbDepthNormalTracker, camera: visp._visp.core.CameraParameters) -> None
Set the camera parameters.
setCameraParameters(self: visp._visp.mbt.MbTracker, cam: visp._visp.core.CameraParameters) -> None
Set the camera parameters.
- Parameters:
- cam
The new camera parameters.
- setCovarianceComputation(self, flag: bool) None ¶
Set if the covariance matrix has to be computed.
Note
See getCovarianceMatrix()
- setDepthNormalFaceCentroidMethod(self, method: visp._visp.mbt.MbtFaceDepthNormal.FaceCentroidType) None ¶
- setDepthNormalFeatureEstimationMethod(self, method: visp._visp.mbt.MbtFaceDepthNormal.FeatureEstimationType) None ¶
- setDisplayFeatures(self, displayF: bool) None ¶
Enable the display of the features. By features, we mean the moving edges (ME) and the KLT points, if used.
Note that if present, the moving edges can be displayed with different colors:
If green: the ME is a good point.
If blue: the ME is removed because of a contrast problem during the tracking phase.
If purple: the ME is removed because of a threshold problem during the tracking phase.
If red: the ME is removed because it is rejected by the robust approach in the virtual visual servoing scheme.
- setEstimatedDoF(self, v: visp._visp.core.ColVector) None ¶
Set a 6-dim column vector representing the degrees of freedom in the object frame that are estimated by the tracker. Set an element to 1 to estimate the corresponding dof; when all elements are set to 1, all 6 dof are estimated (see the sketch after the list below).
Below we give the correspondence between the index of the vector and the considered dof:
v[0] = 1 if translation along X is estimated, 0 otherwise;
v[1] = 1 if translation along Y is estimated, 0 otherwise;
v[2] = 1 if translation along Z is estimated, 0 otherwise;
v[3] = 1 if rotation along X is estimated, 0 otherwise;
v[4] = 1 if rotation along Y is estimated, 0 otherwise;
v[5] = 1 if rotation along Z is estimated, 0 otherwise;
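For instance, to estimate only the three translations and the rotation around Z (a sketch; it assumes ColVector can be built from a Python list of 6 values):
from visp.core import ColVector
from visp.mbt import MbDepthNormalTracker

tracker = MbDepthNormalTracker()
# 1 = estimate the dof, 0 = keep it fixed: [tX, tY, tZ, rX, rY, rZ]
v = ColVector([1.0, 1.0, 1.0, 0.0, 0.0, 1.0])
tracker.setEstimatedDoF(v)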
- setInitialMu(self, mu: float) None ¶
Set the initial value of mu for the Levenberg Marquardt optimization loop.
- setLod(self: visp._visp.mbt.MbTracker, useLod: bool, name: str = '') None ¶
Set the flag to consider if the level of detail (LOD) is used.
Note
See setMinLineLengthThresh() , setMinPolygonAreaThresh()
- Parameters:
- useLod
true if the level of detail must be used, false otherwise. When true, two parameters can be set, see setMinLineLengthThresh() and setMinPolygonAreaThresh() .
- name
name of the face we want to modify the LOD parameter.
- setMinLineLengthThresh(self: visp._visp.mbt.MbTracker, minLineLengthThresh: float, name: str = '') None ¶
Set the threshold for the minimum line length to be considered as visible in the LOD case.
Note
See setLod() , setMinPolygonAreaThresh()
- Parameters:
- minLineLengthThresh
threshold for the minimum line length in pixel.
- name
name of the face we want to modify the LOD threshold.
- setMinPolygonAreaThresh(self: visp._visp.mbt.MbTracker, minPolygonAreaThresh: float, name: str = '') None ¶
Set the minimum polygon area to be considered as visible in the LOD case.
Note
See setLod() , setMinLineLengthThresh()
- Parameters:
- minPolygonAreaThresh
threshold for the minimum polygon area in pixel.
- name
name of the face we want to modify the LOD threshold.
- setOgreShowConfigDialog(self, showConfigDialog: bool) None ¶
Enable/Disable the appearance of Ogre config dialog on startup.
Warning
This method only has an effect when Ogre is used and the Ogre visibility test is enabled using setOgreVisibilityTest() with a true parameter.
- setOgreVisibilityTest(*args, **kwargs)¶
Overloaded function.
setOgreVisibilityTest(self: visp._visp.mbt.MbDepthNormalTracker, v: bool) -> None
Use Ogre3D for visibility tests
Warning
This function has to be called before the initialization of the tracker.
- Parameters:
- v
True to use it, False otherwise
setOgreVisibilityTest(self: visp._visp.mbt.MbTracker, v: bool) -> None
Use Ogre3D for visibility tests
Warning
This function has to be called before the initialization of the tracker.
- Parameters:
- v
True to use it, False otherwise
- setOptimizationMethod(self, opt: visp._visp.mbt.MbTracker.MbtOptimizationMethod) None ¶
- setPose(*args, **kwargs)¶
Overloaded function.
setPose(self: visp._visp.mbt.MbDepthNormalTracker, I: visp._visp.core.ImageGray, cdMo: visp._visp.core.HomogeneousMatrix) -> None
Set the pose to be used as input to the next call to the track() function. This pose will be used just once.
Warning
This function has to be called after the initialisation of the tracker.
- Parameters:
- I
grayscale image corresponding to the desired pose.
- cdMo
Pose to set.
setPose(self: visp._visp.mbt.MbDepthNormalTracker, I_color: visp._visp.core.ImageRGBa, cdMo: visp._visp.core.HomogeneousMatrix) -> None
Set the pose to be used as input to the next call to the track() function. This pose will be used just once.
Warning
This function has to be called after the initialisation of the tracker.
- Parameters:
- I_color
color image corresponding to the desired pose.
- cdMo
Pose to set.
setPose(self: visp._visp.mbt.MbDepthNormalTracker, point_cloud: pcl::PointCloud<pcl::PointXYZ>, cdMo: visp._visp.core.HomogeneousMatrix) -> None
- setPoseSavingFilename(self, filename: str) None ¶
Set the filename used to save the initial pose computed using the initClick() method. It is also used to read a previous pose in the same method. If the file is not set, the initClick() method will create a .0.pos file in the root directory, that is, the directory of the file given to initClick() that defines the coordinates in the object frame.
- setProjectionErrorComputation(self, flag: bool) None ¶
Set if the projection error criterion has to be computed. This criterion can be used to assess the quality of the tracking. It computes an angle between 0 and 90 degrees that is available with getProjectionError() . The closer the value is to 0, the better the tracking.
Note
See getProjectionError()
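A sketch of a quality check built on this criterion in Python (assuming the public visp.core / visp.mbt module layout; the 30 degree threshold is an arbitrary, application-specific choice):
from visp.core import ImageGray
from visp.mbt import MbDepthNormalTracker

tracker = MbDepthNormalTracker()             # assumed configured and initialised
tracker.setProjectionErrorComputation(True)  # enable before calling track()
I = ImageGray()                              # placeholder: current grayscale frame
tracker.track(I)
if tracker.getProjectionError() > 30.0:      # degrees; 0 means perfect alignment
    print("Tracking quality is poor; consider re-initialising the pose")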
- setProjectionErrorDisplay(self, display: bool) None ¶
Display or not the gradient and model orientation when computing the projection error.
- setProjectionErrorDisplayArrowLength(self, length: int) None ¶
Set the arrow length used to display the gradient and model orientation for projection error computation.
- setProjectionErrorDisplayArrowThickness(self, thickness: int) None ¶
Set the arrow thickness used to display the gradient and model orientation for projection error computation.
- setProjectionErrorKernelSize(self, size: int) None ¶
Set kernel size used for projection error computation.
- setProjectionErrorMovingEdge(self, me: visp._visp.me.Me) None ¶
Set Moving-Edges parameters for projection error computation.
- Parameters:
- me: visp._visp.me.Me¶
Moving-Edges parameters.
- setScanLineVisibilityTest(*args, **kwargs)¶
Overloaded function.
setScanLineVisibilityTest(self: visp._visp.mbt.MbDepthNormalTracker, v: bool) -> None
setScanLineVisibilityTest(self: visp._visp.mbt.MbTracker, v: bool) -> None
- setStopCriteriaEpsilon(self, eps: float) None ¶
Set the minimal error (previous / current estimation) to determine if there is convergence or not.
- track(*args, **kwargs)¶
Overloaded function.
track(self: visp._visp.mbt.MbDepthNormalTracker, arg0: visp._visp.core.ImageGray) -> None
Track the object in the given grayscale image.
track(self: visp._visp.mbt.MbDepthNormalTracker, I_color: visp._visp.core.ImageRGBa) -> None
Track the object in the given color image.
track(self: visp._visp.mbt.MbDepthNormalTracker, point_cloud: pcl::PointCloud<pcl::PointXYZ>) -> None
track(self: visp._visp.mbt.MbDepthNormalTracker, point_cloud: list[visp._visp.core.ColVector], width: int, height: int) -> None
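Putting it together, a depth-normal tracking loop in Python might look like the following sketch (the module layout, model file and frame source are assumptions; the list-of-ColVector overload is used so that PCL is not required):
from visp.core import CameraParameters, HomogeneousMatrix, ImageGray
from visp.mbt import MbDepthNormalTracker

tracker = MbDepthNormalTracker()
tracker.loadModel("model.cao")                      # placeholder model file
cam = CameraParameters(600.0, 600.0, 320.0, 240.0)  # placeholder intrinsics
tracker.setCameraParameters(cam)

I = ImageGray()            # placeholder: grayscale view of the first frame
cMo = HomogeneousMatrix()  # placeholder: initial pose guess
tracker.initFromPose(I, cMo)

# Placeholder: fill with (point_cloud, width, height) tuples from your sensor,
# where point_cloud is a list of 3D points given as visp.core.ColVector.
frames = []
for point_cloud, width, height in frames:
    tracker.track(point_cloud, width, height)
    cMo = tracker.getPose()   # updated object-to-camera pose
    print(cMo)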