MbEdgeKltTracker¶
- class MbEdgeKltTracker(self)¶
Bases:
MbKltTracker, MbEdgeTracker
Hybrid tracker based on moving edges and on keypoints tracked using a KLT tracker.
Warning
This class is deprecated for user usage. You should rather use the high level vpMbGenericTracker class.
Warning
This class is only available if OpenCV is installed and used.
The tutorial-tracking-mb-deprecated is a good starting point to use this class.
The tracker requires knowledge of the 3D model, which can be provided in a vrml or a cao file. The cao format is described in loadCAOModel() . The tracker may also use an xml file to tune its behavior and an init file to compute the pose at the very first image.
The following code shows the simplest way to use the tracker.
#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/io/vpImageIo.h>
#include <visp3/mbt/vpMbEdgeKltTracker.h>

int main()
{
#if defined VISP_HAVE_OPENCV
  vpMbEdgeKltTracker tracker; // Create an hybrid model based tracker.
  vpImage<unsigned char> I;
  vpHomogeneousMatrix cMo; // Pose computed using the tracker.
  vpCameraParameters cam;

  // Acquire an image
  vpImageIo::read(I, "cube.pgm");

#if defined(VISP_HAVE_X11)
  vpDisplayX display;
  display.init(I, 100, 100, "Mb Hybrid Tracker");
#endif

  tracker.loadConfigFile("cube.xml"); // Load the configuration of the tracker
  // Load the 3d model in cao format. No 3rd party library is required
  tracker.loadModel("cube.cao");
  // Get the camera parameters used by the tracker (from the configuration file).
  tracker.getCameraParameters(cam);
  // Initialise manually the pose by clicking on the image points associated to the 3d points contained in the
  // cube.init file.
  tracker.initClick(I, "cube.init");

  while (true) {
    // Acquire a new image
    vpDisplay::display(I);
    tracker.track(I);     // Track the object on this image
    tracker.getPose(cMo); // Get the pose
    tracker.display(I, cMo, cam, vpColor::darkRed, 1); // Display the model at the computed pose.
    vpDisplay::flush(I);
  }

  return 0;
#endif
}
The tracker can also be used without display. In that case the initial pose must be known (for example, the object is always at the same initial pose) or computed using another method:
#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/io/vpImageIo.h>
#include <visp3/mbt/vpMbEdgeKltTracker.h>

int main()
{
#if defined VISP_HAVE_OPENCV
  vpMbEdgeKltTracker tracker; // Create an hybrid model based tracker.
  vpImage<unsigned char> I;
  vpHomogeneousMatrix cMo; // Pose used in entry (has to be defined), then computed using the tracker.

  // Acquire an image
  vpImageIo::read(I, "cube.pgm"); // Example of acquisition

  tracker.loadConfigFile("cube.xml"); // Load the configuration of the tracker
  // Load the 3d model. To read a .wrl model Coin is required; if Coin is not installed a .cao file can be used.
  tracker.loadModel("cube.cao");
  tracker.initFromPose(I, cMo); // Initialise the tracker with the given pose.

  while (true) {
    // Acquire a new image
    tracker.track(I);     // Track the object on this image
    tracker.getPose(cMo); // Get the pose
  }

  return 0;
#endif
}
Finally, it can be used not to track an object but simply to display a model at a given pose:
#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/io/vpImageIo.h>
#include <visp3/mbt/vpMbEdgeKltTracker.h>

int main()
{
#if defined VISP_HAVE_OPENCV
  vpMbEdgeKltTracker tracker; // Create an hybrid model based tracker.
  vpImage<unsigned char> I;
  vpHomogeneousMatrix cMo; // Pose used to display the model.
  vpCameraParameters cam;

  // Acquire an image
  vpImageIo::read(I, "cube.pgm");

#if defined(VISP_HAVE_X11)
  vpDisplayX display;
  display.init(I, 100, 100, "Mb Hybrid Tracker");
#endif

  tracker.loadConfigFile("cube.xml"); // Load the configuration of the tracker
  // Get the camera parameters used by the tracker (from the configuration file).
  tracker.getCameraParameters(cam);
  // Load the 3d model. To read a .wrl model Coin is required; if Coin is not installed a .cao file can be used.
  tracker.loadModel("cube.cao");

  while (true) {
    // Acquire a new image
    // Get the pose using any method
    vpDisplay::display(I);
    tracker.display(I, cMo, cam, vpColor::darkRed, 1, true); // Display the model at the computed pose.
    vpDisplay::flush(I);
  }
#endif
  return 0;
}
Methods
Overloaded function.
Overloaded function.
Overloaded function.
Get the near distance for clipping.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Inherited Methods
Enable/Disable the appearance of Ogre config dialog on startup.
Enable to display the features.
Get the current number of klt points.
Return the address of the cylinder feature list.
Get the far distance for clipping.
LEVENBERG_MARQUARDT_OPT
Get the erosion of the mask used on the Model faces.
Set the maximum iteration of the virtual visual servoing stage.
Return the angle used to test polygons disappearance.
Get the current list of KLT points.
Set the scales to use to realize the tracking.
Get the threshold for the acceptation of a point.
Get the list of the cylinders tracked for the specified level.
Get the number of polygons (faces) representing the object to track.
Return the number of good points ( vpMeSite ) tracked.
Overloaded function.
Get the initial value of mu used in the Levenberg Marquardt optimization loop.
Set a 6-dim column vector representing the degrees of freedom in the object frame that are estimated by the tracker.
Set the minimum polygon area to be considered as visible in the LOD case.
Get the list of the lines tracked for the specified level.
Set the threshold for the minimum line length to be considered as visible in the LOD case.
Save the pose in the given filename
Get the list of the circles tracked for the specified level.
Return the address of the Klt feature list.
Get the camera parameters.
Load a 3D model from the file in parameter.
Set kernel size used for projection error computation.
Display or not gradient and model orientation when computing the projection error.
Get the current number of klt points.
Set the angle used to test polygons appearance.
Arrow length used to display gradient and model orientation for projection error computation.
Return the scales levels used for the tracking.
Get a 1x6 vpColVector representing the estimated degrees of freedom.
Values:
Set if the polygons that have the given name have to be considered during the tracking phase.
Set if the covariance matrix has to be computed.
Set the initial value of mu for the Levenberg Marquardt optimization loop.
Arrow thickness used to display gradient and model orientation for projection error computation.
Get the current list of KLT points.
Set the erosion of the mask used on the Model faces.
Get the erosion of the mask used on the Model faces.
Set the minimal error (previous / current estimation) to determine if there is convergence or not.
Get the clipping used and defined in vpPolygon3D::vpMbtPolygonClippingType.
Overloaded function.
Set the value of the gain used to compute the control law.
Set the threshold value between 0 and 1 over good moving edges ratio.
Get the error angle between the gradient direction of the model features projected at the resulting pose and their normal.
Get the list of polygons faces (a vpPolygon representing the projection of the face in the image and a list of face corners in 3D), with the possibility to order by distance to the camera or to use the visibility check to consider if the polygon face must be retrieved or not.
Set Moving-Edges parameters for projection error computation.
Get the optimization method used during the tracking.
Get the klt tracker at the current state.
Get the maximum number of iterations of the virtual visual servoing stage.
Get the current list of KLT points and their id.
Get the threshold for the acceptation of a point.
Overloaded function.
Set the flag to consider if the level of detail (LOD) is used.
GAUSS_NEWTON_OPT
Set if the polygons that have the given name have to be considered during the tracking phase.
Set the new value of the klt tracker.
Overloaded function.
Overloaded function.
Compute projection error given an input image and camera pose, parameters.
Add a circle to the list of circles.
Return a reference to the faces structure.
Get the value of the gain used to compute the control law.
Set the erosion of the mask used on the Model faces.
Set the angle used to test polygons disappearance.
Get the covariance matrix.
Note
See setGoodMovingEdgesRatioThreshold()
Return the angle used to test polygons appearance.
Return the address of the circle feature list.
Set the moving edge parameters.
Set the threshold for the acceptation of a point.
Set the filename used to save the initial pose computed using the initClick() method.
Operators
__doc__
__module__
Attributes
GAUSS_NEWTON_OPT
LEVENBERG_MARQUARDT_OPT
__annotations__
- class MbtOptimizationMethod(self, value: int)¶
Bases:
pybind11_object
Values:
GAUSS_NEWTON_OPT
LEVENBERG_MARQUARDT_OPT
- __init__(self)¶
- addCircle(self: visp._visp.mbt.MbKltTracker, P1: visp._visp.core.Point, P2: visp._visp.core.Point, P3: visp._visp.core.Point, r: float, name: str = '') -> None ¶
Add a circle to the list of circles.
- Parameters:
- P1
Center of the circle.
- P2
A point on the plane containing the circle. Together with the center P1 and the point P3, it defines the plane that contains the circle.
- P3
A point on the plane containing the circle. Together with the center P1 and the point P2, it defines the plane that contains the circle.
- r
Radius of the circle.
- name
Name of the circle.
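Why three points are enough: the center P1 together with P2 and P3 spans the plane of the circle, whose normal can be obtained with a cross product. A minimal plain-Python sketch of that geometric fact (illustrative helper, not part of the visp bindings):

```python
def plane_normal(p1, p2, p3):
    """Normal of the plane through three 3D points, via the cross product."""
    u = [p2[i] - p1[i] for i in range(3)]  # vector P1 -> P2
    v = [p3[i] - p1[i] for i in range(3)]  # vector P1 -> P3
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# A circle lying in the z = 0 plane has a normal along z.
print(plane_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # (0, 0, 1)
```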
- computeCurrentProjectionError(self, I: visp._visp.core.ImageGray, _cMo: visp._visp.core.HomogeneousMatrix, _cam: visp._visp.core.CameraParameters) -> float ¶
Compute the projection error given an input image, a camera pose and camera parameters. This projection error uses locations sampled exactly where the model is projected using the camera pose and intrinsic parameters. You may want to use getProjectionError() instead, to get a projection error computed at the ME locations after a call to track() . It works similarly to the vpMbTracker::getProjectionError function: it returns the error angle between the gradient direction of the model features projected at the resulting pose and their normal. The error is expressed in degrees, between 0 and 90.
Note
See setProjectionErrorComputation
Note
See getProjectionError
- Parameters:
- I: visp._visp.core.ImageGray¶
Input grayscale image.
- _cMo: visp._visp.core.HomogeneousMatrix¶
Camera pose.
- _cam: visp._visp.core.CameraParameters¶
Camera parameters.
- display(*args, **kwargs)¶
Overloaded function.
display(self: visp._visp.mbt.MbEdgeKltTracker, I: visp._visp.core.ImageGray, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, col: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None
Display the 3D model at a given position using the given camera parameters
- Parameters:
- I
The image.
- cMo
Pose used to project the 3D model into the image.
- cam
The camera parameters.
- col
The desired color.
- thickness
The thickness of the lines.
- displayFullModel
boolean to say if all the model has to be displayed, even the faces that are not visible.
display(self: visp._visp.mbt.MbEdgeKltTracker, I: visp._visp.core.ImageRGBa, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, col: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None
Display the 3D model at a given position using the given camera parameters
- Parameters:
- I
The color image.
- cMo
Pose used to project the 3D model into the image.
- cam
The camera parameters.
- col
The desired color.
- thickness
The thickness of the lines.
- displayFullModel
boolean to say if all the model has to be displayed, even the faces that are not visible.
display(self: visp._visp.mbt.MbKltTracker, I: visp._visp.core.ImageGray, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, col: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None
Display the 3D model at a given position using the given camera parameters
- Parameters:
- I
The image.
- cMo
Pose used to project the 3D model into the image.
- cam
The camera parameters.
- col
The desired color.
- thickness
The thickness of the lines.
- displayFullModel
Boolean to say if all the model has to be displayed, even the faces that are not visible.
display(self: visp._visp.mbt.MbKltTracker, I: visp._visp.core.ImageRGBa, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, col: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None
Display the 3D model at a given position using the given camera parameters
- Parameters:
- I
The color image.
- cMo
Pose used to project the 3D model into the image.
- cam
The camera parameters.
- col
The desired color.
- thickness
The thickness of the lines.
- displayFullModel
Boolean to say if all the model has to be displayed, even the faces that are not visible.
display(self: visp._visp.mbt.MbEdgeTracker, I: visp._visp.core.ImageGray, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, col: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None
Display the 3D model from a given position of the camera.
- Parameters:
- I
The image.
- cMo
Pose used to project the 3D model into the image.
- cam
The camera parameters.
- col
The desired color.
- thickness
The thickness of the lines.
- displayFullModel
If true, the full model is displayed (even the non visible faces).
display(self: visp._visp.mbt.MbEdgeTracker, I: visp._visp.core.ImageRGBa, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, col: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None
Display the 3D model from a given position of the camera.
- Parameters:
- I
The image.
- cMo
Pose used to project the 3D model into the image.
- cam
The camera parameters.
- col
The desired color.
- thickness
The thickness of the lines.
- displayFullModel
If true, the full model is displayed (even the non visible surfaces).
- getCameraParameters(self, cam: visp._visp.core.CameraParameters) -> None ¶
Get the camera parameters.
- Parameters:
- cam: visp._visp.core.CameraParameters¶
copy of the camera parameters used by the tracker.
- getClipping(self) -> int ¶
Get the clipping used and defined in vpPolygon3D::vpMbtPolygonClippingType.
- Returns:
Clipping flags.
- getCovarianceMatrix(self) -> visp._visp.core.Matrix ¶
Get the covariance matrix. This matrix is only computed if setCovarianceComputation() is turned on.
Note
See setCovarianceComputation()
- getError(*args, **kwargs)¶
Overloaded function.
getError(self: visp._visp.mbt.MbEdgeKltTracker) -> visp._visp.core.ColVector
Return the error vector \((s-s^*)\) reached after the virtual visual servoing process used to estimate the pose.
The following example shows how to use this function to compute the norm of the residual and the norm of the residual normalized by the number of features that are tracked:
tracker.track(I);
std::cout << "Residual: " << sqrt( (tracker.getError()).sumSquare()) << std::endl;
std::cout << "Residual normalized: " << sqrt( (tracker.getError()).sumSquare())/tracker.getError().size() << std::endl;
Note
See getRobustWeights()
getError(self: visp._visp.mbt.MbKltTracker) -> visp._visp.core.ColVector
Return the error vector \((s-s^*)\) reached after the virtual visual servoing process used to estimate the pose.
The following example shows how to use this function to compute the norm of the residual and the norm of the residual normalized by the number of features that are tracked:
tracker.track(I);
std::cout << "Residual: " << sqrt( (tracker.getError()).sumSquare()) << std::endl;
std::cout << "Residual normalized: " << sqrt( (tracker.getError()).sumSquare())/tracker.getError().size() << std::endl;
Note
See getRobustWeights()
getError(self: visp._visp.mbt.MbEdgeTracker) -> visp._visp.core.ColVector
Return the error vector \((s-s^*)\) reached after the virtual visual servoing process used to estimate the pose.
The following example shows how to use this function to compute the norm of the residual and the norm of the residual normalized by the number of features that are tracked:
tracker.track(I);
std::cout << "Residual: " << sqrt( (tracker.getError()).sumSquare()) << std::endl;
std::cout << "Residual normalized: " << sqrt( (tracker.getError()).sumSquare())/tracker.getError().size() << std::endl;
Note
See getRobustWeights()
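The residual norm and the normalized residual computed in the C++ snippets above can be reproduced on any error vector with a few lines of plain Python (illustrative helper and values, not part of the visp bindings):

```python
import math

def residual_norms(error):
    """Norm of the residual (s - s*) and the norm divided by the feature count."""
    sum_sq = sum(e * e for e in error)  # equivalent of vpColVector::sumSquare()
    norm = math.sqrt(sum_sq)
    return norm, norm / len(error)

# e.g. with an illustrative error vector as returned by tracker.getError()
norm, norm_per_feature = residual_norms([0.3, -0.4, 0.0])
print(norm)  # 0.5 (up to floating-point rounding)
```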
- getEstimatedDoF(self) -> visp._visp.core.ColVector ¶
Get a 1x6 vpColVector representing the estimated degrees of freedom.
vpColVector [0] = 1 if translation on X is estimated, 0 otherwise;
vpColVector [1] = 1 if translation on Y is estimated, 0 otherwise;
vpColVector [2] = 1 if translation on Z is estimated, 0 otherwise;
vpColVector [3] = 1 if rotation on X is estimated, 0 otherwise;
vpColVector [4] = 1 if rotation on Y is estimated, 0 otherwise;
vpColVector [5] = 1 if rotation on Z is estimated, 0 otherwise;
- Returns:
1x6 vpColVector representing the estimated degrees of freedom.
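The returned vector acts as a boolean mask over [tx, ty, tz, rx, ry, rz]; a small plain-Python sketch of how it can be interpreted (names are illustrative, not part of the visp bindings):

```python
DOF_NAMES = ["tx", "ty", "tz", "rx", "ry", "rz"]

def estimated_dof_names(dof_vector):
    """Return the names of the degrees of freedom flagged as estimated (value 1)."""
    return [name for name, flag in zip(DOF_NAMES, dof_vector) if flag == 1]

# e.g. translations estimated, rotations frozen
print(estimated_dof_names([1, 1, 1, 0, 0, 0]))  # ['tx', 'ty', 'tz']
```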
- getFaces(self) -> vpMbHiddenFaces<vpMbtPolygon> ¶
Return a reference to the faces structure.
- getFarClippingDistance(self) -> float ¶
Get the far distance for clipping.
- Returns:
Far clipping value.
- getFeaturesCircle(self) -> list[visp._visp.mbt.MbtDistanceCircle] ¶
Return the address of the circle feature list.
- getFeaturesKlt(self) -> list[visp._visp.mbt.MbtDistanceKltPoints] ¶
Return the address of the Klt feature list.
- getFeaturesKltCylinder(self) -> list[visp._visp.mbt.MbtDistanceKltCylinder] ¶
Return the address of the cylinder feature list.
- getGoodMovingEdgesRatioThreshold(self) -> float ¶
Note
See setGoodMovingEdgesRatioThreshold()
- Returns:
The threshold value between 0 and 1 over the good moving edges ratio. It allows deciding if the tracker has enough valid moving edges to compute a pose. 1 means that all moving edges should be good to have a valid pose, while 0.1 means that 10% of the moving edges are enough to declare a pose valid.
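The acceptance rule described above can be sketched in a few lines of plain Python (illustrative names and values, not the visp API):

```python
def pose_is_valid(n_good_edges, n_total_edges, threshold):
    """Decide if enough moving edges are 'good' to trust the estimated pose.

    threshold mirrors getGoodMovingEdgesRatioThreshold(): 1.0 requires all
    edges to be good, 0.1 accepts a pose with only 10% of good edges.
    """
    if n_total_edges == 0:
        return False  # no edges at all: nothing to validate the pose with
    return n_good_edges / n_total_edges >= threshold

print(pose_is_valid(15, 100, 0.1))  # True: 15% >= 10%
print(pose_is_valid(5, 100, 0.1))   # False: 5% < 10%
```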
- getInitialMu(self) -> float ¶
Get the initial value of mu used in the Levenberg Marquardt optimization loop.
- Returns:
the initial mu value.
- getKltImagePoints(self) -> list[visp._visp.core.ImagePoint] ¶
Get the current list of KLT points.
- Returns:
the list of KLT points through vpKltOpencv .
- getKltImagePointsWithId(self) -> dict[int, visp._visp.core.ImagePoint] ¶
Get the current list of KLT points and their id.
- Returns:
the list of KLT points and their id through vpKltOpencv .
- getKltMaskBorder(self) -> int ¶
Get the erosion of the mask used on the Model faces.
- Returns:
The erosion.
- getKltOpencv(self) -> visp._visp.klt.KltOpencv ¶
Get the klt tracker at the current state.
- Returns:
klt tracker.
- getKltPoints(self) -> list[cv::Point_<float>] ¶
Get the current list of KLT points.
- Returns:
the list of KLT points through vpKltOpencv .
- getKltThresholdAcceptation(self) -> float ¶
Get the threshold for the acceptation of a point.
- Returns:
threshold_outlier : Threshold for the weight below which a point is rejected.
- getLambda(self) -> float ¶
Get the value of the gain used to compute the control law.
- Returns:
the value for the gain.
- getLcircle(self, circlesList: list[visp._visp.mbt.MbtDistanceCircle], level: int = 0) -> list[visp._visp.mbt.MbtDistanceCircle] ¶
Get the list of the circles tracked for the specified level. Each circle contains the list of the vpMeSite .
- Parameters:
- circlesList: list[visp._visp.mbt.MbtDistanceCircle]¶
The list of the circles of the model.
- level: int = 0¶
Level corresponding to the list to return.
- Returns:
A tuple containing:
circlesList: The list of the circles of the model.
- getLcylinder(self, cylindersList: list[visp._visp.mbt.MbtDistanceCylinder], level: int = 0) -> list[visp._visp.mbt.MbtDistanceCylinder] ¶
Get the list of the cylinders tracked for the specified level. Each cylinder contains the list of the vpMeSite .
- Parameters:
- cylindersList: list[visp._visp.mbt.MbtDistanceCylinder]¶
The list of the cylinders of the model.
- level: int = 0¶
Level corresponding to the list to return.
- Returns:
A tuple containing:
cylindersList: The list of the cylinders of the model.
- getLline(self, linesList: list[visp._visp.mbt.MbtDistanceLine], level: int = 0) -> list[visp._visp.mbt.MbtDistanceLine] ¶
Get the list of the lines tracked for the specified level. Each line contains the list of the vpMeSite .
- Parameters:
- linesList: list[visp._visp.mbt.MbtDistanceLine]¶
The list of the lines of the model.
- level: int = 0¶
Level corresponding to the list to return.
- Returns:
A tuple containing:
linesList: The list of the lines of the model.
- getMaskBorder(self) -> int ¶
Get the erosion of the mask used on the Model faces.
- Returns:
The erosion.
- getMaxIter(self) -> int ¶
Get the maximum number of iterations of the virtual visual servoing stage.
- Returns:
the number of iteration
- getModelForDisplay(*args, **kwargs)¶
Overloaded function.
getModelForDisplay(self: visp._visp.mbt.MbEdgeKltTracker, width: int, height: int, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, displayFullModel: bool = false) -> list[list[float]]
Return a list of primitives parameters to display the model at a given pose and camera parameters.
Line parameters are: <primitive id (here 0 for line)> , <pt_start.i()> , <pt_start.j()> , <pt_end.i()> , <pt_end.j()>
Ellipse parameters are: <primitive id (here 1 for ellipse)> , <pt_center.i()> , <pt_center.j()> , <n_20> , <n_11> , <n_02> where <n_ij> are the second order centered moments of the ellipse normalized by its area (i.e., such that \(n_{ij} = \mu_{ij}/a\) where \(\mu_{ij}\) are the centered moments and a the area).
- Parameters:
- width
Image width.
- height
Image height.
- cMo
Pose used to project the 3D model into the image.
- cam
The camera parameters.
- displayFullModel
If true, the line is displayed even if it is not visible.
getModelForDisplay(self: visp._visp.mbt.MbKltTracker, width: int, height: int, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, displayFullModel: bool = false) -> list[list[float]]
Return a list of primitives parameters to display the model at a given pose and camera parameters.
Line parameters are: <primitive id (here 0 for line)> , <pt_start.i()> , <pt_start.j()> , <pt_end.i()> , <pt_end.j()>
Ellipse parameters are: <primitive id (here 1 for ellipse)> , <pt_center.i()> , <pt_center.j()> , <n_20> , <n_11> , <n_02> where <n_ij> are the second order centered moments of the ellipse normalized by its area (i.e., such that \(n_{ij} = \mu_{ij}/a\) where \(\mu_{ij}\) are the centered moments and a the area).
- Parameters:
- width
Image width.
- height
Image height.
- cMo
Pose used to project the 3D model into the image.
- cam
The camera parameters.
- displayFullModel
If true, the line is displayed even if it is not visible.
getModelForDisplay(self: visp._visp.mbt.MbEdgeTracker, width: int, height: int, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, displayFullModel: bool = false) -> list[list[float]]
Return a list of primitives parameters to display the model at a given pose and camera parameters.
Line parameters are: <primitive id (here 0 for line)> , <pt_start.i()> , <pt_start.j()> , <pt_end.i()> , <pt_end.j()>
Ellipse parameters are: <primitive id (here 1 for ellipse)> , <pt_center.i()> , <pt_center.j()> , <n_20> , <n_11> , <n_02> where <n_ij> are the second order centered moments of the ellipse normalized by its area (i.e., such that \(n_{ij} = \mu_{ij}/a\) where \(\mu_{ij}\) are the centered moments and a the area).
- Parameters:
- width
Image width.
- height
Image height.
- cMo
Pose used to project the 3D model into the image.
- cam
The camera parameters.
- displayFullModel
If true, the line is displayed even if it is not visible.
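The rows returned by getModelForDisplay() can be dispatched on the leading primitive id in plain Python; a minimal sketch with an illustrative sample (the helper is not part of the visp bindings):

```python
def split_primitives(model):
    """Split getModelForDisplay()-style rows into lines and ellipses by primitive id."""
    lines, ellipses = [], []
    for params in model:
        if params[0] == 0:    # line: id, pt_start.i, pt_start.j, pt_end.i, pt_end.j
            lines.append(params[1:5])
        elif params[0] == 1:  # ellipse: id, pt_center.i, pt_center.j, n_20, n_11, n_02
            ellipses.append(params[1:6])
    return lines, ellipses

sample = [[0, 10.0, 20.0, 30.0, 40.0],
          [1, 50.0, 60.0, 2.0, 0.0, 2.0]]
lines, ellipses = split_primitives(sample)
print(lines)     # [[10.0, 20.0, 30.0, 40.0]]
print(ellipses)  # [[50.0, 60.0, 2.0, 0.0, 2.0]]
```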
- getMovingEdge(*args, **kwargs)¶
Overloaded function.
getMovingEdge(self: visp._visp.mbt.MbEdgeTracker, p_me: visp._visp.me.Me) -> None
Get the moving edge parameters.
- Parameters:
- p_me
[out] : an instance of the moving edge parameters used by the tracker.
getMovingEdge(self: visp._visp.mbt.MbEdgeTracker) -> visp._visp.me.Me
Get the moving edge parameters.
- Returns:
an instance of the moving edge parameters used by the tracker.
- getNbKltPoints(self) -> int ¶
Get the current number of klt points.
- Returns:
the number of features
- getNbPoints(self, level: int = 0) -> int ¶
Return the number of good points ( vpMeSite ) tracked. A good point is a vpMeSite with its flag “state” equal to 0. Only these points are used during the virtual visual servoing stage.
- Returns:
the number of good points.
- getNbPolygon(self) -> int ¶
Get the number of polygons (faces) representing the object to track.
- Returns:
Number of polygons.
- getNearClippingDistance(self) -> float ¶
Get the near distance for clipping.
- Returns:
Near clipping value.
- getOptimizationMethod(self) -> visp._visp.mbt.MbTracker.MbtOptimizationMethod ¶
Get the optimization method used during the tracking. 0 = Gauss-Newton approach. 1 = Levenberg-Marquardt approach.
- Returns:
Optimization method.
- getPolygonFaces(self, orderPolygons: bool = true, useVisibility: bool = true, clipPolygon: bool = false) -> tuple[list[visp._visp.core.Polygon], list[list[visp._visp.core.Point]]] ¶
Get the list of polygons faces (a vpPolygon representing the projection of the face in the image and a list of face corners in 3D), with the possibility to order by distance to the camera or to use the visibility check to consider if the polygon face must be retrieved or not.
- Parameters:
- orderPolygons: bool = true¶
If true, the resulting list is ordered from the nearest polygon faces to the farthest.
- useVisibility: bool = true¶
If true, only visible faces will be retrieved.
- clipPolygon: bool = false¶
If true, the polygons will be clipped according to the clipping flags set in vpMbTracker .
- Returns:
A pair object containing the list of vpPolygon and the list of face corners.
- getPose(*args, **kwargs)¶
Overloaded function.
getPose(self: visp._visp.mbt.MbTracker, cMo: visp._visp.core.HomogeneousMatrix) -> None
Get the current pose between the object and the camera. cMo is the matrix which can be used to express coordinates from the object frame to camera frame.
- Parameters:
- cMo
the pose
getPose(self: visp._visp.mbt.MbTracker) -> visp._visp.core.HomogeneousMatrix
Get the current pose between the object and the camera. cMo is the matrix which can be used to express coordinates from the object frame to camera frame.
- Returns:
the current pose
- getProjectionError(self) -> float ¶
Get the error angle between the gradient direction of the model features projected at the resulting pose and their normal. The error is expressed in degrees, between 0 and 90. This value is computed if setProjectionErrorComputation() is turned on.
Note
See setProjectionErrorComputation()
- Returns:
the value for the error.
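The underlying measure is an angle, folded into [0, 90] degrees, between a feature's gradient direction and its normal: 0 means perfect alignment, 90 is the worst case. A plain-Python sketch of that per-feature angle (illustrative helper and vectors, not the visp implementation):

```python
import math

def angle_to_normal_deg(gradient, normal):
    """Angle in degrees, folded into [0, 90], between a 2D gradient and a normal."""
    dot = gradient[0] * normal[0] + gradient[1] * normal[1]
    ng = math.hypot(*gradient)
    nn = math.hypot(*normal)
    cos_a = max(-1.0, min(1.0, dot / (ng * nn)))  # clamp for acos safety
    angle = math.degrees(math.acos(cos_a))
    return min(angle, 180.0 - angle)  # orientation-insensitive: 0..90

print(angle_to_normal_deg((1.0, 0.0), (1.0, 0.0)))  # 0.0  -> perfect alignment
print(angle_to_normal_deg((1.0, 0.0), (0.0, 1.0)))  # 90.0 -> worst case
```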
- getRobustWeights(*args, **kwargs)¶
Overloaded function.
getRobustWeights(self: visp._visp.mbt.MbEdgeKltTracker) -> visp._visp.core.ColVector
Return the weights vector \(w_i\) computed by the robust scheme.
The following example shows how to use this function to compute the norm of the weighted residual and the norm of the weighted residual normalized by the sum of the weights associated to the features that are tracked:
tracker.track(I);
vpColVector w = tracker.getRobustWeights();
vpColVector e = tracker.getError();
vpColVector we(w.size());
for(unsigned int i=0; i<w.size(); i++)
  we[i] = w[i]*e[i];
std::cout << "Weighted residual: " << sqrt( (we).sumSquare() ) << std::endl;
std::cout << "Weighted residual normalized: " << sqrt( (we).sumSquare() ) / w.sum() << std::endl;
Note
See getError()
getRobustWeights(self: visp._visp.mbt.MbKltTracker) -> visp._visp.core.ColVector
Return the weights vector \(w_i\) computed by the robust scheme.
The following example shows how to use this function to compute the norm of the weighted residual and the norm of the weighted residual normalized by the sum of the weights associated to the features that are tracked:
tracker.track(I);
vpColVector w = tracker.getRobustWeights();
vpColVector e = tracker.getError();
vpColVector we(w.size());
for(unsigned int i=0; i<w.size(); i++)
  we[i] = w[i]*e[i];
std::cout << "Weighted residual: " << sqrt( (we).sumSquare() ) << std::endl;
std::cout << "Weighted residual normalized: " << sqrt( (we).sumSquare() ) / w.sum() << std::endl;
Note
See getError()
getRobustWeights(self: visp._visp.mbt.MbEdgeTracker) -> visp._visp.core.ColVector
Return the weights vector \(w_i\) computed by the robust scheme.
The following example shows how to use this function to compute the norm of the weighted residual and the norm of the weighted residual normalized by the sum of the weights associated to the features that are tracked:
tracker.track(I);
vpColVector w = tracker.getRobustWeights();
vpColVector e = tracker.getError();
vpColVector we(w.size());
for(unsigned int i=0; i<w.size(); i++)
  we[i] = w[i]*e[i];
std::cout << "Weighted residual: " << sqrt( (we).sumSquare() ) << std::endl;
std::cout << "Weighted residual normalized: " << sqrt( (we).sumSquare() ) / w.sum() << std::endl;
Note
See getError()
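The weighted residual computed in the C++ snippets above translates directly to plain Python given a weights vector and an error vector (illustrative helper and values, not part of the visp bindings):

```python
import math

def weighted_residual_norms(weights, error):
    """Norm of the element-wise product w*e, and that norm divided by sum(w)."""
    we = [w * e for w, e in zip(weights, error)]
    norm = math.sqrt(sum(v * v for v in we))
    return norm, norm / sum(weights)

# e.g. illustrative values for tracker.getRobustWeights() and tracker.getError()
norm, normalized = weighted_residual_norms([1.0, 0.5], [0.3, -0.8])
print(norm)  # sqrt(0.3^2 + 0.4^2) = 0.5 (up to floating-point rounding)
```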
- getScales(self) -> list[bool] ¶
Return the scales levels used for the tracking.
- Returns:
The scales levels used for the tracking.
- getThresholdAcceptation(self) -> float ¶
Get the threshold for the acceptation of a point.
- Returns:
threshold_outlier : Threshold for the weight below which a point is rejected.
- initClick(*args, **kwargs)¶
Overloaded function.
initClick(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, initFile: str, displayHelp: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None
Initialise the tracker by clicking in the image on the pixels that correspond to the 3D points whose coordinates are extracted from a file. In this file, comments starting with the # character are allowed. Notice that 3D point coordinates are expressed in meters in the object frame with their X, Y and Z values.
The structure of this file is the following:
# 3D point coordinates
4                 # Number of points in the file (minimum is four)
0.01 0.01 0.01    #  \
...               #  | 3D coordinates in the object frame (X, Y, Z)
0.01 -0.01 -0.01  #  /
- Parameters:
- I
Input grayscale image where the user has to click.
- initFile
File containing the coordinates of at least 4 3D points the user has to click in the image. This file should have a .init extension (i.e. teabox.init).
- displayHelp
Optional display of an image (.ppm, .pgm, .jpg, .jpeg, .png) that should have the same generic name as the init file (i.e. teabox.ppm or teabox.png). This image may be used to show where to click. This functionality is only available if the visp_io module is used.
- T
optional transformation matrix to transform 3D points expressed in the original object frame to the desired object frame.
initClick(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, initFile: str, displayHelp: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None
Initialise the tracker by clicking in the image on the pixels that correspond to the 3D points whose coordinates are extracted from a file. In this file, comments starting with the # character are allowed. Notice that 3D point coordinates are expressed in meters in the object frame with their X, Y and Z values.
The structure of this file is the following:
# 3D point coordinates 4 # Number of points in the file (minimum is four) 0.01 0.01 0.01 # \ ... # | 3D coordinates in the object frame (X, Y, Z) 0.01 -0.01 -0.01 # /
- Parameters:
- I_color
Input color image where the user has to click.
- initFile
File containing the coordinates of at least 4 3D points the user has to click in the image. This file should have a .init extension (e.g. teabox.init).
- displayHelp
Optional display of an image (.ppm, .pgm, .jpg, .jpeg, .png) that should have the same generic name as the init file (e.g. teabox.ppm or teabox.png). This image may be used to show where to click. This functionality is only available if the visp_io module is used.
- T
optional transformation matrix to transform 3D points expressed in the original object frame to the desired object frame.
initClick(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, points3D_list: list[visp._visp.core.Point], displayFile: str = "") -> None
Initialise the tracker by clicking in the image on the pixels that correspond to the 3D points whose coordinates are given in points3D_list .
- Parameters:
- I
Input grayscale image where the user has to click.
- points3D_list
List of at least 4 3D points with coordinates expressed in meters in the object frame.
- displayFile
Path to the image used to display the help. This image may be used to show where to click. This functionality is only available if visp_io module is used.
initClick(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, points3D_list: list[visp._visp.core.Point], displayFile: str = "") -> None
Initialise the tracker by clicking in the image on the pixels that correspond to the 3D points whose coordinates are given in points3D_list .
- Parameters:
- I_color
Input color image where the user has to click.
- points3D_list
List of at least 4 3D points with coordinates expressed in meters in the object frame.
- displayFile
Path to the image used to display the help. This image may be used to show where to click. This functionality is only available if visp_io module is used.
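As a companion to the initClick() overloads above, here is a minimal pure-Python sketch of the .init file format they read. The parse_init helper is purely illustrative (not part of ViSP); the tracker does this parsing internally.

```python
# Sketch of the ".init" file format read by initClick(), following the layout
# documented above: '#' starts a comment, then the point count, then one
# "X Y Z" line per 3D point (coordinates in meters, object frame).
INIT_TEXT = """\
# 3D point coordinates
4                 # Number of points in the file (minimum is four)
0.01  0.01  0.01
0.01 -0.01  0.01
0.01 -0.01 -0.01
0.01  0.01 -0.01
"""

def parse_init(text):
    """Return the list of (X, Y, Z) tuples from an .init-style string."""
    lines = []
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()  # comments start with '#'
        if line:
            lines.append(line)
    count = int(lines[0])
    points = [tuple(float(v) for v in line.split()) for line in lines[1:1 + count]]
    if count < 4 or len(points) != count:
        raise ValueError("initClick() needs at least 4 points")
    return points

points = parse_init(INIT_TEXT)
```

With the real tracker, the same file would simply be passed as `tracker.initClick(I, "teabox.init")` and the user would click the corresponding pixels.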
- initFromPoints(*args, **kwargs)¶
Overloaded function.
initFromPoints(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, initFile: str) -> None
Initialise the tracker by reading 3D point coordinates and the corresponding 2D image point coordinates from a file. Comments starting with the # character are allowed. 3D point coordinates are expressed in meters in the object frame with X, Y and Z values. 2D point coordinates are expressed in pixel coordinates, with first the row and then the column of the pixel in the image. The structure of this file is the following:
# 3D point coordinates
4                 # Number of 3D points in the file (minimum is four)
0.01 0.01 0.01    # \
...               #  | 3D coordinates in meters in the object frame
0.01 -0.01 -0.01  # /
# corresponding 2D point coordinates
4                 # Number of image points (has to be the same as the number of 3D points)
100 200           # \
...               #  | 2D coordinates in pixel in the image
50 10             # /
- Parameters:
- I
Input grayscale image
- initFile
Path to the file containing all the points.
initFromPoints(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, initFile: str) -> None
Initialise the tracker by reading 3D point coordinates and the corresponding 2D image point coordinates from a file. Comments starting with the # character are allowed. 3D point coordinates are expressed in meters in the object frame with X, Y and Z values. 2D point coordinates are expressed in pixel coordinates, with first the row and then the column of the pixel in the image. The structure of this file is the following:
# 3D point coordinates
4                 # Number of 3D points in the file (minimum is four)
0.01 0.01 0.01    # \
...               #  | 3D coordinates in meters in the object frame
0.01 -0.01 -0.01  # /
# corresponding 2D point coordinates
4                 # Number of image points (has to be the same as the number of 3D points)
100 200           # \
...               #  | 2D coordinates in pixel in the image
50 10             # /
- Parameters:
- I_color
Input color image
- initFile
Path to the file containing all the points.
initFromPoints(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, points2D_list: list[visp._visp.core.ImagePoint], points3D_list: list[visp._visp.core.Point]) -> None
Initialise the tracking with the list of image points (points2D_list) and the list of corresponding 3D points (object frame) (points3D_list).
- Parameters:
- I
Input grayscale image
- points2D_list
List of image points.
- points3D_list
List of 3D points (object frame).
initFromPoints(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, points2D_list: list[visp._visp.core.ImagePoint], points3D_list: list[visp._visp.core.Point]) -> None
Initialise the tracking with the list of image points (points2D_list) and the list of corresponding 3D points (object frame) (points3D_list).
- Parameters:
- I_color
Input color image
- points2D_list
List of image points.
- points3D_list
List of 3D points (object frame).
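The file-based initFromPoints() overloads read the combined 3D/2D layout documented above. The following pure-Python sketch (illustrative only; ViSP parses this file itself) shows how the two point sets pair up; the pixel values after the 3D block are hypothetical.

```python
# Sketch of the file layout consumed by initFromPoints(): first the 3D points
# (meters, object frame), then the matching 2D points as "row column" pixels.
POINTS_TEXT = """\
# 3D point coordinates
4
0.01  0.01  0.01
0.01 -0.01  0.01
0.01 -0.01 -0.01
0.01  0.01 -0.01
# corresponding 2D point coordinates
4
100 200
120 220
140 240
160 260
"""

def parse_points_file(text):
    """Split an initFromPoints-style string into (3D points, 2D points)."""
    rows = []
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop '#' comments
        if line:
            rows.append(line.split())
    n3d = int(rows[0][0])
    pts3d = [tuple(map(float, r)) for r in rows[1:1 + n3d]]
    n2d = int(rows[1 + n3d][0])
    pts2d = [tuple(map(float, r)) for r in rows[2 + n3d:2 + n3d + n2d]]
    if n2d != n3d:
        raise ValueError("2D and 3D point counts must match")
    return pts3d, pts2d

pts3d, pts2d = parse_points_file(POINTS_TEXT)
```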
- initFromPose(*args, **kwargs)¶
Overloaded function.
initFromPose(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, initFile: str) -> None
Initialise the tracking from a pose in vpPoseVector format read from the file initFile. The structure of this file is the following (the // comments are only explanatory and are not part of the file):
// The six values of the pose vector
0.0000  // \
0.0000  //  |
1.0000  //  | Example of pose vector values where Z = 1 meter
0.0000  //  |
0.0000  //  |
0.0000  // /
The first three lines refer to the translation and the last three to the rotation in thetaU parametrisation (see vpThetaUVector ).
- Parameters:
- I
Input grayscale image
- initFile
Path to the file containing the pose.
initFromPose(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, initFile: str) -> None
Initialise the tracking from a pose in vpPoseVector format read from the file initFile. The structure of this file is the following (the // comments are only explanatory and are not part of the file):
// The six values of the pose vector
0.0000  // \
0.0000  //  |
1.0000  //  | Example of pose vector values where Z = 1 meter
0.0000  //  |
0.0000  //  |
0.0000  // /
The first three lines refer to the translation and the last three to the rotation in thetaU parametrisation (see vpThetaUVector ).
- Parameters:
- I_color
Input color image
- initFile
Path to the file containing the pose.
initFromPose(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, cMo: visp._visp.core.HomogeneousMatrix) -> None
Initialise the tracking thanks to the pose.
- Parameters:
- I
Input grayscale image
- cMo
Pose matrix.
initFromPose(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, cMo: visp._visp.core.HomogeneousMatrix) -> None
Initialise the tracking thanks to the pose.
- Parameters:
- I_color
Input color image
- cMo
Pose matrix.
initFromPose(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, cPo: visp._visp.core.PoseVector) -> None
Initialise the tracking thanks to the pose vector.
- Parameters:
- I
Input grayscale image
- cPo
Pose vector.
initFromPose(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, cPo: visp._visp.core.PoseVector) -> None
Initialise the tracking thanks to the pose vector.
- Parameters:
- I_color
Input color image
- cPo
Pose vector.
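The pose-file overload of initFromPose() reads the six vpPoseVector components described above. This standalone sketch (the parse_pose helper is illustrative, not a ViSP API) shows the expected contents:

```python
# Sketch of the pose file read by initFromPose(): six values of a
# vpPoseVector, i.e. the translation (tx, ty, tz) in meters followed by the
# thetaU rotation (tux, tuy, tuz) in radians. "//" starts a comment here.
POSE_TEXT = """\
// The six values of the pose vector
0.0000
0.0000
1.0000   // Z = 1 meter
0.0000
0.0000
0.0000
"""

def parse_pose(text):
    """Return (translation, theta_u) lists from a pose-file string."""
    vals = []
    for raw in text.splitlines():
        line = raw.split("//", 1)[0].strip()  # drop '//' comments
        if line:
            vals.append(float(line))
    if len(vals) != 6:
        raise ValueError("a vpPoseVector has exactly 6 components")
    return vals[:3], vals[3:]

translation, theta_u = parse_pose(POSE_TEXT)
```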
- loadConfigFile(*args, **kwargs)¶
Overloaded function.
loadConfigFile(self: visp._visp.mbt.MbEdgeKltTracker, configFile: str, verbose: bool = true) -> None
Load the xml configuration file. From the configuration file initialize the parameters corresponding to the objects: moving-edges, KLT, camera.
The XML configuration file has the following form:
<?xml version="1.0"?>
<conf>
  <ecm>
    <mask>
      <size>5</size>
      <nb_mask>180</nb_mask>
    </mask>
    <range>
      <tracking>10</tracking>
    </range>
    <contrast>
      <edge_threshold_type>1</edge_threshold_type>
      <edge_threshold>20</edge_threshold>
      <mu1>0.5</mu1>
      <mu2>0.5</mu2>
    </contrast>
    <sample>
      <step>4</step>
    </sample>
  </ecm>
  <camera>
    <width>640</width>
    <height>480</height>
    <u0>320</u0>
    <v0>240</v0>
    <px>686.24</px>
    <py>686.24</py>
  </camera>
  <face>
    <angle_appear>65</angle_appear>
    <angle_disappear>85</angle_disappear>
    <near_clipping>0.01</near_clipping>
    <far_clipping>0.90</far_clipping>
    <fov_clipping>1</fov_clipping>
  </face>
  <klt>
    <mask_border>10</mask_border>
    <max_features>10000</max_features>
    <window_size>5</window_size>
    <quality>0.02</quality>
    <min_distance>10</min_distance>
    <harris>0.02</harris>
    <size_block>3</size_block>
    <pyramid_lvl>3</pyramid_lvl>
  </klt>
</conf>
- Parameters:
- configFile
full name of the xml file.
- verbose
Set true to activate the verbose mode, false otherwise.
loadConfigFile(self: visp._visp.mbt.MbKltTracker, configFile: str, verbose: bool = true) -> None
Load the xml configuration file. From the configuration file initialize the parameters corresponding to the objects: KLT, camera.
The XML configuration file has the following form:
<?xml version="1.0"?>
<conf>
  <camera>
    <width>640</width>
    <height>480</height>
    <u0>320</u0>
    <v0>240</v0>
    <px>686.24</px>
    <py>686.24</py>
  </camera>
  <face>
    <angle_appear>65</angle_appear>
    <angle_disappear>85</angle_disappear>
    <near_clipping>0.01</near_clipping>
    <far_clipping>0.90</far_clipping>
    <fov_clipping>1</fov_clipping>
  </face>
  <klt>
    <mask_border>10</mask_border>
    <max_features>10000</max_features>
    <window_size>5</window_size>
    <quality>0.02</quality>
    <min_distance>10</min_distance>
    <harris>0.02</harris>
    <size_block>3</size_block>
    <pyramid_lvl>3</pyramid_lvl>
  </klt>
</conf>
- Parameters:
- configFile
full name of the xml file.
- verbose
Set true to activate the verbose mode, false otherwise.
loadConfigFile(self: visp._visp.mbt.MbTracker, configFile: str, verbose: bool = true) -> None
Load a config file to parameterise the behavior of the tracker.
Virtual method to adapt to each tracker.
- Parameters:
- configFile
An xml config file to parse.
- verbose
verbose flag.
loadConfigFile(self: visp._visp.mbt.MbEdgeTracker, configFile: str, verbose: bool = true) -> None
Load the xml configuration file. From the configuration file initialize the parameters corresponding to the objects: moving-edges, camera and visibility angles.
Note
See loadConfigFile(const char*)
- Parameters:
- configFile
full name of the xml file.
- verbose
verbose flag.
loadConfigFile(self: visp._visp.mbt.MbTracker, configFile: str, verbose: bool = true) -> None
Load a config file to parameterise the behavior of the tracker.
Virtual method to adapt to each tracker.
- Parameters:
- configFile
An xml config file to parse.
- verbose
verbose flag.
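Outside the tracker, the same XML configuration can be inspected with the standard library. The snippet below is a hedged sketch that reads back a few settings from a trimmed copy of the documented <conf> layout; the tracker itself consumes the full file through loadConfigFile().

```python
# Read a few tracker settings from an XML configuration fragment using only
# the standard library. Tag names follow the documented <conf> layout.
import xml.etree.ElementTree as ET

CONF = """<?xml version="1.0"?>
<conf>
  <camera>
    <width>640</width><height>480</height>
    <u0>320</u0><v0>240</v0>
    <px>686.24</px><py>686.24</py>
  </camera>
  <klt>
    <max_features>10000</max_features>
    <pyramid_lvl>3</pyramid_lvl>
  </klt>
</conf>
"""

root = ET.fromstring(CONF)
px = float(root.findtext("camera/px"))            # focal ratio along u
py = float(root.findtext("camera/py"))            # focal ratio along v
max_features = int(root.findtext("klt/max_features"))
```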
- loadModel(self, modelFile: str, verbose: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) None ¶
Load a 3D model from the file in parameter. This file must either be a vrml file (.wrl) or a CAO file (.cao). CAO format is described in the loadCAOModel() method.
- reInitModel(*args, **kwargs)¶
Overloaded function.
reInitModel(self: visp._visp.mbt.MbEdgeKltTracker, I: visp._visp.core.ImageGray, cad_name: str, cMo: visp._visp.core.HomogeneousMatrix, verbose: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None
Re-initialize the model used by the tracker.
- Parameters:
- I
The image containing the object to initialize.
- cad_name
Path to the file containing the 3D model description.
- cMo
The new vpHomogeneousMatrix between the camera and the new model
- verbose
verbose option to print additional information when loading CAO model files which include other CAO model files.
- T
optional transformation matrix (currently only for .cao) to transform 3D points expressed in the original object frame to the desired object frame.
reInitModel(self: visp._visp.mbt.MbKltTracker, I: visp._visp.core.ImageGray, cad_name: str, cMo: visp._visp.core.HomogeneousMatrix, verbose: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None
Re-initialize the model used by the tracker.
- Parameters:
- I
The image containing the object to initialize.
- cad_name
Path to the file containing the 3D model description.
- cMo
The new vpHomogeneousMatrix between the camera and the new model
- verbose
verbose option to print additional information when loading CAO model files which include other CAO model files.
- T
optional transformation matrix (currently only for .cao) to transform 3D points expressed in the original object frame to the desired object frame.
reInitModel(self: visp._visp.mbt.MbEdgeTracker, I: visp._visp.core.ImageGray, cad_name: str, cMo: visp._visp.core.HomogeneousMatrix, verbose: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None
Re-initialize the model used by the tracker.
- Parameters:
- I
The image containing the object to initialize.
- cad_name
Path to the file containing the 3D model description.
- cMo
The new vpHomogeneousMatrix between the camera and the new model
- verbose
verbose option to print additional information when loading CAO model files which include other CAO model files.
- T
optional transformation matrix (currently only for .cao) to transform 3D points expressed in the original object frame to the desired object frame.
- resetTracker(*args, **kwargs)¶
Overloaded function.
resetTracker(self: visp._visp.mbt.MbEdgeKltTracker) -> None
Reset the tracker. The model is removed and the pose is set to identity. The tracker needs to be initialized with a new model and a new pose.
resetTracker(self: visp._visp.mbt.MbKltTracker) -> None
Reset the tracker. The model is removed and the pose is set to identity. The tracker needs to be initialized with a new model and a new pose.
resetTracker(self: visp._visp.mbt.MbEdgeTracker) -> None
Reset the tracker. The model is removed and the pose is set to identity. The tracker needs to be initialized with a new model and a new pose.
- setAngleAppear(self, a: float) None ¶
Set the angle used to test polygons appearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value lower than this parameter, the polygon is considered as appearing. The polygon will then be tracked.
- setAngleDisappear(self, a: float) None ¶
Set the angle used to test polygons disappearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value greater than this parameter, the polygon is considered as disappearing. The tracking of the polygon will then be stopped.
- setCameraParameters(*args, **kwargs)¶
Overloaded function.
setCameraParameters(self: visp._visp.mbt.MbEdgeKltTracker, cam: visp._visp.core.CameraParameters) -> None
Set the camera parameters
- Parameters:
- cam
the new camera parameters
setCameraParameters(self: visp._visp.mbt.MbKltTracker, cam: visp._visp.core.CameraParameters) -> None
Set the camera parameters.
- Parameters:
- cam
the new camera parameters.
setCameraParameters(self: visp._visp.mbt.MbTracker, cam: visp._visp.core.CameraParameters) -> None
Set the camera parameters.
- Parameters:
- cam
The new camera parameters.
setCameraParameters(self: visp._visp.mbt.MbEdgeTracker, cam: visp._visp.core.CameraParameters) -> None
Set the camera parameters.
- Parameters:
- cam
The new camera parameters.
setCameraParameters(self: visp._visp.mbt.MbTracker, cam: visp._visp.core.CameraParameters) -> None
Set the camera parameters.
- Parameters:
- cam
The new camera parameters.
- setClipping(*args, **kwargs)¶
Overloaded function.
setClipping(self: visp._visp.mbt.MbEdgeKltTracker, flags: int) -> None
Specify which clipping to use.
Note
See vpMbtPolygonClipping
- Parameters:
- flags
New clipping flags.
setClipping(self: visp._visp.mbt.MbEdgeTracker, flags: int) -> None
Specify which clipping to use.
Note
See vpMbtPolygonClipping
- Parameters:
- flags
New clipping flags.
setClipping(self: visp._visp.mbt.MbTracker, flags: int) -> None
Specify which clipping to use.
Note
See vpMbtPolygonClipping
- Parameters:
- flags
New clipping flags.
- setCovarianceComputation(self, flag: bool) None ¶
Set if the covariance matrix has to be computed.
Note
See getCovarianceMatrix()
- setDisplayFeatures(self, displayF: bool) None ¶
Enable the display of the features. By features, we mean the moving edges (ME) and the KLT points if used.
Note that if present, the moving edges can be displayed with different colors:
If green : The ME is a good point.
If blue : The ME is removed because of a contrast problem during the tracking phase.
If purple : The ME is removed because of a threshold problem during the tracking phase.
If red : The ME is removed because it is rejected by the robust approach in the virtual visual servoing scheme.
- setEstimatedDoF(self, v: visp._visp.core.ColVector) None ¶
Set a 6-dim column vector representing the degrees of freedom in the object frame that are estimated by the tracker. When all components are set to 1, all 6 dof are estimated.
Below we give the correspondence between the index of the vector and the considered dof:
v[0] = 1 if translation along X is estimated, 0 otherwise;
v[1] = 1 if translation along Y is estimated, 0 otherwise;
v[2] = 1 if translation along Z is estimated, 0 otherwise;
v[3] = 1 if rotation along X is estimated, 0 otherwise;
v[4] = 1 if rotation along Y is estimated, 0 otherwise;
v[5] = 1 if rotation along Z is estimated, 0 otherwise;
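The index mapping above can be illustrated with a plain Python list; with the bindings, these values would then be copied into a visp.core.ColVector of size 6 before calling setEstimatedDoF(). The chosen dof below are only an example.

```python
# Illustration of the 6-dim vector expected by setEstimatedDoF(): indices
# 0-2 select the translations along X, Y, Z and indices 3-5 the rotations.
# Example: estimate the three translations and the rotation around Z only.
dof = [0.0] * 6
dof[0] = dof[1] = dof[2] = 1.0  # estimate tx, ty, tz
dof[5] = 1.0                    # estimate the rotation around Z

estimated_count = sum(1 for v in dof if v == 1.0)
```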
- setFarClippingDistance(*args, **kwargs)¶
Overloaded function.
setFarClippingDistance(self: visp._visp.mbt.MbEdgeKltTracker, dist: float) -> None
Set the far distance for clipping.
- Parameters:
- dist
Far clipping value.
setFarClippingDistance(self: visp._visp.mbt.MbEdgeTracker, dist: float) -> None
Set the far distance for clipping.
- Parameters:
- dist
Far clipping value.
setFarClippingDistance(self: visp._visp.mbt.MbTracker, dist: float) -> None
Set the far distance for clipping.
- Parameters:
- dist
Far clipping value.
- setGoodMovingEdgesRatioThreshold(self, threshold: float) None ¶
Set the threshold value, between 0 and 1, on the ratio of good moving edges. It is used to decide whether the tracker has enough valid moving edges to compute a pose. 1 means that all moving edges must be good for the pose to be valid, while 0.1 means that 10% of good moving edges are enough to declare the pose valid.
Note
See getGoodMovingEdgesRatioThreshold()
- setInitialMu(self, mu: float) None ¶
Set the initial value of mu for the Levenberg-Marquardt optimization loop.
- setKltOpencv(self, t: visp._visp.klt.KltOpencv) None ¶
Set the new values of the KLT tracker.
- Parameters:
- t: visp._visp.klt.KltOpencv¶
Klt tracker containing the new values.
- setKltThresholdAcceptation(self, th: float) None ¶
Set the threshold for the acceptation of a point.
- setLod(self: visp._visp.mbt.MbTracker, useLod: bool, name: str = "") None ¶
Set the flag to consider if the level of detail (LOD) is used.
Note
See setMinLineLengthThresh() , setMinPolygonAreaThresh()
- Parameters:
- useLod
true if the level of detail must be used, false otherwise. When true, two parameters can be set, see setMinLineLengthThresh() and setMinPolygonAreaThresh() .
- name
name of the face we want to modify the LOD parameter.
- setMinLineLengthThresh(self: visp._visp.mbt.MbTracker, minLineLengthThresh: float, name: str = "") None ¶
Set the threshold for the minimum line length to be considered as visible in the LOD case.
Note
See setLod() , setMinPolygonAreaThresh()
- Parameters:
- minLineLengthThresh
threshold for the minimum line length in pixel.
- name
name of the face we want to modify the LOD threshold.
- setMinPolygonAreaThresh(self: visp._visp.mbt.MbTracker, minPolygonAreaThresh: float, name: str = "") None ¶
Set the minimum polygon area to be considered as visible in the LOD case.
Note
See setLod() , setMinLineLengthThresh()
- Parameters:
- minPolygonAreaThresh
threshold for the minimum polygon area in pixel.
- name
name of the face we want to modify the LOD threshold.
- setMovingEdge(self, me: visp._visp.me.Me) None ¶
Set the moving edge parameters.
- setNearClippingDistance(*args, **kwargs)¶
Overloaded function.
setNearClippingDistance(self: visp._visp.mbt.MbEdgeKltTracker, dist: float) -> None
Set the near distance for clipping.
- Parameters:
- dist
Near clipping value.
setNearClippingDistance(self: visp._visp.mbt.MbEdgeTracker, dist: float) -> None
Set the near distance for clipping.
- Parameters:
- dist
Near clipping value.
setNearClippingDistance(self: visp._visp.mbt.MbTracker, dist: float) -> None
Set the near distance for clipping.
- Parameters:
- dist
Near clipping value.
- setOgreShowConfigDialog(self, showConfigDialog: bool) None ¶
Enable/Disable the appearance of Ogre config dialog on startup.
Warning
This method has only effect when Ogre is used and Ogre visibility test is enabled using setOgreVisibilityTest() with true parameter.
- setOgreVisibilityTest(*args, **kwargs)¶
Overloaded function.
setOgreVisibilityTest(self: visp._visp.mbt.MbEdgeKltTracker, v: bool) -> None
Use Ogre3D for visibility tests
Warning
This function has to be called before the initialization of the tracker.
- Parameters:
- v
True to use it, False otherwise
setOgreVisibilityTest(self: visp._visp.mbt.MbKltTracker, v: bool) -> None
Use Ogre3D for visibility tests
Warning
This function has to be called before the initialization of the tracker.
- Parameters:
- v
True to use it, False otherwise
setOgreVisibilityTest(self: visp._visp.mbt.MbTracker, v: bool) -> None
Use Ogre3D for visibility tests
Warning
This function has to be called before the initialization of the tracker.
- Parameters:
- v
True to use it, False otherwise
setOgreVisibilityTest(self: visp._visp.mbt.MbEdgeTracker, v: bool) -> None
Use Ogre3D for visibility tests
Warning
This function has to be called before the initialization of the tracker.
- Parameters:
- v
True to use it, False otherwise
setOgreVisibilityTest(self: visp._visp.mbt.MbTracker, v: bool) -> None
Use Ogre3D for visibility tests
Warning
This function has to be called before the initialization of the tracker.
- Parameters:
- v
True to use it, False otherwise
- setOptimizationMethod(self, opt: visp._visp.mbt.MbTracker.MbtOptimizationMethod) None ¶
- setPose(*args, **kwargs)¶
Overloaded function.
setPose(self: visp._visp.mbt.MbEdgeKltTracker, I: visp._visp.core.ImageGray, cdMo: visp._visp.core.HomogeneousMatrix) -> None
Set the pose to be used in entry (as guess) of the next call to the track() function. This pose will be just used once.
Warning
This functionality is not available when tracking cylinders.
- Parameters:
- I
grayscale image corresponding to the desired pose.
- cdMo
Pose to affect.
setPose(self: visp._visp.mbt.MbEdgeKltTracker, I_color: visp._visp.core.ImageRGBa, cdMo: visp._visp.core.HomogeneousMatrix) -> None
Set the pose to be used in entry (as guess) of the next call to the track() function. This pose will be just used once.
Warning
This functionality is not available when tracking cylinders.
- Parameters:
- I_color
image corresponding to the desired pose.
- cdMo
Pose to affect.
setPose(self: visp._visp.mbt.MbKltTracker, I: visp._visp.core.ImageGray, cdMo: visp._visp.core.HomogeneousMatrix) -> None
Set the pose to be used in entry (as guess) of the next call to the track() function. This pose will be just used once.
Warning
This functionality is not available when tracking cylinders.
- Parameters:
- I
grayscale image corresponding to the desired pose.
- cdMo
Pose to affect.
setPose(self: visp._visp.mbt.MbKltTracker, I_color: visp._visp.core.ImageRGBa, cdMo: visp._visp.core.HomogeneousMatrix) -> None
Set the pose to be used in entry (as guess) of the next call to the track() function. This pose will be just used once.
Warning
This functionality is not available when tracking cylinders.
- Parameters:
- I_color
color image corresponding to the desired pose.
- cdMo
Pose to affect.
setPose(self: visp._visp.mbt.MbEdgeTracker, I: visp._visp.core.ImageGray, cdMo: visp._visp.core.HomogeneousMatrix) -> None
Set the pose to be used in entry of the next call to the track() function. This pose will be just used once.
- Parameters:
- I
grayscale image corresponding to the desired pose.
- cdMo
Pose to affect.
setPose(self: visp._visp.mbt.MbEdgeTracker, I_color: visp._visp.core.ImageRGBa, cdMo: visp._visp.core.HomogeneousMatrix) -> None
Set the pose to be used in entry of the next call to the track() function. This pose will be just used once.
- Parameters:
- I_color
color image corresponding to the desired pose.
- cdMo
Pose to affect.
- setPoseSavingFilename(self, filename: str) None ¶
Set the filename used to save the initial pose computed using the initClick() method. It is also used to read a previous pose in the same method. If the file is not set, the initClick() method will create a .0.pos file in the root directory, that is, the directory of the file given to initClick() that contains the object frame coordinates.
- setProjectionErrorComputation(*args, **kwargs)¶
Overloaded function.
setProjectionErrorComputation(self: visp._visp.mbt.MbEdgeKltTracker, flag: bool) -> None
Set if the projection error criteria has to be computed.
- Parameters:
- flag
True if the projection error criteria has to be computed, false otherwise
setProjectionErrorComputation(self: visp._visp.mbt.MbKltTracker, flag: bool) -> None
Set if the projection error criteria has to be computed.
- Parameters:
- flag
True if the projection error criteria has to be computed, false otherwise
setProjectionErrorComputation(self: visp._visp.mbt.MbTracker, flag: bool) -> None
Set if the projection error criteria has to be computed. This criteria could be used to detect the quality of the tracking. It computes an angle between 0 and 90 degrees that is available with getProjectionError() . The closer the value is to 0, the better the tracking.
Note
See getProjectionError()
- Parameters:
- flag
True if the projection error criteria has to be computed, false otherwise.
- setProjectionErrorDisplay(self, display: bool) None ¶
Display or not gradient and model orientation when computing the projection error.
- setProjectionErrorDisplayArrowLength(self, length: int) None ¶
Arrow length used to display gradient and model orientation for projection error computation.
- setProjectionErrorDisplayArrowThickness(self, thickness: int) None ¶
Arrow thickness used to display gradient and model orientation for projection error computation.
- setProjectionErrorKernelSize(self, size: int) None ¶
Set kernel size used for projection error computation.
- setProjectionErrorMovingEdge(self, me: visp._visp.me.Me) None ¶
Set Moving-Edges parameters for projection error computation.
- Parameters:
- me: visp._visp.me.Me¶
Moving-Edges parameters.
- setScales(self, _scales: list[bool]) None ¶
Set the scales to use to realize the tracking. The vector of booleans activates or not the scales to use for the object tracking. The first element of the list corresponds to the tracking on the full image, the second element corresponds to the tracking on an image subsampled by two.
Multi-scale tracking makes it possible to track the object through larger motions. It requires the computation of a pyramid of images, but the total tracking can be faster than a tracking based only on the full scale. The pose is computed from the smallest image to the biggest. This may be dangerous if the object to track is small in the image, because the subsampled scale(s) will have only few points to compute the pose (it could result in a loss of precision).
Warning
This method must be used before the tracker has been initialized ( before the call of the loadConfigFile() or loadModel() methods).
Warning
At least one level must be activated.
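The scale vector described above can be sketched as a plain list of booleans; element i activates tracking on the image subsampled by 2**i. The particular combination below is only an example.

```python
# Sketch of the boolean scale vector passed to setScales(): element 0 is the
# full-resolution image, element 1 the image subsampled by 2, element 2 the
# image subsampled by 4, and so on. Here the full image and the /4 image
# are activated.
scales = [True, False, True]

# Subsampling factors of the activated pyramid levels.
subsample_factors = [2 ** i for i, used in enumerate(scales) if used]
```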
- setScanLineVisibilityTest(*args, **kwargs)¶
Overloaded function.
setScanLineVisibilityTest(self: visp._visp.mbt.MbEdgeKltTracker, v: bool) -> None
Use Scanline algorithm for visibility tests
- Parameters:
- v
True to use it, False otherwise
setScanLineVisibilityTest(self: visp._visp.mbt.MbKltTracker, v: bool) -> None
Use Scanline algorithm for visibility tests
- Parameters:
- v
True to use it, False otherwise
setScanLineVisibilityTest(self: visp._visp.mbt.MbTracker, v: bool) -> None
setScanLineVisibilityTest(self: visp._visp.mbt.MbEdgeTracker, v: bool) -> None
Use Scanline algorithm for visibility tests
- Parameters:
- v
True to use it, False otherwise
setScanLineVisibilityTest(self: visp._visp.mbt.MbTracker, v: bool) -> None
- setStopCriteriaEpsilon(self, eps: float) None ¶
Set the minimal error (previous / current estimation) to determine if there is convergence or not.
- setThresholdAcceptation(self, th: float) None ¶
Deprecated. Use setKltThresholdAcceptation() instead.
- setUseEdgeTracking(self, name: str, useEdgeTracking: bool) None ¶
Set if the polygons that have the given name have to be considered during the tracking phase.
- setUseKltTracking(self, name: str, useKltTracking: bool) None ¶
Set if the polygons that have the given name have to be considered during the tracking phase.
- testTracking(*args, **kwargs)¶
Overloaded function.
testTracking(self: visp._visp.mbt.MbEdgeKltTracker) -> None
Check if the tracking failed.
testTracking(self: visp._visp.mbt.MbKltTracker) -> None
Test the quality of the tracking. The tracking is supposed to fail if less than 10 points are tracked.
- track(*args, **kwargs)¶
Overloaded function.
track(self: visp._visp.mbt.MbEdgeKltTracker, I: visp._visp.core.ImageGray) -> None
Realize the tracking of the object in the image.
- Parameters:
- I
the input grayscale image.
track(self: visp._visp.mbt.MbEdgeKltTracker, I_color: visp._visp.core.ImageRGBa) -> None
Realize the tracking of the object in the image.
- Parameters:
- I_color
the input color image.
track(self: visp._visp.mbt.MbKltTracker, I: visp._visp.core.ImageGray) -> None
Realize the tracking of the object in the image
- Parameters:
- I
the input grayscale image
track(self: visp._visp.mbt.MbKltTracker, I_color: visp._visp.core.ImageRGBa) -> None
Realize the tracking of the object in the image
- Parameters:
- I_color
the input color image
track(self: visp._visp.mbt.MbEdgeTracker, I: visp._visp.core.ImageGray) -> None
Compute each state of the tracking procedure for all the feature sets.
If the tracking is considered as failed an exception is thrown.
- Parameters:
- I
The image.
track(self: visp._visp.mbt.MbEdgeTracker, I: visp._visp.core.ImageRGBa) -> None
Track the object in the given image
- Parameters:
- I
The current image.