MbKltTracker¶
- class MbKltTracker(self)¶
Bases:
MbTracker
Model based tracker using only KLT.
Warning
This class is deprecated for user usage. You should rather use the high level vpMbGenericTracker class.
Warning
This class is only available if OpenCV is installed and used.
The tutorial-tracking-mb-deprecated is a good starting point to use this class.
The tracker requires knowledge of the 3D model, which can be provided in a VRML or in a CAO file. The CAO format is described in loadCAOModel() . The tracker may also use an XML file to tune its behavior and an init file to compute the pose at the very first image.
The following code shows the simplest way to use the tracker.
#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/io/vpImageIo.h>
#include <visp3/mbt/vpMbKltTracker.h>

#ifdef ENABLE_VISP_NAMESPACE
using namespace VISP_NAMESPACE_NAME;
#endif

int main()
{
#if defined VISP_HAVE_OPENCV
  vpMbKltTracker tracker; // Create a model based tracker via KLT points.
  vpImage<unsigned char> I;
  vpHomogeneousMatrix cMo; // Pose computed using the tracker.
  vpCameraParameters cam;

  // Acquire an image
  vpImageIo::read(I, "cube.pgm");

#if defined(VISP_HAVE_X11)
  vpDisplayX display;
  display.init(I, 100, 100, "Mb Klt Tracker");
#endif

  tracker.loadConfigFile("cube.xml"); // Load the configuration of the tracker
  tracker.getCameraParameters(cam);   // Get the camera parameters used by the tracker (from the configuration file).
  tracker.loadModel("cube.cao");      // Load the 3d model in cao format. No 3rd party library is required
  // Initialise manually the pose by clicking on the image points associated to the 3d points contained in the
  // cube.init file.
  tracker.initClick(I, "cube.init");

  while (true) {
    // Acquire a new image
    vpDisplay::display(I);
    tracker.track(I);     // Track the object on this image
    tracker.getPose(cMo); // Get the pose
    tracker.display(I, cMo, cam, vpColor::darkRed, 1); // Display the model at the computed pose.
    vpDisplay::flush(I);
  }

  return 0;
#endif
}
The tracker can also be used without display, in that case the initial pose must be known (object always at the same initial pose for example) or computed using another method:
#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/io/vpImageIo.h>
#include <visp3/mbt/vpMbKltTracker.h>

#ifdef ENABLE_VISP_NAMESPACE
using namespace VISP_NAMESPACE_NAME;
#endif

int main()
{
#if defined VISP_HAVE_OPENCV
  vpMbKltTracker tracker; // Create a model based tracker via KLT points.
  vpImage<unsigned char> I;
  vpHomogeneousMatrix cMo; // Pose used in entry (has to be defined), then computed using the tracker.

  // Acquire an image
  vpImageIo::read(I, "cube.pgm"); // Example of acquisition

  tracker.loadConfigFile("cube.xml"); // Load the configuration of the tracker
  // Load the 3d model; to read a .wrl model Coin is required, if Coin is not installed a .cao file can be used.
  tracker.loadModel("cube.cao");
  tracker.initFromPose(I, cMo); // Initialize the tracker with the given pose.

  while (true) {
    // Acquire a new image
    tracker.track(I);     // Track the object on this image
    tracker.getPose(cMo); // Get the pose
  }

  return 0;
#endif
}
Finally, it can be used not to track an object but simply to display a model at a given pose:
#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/io/vpImageIo.h>
#include <visp3/mbt/vpMbKltTracker.h>

#ifdef ENABLE_VISP_NAMESPACE
using namespace VISP_NAMESPACE_NAME;
#endif

int main()
{
#if defined VISP_HAVE_OPENCV
  vpMbKltTracker tracker; // Create a model based tracker via KLT points.
  vpImage<unsigned char> I;
  vpHomogeneousMatrix cMo; // Pose used to display the model.
  vpCameraParameters cam;

  // Acquire an image
  vpImageIo::read(I, "cube.pgm");

#if defined(VISP_HAVE_X11)
  vpDisplayX display;
  display.init(I, 100, 100, "Mb Klt Tracker");
#endif

  tracker.loadConfigFile("cube.xml"); // Load the configuration of the tracker
  tracker.getCameraParameters(cam);   // Get the camera parameters used by the tracker (from the configuration file).
  // Load the 3d model; to read a .wrl model Coin is required, if Coin is not installed a .cao file can be used.
  tracker.loadModel("cube.cao");

  while (true) {
    // Acquire a new image
    // Get the pose using any method
    vpDisplay::display(I);
    tracker.display(I, cMo, cam, vpColor::darkRed, 1, true); // Display the model at the computed pose.
    vpDisplay::flush(I);
  }

  return 0;
#endif
}
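For the Python bindings documented on this page, the first example translates roughly as follows. This is a minimal sketch, assuming the modules are importable as visp.core, visp.gui, visp.io and visp.mbt (the fully qualified visp._visp.* names appear in the signatures below) and that ImageIo.read and DisplayX mirror their C++ counterparts:

from visp.core import CameraParameters, HomogeneousMatrix, ImageGray
from visp.gui import DisplayX
from visp.io import ImageIo
from visp.mbt import MbKltTracker

tracker = MbKltTracker()            # model based tracker using KLT points
I = ImageGray()
cMo = HomogeneousMatrix()           # pose computed by the tracker
cam = CameraParameters()

ImageIo.read(I, "cube.pgm")         # acquire an image
display = DisplayX()                # a display is needed for initClick()
display.init(I, 100, 100, "Mb Klt Tracker")

tracker.loadConfigFile("cube.xml")  # load the tracker configuration
tracker.getCameraParameters(cam)    # camera parameters from the configuration file
tracker.loadModel("cube.cao")       # load the 3D model in cao format
tracker.initClick(I, "cube.init")   # initialise the pose by clicking on image points

while True:
    # acquire a new image here
    tracker.track(I)                # track the object on this image
    tracker.getPose(cMo)            # get the estimated pose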
Methods
Add a circle to the list of circles.
Overloaded function.
Return the error vector \((s-s^*)\) reached after the virtual visual servoing process used to estimate the pose.
Return the address of the circle feature list.
Return the address of the Klt feature list.
Return the address of the cylinder feature list.
Get the current list of KLT points.
Get the current list of KLT points and their id.
Get the erosion of the mask used on the Model faces.
Get the current number of klt points.
Get the klt tracker at the current state.
Get the current list of KLT points.
Get the threshold for the acceptation of a point.
Get the erosion of the mask used on the Model faces (deprecated).
Return a list of primitives parameters to display the model at a given pose and camera parameters.
Get the current number of klt points (deprecated).
Return the weights vector \(w_i\) computed by the robust scheme.
Get the threshold for the acceptation of a point (deprecated).
Overloaded function.
Re-initialize the model used by the tracker.
Reset the tracker.
Overloaded function.
Set the erosion of the mask used on the Model faces.
Set the new value of the klt tracker.
Set the threshold for the acceptation of a point.
Set the erosion of the mask used on the Model faces.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Set the threshold for the acceptation of a point (deprecated).
Set whether the polygons with the given name have to be considered during the tracking phase.
Test the quality of the tracking.
Overloaded function.
Inherited Methods
Compute projection error given an input image and camera pose, parameters.
Set the near distance for clipping.
Set the angle used to test polygons appearance.
Overloaded function.
Get the value of the gain used to compute the control law.
Overloaded function.
Return a reference to the faces structure.
Get the initial value of mu used in the Levenberg Marquardt optimization loop.
Get the covariance matrix.
Get the maximum number of iterations of the virtual visual servoing stage.
LEVENBERG_MARQUARDT_OPT
Get the error angle between the gradient direction of the model features projected at the resulting pose and their normal.
Set the maximum iteration of the virtual visual servoing stage.
Get the near distance for clipping.
Values:
Enable/Disable the appearance of Ogre config dialog on startup.
GAUSS_NEWTON_OPT
Save the pose in the given filename
Set the flag to consider if the level of detail (LOD) is used.
Set Moving-Edges parameters for projection error computation.
Get the optimization method used during the tracking.
Display or not gradient and model orientation when computing the projection error.
Get the far distance for clipping.
Return the angle used to test polygons appearance.
Return the angle used to test polygons disappearance.
Set the minimum polygon area to be considered as visible in the LOD case.
Set the minimal error (previous / current estimation) to determine if there is convergence or not.
Set the initial value of mu for the Levenberg Marquardt optimization loop.
Arrow length used to display gradient and model orientation for projection error computation.
Set if the covariance matrix has to be computed.
Overloaded function.
Set a 6-dim column vector representing the degrees of freedom in the object frame that are estimated by the tracker.
Set the far distance for clipping.
Get the camera parameters.
Get the clipping used and defined in vpPolygon3D::vpMbtPolygonClippingType.
Enable to display the features.
Get the list of polygons faces (a vpPolygon representing the projection of the face in the image and a list of face corners in 3D), with the possibility to order by distance to the camera or to use the visibility check to consider if the polygon face must be retrieved or not.
Specify which clipping to use.
Load a 3D model from the file in parameter.
Set the filename used to save the initial pose computed using the initClick() method.
Get the number of polygons (faces) representing the object to track.
Set the value of the gain used to compute the control law.
Set the threshold for the minimum line length to be considered as visible in the LOD case.
Set the angle used to test polygons disappearance.
Overloaded function.
Set kernel size used for projection error computation.
Arrow thickness used to display gradient and model orientation for projection error computation.
Get a 1x6 vpColVector representing the estimated degrees of freedom.
Operators
__annotations__
__doc__
__module__
Attributes
GAUSS_NEWTON_OPT
LEVENBERG_MARQUARDT_OPT
__annotations__
- class MbtOptimizationMethod(self, value: int)¶
Bases:
pybind11_object
Values:
GAUSS_NEWTON_OPT
LEVENBERG_MARQUARDT_OPT
- __init__(self)¶
- addCircle(self: visp._visp.mbt.MbKltTracker, P1: visp._visp.core.Point, P2: visp._visp.core.Point, P3: visp._visp.core.Point, r: float, name: str =) None ¶
Add a circle to the list of circles.
- Parameters:
- P1
Center of the circle.
- P2
Two points on the plane containing the circle. With the center of the circle we have 3 points defining the plane that contains the circle.
- P3
Two points on the plane containing the circle. With the center of the circle we have 3 points defining the plane that contains the circle.
- r
Radius of the circle.
- name
Name of the circle.
- computeCurrentProjectionError(self, I: visp._visp.core.ImageGray, _cMo: visp._visp.core.HomogeneousMatrix, _cam: visp._visp.core.CameraParameters) float ¶
Compute the projection error given an input image, a camera pose and camera parameters. This projection error uses locations sampled exactly where the model is projected using the given pose and intrinsic parameters. You may want to use setProjectionErrorComputation() and getProjectionError() instead, to get a projection error computed at the ME locations after a call to track() . It works similarly to the vpMbTracker::getProjectionError function: it returns the error angle between the gradient direction of the model features projected at the resulting pose and their normal. The error is expressed in degrees, between 0 and 90.
Note
See setProjectionErrorComputation
Note
See getProjectionError
- Parameters:
- I: visp._visp.core.ImageGray¶
Input grayscale image.
- _cMo: visp._visp.core.HomogeneousMatrix¶
Camera pose.
- _cam: visp._visp.core.CameraParameters¶
Camera parameters.
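A short usage sketch, continuing from the Python example at the top of this page; the 25 degree rejection threshold is an illustrative, application-specific assumption:

tracker.track(I)
tracker.getPose(cMo)
# Projection error in degrees, sampled on the projected model
err_deg = tracker.computeCurrentProjectionError(I, cMo, cam)
if err_deg > 25.0:  # hypothetical rejection threshold
    print("Tracking looks unreliable, projection error:", err_deg)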
- display(*args, **kwargs)¶
Overloaded function.
display(self: visp._visp.mbt.MbKltTracker, I: visp._visp.core.ImageGray, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, col: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None
Display the 3D model at a given position using the given camera parameters
- Parameters:
- I
The image.
- cMo
Pose used to project the 3D model into the image.
- cam
The camera parameters.
- col
The desired color.
- thickness
The thickness of the lines.
- displayFullModel
Boolean to say if all the model has to be displayed, even the faces that are not visible.
display(self: visp._visp.mbt.MbKltTracker, I: visp._visp.core.ImageRGBa, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, col: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None
Display the 3D model at a given position using the given camera parameters
- Parameters:
- I
The color image.
- cMo
Pose used to project the 3D model into the image.
- cam
The camera parameters.
- col
The desired color.
- thickness
The thickness of the lines.
- displayFullModel
Boolean to say if all the model has to be displayed, even the faces that are not visible.
- getCameraParameters(self, cam: visp._visp.core.CameraParameters) None ¶
Get the camera parameters.
- Parameters:
- cam: visp._visp.core.CameraParameters¶
copy of the camera parameters used by the tracker.
- getClipping(self) int ¶
Get the clipping used and defined in vpPolygon3D::vpMbtPolygonClippingType.
- Returns:
Clipping flags.
- getCovarianceMatrix(self) visp._visp.core.Matrix ¶
Get the covariance matrix. This matrix is only computed if setCovarianceComputation() is turned on.
Note
See setCovarianceComputation()
- getError(self) visp._visp.core.ColVector ¶
Return the error vector \((s-s^*)\) reached after the virtual visual servoing process used to estimate the pose.
The following example shows how to use this function to compute the norm of the residual and the norm of the residual normalized by the number of features that are tracked:
tracker.track(I);
std::cout << "Residual: " << sqrt( (tracker.getError()).sumSquare()) << std::endl;
std::cout << "Residual normalized: "
          << sqrt( (tracker.getError()).sumSquare())/tracker.getError().size() << std::endl;
Note
See getRobustWeights()
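The same computation in Python, as a sketch; it assumes vpColVector's sumSquare() and size() are exposed by the bindings and reuses the tracker from the example at the top of this page:

import math

tracker.track(I)
e = tracker.getError()  # error vector (s - s*)
print("Residual:", math.sqrt(e.sumSquare()))
print("Residual normalized:", math.sqrt(e.sumSquare()) / e.size())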
- getEstimatedDoF(self) visp._visp.core.ColVector ¶
Get a 1x6 vpColVector representing the estimated degrees of freedom.
vpColVector [0] = 1 if translation on X is estimated, 0 otherwise;
vpColVector [1] = 1 if translation on Y is estimated, 0 otherwise;
vpColVector [2] = 1 if translation on Z is estimated, 0 otherwise;
vpColVector [3] = 1 if rotation on X is estimated, 0 otherwise;
vpColVector [4] = 1 if rotation on Y is estimated, 0 otherwise;
vpColVector [5] = 1 if rotation on Z is estimated, 0 otherwise;
- Returns:
1x6 vpColVector representing the estimated degrees of freedom.
- getFaces(self) vpMbHiddenFaces<vpMbtPolygon> ¶
Return a reference to the faces structure.
- getFarClippingDistance(self) float ¶
Get the far distance for clipping.
- Returns:
Far clipping value.
- getFeaturesCircle(self) list[visp._visp.mbt.MbtDistanceCircle] ¶
Return the address of the circle feature list.
- getFeaturesKlt(self) list[visp._visp.mbt.MbtDistanceKltPoints] ¶
Return the address of the Klt feature list.
- getFeaturesKltCylinder(self) list[visp._visp.mbt.MbtDistanceKltCylinder] ¶
Return the address of the cylinder feature list.
- getInitialMu(self) float ¶
Get the initial value of mu used in the Levenberg Marquardt optimization loop.
- Returns:
the initial mu value.
- getKltImagePoints(self) list[visp._visp.core.ImagePoint] ¶
Get the current list of KLT points.
- Returns:
the list of KLT points through vpKltOpencv .
- getKltImagePointsWithId(self) dict[int, visp._visp.core.ImagePoint] ¶
Get the current list of KLT points and their id.
- Returns:
the list of KLT points and their id through vpKltOpencv .
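A sketch iterating over the returned dictionary; it assumes ImagePoint exposes the C++ accessors get_i() and get_j():

# point_id -> ImagePoint, for the KLT points currently tracked
for point_id, ip in tracker.getKltImagePointsWithId().items():
    print("KLT point", point_id, "at", ip.get_i(), ip.get_j())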
- getKltMaskBorder(self) int ¶
Get the erosion of the mask used on the Model faces.
- Returns:
The erosion.
- getKltOpencv(self) visp._visp.klt.KltOpencv ¶
Get the klt tracker at the current state.
- Returns:
klt tracker.
- getKltPoints(self) list[cv::Point_<float>] ¶
Get the current list of KLT points.
- Returns:
the list of KLT points through vpKltOpencv .
- getKltThresholdAcceptation(self) float ¶
Get the threshold for the acceptation of a point.
- Returns:
threshold_outlier : Threshold for the weight below which a point is rejected.
- getLambda(self) float ¶
Get the value of the gain used to compute the control law.
- Returns:
the value for the gain.
- getMaskBorder(self) int ¶
Get the erosion of the mask used on the Model faces.
Warning
This method is deprecated. You should rather use getKltMaskBorder() .
- Returns:
The erosion.
- getMaxIter(self) int ¶
Get the maximum number of iterations of the virtual visual servoing stage.
- Returns:
the number of iteration
- getModelForDisplay(self, width: int, height: int, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, displayFullModel: bool = false) list[list[float]] ¶
Return a list of primitives parameters to display the model at a given pose and camera parameters.
Line parameters are: <primitive id (here 0 for line)> , <pt_start.i()> , <pt_start.j()> , <pt_end.i()> , <pt_end.j()>
Ellipse parameters are: <primitive id (here 1 for ellipse)> , <pt_center.i()> , <pt_center.j()> , <n_20> , <n_11> , <n_02> where <n_ij> are the second order centered moments of the ellipse normalized by its area (i.e., such that \(n_{ij} = \mu_{ij}/a\) where \(\mu_{ij}\) are the centered moments and a the area).
- Parameters:
- width: int¶
Image width.
- height: int¶
Image height.
- cMo: visp._visp.core.HomogeneousMatrix¶
Pose used to project the 3D model into the image.
- cam: visp._visp.core.CameraParameters¶
The camera parameters.
- displayFullModel: bool = false¶
If true, the line is displayed even if it is not visible.
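A sketch decoding the returned primitive lists according to the encoding described above (tracker, cMo and cam as in the Python example at the top of this page):

for prim in tracker.getModelForDisplay(640, 480, cMo, cam, False):
    if prim[0] == 0:    # line: start and end image coordinates
        i1, j1, i2, j2 = prim[1:5]
    elif prim[0] == 1:  # ellipse: center and normalized centered moments
        ic, jc, n20, n11, n02 = prim[1:6]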
- getNbKltPoints(self) int ¶
Get the current number of klt points.
Warning
This method is deprecated. You should rather use getKltNbPoints() .
- Returns:
the number of features
- getNbPolygon(self) int ¶
Get the number of polygons (faces) representing the object to track.
- Returns:
Number of polygons.
- getNearClippingDistance(self) float ¶
Get the near distance for clipping.
- Returns:
Near clipping value.
- getOptimizationMethod(self) visp._visp.mbt.MbTracker.MbtOptimizationMethod ¶
Get the optimization method used during the tracking. 0 = Gauss-Newton approach. 1 = Levenberg-Marquardt approach.
- Returns:
Optimization method.
- getPolygonFaces(self, orderPolygons: bool = true, useVisibility: bool = true, clipPolygon: bool = false) tuple[list[visp._visp.core.Polygon], list[list[visp._visp.core.Point]]] ¶
Get the list of polygons faces (a vpPolygon representing the projection of the face in the image and a list of face corners in 3D), with the possibility to order by distance to the camera or to use the visibility check to consider if the polygon face must be retrieved or not.
- Parameters:
- orderPolygons: bool = true¶
If true, the resulting list is ordered from the nearest polygon faces to the farthest.
- useVisibility: bool = true¶
If true, only visible faces will be retrieved.
- clipPolygon: bool = false¶
If true, the polygons will be clipped according to the clipping flags set in vpMbTracker .
- Returns:
A pair object containing the list of vpPolygon and the list of face corners.
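A usage sketch; the face corners are 3D points expressed in the object frame:

# Nearest-first, visible faces only, no clipping of the polygons
polygons, faces3d = tracker.getPolygonFaces(True, True, False)
for polygon, corners in zip(polygons, faces3d):
    print("face with", len(corners), "corners")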
- getPose(*args, **kwargs)¶
Overloaded function.
getPose(self: visp._visp.mbt.MbTracker, cMo: visp._visp.core.HomogeneousMatrix) -> None
Get the current pose between the object and the camera. cMo is the matrix which can be used to express coordinates from the object frame to camera frame.
- Parameters:
- cMo
the pose
getPose(self: visp._visp.mbt.MbTracker) -> visp._visp.core.HomogeneousMatrix
Get the current pose between the object and the camera. cMo is the matrix which can be used to express coordinates from the object frame to camera frame.
- Returns:
the current pose
- getProjectionError(self) float ¶
Get the error angle between the gradient direction of the model features projected at the resulting pose and their normal. The error is expressed in degree between 0 and 90. This value is computed if setProjectionErrorComputation() is turned on.
Note
See setProjectionErrorComputation()
- Returns:
the value for the error.
- getRobustWeights(self) visp._visp.core.ColVector ¶
Return the weights vector \(w_i\) computed by the robust scheme.
The following example shows how to use this function to compute the norm of the weighted residual and the norm of the weighted residual normalized by the sum of the weights associated to the features that are tracked:
tracker.track(I);
vpColVector w = tracker.getRobustWeights();
vpColVector e = tracker.getError();
vpColVector we(w.size());
for(unsigned int i=0; i<w.size(); i++)
  we[i] = w[i]*e[i];

std::cout << "Weighted residual: " << sqrt( (we).sumSquare() ) << std::endl;
std::cout << "Weighted residual normalized: " << sqrt( (we).sumSquare() ) / w.sum() << std::endl;
Note
See getError()
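The same computation in Python, as a sketch; it assumes vpColVector supports element indexing and exposes size(), sum() and sumSquare() through the bindings:

import math

tracker.track(I)
w = tracker.getRobustWeights()
e = tracker.getError()
we = [w[i] * e[i] for i in range(w.size())]
weighted = math.sqrt(sum(v * v for v in we))
print("Weighted residual:", weighted)
print("Weighted residual normalized:", weighted / w.sum())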
- getThresholdAcceptation(self) float ¶
Get the threshold for the acceptation of a point.
Warning
This method is deprecated. You should rather use getKltThresholdAcceptation() .
- Returns:
threshold_outlier : Threshold for the weight below which a point is rejected.
- initClick(*args, **kwargs)¶
Overloaded function.
initClick(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, initFile: str, displayHelp: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None
initClick(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, initFile: str, displayHelp: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None
initClick(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, points3D_list: list[visp._visp.core.Point], displayFile: str = ) -> None
initClick(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, points3D_list: list[visp._visp.core.Point], displayFile: str = ) -> None
- initFromPoints(*args, **kwargs)¶
Overloaded function.
initFromPoints(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, initFile: str) -> None
Initialise the tracker by reading 3D point coordinates and the corresponding 2D image point coordinates from a file. Comments starting with the # character are allowed. 3D point coordinates are expressed in meters in the object frame with X, Y and Z values. 2D point coordinates are expressed in pixel coordinates, with first the line and then the column of the pixel in the image. The structure of this file is the following:
# 3D point coordinates
4                 # Number of 3D points in the file (minimum is four)
0.01 0.01 0.01    #  \
...               #  | 3D coordinates in meters in the object frame
0.01 -0.01 -0.01  #  /
# corresponding 2D point coordinates
4                 # Number of image points in the file (has to be the same as the number of 3D points)
100 200           #  \
...               #  | 2D coordinates in pixel in the image
50 10             #  /
- Parameters:
- I
Input grayscale image
- initFile
Path to the file containing all the points.
initFromPoints(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, initFile: str) -> None
Initialise the tracker by reading 3D point coordinates and the corresponding 2D image point coordinates from a file. Comments starting with the # character are allowed. 3D point coordinates are expressed in meters in the object frame with X, Y and Z values. 2D point coordinates are expressed in pixel coordinates, with first the line and then the column of the pixel in the image. The structure of this file is the following:
# 3D point coordinates
4                 # Number of 3D points in the file (minimum is four)
0.01 0.01 0.01    #  \
...               #  | 3D coordinates in meters in the object frame
0.01 -0.01 -0.01  #  /
# corresponding 2D point coordinates
4                 # Number of image points in the file (has to be the same as the number of 3D points)
100 200           #  \
...               #  | 2D coordinates in pixel in the image
50 10             #  /
- Parameters:
- I_color
Input color image
- initFile
Path to the file containing all the points.
initFromPoints(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, points2D_list: list[visp._visp.core.ImagePoint], points3D_list: list[visp._visp.core.Point]) -> None
Initialise the tracking with the list of image points (points2D_list) and the list of corresponding 3D points (object frame) (points3D_list).
- Parameters:
- I
Input grayscale image
- points2D_list
List of image points.
- points3D_list
List of 3D points (object frame).
initFromPoints(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, points2D_list: list[visp._visp.core.ImagePoint], points3D_list: list[visp._visp.core.Point]) -> None
Initialise the tracking with the list of image points (points2D_list) and the list of corresponding 3D points (object frame) (points3D_list).
- Parameters:
- I_color
Input color image
- points2D_list
List of image points.
- points3D_list
List of 3D points (object frame).
- initFromPose(*args, **kwargs)¶
Overloaded function.
initFromPose(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, initFile: str) -> None
Initialise the tracking from a pose given in vpPoseVector format and read from the file initFile. The structure of this file is (without the comments):
// The six values of the pose vector
0.0000 //  \
0.0000 //  |
1.0000 //  | Example of values for the pose vector where Z = 1 meter
0.0000 //  |
0.0000 //  |
0.0000 //  /
Where the first three lines refer to the translation and the last three to the rotation in thetaU parametrisation (see vpThetaUVector ).
- Parameters:
- I
Input grayscale image
- initFile
Path to the file containing the pose.
initFromPose(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, initFile: str) -> None
Initialise the tracking from a pose given in vpPoseVector format and read from the file initFile. The structure of this file is (without the comments):
// The six values of the pose vector
0.0000 //  \
0.0000 //  |
1.0000 //  | Example of values for the pose vector where Z = 1 meter
0.0000 //  |
0.0000 //  |
0.0000 //  /
Where the first three lines refer to the translation and the last three to the rotation in thetaU parametrisation (see vpThetaUVector ).
- Parameters:
- I_color
Input color image
- initFile
Path to the file containing the pose.
initFromPose(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, cMo: visp._visp.core.HomogeneousMatrix) -> None
Initialise the tracking thanks to the pose.
- Parameters:
- I
Input grayscale image
- cMo
Pose matrix.
initFromPose(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, cMo: visp._visp.core.HomogeneousMatrix) -> None
Initialise the tracking thanks to the pose.
- Parameters:
- I_color
Input color image
- cMo
Pose matrix.
initFromPose(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, cPo: visp._visp.core.PoseVector) -> None
Initialise the tracking thanks to the pose vector.
- Parameters:
- I
Input grayscale image
- cPo
Pose vector.
initFromPose(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, cPo: visp._visp.core.PoseVector) -> None
Initialise the tracking thanks to the pose vector.
- Parameters:
- I_color
Input color image
- cPo
Pose vector.
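A sketch of the pose-based initialization in Python; it assumes PoseVector exposes the six-value C++ constructor (translation in meters, rotation in thetaU):

from visp.core import PoseVector

# Object placed 1 meter in front of the camera, no rotation
cPo = PoseVector(0.0, 0.0, 1.0, 0.0, 0.0, 0.0)
tracker.initFromPose(I, cPo)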
- loadConfigFile(*args, **kwargs)¶
Overloaded function.
loadConfigFile(self: visp._visp.mbt.MbKltTracker, configFile: str, verbose: bool = true) -> None
Load the xml configuration file. From the configuration file initialize the parameters corresponding to the objects: KLT, camera.
The XML configuration file has the following form:
<?xml version="1.0"?> <conf> <camera> <width>640</width> <height>480</height> <u0>320</u0> <v0>240</v0> <px>686.24</px> <py>686.24</py> </camera> <face> <angle_appear>65</angle_appear> <angle_disappear>85</angle_disappear> <near_clipping>0.01</near_clipping> <far_clipping>0.90</far_clipping> <fov_clipping>1</fov_clipping> </face> <klt> <mask_border>10</mask_border> <max_features>10000</max_features> <window_size>5</window_size> <quality>0.02</quality> <min_distance>10</min_distance> <harris>0.02</harris> <size_block>3</size_block> <pyramid_lvl>3</pyramid_lvl> </klt> </conf>
- Parameters:
- configFile
full name of the xml file.
- verbose
Set true to activate the verbose mode, false otherwise.
loadConfigFile(self: visp._visp.mbt.MbTracker, configFile: str, verbose: bool = true) -> None
Load a config file to parameterise the behavior of the tracker.
Virtual method to adapt to each tracker.
- Parameters:
- configFile
An xml config file to parse.
- verbose
verbose flag.
- loadModel(self, modelFile: str, verbose: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) None ¶
Load a 3D model from the file in parameter. This file must either be a vrml file (.wrl) or a CAO file (.cao). CAO format is described in the loadCAOModel() method.
- reInitModel(self, I: visp._visp.core.ImageGray, cad_name: str, cMo: visp._visp.core.HomogeneousMatrix, verbose: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) None ¶
Re-initialize the model used by the tracker.
- Parameters:
- I: visp._visp.core.ImageGray¶
The image containing the object to initialize.
- cad_name: str¶
Path to the file containing the 3D model description.
- cMo: visp._visp.core.HomogeneousMatrix¶
The new vpHomogeneousMatrix between the camera and the new model
- verbose: bool = false¶
verbose option to print additional information when loading CAO model files which include other CAO model files.
- T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()¶
optional transformation matrix (currently only for .cao) to transform 3D points expressed in the original object frame to the desired object frame.
- resetTracker(self) None ¶
Reset the tracker. The model is removed and the pose is set to identity. The tracker needs to be initialized with a new model and a new pose.
- setAngleAppear(self, a: float) None ¶
Set the angle used to test polygons appearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value lower than this parameter, the polygon is considered as appearing. The polygon will then be tracked.
- setAngleDisappear(self, a: float) None ¶
Set the angle used to test polygons disappearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value greater than this parameter, the polygon is considered as disappearing. The tracking of the polygon will then be stopped.
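A sketch setting both visibility angles. In the C++ API these angles are passed in radians (e.g. vpMath::rad(65)) while the XML configuration file uses degrees; the 65/75 values are illustrative:

import math

tracker.setAngleAppear(math.radians(65))     # faces appear below 65 degrees
tracker.setAngleDisappear(math.radians(75))  # faces disappear above 75 degrees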
- setCameraParameters(*args, **kwargs)¶
Overloaded function.
setCameraParameters(self: visp._visp.mbt.MbKltTracker, cam: visp._visp.core.CameraParameters) -> None
Set the camera parameters.
- Parameters:
- cam
the new camera parameters.
setCameraParameters(self: visp._visp.mbt.MbTracker, cam: visp._visp.core.CameraParameters) -> None
Set the camera parameters.
- Parameters:
- cam
The new camera parameters.
- setCovarianceComputation(self, flag: bool) None ¶
Set if the covariance matrix has to be computed.
Note
See getCovarianceMatrix()
- setDisplayFeatures(self, displayF: bool) None ¶
Enable to display the features. By features, we mean the moving edges (ME) and the KLT points if used.
Note that if present, the moving edges can be displayed with different colors:
If green : The ME is a good point.
If blue : The ME is removed because of a contrast problem during the tracking phase.
If purple : The ME is removed because of a threshold problem during the tracking phase.
If red : The ME is removed because it is rejected by the robust approach in the virtual visual servoing scheme.
- setEstimatedDoF(self, v: visp._visp.core.ColVector) None ¶
Set a 6-dim column vector representing the degrees of freedom in the object frame that are estimated by the tracker. When set to 1, all the 6 dof are estimated.
Below we give the correspondence between the index of the vector and the considered dof:
v[0] = 1 if translation along X is estimated, 0 otherwise;
v[1] = 1 if translation along Y is estimated, 0 otherwise;
v[2] = 1 if translation along Z is estimated, 0 otherwise;
v[3] = 1 if rotation along X is estimated, 0 otherwise;
v[4] = 1 if rotation along Y is estimated, 0 otherwise;
v[5] = 1 if rotation along Z is estimated, 0 otherwise;
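A sketch that estimates only the three translations, freezing the rotations; it assumes the vpColVector(n, value) constructor and item assignment are exposed by the bindings:

from visp.core import ColVector

dof = ColVector(6, 1.0)  # start with all 6 dof estimated
dof[3] = 0.0             # do not estimate rotation around X
dof[4] = 0.0             # do not estimate rotation around Y
dof[5] = 0.0             # do not estimate rotation around Z
tracker.setEstimatedDoF(dof)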
- setInitialMu(self, mu: float) None ¶
Set the initial value of mu for the Levenberg Marquardt optimization loop.
- setKltOpencv(self, t: visp._visp.klt.KltOpencv) None ¶
Set the new value of the klt tracker.
- Parameters:
- t: visp._visp.klt.KltOpencv¶
Klt tracker containing the new values.
- setKltThresholdAcceptation(self, th: float) None ¶
Set the threshold for the acceptation of a point.
- setLod(self: visp._visp.mbt.MbTracker, useLod: bool, name: str =) None ¶
Set the flag to consider if the level of detail (LOD) is used.
Note
See setMinLineLengthThresh() , setMinPolygonAreaThresh()
- Parameters:
- useLod
true if the level of detail must be used, false otherwise. When true, two parameters can be set, see setMinLineLengthThresh() and setMinPolygonAreaThresh() .
- name
name of the face we want to modify the LOD parameter.
- setMinLineLengthThresh(self: visp._visp.mbt.MbTracker, minLineLengthThresh: float, name: str =) None ¶
Set the threshold for the minimum line length to be considered as visible in the LOD case.
Note
See setLod() , setMinPolygonAreaThresh()
- Parameters:
- minLineLengthThresh
threshold for the minimum line length in pixel.
- name
name of the face we want to modify the LOD threshold.
- setMinPolygonAreaThresh(self: visp._visp.mbt.MbTracker, minPolygonAreaThresh: float, name: str =) None ¶
Set the minimum polygon area to be considered as visible in the LOD case.
Note
See setLod() , setMinLineLengthThresh()
- Parameters:
- minPolygonAreaThresh
threshold for the minimum polygon area in pixel.
- name
name of the face we want to modify the LOD threshold.
- setOgreShowConfigDialog(self, showConfigDialog: bool) None ¶
Enable/Disable the appearance of Ogre config dialog on startup.
Warning
This method has only effect when Ogre is used and Ogre visibility test is enabled using setOgreVisibilityTest() with true parameter.
- setOgreVisibilityTest(*args, **kwargs)¶
Overloaded function.
setOgreVisibilityTest(self: visp._visp.mbt.MbKltTracker, v: bool) -> None
Use Ogre3D for visibility tests
Warning
This function has to be called before the initialization of the tracker.
- Parameters:
- v
True to use it, False otherwise
setOgreVisibilityTest(self: visp._visp.mbt.MbTracker, v: bool) -> None
Use Ogre3D for visibility tests
Warning
This function has to be called before the initialization of the tracker.
- Parameters:
- v
True to use it, False otherwise
- setOptimizationMethod(self, opt: visp._visp.mbt.MbTracker.MbtOptimizationMethod) None ¶
- setPose(*args, **kwargs)¶
Overloaded function.
setPose(self: visp._visp.mbt.MbKltTracker, I: visp._visp.core.ImageGray, cdMo: visp._visp.core.HomogeneousMatrix) -> None
Set the pose to be used in entry (as a guess) of the next call to the track() function. This pose will be used just once.
Warning
This functionality is not available when tracking cylinders.
- Parameters:
- I
grayscale image corresponding to the desired pose.
- cdMo
Pose to set.
setPose(self: visp._visp.mbt.MbKltTracker, I_color: visp._visp.core.ImageRGBa, cdMo: visp._visp.core.HomogeneousMatrix) -> None
Set the pose to be used in entry (as a guess) of the next call to the track() function. This pose will be used just once.
Warning
This functionality is not available when tracking cylinders.
- Parameters:
- I_color
color image corresponding to the desired pose.
- cdMo
Pose to set.
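A sketch of re-localizing the tracker with an externally computed pose; compute_pose_externally is a hypothetical helper standing in for any pose estimation method:

cdMo = compute_pose_externally(I)  # hypothetical: e.g. keypoint-based pose estimation
tracker.setPose(I, cdMo)           # used once as the guess for the next track()
tracker.track(I)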
- setPoseSavingFilename(self, filename: str) None ¶
Set the filename used to save the initial pose computed using the initClick() method. It is also used to read a previous pose in the same method. If the filename is not set, the initClick() method will create a .0.pos file in the root directory, that is, the directory of the file given to initClick() that contains the coordinates in the object frame.
- setProjectionErrorComputation(*args, **kwargs)¶
Overloaded function.
setProjectionErrorComputation(self: visp._visp.mbt.MbKltTracker, flag: bool) -> None
Set if the projection error criteria has to be computed.
- Parameters:
- flag
True if the projection error criteria has to be computed, false otherwise
setProjectionErrorComputation(self: visp._visp.mbt.MbTracker, flag: bool) -> None
Set if the projection error criteria has to be computed. This criteria could be used to detect the quality of the tracking. It computes an angle between 0 and 90 degrees that is available with getProjectionError() . The closer the value is to 0, the better the tracking.
Note
See getProjectionError()
- Parameters:
- flag
True if the projection error criteria has to be computed, false otherwise.
- setProjectionErrorDisplay(self, display: bool) None ¶
Display or not gradient and model orientation when computing the projection error.
- setProjectionErrorDisplayArrowLength(self, length: int) None ¶
Arrow length used to display gradient and model orientation for projection error computation.
- setProjectionErrorDisplayArrowThickness(self, thickness: int) None ¶
Arrow thickness used to display gradient and model orientation for projection error computation.
- setProjectionErrorKernelSize(self, size: int) None ¶
Set kernel size used for projection error computation.
- setProjectionErrorMovingEdge(self, me: visp._visp.me.Me) None ¶
Set Moving-Edges parameters for projection error computation.
- Parameters:
- me: visp._visp.me.Me¶
Moving-Edges parameters.
- setScanLineVisibilityTest(*args, **kwargs)¶
Overloaded function.
setScanLineVisibilityTest(self: visp._visp.mbt.MbKltTracker, v: bool) -> None
Use Scanline algorithm for visibility tests
- Parameters:
- v
True to use it, False otherwise
setScanLineVisibilityTest(self: visp._visp.mbt.MbTracker, v: bool) -> None
- setStopCriteriaEpsilon(self, eps: float) None ¶
Set the minimal error (previous / current estimation) to determine if there is convergence or not.
- setThresholdAcceptation(self, th: float) None ¶
Set the threshold for the acceptation of a point.
Warning
This method is deprecated. You should rather use setKltThresholdAcceptation() .
- setUseKltTracking(self, name: str, useKltTracking: bool) None ¶
Set whether the polygons with the given name have to be considered during the tracking phase.
- testTracking(self) None ¶
Test the quality of the tracking. The tracking is supposed to fail if less than 10 points are tracked.
- track(*args, **kwargs)¶
Overloaded function.
track(self: visp._visp.mbt.MbKltTracker, I: visp._visp.core.ImageGray) -> None
Perform the tracking of the object in the image
- Parameters:
- I
the input grayscale image
track(self: visp._visp.mbt.MbKltTracker, I_color: visp._visp.core.ImageRGBa) -> None
Perform the tracking of the object in the image
- Parameters:
- I_color
the input color image