MbEdgeTracker¶
- class MbEdgeTracker(self)¶
Bases:
MbTracker
Perform the complete tracking of an object using its CAD model.
Warning
This class is deprecated for end users. Use the high-level vpMbGenericTracker class instead.
This class allows tracking an object or a scene given its 3D model. A video demonstration can be found on YouTube: https://www.youtube.com/watch?v=UK10KMMJFCI . The tutorial-tracking-mb-deprecated tutorial is also a good starting point for using this class.
The tracker requires knowledge of the 3D model, which can be provided in a vrml or in a cao file. The cao format is described in loadCAOModel() . The tracker may also use an xml file to tune its behavior and an init file to compute the pose on the very first image.
The following code shows the simplest way to use the tracker.
#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/io/vpImageIo.h>
#include <visp3/mbt/vpMbEdgeTracker.h>

int main()
{
  vpMbEdgeTracker tracker; // Create a model based tracker.
  vpImage<unsigned char> I;
  vpHomogeneousMatrix cMo; // Pose computed using the tracker.
  vpCameraParameters cam;

  // Acquire an image
  vpImageIo::read(I, "cube.pgm");

#if defined(VISP_HAVE_X11)
  vpDisplayX display;
  display.init(I, 100, 100, "Mb Edge Tracker");
#endif

  tracker.loadConfigFile("cube.xml"); // Load the configuration of the tracker
  tracker.getCameraParameters(cam);   // Get the camera parameters used by the tracker (from the configuration file).
  tracker.loadModel("cube.cao");      // Load the 3d model in cao format. No 3rd party library is required
  // Initialise manually the pose by clicking on the image points associated to the 3d points contained in the
  // cube.init file.
  tracker.initClick(I, "cube.init");

  while (true) {
    // Acquire a new image
    vpDisplay::display(I);
    tracker.track(I);     // Track the object on this image
    tracker.getPose(cMo); // Get the pose
    tracker.display(I, cMo, cam, vpColor::darkRed, 1); // Display the model at the computed pose.
    vpDisplay::flush(I);
  }

  return 0;
}
For applications with large inter-image displacements, multi-scale tracking is also possible, by setting the number of scales used and activating (or not) each of them using a vector of booleans, as presented in the following code:
...
vpHomogeneousMatrix cMo; // Pose computed using the tracker.
vpCameraParameters cam;

std::vector<bool> scales;  // Three scales used
scales.push_back(true);    // First scale : active
scales.push_back(false);   // Second scale (/2) : not active
scales.push_back(true);    // Third scale (/4) : active
tracker.setScales(scales); // Set active scales for multi-scale tracking

tracker.loadConfigFile("cube.xml"); // Load the configuration of the tracker
tracker.getCameraParameters(cam);   // Get the camera parameters used by the tracker (from the configuration file).
...
The tracker can also be used without a display; in that case, the initial pose must be known (for example, if the object is always at the same initial pose) or computed using another method:
#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/io/vpImageIo.h>
#include <visp3/mbt/vpMbEdgeTracker.h>

#ifdef ENABLE_VISP_NAMESPACE
using namespace VISP_NAMESPACE_NAME;
#endif

int main()
{
  vpMbEdgeTracker tracker; // Create a model based tracker.
  vpImage<unsigned char> I;
  vpHomogeneousMatrix cMo; // Pose used in entry (has to be defined), then computed using the tracker.

  // Acquire an image
  vpImageIo::read(I, "cube.pgm"); // Example of acquisition

  tracker.loadConfigFile("cube.xml"); // Load the configuration of the tracker
  // Load the 3d model; to read a .wrl model Coin is required. If Coin is not installed, a .cao file can be used.
  tracker.loadModel("cube.cao");
  tracker.initFromPose(I, cMo); // Initialize the tracker with the given pose.

  while (true) {
    // Acquire a new image
    tracker.track(I);     // Track the object on this image
    tracker.getPose(cMo); // Get the pose
  }

  return 0;
}
Finally, it can be used not to track an object but simply to display a model at a given pose:
#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/io/vpImageIo.h>
#include <visp3/mbt/vpMbEdgeTracker.h>

#ifdef ENABLE_VISP_NAMESPACE
using namespace VISP_NAMESPACE_NAME;
#endif

int main()
{
  vpMbEdgeTracker tracker; // Create a model based tracker.
  vpImage<unsigned char> I;
  vpHomogeneousMatrix cMo; // Pose used to display the model.
  vpCameraParameters cam;

  // Acquire an image
  vpImageIo::read(I, "cube.pgm");

#if defined(VISP_HAVE_X11)
  vpDisplayX display;
  display.init(I, 100, 100, "Mb Edge Tracker");
#endif

  tracker.loadConfigFile("cube.xml"); // Load the configuration of the tracker
  tracker.getCameraParameters(cam);   // Get the camera parameters used by the tracker (from the configuration file).
  // Load the 3d model; to read a .wrl model Coin is required. If Coin is not installed, a .cao file can be used.
  tracker.loadModel("cube.cao");

  while (true) {
    // Acquire a new image
    // Get the pose using any method
    vpDisplay::display(I);
    tracker.display(I, cMo, cam, vpColor::darkRed, 1, true); // Display the model at the computed pose.
    vpDisplay::flush(I);
  }

  return 0;
}
Basic constructor
Methods
Basic constructor
Overloaded function.
Return the error vector \((s-s^*)\) reached after the virtual visual servoing process used to estimate the pose.
Note
See setGoodMovingEdgesRatioThreshold()
Get the list of the circles tracked for the specified level.
Get the list of the cylinders tracked for the specified level.
Get the list of the lines tracked for the specified level.
Return a list of primitive parameters to display the model at a given pose with given camera parameters.
Overloaded function.
Return the number of good points ( vpMeSite ) tracked.
Return the weights vector \(w_i\) computed by the robust scheme.
Return the scales levels used for the tracking.
Overloaded function.
Re-initialize the model used by the tracker.
Reset the tracker.
Overloaded function.
Overloaded function.
Overloaded function.
Set the threshold value, between 0 and 1, over the ratio of good moving edges.
Set the moving edge parameters.
Overloaded function.
Overloaded function.
Overloaded function.
Set the scales to use to perform the tracking.
Overloaded function.
Set whether the polygons that have the given name are considered during the tracking phase.
Overloaded function.
Inherited Methods
Arrow length used to display gradient and model orientation for projection error computation.
Get the near distance for clipping.
Enable the display of the features.
Set if the covariance matrix has to be computed.
Save the pose in the given filename
Set the value of the gain used to compute the control law.
Get the value of the gain used to compute the control law.
Get the far distance for clipping.
Enable/Disable the appearance of Ogre config dialog on startup.
Set the filename used to save the initial pose computed using the initClick() method.
Return the angle used to test polygons disappearance.
Overloaded function.
Set the minimal error (previous / current estimation) to determine if there is convergence or not.
Overloaded function.
Overloaded function.
LEVENBERG_MARQUARDT_OPT
Set the angle used to test polygons appearance.
Set Moving-Edges parameters for projection error computation.
Get the covariance matrix.
Set the initial value of mu for the Levenberg Marquardt optimization loop.
Set the angle used to test polygons disappearance.
Set a 6-dim column vector representing the degrees of freedom in the object frame that are estimated by the tracker.
Load a 3D model from the file in parameter.
Get the camera parameters.
Arrow thickness used to display gradient and model orientation for projection error computation.
Get the number of polygons (faces) representing the object to track.
Get the clipping used and defined in vpPolygon3D::vpMbtPolygonClippingType.
Compute projection error given an input image and camera pose, parameters.
Get the optimization method used during the tracking.
GAUSS_NEWTON_OPT
Get the initial value of mu used in the Levenberg Marquardt optimization loop.
Return the angle used to test polygons appearance.
Get the error angle between the gradient direction of the model features projected at the resulting pose and their normal.
Set the flag to consider if the level of detail (LOD) is used.
Values:
Set the minimum polygon area to be considered as visible in the LOD case.
Overloaded function.
Return a reference to the faces structure.
Set kernel size used for projection error computation.
Display or not gradient and model orientation when computing the projection error.
Get the maximum number of iterations of the virtual visual servoing stage.
Get the list of polygons faces (a vpPolygon representing the projection of the face in the image and a list of face corners in 3D), with the possibility to order by distance to the camera or to use the visibility check to consider if the polygon face must be retrieved or not.
Get a 1x6 vpColVector representing the estimated degrees of freedom.
Set the threshold for the minimum line length to be considered as visible in the LOD case.
Set the maximum iteration of the virtual visual servoing stage.
Set if the projection error criteria has to be computed.
Operators
__annotations__
__doc__
Basic constructor
__module__
Attributes
GAUSS_NEWTON_OPT
LEVENBERG_MARQUARDT_OPT
__annotations__
- class MbtOptimizationMethod(self, value: int)¶
Bases:
pybind11_object
Values:
GAUSS_NEWTON_OPT
LEVENBERG_MARQUARDT_OPT
- __init__(self)¶
Basic constructor
- computeCurrentProjectionError(self, I: visp._visp.core.ImageGray, _cMo: visp._visp.core.HomogeneousMatrix, _cam: visp._visp.core.CameraParameters) float ¶
Compute projection error given an input image and camera pose, parameters. This projection error uses locations sampled exactly where the model is projected using the camera pose and intrinsic parameters. You may want to use setProjectionErrorComputation() and getProjectionError() instead, to get a projection error computed at the ME locations after a call to track() . It works similarly to the vpMbTracker::getProjectionError function: it gets the error angle between the gradient direction of the model features projected at the resulting pose and their normal. The error is expressed in degrees, between 0 and 90.
Note
See setProjectionErrorComputation
Note
See getProjectionError
- Parameters:
- I: visp._visp.core.ImageGray¶
Input grayscale image.
- _cMo: visp._visp.core.HomogeneousMatrix¶
Camera pose.
- _cam: visp._visp.core.CameraParameters¶
Camera parameters.
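A usage sketch via the Python bindings (assuming the public visp package mirrors the visp._visp layout used in these docs, and a tracker already initialized with the placeholder cube files from the class examples):

from visp.core import CameraParameters

cam = CameraParameters()
tracker.getCameraParameters(cam)  # camera parameters loaded from the config file
cMo = tracker.getPose()           # pose obtained by any means (initClick, track, ...)

# Angle in degrees, between 0 and 90; closer to 0 means a better model/image fit.
err = tracker.computeCurrentProjectionError(I, cMo, cam)
print(f"Projection error: {err:.1f} deg")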
- display(*args, **kwargs)¶
Overloaded function.
display(self: visp._visp.mbt.MbEdgeTracker, I: visp._visp.core.ImageGray, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, col: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None
Display the 3D model from a given position of the camera.
- Parameters:
- I
The image.
- cMo
Pose used to project the 3D model into the image.
- cam
The camera parameters.
- col
The desired color.
- thickness
The thickness of the lines.
- displayFullModel
If true, the full model is displayed (even the non visible faces).
display(self: visp._visp.mbt.MbEdgeTracker, I: visp._visp.core.ImageRGBa, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, col: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None
Display the 3D model from a given position of the camera.
- Parameters:
- I
The image.
- cMo
Pose used to project the 3D model into the image.
- cam
The camera parameters.
- col
The desired color.
- thickness
The thickness of the lines.
- displayFullModel
If true, the full model is displayed (even the non visible surfaces).
- getCameraParameters(self, cam: visp._visp.core.CameraParameters) None ¶
Get the camera parameters.
- Parameters:
- cam: visp._visp.core.CameraParameters¶
copy of the camera parameters used by the tracker.
- getClipping(self) int ¶
Get the clipping used and defined in vpPolygon3D::vpMbtPolygonClippingType.
- Returns:
Clipping flags.
- getCovarianceMatrix(self) visp._visp.core.Matrix ¶
Get the covariance matrix. This matrix is only computed if setCovarianceComputation() is turned on.
Note
See setCovarianceComputation()
- getError(self) visp._visp.core.ColVector ¶
Return the error vector \((s-s^*)\) reached after the virtual visual servoing process used to estimate the pose.
The following example shows how to use this function to compute the norm of the residual and the norm of the residual normalized by the number of features that are tracked:
tracker.track(I);
std::cout << "Residual: " << sqrt( (tracker.getError()).sumSquare()) << std::endl;
std::cout << "Residual normalized: "
          << sqrt( (tracker.getError()).sumSquare())/tracker.getError().size() << std::endl;
Note
See getRobustWeights()
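The same computation through the Python bindings might look as follows (a sketch, assuming the tracker and image I are set up as in the class examples):

import math

tracker.track(I)
e = tracker.getError()  # ColVector of (s - s*) residuals
print("Residual:", math.sqrt(e.sumSquare()))
print("Residual normalized:", math.sqrt(e.sumSquare()) / e.size())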
- getEstimatedDoF(self) visp._visp.core.ColVector ¶
Get a 1x6 vpColVector representing the estimated degrees of freedom.
vpColVector [0] = 1 if translation on X is estimated, 0 otherwise;
vpColVector [1] = 1 if translation on Y is estimated, 0 otherwise;
vpColVector [2] = 1 if translation on Z is estimated, 0 otherwise;
vpColVector [3] = 1 if rotation on X is estimated, 0 otherwise;
vpColVector [4] = 1 if rotation on Y is estimated, 0 otherwise;
vpColVector [5] = 1 if rotation on Z is estimated, 0 otherwise;
- Returns:
1x6 vpColVector representing the estimated degrees of freedom.
- getFaces(self) vpMbHiddenFaces<vpMbtPolygon> ¶
Return a reference to the faces structure.
- getFarClippingDistance(self) float ¶
Get the far distance for clipping.
- Returns:
Far clipping value.
- getGoodMovingEdgesRatioThreshold(self) float ¶
Note
See setGoodMovingEdgesRatioThreshold()
- Returns:
The threshold value, between 0 and 1, over the ratio of good moving edges. It is used to decide whether the tracker has enough valid moving edges to compute a pose. 1 means that all moving edges must be considered good to have a valid pose, while 0.1 means that 10% of the moving edges are enough to declare a pose valid.
- getInitialMu(self) float ¶
Get the initial value of mu used in the Levenberg Marquardt optimization loop.
- Returns:
the initial mu value.
- getLambda(self) float ¶
Get the value of the gain used to compute the control law.
- Returns:
the value for the gain.
- getLcircle(self, circlesList: list[visp._visp.mbt.MbtDistanceCircle], level: int = 0) list[visp._visp.mbt.MbtDistanceCircle] ¶
Get the list of the circles tracked for the specified level. Each circle contains the list of the vpMeSite .
- Parameters:
- circlesList: list[visp._visp.mbt.MbtDistanceCircle]¶
The list of the circles of the model.
- level: int = 0¶
Level corresponding to the list to return.
- Returns:
A tuple containing:
circlesList: The list of the circles of the model.
- getLcylinder(self, cylindersList: list[visp._visp.mbt.MbtDistanceCylinder], level: int = 0) list[visp._visp.mbt.MbtDistanceCylinder] ¶
Get the list of the cylinders tracked for the specified level. Each cylinder contains the list of the vpMeSite .
- Parameters:
- cylindersList: list[visp._visp.mbt.MbtDistanceCylinder]¶
The list of the cylinders of the model.
- level: int = 0¶
Level corresponding to the list to return.
- Returns:
A tuple containing:
cylindersList: The list of the cylinders of the model.
- getLline(self, linesList: list[visp._visp.mbt.MbtDistanceLine], level: int = 0) list[visp._visp.mbt.MbtDistanceLine] ¶
Get the list of the lines tracked for the specified level. Each line contains the list of the vpMeSite .
- Parameters:
- linesList: list[visp._visp.mbt.MbtDistanceLine]¶
The list of the lines of the model.
- level: int = 0¶
Level corresponding to the list to return.
- Returns:
A tuple containing:
linesList: The list of the lines of the model.
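For instance, the lines tracked at the finest scale can be retrieved as follows (a sketch; following the convention of these bindings, the filled list is returned):

lines = tracker.getLline([], level=0)  # pass an empty list, get the filled list back
print("Tracked lines at level 0:", len(lines))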
- getMaxIter(self) int ¶
Get the maximum number of iterations of the virtual visual servoing stage.
- Returns:
the number of iterations
- getModelForDisplay(self, width: int, height: int, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, displayFullModel: bool = false) list[list[float]] ¶
Return a list of primitive parameters to display the model at a given pose with given camera parameters.
Line parameters are: <primitive id (here 0 for line)> , <pt_start.i()> , <pt_start.j()> , <pt_end.i()> , <pt_end.j()>
Ellipse parameters are: <primitive id (here 1 for ellipse)> , <pt_center.i()> , <pt_center.j()> , <n_20> , <n_11> , <n_02> where <n_ij> are the second order centered moments of the ellipse normalized by its area (i.e., such that \(n_{ij} = \mu_{ij}/a\) where \(\mu_{ij}\) are the centered moments and a the area).
- Parameters:
- width: int¶
Image width.
- height: int¶
Image height.
- cMo: visp._visp.core.HomogeneousMatrix¶
Pose used to project the 3D model into the image.
- cam: visp._visp.core.CameraParameters¶
The camera parameters.
- displayFullModel: bool = false¶
If true, the line is displayed even if it is not visible.
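The rows of the returned list can be dispatched on the primitive id stored in their first element. A minimal sketch (assuming an initialized tracker and pose, with I, cMo and cam as in the class examples):

model = tracker.getModelForDisplay(I.getWidth(), I.getHeight(), cMo, cam, False)
for params in model:
    if params[0] == 0:  # line: start i, start j, end i, end j
        i1, j1, i2, j2 = params[1:5]
        print(f"line ({i1:.0f},{j1:.0f}) -> ({i2:.0f},{j2:.0f})")
    elif params[0] == 1:  # ellipse: center i, center j, n20, n11, n02
        ic, jc = params[1:3]
        print(f"ellipse centered at ({ic:.0f},{jc:.0f})")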
- getMovingEdge(*args, **kwargs)¶
Overloaded function.
getMovingEdge(self: visp._visp.mbt.MbEdgeTracker, p_me: visp._visp.me.Me) -> None
Get the moving edge parameters.
- Parameters:
- p_me
[out] : an instance of the moving edge parameters used by the tracker.
getMovingEdge(self: visp._visp.mbt.MbEdgeTracker) -> visp._visp.me.Me
Get the moving edge parameters.
- Returns:
an instance of the moving edge parameters used by the tracker.
- getNbPoints(self, level: int = 0) int ¶
Return the number of good points ( vpMeSite ) tracked. A good point is a vpMeSite with its flag “state” equal to 0. Only these points are used during the virtual visual servoing stage.
- Returns:
the number of good points.
- getNbPolygon(self) int ¶
Get the number of polygons (faces) representing the object to track.
- Returns:
Number of polygons.
- getNearClippingDistance(self) float ¶
Get the near distance for clipping.
- Returns:
Near clipping value.
- getOptimizationMethod(self) visp._visp.mbt.MbTracker.MbtOptimizationMethod ¶
Get the optimization method used during the tracking. 0 = Gauss-Newton approach. 1 = Levenberg-Marquardt approach.
- Returns:
Optimization method.
- getPolygonFaces(self, orderPolygons: bool = true, useVisibility: bool = true, clipPolygon: bool = false) tuple[list[visp._visp.core.Polygon], list[list[visp._visp.core.Point]]] ¶
Get the list of polygons faces (a vpPolygon representing the projection of the face in the image and a list of face corners in 3D), with the possibility to order by distance to the camera or to use the visibility check to consider if the polygon face must be retrieved or not.
- Parameters:
- orderPolygons: bool = true¶
If true, the resulting list is ordered from the nearest polygon faces to the farthest.
- useVisibility: bool = true¶
If true, only visible faces will be retrieved.
- clipPolygon: bool = false¶
If true, the polygons will be clipped according to the clipping flags set in vpMbTracker .
- Returns:
A pair object containing the list of vpPolygon and the list of face corners.
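A sketch of iterating over the visible faces (assuming an initialized tracker):

polygons, faces3d = tracker.getPolygonFaces(orderPolygons=True, useVisibility=True)
for poly2d, corners3d in zip(polygons, faces3d):
    # poly2d: visp.core.Polygon, projection of the face in the image
    # corners3d: list of visp.core.Point, the face corners in 3D
    print("face with", len(corners3d), "corners")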
- getPose(*args, **kwargs)¶
Overloaded function.
getPose(self: visp._visp.mbt.MbTracker, cMo: visp._visp.core.HomogeneousMatrix) -> None
Get the current pose between the object and the camera. cMo is the matrix that can be used to transform coordinates from the object frame to the camera frame.
- Parameters:
- cMo
the pose
getPose(self: visp._visp.mbt.MbTracker) -> visp._visp.core.HomogeneousMatrix
Get the current pose between the object and the camera. cMo is the matrix that can be used to transform coordinates from the object frame to the camera frame.
- Returns:
the current pose
- getProjectionError(self) float ¶
Get the error angle between the gradient direction of the model features projected at the resulting pose and their normal. The error is expressed in degrees, between 0 and 90. This value is computed only if setProjectionErrorComputation() is turned on.
Note
See setProjectionErrorComputation()
- Returns:
the value for the error.
- getRobustWeights(self) visp._visp.core.ColVector ¶
Return the weights vector \(w_i\) computed by the robust scheme.
The following example shows how to use this function to compute the norm of the weighted residual and the norm of the weighted residual normalized by the sum of the weights associated to the features that are tracked:
tracker.track(I);
vpColVector w = tracker.getRobustWeights();
vpColVector e = tracker.getError();
vpColVector we(w.size());
for (unsigned int i = 0; i < w.size(); i++)
  we[i] = w[i] * e[i];

std::cout << "Weighted residual: " << sqrt( (we).sumSquare() ) << std::endl;
std::cout << "Weighted residual normalized: " << sqrt( (we).sumSquare() ) / w.sum() << std::endl;
Note
See getError()
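A Python version of the weighted-residual computation above (a sketch; element access, size() and sum() on ColVector are assumed to be exposed by the bindings):

import math

tracker.track(I)
w = tracker.getRobustWeights()
e = tracker.getError()
weighted_sq = sum((w[i] * e[i]) ** 2 for i in range(w.size()))
print("Weighted residual:", math.sqrt(weighted_sq))
print("Weighted residual normalized:", math.sqrt(weighted_sq) / w.sum())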
- getScales(self) list[bool] ¶
Return the scales levels used for the tracking.
- Returns:
The scales levels used for the tracking.
- initClick(*args, **kwargs)¶
Overloaded function.
initClick(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, initFile: str, displayHelp: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None
initClick(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, initFile: str, displayHelp: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None
initClick(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, points3D_list: list[visp._visp.core.Point], displayFile: str = '') -> None
initClick(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, points3D_list: list[visp._visp.core.Point], displayFile: str = '') -> None
- initFromPoints(*args, **kwargs)¶
Overloaded function.
initFromPoints(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, initFile: str) -> None
Initialise the tracker by reading 3D point coordinates and the corresponding 2D image point coordinates from a file. Comments starting with the # character are allowed. 3D point coordinates are expressed in meters in the object frame with X, Y and Z values. 2D point coordinates are expressed in pixel coordinates, with first the line and then the column of the pixel in the image. The structure of this file is the following.
# 3D point coordinates
4                 # Number of 3D points in the file (minimum is four)
0.01 0.01 0.01    # \
...               # | 3D coordinates in meters in the object frame
0.01 -0.01 -0.01  # /
# corresponding 2D point coordinates
4                 # Number of image points in the file (has to be the same as the number of 3D points)
100 200           # \
...               # | 2D coordinates in pixel in the image
50 10             # /
- Parameters:
- I
Input grayscale image
- initFile
Path to the file containing all the points.
initFromPoints(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, initFile: str) -> None
Initialise the tracker by reading 3D point coordinates and the corresponding 2D image point coordinates from a file. Comments starting with the # character are allowed. 3D point coordinates are expressed in meters in the object frame with X, Y and Z values. 2D point coordinates are expressed in pixel coordinates, with first the line and then the column of the pixel in the image. The structure of this file is the following.
# 3D point coordinates
4                 # Number of 3D points in the file (minimum is four)
0.01 0.01 0.01    # \
...               # | 3D coordinates in meters in the object frame
0.01 -0.01 -0.01  # /
# corresponding 2D point coordinates
4                 # Number of image points in the file (has to be the same as the number of 3D points)
100 200           # \
...               # | 2D coordinates in pixel in the image
50 10             # /
- Parameters:
- I_color
Input color image
- initFile
Path to the file containing all the points.
initFromPoints(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, points2D_list: list[visp._visp.core.ImagePoint], points3D_list: list[visp._visp.core.Point]) -> None
Initialise the tracking with the list of image points (points2D_list) and the list of corresponding 3D points (object frame) (points3D_list).
- Parameters:
- I
Input grayscale image
- points2D_list
List of image points.
- points3D_list
List of 3D points (object frame).
initFromPoints(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, points2D_list: list[visp._visp.core.ImagePoint], points3D_list: list[visp._visp.core.Point]) -> None
Initialise the tracking with the list of image points (points2D_list) and the list of corresponding 3D points (object frame) (points3D_list).
- Parameters:
- I_color
Input color image
- points2D_list
List of image points.
- points3D_list
List of 3D points (object frame).
- initFromPose(*args, **kwargs)¶
Overloaded function.
initFromPose(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, initFile: str) -> None
Initialise the tracking thanks to the pose in vpPoseVector format, and read in the file initFile. The structure of this file is (without the comments):
// The six values of the pose vector
0.0000 // \
0.0000 // |
1.0000 // | Example of value for the pose vector where Z = 1 meter
0.0000 // |
0.0000 // |
0.0000 // /
The first three lines refer to the translation and the last three to the rotation, in theta-u parametrisation (see vpThetaUVector ).
- Parameters:
- I
Input grayscale image
- initFile
Path to the file containing the pose.
initFromPose(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, initFile: str) -> None
Initialise the tracking thanks to the pose in vpPoseVector format, and read in the file initFile. The structure of this file is (without the comments):
// The six values of the pose vector
0.0000 // \
0.0000 // |
1.0000 // | Example of value for the pose vector where Z = 1 meter
0.0000 // |
0.0000 // |
0.0000 // /
The first three lines refer to the translation and the last three to the rotation, in theta-u parametrisation (see vpThetaUVector ).
- Parameters:
- I_color
Input color image
- initFile
Path to the file containing the pose.
initFromPose(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, cMo: visp._visp.core.HomogeneousMatrix) -> None
Initialise the tracking thanks to the pose.
- Parameters:
- I
Input grayscale image
- cMo
Pose matrix.
initFromPose(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, cMo: visp._visp.core.HomogeneousMatrix) -> None
Initialise the tracking thanks to the pose.
- Parameters:
- I_color
Input color image
- cMo
Pose matrix.
initFromPose(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, cPo: visp._visp.core.PoseVector) -> None
Initialise the tracking thanks to the pose vector.
- Parameters:
- I
Input grayscale image
- cPo
Pose vector.
initFromPose(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, cPo: visp._visp.core.PoseVector) -> None
Initialise the tracking thanks to the pose vector.
- Parameters:
- I_color
Input color image
- cPo
Pose vector.
- loadConfigFile(*args, **kwargs)¶
Overloaded function.
loadConfigFile(self: visp._visp.mbt.MbEdgeTracker, configFile: str, verbose: bool = true) -> None
Load the xml configuration file. From the configuration file, initialize the parameters corresponding to the objects: moving edges, camera and visibility angles.
Note
See loadConfigFile(const char*)
- Parameters:
- configFile
full name of the xml file.
- verbose
verbose flag.
loadConfigFile(self: visp._visp.mbt.MbTracker, configFile: str, verbose: bool = true) -> None
Load a config file to parameterise the behavior of the tracker.
Virtual method to adapt to each tracker.
- Parameters:
- configFile
An xml config file to parse.
- verbose
verbose flag.
- loadModel(self, modelFile: str, verbose: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) None ¶
Load a 3D model from the file in parameter. This file must either be a vrml file (.wrl) or a CAO file (.cao). CAO format is described in the loadCAOModel() method.
- reInitModel(self, I: visp._visp.core.ImageGray, cad_name: str, cMo: visp._visp.core.HomogeneousMatrix, verbose: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) None ¶
Re-initialize the model used by the tracker.
- Parameters:
- I: visp._visp.core.ImageGray¶
The image containing the object to initialize.
- cad_name: str¶
Path to the file containing the 3D model description.
- cMo: visp._visp.core.HomogeneousMatrix¶
The new vpHomogeneousMatrix between the camera and the new model
- verbose: bool = false¶
verbose option to print additional information when loading CAO model files which include other CAO model files.
- T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()¶
optional transformation matrix (currently only for .cao) to transform 3D points expressed in the original object frame to the desired object frame.
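For example, to swap in another CAD model while keeping the last estimated pose (a sketch; cube_v2.cao is a hypothetical file name):

cMo = tracker.getPose()                     # keep the current pose estimate
tracker.reInitModel(I, "cube_v2.cao", cMo)  # reload and continue tracking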
- resetTracker(self) None ¶
Reset the tracker. The model is removed and the pose is set to identity. The tracker needs to be initialized with a new model and a new pose.
- setAngleAppear(self, a: float) None ¶
Set the angle used to test polygons appearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value lower than this parameter, the polygon is considered as appearing. The polygon will then be tracked.
- setAngleDisappear(self, a: float) None ¶
Set the angle used to test polygons disappearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value greater than this parameter, the polygon is considered as disappearing. The tracking of the polygon will then be stopped.
- setCameraParameters(*args, **kwargs)¶
Overloaded function.
setCameraParameters(self: visp._visp.mbt.MbEdgeTracker, cam: visp._visp.core.CameraParameters) -> None
Set the camera parameters.
- Parameters:
- cam
The new camera parameters.
setCameraParameters(self: visp._visp.mbt.MbTracker, cam: visp._visp.core.CameraParameters) -> None
Set the camera parameters.
- Parameters:
- cam
The new camera parameters.
- setClipping(*args, **kwargs)¶
Overloaded function.
setClipping(self: visp._visp.mbt.MbEdgeTracker, flags: int) -> None
Specify which clipping to use.
Note
See vpMbtPolygonClipping
- Parameters:
- flags
New clipping flags.
setClipping(self: visp._visp.mbt.MbTracker, flags: int) -> None
Specify which clipping to use.
Note
See vpMbtPolygonClipping
- Parameters:
- flags
New clipping flags.
- setCovarianceComputation(self, flag: bool) None ¶
Set if the covariance matrix has to be computed.
Note
See getCovarianceMatrix()
- setDisplayFeatures(self, displayF: bool) None ¶
Enable the display of the features. By features, we mean the moving edges (ME) and the KLT points if used.
Note that if present, the moving edges can be displayed with different colors:
If green : the ME is a good point.
If blue : the ME is removed because of a contrast problem during the tracking phase.
If purple : the ME is removed because of a threshold problem during the tracking phase.
If red : the ME is removed because it is rejected by the robust approach in the virtual visual servoing scheme.
- setEstimatedDoF(self, v: visp._visp.core.ColVector) None ¶
Set a 6-dim column vector representing the degrees of freedom in the object frame that are estimated by the tracker. When a component is set to 1, the corresponding dof is estimated; when set to 0, it is fixed.
Below we give the correspondence between the index of the vector and the considered dof:
v[0] = 1 if translation along X is estimated, 0 otherwise;
v[1] = 1 if translation along Y is estimated, 0 otherwise;
v[2] = 1 if translation along Z is estimated, 0 otherwise;
v[3] = 1 if rotation along X is estimated, 0 otherwise;
v[4] = 1 if rotation along Y is estimated, 0 otherwise;
v[5] = 1 if rotation along Z is estimated, 0 otherwise;
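For instance, to freeze the translation along Z while estimating the other five dof (a sketch; the ColVector constructor and element assignment are assumed to be exposed by the bindings):

from visp.core import ColVector

dof = ColVector(6, 1.0)  # start with all six dof estimated
dof[2] = 0.0             # do not estimate the translation along Z
tracker.setEstimatedDoF(dof)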
- setFarClippingDistance(*args, **kwargs)¶
Overloaded function.
setFarClippingDistance(self: visp._visp.mbt.MbEdgeTracker, dist: float) -> None
Set the far distance for clipping.
- Parameters:
- dist
Far clipping value.
setFarClippingDistance(self: visp._visp.mbt.MbTracker, dist: float) -> None
Set the far distance for clipping.
- Parameters:
- dist
Far clipping value.
- setGoodMovingEdgesRatioThreshold(self, threshold: float) None ¶
Set the threshold value, between 0 and 1, over the ratio of good moving edges. It is used to decide whether the tracker has enough valid moving edges to compute a pose. 1 means that all moving edges must be considered good to have a valid pose, while 0.1 means that 10% of the moving edges are enough to declare a pose valid.
Note
See getGoodMovingEdgesRatioThreshold()
- setInitialMu(self, mu: float) None ¶
Set the initial value of mu for the Levenberg Marquardt optimization loop.
- setLod(self: visp._visp.mbt.MbTracker, useLod: bool, name: str = '') None ¶
Set the flag to consider if the level of detail (LOD) is used.
Note
See setMinLineLengthThresh() , setMinPolygonAreaThresh()
- Parameters:
- useLod
true if the level of detail must be used, false otherwise. When true, two parameters can be set, see setMinLineLengthThresh() and setMinPolygonAreaThresh() .
- name
name of the face we want to modify the LOD parameter.
- setMinLineLengthThresh(self: visp._visp.mbt.MbTracker, minLineLengthThresh: float, name: str = '') None ¶
Set the threshold for the minimum line length to be considered as visible in the LOD case.
Note
See setLod() , setMinPolygonAreaThresh()
- Parameters:
- minLineLengthThresh
threshold for the minimum line length in pixel.
- name
name of the face we want to modify the LOD threshold.
- setMinPolygonAreaThresh(self: visp._visp.mbt.MbTracker, minPolygonAreaThresh: float, name: str = '') None ¶
Set the minimum polygon area to be considered as visible in the LOD case.
Note
See setLod() , setMinLineLengthThresh()
- Parameters:
- minPolygonAreaThresh
threshold for the minimum polygon area in pixel.
- name
name of the face we want to modify the LOD threshold.
- setMovingEdge(self, me: visp._visp.me.Me) None ¶
Set the moving edge parameters.
- setNearClippingDistance(*args, **kwargs)¶
Overloaded function.
setNearClippingDistance(self: visp._visp.mbt.MbEdgeTracker, dist: float) -> None
Set the near distance for clipping.
- Parameters:
- dist
Near clipping value.
setNearClippingDistance(self: visp._visp.mbt.MbTracker, dist: float) -> None
Set the near distance for clipping.
- Parameters:
- dist
Near clipping value.
- setOgreShowConfigDialog(self, showConfigDialog: bool) None ¶
Enable/Disable the appearance of Ogre config dialog on startup.
Warning
This method only has an effect when Ogre is used and the Ogre visibility test is enabled using setOgreVisibilityTest() with a true parameter.
- setOgreVisibilityTest(*args, **kwargs)¶
Overloaded function.
setOgreVisibilityTest(self: visp._visp.mbt.MbEdgeTracker, v: bool) -> None
Use Ogre3D for visibility tests
Warning
This function has to be called before the initialization of the tracker.
- Parameters:
- v
True to use it, False otherwise
setOgreVisibilityTest(self: visp._visp.mbt.MbTracker, v: bool) -> None
Use Ogre3D for visibility tests
Warning
This function has to be called before the initialization of the tracker.
- Parameters:
- v
True to use it, False otherwise
- setOptimizationMethod(self, opt: visp._visp.mbt.MbTracker.MbtOptimizationMethod) None ¶
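Set the optimization method used during the tracking (see the MbtOptimizationMethod enum above). A usage sketch:

from visp.mbt import MbTracker

tracker.setOptimizationMethod(MbTracker.MbtOptimizationMethod.LEVENBERG_MARQUARDT_OPT)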
- setPose(*args, **kwargs)¶
Overloaded function.
setPose(self: visp._visp.mbt.MbEdgeTracker, I: visp._visp.core.ImageGray, cdMo: visp._visp.core.HomogeneousMatrix) -> None
Set the pose to be used as input for the next call to the track() function. This pose will be used just once.
- Parameters:
- I
grayscale image corresponding to the desired pose.
- cdMo
Pose to affect.
setPose(self: visp._visp.mbt.MbEdgeTracker, I_color: visp._visp.core.ImageRGBa, cdMo: visp._visp.core.HomogeneousMatrix) -> None
Set the pose to be used as input for the next call to the track() function. This pose will be used just once.
- Parameters:
- I_color
color image corresponding to the desired pose.
- cdMo
Pose to affect.
- setPoseSavingFilename(self, filename: str) None ¶
Set the filename used to save the initial pose computed using the initClick() method. It is also used to read a previous pose in the same method. If the file is not set, the initClick() method will create a .0.pos file in the root directory, that is, the directory of the file given to initClick() that provides the coordinates in the object frame.
- setProjectionErrorComputation(self, flag: bool) None ¶
Set if the projection error criterion has to be computed. This criterion can be used to assess the quality of the tracking. It computes an angle between 0 and 90 degrees that is available with getProjectionError() . The closer the value is to 0, the better the tracking.
Note
See getProjectionError()
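A sketch of the typical quality-check workflow (the 30-degree threshold is a hypothetical, application-specific choice):

tracker.setProjectionErrorComputation(True)
tracker.track(I)
err = tracker.getProjectionError()  # degrees in [0, 90]; closer to 0 is better
if err > 30.0:
    print("Poor tracking quality, consider re-initializing the pose")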
- setProjectionErrorDisplay(self, display: bool) None ¶
Display or not gradient and model orientation when computing the projection error.
- setProjectionErrorDisplayArrowLength(self, length: int) None ¶
Arrow length used to display gradient and model orientation for projection error computation.
- setProjectionErrorDisplayArrowThickness(self, thickness: int) None ¶
Arrow thickness used to display gradient and model orientation for projection error computation.
- setProjectionErrorKernelSize(self, size: int) None ¶
Set kernel size used for projection error computation.
- setProjectionErrorMovingEdge(self, me: visp._visp.me.Me) None ¶
Set Moving-Edges parameters for projection error computation.
- Parameters:
- me: visp._visp.me.Me¶
Moving-Edges parameters.
- setScales(self, _scales: list[bool]) None ¶
Set the scales to use to perform the tracking. The vector of booleans activates (or not) each scale for object tracking. The first element of the list corresponds to tracking on the full image, the second element corresponds to tracking on an image subsampled by two.
Using multi-scale tracking allows the object to be tracked under larger motions. It requires computing a pyramid of images, but the overall tracking can be faster than tracking based only on the full scale. The pose is computed from the smallest image to the biggest. This may be risky if the object to track is small in the image, because the subsampled scale(s) will have only a few points with which to compute the pose (which could result in a loss of precision).
Warning
This method must be used before the tracker has been initialized (before calling the loadConfigFile() or loadModel() methods).
Warning
At least one level must be activated.
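A Python counterpart of the multi-scale C++ snippet in the class description (a sketch; remember that it must be called before loadConfigFile() or loadModel()):

from visp.mbt import MbEdgeTracker

tracker = MbEdgeTracker()
tracker.setScales([True, False, True])  # full scale and /4 active, /2 inactive
tracker.loadConfigFile("cube.xml")      # placeholder files from the class examples
tracker.loadModel("cube.cao")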
- setScanLineVisibilityTest(*args, **kwargs)¶
Overloaded function.
setScanLineVisibilityTest(self: visp._visp.mbt.MbEdgeTracker, v: bool) -> None
Use Scanline algorithm for visibility tests
- Parameters:
- v
True to use it, False otherwise
setScanLineVisibilityTest(self: visp._visp.mbt.MbTracker, v: bool) -> None
- setStopCriteriaEpsilon(self, eps: float) None ¶
Set the minimal error (previous / current estimation) to determine if there is convergence or not.
- setUseEdgeTracking(self, name: str, useEdgeTracking: bool) None ¶
Set whether the polygons that have the given name are considered during the tracking phase.
- track(*args, **kwargs)¶
Overloaded function.
track(self: visp._visp.mbt.MbEdgeTracker, I: visp._visp.core.ImageGray) -> None
Compute each state of the tracking procedure for all the feature sets.
If the tracking is considered to have failed, an exception is thrown.
- Parameters:
- I
The image.
track(self: visp._visp.mbt.MbEdgeTracker, I: visp._visp.core.ImageRGBa) -> None
Track the object in the given image
- Parameters:
- I
The current image.