MbGenericTracker¶
- class MbGenericTracker(*args, **kwargs)¶
Bases:
MbTracker
Real-time 6D object pose tracking using its CAD model.
The tracker requires knowledge of the 3D model, which can be provided as a VRML (.wrl) or CAO (.cao) file. The CAO format is described in loadCAOModel() . The tracker may also use an XML file to tune its behavior and an init file to compute the pose at the very first image.
This class allows tracking an object or a scene given its 3D model. More information can be found in [43] . Many videos are available on the VispTeam YouTube channel.
The tutorial-tracking-mb-generic is a good starting point to use this class. If you want to track an object with a stereo camera refer to tutorial-tracking-mb-generic-stereo. If you want rather use a RGB-D camera and exploit the depth information, you may see tutorial-tracking-mb-generic-rgbd. There is also tutorial-detection-object that shows how to initialize the tracker from an initial pose provided by a detection algorithm.
JSON serialization
Since ViSP 3.6.0, if ViSP is built with the JSON 3rd-party library (nlohmann JSON), JSON serialization capabilities are available for vpMbGenericTracker . The following sample code shows how to save the model-based tracker settings in a file named ` mbt-generic.json ` and reload the values from this JSON file.
#include <visp3/mbt/vpMbGenericTracker.h>

#ifdef ENABLE_VISP_NAMESPACE
using namespace VISP_NAMESPACE_NAME;
#endif

int main()
{
#if defined(VISP_HAVE_NLOHMANN_JSON)
  std::string filename = "mbt-generic.json";
  {
    vpMbGenericTracker mbt;
    mbt.saveConfigFile(filename);
  }
  {
    vpMbGenericTracker mbt;
    bool verbose = false;
    std::cout << "Read model-based tracker settings from " << filename << std::endl;
    mbt.loadConfigFile(filename, verbose);
  }
#endif
}
If you build and execute the sample code, it will produce the following output:
Read model-based tracker settings from mbt-generic.json
The content of the ` mbt-generic.json ` file is the following:
$ cat mbt-generic.json
{
  "referenceCameraName": "Camera",
  "trackers": {
    "Camera": {
      "angleAppear": 89.0,
      "angleDisappear": 89.0,
      "camTref": {
        "cols": 4,
        "data": [1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0],
        "rows": 4,
        "type": "vpHomogeneousMatrix"
      },
      "camera": {
        "model": "perspectiveWithoutDistortion",
        "px": 600.0,
        "py": 600.0,
        "u0": 192.0,
        "v0": 144.0
      },
      "clipping": {
        "far": 100.0,
        "flags": ["none"],
        "near": 0.001
      },
      "display": {
        "features": false,
        "projectionError": false
      },
      "edge": {
        "maskSign": 0,
        "maskSize": 5,
        "minSampleStep": 4.0,
        "mu": [0.5, 0.5],
        "nMask": 180,
        "ntotalSample": 0,
        "pointsToTrack": 500,
        "range": 4,
        "sampleStep": 10.0,
        "strip": 2,
        "thresholdType": "normalized",
        "threshold": 20.0
      },
      "lod": {
        "minLineLengthThresholdGeneral": 50.0,
        "minPolygonAreaThresholdGeneral": 2500.0,
        "useLod": false
      },
      "type": ["edge"],
      "visibilityTest": {
        "ogre": false,
        "scanline": false
      }
    }
  },
  "version": "1.0"
}
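For the Python bindings documented on this page, an equivalent round-trip might look as follows. This is a minimal sketch assuming ViSP was built with JSON support and that the class is importable from the visp.mbt module; the file name mbt-generic.json simply follows the C++ sample above.

from visp.mbt import MbGenericTracker

filename = "mbt-generic.json"

# Save the default tracker settings to a JSON file.
tracker = MbGenericTracker()
tracker.saveConfigFile(filename)

# Reload the settings from the same JSON file into a new tracker.
tracker = MbGenericTracker()
print("Read model-based tracker settings from", filename)
tracker.loadConfigFile(filename, False)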
Overloaded function.
__init__(self: visp._visp.mbt.MbGenericTracker) -> None
__init__(self: visp._visp.mbt.MbGenericTracker, nbCameras: int, trackerType: int = EDGE_TRACKER) -> None
__init__(self: visp._visp.mbt.MbGenericTracker, trackerTypes: list[int]) -> None
__init__(self: visp._visp.mbt.MbGenericTracker, cameraNames: list[str], trackerTypes: list[int]) -> None
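As an illustration, the overloads above could be used as follows. This is a sketch assuming the visp.mbt import path and that the TrackerType values are accepted where an int tracker type is expected (they appear as defaults in the signatures above).

from visp.mbt import MbGenericTracker

# Default: a single camera using the moving-edges tracker.
t1 = MbGenericTracker()

# Two cameras, both configured with the edge tracker.
t2 = MbGenericTracker(2, MbGenericTracker.EDGE_TRACKER)

# One tracker per feature type, given as a list (here edge + depth dense).
t3 = MbGenericTracker([MbGenericTracker.EDGE_TRACKER, MbGenericTracker.DEPTH_DENSE_TRACKER])

# Named cameras with their respective tracker types.
t4 = MbGenericTracker(["Camera1", "Camera2"],
                      [MbGenericTracker.EDGE_TRACKER, MbGenericTracker.DEPTH_DENSE_TRACKER])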
Methods
Overloaded function.
Overloaded function.
Overloaded function.
Get the camera names.
Overloaded function.
Get the camera tracker types.
Overloaded function.
Return the error vector \((s-s^*)\) reached after the virtual visual servoing process used to estimate the pose.
Overloaded function.
Overloaded function.
Note
See setGoodMovingEdgesRatioThreshold()
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Return the number of depth dense features taken into account in the virtual visual-servoing scheme.
Return the number of depth normal features taken into account in the virtual visual-servoing scheme.
Return the number of moving-edges features taken into account in the virtual visual-servoing scheme.
Return the number of KLT keypoint features taken into account in the virtual visual-servoing scheme.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Get name of the reference camera.
Return the weights vector \(w_i\) computed by the robust scheme.
The tracker type for the reference camera.
Initialise the tracking.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Reset the tracker.
Save the current tracker settings to a configuration file.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Set maximum distance to consider a face.
Set the method used to discard a face, e.g. if it is outside of the depth range.
Set minimum distance to consider a face.
Set depth occupancy ratio to consider a face, used to discard faces where the depth map is not well reconstructed.
Set depth dense sampling step.
Set method to compute the centroid for display for depth tracker.
Set depth feature estimation method.
Set depth PCL plane estimation method.
Set depth PCL RANSAC maximum number of iterations.
Set depth PCL RANSAC threshold.
Set depth sampling step.
Overloaded function.
Overloaded function.
Set the feature factors used in the VVS stage (ponderation between the feature types).
Set the threshold value between 0 and 1 over good moving edges ratio.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Overloaded function.
Set the reference camera name.
Overloaded function.
Overloaded function.
Set if the polygon that has the given name has to be considered during the tracking phase.
Set if the polygon that has the given name has to be considered during the tracking phase.
Set if the polygon that has the given name has to be considered during the tracking phase.
Test the quality of the tracking.
Overloaded function.
Inherited Methods
Get the maximum number of iterations of the virtual visual servoing stage.
Set the maximum iteration of the virtual visual servoing stage.
GAUSS_NEWTON_OPT
LEVENBERG_MARQUARDT_OPT
Set the initial value of mu for the Levenberg Marquardt optimization loop.
Set Moving-Edges parameters for projection error computation.
Get the value of the gain used to compute the control law.
Save the pose in the given filename
Set if the covariance matrix has to be computed.
Set the value of the gain used to compute the control law.
Set the minimal error (previous / current estimation) to determine if there is convergence or not.
Set a 6-dim column vector representing the degrees of freedom in the object frame that are estimated by the tracker.
Return the angle used to test polygons appearance.
Set the filename used to save the initial pose computed using the initClick() method.
Get a 1x6 vpColVector representing the estimated degrees of freedom.
Get the optimization method used during the tracking.
Get the covariance matrix.
Values:
Get the initial value of mu used in the Levenberg Marquardt optimization loop.
Get the far distance for clipping.
Set kernel size used for projection error computation.
Get the near distance for clipping.
Get the error angle between the gradient direction of the model features projected at the resulting pose and their normal.
Return the angle used to test polygons disappearance.
Operators
__doc__
Overloaded function.
__module__
Attributes
DEPTH_DENSE_TRACKER
DEPTH_NORMAL_TRACKER
EDGE_TRACKER
GAUSS_NEWTON_OPT
KLT_TRACKER
LEVENBERG_MARQUARDT_OPT
__annotations__
- class MbtOptimizationMethod(self, value: int)¶
Bases:
pybind11_object
Values:
GAUSS_NEWTON_OPT
LEVENBERG_MARQUARDT_OPT
- class TrackerType(self, value: int)¶
Bases:
pybind11_object
Values:
EDGE_TRACKER: Model-based tracking using moving edges features.
DEPTH_NORMAL_TRACKER: Model-based tracking using depth normal features.
DEPTH_DENSE_TRACKER: Model-based tracking using depth dense features.
- __init__(*args, **kwargs)¶
Overloaded function.
__init__(self: visp._visp.mbt.MbGenericTracker) -> None
__init__(self: visp._visp.mbt.MbGenericTracker, nbCameras: int, trackerType: int = EDGE_TRACKER) -> None
__init__(self: visp._visp.mbt.MbGenericTracker, trackerTypes: list[int]) -> None
__init__(self: visp._visp.mbt.MbGenericTracker, cameraNames: list[str], trackerTypes: list[int]) -> None
- computeCurrentProjectionError(*args, **kwargs)¶
Overloaded function.
computeCurrentProjectionError(self: visp._visp.mbt.MbGenericTracker, I: visp._visp.core.ImageGray, _cMo: visp._visp.core.HomogeneousMatrix, _cam: visp._visp.core.CameraParameters) -> float
Compute the projection error given an input image, a camera pose and camera parameters. This projection error uses locations sampled exactly where the model is projected using the camera pose and intrinsic parameters. You may want to use getProjectionError() instead, to get a projection error computed at the moving-edge (ME) locations after a call to track() . It works similarly to the vpMbTracker::getProjectionError function: it returns the error angle between the gradient direction of the model features projected at the resulting pose and their normal. The error is expressed in degrees, between 0 and 90.
Note
See setProjectionErrorComputation
Note
See getProjectionError
- Parameters:
- I
Input grayscale image.
computeCurrentProjectionError(self: visp._visp.mbt.MbGenericTracker, I: visp._visp.core.ImageRGBa, _cMo: visp._visp.core.HomogeneousMatrix, _cam: visp._visp.core.CameraParameters) -> float
Compute the projection error given an input image, a camera pose and camera parameters. This projection error uses locations sampled exactly where the model is projected using the camera pose and intrinsic parameters. You may want to use getProjectionError() instead, to get a projection error computed at the moving-edge (ME) locations after a call to track() . It works similarly to the vpMbTracker::getProjectionError function: it returns the error angle between the gradient direction of the model features projected at the resulting pose and their normal. The error is expressed in degrees, between 0 and 90.
Note
See setProjectionErrorComputation
Note
See getProjectionError
- Parameters:
- _cMo
Camera pose.
- _cam
Camera parameters.
computeCurrentProjectionError(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, _cMo: visp._visp.core.HomogeneousMatrix, _cam: visp._visp.core.CameraParameters) -> float
Compute the projection error given an input image, a camera pose and camera parameters. This projection error uses locations sampled exactly where the model is projected using the camera pose and intrinsic parameters. You may want to use getProjectionError() instead, to get a projection error computed at the moving-edge (ME) locations after a call to track() . It works similarly to the vpMbTracker::getProjectionError function: it returns the error angle between the gradient direction of the model features projected at the resulting pose and their normal. The error is expressed in degrees, between 0 and 90.
Note
See setProjectionErrorComputation
Note
See getProjectionError
- Parameters:
- I
Input grayscale image.
- _cMo
Camera pose.
- _cam
Camera parameters.
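A short sketch of how this method could be called from Python, assuming tracker, I, cMo and cam were obtained beforehand (for instance from the constructor, the image acquisition loop, getPose() and getCameraParameters() ):

err_deg = tracker.computeCurrentProjectionError(I, cMo, cam)
# 0 means a perfect alignment between projected model edges and image gradients,
# 90 is the worst possible value.
print("Projection error: {:.1f} deg".format(err_deg))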
- display(*args, **kwargs)¶
Overloaded function.
display(self: visp._visp.mbt.MbGenericTracker, I: visp._visp.core.ImageGray, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, col: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None
Display the 3D model from a given position of the camera.
Note
This function will display the model only for the reference camera.
- Parameters:
- I
The grayscale image.
- cMo
Pose used to project the 3D model into the image.
- cam
The camera parameters.
- col
The desired color.
- thickness
The thickness of the lines.
- displayFullModel
If true, the full model is displayed (even the non visible faces).
display(self: visp._visp.mbt.MbGenericTracker, I: visp._visp.core.ImageRGBa, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, col: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None
Display the 3D model from a given position of the camera.
Note
This function will display the model only for the reference camera.
- Parameters:
- I
The color image.
- cMo
Pose used to project the 3D model into the image.
- cam
The camera parameters.
- col
The desired color.
- thickness
The thickness of the lines.
- displayFullModel
If true, the full model is displayed (even the non visible faces).
display(self: visp._visp.mbt.MbGenericTracker, I1: visp._visp.core.ImageGray, I2: visp._visp.core.ImageGray, c1Mo: visp._visp.core.HomogeneousMatrix, c2Mo: visp._visp.core.HomogeneousMatrix, cam1: visp._visp.core.CameraParameters, cam2: visp._visp.core.CameraParameters, color: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None
Display the 3D model from a given position of the camera.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- I1
The first grayscale image.
- I2
The second grayscale image.
- c1Mo
Pose used to project the 3D model into the first image.
- c2Mo
Pose used to project the 3D model into the second image.
- cam1
The first camera parameters.
- cam2
The second camera parameters.
- color
The desired color.
- thickness
The thickness of the lines.
- displayFullModel
If true, the full model is displayed (even the non visible faces).
display(self: visp._visp.mbt.MbGenericTracker, I1: visp._visp.core.ImageRGBa, I2: visp._visp.core.ImageRGBa, c1Mo: visp._visp.core.HomogeneousMatrix, c2Mo: visp._visp.core.HomogeneousMatrix, cam1: visp._visp.core.CameraParameters, cam2: visp._visp.core.CameraParameters, color: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None
Display the 3D model from a given position of the camera.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- I1
The first color image.
- I2
The second color image.
- c1Mo
Pose used to project the 3D model into the first image.
- c2Mo
Pose used to project the 3D model into the second image.
- cam1
The first camera parameters.
- cam2
The second camera parameters.
- color
The desired color.
- thickness
The thickness of the lines.
- displayFullModel
If true, the full model is displayed (even the non visible faces).
display(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray], mapOfCameraPoses: dict[str, visp._visp.core.HomogeneousMatrix], mapOfCameraParameters: dict[str, visp._visp.core.CameraParameters], col: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None
Display the 3D model from a given position of the camera.
- Parameters:
- mapOfImages
Map of grayscale images.
- mapOfCameraPoses
Map of camera poses.
- mapOfCameraParameters
Map of camera parameters.
- col
The desired color.
- thickness
The thickness of the lines.
- displayFullModel
If true, the full model is displayed (even the non visible faces).
display(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageRGBa], mapOfCameraPoses: dict[str, visp._visp.core.HomogeneousMatrix], mapOfCameraParameters: dict[str, visp._visp.core.CameraParameters], col: visp._visp.core.Color, thickness: int = 1, displayFullModel: bool = false) -> None
Display the 3D model from a given position of the camera.
- Parameters:
- mapOfImages
Map of color images.
- mapOfCameraPoses
Map of camera poses.
- mapOfCameraParameters
Map of camera parameters.
- col
The desired color.
- thickness
The thickness of the lines.
- displayFullModel
If true, the full model is displayed (even the non visible faces).
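A per-frame overlay in the monocular case could be sketched as follows. This assumes a display window (a vpDisplay-derived class) is already attached to I, that tracker and I come from the surrounding acquisition loop, and that Color.red is exposed by the bindings as it is in C++.

from visp.core import CameraParameters, HomogeneousMatrix, Color

cam = CameraParameters()
cMo = HomogeneousMatrix()
tracker.getCameraParameters(cam)   # intrinsics used by the tracker
tracker.getPose(cMo)               # current estimated pose
# Draw the visible part of the model in red with 2-pixel-thick lines.
tracker.display(I, cMo, cam, Color.red, 2, False)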
- getCameraParameters(*args, **kwargs)¶
Overloaded function.
getCameraParameters(self: visp._visp.mbt.MbGenericTracker, camera: visp._visp.core.CameraParameters) -> None
Get the camera parameters.
getCameraParameters(self: visp._visp.mbt.MbGenericTracker, cam1: visp._visp.core.CameraParameters, cam2: visp._visp.core.CameraParameters) -> None
Get all the camera parameters.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- cam1
Copy of the camera parameters for the first camera.
- cam2
Copy of the camera parameters for the second camera.
getCameraParameters(self: visp._visp.mbt.MbGenericTracker, mapOfCameraParameters: dict[str, visp._visp.core.CameraParameters]) -> None
Get all the camera parameters.
- Parameters:
- mapOfCameraParameters
Map of camera parameters.
getCameraParameters(self: visp._visp.mbt.MbTracker, cam: visp._visp.core.CameraParameters) -> None
Get the camera parameters.
- Parameters:
- cam
copy of the camera parameters used by the tracker.
- getCameraTrackerTypes(self) dict[str, int] ¶
Get the camera tracker types.
Note
See vpTrackerType
- Returns:
The map of camera tracker types.
- getClipping(*args, **kwargs)¶
Overloaded function.
getClipping(self: visp._visp.mbt.MbGenericTracker, clippingFlag1: int, clippingFlag2: int) -> tuple[int, int]
Get the clipping used and defined in vpPolygon3D::vpMbtPolygonClippingType.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- clippingFlag1
Clipping flags for the first camera.
- clippingFlag2
Clipping flags for the second camera.
- Returns:
A tuple containing:
clippingFlag1: Clipping flags for the first camera.
clippingFlag2: Clipping flags for the second camera.
getClipping(self: visp._visp.mbt.MbGenericTracker, mapOfClippingFlags: dict[str, int]) -> None
Get the clipping used and defined in vpPolygon3D::vpMbtPolygonClippingType.
- Parameters:
- mapOfClippingFlags
Map of clipping flags.
getClipping(self: visp._visp.mbt.MbTracker) -> int
Get the clipping used and defined in vpPolygon3D::vpMbtPolygonClippingType.
- Returns:
Clipping flags.
- getCovarianceMatrix(self) visp._visp.core.Matrix ¶
Get the covariance matrix. This matrix is only computed if setCovarianceComputation() is turned on.
Note
See setCovarianceComputation()
- getError(self) visp._visp.core.ColVector ¶
Return the error vector \((s-s^*)\) reached after the virtual visual servoing process used to estimate the pose.
The following example shows how to use this function to compute the norm of the residual and the norm of the residual normalized by the number of features that are tracked:
tracker.track(I);
std::cout << "Residual: " << sqrt( (tracker.getError()).sumSquare()) << std::endl;
std::cout << "Residual normalized: "
          << sqrt( (tracker.getError()).sumSquare())/tracker.getError().size() << std::endl;
Note
See getRobustWeights()
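A Python counterpart of the C++ snippet above, assuming sumSquare() and size() are exposed on the returned ColVector as they are in C++, and that tracker and I come from the surrounding tracking loop:

import math

tracker.track(I)
e = tracker.getError()
print("Residual:", math.sqrt(e.sumSquare()))
print("Residual normalized:", math.sqrt(e.sumSquare()) / e.size())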
- getEstimatedDoF(self) visp._visp.core.ColVector ¶
Get a 1x6 vpColVector representing the estimated degrees of freedom:

vpColVector [0] = 1 if translation on X is estimated, 0 otherwise;
vpColVector [1] = 1 if translation on Y is estimated, 0 otherwise;
vpColVector [2] = 1 if translation on Z is estimated, 0 otherwise;
vpColVector [3] = 1 if rotation on X is estimated, 0 otherwise;
vpColVector [4] = 1 if rotation on Y is estimated, 0 otherwise;
vpColVector [5] = 1 if rotation on Z is estimated, 0 otherwise.
- Returns:
1x6 vpColVector representing the estimated degrees of freedom.
- getFaces(*args, **kwargs)¶
Overloaded function.
getFaces(self: visp._visp.mbt.MbGenericTracker) -> vpMbHiddenFaces<vpMbtPolygon>
Return a reference to the faces structure.
- Returns:
Reference to the face structure.
getFaces(self: visp._visp.mbt.MbGenericTracker, cameraName: str) -> vpMbHiddenFaces<vpMbtPolygon>
Return a reference to the faces structure for the given camera name.
- Returns:
Reference to the face structure.
getFaces(self: visp._visp.mbt.MbTracker) -> vpMbHiddenFaces<vpMbtPolygon>
Return a reference to the faces structure.
- getFarClippingDistance(self) float ¶
Get the far distance for clipping.
- Returns:
Far clipping value.
- getFeaturesCircle(self) list[visp._visp.mbt.MbtDistanceCircle] ¶
- getFeaturesForDisplay(*args, **kwargs)¶
Overloaded function.
getFeaturesForDisplay(self: visp._visp.mbt.MbGenericTracker) -> list[list[float]]
Returns a list of visual features parameters for the reference camera. The first element of each vector indicates the feature type: 0 for a moving-edge (ME) feature, 1 for a keypoint (KLT) feature. The following elements give the corresponding feature parameters.
Moving-edges parameters are: <feature id (here 0 for ME)> , <pt.i()> , <pt.j()> , <state> where pt.i(), pt.j() are the coordinates of the moving-edge point feature, and state, with values in the range [0,4], indicates the state of the ME:
0 for vpMeSite::NO_SUPPRESSION
1 for vpMeSite::CONTRAST
2 for vpMeSite::THRESHOLD
3 for vpMeSite::M_ESTIMATOR
4 for vpMeSite::TOO_NEAR
KLT parameters are: <feature id (here 1 for KLT)> , <pt.i()> , <pt.j()> , <klt_id.i()> , <klt_id.j()> , <klt_id.id>
When the tracking is achieved with features from multiple cameras you should rather use getFeaturesForDisplay(std::map<std::string, std::vector<std::vector<double> > > &) .
It can be used to display the 3D model with a render engine of your choice.
Note
It returns the visual features for the reference camera.
Note
See getModelForDisplay(unsigned int, unsigned int, const vpHomogeneousMatrix &, const vpCameraParameters &, bool)
getFeaturesForDisplay(self: visp._visp.mbt.MbGenericTracker, mapOfFeatures: dict[str, list[list[float]]]) -> None
Get a list of visual features parameters for multiple cameras. The key of the map is the camera name. The value of the map contains the visual features parameters, where the first element of each vector indicates the feature type: 0 for a moving-edge (ME) feature, 1 for a keypoint (KLT) feature. The following elements give the corresponding feature parameters.
Moving-edges parameters are: <feature id (here 0 for ME)> , <pt.i()> , <pt.j()> , <state> where pt.i(), pt.j() are the coordinates of the moving-edge point feature, and state, with values in the range [0,4], indicates the state of the ME:
0 for vpMeSite::NO_SUPPRESSION
1 for vpMeSite::CONTRAST
2 for vpMeSite::THRESHOLD
3 for vpMeSite::M_ESTIMATOR
4 for vpMeSite::TOO_NEAR
KLT parameters are: <feature id (here 1 for KLT)> , <pt.i()> , <pt.j()> , <klt_id.i()> , <klt_id.j()> , <klt_id.id> It can be used to display the 3D model with a render engine of your choice.
When the tracking is achieved with features from a single camera you should rather use getFeaturesForDisplay() .
Note
See getModelForDisplay (std::map<std::string, std::vector<std::vector<double> > > &, const std::map<std::string, unsigned int> &, const std::map<std::string, unsigned int> &, const std::map<std::string, vpHomogeneousMatrix> &, const std::map<std::string, vpCameraParameters> &, bool)
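The returned rows can be decoded according to the format documented above. A sketch for the single-camera overload, assuming tracker comes from the surrounding tracking loop:

me_points = []
klt_points = []
for row in tracker.getFeaturesForDisplay():
    if row[0] == 0:
        # Moving-edge feature: [0, pt.i(), pt.j(), state]
        me_points.append((row[1], row[2], int(row[3])))
    elif row[0] == 1:
        # KLT feature: [1, pt.i(), pt.j(), klt_id.i(), klt_id.j(), klt_id.id]
        klt_points.append((row[1], row[2], int(row[5])))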
- getFeaturesKlt(self) list[visp._visp.mbt.MbtDistanceKltPoints] ¶
- getFeaturesKltCylinder(self) list[visp._visp.mbt.MbtDistanceKltCylinder] ¶
- getGoodMovingEdgesRatioThreshold(self) float ¶
Note
See setGoodMovingEdgesRatioThreshold()
- Returns:
The threshold value, between 0 and 1, over the good moving-edges ratio. It is used to decide whether the tracker has enough valid moving edges to compute a pose. 1 means that all moving edges should be good to consider a pose valid, while 0.1 means that 10% of good moving edges are enough to declare a pose valid.
- getInitialMu(self) float ¶
Get the initial value of mu used in the Levenberg Marquardt optimization loop.
- Returns:
the initial mu value.
- getKltImagePoints(self) list[visp._visp.core.ImagePoint] ¶
- getKltImagePointsWithId(self) dict[int, visp._visp.core.ImagePoint] ¶
- getKltOpencv(*args, **kwargs)¶
Overloaded function.
getKltOpencv(self: visp._visp.mbt.MbGenericTracker) -> visp._visp.klt.KltOpencv
getKltOpencv(self: visp._visp.mbt.MbGenericTracker, klt1: visp._visp.klt.KltOpencv, klt2: visp._visp.klt.KltOpencv) -> None
getKltOpencv(self: visp._visp.mbt.MbGenericTracker, mapOfKlts: dict[str, visp._visp.klt.KltOpencv]) -> None
- getKltPoints(self) list[cv::Point_<float>] ¶
- getLambda(self) float ¶
Get the value of the gain used to compute the control law.
- Returns:
the value for the gain.
- getLcircle(*args, **kwargs)¶
Overloaded function.
getLcircle(self: visp._visp.mbt.MbGenericTracker, circlesList: list[visp._visp.mbt.MbtDistanceCircle], level: int = 0) -> list[visp._visp.mbt.MbtDistanceCircle]
Get the list of the circles tracked for the specified level. Each circle contains the list of the vpMeSite .
Note
Multi-scale moving edge tracking is not possible, scale level=0 must be used.
- Parameters:
- circlesList
The list of the circles of the model.
- level
Level corresponding to the list to return.
- Returns:
A tuple containing:
circlesList: The list of the circles of the model.
getLcircle(self: visp._visp.mbt.MbGenericTracker, cameraName: str, circlesList: list[visp._visp.mbt.MbtDistanceCircle], level: int = 0) -> list[visp._visp.mbt.MbtDistanceCircle]
Get the list of the circles tracked for the specified level. Each circle contains the list of the vpMeSite .
Note
Multi-scale moving edge tracking is not possible, scale level=0 must be used.
- Parameters:
- cameraName
Camera name for which we want to get the list of vpMbtDistanceCircle .
- circlesList
The list of the circles of the model.
- level
Level corresponding to the list to return.
- Returns:
A tuple containing:
circlesList: The list of the circles of the model.
- getLcylinder(*args, **kwargs)¶
Overloaded function.
getLcylinder(self: visp._visp.mbt.MbGenericTracker, cylindersList: list[visp._visp.mbt.MbtDistanceCylinder], level: int = 0) -> list[visp._visp.mbt.MbtDistanceCylinder]
Get the list of the cylinders tracked for the specified level. Each cylinder contains the list of the vpMeSite .
Note
Multi-scale moving edge tracking is not possible, scale level=0 must be used.
- Parameters:
- cylindersList
The list of the cylinders of the model.
- level
Level corresponding to the list to return.
- Returns:
A tuple containing:
cylindersList: The list of the cylinders of the model.
getLcylinder(self: visp._visp.mbt.MbGenericTracker, cameraName: str, cylindersList: list[visp._visp.mbt.MbtDistanceCylinder], level: int = 0) -> list[visp._visp.mbt.MbtDistanceCylinder]
Get the list of the cylinders tracked for the specified level. Each cylinder contains the list of the vpMeSite .
Note
Multi-scale moving edge tracking is not possible, scale level=0 must be used.
- Parameters:
- cameraName
Camera name for which we want to get the list of vpMbtDistanceCylinder .
- cylindersList
The list of the cylinders of the model.
- level
Level corresponding to the list to return.
- Returns:
A tuple containing:
cylindersList: The list of the cylinders of the model.
- getLline(*args, **kwargs)¶
Overloaded function.
getLline(self: visp._visp.mbt.MbGenericTracker, linesList: list[visp._visp.mbt.MbtDistanceLine], level: int = 0) -> list[visp._visp.mbt.MbtDistanceLine]
Get the list of the lines tracked for the specified level. Each line contains the list of the vpMeSite .
Note
Multi-scale moving edge tracking is not possible, scale level=0 must be used.
- Parameters:
- linesList
The list of the lines of the model.
- level
Level corresponding to the list to return.
- Returns:
A tuple containing:
linesList: The list of the lines of the model.
getLline(self: visp._visp.mbt.MbGenericTracker, cameraName: str, linesList: list[visp._visp.mbt.MbtDistanceLine], level: int = 0) -> list[visp._visp.mbt.MbtDistanceLine]
Get the list of the lines tracked for the specified level. Each line contains the list of the vpMeSite .
Note
Multi-scale moving edge tracking is not possible, scale level=0 must be used.
- Parameters:
- cameraName
Camera name for which we want to get the list of vpMbtDistanceLine .
- linesList
The list of the lines of the model.
- level
Level corresponding to the list to return.
- Returns:
A tuple containing:
linesList: The list of the lines of the model.
- getMaxIter(self) int ¶
Get the maximum number of iterations of the virtual visual servoing stage.
- Returns:
the number of iteration
- getModelForDisplay(*args, **kwargs)¶
Overloaded function.
getModelForDisplay(self: visp._visp.mbt.MbGenericTracker, width: int, height: int, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, displayFullModel: bool = false) -> list[list[float]]
Get primitive parameters to display the object CAD model for the reference camera.
It can be used to display the 3D model with a render engine of your choice.
When tracking is performed using multiple cameras, you should rather use getModelForDisplay(std::map<std::string, std::vector<std::vector<double> > > &, const std::map<std::string, unsigned int> &, const std::map<std::string, unsigned int> &, const std::map<std::string, vpHomogeneousMatrix> &, const std::map<std::string, vpCameraParameters> &, bool)
Note
See getFeaturesForDisplay()
- Parameters:
- width
Image width.
- height
Image height.
- cMo
Pose used to project the 3D model into the image.
- cam
The camera parameters.
- displayFullModel
If true, the line is displayed even if it is not visible.
- Returns:
List of primitives parameters corresponding to the reference camera in order to display the model to a given pose with camera parameters. The first element of the vector indicates the type of parameters: 0 for a line and 1 for an ellipse. Then the second element gives the corresponding parameters.
Line parameters are: <primitive id (here 0 for line)> , <pt_start.i()> , <pt_start.j()> , <pt_end.i()> , <pt_end.j()> .
Ellipse parameters are: <primitive id (here 1 for ellipse)> , <pt_center.i()> , <pt_center.j()> , <n_20> , <n_11> , <n_02> where <n_ij> are the second order centered moments of the ellipse normalized by its area (i.e., such that \(n_{ij} = \mu_{ij}/a\) where \(\mu_{ij}\) are the centered moments and a the area).
getModelForDisplay(self: visp._visp.mbt.MbGenericTracker, mapOfModels: dict[str, list[list[float]]], mapOfwidths: dict[str, int], mapOfheights: dict[str, int], mapOfcMos: dict[str, visp._visp.core.HomogeneousMatrix], mapOfCams: dict[str, visp._visp.core.CameraParameters], displayFullModel: bool = false) -> None
Get primitive parameters to display the object CAD model for the multiple cameras.
It can be used to display the 3D model with a render engine of your choice.
Each key of the map corresponds to the camera name.
If you are using a single camera you should rather use getModelForDisplay(unsigned int, unsigned int, const vpHomogeneousMatrix &, const vpCameraParameters &, bool)
Note
See getFeaturesForDisplay(std::map<std::string, std::vector<std::vector<double> > > &)
- Parameters:
- mapOfModels
Map of models. The second element of the map contains a list of primitives parameters to display the model at a given pose with corresponding camera parameters. The first element of the vector indicates the type of parameters: 0 for a line and 1 for an ellipse. Then the second element gives the corresponding parameters.
Line parameters are: <primitive id (here 0 for line)> , <pt_start.i()> , <pt_start.j()> , <pt_end.i()> , <pt_end.j()> .
Ellipse parameters are: <primitive id (here 1 for ellipse)> , <pt_center.i()> , <pt_center.j()> , <n_20> , <n_11> , <n_02> where <n_ij> are the second order centered moments of the ellipse normalized by its area (i.e., such that \(n_{ij} = \mu_{ij}/a\) where \(\mu_{ij}\) are the centered moments and a the area).
- mapOfwidths
Map of images width.
- mapOfheights
Map of images height.
- mapOfcMos
Map of poses used to project the 3D model into the images.
- mapOfCams
The camera parameters.
- displayFullModel
If true, the line is displayed even if it is not visible.
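The returned primitives can be split by type following the format documented above. A sketch for the reference-camera overload, assuming tracker, cMo and cam come from the surrounding tracking loop (the image size 640x480 is only an example):

width, height = 640, 480
model = tracker.getModelForDisplay(width, height, cMo, cam, False)
lines = [m[1:5] for m in model if m[0] == 0]     # i_start, j_start, i_end, j_end
ellipses = [m[1:6] for m in model if m[0] == 1]  # i_center, j_center, n_20, n_11, n_02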
- getMovingEdge(*args, **kwargs)¶
Overloaded function.
getMovingEdge(self: visp._visp.mbt.MbGenericTracker) -> visp._visp.me.Me
Get the moving edge parameters for the reference camera.
- Returns:
an instance of the moving edge parameters used by the tracker.
getMovingEdge(self: visp._visp.mbt.MbGenericTracker, me1: visp._visp.me.Me, me2: visp._visp.me.Me) -> None
Get the moving edge parameters.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- me1
Moving edge parameters for the first camera.
- me2
Moving edge parameters for the second camera.
getMovingEdge(self: visp._visp.mbt.MbGenericTracker, mapOfMovingEdges: dict[str, visp._visp.me.Me]) -> None
Get the moving edge parameters for all the cameras
- Parameters:
- mapOfMovingEdges
Map of moving edge parameters for all the cameras.
- getNbFeaturesDepthDense(self) int ¶
Return the number of depth dense features taken into account in the virtual visual-servoing scheme.
- getNbFeaturesDepthNormal(self) int ¶
Return the number of depth normal features taken into account in the virtual visual-servoing scheme.
- getNbFeaturesEdge(self) int ¶
Return the number of moving-edges features taken into account in the virtual visual-servoing scheme.
This function is similar to getNbPoints() .
- getNbFeaturesKlt(self) int ¶
Return the number of KLT keypoint features taken into account in the virtual visual-servoing scheme.
- getNbPoints(*args, **kwargs)¶
Overloaded function.
getNbPoints(self: visp._visp.mbt.MbGenericTracker, level: int = 0) -> int
Return the number of good points ( vpMeSite ) tracked. A good point is a vpMeSite with its flag “state” equal to 0. Only these points are used during the virtual visual servoing stage.
Note
Multi-scale moving edge tracking is not possible, scale level=0 must be used.
Note
See getNbFeaturesEdge()
- Parameters:
- level
Pyramid level to consider.
- Returns:
the number of good points for the reference camera.
getNbPoints(self: visp._visp.mbt.MbGenericTracker, mapOfNbPoints: dict[str, int], level: int = 0) -> None
Return the number of good points ( vpMeSite ) tracked. A good point is a vpMeSite with its flag “state” equal to 0. Only these points are used during the virtual visual servoing stage.
Note
Multi-scale moving edge tracking is not possible, scale level=0 must be used.
- Parameters:
- mapOfNbPoints
Map of number of good points ( vpMeSite ) tracked for all the cameras.
- level
Pyramid level to consider.
- getNbPolygon(*args, **kwargs)¶
Overloaded function.
getNbPolygon(self: visp._visp.mbt.MbGenericTracker) -> int
Get the number of polygons (faces) representing the object to track.
- Returns:
Number of polygons for the reference camera.
getNbPolygon(self: visp._visp.mbt.MbGenericTracker, mapOfNbPolygons: dict[str, int]) -> None
Get the number of polygons (faces) representing the object to track.
- Parameters:
- mapOfNbPolygons
Map that contains the number of polygons for all the cameras.
getNbPolygon(self: visp._visp.mbt.MbTracker) -> int
Get the number of polygons (faces) representing the object to track.
- Returns:
Number of polygons.
- getNearClippingDistance(self) float ¶
Get the near distance for clipping.
- Returns:
Near clipping value.
- getOptimizationMethod(self) visp._visp.mbt.MbTracker.MbtOptimizationMethod ¶
Get the optimization method used during the tracking. 0 = Gauss-Newton approach. 1 = Levenberg-Marquardt approach.
- Returns:
Optimization method.
- getPolygonFaces(*args, **kwargs)¶
Overloaded function.
getPolygonFaces(self: visp._visp.mbt.MbGenericTracker, orderPolygons: bool = true, useVisibility: bool = true, clipPolygon: bool = false) -> tuple[list[visp._visp.core.Polygon], list[list[visp._visp.core.Point]]]
Get the list of polygon faces (a vpPolygon representing the projection of the face in the image and a list of face corners in 3D), with the possibility to order them by distance to the camera or to use the visibility check to decide whether a polygon face should be retrieved or not.
Note
This function will return the 2D polygons faces and 3D face points only for the reference camera.
- Parameters:
- orderPolygons
If true, the resulting list is ordered from the nearest polygon faces to the farthest.
- useVisibility
If true, only visible faces will be retrieved.
- clipPolygon
If true, the polygons will be clipped according to the clipping flags set in vpMbTracker .
- Returns:
A pair object containing the list of vpPolygon and the list of face corners.
getPolygonFaces(self: visp._visp.mbt.MbGenericTracker, mapOfPolygons: dict[str, list[visp._visp.core.Polygon]], mapOfPoints: dict[str, list[list[visp._visp.core.Point]]], orderPolygons: bool = true, useVisibility: bool = true, clipPolygon: bool = false) -> None
Get the list of polygon faces (a vpPolygon representing the projection of the face in the image and a list of face corners in 3D), with the possibility to order them by distance to the camera or to use the visibility check to decide whether a polygon face should be retrieved or not.
Note
This function will return the 2D polygon faces and 3D face points for all the cameras.
- Parameters:
- mapOfPolygons
Map of 2D polygon faces.
- mapOfPoints
Map of face 3D points.
- orderPolygons
If true, the resulting list is ordered from the nearest polygon faces to the farthest.
- useVisibility
If true, only visible faces will be retrieved.
- clipPolygon
If true, the polygons will be clipped according to the clipping flags set in vpMbTracker .
getPolygonFaces(self: visp._visp.mbt.MbTracker, orderPolygons: bool = true, useVisibility: bool = true, clipPolygon: bool = false) -> tuple[list[visp._visp.core.Polygon], list[list[visp._visp.core.Point]]]
Get the list of polygon faces (a vpPolygon representing the projection of the face in the image and a list of face corners in 3D), with the possibility to order them by distance to the camera or to use the visibility check to decide whether a polygon face should be retrieved or not.
- Parameters:
- orderPolygons
If true, the resulting list is ordered from the nearest polygon faces to the farthest.
- useVisibility
If true, only visible faces will be retrieved.
- clipPolygon
If true, the polygons will be clipped according to the clipping flags set in vpMbTracker .
- Returns:
A pair object containing the list of vpPolygon and the list of face corners.
- getPose(*args, **kwargs)¶
Overloaded function.
getPose(self: visp._visp.mbt.MbGenericTracker, cMo: visp._visp.core.HomogeneousMatrix) -> None
Get the current pose between the object and the camera. cMo is the matrix which can be used to express coordinates from the object frame to camera frame.
- Parameters:
- cMo
the pose
getPose(self: visp._visp.mbt.MbGenericTracker, c1Mo: visp._visp.core.HomogeneousMatrix, c2Mo: visp._visp.core.HomogeneousMatrix) -> None
Get the current pose between the object and the cameras.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- c1Mo
The camera pose for the first camera.
- c2Mo
The camera pose for the second camera.
getPose(self: visp._visp.mbt.MbGenericTracker, mapOfCameraPoses: dict[str, visp._visp.core.HomogeneousMatrix]) -> None
Get the current pose between the object and the cameras.
- Parameters:
- mapOfCameraPoses
The map of camera poses for all the cameras.
getPose(self: visp._visp.mbt.MbTracker, cMo: visp._visp.core.HomogeneousMatrix) -> None
Get the current pose between the object and the camera. cMo is the matrix which can be used to express coordinates from the object frame to camera frame.
- Parameters:
- cMo
the pose
getPose(self: visp._visp.mbt.MbTracker) -> visp._visp.core.HomogeneousMatrix
Get the current pose between the object and the camera. cMo is the matrix which can be used to express coordinates from the object frame to camera frame.
- Returns:
the current pose
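Typical per-frame usage in the monocular case, assuming the tracker was initialized and I holds the current image; the parameter-less getPose() overload inherited from MbTracker returns the pose directly:

tracker.track(I)          # update moving edges / keypoints and estimate the pose
cMo = tracker.getPose()   # object-to-camera transformation for the current frame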
- getProjectionError(self) float ¶
Get the error angle between the gradient direction of the model features projected at the resulting pose and their normal. The error is expressed in degree between 0 and 90. This value is computed if setProjectionErrorComputation() is turned on.
Note
See setProjectionErrorComputation()
- Returns:
the value for the error.
- getRobustWeights(self) visp._visp.core.ColVector ¶
Return the weights vector \(w_i\) computed by the robust scheme.
The following example shows how to use this function to compute the norm of the weighted residual and the norm of the weighted residual normalized by the sum of the weights associated to the features that are tracked:
tracker.track(I);
vpColVector w = tracker.getRobustWeights();
vpColVector e = tracker.getError();
vpColVector we(w.size());
for (unsigned int i = 0; i < w.size(); i++)
  we[i] = w[i] * e[i];

std::cout << "Weighted residual: " << sqrt( (we).sumSquare() ) << std::endl;
std::cout << "Weighted residual normalized: " << sqrt( (we).sumSquare() ) / w.sum() << std::endl;
Note
See getError()
- init(self, I: visp._visp.core.ImageGray) None ¶
Initialise the tracking.
- Parameters:
- I: visp._visp.core.ImageGray¶
Input image.
- initClick(*args, **kwargs)¶
Overloaded function.
initClick(self: visp._visp.mbt.MbGenericTracker, I1: visp._visp.core.ImageGray, I2: visp._visp.core.ImageGray, initFile1: str, initFile2: str, displayHelp: bool = false, T1: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix(), T2: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None
initClick(self: visp._visp.mbt.MbGenericTracker, I_color1: visp._visp.core.ImageRGBa, I_color2: visp._visp.core.ImageRGBa, initFile1: str, initFile2: str, displayHelp: bool = false, T1: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix(), T2: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None
initClick(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray], mapOfInitFiles: dict[str, str], displayHelp: bool = false, mapOfT: dict[str, visp._visp.core.HomogeneousMatrix] = std::map<std::string,vpHomogeneousMatrix>()) -> None
initClick(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageRGBa], mapOfInitFiles: dict[str, str], displayHelp: bool = false, mapOfT: dict[str, visp._visp.core.HomogeneousMatrix] = std::map<std::string,vpHomogeneousMatrix>()) -> None
initClick(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, initFile: str, displayHelp: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None
initClick(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, initFile: str, displayHelp: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None
initClick(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, points3D_list: list[visp._visp.core.Point], displayFile: str = ) -> None
initClick(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, points3D_list: list[visp._visp.core.Point], displayFile: str = ) -> None
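A sketch of the interactive initialization for the monocular case; the file name cube.init is hypothetical, a display window must already be attached to I so that the user clicks can be captured, and tracker comes from the surrounding setup code:

# The .init file lists at least four 3D points of the model; the user is
# asked to click their 2D locations in the image to compute the initial pose.
tracker.initClick(I, "cube.init", True)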
- initFromPoints(*args, **kwargs)¶
Overloaded function.
initFromPoints(self: visp._visp.mbt.MbGenericTracker, I1: visp._visp.core.ImageGray, I2: visp._visp.core.ImageGray, initFile1: str, initFile2: str) -> None
Initialise the tracker by reading 3D point coordinates and the corresponding 2D image point coordinates from a file. Comments starting with the # character are allowed. 3D point coordinates are expressed in meters in the object frame with X, Y and Z values. 2D point coordinates are expressed in pixel coordinates, given first as the line and then the column of the pixel in the image. The structure of this file is the following.
# 3D point coordinates
4                 # Number of 3D points in the file (minimum is four)
0.01 0.01 0.01    #  \
...               #  | 3D coordinates in meters in the object frame
0.01 -0.01 -0.01  #  /
# corresponding 2D point coordinates
4                 # Number of image points in the file (has to be the same as the number of 3D points)
100 200           #  \
...               #  | 2D coordinates in pixel in the image
50 10             #  /
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- I1
Input grayscale image for the first camera.
- I2
Input grayscale image for the second camera.
- initFile1
Path to the file containing all the points for the first camera.
- initFile2
Path to the file containing all the points for the second camera.
initFromPoints(self: visp._visp.mbt.MbGenericTracker, I_color1: visp._visp.core.ImageRGBa, I_color2: visp._visp.core.ImageRGBa, initFile1: str, initFile2: str) -> None
Initialise the tracker by reading 3D point coordinates and the corresponding 2D image point coordinates from a file. Comments starting with the # character are allowed. 3D point coordinates are expressed in meters in the object frame with X, Y and Z values. 2D point coordinates are expressed in pixel coordinates, given first as the line and then the column of the pixel in the image. The structure of this file is the following.
# 3D point coordinates
4                 # Number of 3D points in the file (minimum is four)
0.01 0.01 0.01    #  \
...               #  | 3D coordinates in meters in the object frame
0.01 -0.01 -0.01  #  /
# corresponding 2D point coordinates
4                 # Number of image points in the file (has to be the same as the number of 3D points)
100 200           #  \
...               #  | 2D coordinates in pixel in the image
50 10             #  /
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- I_color1
Input color image for the first camera.
- I_color2
Input color image for the second camera.
- initFile1
Path to the file containing all the points for the first camera.
- initFile2
Path to the file containing all the points for the second camera.
initFromPoints(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray], mapOfInitPoints: dict[str, str]) -> None
initFromPoints(self: visp._visp.mbt.MbGenericTracker, mapOfColorImages: dict[str, visp._visp.core.ImageRGBa], mapOfInitPoints: dict[str, str]) -> None
initFromPoints(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, initFile: str) -> None
Initialise the tracker by reading 3D point coordinates and the corresponding 2D image point coordinates from a file. Comments starting with the # character are allowed. 3D point coordinates are expressed in meters in the object frame with X, Y and Z values. 2D point coordinates are expressed in pixel coordinates, given first as the line and then the column of the pixel in the image. The structure of this file is the following.
# 3D point coordinates
4                 # Number of 3D points in the file (minimum is four)
0.01 0.01 0.01    #  \
...               #  | 3D coordinates in meters in the object frame
0.01 -0.01 -0.01  #  /
# corresponding 2D point coordinates
4                 # Number of image points in the file (has to be the same as the number of 3D points)
100 200           #  \
...               #  | 2D coordinates in pixel in the image
50 10             #  /
- Parameters:
- I
Input grayscale image
- initFile
Path to the file containing all the points.
initFromPoints(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, initFile: str) -> None
Initialise the tracker by reading 3D point coordinates and the corresponding 2D image point coordinates from a file. Comments starting with the # character are allowed. 3D point coordinates are expressed in meters in the object frame with X, Y and Z values. 2D point coordinates are expressed in pixel coordinates, given first as the line and then the column of the pixel in the image. The structure of this file is the following.
# 3D point coordinates
4                 # Number of 3D points in the file (minimum is four)
0.01 0.01 0.01    #  \
...               #  | 3D coordinates in meters in the object frame
0.01 -0.01 -0.01  #  /
# corresponding 2D point coordinates
4                 # Number of image points in the file (has to be the same as the number of 3D points)
100 200           #  \
...               #  | 2D coordinates in pixel in the image
50 10             #  /
- Parameters:
- I_color
Input color image
- initFile
Path to the file containing all the points.
initFromPoints(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, points2D_list: list[visp._visp.core.ImagePoint], points3D_list: list[visp._visp.core.Point]) -> None
Initialise the tracking with the list of image points (points2D_list) and the list of corresponding 3D points (object frame) (points3D_list).
- Parameters:
- I
Input grayscale image
- points2D_list
List of image points.
- points3D_list
List of 3D points (object frame).
initFromPoints(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, points2D_list: list[visp._visp.core.ImagePoint], points3D_list: list[visp._visp.core.Point]) -> None
Initialise the tracking with the list of image points (points2D_list) and the list of corresponding 3D points (object frame) (points3D_list).
- Parameters:
- I_color
Input color image
- points2D_list
List of image points.
- points3D_list
List of 3D points (object frame).
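A sketch of the list-based overload, with hypothetical 2D-3D correspondences (at least four are required), assuming ImagePoint and Point are constructible from their coordinates as in C++ and that tracker and I come from the surrounding setup code:

from visp.core import ImagePoint, Point

points2d = [ImagePoint(100, 200), ImagePoint(50, 10),
            ImagePoint(120, 40), ImagePoint(90, 150)]          # (i, j) pixel coordinates
points3d = [Point(0.01, 0.01, 0.01), Point(0.01, -0.01, -0.01),
            Point(-0.01, 0.01, 0.0), Point(-0.01, -0.01, 0.01)]  # meters, object frame
tracker.initFromPoints(I, points2d, points3d)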
- initFromPose(*args, **kwargs)¶
Overloaded function.
initFromPose(self: visp._visp.mbt.MbGenericTracker, I: visp._visp.core.ImageGray, cMo: visp._visp.core.HomogeneousMatrix) -> None
Initialise the tracking thanks to the pose.
- Parameters:
- I
Input grayscale image
- cMo
Pose matrix.
initFromPose(self: visp._visp.mbt.MbGenericTracker, I1: visp._visp.core.ImageGray, I2: visp._visp.core.ImageGray, initFile1: str, initFile2: str) -> None
Initialise the tracking from poses given in vpPoseVector format and read from the init pose files.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- I1
Input grayscale image for the first camera.
- I2
Input grayscale image for the second camera.
- initFile1
Init pose file for the first camera.
- initFile2
Init pose file for the second camera.
initFromPose(self: visp._visp.mbt.MbGenericTracker, I_color1: visp._visp.core.ImageRGBa, I_color2: visp._visp.core.ImageRGBa, initFile1: str, initFile2: str) -> None
Initialise the tracking from poses given in vpPoseVector format and read from the init pose files.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- I_color1
Input color image for the first camera.
- I_color2
Input color image for the second camera.
- initFile1
Init pose file for the first camera.
- initFile2
Init pose file for the second camera.
initFromPose(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray], mapOfInitPoses: dict[str, str]) -> None
Initialise the tracking from poses given in vpPoseVector format and read from the init pose files.
Note
Image and init pose file must be supplied for the reference camera. The images for all the cameras must be supplied to correctly initialize the trackers but some init pose files can be omitted. In this case, they will be initialized using the pose computed from the reference camera pose and using the known geometric transformation between each camera (see setCameraTransformationMatrix() ).
- Parameters:
- mapOfImages
Map of grayscale images.
- mapOfInitPoses
Map of init pose files.
initFromPose(self: visp._visp.mbt.MbGenericTracker, mapOfColorImages: dict[str, visp._visp.core.ImageRGBa], mapOfInitPoses: dict[str, str]) -> None
Initialise the tracking from poses given in vpPoseVector format and read from the init pose files.
Note
Image and init pose file must be supplied for the reference camera. The images for all the cameras must be supplied to correctly initialize the trackers but some init pose files can be omitted. In this case, they will be initialized using the pose computed from the reference camera pose and using the known geometric transformation between each camera (see setCameraTransformationMatrix() ).
- Parameters:
- mapOfColorImages
Map of color images.
- mapOfInitPoses
Map of init pose files.
initFromPose(self: visp._visp.mbt.MbGenericTracker, I1: visp._visp.core.ImageGray, I2: visp._visp.core.ImageGray, c1Mo: visp._visp.core.HomogeneousMatrix, c2Mo: visp._visp.core.HomogeneousMatrix) -> None
Initialize the tracking thanks to the pose.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- I1
Input grayscale image for the first camera.
- I2
Input grayscale image for the second camera.
- c1Mo
Pose matrix for the first camera.
- c2Mo
Pose matrix for the second camera.
initFromPose(self: visp._visp.mbt.MbGenericTracker, I_color1: visp._visp.core.ImageRGBa, I_color2: visp._visp.core.ImageRGBa, c1Mo: visp._visp.core.HomogeneousMatrix, c2Mo: visp._visp.core.HomogeneousMatrix) -> None
Initialize the tracking thanks to the pose.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- I_color1
Input color image for the first camera.
- I_color2
Input color image for the second camera.
- c1Mo
Pose matrix for the first camera.
- c2Mo
Pose matrix for the second camera.
initFromPose(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray], mapOfCameraPoses: dict[str, visp._visp.core.HomogeneousMatrix]) -> None
Initialize the tracking thanks to the pose.
Note
Image and camera pose must be supplied for the reference camera. The images for all the cameras must be supplied to correctly initialize the trackers but some camera poses can be omitted. In this case, they will be initialized using the pose computed from the reference camera pose and using the known geometric transformation between each camera (see setCameraTransformationMatrix() ).
- Parameters:
- mapOfImages
Map of grayscale images.
- mapOfCameraPoses
Map of pose matrix.
initFromPose(self: visp._visp.mbt.MbGenericTracker, mapOfColorImages: dict[str, visp._visp.core.ImageRGBa], mapOfCameraPoses: dict[str, visp._visp.core.HomogeneousMatrix]) -> None
Initialize the tracking thanks to the pose.
Note
Image and camera pose must be supplied for the reference camera. The images for all the cameras must be supplied to correctly initialize the trackers but some camera poses can be omitted. In this case, they will be initialized using the pose computed from the reference camera pose and using the known geometric transformation between each camera (see setCameraTransformationMatrix() ).
- Parameters:
- mapOfColorImages
Map of color images.
- mapOfCameraPoses
Map of pose matrix.
initFromPose(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, initFile: str) -> None
Initialise the tracking from a pose given in vpPoseVector format and read from the file initFile. The structure of this file is (without the comments):
// The six values of the pose vector
0.0000    //  \
0.0000    //  |
1.0000    //  |  Example of value for the pose vector where Z = 1 meter
0.0000    //  |
0.0000    //  |
0.0000    //  /
The first three values correspond to the translation and the last three to the rotation, expressed with the thetaU parametrisation (see vpThetaUVector ).
- Parameters:
- I
Input grayscale image
- initFile
Path to the file containing the pose.
initFromPose(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, initFile: str) -> None
Initialise the tracking from a pose given in vpPoseVector format and read from the file initFile. The structure of this file is (without the comments):
// The six values of the pose vector
0.0000    //  \
0.0000    //  |
1.0000    //  |  Example of value for the pose vector where Z = 1 meter
0.0000    //  |
0.0000    //  |
0.0000    //  /
The first three values correspond to the translation and the last three to the rotation, expressed with the thetaU parametrisation (see vpThetaUVector ).
- Parameters:
- I_color
Input color image
- initFile
Path to the file containing the pose.
initFromPose(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, cMo: visp._visp.core.HomogeneousMatrix) -> None
Initialise the tracking thanks to the pose.
- Parameters:
- I
Input grayscale image
- cMo
Pose matrix.
initFromPose(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, cMo: visp._visp.core.HomogeneousMatrix) -> None
Initialise the tracking thanks to the pose.
- Parameters:
- I_color
Input color image
- cMo
Pose matrix.
initFromPose(self: visp._visp.mbt.MbTracker, I: visp._visp.core.ImageGray, cPo: visp._visp.core.PoseVector) -> None
Initialise the tracking thanks to the pose vector.
- Parameters:
- I
Input grayscale image
- cPo
Pose vector.
initFromPose(self: visp._visp.mbt.MbTracker, I_color: visp._visp.core.ImageRGBa, cPo: visp._visp.core.PoseVector) -> None
Initialise the tracking thanks to the pose vector.
- Parameters:
- I_color
Input color image
- cPo
Pose vector.
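A sketch of the pose-vector overload, with a hypothetical initial pose placing the object 0.5 m in front of the camera, assuming tracker and I come from the surrounding setup code:

from visp.core import PoseVector

# tx, ty, tz in meters, then the thetaU rotation in radians.
cPo = PoseVector(0.0, 0.0, 0.5, 0.0, 0.0, 0.0)
tracker.initFromPose(I, cPo)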
- loadConfigFile(*args, **kwargs)¶
Overloaded function.
loadConfigFile(self: visp._visp.mbt.MbGenericTracker, configFile: str, verbose: bool = true) -> None
Load the configuration file. This file can be in XML format (.xml) or in JSON format (.json) if ViSP is compiled with JSON support. From the configuration file, the parameters of the tracker are initialized: tracking parameters and camera intrinsic parameters.
- Parameters:
- configFile
full name of the xml or json file.
- verbose
verbose flag. Ignored when parsing JSON
loadConfigFile(self: visp._visp.mbt.MbGenericTracker, configFile1: str, configFile2: str, verbose: bool = true) -> None
Load the XML configuration files. From the configuration files, the parameters of the trackers are initialized: tracking parameters and camera intrinsic parameters.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- configFile1
Full name of the xml file for the first camera.
- configFile2
Full name of the xml file for the second camera.
- verbose
verbose flag.
loadConfigFile(self: visp._visp.mbt.MbGenericTracker, mapOfConfigFiles: dict[str, str], verbose: bool = true) -> None
Load the xml configuration files. From the configuration file initialize the parameters corresponding to the objects: tracking parameters, camera intrinsic parameters.
Note
Configuration files must be supplied for all the cameras.
- Parameters:
- mapOfConfigFiles
Map of xml files.
- verbose
verbose flag.
loadConfigFile(self: visp._visp.mbt.MbTracker, configFile: str, verbose: bool = true) -> None
Load a config file to parameterise the behavior of the tracker.
Virtual method to adapt to each tracker.
- Parameters:
- configFile
An xml config file to parse.
- verbose
verbose flag.
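As a sketch, the multi-camera overload can be used as follows, assuming two cameras named "Camera1" and "Camera2" and hypothetical configuration files cam1.xml and cam2.xml; the explicit int() casts on the enum values may be unnecessary depending on how the binding exposes the tracker types.
from visp.mbt import MbGenericTracker

# Two-camera tracker using edge features for both cameras (assumed camera names)
tracker = MbGenericTracker(["Camera1", "Camera2"],
                           [int(MbGenericTracker.EDGE_TRACKER),
                            int(MbGenericTracker.EDGE_TRACKER)])
# One configuration file per camera (hypothetical file names)
tracker.loadConfigFile({"Camera1": "cam1.xml", "Camera2": "cam2.xml"}, verbose=True)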
- loadModel(*args, **kwargs)¶
Overloaded function.
loadModel(self: visp._visp.mbt.MbGenericTracker, modelFile: str, verbose: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None
Load a 3D model from the file in parameter. This file must either be a vrml file (.wrl) or a CAO file (.cao). CAO format is described in the loadCAOModel() method.
Note
All the trackers will use the same model in case of stereo / multiple cameras configuration.
- Parameters:
- modelFile
the file containing the 3D model description. The extension of this file is either .wrl or .cao.
- verbose
verbose option to print additional information when loading CAO model files which include other CAO model files.
- T
optional transformation matrix (currently only for .cao) to transform 3D points expressed in the original object frame to the desired object frame.
loadModel(self: visp._visp.mbt.MbGenericTracker, modelFile1: str, modelFile2: str, verbose: bool = false, T1: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix(), T2: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None
Load a 3D model from the file in parameter. This file must either be a vrml file (.wrl) or a CAO file (.cao). CAO format is described in the loadCAOModel() method.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- modelFile1
the file containing the 3D model description for the first camera. The extension of this file is either .wrl or .cao.
- modelFile2
the file containing the 3D model description for the second camera. The extension of this file is either .wrl or .cao.
- verbose
verbose option to print additional information when loading CAO model files which include other CAO model files.
- T1
optional transformation matrix (currently only for .cao) to transform 3D points in modelFile1 expressed in the original object frame to the desired object frame.
- T2
optional transformation matrix (currently only for .cao) to transform 3D points in modelFile2 expressed in the original object frame to the desired object frame ( T2==T1 if the two models have the same object frame which should be the case most of the time).
loadModel(self: visp._visp.mbt.MbGenericTracker, mapOfModelFiles: dict[str, str], verbose: bool = false, mapOfT: dict[str, visp._visp.core.HomogeneousMatrix] = std::map<std::string,vpHomogeneousMatrix>()) -> None
Load a 3D model from the file in parameter. This file must either be a vrml file (.wrl) or a CAO file (.cao). CAO format is described in the loadCAOModel() method.
Note
Each camera must have a model file.
- Parameters:
- mapOfModelFiles
map of files containing the 3D model description. The extension of this file is either .wrl or .cao.
- verbose
verbose option to print additional information when loading CAO model files which include other CAO model files.
- mapOfT
optional map of transformation matrices (currently only for .cao) to transform 3D points in mapOfModelFiles expressed in the original object frame to the desired object frame (if the models have the same object frame which should be the case most of the time, all the transformation matrices are identical).
loadModel(self: visp._visp.mbt.MbTracker, modelFile: str, verbose: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None
Load a 3D model from the file in parameter. This file must either be a vrml file (.wrl) or a CAO file (.cao). CAO format is described in the loadCAOModel() method.
- Parameters:
- modelFile
the file containing the 3D model description. The extension of this file is either .wrl or .cao.
- verbose
verbose option to print additional information when loading CAO model files which include other CAO model files.
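A minimal loading sketch, under the assumption that a model.cao file exists on disk; the frame-change matrix T is left as identity and should be replaced by the desired object-frame transformation when needed.
from visp.core import HomogeneousMatrix
from visp.mbt import MbGenericTracker

tracker = MbGenericTracker()
T = HomogeneousMatrix()                 # identity; replace with the desired object-frame transformation
tracker.loadModel("model.cao", verbose=True, T=T)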
- reInitModel(*args, **kwargs)¶
Overloaded function.
reInitModel(self: visp._visp.mbt.MbGenericTracker, I: visp._visp.core.ImageGray, cad_name: str, cMo: visp._visp.core.HomogeneousMatrix, verbose: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None
Re-initialize the model used by the tracker.
- Parameters:
- I
The grayscale image containing the object to initialize.
- cad_name
Path to the file containing the 3D model description.
- cMo
The new vpHomogeneousMatrix between the camera and the new model.
- verbose
verbose option to print additional information when loading CAO model files which include other CAO model files.
- T
optional transformation matrix (currently only for .cao).
reInitModel(self: visp._visp.mbt.MbGenericTracker, I_color: visp._visp.core.ImageRGBa, cad_name: str, cMo: visp._visp.core.HomogeneousMatrix, verbose: bool = false, T: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None
Re-initialize the model used by the tracker.
- Parameters:
- I_color
The color image containing the object to initialize.
- cad_name
Path to the file containing the 3D model description.
- cMo
The new vpHomogeneousMatrix between the camera and the new model.
- verbose
verbose option to print additional information when loading CAO model files which include other CAO model files.
- T
optional transformation matrix (currently only for .cao).
reInitModel(self: visp._visp.mbt.MbGenericTracker, I1: visp._visp.core.ImageGray, I2: visp._visp.core.ImageGray, cad_name1: str, cad_name2: str, c1Mo: visp._visp.core.HomogeneousMatrix, c2Mo: visp._visp.core.HomogeneousMatrix, verbose: bool = false, T1: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix(), T2: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None
Re-initialize the model used by the tracker.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- I1
The grayscale image containing the object to initialize for the first camera.
- I2
The grayscale image containing the object to initialize for the second camera.
- cad_name1
Path to the file containing the 3D model description for the first camera.
- cad_name2
Path to the file containing the 3D model description for the second camera.
- c1Mo
The new vpHomogeneousMatrix between the first camera and the new model.
- c2Mo
The new vpHomogeneousMatrix between the second camera and the new model.
- verbose
verbose option to print additional information when loading CAO model files which include other CAO model files.
- T1
optional transformation matrix (currently only for .cao) to transform 3D points in cad_name1 expressed in the original object frame to the desired object frame.
- T2
optional transformation matrix (currently only for .cao) to transform 3D points in cad_name2 expressed in the original object frame to the desired object frame ( T2==T1 if the two models have the same object frame which should be the case most of the time).
reInitModel(self: visp._visp.mbt.MbGenericTracker, I_color1: visp._visp.core.ImageRGBa, I_color2: visp._visp.core.ImageRGBa, cad_name1: str, cad_name2: str, c1Mo: visp._visp.core.HomogeneousMatrix, c2Mo: visp._visp.core.HomogeneousMatrix, verbose: bool = false, T1: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix(), T2: visp._visp.core.HomogeneousMatrix = vpHomogeneousMatrix()) -> None
Re-initialize the model used by the tracker.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- I_color1
The color image containing the object to initialize for the first camera.
- I_color2
The color image containing the object to initialize for the second camera.
- cad_name1
Path to the file containing the 3D model description for the first camera.
- cad_name2
Path to the file containing the 3D model description for the second camera.
- c1Mo
The new vpHomogeneousMatrix between the first camera and the new model.
- c2Mo
The new vpHomogeneousMatrix between the second camera and the new model.
- verbose
verbose option to print additional information when loading CAO model files which include other CAO model files.
- T1
optional transformation matrix (currently only for .cao) to transform 3D points in cad_name1 expressed in the original object frame to the desired object frame.
- T2
optional transformation matrix (currently only for .cao) to transform 3D points in cad_name2 expressed in the original object frame to the desired object frame ( T2==T1 if the two models have the same object frame which should be the case most of the time).
reInitModel(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray], mapOfModelFiles: dict[str, str], mapOfCameraPoses: dict[str, visp._visp.core.HomogeneousMatrix], verbose: bool = false, mapOfT: dict[str, visp._visp.core.HomogeneousMatrix] = std::map<std::string,vpHomogeneousMatrix>()) -> None
Re-initialize the model used by the tracker.
- Parameters:
- mapOfImages
Map of grayscale images.
- mapOfModelFiles
Map of model files.
- mapOfCameraPoses
The new vpHomogeneousMatrix between the cameras and the current object position.
- verbose
Verbose option to print additional information when loading CAO model files which include other CAO model files.
- mapOfT
optional map of transformation matrices (currently only for .cao) to transform 3D points in mapOfModelFiles expressed in the original object frame to the desired object frame (if the models have the same object frame which should be the case most of the time, all the transformation matrices are identical).
reInitModel(self: visp._visp.mbt.MbGenericTracker, mapOfColorImages: dict[str, visp._visp.core.ImageRGBa], mapOfModelFiles: dict[str, str], mapOfCameraPoses: dict[str, visp._visp.core.HomogeneousMatrix], verbose: bool = false, mapOfT: dict[str, visp._visp.core.HomogeneousMatrix] = std::map<std::string,vpHomogeneousMatrix>()) -> None
Re-initialize the model used by the tracker.
- Parameters:
- mapOfColorImages
Map of color images.
- mapOfModelFiles
Map of model files.
- mapOfCameraPoses
The new vpHomogeneousMatrix between the cameras and the current object position.
- verbose
Verbose option to print additional information when loading CAO model files which include other CAO model files.
- mapOfT
optional map of transformation matrices (currently only for .cao) to transform 3D points in mapOfModelFiles expressed in the original object frame to the desired object frame (if the models have the same object frame which should be the case most of the time, all the transformation matrices are identical).
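The sketch below swaps the 3D model at run time while keeping the tracker alive; the file names, image size, and the new pose are placeholders.
from visp.core import ImageGray, HomogeneousMatrix
from visp.mbt import MbGenericTracker

tracker = MbGenericTracker()
tracker.loadModel("old_model.cao")      # hypothetical initial model
I = ImageGray(480, 640, 0)              # current grayscale frame (placeholder)
cMo = HomogeneousMatrix()               # pose of the object w.r.t. the camera for the new model
tracker.reInitModel(I, "new_model.cao", cMo, verbose=False)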
- resetTracker(self) None ¶
Reset the tracker. The model is removed and the pose is set to identity. The tracker needs to be initialized with a new model and a new pose.
- saveConfigFile(self, settingsFile: str) None ¶
Save the current tracker settings to a configuration file. This configuration does not include the model path, only the different tracker and camera parameters. As of now, only saving to a JSON file is supported.
- setAngleAppear(*args, **kwargs)¶
Overloaded function.
setAngleAppear(self: visp._visp.mbt.MbGenericTracker, a: float) -> None
Set the angle used to test polygons appearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value lower than this parameter, the polygon is considered as appearing. The polygon will then be tracked.
- Parameters:
- a
new angle in radian.
setAngleAppear(self: visp._visp.mbt.MbGenericTracker, a1: float, a2: float) -> None
Set the angle used to test polygons appearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value lower than this parameter, the polygon is considered as appearing. The polygon will then be tracked.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- a1
new angle in radian for the first camera.
- a2
new angle in radian for the second camera.
setAngleAppear(self: visp._visp.mbt.MbGenericTracker, mapOfAngles: dict[str, float]) -> None
Set the angle used to test polygons appearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value lower than this parameter, the polygon is considered as appearing. The polygon will then be tracked.
- Parameters:
- mapOfAngles
Map of new angles in radian.
setAngleAppear(self: visp._visp.mbt.MbTracker, a: float) -> None
Set the angle used to test polygons appearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value lower than this parameter, the polygon is considered as appearing. The polygon will then be tracked.
- Parameters:
- a
new angle in radian.
- setAngleDisappear(*args, **kwargs)¶
Overloaded function.
setAngleDisappear(self: visp._visp.mbt.MbGenericTracker, a: float) -> None
Set the angle used to test polygons disappearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value greater than this parameter, the polygon is considered as disappearing. The tracking of the polygon will then be stopped.
- Parameters:
- a
new angle in radian.
setAngleDisappear(self: visp._visp.mbt.MbGenericTracker, a1: float, a2: float) -> None
Set the angle used to test polygons disappearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value greater than this parameter, the polygon is considered as disappearing. The tracking of the polygon will then be stopped.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- a1
new angle in radian for the first camera.
- a2
new angle in radian for the second camera.
setAngleDisappear(self: visp._visp.mbt.MbGenericTracker, mapOfAngles: dict[str, float]) -> None
Set the angle used to test polygons disappearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value greater than this parameter, the polygon is considered as disappearing. The tracking of the polygon will then be stopped.
- Parameters:
- mapOfAngles
Map of new angles in radian.
setAngleDisappear(self: visp._visp.mbt.MbTracker, a: float) -> None
Set the angle used to test polygons disappearance. If the angle between the normal of the polygon and the line going from the camera to the polygon center has a value greater than this parameter, the polygon is considered as disappearing. The tracking of the polygon will then be stopped.
- Parameters:
- a
new angle in radian.
- setCameraParameters(*args, **kwargs)¶
Overloaded function.
setCameraParameters(self: visp._visp.mbt.MbGenericTracker, camera: visp._visp.core.CameraParameters) -> None
Set the camera parameters.
- Parameters:
- camera
the new camera parameters.
setCameraParameters(self: visp._visp.mbt.MbGenericTracker, camera1: visp._visp.core.CameraParameters, camera2: visp._visp.core.CameraParameters) -> None
Set the camera parameters.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- camera1
the new camera parameters for the first camera.
- camera2
the new camera parameters for the second camera.
setCameraParameters(self: visp._visp.mbt.MbGenericTracker, mapOfCameraParameters: dict[str, visp._visp.core.CameraParameters]) -> None
Set the camera parameters.
Note
This function will set the camera parameters only for the supplied camera names.
- Parameters:
- mapOfCameraParameters
map of new camera parameters.
setCameraParameters(self: visp._visp.mbt.MbTracker, cam: visp._visp.core.CameraParameters) -> None
Set the camera parameters.
- Parameters:
- cam
The new camera parameters.
- setCameraTransformationMatrix(*args, **kwargs)¶
Overloaded function.
setCameraTransformationMatrix(self: visp._visp.mbt.MbGenericTracker, cameraName: str, cameraTransformationMatrix: visp._visp.core.HomogeneousMatrix) -> None
Set the camera transformation matrix for the specified camera ( \(_{}^{c_{current}}\textrm{M}_{c_{reference}}\) ).
- Parameters:
- cameraName
Camera name.
- cameraTransformationMatrix
Camera transformation matrix between the current and the reference camera.
setCameraTransformationMatrix(self: visp._visp.mbt.MbGenericTracker, mapOfTransformationMatrix: dict[str, visp._visp.core.HomogeneousMatrix]) -> None
Set the map of camera transformation matrices ( \(_{}^{c_1}\textrm{M}_{c_1}, _{}^{c_2}\textrm{M}_{c_1}, _{}^{c_3}\textrm{M}_{c_1}, \cdots, _{}^{c_n}\textrm{M}_{c_1}\) ).
- Parameters:
- mapOfTransformationMatrix
map of camera transformation matrices.
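For a multi-camera setup, each matrix relates a camera to the reference camera. A sketch with assumed camera names follows; the extrinsic matrix c2Mc1 is a placeholder that should come from calibration.
from visp.core import HomogeneousMatrix
from visp.mbt import MbGenericTracker

tracker = MbGenericTracker(["Camera1", "Camera2"],
                           [int(MbGenericTracker.EDGE_TRACKER),
                            int(MbGenericTracker.EDGE_TRACKER)])
c1Mc1 = HomogeneousMatrix()             # reference camera: identity
c2Mc1 = HomogeneousMatrix()             # replace with the calibrated camera-2 to camera-1 transformation
tracker.setCameraTransformationMatrix({"Camera1": c1Mc1, "Camera2": c2Mc1})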
- setClipping(*args, **kwargs)¶
Overloaded function.
setClipping(self: visp._visp.mbt.MbGenericTracker, flags: int) -> None
Specify which clipping to use.
Note
See vpMbtPolygonClipping
Note
This function will set the new parameter for all the cameras.
- Parameters:
- flags
New clipping flags.
setClipping(self: visp._visp.mbt.MbGenericTracker, flags1: int, flags2: int) -> None
Specify which clipping to use.
Note
See vpMbtPolygonClipping
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- flags1
New clipping flags for the first camera.
- flags2
New clipping flags for the second camera.
setClipping(self: visp._visp.mbt.MbGenericTracker, mapOfClippingFlags: dict[str, int]) -> None
Specify which clipping to use.
Note
See vpMbtPolygonClipping
- Parameters:
- mapOfClippingFlags
Map of new clipping flags.
setClipping(self: visp._visp.mbt.MbTracker, flags: int) -> None
Specify which clipping to use.
Note
See vpMbtPolygonClipping
- Parameters:
- flags
New clipping flags.
- setCovarianceComputation(self, flag: bool) None ¶
Set if the covariance matrix has to be computed.
Note
See getCovarianceMatrix()
- setDepthDenseFilteringMaxDistance(self, maxDistance: float) None ¶
Set maximum distance to consider a face. You should use the maximum depth range of the sensor used.
Note
See setDepthDenseFilteringMethod
Note
This function will set the new parameter for all the cameras.
- setDepthDenseFilteringMethod(self, method: int) None ¶
Set the method used to discard a face, e.g. if it is outside of the depth range.
Note
See vpMbtFaceDepthDense::vpDepthDenseFilteringType
Note
This function will set the new parameter for all the cameras.
- setDepthDenseFilteringMinDistance(self, minDistance: float) None ¶
Set minimum distance to consider a face. You should use the minimum depth range of the sensor used.
Note
See setDepthDenseFilteringMethod
Note
This function will set the new parameter for all the cameras.
- setDepthDenseFilteringOccupancyRatio(self, occupancyRatio: float) None ¶
Set depth occupancy ratio to consider a face, used to discard faces where the depth map is not well reconstructed.
Note
See setDepthDenseFilteringMethod
Note
This function will set the new parameter for all the cameras.
- setDepthDenseSamplingStep(self, stepX: int, stepY: int) None ¶
Set depth dense sampling step.
Note
This function will set the new parameter for all the cameras.
- setDepthNormalFaceCentroidMethod(self, method: visp._visp.mbt.MbtFaceDepthNormal.FaceCentroidType) None ¶
Set method to compute the centroid for display for depth tracker.
Note
This function will set the new parameter for all the cameras.
- Parameters:
- method: visp._visp.mbt.MbtFaceDepthNormal.FaceCentroidType¶
Centroid computation method.
- setDepthNormalFeatureEstimationMethod(self, method: visp._visp.mbt.MbtFaceDepthNormal.FeatureEstimationType) None ¶
Set depth feature estimation method.
Note
This function will set the new parameter for all the cameras.
- Parameters:
- method: visp._visp.mbt.MbtFaceDepthNormal.FeatureEstimationType¶
Depth feature estimation method.
- setDepthNormalPclPlaneEstimationMethod(self, method: int) None ¶
Set depth PCL plane estimation method.
Note
This function will set the new parameter for all the cameras.
- setDepthNormalPclPlaneEstimationRansacMaxIter(self, maxIter: int) None ¶
Set depth PCL RANSAC maximum number of iterations.
Note
This function will set the new parameter for all the cameras.
- setDepthNormalPclPlaneEstimationRansacThreshold(self, threshold: float) None ¶
Set depth PCL RANSAC threshold.
Note
This function will set the new parameter for all the cameras.
- setDepthNormalSamplingStep(self, stepX: int, stepY: int) None ¶
Set depth sampling step.
Note
This function will set the new parameter for all the cameras.
- setDisplayFeatures(*args, **kwargs)¶
Overloaded function.
setDisplayFeatures(self: visp._visp.mbt.MbGenericTracker, displayF: bool) -> None
Enable the display of the features. By features, we mean the moving edges (ME) and the KLT points if used.
Note that if present, the moving edges can be displayed with different colors:
If green : The ME is a good point.
If blue : The ME is removed because of a contrast problem during the tracking phase.
If purple : The ME is removed because of a threshold problem during the tracking phase.
If red : The ME is removed because it is rejected by the robust approach in the virtual visual servoing scheme.
Note
This function will set the new parameter for all the cameras.
- Parameters:
- displayF
set it to true to display the features.
setDisplayFeatures(self: visp._visp.mbt.MbTracker, displayF: bool) -> None
Enable the display of the features. By features, we mean the moving edges (ME) and the KLT points if used.
Note that if present, the moving edges can be displayed with different colors:
If green : The ME is a good point.
If blue : The ME is removed because of a contrast problem during the tracking phase.
If purple : The ME is removed because of a threshold problem during the tracking phase.
If red : The ME is removed because it is rejected by the robust approach in the virtual visual servoing scheme.
- Parameters:
- displayF
set it to true to display the features.
- setEstimatedDoF(self, v: visp._visp.core.ColVector) None ¶
Set a 6-dim column vector representing which degrees of freedom in the object frame are estimated by the tracker. When all components are set to 1, all 6 dof are estimated. A short usage sketch is given after the list below.
Below we give the correspondence between the index of the vector and the considered dof:
v[0] = 1 if translation along X is estimated, 0 otherwise;
v[1] = 1 if translation along Y is estimated, 0 otherwise;
v[2] = 1 if translation along Z is estimated, 0 otherwise;
v[3] = 1 if rotation along X is estimated, 0 otherwise;
v[4] = 1 if rotation along Y is estimated, 0 otherwise;
v[5] = 1 if rotation along Z is estimated, 0 otherwise;
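A minimal sketch, assuming ColVector supports element assignment as written below, that keeps the three translations and the rotation around Z while freezing the rotations around X and Y.
from visp.core import ColVector
from visp.mbt import MbGenericTracker

tracker = MbGenericTracker()
dof = ColVector(6, 1.0)                 # start with all 6 dof enabled
dof[3] = 0.0                            # freeze rotation around X
dof[4] = 0.0                            # freeze rotation around Y
tracker.setEstimatedDoF(dof)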
- setFarClippingDistance(*args, **kwargs)¶
Overloaded function.
setFarClippingDistance(self: visp._visp.mbt.MbGenericTracker, dist: float) -> None
Set the far distance for clipping.
Note
This function will set the new parameter for all the cameras.
- Parameters:
- dist
Far clipping value.
setFarClippingDistance(self: visp._visp.mbt.MbGenericTracker, dist1: float, dist2: float) -> None
Set the far distance for clipping.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- dist1
Far clipping value for the first camera.
- dist2
Far clipping value for the second camera.
setFarClippingDistance(self: visp._visp.mbt.MbGenericTracker, mapOfClippingDists: dict[str, float]) -> None
Set the far distance for clipping.
- Parameters:
- mapOfClippingDists
Map of far clipping values.
setFarClippingDistance(self: visp._visp.mbt.MbTracker, dist: float) -> None
Set the far distance for clipping.
- Parameters:
- dist
Far clipping value.
- setFeatureFactors(self, mapOfFeatureFactors: dict[visp._visp.mbt.MbGenericTracker.TrackerType, float]) None ¶
Set the feature factors used in the VVS stage (weighting between the feature types).
- Parameters:
- mapOfFeatureFactors: dict[visp._visp.mbt.MbGenericTracker.TrackerType, float]¶
Map of feature factors.
- setGoodMovingEdgesRatioThreshold(self, threshold: float) None ¶
Set the threshold value, between 0 and 1, over the ratio of good moving edges. It is used to decide whether the tracker has enough valid moving edges to compute a pose. A value of 1 means that all moving edges must be good to obtain a valid pose, while 0.1 means that 10% of the moving edges are enough to declare a pose valid.
Note
See getGoodMovingEdgesRatioThreshold()
Note
This function will set the new parameter for all the cameras.
- setInitialMu(self, mu: float) None ¶
Set the initial value of mu for the Levenberg Marquardt optimization loop.
- setKltMaskBorder(*args, **kwargs)¶
Overloaded function.
setKltMaskBorder(self: visp._visp.mbt.MbGenericTracker, e: int) -> None
setKltMaskBorder(self: visp._visp.mbt.MbGenericTracker, e1: int, e2: int) -> None
setKltMaskBorder(self: visp._visp.mbt.MbGenericTracker, mapOfErosions: dict[str, int]) -> None
- setKltOpencv(*args, **kwargs)¶
Overloaded function.
setKltOpencv(self: visp._visp.mbt.MbGenericTracker, t: visp._visp.klt.KltOpencv) -> None
setKltOpencv(self: visp._visp.mbt.MbGenericTracker, t1: visp._visp.klt.KltOpencv, t2: visp._visp.klt.KltOpencv) -> None
setKltOpencv(self: visp._visp.mbt.MbGenericTracker, mapOfKlts: dict[str, visp._visp.klt.KltOpencv]) -> None
- setLod(*args, **kwargs)¶
Overloaded function.
setLod(self: visp._visp.mbt.MbGenericTracker, useLod: bool, name: str = ) -> None
Set the flag to consider if the level of detail (LOD) is used.
Note
See setMinLineLengthThresh() , setMinPolygonAreaThresh()
Note
This function will set the new parameter for all the cameras.
- Parameters:
- useLod
true if the level of detail must be used, false otherwise. When true, two parameters can be set, see setMinLineLengthThresh() and setMinPolygonAreaThresh() .
- name
name of the face we want to modify the LOD parameter.
setLod(self: visp._visp.mbt.MbTracker, useLod: bool, name: str = ) -> None
Set the flag to consider if the level of detail (LOD) is used.
Note
See setMinLineLengthThresh() , setMinPolygonAreaThresh()
- Parameters:
- useLod
true if the level of detail must be used, false otherwise. When true, two parameters can be set, see setMinLineLengthThresh() and setMinPolygonAreaThresh() .
- name
name of the face we want to modify the LOD parameter.
- setMask(*args, **kwargs)¶
Overloaded function.
setMask(self: visp._visp.mbt.MbGenericTracker, mask: vpImage<bool>) -> None
Set the visibility mask.
- Parameters:
- mask
visibility mask.
setMask(self: visp._visp.mbt.MbTracker, mask: vpImage<bool>) -> None
- setMinLineLengthThresh(*args, **kwargs)¶
Overloaded function.
setMinLineLengthThresh(self: visp._visp.mbt.MbGenericTracker, minLineLengthThresh: float, name: str = ) -> None
Set the threshold for the minimum line length to be considered as visible in the LOD case.
Note
See setLod() , setMinPolygonAreaThresh()
Note
This function will set the new parameter for all the cameras.
- Parameters:
- minLineLengthThresh
threshold for the minimum line length in pixel.
- name
name of the face we want to modify the LOD threshold.
setMinLineLengthThresh(self: visp._visp.mbt.MbTracker, minLineLengthThresh: float, name: str = ) -> None
Set the threshold for the minimum line length to be considered as visible in the LOD case.
Note
See setLod() , setMinPolygonAreaThresh()
- Parameters:
- minLineLengthThresh
threshold for the minimum line length in pixel.
- name
name of the face we want to modify the LOD threshold.
- setMinPolygonAreaThresh(*args, **kwargs)¶
Overloaded function.
setMinPolygonAreaThresh(self: visp._visp.mbt.MbGenericTracker, minPolygonAreaThresh: float, name: str = ) -> None
Set the minimum polygon area to be considered as visible in the LOD case.
Note
See setLod() , setMinLineLengthThresh()
Note
This function will set the new parameter for all the cameras.
- Parameters:
- minPolygonAreaThresh
threshold for the minimum polygon area in pixel.
- name
name of the face we want to modify the LOD threshold.
setMinPolygonAreaThresh(self: visp._visp.mbt.MbTracker, minPolygonAreaThresh: float, name: str = ) -> None
Set the minimum polygon area to be considered as visible in the LOD case.
Note
See setLod() , setMinLineLengthThresh()
- Parameters:
- minPolygonAreaThresh
threshold for the minimum polygon area in pixel.
- name
name of the face we want to modify the LOD threshold.
- setMovingEdge(*args, **kwargs)¶
Overloaded function.
setMovingEdge(self: visp._visp.mbt.MbGenericTracker, me: visp._visp.me.Me) -> None
Set the moving edge parameters.
Note
This function will set the new parameter for all the cameras.
- Parameters:
- me
an instance of vpMe containing all the desired parameters.
setMovingEdge(self: visp._visp.mbt.MbGenericTracker, me1: visp._visp.me.Me, me2: visp._visp.me.Me) -> None
Set the moving edge parameters.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- me1
an instance of vpMe containing all the desired parameters for the first camera.
- me2
an instance of vpMe containing all the desired parameters for the second camera.
setMovingEdge(self: visp._visp.mbt.MbGenericTracker, mapOfMe: dict[str, visp._visp.me.Me]) -> None
Set the moving edge parameters.
- Parameters:
- mapOfMe
Map of vpMe containing all the desired parameters.
- setNearClippingDistance(*args, **kwargs)¶
Overloaded function.
setNearClippingDistance(self: visp._visp.mbt.MbGenericTracker, dist: float) -> None
Set the near distance for clipping.
Note
This function will set the new parameter for all the cameras.
- Parameters:
- dist
Near clipping value.
setNearClippingDistance(self: visp._visp.mbt.MbGenericTracker, dist1: float, dist2: float) -> None
Set the near distance for clipping.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- dist1
Near clipping value for the first camera.
- dist2
Near clipping value for the second camera.
setNearClippingDistance(self: visp._visp.mbt.MbGenericTracker, mapOfDists: dict[str, float]) -> None
Set the near distance for clipping.
- Parameters:
- mapOfDists
Map of near clipping values.
setNearClippingDistance(self: visp._visp.mbt.MbTracker, dist: float) -> None
Set the near distance for clipping.
- Parameters:
- dist
Near clipping value.
- setOgreShowConfigDialog(*args, **kwargs)¶
Overloaded function.
setOgreShowConfigDialog(self: visp._visp.mbt.MbGenericTracker, showConfigDialog: bool) -> None
Enable/Disable the appearance of Ogre config dialog on startup.
Warning
This method only has an effect when Ogre is used and the Ogre visibility test is enabled using setOgreVisibilityTest() with a true parameter.
Note
This function will set the new parameter for all the cameras.
- Parameters:
- showConfigDialog
if true, shows Ogre dialog window (used to set Ogre rendering options) when Ogre visibility is enabled. By default, this functionality is turned off.
setOgreShowConfigDialog(self: visp._visp.mbt.MbTracker, showConfigDialog: bool) -> None
Enable/Disable the appearance of Ogre config dialog on startup.
Warning
This method only has an effect when Ogre is used and the Ogre visibility test is enabled using setOgreVisibilityTest() with a true parameter.
- Parameters:
- showConfigDialog
if true, shows Ogre dialog window (used to set Ogre rendering options) when Ogre visibility is enabled. By default, this functionality is turned off.
- setOgreVisibilityTest(*args, **kwargs)¶
Overloaded function.
setOgreVisibilityTest(self: visp._visp.mbt.MbGenericTracker, v: bool) -> None
Use Ogre3D for visibility tests
Warning
This function has to be called before the initialization of the tracker.
Note
This function will set the new parameter for all the cameras.
- Parameters:
- v
True to use it, False otherwise
setOgreVisibilityTest(self: visp._visp.mbt.MbTracker, v: bool) -> None
Use Ogre3D for visibility tests
Warning
This function has to be called before the initialization of the tracker.
- Parameters:
- v
True to use it, False otherwise
- setOptimizationMethod(*args, **kwargs)¶
Overloaded function.
setOptimizationMethod(self: visp._visp.mbt.MbGenericTracker, opt: visp._visp.mbt.MbTracker.MbtOptimizationMethod) -> None
setOptimizationMethod(self: visp._visp.mbt.MbTracker, opt: visp._visp.mbt.MbTracker.MbtOptimizationMethod) -> None
- setPose(*args, **kwargs)¶
Overloaded function.
setPose(self: visp._visp.mbt.MbGenericTracker, I: visp._visp.core.ImageGray, cdMo: visp._visp.core.HomogeneousMatrix) -> None
Set the pose to be used as input (initial guess) for the next call to the track() function. This pose is only used once.
Warning
This functionality is not available when tracking cylinders with the KLT tracker.
Note
This function will set the new parameter for all the cameras.
- Parameters:
- I
grayscale image corresponding to the desired pose.
- cdMo
Pose to affect.
setPose(self: visp._visp.mbt.MbGenericTracker, I_color: visp._visp.core.ImageRGBa, cdMo: visp._visp.core.HomogeneousMatrix) -> None
Set the pose to be used as input (initial guess) for the next call to the track() function. This pose is only used once.
Warning
This functionality is not available when tracking cylinders with the KLT tracker.
Note
This function will set the new parameter for all the cameras.
- Parameters:
- I_color
color image corresponding to the desired pose.
- cdMo
Pose to affect.
setPose(self: visp._visp.mbt.MbGenericTracker, I1: visp._visp.core.ImageGray, I2: visp._visp.core.ImageGray, c1Mo: visp._visp.core.HomogeneousMatrix, c2Mo: visp._visp.core.HomogeneousMatrix) -> None
Set the pose to be used as input for the next call to the track() function. This pose is only used once.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- I1
First grayscale image corresponding to the desired pose.
- I2
Second grayscale image corresponding to the desired pose.
- c1Mo
First pose to affect.
- c2Mo
Second pose to affect.
setPose(self: visp._visp.mbt.MbGenericTracker, I_color1: visp._visp.core.ImageRGBa, I_color2: visp._visp.core.ImageRGBa, c1Mo: visp._visp.core.HomogeneousMatrix, c2Mo: visp._visp.core.HomogeneousMatrix) -> None
Set the pose to be used as input for the next call to the track() function. This pose is only used once.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- I_color1
First color image corresponding to the desired pose.
- I_color2
Second color image corresponding to the desired pose.
- c1Mo
First pose to affect.
- c2Mo
Second pose to affect.
setPose(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray], mapOfCameraPoses: dict[str, visp._visp.core.HomogeneousMatrix]) -> None
Set the pose to be used as input for the next call to the track() function. This pose is only used once. The camera transformation matrices have to be set beforehand.
Note
Image and camera pose must be supplied for the reference camera. The images for all the cameras must be supplied to correctly initialize the trackers but some camera poses can be omitted. In this case, they will be initialized using the pose computed from the reference camera pose and using the known geometric transformation between each camera (see setCameraTransformationMatrix() ).
- Parameters:
- mapOfImages
Map of grayscale images.
- mapOfCameraPoses
Map of pose to affect to the cameras.
setPose(self: visp._visp.mbt.MbGenericTracker, mapOfColorImages: dict[str, visp._visp.core.ImageRGBa], mapOfCameraPoses: dict[str, visp._visp.core.HomogeneousMatrix]) -> None
Set the pose to be used as input for the next call to the track() function. This pose is only used once. The camera transformation matrices have to be set beforehand.
Note
Image and camera pose must be supplied for the reference camera. The images for all the cameras must be supplied to correctly initialize the trackers but some camera poses can be omitted. In this case, they will be initialized using the pose computed from the reference camera pose and using the known geometric transformation between each camera (see setCameraTransformationMatrix() ).
- Parameters:
- mapOfColorImages
Map of color images.
- mapOfCameraPoses
Map of pose to affect to the cameras.
- setPoseSavingFilename(self, filename: str) None ¶
Set the filename used to save the initial pose computed by the initClick() method. It is also used to read a previously saved pose in the same method. If the file is not set, the initClick() method will create a .0.pos file in the root directory, i.e. the directory of the file given to initClick() that holds the coordinates in the object frame.
- setProjectionErrorComputation(*args, **kwargs)¶
Overloaded function.
setProjectionErrorComputation(self: visp._visp.mbt.MbGenericTracker, flag: bool) -> None
Set whether the projection error criterion has to be computed. This criterion can be used to evaluate the quality of the tracking. It computes an angle between 0 and 90 degrees that is available with getProjectionError() . The closer the value is to 0, the better the tracking.
Note
See getProjectionError()
Note
Available only if the edge features are used (e.g. Edge tracking or Edge + KLT tracking). Otherwise, the value of 90 degrees will be returned.
- Parameters:
- flag
True if the projection error criterion has to be computed, false otherwise.
setProjectionErrorComputation(self: visp._visp.mbt.MbTracker, flag: bool) -> None
Set whether the projection error criterion has to be computed. This criterion can be used to evaluate the quality of the tracking. It computes an angle between 0 and 90 degrees that is available with getProjectionError() . The closer the value is to 0, the better the tracking.
Note
See getProjectionError()
- Parameters:
- flag
True if the projection error criterion has to be computed, false otherwise.
- setProjectionErrorDisplay(*args, **kwargs)¶
Overloaded function.
setProjectionErrorDisplay(self: visp._visp.mbt.MbGenericTracker, display: bool) -> None
Display or not gradient and model orientation when computing the projection error.
setProjectionErrorDisplay(self: visp._visp.mbt.MbTracker, display: bool) -> None
Display or not gradient and model orientation when computing the projection error.
- setProjectionErrorDisplayArrowLength(*args, **kwargs)¶
Overloaded function.
setProjectionErrorDisplayArrowLength(self: visp._visp.mbt.MbGenericTracker, length: int) -> None
Arrow length used to display gradient and model orientation for projection error computation.
setProjectionErrorDisplayArrowLength(self: visp._visp.mbt.MbTracker, length: int) -> None
Arrow length used to display gradient and model orientation for projection error computation.
- setProjectionErrorDisplayArrowThickness(*args, **kwargs)¶
Overloaded function.
setProjectionErrorDisplayArrowThickness(self: visp._visp.mbt.MbGenericTracker, thickness: int) -> None
Arrow thickness used to display gradient and model orientation for projection error computation.
setProjectionErrorDisplayArrowThickness(self: visp._visp.mbt.MbTracker, thickness: int) -> None
Arrow thickness used to display gradient and model orientation for projection error computation.
- setProjectionErrorKernelSize(self, size: int) None ¶
Set kernel size used for projection error computation.
- setProjectionErrorMovingEdge(self, me: visp._visp.me.Me) None ¶
Set Moving-Edges parameters for projection error computation.
- Parameters:
- me: visp._visp.me.Me¶
Moving-Edges parameters.
- setReferenceCameraName(self, referenceCameraName: str) None ¶
Set the reference camera name.
- setScanLineVisibilityTest(*args, **kwargs)¶
Overloaded function.
setScanLineVisibilityTest(self: visp._visp.mbt.MbGenericTracker, v: bool) -> None
setScanLineVisibilityTest(self: visp._visp.mbt.MbTracker, v: bool) -> None
- setStopCriteriaEpsilon(self, eps: float) None ¶
Set the minimal error (previous / current estimation) to determine if there is convergence or not.
- setTrackerType(*args, **kwargs)¶
Overloaded function.
setTrackerType(self: visp._visp.mbt.MbGenericTracker, type: int) -> None
Set the tracker type.
Note
This function will set the new parameter for all the cameras.
Warning
This function has to be called before the loading of the CAD model.
- Parameters:
- type
Type of features to use, see vpTrackerType (e.g. vpMbGenericTracker::EDGE_TRACKER or vpMbGenericTracker::EDGE_TRACKER | vpMbGenericTracker::KLT_TRACKER).
setTrackerType(self: visp._visp.mbt.MbGenericTracker, mapOfTrackerTypes: dict[str, int]) -> None
Set the tracker types.
Warning
This function has to be called before the loading of the CAD model.
- Parameters:
- mapOfTrackerTypes
Map of feature types to use, see vpTrackerType (e.g. vpMbGenericTracker::EDGE_TRACKER or vpMbGenericTracker::EDGE_TRACKER | vpMbGenericTracker::KLT_TRACKER).
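A sketch combining feature types per camera for a hypothetical color + depth setup. The camera names are assumptions, and the explicit int() casts together with the '|' combination mirror the C++ usage; they may be unnecessary depending on how the enum is bound.
from visp.mbt import MbGenericTracker

tracker = MbGenericTracker(["Camera1", "Camera2"],
                           [int(MbGenericTracker.EDGE_TRACKER),
                            int(MbGenericTracker.DEPTH_DENSE_TRACKER)])
# Tracker types can still be changed afterwards, as long as the CAD model is not loaded yet:
tracker.setTrackerType({
    "Camera1": int(MbGenericTracker.EDGE_TRACKER) | int(MbGenericTracker.KLT_TRACKER),
    "Camera2": int(MbGenericTracker.DEPTH_DENSE_TRACKER),
})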
- setUseDepthDenseTracking(self, name: str, useDepthDenseTracking: bool) None ¶
Set if the polygon that has the given name has to be considered during the tracking phase.
Note
This function will set the new parameter for all the cameras.
- setUseDepthNormalTracking(self, name: str, useDepthNormalTracking: bool) None ¶
Set if the polygon that has the given name has to be considered during the tracking phase.
Note
This function will set the new parameter for all the cameras.
- setUseEdgeTracking(self, name: str, useEdgeTracking: bool) None ¶
Set if the polygon that has the given name has to be considered during the tracking phase.
Note
This function will set the new parameter for all the cameras.
- track(*args, **kwargs)¶
Overloaded function.
track(self: visp._visp.mbt.MbGenericTracker, I: visp._visp.core.ImageGray) -> None
Realize the tracking of the object in the image.
Note
This function will track only for the reference camera.
- Parameters:
- I
The current grayscale image.
track(self: visp._visp.mbt.MbGenericTracker, I_color: visp._visp.core.ImageRGBa) -> None
Realize the tracking of the object in the image.
Note
This function will track only for the reference camera.
- Parameters:
- I_color
The current color image.
track(self: visp._visp.mbt.MbGenericTracker, I1: visp._visp.core.ImageGray, I2: visp._visp.core.ImageGray) -> None
Realize the tracking of the object in the image.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- I1
The first grayscale image.
- I2
The second grayscale image.
track(self: visp._visp.mbt.MbGenericTracker, I_color1: visp._visp.core.ImageRGBa, I_color2: visp._visp.core.ImageRGBa) -> None
Realize the tracking of the object in the image.
Note
This function assumes a stereo configuration of the generic tracker.
- Parameters:
- I_color1
The first color image.
- I_color2
The second color image.
track(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray]) -> None
Realize the tracking of the object in the image.
- Parameters:
- mapOfImages
Map of images.
track(self: visp._visp.mbt.MbGenericTracker, mapOfColorImages: dict[str, visp._visp.core.ImageRGBa]) -> None
Realize the tracking of the object in the image.
- Parameters:
- mapOfColorImages
Map of color images.
track(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray], mapOfPointClouds: dict[str, pcl::PointCloud<pcl::PointXYZ>]) -> None
Realize the tracking of the object in the image.
- Parameters:
- mapOfImages
Map of images.
- mapOfPointClouds
Map of PCL pointclouds.
track(self: visp._visp.mbt.MbGenericTracker, mapOfColorImages: dict[str, visp._visp.core.ImageRGBa], mapOfPointClouds: dict[str, pcl::PointCloud<pcl::PointXYZ>]) -> None
Realize the tracking of the object in the image.
- Parameters:
- mapOfColorImages
Map of color images.
- mapOfPointClouds
Map of PCL pointclouds.
track(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray], mapOfPointClouds: dict[str, list[visp._visp.core.ColVector]], mapOfPointCloudWidths: dict[str, int], mapOfPointCloudHeights: dict[str, int]) -> None
Realize the tracking of the object in the image.
- Parameters:
- mapOfImages
Map of images.
- mapOfPointClouds
Map of pointclouds.
- mapOfPointCloudWidths
Map of pointcloud widths.
- mapOfPointCloudHeights
Map of pointcloud heights.
track(self: visp._visp.mbt.MbGenericTracker, mapOfColorImages: dict[str, visp._visp.core.ImageRGBa], mapOfPointClouds: dict[str, list[visp._visp.core.ColVector]], mapOfPointCloudWidths: dict[str, int], mapOfPointCloudHeights: dict[str, int]) -> None
Realize the tracking of the object in the image.
- Parameters:
- mapOfColorImages
Map of color images.
- mapOfPointClouds
Map of pointclouds.
- mapOfPointCloudWidths
Map of pointcloud widths.
- mapOfPointCloudHeights
Map of pointcloud heights.
track(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray], mapOfPointClouds: dict[str, visp._visp.core.Matrix], mapOfPointCloudWidths: dict[str, int], mapOfPointCloudHeights: dict[str, int]) -> None
track(self: visp._visp.mbt.MbGenericTracker, mapOfColorImages: dict[str, visp._visp.core.ImageRGBa], mapOfPointClouds: dict[str, visp._visp.core.Matrix], mapOfPointCloudWidths: dict[str, int], mapOfPointCloudHeights: dict[str, int]) -> None
Realize the tracking of the object in the image.
- Parameters:
- mapOfColorImages
Map of color images.
- mapOfPointClouds
Map of pointclouds.
- mapOfPointCloudWidths
Map of pointcloud widths.
- mapOfPointCloudHeights
Map of pointcloud heights.
track(self: visp._visp.mbt.MbGenericTracker, mapOfImages: dict[str, visp._visp.core.ImageGray], mapOfPointclouds: dict[str, numpy.ndarray[numpy.float64]]) -> None
Perform tracking, with point clouds represented as NumPy arrays.
- Parameters:
- mapOfImages
Dictionary mapping from a camera name to a grayscale image
- mapOfPointclouds
Dictionary mapping from a camera name to a point cloud. A point cloud is represented as an H x W x 3 double NumPy array.
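To close, a minimal tracking-step sketch using the NumPy point-cloud overload. The camera name, image size, and point-cloud content are placeholders; in a real application the model, configuration, and initial pose would be set beforehand, and retrieving the estimated pose assumes the no-argument getPose() overload inherited from MbTracker.
import numpy as np
from visp.core import ImageGray
from visp.mbt import MbGenericTracker

tracker = MbGenericTracker(["Camera1"], [int(MbGenericTracker.DEPTH_DENSE_TRACKER)])
# ... loadModel(), loadConfigFile() and initFromPose() would be called here ...

I = ImageGray(480, 640, 0)                               # placeholder grayscale frame
pointcloud = np.zeros((480, 640, 3), dtype=np.float64)   # placeholder H x W x 3 point cloud (meters)

tracker.track({"Camera1": I}, {"Camera1": pointcloud})
cMo = tracker.getPose()                                  # estimated pose of the reference camera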