RBSilhouetteMeTracker¶
- class RBSilhouetteMeTracker(self)¶
Bases:
RBFeatureTracker
Moving edge feature tracking from depth-extracted object contours.
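A minimal usage sketch follows. It assumes the submodules are importable as visp.me and visp.rbt (the visp._visp.* names on this page are the internal module paths), uses only methods documented on this page plus standard vpMe setters, and the parameter values are illustrative.

```python
from visp.me import Me
from visp.rbt import RBSilhouetteMeTracker

tracker = RBSilhouetteMeTracker()

# Configure the moving-edge parameters used along the silhouette
# (threshold and search range values are illustrative).
me = Me()
me.setThreshold(20.0)
me.setRange(5)
tracker.setMovingEdge(me)

# Capability and mask-related queries documented below.
print(tracker.requiresRGB())               # does the tracker need an RGB image?
print(tracker.shouldUseMask())             # is mask-based filtering enabled?
print(tracker.getMinimumMaskConfidence())  # value between 0 and 1
```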
Methods
extractFeatures: Extract the geometric features from the list of collected silhouette points.
getMe: Overloaded function.
getMinimumMaskConfidence: Returns the minimum mask confidence that a pixel linked to a depth point should have for it to be kept during tracking.
loadJsonConfiguration: Overloaded function.
onTrackingIterEnd: Method called after the tracking iteration has finished.
onTrackingIterStart: Method called when starting a tracking iteration.
requiresDepth: Whether this tracker requires a depth image to extract features.
requiresRGB: Whether this tracker requires an RGB image to extract features.
requiresSilhouetteCandidates: Whether this tracker requires silhouette candidates.
shouldUseMask: Returns whether the tracking algorithm should filter out points that are unlikely to be on the object according to the mask.
trackFeatures: Track the features.
Inherited Methods
getCovariance: Retrieve the 6 x 6 pose covariance matrix, computed from the weights associated to each feature.
getLTL: Get the left-side term of the Gauss-Newton optimization term.
getLTR: Get the right-side term of the Gauss-Newton optimization term.
getNumFeatures: Get the number of features used to compute the pose update.
getVVSTrackerWeight: Get the importance of this tracker in the optimization step.
getWeightedError: Get a weighted version of the error vector.
updateCovariance: Update the covariance matrix.
vvsHasConverged: Returns whether the tracker is considered as having converged to the desired pose.
Inherited attributes (L, LTL, LTR, cov, covWeightDiag, enableDisplay, error, numFeatures, userVvsWeight, vvsConverged, weighted_error, weights) are listed under Attributes below.
Operators
__doc__
__module__
Attributes
L
LTL
LTR
__annotations__
cov
covWeightDiag
enableDisplay
error
numFeatures
userVvsWeight
vvsConverged
weighted_error
weights
- __init__(self)¶
- static computeCovarianceMatrix(A: visp._visp.core.Matrix, b: visp._visp.core.ColVector, W: visp._visp.core.Matrix) visp._visp.core.Matrix ¶
- static computeJTR(interaction: visp._visp.core.Matrix, error: visp._visp.core.ColVector, JTR: visp._visp.core.ColVector) None ¶
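As a shape-only sketch of these two static helpers (assuming the public visp.core and visp.rbt re-exports; the containers below are zero-filled placeholders standing in for real interaction, error, and weight data):

```python
from visp.core import ColVector, Matrix
from visp.rbt import RBSilhouetteMeTracker

n = 4               # hypothetical number of features; 6 pose parameters
A = Matrix(n, 6)    # interaction (Jacobian) matrix
b = ColVector(n)    # error vector
W = Matrix(n, n)    # per-feature weight matrix

# 6 x 6 covariance computed from the weighted linear system
cov = RBSilhouetteMeTracker.computeCovarianceMatrix(A, b, W)

# Fills JTR with the right-side Gauss-Newton term built from A and b
JTR = ColVector(6)
RBSilhouetteMeTracker.computeJTR(A, b, JTR)
```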
- computeVVSIter(self, frame: visp._visp.rbt.RBFeatureTrackerInput, cMo: visp._visp.core.HomogeneousMatrix, iteration: int) None ¶
- display(self, cam: visp._visp.core.CameraParameters, I: visp._visp.core.ImageGray, IRGB: visp._visp.core.ImageRGBa, depth: visp._visp.core.ImageGray) None ¶
- extractFeatures(self, frame: visp._visp.rbt.RBFeatureTrackerInput, previousFrame: visp._visp.rbt.RBFeatureTrackerInput, cMo: visp._visp.core.HomogeneousMatrix) None ¶
Extract the geometric features from the list of collected silhouette points.
- getCovariance(self) visp._visp.core.Matrix ¶
Retrieve the 6 x 6 pose covariance matrix, computed from the weights associated to each feature.
The updateCovariance method should have been called beforehand.
- getLTL(self) visp._visp.core.Matrix ¶
Get the left-side term of the Gauss-Newton optimization term.
- getLTR(self) visp._visp.core.ColVector ¶
Get the right-side term of the Gauss-Newton optimization term.
- getMe(*args, **kwargs)¶
Overloaded function.
getMe(self: visp._visp.rbt.RBSilhouetteMeTracker) -> visp._visp.me.Me
getMe(self: visp._visp.rbt.RBSilhouetteMeTracker) -> visp._visp.me.Me
- getMinimumMaskConfidence(self) float ¶
Returns the minimum mask confidence that a pixel linked to a depth point should have for it to be kept during tracking.
This value is between 0 and 1.
- getNumFeatures(self) int ¶
Get the number of features used to compute the pose update.
- getVVSTrackerWeight(self) float ¶
Get the importance of this tracker in the optimization step. The default computation is the following: \(w / N\) , where \(w\) is the weight defined by setTrackerWeight, and \(N\) is the number of features.
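For instance (an illustrative case), with a tracker weight set to 0.5 and 200 tracked moving-edge sites, the returned value would be 0.5 / 200 = 0.0025.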
- getWeightedError(self) visp._visp.core.ColVector ¶
Get a weighted version of the error vector. This should not include the userVVSWeight, but may include reweighting to remove outliers, occlusions, etc.
- initVVS(self, frame: visp._visp.rbt.RBFeatureTrackerInput, previousFrame: visp._visp.rbt.RBFeatureTrackerInput, cMo: visp._visp.core.HomogeneousMatrix) None ¶
- loadJsonConfiguration(*args, **kwargs)¶
Overloaded function.
loadJsonConfiguration(self: visp._visp.rbt.RBSilhouetteMeTracker, j: nlohmann::json) -> None
loadJsonConfiguration(self: visp._visp.rbt.RBFeatureTracker, j: nlohmann::json) -> None
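A hedged sketch of loading settings from JSON follows. The key names are purely illustrative (not the verified schema for this tracker), and it assumes the binding accepts a plain Python dict for the nlohmann::json parameter.

```python
import json
from visp.rbt import RBSilhouetteMeTracker

tracker = RBSilhouetteMeTracker()

# Hypothetical settings; key names are illustrative only.
settings = json.loads('{"useMask": true, "minMaskConfidence": 0.5}')
tracker.loadJsonConfiguration(settings)
```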
- requiresRGB(self) bool ¶
Whether this tracker requires RGB image to extract features.
- Returns:
true if the tracker requires an RGB image, false otherwise
- setMovingEdge(self, me: visp._visp.me.Me) None ¶
- shouldUseMask(self) bool ¶
Returns whether the tracking algorithm should filter out points that are unlikely to be on the object according to the mask. If the mask is not computed beforehand, then it has no effect.
- trackFeatures(self, frame: visp._visp.rbt.RBFeatureTrackerInput, previousFrame: visp._visp.rbt.RBFeatureTrackerInput, cMo: visp._visp.core.HomogeneousMatrix) None ¶
Track the features.
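To illustrate how these methods fit together, here is a hedged sketch of the per-frame sequence a parent render-based tracker would typically run. The frame objects are assumed to be RBFeatureTrackerInput instances supplied by that parent tracker, and the onTrackingIterStart/onTrackingIterEnd hooks are assumed to be inherited from RBFeatureTracker; calling these methods directly is not required in normal use.

```python
from visp.core import HomogeneousMatrix

def run_vvs(tracker, frame, previous_frame, cMo: HomogeneousMatrix, num_iters: int = 10):
    """Sketch of one feature-tracking + optimization pass (illustrative only)."""
    tracker.onTrackingIterStart()                        # assumed inherited hook
    tracker.extractFeatures(frame, previous_frame, cMo)  # collect silhouette ME sites
    tracker.trackFeatures(frame, previous_frame, cMo)    # track them in the new frame
    tracker.initVVS(frame, previous_frame, cMo)

    for it in range(num_iters):
        tracker.computeVVSIter(frame, cMo, it)           # build L, error, weights
        LTL = tracker.getLTL()                           # left-side Gauss-Newton term
        LTR = tracker.getLTR()                           # right-side Gauss-Newton term
        # ... the parent tracker combines LTL/LTR across its feature trackers
        # and updates cMo here ...

    tracker.onTrackingIterEnd()                          # assumed inherited hook
```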