RBSilhouetteMeTracker

class RBSilhouetteMeTracker

Bases: RBFeatureTracker

Moving edge feature tracking from depth-extracted object contours.
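A minimal configuration sketch is shown below. It assumes the `visp` Python package provides the tracker and a `visp.me.Me` moving-edge parameter object; all numeric values are illustrative, not recommended defaults.

```python
# Sketch: configuring an RBSilhouetteMeTracker before tracking.
# `tracker` and `me` would normally come from visp.rbt and visp.me;
# the numeric values below are illustrative, not recommended defaults.

def configure_silhouette_tracker(tracker, me):
    tracker.setMovingEdge(me)              # moving-edge search parameters
    tracker.setNumCandidates(3)            # candidates per silhouette point
    tracker.setShouldUseMask(True)         # filter points using the object mask
    tracker.setMinimumMaskConfidence(0.5)  # keep pixels with confidence >= 0.5
    tracker.setTrackerWeight(1.0)          # importance in the VVS optimization
```

The tracker is then registered with the surrounding render-based tracker, which drives feature extraction and pose optimization each frame.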

Methods

__init__

computeVVSIter

display

extractFeatures

Extract the geometric features from the list of collected silhouette points.

getGlobalConvergenceMinimumRatio

getMe

Overloaded function.

getMinRobustThreshold

getMinimumMaskConfidence

Returns the minimum mask confidence that a pixel linked to a depth point must have in order to be kept during tracking.

getNumCandidates

getSinglePointConvergenceThreshold

initVVS

loadJsonConfiguration

Overloaded function.

onTrackingIterEnd

Method called after the tracking iteration has finished.

onTrackingIterStart

Method called when starting a tracking iteration.

requiresDepth

Whether this tracker requires a depth image to extract features.

requiresRGB

Whether this tracker requires an RGB image to extract features.

requiresSilhouetteCandidates

Whether this tracker requires silhouette candidates.

setGlobalConvergenceMinimumRatio

setMinRobustThreshold

setMinimumMaskConfidence

setMovingEdge

setNumCandidates

setShouldUseMask

setSinglePointConvergenceThreshold

shouldUseMask

Returns whether the tracking algorithm should filter out points that are unlikely to be on the object according to the mask.

trackFeatures

Track the features.

Inherited Methods

getCovariance

Retrieve the 6 x 6 pose covariance matrix, computed from the weights associated to each feature.

getLTL

Get the left-hand side term (LTL) of the Gauss-Newton optimization.

getLTR

Get the right-hand side term (LTR) of the Gauss-Newton optimization.

getNumFeatures

Get the number of features used to compute the pose update.

getVVSTrackerWeight

Get the importance of this tracker in the optimization step.

getWeightedError

Get a weighted version of the error vector.

computeCovarianceMatrix

computeJTR

featuresShouldBeDisplayed

setFeaturesShouldBeDisplayed

setTrackerWeight

updateCovariance

Update the covariance matrix.

vvsHasConverged

Returns whether the tracker is considered as having converged to the desired pose.

Operators

__doc__

__init__

__module__

Attributes

L

LTL

LTR

__annotations__

cov

covWeightDiag

enableDisplay

error

numFeatures

userVvsWeight

vvsConverged

weighted_error

weights

__init__(self) -> None
static computeCovarianceMatrix(A: visp._visp.core.Matrix, b: visp._visp.core.ColVector, W: visp._visp.core.Matrix) -> visp._visp.core.Matrix
static computeJTR(interaction: visp._visp.core.Matrix, error: visp._visp.core.ColVector, JTR: visp._visp.core.ColVector) -> None
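The two static helpers implement standard Gauss-Newton bookkeeping. A rough numpy illustration of the underlying math follows; this is a sketch only, since the actual ViSP covariance computation also involves the residual vector `b` and may differ in detail.

```python
import numpy as np

# Illustration (in numpy) of the math behind the two static helpers.
# computeJTR accumulates J^T r from the interaction matrix and the error;
# the covariance sketch uses the weighted normal equations (A^T W A)^{-1}.
# Note: an approximation only; ViSP's computeCovarianceMatrix also takes
# the residual b into account.

interaction = np.array([[1.0, 0.0],
                        [0.0, 2.0],
                        [1.0, 1.0]])       # 3 features x 2 dof (toy sizes)
error = np.array([0.5, -1.0, 0.25])        # per-feature error
W = np.diag([1.0, 0.5, 1.0])               # robust per-feature weights

JTR = interaction.T @ error                # what computeJTR accumulates
cov = np.linalg.inv(interaction.T @ W @ interaction)
```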
computeVVSIter(self, frame: visp._visp.rbt.RBFeatureTrackerInput, cMo: visp._visp.core.HomogeneousMatrix, iteration: int) -> None
display(self, cam: visp._visp.core.CameraParameters, I: visp._visp.core.ImageGray, IRGB: visp._visp.core.ImageRGBa, depth: visp._visp.core.ImageGray) -> None
extractFeatures(self, frame: visp._visp.rbt.RBFeatureTrackerInput, previousFrame: visp._visp.rbt.RBFeatureTrackerInput, cMo: visp._visp.core.HomogeneousMatrix) -> None

Extract the geometric features from the list of collected silhouette points.

featuresShouldBeDisplayed(self) -> bool
getCovariance(self) -> visp._visp.core.Matrix

Retrieve the 6 x 6 pose covariance matrix, computed from the weights associated to each feature.

The updateCovariance method should have been called beforehand.

getGlobalConvergenceMinimumRatio(self) -> float
getLTL(self) -> visp._visp.core.Matrix

Get the left-hand side term (LTL) of the Gauss-Newton optimization.

getLTR(self) -> visp._visp.core.ColVector

Get the right-hand side term (LTR) of the Gauss-Newton optimization.

getMe(*args, **kwargs)

Overloaded function.

  1. getMe(self: visp._visp.rbt.RBSilhouetteMeTracker) -> visp._visp.me.Me

  2. getMe(self: visp._visp.rbt.RBSilhouetteMeTracker) -> visp._visp.me.Me

getMinRobustThreshold(self) -> float
getMinimumMaskConfidence(self) -> float

Returns the minimum mask confidence that a pixel linked to a depth point must have in order to be kept during tracking.

This value is between 0 and 1.

getNumCandidates(self) -> int
getNumFeatures(self) -> int

Get the number of features used to compute the pose update.

Returns:

the number of features

getSinglePointConvergenceThreshold(self) -> float
getVVSTrackerWeight(self) -> float

Get the importance of this tracker in the optimization step. The default computation is \(w / N\), where \(w\) is the weight defined by setTrackerWeight and \(N\) is the number of features.
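A worked instance of this default computation, with illustrative values:

```python
# Worked example of the default VVS weight w / N described above.
# Both values are illustrative, not library defaults.
user_weight = 0.5    # w, as set via setTrackerWeight
num_features = 250   # N, as reported by getNumFeatures
vvs_weight = user_weight / num_features   # 0.002
```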

getWeightedError(self) -> visp._visp.core.ColVector

Get a weighted version of the error vector. This should not include the userVVSWeight, but may include reweighting to remove outliers, occlusions, etc.

initVVS(self, frame: visp._visp.rbt.RBFeatureTrackerInput, previousFrame: visp._visp.rbt.RBFeatureTrackerInput, cMo: visp._visp.core.HomogeneousMatrix) -> None
loadJsonConfiguration(*args, **kwargs)

Overloaded function.

  1. loadJsonConfiguration(self: visp._visp.rbt.RBSilhouetteMeTracker, j: nlohmann::json) -> None

  2. loadJsonConfiguration(self: visp._visp.rbt.RBFeatureTracker, j: nlohmann::json) -> None

onTrackingIterEnd(self) -> None

Method called after the tracking iteration has finished.

onTrackingIterStart(self) -> None

Method called when starting a tracking iteration.
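Taken together, the lifecycle methods suggest a per-frame call order along the following lines. This is a sketch inferred from the method descriptions: the real sequencing is handled internally by the surrounding render-based tracker, and the variable names (`frame`, `prev`, `cMo`) are illustrative.

```python
# Sketch of a per-frame tracking iteration, inferred from the method
# descriptions in this reference. The real ordering is driven by the
# render-based tracker; names (frame, prev, cMo) are illustrative.

def run_iteration(tracker, frame, prev, cMo, n_vvs_iters=10):
    tracker.onTrackingIterStart()               # called when the iteration starts
    tracker.extractFeatures(frame, prev, cMo)   # collect silhouette features
    tracker.trackFeatures(frame, prev, cMo)     # track them in the new image
    tracker.initVVS(frame, prev, cMo)           # set up the optimization
    for it in range(n_vvs_iters):
        tracker.computeVVSIter(frame, cMo, it)  # build one Gauss-Newton step
    tracker.onTrackingIterEnd()                 # called after the iteration ends
```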

requiresDepth(self) -> bool

Whether this tracker requires a depth image to extract features.

requiresRGB(self) -> bool

Whether this tracker requires an RGB image to extract features.

Returns:

true if the tracker requires an RGB image, false otherwise

requiresSilhouetteCandidates(self) -> bool

Whether this tracker requires silhouette candidates.

setFeaturesShouldBeDisplayed(self, enableDisplay: bool) -> None
setGlobalConvergenceMinimumRatio(self, threshold: float) -> None
setMinRobustThreshold(self, threshold: float) -> None
setMinimumMaskConfidence(self, confidence: float) -> None
setMovingEdge(self, me: visp._visp.me.Me) -> None
setNumCandidates(self, candidates: int) -> None
setShouldUseMask(self, useMask: bool) -> None
setSinglePointConvergenceThreshold(self, threshold: float) -> None
setTrackerWeight(self, weight: float) -> None
shouldUseMask(self) -> bool

Returns whether the tracking algorithm should filter out points that are unlikely to be on the object according to the mask. If the mask has not been computed beforehand, this setting has no effect.

trackFeatures(self, frame: visp._visp.rbt.RBFeatureTrackerInput, previousFrame: visp._visp.rbt.RBFeatureTrackerInput, cMo: visp._visp.core.HomogeneousMatrix) -> None

Track the features.

updateCovariance(self: visp._visp.rbt.RBFeatureTracker, lambda: float) -> None

Update the covariance matrix.

Parameters:

lambda: the visual servoing gain

vvsHasConverged(self) -> bool

Returns whether the tracker is considered as having converged to the desired pose.