Pose¶
- class Pose(*args, **kwargs)¶
Bases:
pybind11_object
Class used for pose computation from N points (pose from points only). Some of the algorithms implemented in this class are described in [31].
Note
It is also possible to estimate a pose from other features using vpPoseFeatures class.
To see how to use this class you can follow the tutorial-pose-estimation.
Overloaded function.
__init__(self: visp._visp.vision.Pose) -> None
Default constructor.
__init__(self: visp._visp.vision.Pose, lP: list[visp._visp.core.Point]) -> None
Methods
Overloaded function.
Add a new point in the array of points.
Add (append) a list of points in the array of points.
Delete the array of points.
Overloaded function.
Method that first computes the pose cMo using the linear approaches of Dementhon and Lagrange and then uses the non-linear Virtual Visual Servoing approach to refine the pose that had the lowest residual.
Compute the number of RANSAC iterations needed to ensure, with a given probability, that at least one random sample is free from outliers.
Overloaded function.
Overloaded function.
Overloaded function.
Match a vector p2D of 2D points (x,y) and a vector p3D of 3D points (X,Y,Z) using the Ransac algorithm.
Get the covariance matrix computed in the Virtual Visual Servoing approach.
Get the number of threads for the parallel RANSAC implementation.
Get the vector of points.
Get the vector of indexes corresponding to inliers.
Get the vector of inliers.
Get the number of inliers.
Get if the parallel RANSAC version should be used.
Compute the pose using Dementhon approach for non planar objects.
Compute the pose using Dementhon approach for planar objects. This is a direct implementation of the algorithm proposed by Dementhon in his PhD.
Carries out the camera pose from the image of a rectangle and the intrinsic parameters; the length along the x axis is known but the proportions of the rectangle are unknown.
Compute the pose of a non planar object using Lagrange approach.
Compute the pose using the Lowe non-linear approach; it minimizes a residual using the Levenberg-Marquardt approach.
Compute the pose using the Ransac approach.
Compute the pose using virtual visual servoing approach.
Compute the pose using virtual visual servoing approach and a robust control law.
Print to std::cout the points used as input.
Set if the covariance matrix has to be computed in the Virtual Visual Servoing approach.
Set singular value threshold in Dementhon pose estimation method.
Set the distance threshold used to decide whether a point belongs to a plane.
Set virtual visual servoing gain.
Set the number of threads for the parallel RANSAC implementation.
Set Ransac number of trials.
Set Ransac requested number of inliers to reach consensus.
Set Ransac threshold.
Set whether the parallel RANSAC version should be used (only available with C++11).
Set virtual visual servoing epsilon value used in the pseudo-inverse.
Set virtual visual servoing pose estimator maximum number of iterations.
Inherited Methods
Operators
__doc__
Overloaded function.
__module__
Attributes
CHECK_DEGENERATE_POINTS
DEMENTHON
DEMENTHON_LAGRANGE_VIRTUAL_VS
DEMENTHON_LOWE
DEMENTHON_VIRTUAL_VS
LAGRANGE
LAGRANGE_LOWE
LAGRANGE_VIRTUAL_VS
LOWE
NO_FILTER
PREFILTER_DEGENERATE_POINTS
RANSAC
VIRTUAL_VS
__annotations__
listP
npt
residual
- class PoseMethodType(self, value: int)¶
Bases:
pybind11_object
Methods that could be used to estimate the pose from points.
Values:
LAGRANGE: Linear Lagrange approach.
DEMENTHON: Linear Dementhon approach.
LOWE: Non-linear Lowe approach based on a Levenberg-Marquardt minimization, initialized by the Lagrange or Dementhon approach.
LAGRANGE_LOWE: Non-linear Lowe approach initialized by the Lagrange approach.
DEMENTHON_LOWE: Non-linear Lowe approach initialized by the Dementhon approach.
VIRTUAL_VS: Non-linear virtual visual servoing approach initialized by the Lagrange or Dementhon approach.
DEMENTHON_VIRTUAL_VS: Non-linear virtual visual servoing approach initialized by the Dementhon approach.
LAGRANGE_VIRTUAL_VS: Non-linear virtual visual servoing approach initialized by the Lagrange approach.
DEMENTHON_LAGRANGE_VIRTUAL_VS: Non-linear virtual visual servoing approach initialized by the best (lowest residual) of the Dementhon and Lagrange approaches.
RANSAC: Robust RANSAC approach (does not need an initialization).
- class RANSAC_FILTER_FLAGS(self, value: int)¶
Bases:
pybind11_object
Filter applied in Ransac
Values:
NO_FILTER: No filter is applied.
PREFILTER_DEGENERATE_POINTS: Remove degenerate points (same 3D or 2D coordinates) before the RANSAC.
CHECK_DEGENERATE_POINTS: Check for degenerate points during the RANSAC.
- __init__(*args, **kwargs)¶
Overloaded function.
__init__(self: visp._visp.vision.Pose) -> None
Default constructor.
__init__(self: visp._visp.vision.Pose, lP: list[visp._visp.core.Point]) -> None
- addPoint(self, P: visp._visp.core.Point) None ¶
Add a new point in the array of points.
Warning
Considering a point from the class vpPoint , oX, oY, and oZ will represent the 3D coordinates of the point in the object frame and x and y its 2D coordinates in the image plane. These 5 fields must be initialized to be used within this function.
- Parameters:
- P: visp._visp.core.Point¶
Point to add to the array of points.
- addPoints(self, lP: list[visp._visp.core.Point]) None ¶
Add (append) a list of points in the array of points.
Warning
Considering a point from the class vpPoint , oX, oY, and oZ will represent the 3D coordinates of the point in the object frame and x and y its 2D coordinates in the image plane. These 5 fields must be initialized to be used within this function.
- Parameters:
- lP: list[visp._visp.core.Point]¶
List of points to add (append).
- static computePlanarObjectPoseFromRGBD(arg0: visp._visp.core.ImageFloat, arg1: list[visp._visp.core.ImagePoint], arg2: visp._visp.core.CameraParameters, arg3: list[visp._visp.core.Point]) tuple[bool, visp._visp.core.HomogeneousMatrix, float] ¶
- computePose(*args, **kwargs)¶
Overloaded function.
computePose(self: visp._visp.vision.Pose, method: visp._visp.vision.Pose.PoseMethodType, cMo: visp._visp.core.HomogeneousMatrix, func: bool (vpHomogeneousMatrix const&)) -> bool
Compute the pose according to the desired method, which can be one of:
vpPose::LAGRANGE : Linear Lagrange approach (a test is done to switch between the planar and non planar algorithm)
vpPose::DEMENTHON : Linear Dementhon approach (a test is done to switch between the planar and non planar algorithm)
vpPose::LOWE : Lowe approach based on a Levenberg-Marquardt non-linear minimization scheme that needs an initialization from the Lagrange or Dementhon approach
vpPose::LAGRANGE_LOWE : Non-linear Lowe approach initialized by the Lagrange approach
vpPose::DEMENTHON_LOWE : Non-linear Lowe approach initialized by the Dementhon approach
vpPose::VIRTUAL_VS : Non-linear virtual visual servoing approach that needs an initialization from the Lagrange or Dementhon approach
vpPose::DEMENTHON_VIRTUAL_VS : Non-linear virtual visual servoing approach initialized by the Dementhon approach
vpPose::LAGRANGE_VIRTUAL_VS : Non-linear virtual visual servoing approach initialized by the Lagrange approach
vpPose::DEMENTHON_LAGRANGE_VIRTUAL_VS : Non-linear virtual visual servoing approach initialized by either the Dementhon or the Lagrange approach, depending on which method has the smallest residual.
vpPose::RANSAC : Robust RANSAC approach (doesn't need an initialization)
computePose(self: visp._visp.vision.Pose, arg0: visp._visp.vision.Pose.PoseMethodType, arg1: visp._visp.core.HomogeneousMatrix) -> bool
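A minimal usage sketch (not part of the generated documentation) is given below. It assumes that the classes are importable from the public visp.core and visp.vision modules and that Point exposes setWorldCoordinates(), set_x() and set_y() as in the C++ vpPoint API; the point coordinates are purely illustrative.

    from visp.core import HomogeneousMatrix, Point
    from visp.vision import Pose

    # Hypothetical 3D model points (object frame, in meters) and their
    # normalized image-plane coordinates (x, y) after feature extraction.
    model = [(-0.05, -0.05, 0.0), (0.05, -0.05, 0.0), (0.05, 0.05, 0.0), (-0.05, 0.05, 0.0)]
    observed = [(-0.1, -0.1), (0.1, -0.1), (0.1, 0.1), (-0.1, 0.1)]

    pose = Pose()
    for (oX, oY, oZ), (x, y) in zip(model, observed):
        p = Point()
        p.setWorldCoordinates(oX, oY, oZ)  # 3D coordinates in the object frame
        p.set_x(x)                         # 2D coordinates in the image plane
        p.set_y(y)
        pose.addPoint(p)

    cMo = HomogeneousMatrix()
    # Linear initialization (best of Dementhon/Lagrange) refined by virtual visual servoing.
    if pose.computePose(Pose.DEMENTHON_LAGRANGE_VIRTUAL_VS, cMo):
        print(cMo)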
- computePoseDementhonLagrangeVVS(self, cMo: visp._visp.core.HomogeneousMatrix) bool ¶
Method that first computes the pose cMo using the linear approaches of Dementhon and Lagrange and then uses the non-linear Virtual Visual Servoing approach to refine the pose that had the lowest residual.
- Parameters:
- cMo: visp._visp.core.HomogeneousMatrix¶
the pose of the object with regard to the camera.
- Returns:
true if the pose computation was successful, false if an error occurred during the pose computation.
- static computeRansacIterations(probability: float, epsilon: float, sampleSize: int = 4, maxIterations: int = 2000) int ¶
- Parameters:
- probability: float¶
Probability that at least one of the random samples is free from outliers (typically p=0.99).
- epsilon: float¶
Probability that a selected point is an outlier (between 0 and 1).
- sampleSize: int = 4¶
Minimum number of points to estimate the model (4 for a pose estimation).
- maxIterations: int = 2000¶
Upper bound on the number of iterations or -1 for INT_MAX.
- Returns:
The number of RANSAC iterations needed to ensure, with probability p, that at least one of the random samples of s points is free from outliers; maxIterations if this number exceeds the desired upper bound, or INT_MAX if maxIterations = -1.
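The parameters above correspond to the standard RANSAC iteration bound. The sketch below (an illustration, not taken from the documentation) compares a hand-computed bound with the helper, assuming the helper implements the usual closed form.

    import math
    from visp.vision import Pose

    p, eps, s = 0.99, 0.2, 4   # 99% confidence, 20% outlier ratio, 4-point samples
    # Standard bound: N = log(1 - p) / log(1 - (1 - eps)^s)
    n_manual = math.ceil(math.log(1.0 - p) / math.log(1.0 - (1.0 - eps) ** s))
    n_visp = Pose.computeRansacIterations(p, eps, 4, 2000)
    print(n_manual, n_visp)    # both should be about 9 iterations for these values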
- computeResidual(*args, **kwargs)¶
Overloaded function.
computeResidual(self: visp._visp.vision.Pose, cMo: visp._visp.core.HomogeneousMatrix) -> float
Compute and return the sum of squared residuals expressed in meter^2 for the pose matrix cMo .
Note
There is also the possibility to compute the residual expressed in pixel^2 using one of the following methods:
vpPose::computeResidual(const vpHomogeneousMatrix &, const vpCameraParameters &)
vpPose::computeResidual(const vpHomogeneousMatrix &, const vpCameraParameters &am, vpColVector &)
- Parameters:
- cMo
Input pose. The matrix that defines the pose to be tested.
- Returns:
The value of the sum of squared residuals in meter^2.
computeResidual(self: visp._visp.vision.Pose, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters) -> float
Compute and return the sum of squared residuals expressed in pixel^2 for the pose matrix cMo .
Note
There is also the possibility to compute the residual expressed in meter^2 using vpPose::computeResidual(const vpHomogeneousMatrix &)
- Parameters:
- cMo
Input pose. The matrix that defines the pose to be tested.
- cam
Camera parameters used to observe the points.
- Returns:
The value of the sum of squared residuals in pixel^2.
computeResidual(self: visp._visp.vision.Pose, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, squaredResidual: visp._visp.core.ColVector) -> float
Compute and return the sum of squared residuals expressed in pixel^2 for the pose matrix cMo .
Note
There is also the possibility to compute the residual expressed in meter^2 using vpPose::computeResidual(const vpHomogeneousMatrix &)
- Parameters:
- cMo
Input pose. The matrix that defines the pose to be tested.
- cam
Camera parameters used to observe the points.
- squaredResidual
Input/output vector that will be resized and will contain the squared residuals expressed in pixel^2 of each point.
- Returns:
The value of the sum of squared residuals in pixel^2.
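Continuing the computePose() sketch above, the residual can be checked either in meter^2 or in pixel^2; the CameraParameters values below are hypothetical.

    from visp.core import CameraParameters, ColVector

    cam = CameraParameters(600.0, 600.0, 320.0, 240.0)  # px, py, u0, v0 (illustrative)
    r_m2 = pose.computeResidual(cMo)                    # sum of squared residuals, meter^2
    squared = ColVector()
    r_px2 = pose.computeResidual(cMo, cam, squared)     # same residual expressed in pixel^2
    print(r_m2, r_px2, squared.size())                  # squared holds one value per point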
- static display(*args, **kwargs)¶
Overloaded function.
display(I: visp._visp.core.ImageGray, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, size: float, col: visp._visp.core.Color) -> None
Display in the image I the pose represented by its homogeneous transformation cMo as a 3 axis frame.
- Parameters:
- I
Image where the pose is displayed in overlay.
- cMo
Considered pose to display.
- cam
Camera parameters associated to image I .
- size
Length in meters of the axes that will be displayed.
- col
Color used to display the 3 axes. If vpColor::none, red, green and blue will represent the x-axis, y-axis and z-axis respectively.
display(I: visp._visp.core.ImageRGBa, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, size: float, col: visp._visp.core.Color) -> None
Display in the image I the pose represented by its homogeneous transformation cMo as a 3 axis frame.
- Parameters:
- I
Image where the pose is displayed in overlay.
- cMo
Considered pose to display.
- cam
Camera parameters associated to image I .
- size
Length in meters of the axes that will be displayed.
- col
Color used to display the 3 axes. If vpColor::none, red, green and blue will represent the x-axis, y-axis and z-axis respectively.
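A short sketch of the overlay call, continuing the example above. It assumes that Color.none is exposed like the C++ vpColor::none constant and that a display has been attached to the image (window management is omitted here).

    from visp.core import Color, ImageGray

    I = ImageGray(480, 640)                      # height, width
    # Draw 5 cm long axes at the estimated pose (x/y/z drawn in red/green/blue).
    Pose.display(I, cMo, cam, 0.05, Color.none)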
- displayModel(*args, **kwargs)¶
Overloaded function.
displayModel(self: visp._visp.vision.Pose, I: visp._visp.core.ImageGray, cam: visp._visp.core.CameraParameters, col: visp._visp.core.Color) -> None
Display the coordinates of the points in the image plane that are used to compute the pose in image I.
displayModel(self: visp._visp.vision.Pose, I: visp._visp.core.ImageRGBa, cam: visp._visp.core.CameraParameters, col: visp._visp.core.Color) -> None
Display the coordinates of the points in the image plane that are used to compute the pose in image I.
- static findMatch(p2D: list[visp._visp.core.Point], p3D: list[visp._visp.core.Point], numberOfInlierToReachAConsensus: int, threshold: float, ninliers: int, listInliers: list[visp._visp.core.Point], cMo: visp._visp.core.HomogeneousMatrix, maxNbTrials: int = 10000, useParallelRansac: bool = true, nthreads: int = 0, func: bool (vpHomogeneousMatrix const&)) tuple[list[visp._visp.core.Point], list[visp._visp.core.Point], int, list[visp._visp.core.Point]] ¶
Match a vector p2D of 2D points (x,y) and a vector p3D of 3D points (X,Y,Z) using the Ransac algorithm.
At least numberOfInlierToReachAConsensus true correspondences are required to validate the pose.
The inliers are given in a vector of vpPoint listInliers.
The pose is returned in cMo.
- Parameters:
- p2D
Vector of 2d points (x and y attributes are used).
- p3D
Vector of 3d points (oX, oY and oZ attributes are used).
- numberOfInlierToReachAConsensus
The minimum number of inliers required to consider a trial as correct.
- threshold
The maximum error allowed between the 2d points and the reprojection of their associated 3d points by the current pose (in meters).
- ninliers
Number of inliers found for the best solution.
- listInliers
Vector of points (2d and 3d) that are inliers for the best solution.
- cMo
The computed pose (best solution).
- maxNbTrials
Maximum number of trials before considering that a solution fitting the required numberOfInlierToReachAConsensus and threshold cannot be found.
- useParallelRansac
If true, use parallel RANSAC version (if C++11 is available).
- nthreads
Number of threads to use, if 0 the number of CPU threads will be determined.
- func
Pointer to a function that takes a vpHomogeneousMatrix as parameter and returns true if the pose check is OK, false otherwise.
- Returns:
A tuple containing:
p2D: Vector of 2d points (x and y attributes are used).
p3D: Vector of 3d points (oX, oY and oZ attributes are used).
ninliers: Number of inliers found for the best solution.
listInliers: Vector of points (2d and 3d) that are inliers for the best solution.
- getCovarianceMatrix(self) visp._visp.core.Matrix ¶
Get the covariance matrix computed in the Virtual Visual Servoing approach.
Warning
The compute covariance flag has to be true if you want to compute the covariance matrix.
Note
See setCovarianceComputation
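For illustration (continuing the sketch above), the covariance of a virtual visual servoing estimate can be retrieved as follows; cMo must already hold a coarse pose since VIRTUAL_VS needs an initialization.

    pose.setCovarianceComputation(True)          # enable covariance computation first
    if pose.computePose(Pose.VIRTUAL_VS, cMo):   # refine the already initialized cMo
        cov = pose.getCovarianceMatrix()         # visp.core.Matrix with the pose covariance
        print(cov)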
- getNbParallelRansacThreads(self) int ¶
Get the number of threads for the parallel RANSAC implementation.
Note
See setNbParallelRansacThreads
- getPoints(self) list[visp._visp.core.Point] ¶
Get the vector of points.
- Returns:
The vector of points.
- getRansacInliers(self) list[visp._visp.core.Point] ¶
Get the vector of inliers.
- getUseParallelRansac(self) bool ¶
Note
See setUseParallelRansac
- Returns:
True if the parallel RANSAC version should be used (also depends on C++11 availability).
- poseDementhonNonPlan(self, cMo: visp._visp.core.HomogeneousMatrix) None ¶
Compute the pose using Dementhon approach for non planar objects. This is a direct implementation of the algorithm proposed by Dementhon and Davis in their 1995 paper [13] .
- poseDementhonPlan(self, cMo: visp._visp.core.HomogeneousMatrix) None ¶
Compute the pose using Dementhon approach for planar objects. This is a direct implementation of the algorithm proposed by Dementhon in his PhD.
- static poseFromRectangle(p1: visp._visp.core.Point, p2: visp._visp.core.Point, p3: visp._visp.core.Point, p4: visp._visp.core.Point, lx: float, cam: visp._visp.core.CameraParameters, cMo: visp._visp.core.HomogeneousMatrix) float ¶
Carries out the camera pose from the image of a rectangle and the intrinsic parameters; the length along the x axis is known but the proportions of the rectangle are unknown.
This method is taken from "Markerless Tracking using Planar Structures in the Scene" by Gilles Simon. The idea is to compute the homography H giving the image points of the rectangle by associating them with the coordinates (0,0), (1,0), (1,1/s), (0,1/s) (the rectangle lies in the Z=0 plane). If K is the intrinsic parameters matrix and h1, h2 are the first two columns of H, we have s = ||K h1|| / ||K h2||; s gives the proportions of the rectangle.
- Parameters:
- p1: visp._visp.core.Point¶
the image of the first corner of the rectangle, i.e. the image of (0,0) (input)
- p2: visp._visp.core.Point¶
the image of the second corner of the rectangle, i.e. the image of (lx,0) (input)
- p3: visp._visp.core.Point¶
the image of the third corner of the rectangle, i.e. the image of (lx,lx/s) (input)
- p4: visp._visp.core.Point¶
the image of the fourth corner of the rectangle, i.e. the image of (0,lx/s) (input)
- lx: float¶
the rectangle size on the x axis (input)
- cam: visp._visp.core.CameraParameters¶
the camera used (input)
- cMo: visp._visp.core.HomogeneousMatrix¶
the camera pose (output)
- Returns:
OK if no problem occurs.
- poseLagrangeNonPlan(self, cMo: visp._visp.core.HomogeneousMatrix) None ¶
Compute the pose of a non planar object using Lagrange approach.
- Parameters:
- cMo: visp._visp.core.HomogeneousMatrix¶
Estimated pose. No initialization is required to estimate cMo.
- poseLowe(self, cMo: visp._visp.core.HomogeneousMatrix) None ¶
Compute the pose using the Lowe non-linear approach; it minimizes a residual using the Levenberg-Marquardt approach.
The approach has been proposed by D.G. Lowe in his 1992 paper [24].
- poseRansac(self: visp._visp.vision.Pose, cMo: visp._visp.core.HomogeneousMatrix, func: bool (vpHomogeneousMatrix const&)) bool ¶
Compute the pose using the Ransac approach.
Note
You can enable a multithreaded version if you have C++11 enabled using setUseParallelRansac() . The number of threads used can then be set with setNbParallelRansacThreads() . Filter flag can be used with setRansacFilterFlag() .
- Parameters:
- cMo
Computed pose
- func
Pointer to a function that takes a vpHomogeneousMatrix as parameter and returns true if the pose check is OK, false otherwise.
- Returns:
True if we found at least 4 points with a reprojection error below ransacThreshold.
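A hedged configuration sketch, continuing the earlier example; setRansacThreshold() and setRansacMaxTrials() are not detailed on this page and are assumed to follow the C++ vpPose API, while getRansacNbInliers() corresponds to the "Get the number of inliers" entry above.

    pose.setRansacThreshold(0.001)               # reprojection threshold, in meters
    pose.setRansacNbInliersToReachConsensus(4)   # minimum consensus size
    pose.setRansacMaxTrials(1000)                # upper bound on RANSAC trials
    cMo_ransac = HomogeneousMatrix()
    if pose.computePose(Pose.RANSAC, cMo_ransac):  # RANSAC needs no initial pose
        print(pose.getRansacNbInliers(), "inliers kept")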
- poseVirtualVS(self, cMo: visp._visp.core.HomogeneousMatrix) None ¶
Compute the pose using virtual visual servoing approach.
This approach is described in [29] .
- static poseVirtualVSWithDepth(points: list[visp._visp.core.Point], cMo: visp._visp.core.HomogeneousMatrix) visp._visp.core.HomogeneousMatrix | None ¶
- poseVirtualVSrobust(self, cMo: visp._visp.core.HomogeneousMatrix) None ¶
Compute the pose using virtual visual servoing approach and a robust control law.
This approach is described in [7] .
- setCovarianceComputation(self, flag: bool) None ¶
Set if the covariance matrix has to be computed in the Virtual Visual Servoing approach.
- setDementhonSvThreshold(self, svThresh: float) None ¶
Set singular value threshold in Dementhon pose estimation method.
- setDistToPlaneForCoplanTest(self, d: float) None ¶
Set the distance threshold used to decide whether a point belongs to a plane.
- setNbParallelRansacThreads(self, nb: int) None ¶
Set the number of threads for the parallel RANSAC implementation.
Note
You have to enable the parallel version with setUseParallelRansac() . If the number of threads is 0, the number of threads to use is automatically determined with C++11.
Note
See setUseParallelRansac
- setRansacFilterFlag(self, flag: visp._visp.vision.Pose.RANSAC_FILTER_FLAGS) None ¶
- setRansacNbInliersToReachConsensus(self, nbC: int) None ¶
Set Ransac requested number of inliers to reach consensus.
- setUseParallelRansac(self, use: bool) None ¶
Set whether the parallel RANSAC version should be used (only available with C++11).
Note
Need C++11 or higher.
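Putting the RANSAC-related setters of this page together, a minimal configuration sketch (continuing the earlier example, and only meaningful when ViSP was built with C++11 or higher):

    pose.setUseParallelRansac(True)                             # enable the parallel version
    pose.setNbParallelRansacThreads(0)                          # 0: auto-detect CPU threads
    pose.setRansacFilterFlag(Pose.PREFILTER_DEGENERATE_POINTS)  # drop degenerate points first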