Pose

class Pose(*args, **kwargs)

Bases: pybind11_object

Class used for pose computation from N points (pose from point only). Some of the algorithms implemented in this class are described in [31] .

Note

It is also possible to estimate a pose from other features using vpPoseFeatures class.

To see how to use this class you can follow the tutorial-pose-estimation.

Overloaded function.

  1. __init__(self: visp._visp.vision.Pose) -> None

Default constructor.

  2. __init__(self: visp._visp.vision.Pose, lP: list[visp._visp.core.Point]) -> None

Constructor from a vector of points.

Methods

__init__

Overloaded function.

addPoint

Add a new point in the array of points.

addPoints

Add (append) a list of points in the array of points.

clearPoint

Delete the array of points.

computePoseDementhonLagrangeVVS

Method that first computes the pose cMo using the linear approaches of Dementhon and Lagrange and then uses the non-linear Virtual Visual Servoing approach to refine the pose that had the lowest residual.

computeRansacIterations

param probability:

Probability that at least one of the random samples is free from outliers (typically p=0.99).

computeResidual

Overloaded function.

display

Overloaded function.

displayModel

Overloaded function.

getCovarianceMatrix

Get the covariance matrix computed in the Virtual Visual Servoing approach.

getNbParallelRansacThreads

Get the number of threads for the parallel RANSAC implementation.

getPoints

Get the vector of points.

getRansacInlierIndex

Get the vector of indexes corresponding to inliers.

getRansacInliers

Get the vector of inliers.

getRansacNbInliers

Get the number of inliers.

getUseParallelRansac

Note

See setUseParallelRansac

poseDementhonNonPlan

Compute the pose using Dementhon approach for non planar objects.

poseDementhonPlan

Compute the pose using the Dementhon approach for planar objects. This is a direct implementation of the algorithm proposed by Dementhon in his PhD thesis.

poseFromRectangle

Carries out the camera pose from the image of a rectangle and the intrinsic parameters; the length along the x axis is known but the proportions of the rectangle are unknown.

poseLagrangeNonPlan

Compute the pose of a non planar object using Lagrange approach.

poseLowe

Compute the pose using the Lowe non-linear approach. It considers the minimization of a residual using the Levenberg-Marquardt approach.

poseVirtualVS

Compute the pose using virtual visual servoing approach.

poseVirtualVSWithDepth

poseVirtualVSrobust

Compute the pose using virtual visual servoing approach and a robust control law.

printPoint

Print to std::cout points used as input.

setCovarianceComputation

Set if the covariance matrix has to be computed in the Virtual Visual Servoing approach.

setDementhonSvThreshold

Set singular value threshold in Dementhon pose estimation method.

setDistToPlaneForCoplanTest

Set the distance threshold below which a point is considered to belong to a plane.

setLambda

Set virtual visual servoing gain.

setNbParallelRansacThreads

Set the number of threads for the parallel RANSAC implementation.

setRansacFilterFlag

setRansacMaxTrials

Set Ransac number of trials.

setRansacNbInliersToReachConsensus

Set Ransac requested number of inliers to reach consensus.

setRansacThreshold

Set Ransac threshold.

setUseParallelRansac

Set if parallel RANSAC version should be used or not (only if C++11).

setVvsEpsilon

Set virtual visual servoing epsilon value used in the pseudo-inverse.

setVvsIterMax

Set virtual visual servoing pose estimator maximum number of iterations.

Inherited Methods

Operators

__doc__

__init__

Overloaded function.

__module__

Attributes

CHECK_DEGENERATE_POINTS

DEMENTHON

DEMENTHON_LAGRANGE_VIRTUAL_VS

DEMENTHON_LOWE

DEMENTHON_VIRTUAL_VS

LAGRANGE

LAGRANGE_LOWE

LAGRANGE_VIRTUAL_VS

LOWE

NO_FILTER

PREFILTER_DEGENERATE_POINTS

RANSAC

VIRTUAL_VS

__annotations__

listP

npt

residual

class PoseMethodType(self, value: int)

Bases: pybind11_object

Methods that can be used to estimate the pose from points.

Values:

  • LAGRANGE: Linear Lagrange approach (does not need an initialization).

  • DEMENTHON: Linear Dementhon approach (does not need an initialization).

  • LOWE: Non-linear Lowe approach based on a Levenberg-Marquardt minimization scheme that needs an initialization.

  • RANSAC: Robust RANSAC approach (does not need an initialization).

  • LAGRANGE_LOWE: Non-linear Lowe approach initialized by the Lagrange approach.

  • DEMENTHON_LOWE: Non-linear Lowe approach initialized by the Dementhon approach.

  • VIRTUAL_VS: Non-linear virtual visual servoing approach that needs an initialization.

  • DEMENTHON_VIRTUAL_VS: Non-linear virtual visual servoing approach initialized by the Dementhon approach.

  • LAGRANGE_VIRTUAL_VS: Non-linear virtual visual servoing approach initialized by the Lagrange approach.

  • DEMENTHON_LAGRANGE_VIRTUAL_VS: Non-linear virtual visual servoing approach initialized by the Dementhon or Lagrange approach, whichever yields the lowest residual.

__and__(self, other: object) object
__eq__(self, other: object) bool
__ge__(self, other: object) bool
__getstate__(self) int
__gt__(self, other: object) bool
__hash__(self) int
__index__(self) int
__init__(self, value: int)
__int__(self) int
__invert__(self) object
__le__(self, other: object) bool
__lt__(self, other: object) bool
__ne__(self, other: object) bool
__or__(self, other: object) object
__rand__(self, other: object) object
__ror__(self, other: object) object
__rxor__(self, other: object) object
__setstate__(self, state: int) None
__xor__(self, other: object) object
property name : str
class RANSAC_FILTER_FLAGS(self, value: int)

Bases: pybind11_object

Filter applied in Ransac

Values:

  • NO_FILTER: No filter is applied.

  • PREFILTER_DEGENERATE_POINTS: Remove degenerate points (same 3D or 2D coordinates) before the RANSAC.

  • CHECK_DEGENERATE_POINTS: Check for degenerate points during the RANSAC.

__and__(self, other: object) object
__eq__(self, other: object) bool
__ge__(self, other: object) bool
__getstate__(self) int
__gt__(self, other: object) bool
__hash__(self) int
__index__(self) int
__init__(self, value: int)
__int__(self) int
__invert__(self) object
__le__(self, other: object) bool
__lt__(self, other: object) bool
__ne__(self, other: object) bool
__or__(self, other: object) object
__rand__(self, other: object) object
__ror__(self, other: object) object
__rxor__(self, other: object) object
__setstate__(self, state: int) None
__xor__(self, other: object) object
property name : str
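The PREFILTER_DEGENERATE_POINTS flag removes points that duplicate the 2D or 3D coordinates of another point before RANSAC runs, since such duplicates would produce degenerate minimal samples. A rough pure-Python sketch of this idea (the function name, point layout, and tolerance are illustrative assumptions, not the ViSP internals):

```python
def prefilter_degenerate_points(points, tol=1e-6):
    """Keep only points whose 3D *and* 2D coordinates differ from every
    point kept so far; duplicates would yield degenerate RANSAC samples.

    Each point is a tuple (oX, oY, oZ, x, y): 3D object-frame coordinates
    and 2D image-plane coordinates.
    """
    kept = []
    for oX, oY, oZ, x, y in points:
        degenerate = any(
            # same 3D coordinates as an already-kept point ...
            (abs(oX - kX) < tol and abs(oY - kY) < tol and abs(oZ - kZ) < tol)
            # ... or same 2D coordinates
            or (abs(x - kx) < tol and abs(y - ky) < tol)
            for kX, kY, kZ, kx, ky in kept
        )
        if not degenerate:
            kept.append((oX, oY, oZ, x, y))
    return kept
```

CHECK_DEGENERATE_POINTS performs the analogous check on each random sample during RANSAC instead of filtering the whole set up front.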
__init__(*args, **kwargs)

Overloaded function.

  1. __init__(self: visp._visp.vision.Pose) -> None

Default constructor.

  2. __init__(self: visp._visp.vision.Pose, lP: list[visp._visp.core.Point]) -> None

Constructor from a vector of points.

addPoint(self, P: visp._visp.core.Point) None

Add a new point in the array of points.

Warning

Considering a point from the class vpPoint , oX, oY, and oZ will represent the 3D coordinates of the point in the object frame and x and y its 2D coordinates in the image plane. These 5 fields must be initialized to be used within this function.

Parameters:
P: visp._visp.core.Point

Point to add in the array of points.

addPoints(self, lP: list[visp._visp.core.Point]) None

Add (append) a list of points in the array of points.

Warning

Considering a point from the class vpPoint , oX, oY, and oZ will represent the 3D coordinates of the point in the object frame and x and y its 2D coordinates in the image plane. These 5 fields must be initialized to be used within this function.

Parameters:
lP: list[visp._visp.core.Point]

List of points to add (append).

clearPoint(self) None

Delete the array of points.

computePoseDementhonLagrangeVVS(self, cMo: visp._visp.core.HomogeneousMatrix) bool

Method that first computes the pose cMo using the linear approaches of Dementhon and Lagrange and then uses the non-linear Virtual Visual Servoing approach to refine the pose that had the lowest residual.

Parameters:
cMo: visp._visp.core.HomogeneousMatrix

the pose of the object with regard to the camera.

Returns:

True if the pose computation was successful, False if an error occurred during the pose computation.

static computeRansacIterations(probability: float, epsilon: float, sampleSize: int = 4, maxIterations: int = 2000) int
Parameters:
probability: float

Probability that at least one of the random samples is free from outliers (typically p=0.99).

epsilon: float

Probability that a selected point is an outlier (between 0 and 1).

sampleSize: int = 4

Minimum number of points to estimate the model (4 for a pose estimation).

maxIterations: int = 2000

Upper bound on the number of iterations or -1 for INT_MAX.

Returns:

The number of RANSAC iterations to ensure with a probability p that at least one of the random samples of s points is free from outliers or maxIterations if it exceeds the desired upper bound or INT_MAX if maxIterations=-1.
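The returned count follows the standard RANSAC sampling bound N = log(1 - p) / log(1 - (1 - epsilon)^s). A minimal pure-Python sketch of that formula (the function name `ransac_iterations` is illustrative, not part of the ViSP API):

```python
import math

def ransac_iterations(probability: float, epsilon: float,
                      sample_size: int = 4, max_iterations: int = 2000) -> int:
    """Number of RANSAC draws so that, with the given probability, at least
    one sample of sample_size points is free from outliers.

    epsilon is the probability that a selected point is an outlier.
    """
    if epsilon <= 0.0:
        return 1  # no outliers: a single draw suffices
    inlier_ratio = 1.0 - epsilon
    n = math.log(1.0 - probability) / math.log(1.0 - inlier_ratio ** sample_size)
    n = int(math.ceil(n))
    if max_iterations == -1:
        return n  # the real method would cap at INT_MAX here
    return min(n, max_iterations)
```

For example, with p = 0.99 and half the points being outliers (epsilon = 0.5), 72 iterations are enough, while epsilon = 0.9 would require tens of thousands of iterations and is capped at the 2000-iteration default.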

computeResidual(*args, **kwargs)

Overloaded function.

  1. computeResidual(self: visp._visp.vision.Pose, cMo: visp._visp.core.HomogeneousMatrix) -> float

Compute and return the sum of squared residuals expressed in meter^2 for the pose matrix cMo .

Note

There is also the possibility to compute the residual expressed in pixel^2 using one of the following methods:

  • vpPose::computeResidual(const vpHomogeneousMatrix &, const vpCameraParameters &)

  • vpPose::computeResidual(const vpHomogeneousMatrix &, const vpCameraParameters &, vpColVector &)

Parameters:
cMo

Input pose. The matrix that defines the pose to be tested.

Returns:

The value of the sum of squared residuals in meter^2.

  2. computeResidual(self: visp._visp.vision.Pose, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters) -> float

Compute and return the sum of squared residuals expressed in pixel^2 for the pose matrix cMo .

Note

There is also the possibility to compute the residual expressed in meter^2 using vpPose::computeResidual(const vpHomogeneousMatrix &)

Parameters:
cMo

Input pose. The matrix that defines the pose to be tested.

cam

Camera parameters used to observe the points.

Returns:

The value of the sum of squared residuals in pixel^2.

  3. computeResidual(self: visp._visp.vision.Pose, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, squaredResidual: visp._visp.core.ColVector) -> float

Compute and return the sum of squared residuals expressed in pixel^2 for the pose matrix cMo .

Note

There is also the possibility to compute the residual expressed in meter^2 using vpPose::computeResidual(const vpHomogeneousMatrix &)

Parameters:
cMo

Input pose. The matrix that defines the pose to be tested.

cam

Camera parameters used to observe the points.

squaredResidual

Input/output vector that will be resized and will contain the squared residuals expressed in pixel^2 of each point.

Returns:

The value of the sum of squared residuals in pixel^2.
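The meter^2 residual is the sum, over all points, of the squared difference between the observed normalized image-plane coordinates and the projection of the 3D point under the candidate pose. A self-contained sketch of that computation, using plain lists instead of the ViSP types (an illustration of the idea, not ViSP's implementation):

```python
def sum_squared_residual(points, cMo):
    """Sum of squared residuals in meter^2 for a candidate pose.

    Each point is a tuple (oX, oY, oZ, x, y): 3D coordinates in the object
    frame and observed normalized image-plane coordinates.  cMo is a 4x4
    homogeneous matrix (row-major list of lists) mapping object to camera
    frame.
    """
    total = 0.0
    for oX, oY, oZ, x_obs, y_obs in points:
        # Transform the 3D point into the camera frame.
        X = cMo[0][0] * oX + cMo[0][1] * oY + cMo[0][2] * oZ + cMo[0][3]
        Y = cMo[1][0] * oX + cMo[1][1] * oY + cMo[1][2] * oZ + cMo[1][3]
        Z = cMo[2][0] * oX + cMo[2][1] * oY + cMo[2][2] * oZ + cMo[2][3]
        # Perspective projection onto the normalized image plane.
        x_proj, y_proj = X / Z, Y / Z
        total += (x_proj - x_obs) ** 2 + (y_proj - y_obs) ** 2
    return total
```

The pixel^2 overloads additionally map the normalized coordinates to pixel coordinates with the camera parameters before taking the differences.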

static display(*args, **kwargs)

Overloaded function.

  1. display(I: visp._visp.core.ImageGray, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, size: float, col: visp._visp.core.Color) -> None

Display in the image I the pose represented by its homogeneous transformation cMo as a 3-axis frame.

Parameters:
I

Image where the pose is displayed in overlay.

cMo

Considered pose to display.

cam

Camera parameters associated to image I .

size

length in meter of the axis that will be displayed.

col

Color used to display the 3 axis. If vpColor::none , red, green and blue will represent x-axis, y-axis and z-axis respectively.

  2. display(I: visp._visp.core.ImageRGBa, cMo: visp._visp.core.HomogeneousMatrix, cam: visp._visp.core.CameraParameters, size: float, col: visp._visp.core.Color) -> None

Display in the image I the pose represented by its homogeneous transformation cMo as a 3-axis frame.

Parameters:
I

Image where the pose is displayed in overlay.

cMo

Considered pose to display.

cam

Camera parameters associated to image I .

size

length in meter of the axis that will be displayed.

col

Color used to display the 3 axis. If vpColor::none , red, green and blue will represent x-axis, y-axis and z-axis respectively.

displayModel(*args, **kwargs)

Overloaded function.

  1. displayModel(self: visp._visp.vision.Pose, I: visp._visp.core.ImageGray, cam: visp._visp.core.CameraParameters, col: visp._visp.core.Color) -> None

Display the coordinates of the points in the image plane that are used to compute the pose in image I.

  2. displayModel(self: visp._visp.vision.Pose, I: visp._visp.core.ImageRGBa, cam: visp._visp.core.CameraParameters, col: visp._visp.core.Color) -> None

Display the coordinates of the points in the image plane that are used to compute the pose in image I.

getCovarianceMatrix(self) visp._visp.core.Matrix

Get the covariance matrix computed in the Virtual Visual Servoing approach.

Warning

The compute covariance flag has to be true if you want to compute the covariance matrix.

Note

See setCovarianceComputation

getNbParallelRansacThreads(self) int

Get the number of threads for the parallel RANSAC implementation.

Note

See setNbParallelRansacThreads

getPoints(self) list[visp._visp.core.Point]

Get the vector of points.

Returns:

The vector of points.

getRansacInlierIndex(self) list[int]

Get the vector of indexes corresponding to inliers.

getRansacInliers(self) list[visp._visp.core.Point]

Get the vector of inliers.

getRansacNbInliers(self) int

Get the number of inliers.

getUseParallelRansac(self) bool

Note

See setUseParallelRansac

Returns:

True if the parallel RANSAC version should be used (also depends on C++11 availability).

poseDementhonNonPlan(self, cMo: visp._visp.core.HomogeneousMatrix) None

Compute the pose using Dementhon approach for non planar objects. This is a direct implementation of the algorithm proposed by Dementhon and Davis in their 1995 paper [13] .

poseDementhonPlan(self, cMo: visp._visp.core.HomogeneousMatrix) None

Compute the pose using the Dementhon approach for planar objects. This is a direct implementation of the algorithm proposed by Dementhon in his PhD thesis.

static poseFromRectangle(p1: visp._visp.core.Point, p2: visp._visp.core.Point, p3: visp._visp.core.Point, p4: visp._visp.core.Point, lx: float, cam: visp._visp.core.CameraParameters, cMo: visp._visp.core.HomogeneousMatrix) float

Carries out the camera pose from the image of a rectangle and the intrinsic parameters; the length along the x axis is known but the proportions of the rectangle are unknown.

This method is taken from “Markerless Tracking using Planar Structures in the Scene” by Gilles Simon. The idea is to compute the homography H giving the image points of the rectangle by associating them with the coordinates (0,0), (1,0), (1,1/s), (0,1/s) (the rectangle lies in the Z=0 plane). If K is the intrinsic parameter matrix, we have s = ||Kh1|| / ||Kh2||, which gives us the proportion of the rectangle.

Parameters:
p1: visp._visp.core.Point

the image of the corners of the rectangle (respectively the image of (0,0),(lx,0),(lx,lx/s) and (0,lx/s)) (input)

p2: visp._visp.core.Point

the image of the corners of the rectangle (respectively the image of (0,0),(lx,0),(lx,lx/s) and (0,lx/s)) (input)

p3: visp._visp.core.Point

the image of the corners of the rectangle (respectively the image of (0,0),(lx,0),(lx,lx/s) and (0,lx/s)) (input)

p4: visp._visp.core.Point

the image of the corners of the rectangle (respectively the image of (0,0),(lx,0),(lx,lx/s) and (0,lx/s)) (input)

lx: float

the rectangle size on the x axis (input)

cam: visp._visp.core.CameraParameters

the camera used (input)

cMo: visp._visp.core.HomogeneousMatrix

the camera pose (output)

Returns:

OK if no problem occurred.
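The proportion s described above can be computed directly from the first two columns of the homography. A small pure-Python sketch of the stated formula s = ||Kh1|| / ||Kh2|| (the function name and matrix layout are illustrative assumptions):

```python
def rectangle_proportion(K, h1, h2):
    """Ratio s = ||K h1|| / ||K h2|| giving the unknown rectangle proportion.

    K is a 3x3 matrix (row-major list of lists); h1 and h2 are the first two
    columns of the homography H, as 3-element lists.
    """
    def matvec(M, v):
        # 3x3 matrix times 3-vector
        return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

    def norm(v):
        # Euclidean norm of a 3-vector
        return sum(c * c for c in v) ** 0.5

    return norm(matvec(K, h1)) / norm(matvec(K, h2))
```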

poseLagrangeNonPlan(self, cMo: visp._visp.core.HomogeneousMatrix) None

Compute the pose of a non planar object using Lagrange approach.

Parameters:
cMo: visp._visp.core.HomogeneousMatrix

Estimated pose. No initialization is required to estimate cMo.

poseLowe(self, cMo: visp._visp.core.HomogeneousMatrix) None

Compute the pose using the Lowe non-linear approach. It considers the minimization of a residual using the Levenberg-Marquardt approach.

The approach has been proposed by D.G Lowe in 1992 paper [24] .

poseVirtualVS(self, cMo: visp._visp.core.HomogeneousMatrix) None

Compute the pose using virtual visual servoing approach.

This approach is described in [29] .

static poseVirtualVSWithDepth(points: list[visp._visp.core.Point], cMo: visp._visp.core.HomogeneousMatrix) visp._visp.core.HomogeneousMatrix | None
poseVirtualVSrobust(self, cMo: visp._visp.core.HomogeneousMatrix) None

Compute the pose using virtual visual servoing approach and a robust control law.

This approach is described in [7] .

printPoint(self) None

Print to std::cout points used as input.

setCovarianceComputation(self, flag: bool) None

Set if the covariance matrix has to be computed in the Virtual Visual Servoing approach.

Parameters:
flag: bool

True if the covariance has to be computed, false otherwise.

setDementhonSvThreshold(self, svThresh: float) None

Set singular value threshold in Dementhon pose estimation method.

setDistToPlaneForCoplanTest(self, d: float) None

Set the distance threshold below which a point is considered to belong to a plane.

setLambda(self: visp._visp.vision.Pose, lambda: float) None

Set virtual visual servoing gain.

setNbParallelRansacThreads(self, nb: int) None

Set the number of threads for the parallel RANSAC implementation.

Note

You have to enable the parallel version with setUseParallelRansac() . If the number of threads is 0, the number of threads to use is automatically determined with C++11.

Note

See setUseParallelRansac

setRansacFilterFlag(self, flag: visp._visp.vision.Pose.RANSAC_FILTER_FLAGS) None
setRansacMaxTrials(self, rM: int) None

Set Ransac number of trials.

setRansacNbInliersToReachConsensus(self, nbC: int) None

Set Ransac requested number of inliers to reach consensus.

setRansacThreshold(self, t: float) None

Set Ransac threshold.

setUseParallelRansac(self, use: bool) None

Set if parallel RANSAC version should be used or not (only if C++11).

Note

Need C++11 or higher.

setVvsEpsilon(self, eps: float) None

Set virtual visual servoing epsilon value used in the pseudo-inverse.

setVvsIterMax(self, nb: int) None

Set virtual visual servoing pose estimator maximum number of iterations.