Homography

class Homography(*args, **kwargs)

Bases: ArrayDouble2D

Implementation of an homography and operations on homographies.

This class aims to compute the homography with respect to two images [31].

The vpHomography class is derived from vpArray2D<double> .

These two images are both described by a set of points. The two sets (one per image) are sets of corresponding points: for a point in one image, there is the corresponding point (image of the same 3D point) in the other image. These two sets are the only data needed to compute the homography. One method used is the one introduced by Ezio Malis during his PhD [26] . A normalization is carried out on these points in order to improve the conditioning of the problem, which in turn improves the stability of the result.

Store and compute the homography such that

\[^a{\bf p} = ^a{\bf H}_b\; ^b{\bf p} \]

with

\[^a{\bf H}_b = ^a{\bf R}_b + \frac{^a{\bf t}_b}{^bd} { ^b{\bf n}^T} \]

The tutorial-homography explains how to use this class.

The example below also shows how to manipulate this class: first compute a ground truth homography from camera poses, then project pixel coordinates using an homography, and finally estimate an homography from a subset of 4 matched points in frame a and frame b respectively.

 #include <visp3/core/vpHomogeneousMatrix.h>
 #include <visp3/core/vpMath.h>
 #include <visp3/core/vpMeterPixelConversion.h>
 #include <visp3/vision/vpHomography.h>

 int main()
 {
   // Initialize in the object frame the coordinates in meters of 4 points that
   // belong to a planar object
   vpPoint Po[4];
   Po[0].setWorldCoordinates(-0.1, -0.1, 0);
   Po[1].setWorldCoordinates( 0.2, -0.1, 0);
   Po[2].setWorldCoordinates( 0.1,  0.1, 0);
   Po[3].setWorldCoordinates(-0.1,  0.3, 0);

   // Initialize the pose between camera frame a and object frame o
   vpHomogeneousMatrix aMo(0, 0, 1, 0, 0, 0); // Camera is 1 meter far

   // Initialize the pose between camera frame a and camera frame
   // b. These two frames correspond for example to two successive
   // camera positions
   vpHomogeneousMatrix aMb(0.2, 0.1, 0, 0, 0, vpMath::rad(2));

   // Compute the pose between camera frame b and object frame
   vpHomogeneousMatrix bMo = aMb.inverse() * aMo;

   // Initialize camera intrinsic parameters
   vpCameraParameters cam;

   // Compute the coordinates in pixels of the 4 object points in the
   // camera frame a
   vpPoint Pa[4];
   std::vector<double> xa(4), ya(4); // Coordinates in pixels of the points in frame a
   for(int i=0 ; i < 4 ; ++i) {
     Pa[i] = Po[i]; Pa[i].project(aMo); // Project the points from object frame to camera frame a
     vpMeterPixelConversion::convertPoint(cam,
                                         Pa[i].get_x(), Pa[i].get_y(),
                                         xa[i], ya[i]);
   }

   // Compute the coordinates in pixels of the 4 object points in the
   // camera frame b
   vpPoint Pb[4];
   std::vector<double> xb(4), yb(4); // Coordinates in pixels of the points in frame b
   for(int i=0 ; i < 4 ; ++i) {
     Pb[i] = Po[i]; Pb[i].project(bMo); // Project the points from object frame to camera frame b
   }

   // Compute equation of the 3D plane containing the points in camera frame b
   vpPlane bP(Pb[0], Pb[1], Pb[2]);

   // Compute the corresponding ground truth homography
   vpHomography aHb(aMb, bP);

   std::cout << "Ground truth homography aHb: \n" << aHb<< std::endl;

   // Compute the coordinates of the points in frame b using the ground
   // truth homography and the coordinates of the points in frame a
   vpHomography bHa = aHb.inverse();
   for(int i = 0; i < 4 ; ++i){
     double inv_z = 1. / (bHa[2][0] * xa[i] + bHa[2][1] * ya[i] + bHa[2][2]);

     xb[i] = (bHa[0][0] * xa[i] + bHa[0][1] * ya[i] + bHa[0][2]) * inv_z;
     yb[i] = (bHa[1][0] * xa[i] + bHa[1][1] * ya[i] + bHa[1][2]) * inv_z;
   }

   // Estimate the homography from 4 points coordinates expressed in pixels
   vpHomography::DLT(xb, yb, xa, ya, aHb, true);
   aHb /= aHb[2][2]; // Apply a scale factor to have aHb[2][2] = 1

   std::cout << "Estimated homography aHb: \n" << aHb<< std::endl;
}
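
For reference, a minimal Python sketch of the ground-truth part of the example above is given below. It assumes the usual visp.core / visp.vision import paths of the ViSP Python bindings and that the HomogeneousMatrix, Point and Plane constructors and methods used here behave as in the C++ API; it is an illustration, not a verbatim port.

import math

from visp.core import HomogeneousMatrix, Plane, Point
from visp.vision import Homography

# Four coplanar points expressed in the object frame (meters)
coords = [(-0.1, -0.1, 0), (0.2, -0.1, 0), (0.1, 0.1, 0), (-0.1, 0.3, 0)]

# Poses: camera a w.r.t. the object frame, and camera a w.r.t. camera b
aMo = HomogeneousMatrix(0, 0, 1, 0, 0, 0)                    # camera a is 1 meter away
aMb = HomogeneousMatrix(0.2, 0.1, 0, 0, 0, math.radians(2))
bMo = aMb.inverse() * aMo

# Project the points in camera frame b and build the plane that contains them
Pb = []
for (X, Y, Z) in coords:
    P = Point()
    P.setWorldCoordinates(X, Y, Z)
    P.project(bMo)
    Pb.append(P)
bP = Plane(Pb[0], Pb[1], Pb[2])

# Ground-truth homography between image a and image b, and projection of one
# point from image plane b to image plane a (normalized coordinates)
aHb = Homography(aMb, bP)
Pa = aHb * Pb[0]
print("Ground truth homography aHb:\n", aHb)
print("Point in image plane a:", Pa.get_x(), Pa.get_y())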

Overloaded function.

  1. __init__(self: visp._visp.vision.Homography) -> None

Initialize an homography as identity.

  2. __init__(self: visp._visp.vision.Homography, H: visp._visp.vision.Homography) -> None

Initialize an homography from another homography.

  3. __init__(self: visp._visp.vision.Homography, aMb: visp._visp.core.HomogeneousMatrix, bP: visp._visp.core.Plane) -> None

Construction from an homogeneous matrix and a plane.

  4. __init__(self: visp._visp.vision.Homography, aRb: visp._visp.core.RotationMatrix, atb: visp._visp.core.TranslationVector, bP: visp._visp.core.Plane) -> None

Construction from translation and rotation and a plane.

  5. __init__(self: visp._visp.vision.Homography, tu: visp._visp.core.ThetaUVector, atb: visp._visp.core.TranslationVector, bP: visp._visp.core.Plane) -> None

Construction from translation and rotation and a plane.

  6. __init__(self: visp._visp.vision.Homography, arb: visp._visp.core.PoseVector, bP: visp._visp.core.Plane) -> None

Construction from a pose vector and a plane.

Methods

DLT

From couples of matched points \(^a{\bf p}=(x_a,y_a,1)\) in image a and \(^b{\bf p}=(x_b,y_b,1)\) in image b with homogeneous coordinates, computes the homography matrix by resolving \(^a{\bf p} = ^a{\bf H}_b\; ^b{\bf p}\) using the DLT (Direct Linear Transform) algorithm.

HLM

From couples of matched points \(^a{\bf p}=(x_a,y_a,1)\) in image a and \(^b{\bf p}=(x_b,y_b,1)\) in image b with homogeneous coordinates, computes the homography matrix by resolving \(^a{\bf p} = ^a{\bf H}_b\; ^b{\bf p}\) using Ezio Malis linear method (HLM) [25] .

__init__

Overloaded function.

buildFrom

Overloaded function.

collineation2homography

Transform an homography from pixel space to calibrated domain.

computeDisplacement

Overloaded function.

convert

Returns the 3x3 matrix corresponding to the homography.

det

Return homography determinant.

eye

Set the homography as identity transformation by setting the diagonal to 1 and all other values to 0.

homography2collineation

Transform an homography from calibrated domain to pixel space.

inverse

Invert the homography.

load

Read an homography from a file and verify that it is really an homogeneous matrix.

project

Overloaded function.

projection

Project the current image point (in frame b) into the frame a using the homography aHb.

ransac

From couples of matched points \(^a{\bf p}=(x_a,y_a,1)\) in image a and \(^b{\bf p}=(x_b,y_b,1)\) in image b with homogeneous coordinates, computes the homography matrix by resolving \(^a{\bf p} = ^a{\bf H}_b\; ^b{\bf p}\) using the RANSAC algorithm.

resize

This function is not applicable to an homography, which is always a 3-by-3 matrix.

robust

From couples of matched points \(^a{\bf p}=(x_a,y_a,1)\) in image a and \(^b{\bf p}=(x_b,y_b,1)\) in image b with homogeneous coordinates, computes the homography matrix by resolving \(^a{\bf p} = ^a{\bf H}_b\; ^b{\bf p}\) using a robust estimation scheme.

save

Save an homography in a file.

Inherited Methods

insert

Overloaded function.

getCols

Return the number of columns of the 2D array.

insertStatic

Insert array B in array A at the given position.

reshape

getRows

Return the number of rows of the 2D array.

size

Return the number of elements of the 2D array.

getMinValue

Return the array min value.

getMaxValue

Return the array max value.

hadamard

param m:

Second matrix;

saveYAML

Save an array in a YAML-formatted file.

conv2

Overloaded function.

t

Compute the transpose of the array.

numpy

Numpy view of the underlying array data.

Operators

__doc__

__init__

Overloaded function.

__itruediv__

Divide all the elements of the homography matrix by v: Hij = Hij / v.

__module__

__mul__

Overloaded function.

__truediv__

Divide an homography by a scalar.

Attributes

__annotations__

__hash__

static DLT(xb: list[float], yb: list[float], xa: list[float], ya: list[float], aHb: visp._visp.vision.Homography, normalization: bool = True) None

From couples of matched points \(^a{\bf p}=(x_a,y_a,1)\) in image a and \(^b{\bf p}=(x_b,y_b,1)\) in image b with homogeneous coordinates, computes the homography matrix by resolving \(^a{\bf p} = ^a{\bf H}_b\; ^b{\bf p}\) using the DLT (Direct Linear Transform) algorithm.

At least 4 couples of points are needed.

To do so, we use the DLT algorithm on the data, i.e. we solve the linear system \(\bf{Ah} = 0\) by SVD, where \(\bf{h}\) is the vector with the terms of \(^a{\bf H}_b\) and \(\mathbf{A}\) depends on the points coordinates.

For each point, in homogeneous coordinates we have:

\[^a{\bf p} = ^a{\bf H}_b\; ^b{\bf p} \]

which is equivalent to:

\[^a{\bf p} \times {^a{\bf H}_b \; ^b{\bf p}} =0 \]

If we denote by \(\mathbf{h}_j^T\) the \(j^{\textrm{th}}\) row of \(^a{\bf H}_b\) , we can write:

\[\begin{split}^a{\bf H}_b \; ^b{\bf p} = \left( \begin{array}{c}\mathbf{h}_1^T \;^b{\bf p} \\\mathbf{h}_2^T \; ^b{\bf p} \\\mathbf{h}_3^T \;^b{\bf p} \end{array}\right) \end{split}\]

Setting \(^a{\bf p}=(x_{a},y_{a},w_{a})\) , the cross product can be rewritten by:

\[\begin{split}^a{\bf p} \times ^a{\bf H}_b \; ^b{\bf p} =\left( \begin{array}{c}y_{a}\mathbf{h}_3^T \; ^b{\bf p}-w_{a}\mathbf{h}_2^T \; ^b{\bf p} \\w_{a}\mathbf{h}_1^T \; ^b{\bf p} -x_{a}\mathbf{h}_3^T \; ^b{\bf p} \\x_{a}\mathbf{h}_2^T \; ^b{\bf p}- y_{a}\mathbf{h}_1^T \; ^b{\bf p}\end{array}\right) \end{split}\]
\[\begin{split}\underbrace{\left( \begin{array}{ccc}\mathbf{0}^T & -w_{a} \; ^b{\bf p}^T & y_{a} \; ^b{\bf p}^T \\w_{a} \; ^b{\bf p}^T&\mathbf{0}^T & -x_{a} \; ^b{\bf p}^T \\-y_{a} \; ^b{\bf p}^T & x_{a} \; ^b{\bf p}^T & \mathbf{0}^T\end{array}\right)}_{\mathbf{A}_i (3\times 9)} \underbrace{\left( \begin{array}{c}\mathbf{h}_{1}^{T} \\\mathbf{h}_{2}^{T}\\\mathbf{h}_{3}^{T}\end{array}\right)}_{\mathbf{h} (9\times 1)}=0 \end{split}\]

leading to an homogeneous system to be solved: \(\mathbf{A}\mathbf{h}=0\) with \(\mathbf{A}=\left(\mathbf{A}_1^T, ..., \mathbf{A}_i^T, ..., \mathbf{A}_n^T \right)^T\) .

It can be solved using an SVD decomposition:

\[\bf A = UDV^T \]

\(\mathbf{h}\) is the column of \(\mathbf{V}\) associated with the smallest singular value of \(\mathbf{A}\) .

Parameters:
xb: list[float]

Coordinates vector of matched points in image b. These coordinates are expressed in meters.

yb: list[float]

Coordinates vector of matched points in image b. These coordinates are expressed in meters.

xa: list[float]

Coordinates vector of matched points in image a. These coordinates are expressed in meters.

ya: list[float]

Coordinates vector of matched points in image a. These coordinates are expressed in meters.

aHb: visp._visp.vision.Homography

Estimated homography that relates image a and image b.

normalization: bool = True

When set to true, the coordinates of the points are normalized. The normalization carried out is the one recommended by Hartley.
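
A short, hedged sketch of calling DLT() from Python follows (assuming the usual visp.vision import path of the bindings); the point coordinates are made-up values in meters, only meant to illustrate the call and the in-place update of aHb.

from visp.vision import Homography

# Four made-up matched points, normalized coordinates in meters
xb = [-0.10, 0.20, 0.10, -0.10]
yb = [-0.10, -0.10, 0.10, 0.30]
xa = [-0.12, 0.18, 0.09, -0.11]
ya = [-0.09, -0.11, 0.12, 0.28]

aHb = Homography()
Homography.DLT(xb, yb, xa, ya, aHb, True)  # aHb is updated in place
aHb /= aHb[2, 2]                           # fix the scale so that aHb[2][2] = 1
print(aHb)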

static HLM(xb: list[float], yb: list[float], xa: list[float], ya: list[float], isplanar: bool, aHb: visp._visp.vision.Homography) None

From couples of matched points \(^a{\bf p}=(x_a,y_a,1)\) in image a and \(^b{\bf p}=(x_b,y_b,1)\) in image b with homogeneous coordinates, computes the homography matrix by resolving \(^a{\bf p} = ^a{\bf H}_b\; ^b{\bf p}\) using Ezio Malis linear method (HLM) [25] .

This method can consider points that are planar or non planar. The algorithm for the planar scene implemented in this file is described in Ezio Malis' PhD thesis [26] .

If the boolean isplanar is true the points are assumed to be in a plane, otherwise they are assumed to be non planar.

Note

See DLT() when the scene is planar.

Parameters:
xb: list[float]

Coordinates vector of matched points in image b. These coordinates are expressed in meters.

yb: list[float]

Coordinates vector of matched points in image b. These coordinates are expressed in meters.

xa: list[float]

Coordinates vector of matched points in image a. These coordinates are expressed in meters.

ya: list[float]

Coordinates vector of matched points in image a. These coordinates are expressed in meters.

isplanar: bool

If true the points are assumed to be in a plane, otherwise they are assumed to be non planar.

aHb: visp._visp.vision.Homography

Estimated homography that relates image a and image b.
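
Calling HLM() from Python mirrors the DLT() sketch above; the only difference is the isplanar flag (hedged sketch, same assumed import path, made-up coordinates in meters).

from visp.vision import Homography

xb = [-0.10, 0.20, 0.10, -0.10]
yb = [-0.10, -0.10, 0.10, 0.30]
xa = [-0.12, 0.18, 0.09, -0.11]
ya = [-0.09, -0.11, 0.12, 0.28]

aHb = Homography()
Homography.HLM(xb, yb, xa, ya, True, aHb)  # isplanar=True: planar scene
print(aHb)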

__eq__(self, A: visp._visp.core.ArrayDouble2D) bool

Equal to comparison operator of a 2D array.

__getitem__(*args, **kwargs)

Overloaded function.

  1. __getitem__(self: visp._visp.core.ArrayDouble2D, arg0: tuple[int, int]) -> float

  2. __getitem__(self: visp._visp.core.ArrayDouble2D, arg0: int) -> numpy.ndarray[numpy.float64]

  3. __getitem__(self: visp._visp.core.ArrayDouble2D, arg0: slice) -> numpy.ndarray[numpy.float64]

  4. __getitem__(self: visp._visp.core.ArrayDouble2D, arg0: tuple) -> numpy.ndarray[numpy.float64]

__init__(*args, **kwargs)

Overloaded function.

  1. __init__(self: visp._visp.vision.Homography) -> None

Initialize an homography as identity.

  2. __init__(self: visp._visp.vision.Homography, H: visp._visp.vision.Homography) -> None

Initialize an homography from another homography.

  3. __init__(self: visp._visp.vision.Homography, aMb: visp._visp.core.HomogeneousMatrix, bP: visp._visp.core.Plane) -> None

Construction from an homogeneous matrix and a plane.

  4. __init__(self: visp._visp.vision.Homography, aRb: visp._visp.core.RotationMatrix, atb: visp._visp.core.TranslationVector, bP: visp._visp.core.Plane) -> None

Construction from translation and rotation and a plane.

  5. __init__(self: visp._visp.vision.Homography, tu: visp._visp.core.ThetaUVector, atb: visp._visp.core.TranslationVector, bP: visp._visp.core.Plane) -> None

Construction from translation and rotation and a plane.

  6. __init__(self: visp._visp.vision.Homography, arb: visp._visp.core.PoseVector, bP: visp._visp.core.Plane) -> None

Construction from a pose vector and a plane.

__itruediv__(self, v: float) visp._visp.vision.Homography

Divide all the elements of the homography matrix by v: Hij = Hij / v.

__mul__(*args, **kwargs)

Overloaded function.

  1. __mul__(self: visp._visp.vision.Homography, H: visp._visp.vision.Homography) -> visp._visp.vision.Homography

Multiplication by an homography.

vpHomography aHb, bHc;
// Initialize aHb and bHc homographies
vpHomography aHc = aHb * bHc;
Parameters:
H

Homography to multiply with.

  2. __mul__(self: visp._visp.vision.Homography, v: float) -> visp._visp.vision.Homography

Multiply an homography by a scalar.

double v = 1.1;
vpHomography aHb;
// Initialize aHb
vpHomography H = aHb * v;
Parameters:
v

Value of the scalar.

  3. __mul__(self: visp._visp.vision.Homography, b: visp._visp.core.ColVector) -> visp._visp.core.ColVector

Operation a = aHb * b.

Parameters:
b

3 dimension vector.

  4. __mul__(self: visp._visp.vision.Homography, b_P: visp._visp.core.Point) -> visp._visp.core.Point

From the coordinates of the point in image plane b and the homography between image a and b computes the coordinates of the point in image plane a.

Parameters:
b_P

2D coordinates of the point in the image plane b.

Returns:

A point with 2D coordinates in the image plane a.
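
The Python counterparts of these snippets would look as follows (hedged sketch; the visp.core / visp.vision import paths and the Point.set_x() / set_y() setters of the normalized image-plane coordinates are assumed to behave as in the C++ API).

from visp.core import Point
from visp.vision import Homography

aHb = Homography()
bHc = Homography()

aHc = aHb * bHc        # composition of two homographies
H = aHb * 1.1          # multiplication by a scalar

b_P = Point()          # point with normalized coordinates in image plane b
b_P.set_x(0.1)
b_P.set_y(-0.05)
a_P = aHb * b_P        # corresponding point in image plane a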

__ne__(self, A: visp._visp.core.ArrayDouble2D) bool

Not equal to comparison operator of a 2D array.

__truediv__(self, v: float) visp._visp.vision.Homography

Divide an homography by a scalar.

vpHomography aHb;
// Initialize aHb
vpHomography H = aHb / aHb[2][2];
Parameters:
v: float

Value of the scalar.
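
Python counterpart of the snippet above (sketch, assuming the usual visp.vision import path):

from visp.vision import Homography

aHb = Homography()           # in practice, an estimated homography
H = aHb / aHb[2, 2]          # new homography scaled so that H[2][2] = 1
aHb /= aHb[2, 2]             # in-place variant using __itruediv__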

buildFrom(*args, **kwargs)

Overloaded function.

  1. buildFrom(self: visp._visp.vision.Homography, aRb: visp._visp.core.RotationMatrix, atb: visp._visp.core.TranslationVector, bP: visp._visp.core.Plane) -> None

Construction from translation and rotation and a plane.

  2. buildFrom(self: visp._visp.vision.Homography, tu: visp._visp.core.ThetaUVector, atb: visp._visp.core.TranslationVector, bP: visp._visp.core.Plane) -> None

Construction from translation and rotation and a plane.

  3. buildFrom(self: visp._visp.vision.Homography, arb: visp._visp.core.PoseVector, bP: visp._visp.core.Plane) -> None

Construction from translation and rotation and a plane.

  4. buildFrom(self: visp._visp.vision.Homography, aMb: visp._visp.core.HomogeneousMatrix, bP: visp._visp.core.Plane) -> None

Construction from homogeneous matrix and a plane.

collineation2homography(self, cam: visp._visp.core.CameraParameters) visp._visp.vision.Homography

Transform an homography from pixel space to calibrated domain.

Note

See homography2collineation()

Parameters:
cam: visp._visp.core.CameraParameters

Camera parameters used to fill \({\bf K}\) matrix such as

\[\begin{split}{\bf K} = \left[ \begin{array}{ccc} p_x & 0 & u_0 \\0 & p_y & v_0 \\0 & 0 & 1 \end{array}\right]\end{split}\]

Returns:

The corresponding homography matrix \(\bf H\) in the Euclidean space or calibrated domain.
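
A hedged sketch of the conversion in Python; the CameraParameters(px, py, u0, v0) constructor and the intrinsic values are assumptions made only for the illustration.

from visp.core import CameraParameters
from visp.vision import Homography

cam = CameraParameters(600.0, 600.0, 320.0, 240.0)  # assumed intrinsics
G = Homography()                          # collineation expressed in pixel space
H = G.collineation2homography(cam)        # homography in the calibrated domain
G_back = H.homography2collineation(cam)   # back to pixel space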

computeDisplacement(*args, **kwargs)

Overloaded function.

  1. computeDisplacement(self: visp._visp.vision.Homography, aRb: visp._visp.core.RotationMatrix, atb: visp._visp.core.TranslationVector, n: visp._visp.core.ColVector) -> None

Compute the camera displacement between two images from the homography \({^a}{\bf H}_b\) which is here an implicit parameter (*this).

Parameters:
aRb

Rotation matrix as an output \({^a}{\bf R}_b\) .

atb

Translation vector as an output \(^a{\bf t}_b\) .

n

Normal vector to the plane as an output.

  2. computeDisplacement(self: visp._visp.vision.Homography, nd: visp._visp.core.ColVector, aRb: visp._visp.core.RotationMatrix, atb: visp._visp.core.TranslationVector, n: visp._visp.core.ColVector) -> None

Compute the camera displacement between two images from the homography \({^a}{\bf H}_b\) which is here an implicit parameter (*this).

The camera displacement between camera frame a and camera frame b is represented as a rotation matrix \({^a}{\bf R}_b\) and a translation vector \(^a{\bf t}_b\) from which an homogeneous matrix can be built ( vpHomogeneousMatrix ).

Parameters:
nd

Input normal vector to the plane used to compare with the normal vector n extracted from the homography.

aRb

Rotation matrix as an output \({^a}{\bf R}_b\) .

atb

Translation vector as an output \(^a{\bf t}_b\) .

n

Normal vector to the plane as an output.
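
A hedged sketch of the decomposition in Python; the RotationMatrix(), TranslationVector() and ColVector(3) constructors are assumed to exist as in the C++ API, and the output arguments are filled in place.

from visp.core import ColVector, RotationMatrix, TranslationVector
from visp.vision import Homography

aHb = Homography()       # in practice, an estimated homography
aRb = RotationMatrix()
atb = TranslationVector()
n = ColVector(3)

aHb.computeDisplacement(aRb, atb, n)  # outputs: rotation, translation, plane normal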

static conv2(*args, **kwargs)

Overloaded function.

  1. conv2(M: visp._visp.core.ArrayDouble2D, kernel: visp._visp.core.ArrayDouble2D, mode: str) -> visp._visp.core.ArrayDouble2D

Perform a 2D convolution similar to Matlab conv2 function: \(M \star kernel\) .

Note

This is a very basic implementation that does not use FFT.

Parameters:
M

First matrix.

kernel

Second matrix.

mode

Convolution mode: “full” (default), “same”, “valid”.

  2. conv2(M: visp._visp.core.ArrayDouble2D, kernel: visp._visp.core.ArrayDouble2D, res: visp._visp.core.ArrayDouble2D, mode: str) -> None

Perform a 2D convolution similar to Matlab conv2 function: \(M \star kernel\) .

Note

This is a very basic implementation that does not use FFT.

Parameters:
M

First array.

kernel

Second array.

res

Result.

mode

Convolution mode: “full” (default), “same”, “valid”.

convert(self) visp._visp.core.Matrix
Returns:

The 3x3 matrix corresponding to the homography.

det(self) float

Return homography determinant.

eye(self) None

Set the homography as identity transformation by setting the diagonal to 1 and all other values to 0.

getCols(self) int

Return the number of columns of the 2D array.

Note

See getRows() , size()

getMaxValue(self) float

Return the array max value.

getMinValue(self) float

Return the array min value.

getRows(self) int

Return the number of rows of the 2D array.

Note

See getCols() , size()

hadamard(self, m: visp._visp.core.ArrayDouble2D) visp._visp.core.ArrayDouble2D
Parameters:
m: visp._visp.core.ArrayDouble2D

Second matrix.

Returns:

The Hadamard product m1.hadamard(m2), i.e. \(m_1 \circ m_2\) with \((m_1 \circ m_2)_{i,j} = (m_1)_{i,j} (m_2)_{i,j}\) .

homography2collineation(self, cam: visp._visp.core.CameraParameters) visp._visp.vision.Homography

Transform an homography from calibrated domain to pixel space.

Note

See collineation2homography()

Parameters:
cam: visp._visp.core.CameraParameters

Camera parameters used to fill \({\bf K}\) matrix such as

\[\begin{split}{\bf K} = \left[ \begin{array}{ccc} p_x & 0 & u_0 \\0 & p_y & v_0 \\0 & 0 & 1 \end{array}\right]\end{split}\]

Returns:

The corresponding collineation matrix \(\bf G\) in the pixel space.

insert(*args, **kwargs)

Overloaded function.

  1. insert(self: visp._visp.core.ArrayDouble2D, A: visp._visp.core.ArrayDouble2D, r: int, c: int) -> None

Insert array A at the given position in the current array.

Warning

Throw vpException::dimensionError if the dimensions of the matrices do not allow the operation.

Parameters:
A

The array to insert.

r

The index of the row to begin to insert data.

c

The index of the column to begin to insert data.

  2. insert(self: visp._visp.core.ArrayDouble2D, A: visp._visp.core.ArrayDouble2D, B: visp._visp.core.ArrayDouble2D, r: int, c: int) -> visp._visp.core.ArrayDouble2D

Insert array B in array A at the given position.

Warning

Throw exception if the sizes of the arrays do not allow the insertion.

Parameters:
A

Main array.

B

Array to insert.

r

Index of the row where to add the array.

c

Index of the column where to add the array.

Returns:

Array with B insert in A.

static insertStatic(A: visp._visp.core.ArrayDouble2D, B: visp._visp.core.ArrayDouble2D, C: visp._visp.core.ArrayDouble2D, r: int, c: int) None

Insert array B in array A at the given position.

Warning

Throw exception if the sizes of the arrays do not allow the insertion.

Parameters:
A: visp._visp.core.ArrayDouble2D

Main array.

B: visp._visp.core.ArrayDouble2D

Array to insert.

C: visp._visp.core.ArrayDouble2D

Result array.

r: int

Index of the row where to insert array B.

c: int

Index of the column where to insert array B.

inverse(self, bHa: visp._visp.vision.Homography) None

Invert the homography.

Parameters:
bHa: visp._visp.vision.Homography

\({\bf H}^{-1}\) with \({\bf H}\) = *this.

load(self: visp._visp.vision.Homography, f: std::basic_ifstream<char, std::char_traits<char> >) None

Read an homography from a file and verify that it is really an homogeneous matrix.

Note

See save()

Parameters:
f

the file. This file has to be written using save() .

numpy(self) numpy.ndarray[numpy.float64]

Numpy view of the underlying array data. This numpy view can be used to directly modify the array.
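
Since the view shares memory with the homography, writing through it modifies the ViSP object directly, as in this short sketch (assuming the usual visp.vision import path):

from visp.vision import Homography

H = Homography()
a = H.numpy()      # 3x3 float64 view, no copy
a[0, 2] = 0.5      # writes through to H
print(H[0, 2])     # 0.5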

static project(*args, **kwargs)

Overloaded function.

  1. project(cam: visp._visp.core.CameraParameters, bHa: visp._visp.vision.Homography, iPa: visp._visp.core.ImagePoint) -> visp._visp.core.ImagePoint

Given iPa a pixel with coordinates \((u_a,v_a)\) in image a, and the homography bHa in the Euclidean space or calibrated domain that links image a and b, computes the coordinates of the pixel \((u_b,v_b)\) in the image b using the camera parameters matrix \(\bf K\) .

Compute \(^b{\bf p} = {\bf K} \; {^b}{\bf H}_a \; {\bf K}^{-1} {^a}{\bf p}\) with \(^a{\bf p}=(u_a,v_a,1)\) and \(^b{\bf p}=(u_b,v_b,1)\)

Returns:

The coordinates in pixel of the point with coordinates \((u_b,v_b)\) .

  2. project(bHa: visp._visp.vision.Homography, Pa: visp._visp.core.Point) -> visp._visp.core.Point

Given Pa a point with normalized coordinates \((x_a,y_a,1)\) in the image plane a, and the homography bHa in the Euclidean space that links image a and b, computes the normalized coordinates of the point \((x_b,y_b,1)\) in the image plane b.

Compute \(^b{\bf p} = {^b}{\bf H}_a \; {^a}{\bf p}\) with \(^a{\bf p}=(x_a,y_a,1)\) and \(^b{\bf p}=(x_b,y_b,1)\)

Returns:

The coordinates in meter of the point with coordinates \((x_b,y_b)\) .
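
A hedged sketch of both overloads in Python; the ImagePoint(i, j) constructor (row, column in pixels), the Point.set_x() / set_y() setters and the intrinsic values are assumptions made only for the illustration.

from visp.core import CameraParameters, ImagePoint, Point
from visp.vision import Homography

cam = CameraParameters(600.0, 600.0, 320.0, 240.0)  # assumed intrinsics
bHa = Homography()        # in practice, the inverse of an estimated aHb

iPa = ImagePoint(120, 240)               # pixel (i, j) in image a
iPb = Homography.project(cam, bHa, iPa)  # pixel in image b

Pa = Point()                             # normalized coordinates in image plane a
Pa.set_x(0.1)
Pa.set_y(-0.05)
Pb = Homography.project(bHa, Pa)         # normalized coordinates in image plane b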

projection(self, ipb: visp._visp.core.ImagePoint) visp._visp.core.ImagePoint

Project the current image point (in frame b) into the frame a using the homography aHb.

Parameters:
ipb: visp._visp.core.ImagePoint

Image point expressed in frame b that is projected into frame a using the homography aHb.

Returns:

The projected image point in the frame a.

static ransac(xb: list[float], yb: list[float], xa: list[float], ya: list[float], aHb: visp._visp.vision.Homography, inliers: list[bool], residual: float, nbInliersConsensus: int, threshold: float, normalization: bool = True) tuple[bool, list[bool], float]

From couples of matched points \(^a{\bf p}=(x_a,y_a,1)\) in image a and \(^b{\bf p}=(x_b,y_b,1)\) in image b with homogeneous coordinates, computes the homography matrix by resolving \(^a{\bf p} = ^a{\bf H}_b\; ^b{\bf p}\) using the RANSAC algorithm.

Parameters:
xb: list[float]

Coordinates vector of matched points in image b. These coordinates are expressed in meters.

yb: list[float]

Coordinates vector of matched points in image b. These coordinates are expressed in meters.

xa: list[float]

Coordinates vector of matched points in image a. These coordinates are expressed in meters.

ya: list[float]

Coordinates vector of matched points in image a. These coordinates are expressed in meters.

aHb: visp._visp.vision.Homography

Estimated homography that relates image a and image b.

inliers: list[bool]

Vector that indicates if a matched point is an inlier (true) or an outlier (false).

residual: float

Global residual computed as \(r = \sqrt{1/n \sum_{inliers} {\| {^a{\bf p} - {\hat{^a{\bf H}_b}} {^b{\bf p}}} \|}^{2}}\) with \(n\) the number of inliers.

nbInliersConsensus: int

Minimal number of points requested to fit the estimated homography.

threshold: float

Threshold for outlier removal. A point is considered as an outlier if the reprojection error \(\| {^a{\bf p} - {\hat{^a{\bf H}_b}} {^b{\bf p}}} \|\) is greater than this threshold.

normalization: bool = True

When set to true, the coordinates of the points are normalized. The normalization carried out is the one recommended by Hartley.

Returns:

A tuple containing:

  • true if the homography could be computed, false otherwise.

  • inliers: Vector that indicates if a matched point is an inlier (true) or an outlier (false).

  • residual: Global residual computed as \(r = \sqrt{1/n \sum_{inliers} {\| {^a{\bf p} - {\hat{^a{\bf H}_b}} {^b{\bf p}}} \|}^{2}}\) with \(n\) the number of inliers.
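
A hedged Python sketch of the call; xb, yb, xa, ya stand for the matched coordinate lists in meters (more than 4 couples in practice), and the consensus size and threshold are values picked only for illustration.

from visp.vision import Homography

# xb, yb, xa, ya: matched point coordinates in meters, e.g. built as in the DLT() sketch
aHb = Homography()
ok, inliers, residual = Homography.ransac(
    xb, yb, xa, ya, aHb,
    [], 0.0,   # inliers and residual placeholders, returned in the tuple
    4,         # nbInliersConsensus: minimal number of points to fit the model
    1e-3,      # threshold on the reprojection error, in meters
    True)      # normalization
if ok:
    print(sum(inliers), "inliers, residual =", residual)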

reshape(self, nrows: int, ncols: int) None
resize(self, nrows: int, ncols: int, flagNullify: bool = True) None

This function is not applicable to an homography, which is always a 3-by-3 matrix.

static robust(xb: list[float], yb: list[float], xa: list[float], ya: list[float], aHb: visp._visp.vision.Homography, inliers: list[bool], residual: float, weights_threshold: float = 0.4, niter: int = 4, normalization: bool = True) tuple[list[bool], float]

From couples of matched points \(^a{\bf p}=(x_a,y_a,1)\) in image a and \(^b{\bf p}=(x_b,y_b,1)\) in image b with homogeneous coordinates, computes the homography matrix by resolving \(^a{\bf p} = ^a{\bf H}_b\; ^b{\bf p}\) using a robust estimation scheme.

This method is similar to DLT() except that here a robust estimator is used to reject couples of points that are considered as outliers.

At least 4 couples of points are needed.

Note

See DLT() , ransac()

Parameters:
xb: list[float]

Coordinates vector of matched points in image b. These coordinates are expressed in meters.

yb: list[float]

Coordinates vector of matched points in image b. These coordinates are expressed in meters.

xa: list[float]

Coordinates vector of matched points in image a. These coordinates are expressed in meters.

ya: list[float]

Coordinates vector of matched points in image a. These coordinates are expressed in meters.

aHb: visp._visp.vision.Homography

Estimated homography that relates image a and image b.

inliers: list[bool]

Vector that indicates if a matched point is an inlier (true) or an outlier (false).

residual: float

Global residual computed as \(r = \sqrt{1/n \sum_{inliers} {\| {^a{\bf p} - {\hat{^a{\bf H}_b}} {^b{\bf p}}} \|}^{2}}\) with \(n\) the number of inliers.

weights_threshold: float = 0.4

Threshold applied on the weights updated during the robust estimation and used to consider if a point is an outlier or an inlier. Values should be in [0:1]. A couple of matched points that have a weight lower than this threshold is considered as an outlier. A value equal to zero indicates that all the points are inliers.

niter: int = 4

Number of iterations of the estimation process.

normalization: bool = True

When set to true, the coordinates of the points are normalized. The normalization carried out is the one recommended by Hartley.

Returns:

A tuple containing:

  • inliers: Vector that indicates if a matched point is an inlier (true) or an outlier (false).

  • residual: Global residual computed as \(r = \sqrt{1/n \sum_{inliers} {\| {^a{\bf p} - {\hat{^a{\bf H}_b}} {^b{\bf p}}} \|}^{2}}\) with \(n\) the number of inliers.
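
The robust variant is called the same way (hedged sketch); the returned tuple carries the inlier flags and the residual while aHb is updated in place.

from visp.vision import Homography

# xb, yb, xa, ya: matched point coordinates in meters, as above
aHb = Homography()
inliers, residual = Homography.robust(
    xb, yb, xa, ya, aHb,
    [], 0.0,   # inliers and residual placeholders
    0.4,       # weights_threshold
    4,         # niter
    True)      # normalization
print(sum(inliers), "inliers, residual =", residual)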

save(self: visp._visp.vision.Homography, f: std::basic_ofstream<char, std::char_traits<char> >) None

Save an homography in a file. The load() function allows then to read and set the homography from this file.

Note

See load()

static saveYAML(filename: str, A: visp._visp.core.ArrayDouble2D, header: str = '') bool

Save an array in a YAML-formatted file.

Here is an example of outputs.

vpArray2D<double> M(3,4);
vpArray2D::saveYAML("matrix.yml", M, "example: a YAML-formatted header");
vpArray2D::saveYAML("matrixIndent.yml", M, "example:\n    - a YAML-formatted \
header\n    - with inner indentation");

Content of matrix.yml:

example: a YAML-formatted header
rows: 3
cols: 4
data:
  - [0, 0, 0, 0]
  - [0, 0, 0, 0]
  - [0, 0, 0, 0]

Content of matrixIndent.yml:

example:
    - a YAML-formatted header
    - with inner indentation
rows: 3
cols: 4
data:
    - [0, 0, 0, 0]
    - [0, 0, 0, 0]
    - [0, 0, 0, 0]

Note

See loadYAML()

Parameters:
filename

absolute file name.

A

array to be saved in the file.

header

optional lines that will be saved at the beginning of the file. Should be YAML-formatted and will adapt to the indentation if any.

Returns:

Returns true if success.

size(self) int

Return the number of elements of the 2D array.

t(self) visp._visp.core.ArrayDouble2D

Compute the transpose of the array.

Returns:

vpArray2D<Type> C = A^T

__hash__ = None