MeterPixelConversion

class MeterPixelConversion

Bases: pybind11_object

Various conversion functions to transform primitives (2D ellipse, 2D line, 2D point) from normalized coordinates in meters in the image plane into pixel coordinates.

Transformation relies either on ViSP camera parameters implemented in vpCameraParameters or on OpenCV camera parameters that are set from a camera matrix and a vector of distortion coefficients.

Methods

__init__

convertEllipse

Overloaded function.

convertLine

Line parameters conversion from normalized coordinates \((\rho_m,\theta_m)\) expressed in the image plane to pixel coordinates \((\rho_p,\theta_p)\) using ViSP camera parameters.

convertPoint

Overloaded function.

convertPoints

Convert a set of 2D normalized coordinates to pixel coordinates.

Inherited Methods

Operators

__doc__

__init__

__module__

Attributes

__annotations__

__init__(*args, **kwargs)
static convertEllipse(*args, **kwargs)

Overloaded function.

  1. convertEllipse(cam: visp._visp.core.CameraParameters, sphere: visp._visp.core.Sphere, center_p: visp._visp.core.ImagePoint) -> tuple[float, float, float]

Noting that the perspective projection of a 3D sphere is usually an ellipse, this function uses the camera intrinsic parameters to convert the parameters of the 3D sphere expressed in the image plane (obtained after perspective projection of the sphere) into image values in pixels.

The ellipse resulting from the conversion is represented here by its parameters \(u_c, v_c, n_{20}, n_{11}, n_{02}\), corresponding to its center coordinates in pixels and its second order centered moments normalized by its area.

The following code shows how to use this function:

vpCameraParameters cam;   // camera intrinsic parameters
vpHomogeneousMatrix cMo;  // pose of the object frame in the camera frame
vpImagePoint center_p;    // ellipse center in pixels
vpImage<unsigned char> I; // image used for display
vpSphere sphere;
double n20_p, n11_p, n02_p;
sphere.changeFrame(cMo);  // express the sphere in the camera frame
sphere.projection();      // perspective projection in the image plane
vpMeterPixelConversion::convertEllipse(cam, sphere, center_p, n20_p, n11_p, n02_p);
vpDisplay::displayEllipse(I, center_p, n20_p, n11_p, n02_p, true, vpColor::red);
Parameters:
cam

Intrinsic camera parameters.

sphere

3D sphere with internal vector sphere.p[] that contains the ellipse parameters expressed in the image plane. These parameters are internally updated after perspective projection of the sphere.

center_p

Center \((u_c, v_c)\) of the corresponding ellipse in the image with coordinates expressed in pixels.

Returns:

A tuple containing:

  • n20_p, n11_p, n02_p: Second order centered moments of the ellipse normalized by its area (i.e., such that \(n_{ij} = \mu_{ij}/a\) where \(\mu_{ij}\) are the centered moments and a the area), expressed in pixels.

  1. convertEllipse(cam: visp._visp.core.CameraParameters, circle: visp._visp.core.Circle, center_p: visp._visp.core.ImagePoint, n20_p: float, n11_p: float, n02_p: float) -> tuple[float, float, float]

Noting that the perspective projection of a 3D circle is usually an ellipse, this function uses the ViSP camera intrinsic parameters to convert the parameters of the 3D circle expressed in the image plane (obtained after perspective projection of the circle) into image values in pixels.

The ellipse resulting from the conversion is represented here by its parameters \(u_c, v_c, n_{20}, n_{11}, n_{02}\), corresponding to its center coordinates in pixels and its second order centered moments normalized by its area.

The following code shows how to use this function:

vpCameraParameters cam;   // camera intrinsic parameters
vpHomogeneousMatrix cMo;  // pose of the object frame in the camera frame
vpImagePoint center_p;    // ellipse center in pixels
vpImage<unsigned char> I; // image used for display
vpCircle circle;
double n20_p, n11_p, n02_p;
circle.changeFrame(cMo);  // express the circle in the camera frame
circle.projection();      // perspective projection in the image plane
vpMeterPixelConversion::convertEllipse(cam, circle, center_p, n20_p, n11_p, n02_p);
vpDisplay::displayEllipse(I, center_p, n20_p, n11_p, n02_p, true, vpColor::red);
Parameters:
cam

Intrinsic camera parameters.

circle

3D circle with internal vector circle.p[] that contains the ellipse parameters expressed in the image plane. These parameters are internally updated after perspective projection of the circle.

center_p

Center \((u_c, v_c)\) of the corresponding ellipse in the image with coordinates expressed in pixels.

n20_p, n11_p, n02_p

Second order centered moments of the ellipse normalized by its area (i.e., such that \(n_{ij} = \mu_{ij}/a\) where \(\mu_{ij}\) are the centered moments and a the area), expressed in pixels.

Returns:

A tuple containing:

  • n20_p, n11_p, n02_p: Second order centered moments of the ellipse normalized by its area (i.e., such that \(n_{ij} = \mu_{ij}/a\) where \(\mu_{ij}\) are the centered moments and a the area), expressed in pixels.

  1. convertEllipse(cam: visp._visp.core.CameraParameters, xc_m: float, yc_m: float, n20_m: float, n11_m: float, n02_m: float, center_p: visp._visp.core.ImagePoint) -> tuple[float, float, float]

Convert the parameters of an ellipse expressed in the image plane (as obtained after perspective projection of a 3D circle or sphere) into image values in pixels using ViSP intrinsic camera parameters.

The ellipse resulting from the conversion is here represented by its parameters \(u_c,v_c,n_{20}, n_{11}, n_{02}\) corresponding to its center coordinates in pixel and the centered moments normalized by its area.

Parameters:
cam

Intrinsic camera parameters.

xc_m

x coordinate of the ellipse center in the image plane, with normalized coordinates expressed in meters.

yc_m

y coordinate of the ellipse center in the image plane, with normalized coordinates expressed in meters.

n20_m

Second order centered moments of the ellipse normalized by its area (i.e., such that \(n_{ij} = \mu_{ij}/a\) where \(\mu_{ij}\) are the centered moments and a the area) expressed in meter.

n11_m

Second order centered moments of the ellipse normalized by its area (i.e., such that \(n_{ij} = \mu_{ij}/a\) where \(\mu_{ij}\) are the centered moments and a the area) expressed in meter.

n02_m

Second order centered moments of the ellipse normalized by its area (i.e., such that \(n_{ij} = \mu_{ij}/a\) where \(\mu_{ij}\) are the centered moments and a the area) expressed in meter.

center_p

Center \((u_c, v_c)\) of the corresponding ellipse in the image with coordinates expressed in pixels.

Returns:

A tuple containing:

  • n20_p, n11_p, n02_p: Second order centered moments of the ellipse normalized by its area (i.e., such that \(n_{ij} = \mu_{ij}/a\) where \(\mu_{ij}\) are the centered moments and a the area), expressed in pixels.
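For the perspective model without distortion, the conversion can be sketched in a few lines of plain Python. The formulas below are assumptions inferred from the parameter descriptions, not the bound ViSP implementation: the center converts like a 2D point, and the normalized moments scale with the focal lengths in pixels.

```python
def convert_ellipse_sketch(px, py, u0, v0, xc_m, yc_m, n20_m, n11_m, n02_m):
    """Hypothetical sketch: return (uc, vc, n20_p, n11_p, n02_p) in pixels,
    assuming perspective projection without distortion (px, py: focal lengths
    in pixels; u0, v0: principal point)."""
    uc = u0 + xc_m * px          # the center converts like a 2D point
    vc = v0 + yc_m * py
    n20_p = n20_m * px * px      # second order moments scale with the product
    n11_p = n11_m * px * py      # of the focal lengths along the
    n02_p = n02_m * py * py      # corresponding axes
    return uc, vc, n20_p, n11_p, n02_p
```

For example, with px = py = 600, u0 = 320, v0 = 240, the center (0.1, -0.05) in meters maps to (380, 210) in pixels, and the moments are scaled by 600².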

static convertLine(cam: visp._visp.core.CameraParameters, rho_m: float, theta_m: float) → tuple[float, float]

Line parameters conversion from normalized coordinates \((\rho_m,\theta_m)\) expressed in the image plane to pixel coordinates \((\rho_p,\theta_p)\) using ViSP camera parameters. This function doesn’t use distortion coefficients.

Parameters:
cam: visp._visp.core.CameraParameters

Camera parameters.

rho_m: float

Line parameter \(\rho_m\) expressed in meters in the image plane.

theta_m: float

Line parameter \(\theta_m\) (angle) expressed in radians.

Returns:

A tuple containing:

  • rho_p: Line parameter \(\rho_p\) expressed in pixels.

  • theta_p: Line parameter \(\theta_p\) (angle) expressed in radians.
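The geometry behind this conversion can be sanity-checked without ViSP by converting two points of the line \(x\cos\theta_m + y\sin\theta_m = \rho_m\) to pixels with the undistorted perspective model and recovering \((\rho_p, \theta_p)\) from the resulting pixel line. This is a sketch under those assumptions, not the bound implementation:

```python
import math

def convert_line_sketch(px, py, u0, v0, rho_m, theta_m):
    """Hypothetical sketch: convert line parameters (rho_m, theta_m) to
    (rho_p, theta_p), assuming perspective projection without distortion
    (u = u0 + x * px, v = v0 + y * py)."""
    c, s = math.cos(theta_m), math.sin(theta_m)
    # Two points on the line x*cos + y*sin = rho, one unit apart in meters
    pts_m = [(rho_m * c - t * s, rho_m * s + t * c) for t in (0.0, 1.0)]
    # Convert each point to pixel coordinates
    (u1, v1), (u2, v2) = [(u0 + x * px, v0 + y * py) for x, y in pts_m]
    # Normal of the pixel line (perpendicular to its direction vector)
    theta_p = math.atan2(-(u2 - u1), v2 - v1)
    rho_p = u1 * math.cos(theta_p) + v1 * math.sin(theta_p)
    return rho_p, theta_p
```

With px = py = 600 and u0 = 320, the vertical line \(\rho_m = 0.1, \theta_m = 0\) maps to \(\rho_p = 320 + 0.1 \times 600 = 380\) and \(\theta_p = 0\), as expected.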

static convertPoint(*args, **kwargs)

Overloaded function.

  1. convertPoint(cam: visp._visp.core.CameraParameters, x: float, y: float) -> tuple[float, float]

Point coordinates conversion from normalized coordinates \((x,y)\) in meter in the image plane to pixel coordinates \((u,v)\) in the image using ViSP camera parameters.

The formula used depends on the projection model of the camera. To know the projection model currently in use, call vpCameraParameters::get_projModel()

\(u = x*p_x + u_0\) and \(v = y*p_y + v_0\) in the case of perspective projection without distortion.

\(u = x*p_x*(1+k_{ud}*r^2)+u_0\) and \(v = y*p_y*(1+k_{ud}*r^2)+v_0\) with \(r^2 = x^2+y^2\) in the case of perspective projection with distortion.

In the case of a projection with Kannala-Brandt distortion, refer to [20] .

Parameters:
cam

camera parameters.

x

input coordinate in meter along image plane x-axis.

y

input coordinate in meter along image plane y-axis.

Returns:

A tuple containing:

  • u: output coordinate in pixels along image horizontal axis.

  • v: output coordinate in pixels along image vertical axis.
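The two perspective formulas above can be checked with a few lines of plain Python. This is a sketch of the formulas only, not the bound implementation; the function name and signature are illustrative:

```python
def convert_point_sketch(px, py, u0, v0, x, y, kud=0.0):
    """Hypothetical sketch: apply u = x*px*(1 + kud*r^2) + u0 and
    v = y*py*(1 + kud*r^2) + v0. With kud = 0 this reduces to the
    perspective model without distortion."""
    r2 = x * x + y * y
    u = x * px * (1.0 + kud * r2) + u0
    v = y * py * (1.0 + kud * r2) + v0
    return u, v
```

For example, with px = py = 600, u0 = 320, v0 = 240, the normalized point (0.1, -0.05) maps to (380, 210) in the distortion-free case; a positive kud pushes the point further from the principal point.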

  1. convertPoint(cam: visp._visp.core.CameraParameters, x: float, y: float, iP: visp._visp.core.ImagePoint) -> None

Point coordinates conversion from normalized coordinates \((x,y)\) in meter in the image plane to pixel coordinates in the image using ViSP camera parameters.

The formula used depends on the projection model of the camera. To know the projection model currently in use, call vpCameraParameters::get_projModel()

In the frame (u,v) the result is given by:

\(u = x*p_x + u_0\) and \(v = y*p_y + v_0\) in the case of perspective projection without distortion.

\(u = x*p_x*(1+k_{ud}*r^2)+u_0\) and \(v = y*p_y*(1+k_{ud}*r^2)+v_0\) with \(r^2 = x^2+y^2\) in the case of perspective projection with distortion.

In the case of a projection with Kannala-Brandt distortion, refer to [20] .

Parameters:
cam

camera parameters.

x

input coordinate in meter along image plane x-axis.

y

input coordinate in meter along image plane y-axis.

iP

output coordinates in pixels.

  1. convertPoint(cameraMatrix: cv::Mat, distCoeffs: cv::Mat, x: float, y: float, iP: visp._visp.core.ImagePoint) -> None

Point coordinates conversion from normalized coordinates \((x,y)\) in meter in the image plane to pixel coordinates \((u,v)\) in the image using OpenCV camera parameters.

Parameters:
cameraMatrix

Camera Matrix \(\begin{bmatrix} f_x & 0 & c_x \\0 & f_y & c_y \\0 & 0 & 1\end{bmatrix}\)

distCoeffs

Input vector of distortion coefficients \((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\) of 4, 5, 8, 12 or 14 elements. If the vector is null or empty, zero distortion coefficients are assumed.

x

input coordinate in meter along image plane x-axis.

y

input coordinate in meter along image plane y-axis.

iP

output coordinates in pixels.
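For reference, the common 5-coefficient OpenCV distortion model \((k_1, k_2, p_1, p_2, k_3)\) that this overload relies on can be sketched in plain Python. This is an illustration of the standard model, not the bound implementation:

```python
def convert_point_opencv_sketch(camera_matrix, dist_coeffs, x, y):
    """Hypothetical sketch: distort the normalized point (x, y) with the
    5-coefficient OpenCV model (k1, k2, p1, p2, k3), then map it through
    the camera matrix."""
    fx, cx = camera_matrix[0][0], camera_matrix[0][2]
    fy, cy = camera_matrix[1][1], camera_matrix[1][2]
    # Pad missing coefficients with zeros (an empty vector means no distortion)
    k1, k2, p1, p2, k3 = (list(dist_coeffs) + [0.0] * 5)[:5]
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2 + k3 * r2 ** 3
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return fx * xd + cx, fy * yd + cy
```

With an empty coefficient vector the result reduces to the plain perspective mapping, e.g. (0.1, -0.05) maps to (380, 210) for fx = fy = 600, cx = 320, cy = 240.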

static convertPoints(cam: visp._visp.core.CameraParameters, xs: numpy.ndarray[numpy.float64], ys: numpy.ndarray[numpy.float64]) → tuple[numpy.ndarray[numpy.float64], numpy.ndarray[numpy.float64]]

Convert a set of 2D normalized coordinates to pixel coordinates.

Parameters:
cam: visp._visp.core.CameraParameters

The camera intrinsics with which to convert normalized coordinates to pixels.

xs: numpy.ndarray[numpy.float64]

The normalized coordinates along the horizontal axis.

ys: numpy.ndarray[numpy.float64]

The normalized coordinates along the vertical axis.

Raises:

RuntimeError – If xs and ys do not have the same dimensions and shape.

Returns:

A tuple containing the u,v pixel coordinate arrays of the input normalized coordinates.

Both arrays have the same shape as xs and ys.

Example usage:

from visp.core import MeterPixelConversion, CameraParameters
import numpy as np

cam = CameraParameters(px=600, py=600, u0=320, v0=240)
n = 20
xs, ys = np.random.rand(n), np.random.rand(n)

us, vs = MeterPixelConversion.convertPoints(cam, xs, ys)

# us and vs have the same shape as xs and ys
assert us.shape == (n,) and vs.shape == (n,)

# Converting a numpy array gives the same result as converting each point individually
x, y = xs[0], ys[0]
u, v = MeterPixelConversion.convertPoint(cam, x, y)
assert u == us[0] and v == vs[0]