PixelMeterConversion

class PixelMeterConversion

Bases: pybind11_object

Various conversion functions to transform primitives (2D line, moments, 2D point) from pixel to normalized coordinates in meter in the image plane.

The transformation relies either on ViSP camera parameters implemented in vpCameraParameters, or on OpenCV camera parameters set from a projection matrix and a vector of distortion coefficients.

Methods

__init__

convertEllipse

Convert ellipse parameters (i.e. ellipse center and normalized centered moments) from pixels \((u_c, v_c, n_{{20}_p}, n_{{11}_p}, n_{{02}_p})\) to meters \((x_c, y_c, n_{{20}_m}, n_{{11}_m}, n_{{02}_m})\) in the image plane.

convertLine

Line parameters conversion from pixel \((\rho_p,\theta_p)\) to normalized coordinates \((\rho_m,\theta_m)\) in meter using ViSP camera parameters.

convertMoment

Overloaded function.

convertPoint

Overloaded function.

convertPoints

Convert a set of 2D pixel coordinates to normalized coordinates.

Inherited Methods

Operators

__doc__

__init__

__module__

Attributes

__annotations__

__init__(*args, **kwargs)
static convertEllipse(cam: visp._visp.core.CameraParameters, center_p: visp._visp.core.ImagePoint, n20_p: float, n11_p: float, n02_p: float) → tuple[float, float, float, float, float]

Convert ellipse parameters (i.e. ellipse center and normalized centered moments) from pixels \((u_c, v_c, n_{{20}_p}, n_{{11}_p}, n_{{02}_p})\) to meters \((x_c, y_c, n_{{20}_m}, n_{{11}_m}, n_{{02}_m})\) in the image plane.

Parameters:
cam: visp._visp.core.CameraParameters

Camera parameters.

center_p: visp._visp.core.ImagePoint

Center of the ellipse \((u_c, v_c)\), with pixel coordinates.

n20_p: float

Normalized second order moment \(n_{20}\) of the ellipse, in pixels.

n11_p: float

Normalized second order moment \(n_{11}\) of the ellipse, in pixels.

n02_p: float

Normalized second order moment \(n_{02}\) of the ellipse, in pixels.

Returns:

A tuple containing:

  • xc_m: x-coordinate of the ellipse center, in meters in the image plane.

  • yc_m: y-coordinate of the ellipse center, in meters in the image plane.

  • n20_m: Normalized second order moment \(n_{20}\) of the ellipse, in meters in the image plane.

  • n11_m: Normalized second order moment \(n_{11}\) of the ellipse, in meters in the image plane.

  • n02_m: Normalized second order moment \(n_{02}\) of the ellipse, in meters in the image plane.
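
Example usage (a minimal sketch; the intrinsics and moment values are placeholders, and the ImagePoint(i, j) constructor is assumed to follow the (row, column) convention of the underlying vpImagePoint class):

from visp.core import CameraParameters, ImagePoint, PixelMeterConversion

# Placeholder intrinsics (px, py, u0, v0) and ellipse parameters, for illustration only
cam = CameraParameters(px=600, py=600, u0=320, v0=240)
center_p = ImagePoint(240, 320)  # assumed (i, j) = (row, column), i.e. (v_c, u_c)
n20_p, n11_p, n02_p = 400.0, 0.0, 250.0  # normalized centered moments, in pixels

xc_m, yc_m, n20_m, n11_m, n02_m = PixelMeterConversion.convertEllipse(
    cam, center_p, n20_p, n11_p, n02_p)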

static convertLine(cam: visp._visp.core.CameraParameters, rho_p: float, theta_p: float) → tuple[float, float]

Line parameters conversion from pixel \((\rho_p,\theta_p)\) to normalized coordinates \((\rho_m,\theta_m)\) in meter using ViSP camera parameters. This function doesn’t use distortion coefficients.

Parameters:
cam: visp._visp.core.CameraParameters

camera parameters.

rho_p: float

Line parameter \(\rho_p\) expressed in pixels.

theta_p: float

Line parameter \(\theta_p\) expressed in pixels.

Returns:

A tuple containing:

  • rho_m: Line parameter \(\rho_m\) expressed in meters in the image plane.

  • theta_m: Line parameter \(\theta_m\) expressed in meters in the image plane.
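
Example usage (a minimal sketch with placeholder intrinsics and line parameters; the angle \(\theta_p\) is passed in radians):

import math
from visp.core import CameraParameters, PixelMeterConversion

# Placeholder intrinsics and line parameters, for illustration only
cam = CameraParameters(px=600, py=600, u0=320, v0=240)
rho_p = 100.0                 # distance parameter of the line, in pixels
theta_p = math.radians(30.0)  # angle parameter of the line, in radians

rho_m, theta_m = PixelMeterConversion.convertLine(cam, rho_p, theta_p)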

static convertMoment(*args, **kwargs)

Overloaded function.

  1. convertMoment(cam: visp._visp.core.CameraParameters, order: int, moment_pixel: visp._visp.core.Matrix, moment_meter: visp._visp.core.Matrix) -> None

Moments conversion from pixel to normalized coordinates in meter using ViSP camera parameters. This function doesn’t use distortion coefficients.

The following example shows how to use this function; the camera parameters and pixel moment values in the snippet are placeholders used for illustration.

#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpMatrix.h>
#include <visp3/core/vpPixelMeterConversion.h>

int main()
{
  vpCameraParameters cam(600, 600, 320, 240); // px, py, u0, v0 (placeholder values)
  // mij moments previously computed in pixels (placeholder values)
  double m00_p = 10., m10_p = 1., m01_p = 2., m20_p = 3., m11_p = 4., m02_p = 5.;

  unsigned int order = 3;
  vpMatrix M_p(order, order); // 3-by-3 matrix with mij moments expressed in pixels
  M_p = 0;
  vpMatrix M_m(order, order); // 3-by-3 matrix with mij moments expressed in meters
  M_m = 0;

  // Fill input matrix with mij moments in pixels
  M_p[0][0] = m00_p;
  M_p[1][0] = m10_p;
  M_p[0][1] = m01_p;
  M_p[2][0] = m20_p;
  M_p[1][1] = m11_p;
  M_p[0][2] = m02_p;

  vpPixelMeterConversion::convertMoment(cam, order, M_p, M_m);

  // Moments mij in meters
  double m00 = M_m[0][0];
  double m01 = M_m[0][1];
  double m10 = M_m[1][0];
  double m02 = M_m[0][2];
  double m11 = M_m[1][1];
  double m20 = M_m[2][0];
}

Parameters:
cam

camera parameters.

order

Moment order.

moment_pixel

Moment values in pixels.

moment_meter

Moment values in meters in the image plane.

  2. convertMoment(cameraMatrix: cv::Mat, order: int, moment_pixel: visp._visp.core.Matrix, moment_meter: visp._visp.core.Matrix) -> None

Moments conversion from pixel to normalized coordinates in meter using OpenCV camera parameters. This function doesn’t use distortion coefficients.

Parameters:
cameraMatrix

Camera Matrix \(\begin{bmatrix} f_x & 0 & c_x \\0 & f_y & c_y \\0 & 0 & 1\end{bmatrix}\)

order

Moment order.

moment_pixel

Moment values in pixels.

moment_meter

Moment values in meters in the image plane.
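
The sketch below is a speculative Python counterpart of the C++ snippet above, not taken from the ViSP documentation: it assumes that visp.core.Matrix provides a (rows, cols) constructor and a writable numpy() view, and it uses placeholder camera parameters and moment values.

from visp.core import CameraParameters, Matrix, PixelMeterConversion

cam = CameraParameters(px=600, py=600, u0=320, v0=240)  # placeholder intrinsics

order = 3
M_p = Matrix(order, order)  # mij moments expressed in pixels (assumed (rows, cols) constructor)
M_m = Matrix(order, order)  # will receive mij moments expressed in meters

# Fill the input matrix through its numpy view (assumed to share memory with the Matrix)
m_p = M_p.numpy()
m_p[0, 0] = 10.0  # m00 in pixels (placeholder values)
m_p[1, 0] = 1.0   # m10
m_p[0, 1] = 2.0   # m01
m_p[2, 0] = 3.0   # m20
m_p[1, 1] = 4.0   # m11
m_p[0, 2] = 5.0   # m02

PixelMeterConversion.convertMoment(cam, order, M_p, M_m)

# Moments mij in meters, read back from the output matrix
m_m = M_m.numpy()
m00_m, m10_m, m01_m = m_m[0, 0], m_m[1, 0], m_m[0, 1]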

static convertPoint(*args, **kwargs)

Overloaded function.

  1. convertPoint(cam: visp._visp.core.CameraParameters, u: float, v: float) -> tuple[float, float]

Point coordinates conversion from pixel coordinates \((u,v)\) to normalized coordinates \((x,y)\) in meter using ViSP camera parameters.

The formula used depends on the projection model of the camera. To know the currently used projection model, use vpCameraParameters::get_projModel().

\(x = (u-u_0)/p_x\) and \(y = (v-v_0)/p_y\) in the case of perspective projection without distortion.

\(x = (u-u_0)*(1+k_{du}*r^2)/p_x\) and \(y = (v-v_0)*(1+k_{du}*r^2)/p_y\) with \(r^2=((u - u_0)/p_x)^2+((v-v_0)/p_y)^2\) in the case of perspective projection with distortion.

In the case of a projection with Kannala-Brandt distortion, refer to [20].

Parameters:
cam

camera parameters.

u

input coordinate in pixels along image horizontal axis.

v

input coordinate in pixels along image vertical axis.

Returns:

A tuple containing:

  • x: output coordinate in meter along image plane x-axis.

  • y: output coordinate in meter along image plane y-axis.

  2. convertPoint(cam: visp._visp.core.CameraParameters, iP: visp._visp.core.ImagePoint) -> tuple[float, float]

Point coordinates conversion from pixel coordinates \((u,v)\) to normalized coordinates \((x,y)\) in meter using ViSP camera parameters.

The formula used depends on the projection model of the camera. To know the currently used projection model, use vpCameraParameters::get_projModel().

Given the pixel coordinates \((u,v)\) of the point, the coordinates in meters are given by:

\(x = (u-u_0)/p_x\) and \(y = (v-v_0)/p_y\) in the case of perspective projection without distortion.

\(x = (u-u_0)*(1+k_{du}*r^2)/p_x\) and \(y = (v-v_0)*(1+k_{du}*r^2)/p_y\) with \(r^2=((u - u_0)/p_x)^2+((v-v_0)/p_y)^2\) in the case of perspective projection with distortion.

In the case of a projection with Kannala-Brandt distortion, refer to [20].

Parameters:
cam

camera parameters.

iP

input coordinates in pixels.

Returns:

A tuple containing:

  • x: output coordinate in meter along image plane x-axis.

  • y: output coordinate in meter along image plane y-axis.
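
A short sketch that checks the undistorted perspective formula above against convertPoint. The intrinsics are placeholders, and the ImagePoint(i, j) constructor is assumed to follow the (row, column) convention of vpImagePoint.

from visp.core import CameraParameters, ImagePoint, PixelMeterConversion

# Placeholder intrinsics for a perspective model without distortion
px, py, u0, v0 = 600.0, 600.0, 320.0, 240.0
cam = CameraParameters(px=px, py=py, u0=u0, v0=v0)

u, v = 350.0, 260.0
x, y = PixelMeterConversion.convertPoint(cam, u, v)

# Without distortion: x = (u - u0)/px and y = (v - v0)/py
assert abs(x - (u - u0) / px) < 1e-12 and abs(y - (v - v0) / py) < 1e-12

# The ImagePoint overload gives the same result (constructor assumed to take (i, j) = (v, u))
ip = ImagePoint(v, u)
x2, y2 = PixelMeterConversion.convertPoint(cam, ip)
assert abs(x - x2) < 1e-12 and abs(y - y2) < 1e-12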

static convertPoints(cam: visp._visp.core.CameraParameters, us: numpy.ndarray[numpy.float64], vs: numpy.ndarray[numpy.float64]) → tuple[numpy.ndarray[numpy.float64], numpy.ndarray[numpy.float64]]

Convert a set of 2D pixel coordinates to normalized coordinates.

Parameters:
cam: visp._visp.core.CameraParameters

The camera intrinsics with which to convert pixels to normalized coordinates.

us: numpy.ndarray[numpy.float64]

The pixel coordinates along the horizontal axis.

vs: numpy.ndarray[numpy.float64]

The pixel coordinates along the vertical axis.

Raises:

RuntimeError – If us and vs do not have the same dimensions and shape.

Returns:

A tuple containing the x and y normalized coordinates of the input pixels.

Both arrays have the same shape as us and vs.

Example usage:

from visp.core import PixelMeterConversion, CameraParameters
import numpy as np

h, w = 240, 320
cam = CameraParameters(px=600, py=600, u0=320, v0=240)

vs, us = np.meshgrid(range(h), range(w), indexing='ij') # vs and us are 2D arrays
assert vs.shape == (h, w) and us.shape == (h, w)

xs, ys = PixelMeterConversion.convertPoints(cam, us, vs)
# xs and ys have the same shape as us and vs
assert xs.shape == (h, w) and ys.shape == (h, w)

# Converting a numpy array to normalized coords has the same effect as calling on a single image point
u, v = 120, 120
x, y = PixelMeterConversion.convertPoint(cam, u, v)
assert x == xs[v, u] and y == ys[v, u]