Visual Servoing Platform  version 3.6.1 under development (2024-11-21)
vpMeterPixelConversion Class Reference

#include <visp3/core/vpMeterPixelConversion.h>

Static Public Member Functions

Using ViSP camera parameters
static void convertEllipse (const vpCameraParameters &cam, const vpSphere &sphere, vpImagePoint &center_p, double &n20_p, double &n11_p, double &n02_p)
 
static void convertEllipse (const vpCameraParameters &cam, const vpCircle &circle, vpImagePoint &center_p, double &n20_p, double &n11_p, double &n02_p)
 
static void convertEllipse (const vpCameraParameters &cam, double xc_m, double yc_m, double n20_m, double n11_m, double n02_m, vpImagePoint &center_p, double &n20_p, double &n11_p, double &n02_p)
 
static void convertLine (const vpCameraParameters &cam, const double &rho_m, const double &theta_m, double &rho_p, double &theta_p)
 
static void convertPoint (const vpCameraParameters &cam, const double &x, const double &y, double &u, double &v)
 
static void convertPoint (const vpCameraParameters &cam, const double &x, const double &y, vpImagePoint &iP)
 
Using OpenCV camera parameters
static void convertEllipse (const cv::Mat &cameraMatrix, const vpCircle &circle, vpImagePoint &center, double &n20_p, double &n11_p, double &n02_p)
 
static void convertEllipse (const cv::Mat &cameraMatrix, const vpSphere &sphere, vpImagePoint &center, double &n20_p, double &n11_p, double &n02_p)
 
static void convertEllipse (const cv::Mat &cameraMatrix, double xc_m, double yc_m, double n20_m, double n11_m, double n02_m, vpImagePoint &center_p, double &n20_p, double &n11_p, double &n02_p)
 
static void convertLine (const cv::Mat &cameraMatrix, const double &rho_m, const double &theta_m, double &rho_p, double &theta_p)
 
static void convertPoint (const cv::Mat &cameraMatrix, const cv::Mat &distCoeffs, const double &x, const double &y, double &u, double &v)
 
static void convertPoint (const cv::Mat &cameraMatrix, const cv::Mat &distCoeffs, const double &x, const double &y, vpImagePoint &iP)
 

Detailed Description

Various conversion functions to transform primitives (2D ellipse, 2D line, 2D point) from normalized coordinates in meters in the image plane into pixel coordinates.

The transformation relies either on ViSP camera parameters implemented in vpCameraParameters, or on OpenCV camera parameters set from a camera matrix and a distortion coefficient vector.

Definition at line 66 of file vpMeterPixelConversion.h.

Member Function Documentation

◆ convertEllipse() [1/6]

void vpMeterPixelConversion::convertEllipse ( const cv::Mat &  cameraMatrix,
const vpCircle circle,
vpImagePoint center,
double &  n20_p,
double &  n11_p,
double &  n02_p 
)
static

Noting that the perspective projection of a 3D circle is usually an ellipse, this function converts the ellipse parameters expressed in the image plane (obtained after perspective projection of the 3D circle) into pixel values using OpenCV camera parameters.

The ellipse resulting from the conversion is represented by its parameters $u_c,v_c,n_{20}, n_{11}, n_{02}$, corresponding to its center coordinates in pixels and its second order centered moments normalized by its area.

Parameters
[in]cameraMatrix: Camera Matrix $\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1\end{bmatrix}$
[in]circle: 3D circle with internal vector circle.p[] that contains the ellipse parameters expressed in the image plane. These parameters are internally updated after perspective projection of the circle.
[out]center: Center of the corresponding ellipse in the image with coordinates expressed in pixels.
[out]n20_p,n11_p,n02_p: Second order centered moments of the ellipse normalized by its area (i.e., such that $n_{ij} = \mu_{ij}/a$ where $\mu_{ij}$ are the centered moments and a the area) expressed in pixels.

The following code shows how to use this function:

vpCircle circle;
vpImagePoint center_p;
double n20_p, n11_p, n02_p;
// cMo (camera-object pose), px, py, u0, v0 (intrinsics) and the image I
// are assumed to be defined beforehand.
circle.changeFrame(cMo);
circle.projection();
cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) << px, 0, u0,
                        0, py, v0,
                        0, 0, 1);
vpMeterPixelConversion::convertEllipse(cameraMatrix, circle, center_p, n20_p, n11_p, n02_p);
vpDisplay::displayEllipse(I, center_p, n20_p, n11_p, n02_p, true, vpColor::red);

Definition at line 257 of file vpMeterPixelConversion.cpp.

References convertPoint(), vpTracker::p, and vpMath::sqr().

◆ convertEllipse() [2/6]

void vpMeterPixelConversion::convertEllipse ( const cv::Mat &  cameraMatrix,
const vpSphere sphere,
vpImagePoint center,
double &  n20_p,
double &  n11_p,
double &  n02_p 
)
static

Noting that the perspective projection of a 3D sphere is usually an ellipse, this function converts the ellipse parameters expressed in the image plane (obtained after perspective projection of the 3D sphere) into pixel values using OpenCV camera parameters.

The ellipse resulting from the conversion is represented by its parameters $u_c,v_c,n_{20}, n_{11}, n_{02}$, corresponding to its center coordinates in pixels and its second order centered moments normalized by its area.

Parameters
[in]cameraMatrix: Camera Matrix $\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1\end{bmatrix}$
[in]sphere: 3D sphere with internal vector sphere.p[] that contains the ellipse parameters expressed in the image plane. These parameters are internally updated after perspective projection of the sphere.
[out]center: Center of the corresponding ellipse in the image with coordinates expressed in pixels.
[out]n20_p,n11_p,n02_p: Second order centered moments of the ellipse normalized by its area (i.e., such that $n_{ij} = \mu_{ij}/a$ where $\mu_{ij}$ are the centered moments and a the area) expressed in pixels.

The following code shows how to use this function:

vpSphere sphere;
vpImagePoint center_p;
double n20_p, n11_p, n02_p;
// cMo (camera-object pose), px, py, u0, v0 (intrinsics) and the image I
// are assumed to be defined beforehand.
sphere.changeFrame(cMo);
sphere.projection();
cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) << px, 0, u0,
                        0, py, v0,
                        0, 0, 1);
vpMeterPixelConversion::convertEllipse(cameraMatrix, sphere, center_p, n20_p, n11_p, n02_p);
vpDisplay::displayEllipse(I, center_p, n20_p, n11_p, n02_p, true, vpColor::red);

Definition at line 312 of file vpMeterPixelConversion.cpp.

References convertPoint(), vpTracker::p, and vpMath::sqr().

◆ convertEllipse() [3/6]

void vpMeterPixelConversion::convertEllipse ( const cv::Mat &  cameraMatrix,
double  xc_m,
double  yc_m,
double  n20_m,
double  n11_m,
double  n02_m,
vpImagePoint center_p,
double &  n20_p,
double &  n11_p,
double &  n02_p 
)
static

Convert parameters of an ellipse expressed in the image plane (such parameters are typically obtained after perspective projection of a 3D circle or sphere) into pixel values using OpenCV camera parameters.

The ellipse resulting from the conversion is represented by its parameters $u_c,v_c,n_{20}, n_{11}, n_{02}$, corresponding to its center coordinates in pixels and its second order centered moments normalized by its area.

Parameters
[in]cameraMatrix: Camera Matrix $\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1\end{bmatrix}$
[in]xc_m,yc_m: Center of the ellipse in the image plane with normalized coordinates expressed in meters.
[in]n20_m,n11_m,n02_m: Second order centered moments of the ellipse normalized by its area (i.e., such that $n_{ij} = \mu_{ij}/a$ where $\mu_{ij}$ are the centered moments and a the area) expressed in meter.
[out]center_p: Center $(u_c, v_c)$ of the corresponding ellipse in the image with coordinates expressed in pixels.
[out]n20_p,n11_p,n02_p: Second order centered moments of the ellipse normalized by its area (i.e., such that $n_{ij} = \mu_{ij}/a$ where $\mu_{ij}$ are the centered moments and a the area) expressed in pixels.

Definition at line 356 of file vpMeterPixelConversion.cpp.

References convertPoint(), and vpMath::sqr().

◆ convertEllipse() [4/6]

void vpMeterPixelConversion::convertEllipse ( const vpCameraParameters cam,
const vpCircle circle,
vpImagePoint center_p,
double &  n20_p,
double &  n11_p,
double &  n02_p 
)
static

Noting that the perspective projection of a 3D circle is usually an ellipse, this function converts the ellipse parameters expressed in the image plane (obtained after perspective projection of the 3D circle) into pixel values using ViSP camera parameters.

The ellipse resulting from the conversion is represented by its parameters $u_c,v_c,n_{20}, n_{11}, n_{02}$, corresponding to its center coordinates in pixels and its second order centered moments normalized by its area.

Parameters
[in]cam: Intrinsic camera parameters.
[in]circle: 3D circle with internal vector circle.p[] that contains the ellipse parameters expressed in the image plane. These parameters are internally updated after perspective projection of the circle.
[out]center_p: Center $(u_c, v_c)$ of the corresponding ellipse in the image with coordinates expressed in pixels.
[out]n20_p,n11_p,n02_p: Second order centered moments of the ellipse normalized by its area (i.e., such that $n_{ij} = \mu_{ij}/a$ where $\mu_{ij}$ are the centered moments and a the area) expressed in pixels.

The following code shows how to use this function:

vpCircle circle;
vpImagePoint center_p;
double n20_p, n11_p, n02_p;
// cMo (camera-object pose), cam (vpCameraParameters) and the image I are
// assumed to be defined beforehand.
circle.changeFrame(cMo);
circle.projection();
vpMeterPixelConversion::convertEllipse(cam, circle, center_p, n20_p, n11_p, n02_p);
vpDisplay::displayEllipse(I, center_p, n20_p, n11_p, n02_p, true, vpColor::red);

Definition at line 97 of file vpMeterPixelConversion.cpp.

References convertPoint(), vpCameraParameters::get_px(), vpCameraParameters::get_py(), vpTracker::p, and vpMath::sqr().

◆ convertEllipse() [5/6]

void vpMeterPixelConversion::convertEllipse ( const vpCameraParameters cam,
const vpSphere sphere,
vpImagePoint center_p,
double &  n20_p,
double &  n11_p,
double &  n02_p 
)
static

Noting that the perspective projection of a 3D sphere is usually an ellipse, this function converts the ellipse parameters expressed in the image plane (obtained after perspective projection of the 3D sphere) into pixel values using ViSP camera parameters.

The ellipse resulting from the conversion is represented by its parameters $u_c,v_c,n_{20}, n_{11}, n_{02}$, corresponding to its center coordinates in pixels and its second order centered moments normalized by its area.

Parameters
[in]cam: Intrinsic camera parameters.
[in]sphere: 3D sphere with internal vector sphere.p[] that contains the ellipse parameters expressed in the image plane. These parameters are internally updated after perspective projection of the sphere.
[out]center_p: Center $(u_c, v_c)$ of the corresponding ellipse in the image with coordinates expressed in pixels.
[out]n20_p,n11_p,n02_p: Second order centered moments of the ellipse normalized by its area (i.e., such that $n_{ij} = \mu_{ij}/a$ where $\mu_{ij}$ are the centered moments and a the area) expressed in pixels.

The following code shows how to use this function:

vpSphere sphere;
vpImagePoint center_p;
double n20_p, n11_p, n02_p;
// cMo (camera-object pose), cam (vpCameraParameters) and the image I are
// assumed to be defined beforehand.
sphere.changeFrame(cMo);
sphere.projection();
vpMeterPixelConversion::convertEllipse(cam, sphere, center_p, n20_p, n11_p, n02_p);
vpDisplay::displayEllipse(I, center_p, n20_p, n11_p, n02_p, true, vpColor::red);
Examples
testCameraParametersConversion.cpp.

Definition at line 145 of file vpMeterPixelConversion.cpp.

References convertPoint(), vpCameraParameters::get_px(), vpCameraParameters::get_py(), vpTracker::p, and vpMath::sqr().

Referenced by vpFeatureDisplay::displayEllipse(), vpMbtDistanceCircle::getModelForDisplay(), vpMbtDistanceCircle::initMovingEdge(), and vpMbtDistanceCircle::updateMovingEdge().

◆ convertEllipse() [6/6]

void vpMeterPixelConversion::convertEllipse ( const vpCameraParameters cam,
double  xc_m,
double  yc_m,
double  n20_m,
double  n11_m,
double  n02_m,
vpImagePoint center_p,
double &  n20_p,
double &  n11_p,
double &  n02_p 
)
static

Convert parameters of an ellipse expressed in the image plane (such parameters are typically obtained after perspective projection of a 3D circle or sphere) into pixel values using ViSP intrinsic camera parameters.

The ellipse resulting from the conversion is represented by its parameters $u_c,v_c,n_{20}, n_{11}, n_{02}$, corresponding to its center coordinates in pixels and its second order centered moments normalized by its area.

Parameters
[in]cam: Intrinsic camera parameters.
[in]xc_m,yc_m: Center of the ellipse in the image plane with normalized coordinates expressed in meters.
[in]n20_m,n11_m,n02_m: Second order centered moments of the ellipse normalized by its area (i.e., such that $n_{ij} = \mu_{ij}/a$ where $\mu_{ij}$ are the centered moments and a the area) expressed in meter.
[out]center_p: Center $(u_c, v_c)$ of the corresponding ellipse in the image with coordinates expressed in pixels.
[out]n20_p,n11_p,n02_p: Second order centered moments of the ellipse normalized by its area (i.e., such that $n_{ij} = \mu_{ij}/a$ where $\mu_{ij}$ are the centered moments and a the area) expressed in pixels.
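Assuming the distortion-free perspective model, the center converts like a point and each second order normalized moment picks up one focal-length factor per index (an $n_{ij}$ moment has dimension length squared). A hedged self-contained sketch of this scaling; the function name is illustrative and this is not the library source:

```cpp
#include <cmath>

// Illustrative conversion of ellipse parameters from meters to pixels
// under a distortion-free perspective model. px, py, u0, v0 play the
// role of the camera intrinsics.
void ellipseMeterToPixel(double px, double py, double u0, double v0,
                         double xc_m, double yc_m,
                         double n20_m, double n11_m, double n02_m,
                         double &uc_p, double &vc_p,
                         double &n20_p, double &n11_p, double &n02_p)
{
  // The center converts like any image-plane point.
  uc_p = xc_m * px + u0;
  vc_p = yc_m * py + v0;
  // Each moment index contributes one focal-length factor.
  n20_p = n20_m * px * px;
  n11_p = n11_m * px * py;
  n02_p = n02_m * py * py;
}
```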

Definition at line 186 of file vpMeterPixelConversion.cpp.

References convertPoint(), vpCameraParameters::get_px(), vpCameraParameters::get_py(), and vpMath::sqr().

◆ convertLine() [1/2]

void vpMeterPixelConversion::convertLine ( const cv::Mat &  cameraMatrix,
const double &  rho_m,
const double &  theta_m,
double &  rho_p,
double &  theta_p 
)
static

Line parameters conversion from normalized coordinates $(\rho_m,\theta_m)$ expressed in the image plane to pixel coordinates $(\rho_p,\theta_p)$ using OpenCV camera parameters. This function doesn't use distortion coefficients.

Parameters
[in]cameraMatrix: Camera Matrix $\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1\end{bmatrix}$
[in]rho_m,theta_m: Line parameters expressed in meters in the image plane.
[out]rho_p,theta_p: Line parameters expressed in pixels.
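The conversion can be sanity-checked by substituting the pinhole relations $x = (u - c_x)/f_x$ and $y = (v - c_y)/f_y$ into the normalized line equation $x\cos\theta_m + y\sin\theta_m = \rho_m$ and renormalizing the result. A self-contained re-derivation follows; it is illustrative and not the library source:

```cpp
#include <cmath>

// Illustrative line parameter conversion from normalized coordinates
// (rho_m, theta_m) to pixel coordinates (rho_p, theta_p), derived from
// the pinhole model with intrinsics fx, fy, cx, cy.
void lineMeterToPixel(double fx, double fy, double cx, double cy,
                      double rho_m, double theta_m,
                      double &rho_p, double &theta_p)
{
  double co = std::cos(theta_m), si = std::sin(theta_m);
  // Normalization factor; zero would indicate a degenerate configuration
  // (ViSP raises vpException::divideByZeroError in that case).
  double d = std::sqrt((fy * co) * (fy * co) + (fx * si) * (fx * si));
  theta_p = std::atan2(fx * si, fy * co);
  rho_p = (fx * fy * rho_m + cx * fy * co + cy * fx * si) / d;
}
```

Any point on the meter line, once projected to pixels, satisfies $u\cos\theta_p + v\sin\theta_p = \rho_p$.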

Definition at line 208 of file vpMeterPixelConversion.cpp.

References vpException::divideByZeroError, and vpMath::sqr().

◆ convertLine() [2/2]

void vpMeterPixelConversion::convertLine ( const vpCameraParameters cam,
const double &  rho_m,
const double &  theta_m,
double &  rho_p,
double &  theta_p 
)
static

Line parameters conversion from normalized coordinates $(\rho_m,\theta_m)$ expressed in the image plane to pixel coordinates $(\rho_p,\theta_p)$ using ViSP camera parameters. This function doesn't use distortion coefficients.

Parameters
[in]cam: Camera parameters.
[in]rho_m,theta_m: Line parameters expressed in meters in the image plane.
[out]rho_p,theta_p: Line parameters expressed in pixels.
Examples
testCameraParametersConversion.cpp.

Definition at line 55 of file vpMeterPixelConversion.cpp.

References vpException::divideByZeroError, and vpMath::sqr().

Referenced by vpFeatureDisplay::displayLine(), vpMbtDistanceCylinder::getModelForDisplay(), vpMbtDistanceCylinder::initMovingEdge(), vpMbtDistanceLine::initMovingEdge(), vpMbtDistanceCylinder::updateMovingEdge(), and vpMbtDistanceLine::updateMovingEdge().

◆ convertPoint() [1/4]

void vpMeterPixelConversion::convertPoint ( const cv::Mat &  cameraMatrix,
const cv::Mat &  distCoeffs,
const double &  x,
const double &  y,
double &  u,
double &  v 
)
static

Point coordinates conversion from normalized coordinates $(x,y)$ in meters in the image plane to pixel coordinates $(u,v)$ in the image using OpenCV camera parameters.

Parameters
[in]cameraMatrix: Camera Matrix $\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1\end{bmatrix}$
[in]distCoeffs: Input vector of distortion coefficients $(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])$ of 4, 5, 8, 12 or 14 elements. If the vector is nullptr/empty, the zero distortion coefficients are assumed.
[in]x: Input coordinate in meters along the image plane x-axis.
[in]y: Input coordinate in meters along the image plane y-axis.
[out]u: Output coordinate in pixels along the image horizontal axis.
[out]v: Output coordinate in pixels along the image vertical axis.
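For intuition, with only the first four distortion coefficients $(k_1, k_2, p_1, p_2)$ the conversion amounts to distorting the normalized point and then applying the camera matrix. A hedged self-contained sketch of that reduced case; the function name is illustrative and the library call handles the full coefficient set:

```cpp
#include <cmath>

// Illustrative meter-to-pixel conversion with the 4-coefficient subset
// (k1, k2: radial; p1, p2: tangential) of OpenCV's distortion model.
void meterToPixelOpenCV4(double fx, double fy, double cx, double cy,
                         double k1, double k2, double p1, double p2,
                         double x, double y, double &u, double &v)
{
  double r2 = x * x + y * y;
  double radial = 1.0 + k1 * r2 + k2 * r2 * r2;
  // Distort the normalized coordinates.
  double xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x);
  double yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y;
  // Apply the camera matrix.
  u = fx * xd + cx;
  v = fy * yd + cy;
}
```

With all coefficients at zero this reduces to the plain pinhole mapping $u = f_x x + c_x$, $v = f_y y + c_y$.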

Definition at line 386 of file vpMeterPixelConversion.cpp.

◆ convertPoint() [2/4]

void vpMeterPixelConversion::convertPoint ( const cv::Mat &  cameraMatrix,
const cv::Mat &  distCoeffs,
const double &  x,
const double &  y,
vpImagePoint iP 
)
static

Point coordinates conversion from normalized coordinates $(x,y)$ in meters in the image plane to pixel coordinates $(u,v)$ in the image using OpenCV camera parameters.

Parameters
[in]cameraMatrix: Camera Matrix $\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1\end{bmatrix}$
[in]distCoeffs: Input vector of distortion coefficients $(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])$ of 4, 5, 8, 12 or 14 elements. If the vector is nullptr/empty, the zero distortion coefficients are assumed.
[in]x: Input coordinate in meters along the image plane x-axis.
[in]y: Input coordinate in meters along the image plane y-axis.
[out]iP: Output coordinates in pixels.

Definition at line 412 of file vpMeterPixelConversion.cpp.

References vpImagePoint::set_u(), and vpImagePoint::set_v().

◆ convertPoint() [3/4]

static void vpMeterPixelConversion::convertPoint ( const vpCameraParameters cam,
const double &  x,
const double &  y,
double &  u,
double &  v 
)
inlinestatic

Point coordinates conversion from normalized coordinates $(x,y)$ in meters in the image plane to pixel coordinates $(u,v)$ in the image using ViSP camera parameters.

The formula used depends on the projection model of the camera; the current model can be retrieved with vpCameraParameters::get_projModel().

Parameters
[in]cam: Camera parameters.
[in]x: Input coordinate in meters along the image plane x-axis.
[in]y: Input coordinate in meters along the image plane y-axis.
[out]u: Output coordinate in pixels along the image horizontal axis.
[out]v: Output coordinate in pixels along the image vertical axis.

$ u = x*p_x + u_0 $ and $ v = y*p_y + v_0 $ in the case of perspective projection without distortion.

$ u = x*p_x*(1+k_{ud}*r^2)+u_0 $ and $ v = y*p_y*(1+k_{ud}*r^2)+v_0 $ with $ r^2 = x^2+y^2 $ in the case of perspective projection with distortion.

In the case of a projection with Kannala-Brandt distortion, refer to [22].
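The two perspective formulas above are simple enough to check by hand. A minimal self-contained sketch; the function names are illustrative and not the ViSP API:

```cpp
#include <cmath>

// Perspective projection without distortion, as stated above:
// u = x*px + u0, v = y*py + v0.
void meterToPixelNoDistortion(double px, double py, double u0, double v0,
                              double x, double y, double &u, double &v)
{
  u = x * px + u0;
  v = y * py + v0;
}

// Perspective projection with distortion, as stated above:
// u = x*px*(1 + kud*r^2) + u0, v = y*py*(1 + kud*r^2) + v0,
// with r^2 = x^2 + y^2.
void meterToPixelWithDistortion(double px, double py, double u0, double v0,
                                double kud, double x, double y,
                                double &u, double &v)
{
  double r2 = x * x + y * y;
  u = x * px * (1.0 + kud * r2) + u0;
  v = y * py * (1.0 + kud * r2) + v0;
}
```

With $k_{ud} = 0$ the distorted form reduces to the undistorted one.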

Examples
grabRealSense2_T265.cpp, servoAfma6AprilTagIBVS.cpp, servoFrankaIBVS.cpp, servoUniversalRobotsIBVS.cpp, testCameraParametersConversion.cpp, testPose.cpp, testRealSense2_D435_align.cpp, testRealSense2_D435_opencv.cpp, testRealSense2_T265_images_odometry.cpp, testRealSense2_T265_images_odometry_async.cpp, testRealSense2_T265_odometry.cpp, testVirtuoseWithGlove.cpp, tutorial-homography-from-points.cpp, tutorial-ibvs-4pts-display.cpp, tutorial-ibvs-4pts-wireframe-camera.cpp, tutorial-ibvs-4pts-wireframe-robot-afma6.cpp, tutorial-ibvs-4pts-wireframe-robot-viper.cpp, tutorial-pf.cpp, tutorial-pose-from-planar-object.cpp, and tutorial-ukf.cpp.

Definition at line 105 of file vpMeterPixelConversion.h.

References vpCameraParameters::perspectiveProjWithDistortion, vpCameraParameters::perspectiveProjWithoutDistortion, and vpCameraParameters::ProjWithKannalaBrandtDistortion.

Referenced by vpPolygon::buildFrom(), vpPose::computeResidual(), vpMbtFaceDepthNormal::computeROI(), vpMbtFaceDepthDense::computeROI(), convertEllipse(), vpFeatureBuilder::create(), vpFeatureSegment::display(), vpProjectionDisplay::display(), vpProjectionDisplay::displayCamera(), vpMbtFaceDepthNormal::displayFeature(), vpPose::displayModel(), vpFeatureDisplay::displayPoint(), vpImageDraw::drawFrame(), vpOccipitalStructure::getColoredPointcloud(), vpMbtFaceDepthNormal::getFeaturesForDisplay(), vpMbtDistanceLine::getModelForDisplay(), vpPolygon3D::getNbCornerInsideImage(), vpPolygon3D::getRoi(), vpPolygon3D::getRoiClipped(), vpMbtDistanceLine::initMovingEdge(), vpMarkersMeasurements::likelihood(), vpKeyPoint::matchPointAndDetect(), vpMarkersMeasurements::measureGT(), vpWireFrameSimulator::projectCameraTrajectory(), vpMarkersMeasurements::state_to_measurement(), vpMbtDistanceLine::updateMovingEdge(), and vpKinect::warpRGBFrame().

◆ convertPoint() [4/4]

static void vpMeterPixelConversion::convertPoint ( const vpCameraParameters cam,
const double &  x,
const double &  y,
vpImagePoint iP 
)
inlinestatic

Point coordinates conversion from normalized coordinates $(x,y)$ in meters in the image plane to pixel coordinates in the image using ViSP camera parameters.

The formula used depends on the projection model of the camera; the current model can be retrieved with vpCameraParameters::get_projModel().

Parameters
[in]cam: Camera parameters.
[in]x: Input coordinate in meters along the image plane x-axis.
[in]y: Input coordinate in meters along the image plane y-axis.
[out]iP: Output coordinates in pixels.

In the frame (u,v) the result is given by:

$ u = x*p_x + u_0 $ and $ v = y*p_y + v_0 $ in the case of perspective projection without distortion.

$ u = x*p_x*(1+k_{ud}*r^2)+u_0 $ and $ v = y*p_y*(1+k_{ud}*r^2)+v_0 $ with $ r^2 = x^2+y^2 $ in the case of perspective projection with distortion.

In the case of a projection with Kannala-Brandt distortion, refer to [22].

Definition at line 149 of file vpMeterPixelConversion.h.

References vpCameraParameters::perspectiveProjWithDistortion, vpCameraParameters::perspectiveProjWithoutDistortion, and vpCameraParameters::ProjWithKannalaBrandtDistortion.