Visual Servoing Platform  version 3.2.0 under development (2018-12-13)
vpMeterPixelConversion Class Reference

#include <visp3/core/vpMeterPixelConversion.h>

Static Public Member Functions

Using ViSP camera parameters
static void convertEllipse (const vpCameraParameters &cam, const vpSphere &sphere, vpImagePoint &center, double &mu20_p, double &mu11_p, double &mu02_p)
 
static void convertEllipse (const vpCameraParameters &cam, const vpCircle &circle, vpImagePoint &center, double &mu20_p, double &mu11_p, double &mu02_p)
 
static void convertLine (const vpCameraParameters &cam, const double &rho_m, const double &theta_m, double &rho_p, double &theta_p)
 
static void convertPoint (const vpCameraParameters &cam, const double &x, const double &y, double &u, double &v)
 
static void convertPoint (const vpCameraParameters &cam, const double &x, const double &y, vpImagePoint &iP)
 
Using OpenCV camera parameters
static void convertEllipse (const cv::Mat &cameraMatrix, const vpCircle &circle, vpImagePoint &center, double &mu20_p, double &mu11_p, double &mu02_p)
 
static void convertEllipse (const cv::Mat &cameraMatrix, const vpSphere &sphere, vpImagePoint &center, double &mu20_p, double &mu11_p, double &mu02_p)
 
static void convertLine (const cv::Mat &cameraMatrix, const double &rho_m, const double &theta_m, double &rho_p, double &theta_p)
 
static void convertPoint (const cv::Mat &cameraMatrix, const cv::Mat &distCoeffs, const double &x, const double &y, double &u, double &v)
 
static void convertPoint (const cv::Mat &cameraMatrix, const cv::Mat &distCoeffs, const double &x, const double &y, vpImagePoint &iP)
 

Detailed Description

Various conversion functions to transform primitives (2D ellipse, 2D line, 2D point) from normalized coordinates expressed in meters in the image plane into pixel coordinates.

The transformation relies either on ViSP camera parameters implemented in vpCameraParameters, or on OpenCV camera parameters set from a camera matrix and a distortion coefficients vector.

Definition at line 72 of file vpMeterPixelConversion.h.

Member Function Documentation

void vpMeterPixelConversion::convertEllipse (const vpCameraParameters &cam, const vpSphere &sphere, vpImagePoint &center, double &mu20_p, double &mu11_p, double &mu02_p) [static]

Noting that the perspective projection of a 3D sphere is usually an ellipse, this function converts the ellipse parameters expressed in the image plane (obtained after perspective projection of the sphere) into their pixel counterparts, using ViSP camera parameters.

The ellipse resulting from the perspective projection is represented here by its parameters $x_c, y_c, \mu_{20}, \mu_{11}, \mu_{02}$, i.e. its center coordinates in pixels and its centered moments.

Parameters
[in]cam: Intrinsic camera parameters.
[in]sphere: 3D sphere with internal vector sphere.p[] that contains the ellipse parameters expressed in the image plane. These parameters are internally updated after perspective projection of the sphere.
[out]center: Center of the corresponding ellipse in the image with coordinates expressed in pixels.
[out]mu20_p,mu11_p,mu02_p: Centered moments expressed in pixels.

The following code shows how to use this function:

vpSphere sphere;
vpImagePoint center_p;
double mu20_p, mu11_p, mu02_p;
// cMo (pose), cam (intrinsics) and I (image) are assumed defined elsewhere
sphere.changeFrame(cMo);
sphere.projection(); // updates sphere.p[] with the image-plane ellipse parameters
vpMeterPixelConversion::convertEllipse(cam, sphere, center_p, mu20_p, mu11_p, mu02_p);
vpDisplay::displayEllipse(I, center_p, mu20_p, mu11_p, mu02_p);

Definition at line 144 of file vpMeterPixelConversion.cpp.

References convertPoint(), vpCameraParameters::get_px(), vpCameraParameters::get_py(), vpTracker::p, and vpMath::sqr().

Referenced by vpMbtDistanceCircle::display(), vpFeatureDisplay::displayEllipse(), vpMbtDistanceCircle::initMovingEdge(), and vpMbtDistanceCircle::updateMovingEdge().

void vpMeterPixelConversion::convertEllipse (const vpCameraParameters &cam, const vpCircle &circle, vpImagePoint &center, double &mu20_p, double &mu11_p, double &mu02_p) [static]

Noting that the perspective projection of a 3D circle is usually an ellipse, this function converts the ellipse parameters expressed in the image plane (obtained after perspective projection of the circle) into their pixel counterparts, using ViSP camera parameters.

The ellipse resulting from the perspective projection is represented here by its parameters $x_c, y_c, \mu_{20}, \mu_{11}, \mu_{02}$, i.e. its center coordinates in pixels and its centered moments.

Parameters
[in]cam: Intrinsic camera parameters.
[in]circle: 3D circle with internal vector circle.p[] that contains the ellipse parameters expressed in the image plane. These parameters are internally updated after perspective projection of the circle.
[out]center: Center of the corresponding ellipse in the image with coordinates expressed in pixels.
[out]mu20_p,mu11_p,mu02_p: Centered moments expressed in pixels.

The following code shows how to use this function:

vpCircle circle;
vpImagePoint center_p;
double mu20_p, mu11_p, mu02_p;
// cMo (pose), cam (intrinsics) and I (image) are assumed defined elsewhere
circle.changeFrame(cMo);
circle.projection(); // updates circle.p[] with the image-plane ellipse parameters
vpMeterPixelConversion::convertEllipse(cam, circle, center_p, mu20_p, mu11_p, mu02_p);
vpDisplay::displayEllipse(I, center_p, mu20_p, mu11_p, mu02_p);

Definition at line 102 of file vpMeterPixelConversion.cpp.

References convertPoint(), vpCameraParameters::get_px(), vpCameraParameters::get_py(), vpTracker::p, and vpMath::sqr().

void vpMeterPixelConversion::convertEllipse (const cv::Mat &cameraMatrix, const vpCircle &circle, vpImagePoint &center, double &mu20_p, double &mu11_p, double &mu02_p) [static]

Noting that the perspective projection of a 3D circle is usually an ellipse, this function converts the ellipse parameters expressed in the image plane (obtained after perspective projection of the circle) into their pixel counterparts, using OpenCV camera parameters.

The ellipse resulting from the perspective projection is represented here by its parameters $x_c, y_c, \mu_{20}, \mu_{11}, \mu_{02}$, i.e. its center coordinates in pixels and its centered moments.

Parameters
[in]cameraMatrix: Camera Matrix $\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1\end{bmatrix}$
[in]circle: 3D circle with internal vector circle.p[] that contains the ellipse parameters expressed in the image plane. These parameters are internally updated after perspective projection of the circle.
[out]center: Center of the corresponding ellipse in the image with coordinates expressed in pixels.
[out]mu20_p,mu11_p,mu02_p: Centered moments expressed in pixels.

The following code shows how to use this function:

vpCircle circle;
vpImagePoint center_p;
double mu20_p, mu11_p, mu02_p;
// cMo (pose) and I (image) are assumed defined elsewhere
circle.changeFrame(cMo);
circle.projection(); // updates circle.p[] with the image-plane ellipse parameters
cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) << px, 0, u0,
                                                  0, py, v0,
                                                  0, 0, 1);
vpMeterPixelConversion::convertEllipse(cameraMatrix, circle, center_p, mu20_p, mu11_p, mu02_p);
vpDisplay::displayEllipse(I, center_p, mu20_p, mu11_p, mu02_p);

Definition at line 220 of file vpMeterPixelConversion.cpp.

References convertPoint(), vpTracker::p, and vpMath::sqr().

void vpMeterPixelConversion::convertEllipse (const cv::Mat &cameraMatrix, const vpSphere &sphere, vpImagePoint &center, double &mu20_p, double &mu11_p, double &mu02_p) [static]

Noting that the perspective projection of a 3D sphere is usually an ellipse, this function converts the ellipse parameters expressed in the image plane (obtained after perspective projection of the sphere) into their pixel counterparts, using OpenCV camera parameters.

The ellipse resulting from the perspective projection is represented here by its parameters $x_c, y_c, \mu_{20}, \mu_{11}, \mu_{02}$, i.e. its center coordinates in pixels and its centered moments.

Parameters
[in]cameraMatrix: Camera Matrix $\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1\end{bmatrix}$
[in]sphere: 3D sphere with internal vector sphere.p[] that contains the ellipse parameters expressed in the image plane. These parameters are internally updated after perspective projection of the sphere.
[out]center: Center of the corresponding ellipse in the image with coordinates expressed in pixels.
[out]mu20_p,mu11_p,mu02_p: Centered moments expressed in pixels.

The following code shows how to use this function:

vpSphere sphere;
vpImagePoint center_p;
double mu20_p, mu11_p, mu02_p;
// cMo (pose) and I (image) are assumed defined elsewhere
sphere.changeFrame(cMo);
sphere.projection(); // updates sphere.p[] with the image-plane ellipse parameters
cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) << px, 0, u0,
                                                  0, py, v0,
                                                  0, 0, 1);
vpMeterPixelConversion::convertEllipse(cameraMatrix, sphere, center_p, mu20_p, mu11_p, mu02_p);
vpDisplay::displayEllipse(I, center_p, mu20_p, mu11_p, mu02_p);

Definition at line 268 of file vpMeterPixelConversion.cpp.

References convertPoint(), vpTracker::p, and vpMath::sqr().

void vpMeterPixelConversion::convertLine (const vpCameraParameters &cam, const double &rho_m, const double &theta_m, double &rho_p, double &theta_p) [static]

Line parameters conversion from normalized coordinates $(\rho_m,\theta_m)$ expressed in meters in the image plane to pixel coordinates $(\rho_p,\theta_p)$ using ViSP camera parameters. This function does not use the distortion coefficients.

Parameters
[in]cam: Intrinsic camera parameters.
[in]rho_m,theta_m: Line parameters expressed in meters in the image plane.
[out]rho_p,theta_p: Line parameters expressed in pixels.

Definition at line 60 of file vpMeterPixelConversion.cpp.

References vpException::divideByZeroError, vpMath::sqr(), and vpERROR_TRACE.

Referenced by vpMbtDistanceKltCylinder::display(), vpMbtDistanceCylinder::display(), vpFeatureDisplay::displayLine(), vpMbtDistanceLine::initMovingEdge(), vpMbtDistanceCylinder::initMovingEdge(), vpMbtDistanceLine::updateMovingEdge(), and vpMbtDistanceCylinder::updateMovingEdge().

void vpMeterPixelConversion::convertLine (const cv::Mat &cameraMatrix, const double &rho_m, const double &theta_m, double &rho_p, double &theta_p) [static]

Line parameters conversion from normalized coordinates $(\rho_m,\theta_m)$ expressed in meters in the image plane to pixel coordinates $(\rho_p,\theta_p)$ using OpenCV camera parameters. This function does not use the distortion coefficients.

Parameters
[in]cameraMatrix: Camera Matrix $\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1\end{bmatrix}$
[in]rho_m,theta_m: Line parameters expressed in meters in the image plane.
[out]rho_p,theta_p: Line parameters expressed in pixels.

Definition at line 171 of file vpMeterPixelConversion.cpp.

References vpException::divideByZeroError, vpMath::sqr(), and vpERROR_TRACE.

static void vpMeterPixelConversion::convertPoint (const vpCameraParameters &cam, const double &x, const double &y, double &u, double &v) [inline]

Point coordinates conversion from normalized coordinates $(x,y)$ in meters in the image plane to pixel coordinates $(u,v)$ in the image using ViSP camera parameters.

The formula used depends on the projection model of the camera. To know the current projection model, use vpCameraParameters::get_projModel().

Parameters
[in]cam: Intrinsic camera parameters.
[in]x: Input coordinate in meters along the image-plane x-axis.
[in]y: Input coordinate in meters along the image-plane y-axis.
[out]u: Output coordinate in pixels along the image horizontal axis.
[out]v: Output coordinate in pixels along the image vertical axis.

$ u = x*p_x + u_0 $ and $ v = y*p_y + v_0 $ in the case of perspective projection without distortion.

$ u = x*p_x*(1+k_{ud}*r^2)+u_0 $ and $ v = y*p_y*(1+k_{ud}*r^2)+v_0 $ with $ r^2 = x^2+y^2 $ in the case of perspective projection with distortion.

Examples:
testVirtuoseWithGlove.cpp, tutorial-homography-from-points.cpp, tutorial-ibvs-4pts-display.cpp, tutorial-ibvs-4pts-wireframe-camera.cpp, tutorial-ibvs-4pts-wireframe-robot-afma6.cpp, and tutorial-ibvs-4pts-wireframe-robot-viper.cpp.

Definition at line 107 of file vpMeterPixelConversion.h.

References vpCameraParameters::perspectiveProjWithDistortion, and vpCameraParameters::perspectiveProjWithoutDistortion.

Referenced by vpPolygon::buildFrom(), vpKeyPoint::computePose(), vpMbtFaceDepthDense::computeROI(), vpMbtFaceDepthNormal::computeROI(), convertEllipse(), vpFeatureBuilder::create(), vpFeatureSegment::display(), vpProjectionDisplay::display(), vpMbtDistanceLine::display(), vpMbtDistanceKltPoints::display(), vpProjectionDisplay::displayCamera(), vpMbtFaceDepthNormal::displayFeature(), vpPose::displayModel(), vpFeatureDisplay::displayPoint(), vpPolygon3D::getNbCornerInsideImage(), vpPolygon3D::getRoi(), vpPolygon3D::getRoiClipped(), vpImageSimulator::init(), vpMbtDistanceLine::initMovingEdge(), vpKeyPoint::matchPointAndDetect(), vpWireFrameSimulator::projectCameraTrajectory(), vpSimulatorAfma6::updateArticularPosition(), vpSimulatorViper850::updateArticularPosition(), vpMbtDistanceLine::updateMovingEdge(), and vpKinect::warpRGBFrame().

static void vpMeterPixelConversion::convertPoint (const vpCameraParameters &cam, const double &x, const double &y, vpImagePoint &iP) [inline]

Point coordinates conversion from normalized coordinates $(x,y)$ in meters in the image plane to pixel coordinates in the image using ViSP camera parameters.

The formula used depends on the projection model of the camera. To know the current projection model, use vpCameraParameters::get_projModel().

Parameters
[in]cam: Intrinsic camera parameters.
[in]x: Input coordinate in meters along the image-plane x-axis.
[in]y: Input coordinate in meters along the image-plane y-axis.
[out]iP: Output coordinates in pixels.

In the frame (u,v) the result is given by:

$ u = x*p_x + u_0 $ and $ v = y*p_y + v_0 $ in the case of perspective projection without distortion.

$ u = x*p_x*(1+k_{ud}*r^2)+u_0 $ and $ v = y*p_y*(1+k_{ud}*r^2)+v_0 $ with $ r^2 = x^2+y^2 $ in the case of perspective projection with distortion.

Definition at line 144 of file vpMeterPixelConversion.h.

References vpCameraParameters::perspectiveProjWithDistortion, vpCameraParameters::perspectiveProjWithoutDistortion, vpImagePoint::set_u(), and vpImagePoint::set_v().

void vpMeterPixelConversion::convertPoint (const cv::Mat &cameraMatrix, const cv::Mat &distCoeffs, const double &x, const double &y, double &u, double &v) [static]

Point coordinates conversion from normalized coordinates $(x,y)$ in meters in the image plane to pixel coordinates $(u,v)$ in the image using OpenCV camera parameters.

Parameters
[in]cameraMatrix: Camera Matrix $\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1\end{bmatrix}$
[in]distCoeffs: Input vector of distortion coefficients $(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])$ of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, zero distortion coefficients are assumed.
[in]x: Input coordinate in meters along the image-plane x-axis.
[in]y: Input coordinate in meters along the image-plane y-axis.
[out]u: Output coordinate in pixels along the image horizontal axis.
[out]v: Output coordinate in pixels along the image vertical axis.

Definition at line 304 of file vpMeterPixelConversion.cpp.

void vpMeterPixelConversion::convertPoint (const cv::Mat &cameraMatrix, const cv::Mat &distCoeffs, const double &x, const double &y, vpImagePoint &iP) [static]

Point coordinates conversion from normalized coordinates $(x,y)$ in meters in the image plane to pixel coordinates $(u,v)$ in the image using OpenCV camera parameters.

Parameters
[in]cameraMatrix: Camera Matrix $\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1\end{bmatrix}$
[in]distCoeffs: Input vector of distortion coefficients $(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])$ of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, zero distortion coefficients are assumed.
[in]x: Input coordinate in meters along the image-plane x-axis.
[in]y: Input coordinate in meters along the image-plane y-axis.
[out]iP: Output coordinates in pixels.

Definition at line 329 of file vpMeterPixelConversion.cpp.

References vpImagePoint::set_u(), and vpImagePoint::set_v().