Visual Servoing Platform  version 3.2.0 under development (2018-10-15)
vpMeterPixelConversion Class Reference

#include <visp3/core/vpMeterPixelConversion.h>

Static Public Member Functions

static void convertEllipse (const vpCameraParameters &cam, const vpSphere &sphere, vpImagePoint &center, double &mu20_p, double &mu11_p, double &mu02_p)
 
static void convertEllipse (const vpCameraParameters &cam, const vpCircle &circle, vpImagePoint &center, double &mu20_p, double &mu11_p, double &mu02_p)
 
static void convertLine (const vpCameraParameters &cam, const double &rho_m, const double &theta_m, double &rho_p, double &theta_p)
 
static void convertPoint (const vpCameraParameters &cam, const double &x, const double &y, double &u, double &v)
 
static void convertPoint (const vpCameraParameters &cam, const double &x, const double &y, vpImagePoint &iP)
 

Detailed Description

Conversion from normalized coordinates $(x,y)$ in meters to pixel coordinates $(u,v)$.

This class relates to vpCameraParameters.

Definition at line 67 of file vpMeterPixelConversion.h.

Member Function Documentation

void vpMeterPixelConversion::convertEllipse ( const vpCameraParameters &  cam,
const vpSphere &  sphere,
vpImagePoint &  center,
double &  mu20_p,
double &  mu11_p,
double &  mu02_p 
)
static

Since the perspective projection of a 3D sphere is in general an ellipse, this function uses the camera intrinsic parameters to convert the ellipse parameters expressed in the image plane (obtained after perspective projection of the 3D sphere) into values expressed in pixels in the image.

The ellipse resulting from the perspective projection is here represented by its parameters $x_c,y_c,\mu_{20}, \mu_{11}, \mu_{02}$ corresponding to its center coordinates expressed in pixels and its centered moments.

Parameters
cam[in]: Intrinsic camera parameters.
sphere[in]: 3D sphere with internal vector sphere.p[] that contains the ellipse parameters expressed in the image plane. These parameters are internally updated after perspective projection of the sphere.
center[out]: Center of the corresponding ellipse in the image with coordinates expressed in pixels.
mu20_p,mu11_p,mu02_p[out]: Centered moments expressed in pixels.

The following code shows how to use this function:

vpCameraParameters cam;   // camera intrinsics, assumed to be initialized
vpHomogeneousMatrix cMo;  // camera-to-object pose, assumed to be initialized
vpImage<unsigned char> I; // image used for display
vpImagePoint center_p;
vpSphere sphere;
double mu20_p, mu11_p, mu02_p;
sphere.changeFrame(cMo);  // express the sphere in the camera frame
sphere.projection();      // project it in the image plane
vpMeterPixelConversion::convertEllipse(cam, sphere, center_p, mu20_p, mu11_p, mu02_p);
vpDisplay::displayEllipse(I, center_p, mu20_p, mu11_p, mu02_p);

Definition at line 134 of file vpMeterPixelConversion.cpp.

References convertPoint(), vpCameraParameters::get_px(), vpCameraParameters::get_py(), vpTracker::p, and vpMath::sqr().

Referenced by vpMbtDistanceCircle::display(), vpFeatureDisplay::displayEllipse(), vpMbtDistanceCircle::initMovingEdge(), and vpMbtDistanceCircle::updateMovingEdge().

void vpMeterPixelConversion::convertEllipse ( const vpCameraParameters &  cam,
const vpCircle &  circle,
vpImagePoint &  center,
double &  mu20_p,
double &  mu11_p,
double &  mu02_p 
)
static

Since the perspective projection of a 3D circle is in general an ellipse, this function uses the camera intrinsic parameters to convert the ellipse parameters expressed in the image plane (obtained after perspective projection of the 3D circle) into values expressed in pixels in the image.

The ellipse resulting from the perspective projection is here represented by its parameters $x_c, y_c, \mu_{20}, \mu_{11}, \mu_{02}$ corresponding to its center coordinates expressed in pixels and its centered moments.

Parameters
cam[in]: Intrinsic camera parameters.
circle[in]: 3D circle with internal vector circle.p[] that contains the ellipse parameters expressed in the image plane. These parameters are internally updated after perspective projection of the circle.
center[out]: Center of the corresponding ellipse in the image with coordinates expressed in pixels.
mu20_p,mu11_p,mu02_p[out]: Centered moments expressed in pixels.

The following code shows how to use this function:

vpCameraParameters cam;   // camera intrinsics, assumed to be initialized
vpHomogeneousMatrix cMo;  // camera-to-object pose, assumed to be initialized
vpImage<unsigned char> I; // image used for display
vpImagePoint center_p;
vpCircle circle;
double mu20_p, mu11_p, mu02_p;
circle.changeFrame(cMo);  // express the circle in the camera frame
circle.projection();      // project it in the image plane
vpMeterPixelConversion::convertEllipse(cam, circle, center_p, mu20_p, mu11_p, mu02_p);
vpDisplay::displayEllipse(I, center_p, mu20_p, mu11_p, mu02_p);

Definition at line 93 of file vpMeterPixelConversion.cpp.

References convertPoint(), vpCameraParameters::get_px(), vpCameraParameters::get_py(), vpTracker::p, and vpMath::sqr().

void vpMeterPixelConversion::convertLine ( const vpCameraParameters &  cam,
const double &  rho_m,
const double &  theta_m,
double &  rho_p,
double &  theta_p 
)
static

Line parameters conversion from normalized coordinates $(\rho_m, \theta_m)$ expressed in the image plane to the corresponding line parameters $(\rho_p, \theta_p)$ expressed in pixels in the image.

Parameters
cam[in]: Intrinsic camera parameters.
rho_m,theta_m[in]: Line parameters expressed in meters in the image plane.
rho_p,theta_p[out]: Line parameters expressed in pixels.
static void vpMeterPixelConversion::convertPoint ( const vpCameraParameters &  cam,
const double &  x,
const double &  y,
double &  u,
double &  v 
)
inlinestatic

Point coordinates conversion from normalized coordinates $(x,y)$ in meters in the image plane to pixel coordinates $(u,v)$ in the image.

The formula that is used depends on the projection model of the camera. To know the currently used projection model, use vpCameraParameters::get_projModel().

Parameters
cam: camera parameters.
x: input coordinate in meter along image plane x-axis.
y: input coordinate in meter along image plane y-axis.
u: output coordinate in pixels along image horizontal axis.
v: output coordinate in pixels along image vertical axis.

$ u = x*p_x + u_0 $ and $ v = y*p_y + v_0 $ in the case of perspective projection without distortion.

$ u = x*p_x*(1+k_{ud}*r^2)+u_0 $ and $ v = y*p_y*(1+k_{ud}*r^2)+v_0 $ with $ r^2 = x^2+y^2 $ in the case of perspective projection with distortion.

Examples:
testVirtuoseWithGlove.cpp, tutorial-homography-from-points.cpp, tutorial-ibvs-4pts-display.cpp, tutorial-ibvs-4pts-wireframe-camera.cpp, tutorial-ibvs-4pts-wireframe-robot-afma6.cpp, and tutorial-ibvs-4pts-wireframe-robot-viper.cpp.

Definition at line 100 of file vpMeterPixelConversion.h.

References vpCameraParameters::perspectiveProjWithDistortion, and vpCameraParameters::perspectiveProjWithoutDistortion.

Referenced by vpPolygon::buildFrom(), vpKeyPoint::computePose(), vpMbtFaceDepthDense::computeROI(), vpMbtFaceDepthNormal::computeROI(), convertEllipse(), vpFeatureBuilder::create(), vpFeatureSegment::display(), vpProjectionDisplay::display(), vpMbtDistanceLine::display(), vpMbtDistanceKltPoints::display(), vpProjectionDisplay::displayCamera(), vpMbtFaceDepthNormal::displayFeature(), vpPose::displayModel(), vpFeatureDisplay::displayPoint(), vpPolygon3D::getNbCornerInsideImage(), vpPolygon3D::getRoi(), vpPolygon3D::getRoiClipped(), vpImageSimulator::init(), vpMbtDistanceLine::initMovingEdge(), vpKeyPoint::matchPointAndDetect(), vpWireFrameSimulator::projectCameraTrajectory(), vpSimulatorAfma6::updateArticularPosition(), vpSimulatorViper850::updateArticularPosition(), vpMbtDistanceLine::updateMovingEdge(), and vpKinect::warpRGBFrame().

static void vpMeterPixelConversion::convertPoint ( const vpCameraParameters &  cam,
const double &  x,
const double &  y,
vpImagePoint &  iP 
)
inlinestatic

Point coordinates conversion from normalized coordinates $(x,y)$ in meters in the image plane to pixel coordinates in the image.

The formula that is used depends on the projection model of the camera. To know the currently used projection model, use vpCameraParameters::get_projModel().

Parameters
cam: camera parameters.
x: input coordinate in meter along image plane x-axis.
y: input coordinate in meter along image plane y-axis.
iP: output coordinates in pixels.

In the frame (u,v) the result is given by:

$ u = x*p_x + u_0 $ and $ v = y*p_y + v_0 $ in the case of perspective projection without distortion.

$ u = x*p_x*(1+k_{ud}*r^2)+u_0 $ and $ v = y*p_y*(1+k_{ud}*r^2)+v_0 $ with $ r^2 = x^2+y^2 $ in the case of perspective projection with distortion.

Definition at line 136 of file vpMeterPixelConversion.h.

References vpCameraParameters::perspectiveProjWithDistortion, vpCameraParameters::perspectiveProjWithoutDistortion, vpImagePoint::set_u(), and vpImagePoint::set_v().