Visual Servoing Platform  version 3.6.1 under development (2024-12-12)
vpCameraParameters Class Reference

#include <visp3/core/vpCameraParameters.h>

Public Types

enum  vpCameraParametersProjType { perspectiveProjWithoutDistortion , perspectiveProjWithDistortion , ProjWithKannalaBrandtDistortion }
 

Public Member Functions

 vpCameraParameters ()
 
 vpCameraParameters (const vpCameraParameters &c)
 
 vpCameraParameters (double px, double py, double u0, double v0)
 
 vpCameraParameters (double px, double py, double u0, double v0, double kud, double kdu)
 
 vpCameraParameters (double px, double py, double u0, double v0, const std::vector< double > &distortion_coefficients)
 
vpCameraParameters & operator= (const vpCameraParameters &c)
 
bool operator== (const vpCameraParameters &c) const
 
bool operator!= (const vpCameraParameters &c) const
 
virtual ~vpCameraParameters ()
 
void init ()
 
void init (const vpCameraParameters &c)
 
void initFromCalibrationMatrix (const vpMatrix &_K)
 
void initFromFov (const unsigned int &w, const unsigned int &h, const double &hfov, const double &vfov)
 
void initPersProjWithoutDistortion (double px, double py, double u0, double v0)
 
void initPersProjWithDistortion (double px, double py, double u0, double v0, double kud, double kdu)
 
void initProjWithKannalaBrandtDistortion (double px, double py, double u0, double v0, const std::vector< double > &distortion_coefficients)
 
bool isFovComputed () const
 
void computeFov (const unsigned int &w, const unsigned int &h)
 
double getHorizontalFovAngle () const
 
double getVerticalFovAngle () const
 
std::vector< vpColVector > getFovNormals () const
 
double get_px () const
 
double get_px_inverse () const
 
double get_py_inverse () const
 
double get_py () const
 
double get_u0 () const
 
double get_v0 () const
 
double get_kud () const
 
double get_kdu () const
 
std::vector< double > getKannalaBrandtDistortionCoefficients () const
 
vpCameraParametersProjType get_projModel () const
 
vpMatrix get_K () const
 
vpMatrix get_K_inverse () const
 
void printParameters ()
 

Friends

class vpMeterPixelConversion
 
class vpPixelMeterConversion
 
VISP_EXPORT std::ostream & operator<< (std::ostream &os, const vpCameraParameters &cam)
 
void to_json (nlohmann::json &j, const vpCameraParameters &cam)
 
void from_json (const nlohmann::json &j, vpCameraParameters &cam)
 

Detailed Description

Generic class defining intrinsic camera parameters.

1. Supported camera models

Two camera models are implemented in ViSP.

1.1. Pinhole camera model

In this model [34], a scene view is formed by projecting 3D points into the image plane using a perspective transformation.

\[ \left[ \begin{array}{c} u \\ v \\ 1 \end{array}\right] = \left[ \begin{array}{ccc} p_x & 0 & u_0 \\ 0 & p_y & v_0 \\ 0 & 0 & 1 \end{array}\right] \left[ \begin{array}{c} x \\ y \\ 1 \end{array}\right] \]

where:

  • $(X_c,Y_c,Z_c)$ are the coordinates of a 3D point in the camera frame
  • $(x,y)$ are the coordinates of the projection of the 3D point in the image plane
  • $(u,v)$ are the coordinates in pixels of the projected 3D point
  • $(u_0,v_0)$ are the coordinates of the principal point (the intersection of the optical axis with the image plane), which is usually near the image center
  • $p_x$ (resp. $p_y$) is the ratio between the focal length of the lens $f$ in meters and the width of a pixel $l_x$ in meters: $p_x=f/l_x$ (resp. $p_y=f/l_y$, with $l_y$ the height of a pixel).

When $Z_c \neq 0$, the previous equation is equivalent to the following:

\[ \begin{array}{lcl} x &=& X_c / Z_c \\ y &=& Y_c / Z_c \\ u &=& u_0 + x \; p_x \\ v &=& v_0 + y \; p_y \end{array} \]
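For illustration, these equations transcribe directly into code; the intrinsic values and the 3D point below are arbitrary:

  double px = 600., py = 600., u0 = 320., v0 = 240.; // illustrative intrinsics
  double Xc = 0.1, Yc = -0.05, Zc = 1.5;             // a 3D point in the camera frame (meters)
  double x = Xc / Zc;                                // normalized coordinates
  double y = Yc / Zc;
  double u = u0 + x * px;                            // pixel coordinates
  double v = v0 + y * py;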

Real lenses usually have some radial distortion. So, the above model is extended as:

\[ \begin{array}{lcl} x &=& X_c / Z_c \\ y &=& Y_c / Z_c \\ x^{'} &=& x (1 + k_{ud} r^2) \\ y^{'} &=& y (1 + k_{ud} r^2) \\ r^2 &=& x^2 + y^2 \\ u &=& u_0 + x^{'} \; p_x \\ v &=& v_0 + y^{'} \; p_y \end{array} \]

where $k_{ud}$ is the first order radial distortion. Higher order distortion coefficients are not considered in ViSP.

In ViSP we also consider the inverse transformation, where normalized image-plane coordinates are computed from pixel coordinates. The previous equations can then be written as:

\[ \begin{array}{lcl} x &=& (u - u_0) / p_x \\ y &=& (v - v_0) / p_y \end{array} \]

Considering radial distortion, the above model is extended as:

\[ \begin{array}{lcl} (u-u_0)^{'} &=& (u-u_0) (1 + k_{du} r^2) \\ (v-v_0)^{'} &=& (v-v_0) (1 + k_{du} r^2) \\ r^2 &=& ((u-u_0)/p_x)^2 + ((v-v_0)/p_y)^2 \\ x &=& (u - u_0)^{'} / p_x \\ y &=& (v - v_0)^{'} / p_y \end{array} \]
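Transcribed directly, this inverse mapping reads as follows (illustrative values; in practice this conversion is provided by vpPixelMeterConversion):

  double px = 600., py = 600., u0 = 320., v0 = 240., kdu = 0.20; // illustrative parameters
  double u = 400., v = 300.;      // an arbitrary pixel position
  double du = (u - u0) / px;      // (u - u0) / px
  double dv = (v - v0) / py;      // (v - v0) / py
  double r2 = du * du + dv * dv;  // squared radius
  double x = du * (1 + kdu * r2); // normalized, undistorted coordinates
  double y = dv * (1 + kdu * r2);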

Finally, in ViSP the main intrinsic camera parameters are $(p_x, p_y)$, the ratio between the focal length and the pixel size, and $(u_0, v_0)$, the coordinates of the principal point in pixels. Lens distortion can also be considered through two additional parameters $(k_{ud}, k_{du})$.

Note
The Tutorial: Camera intrinsic calibration shows how to calibrate a camera to estimate the parameters corresponding to the model implemented in this class.
Note also that Tutorial: Bridge over OpenCV gives the correspondence between the ViSP and OpenCV camera models.
The conversion from pixel coordinates $(u,v)$ to the normalized space $(x,y)$ is implemented in vpPixelMeterConversion, whereas the conversion from normalized coordinates into pixels is implemented in vpMeterPixelConversion.
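As an illustration, the sketch below converts a pixel position into normalized coordinates and back using these two classes; the intrinsic values and the pixel position are arbitrary:

#include <iostream>

#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpMeterPixelConversion.h>
#include <visp3/core/vpPixelMeterConversion.h>

int main()
{
  vpCameraParameters cam(600, 600, 320, 240); // illustrative intrinsics
  double u = 400., v = 300.;                  // an arbitrary pixel position
  double x = 0., y = 0.;
  // Pixel coordinates (u, v) to normalized coordinates (x, y)
  vpPixelMeterConversion::convertPoint(cam, u, v, x, y);
  // Normalized coordinates (x, y) back to pixel coordinates (u, v)
  vpMeterPixelConversion::convertPoint(cam, x, y, u, v);
  std::cout << "x = " << x << " y = " << y << std::endl;
}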

From a practical point of view, two kinds of camera models are implemented in this class:

1.1.1. Camera parameters for a perspective projection without distortion model

In this model, only the $u_0,v_0,p_x,p_y$ parameters are considered.

Initialization of such a model can be done using initPersProjWithoutDistortion() or the vpCameraParameters(double px, double py, double u0, double v0) constructor.

1.1.2. Camera parameters for a perspective projection with distortion model

In this model, all the parameters $u_0,v_0,p_x,p_y,k_{ud},k_{du}$ are considered. Initialization of such a model can be done using initPersProjWithDistortion() or the vpCameraParameters(double px, double py, double u0, double v0, double kud, double kdu) constructor.

The selection of the camera model (with or without distortion) is done during vpCameraParameters initialisation.

Here is an example of camera initialisation for a model without distortion. A complete example is given in initPersProjWithoutDistortion().

double px = 600; double py = 600; double u0 = 320; double v0 = 240;
// Create a camera parameter container
vpCameraParameters cam;
// Camera initialization with a perspective projection without distortion model
cam.initPersProjWithoutDistortion(px, py, u0, v0);
// It is also possible to print the current camera parameters
std::cout << cam << std::endl;

Here is an example of camera initialisation for a model with distortion. A complete example is given in initPersProjWithDistortion().

double px = 600; double py = 600;
double u0 = 320; double v0 = 240;
double kud = -0.19; double kdu = 0.20;
// Create a camera parameter container
vpCameraParameters cam;
// Camera initialization with a perspective projection with distortion model
cam.initPersProjWithDistortion(px, py, u0, v0, kud, kdu);

The code below shows how to know the currently used projection model:

...
vpCameraParameters::vpCameraParametersProjType projModel;
projModel = cam.get_projModel(); // Get the projection model type

An XML parser for camera parameters is also provided in vpXmlParserCamera.
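As a sketch, camera parameters stored in an XML file could be loaded as follows; the file name "camera.xml" and the camera name "Camera" are hypothetical, and the parse() overload shown (camera, file name, camera name, projection model, image size) is an assumption to adapt to your ViSP version:

#include <iostream>

#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpXmlParserCamera.h>

int main()
{
  vpCameraParameters cam;
  vpXmlParserCamera parser;
  // "camera.xml" and "Camera" are hypothetical names used for illustration only
  parser.parse(cam, "camera.xml", "Camera", vpCameraParameters::perspectiveProjWithDistortion, 640, 480);
  std::cout << cam << std::endl;
}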

1.2. Kannala-Brandt camera model

This model [22] deals with fish-eye lenses, which are designed to cover the whole hemispherical field in front of the camera, so the angle of view is very large. In this case, the inherent distortion of a fish-eye lens should not be considered only as a deviation from the pinhole model.

The following projection model, in its general form, is adopted:

\[ \begin{array}{lcl} r(\theta) &=& k_1 \theta + k_2 \theta^3 + k_3 \theta^5 + k_4 \theta^7 + k_5 \theta^9 \end{array} \]

where:

  • $\theta$ is the angle in radians between the ray from a point in the real world and the optical axis.
  • $r$ is the distance between the image point and the principal point.

In ViSP, we only consider radially symmetric distortions (caused by fisheye lenses).
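For illustration, the general model above can be transcribed as follows. This is a sketch of the equations, not a ViSP API call; the intrinsics and the five coefficients are arbitrary, and the pixel mapping uses the standard radially symmetric convention:

// Requires <cmath>. Illustrative intrinsics and Kannala-Brandt coefficients k1..k5
double px = 300., py = 300., u0 = 320., v0 = 240.;
double k[5] = { 1.0, -0.01, 0.02, -0.005, 0.001 };
double Xc = 0.2, Yc = 0.1, Zc = 0.5;                         // a 3D point in the camera frame
double theta = std::atan2(std::sqrt(Xc * Xc + Yc * Yc), Zc); // angle to the optical axis
double t2 = theta * theta;
double r = theta * (k[0] + t2 * (k[1] + t2 * (k[2] + t2 * (k[3] + t2 * k[4])))); // r(theta)
double phi = std::atan2(Yc, Xc);                             // direction in the image plane
double u = u0 + px * r * std::cos(phi);                      // pixel coordinates
double v = v0 + py * r * std::sin(phi);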

2. JSON serialization

Since ViSP 3.6.0, when ViSP is built with the JSON for Modern C++ third-party library, JSON serialization capabilities are available for vpCameraParameters. The following sample code shows how to save camera parameters in a file named cam.json and reload the parameters from this JSON file.

#include <fstream>
#include <iostream>

#include <visp3/core/vpCameraParameters.h>
#ifdef ENABLE_VISP_NAMESPACE
using namespace VISP_NAMESPACE_NAME;
#endif
int main()
{
#if defined(VISP_HAVE_NLOHMANN_JSON)
std::string filename = "cam.json";
{
// Save camera parameters in a JSON file
vpCameraParameters cam(801, 802, 325, 245);
std::ofstream file(filename);
const nlohmann::json j = cam;
file << j;
file.close();
}
{
// Load camera parameters from a JSON file
std::ifstream file(filename);
const nlohmann::json j = nlohmann::json::parse(file);
vpCameraParameters cam = j;
file.close();
std::cout << "Read camera parameters from " << filename << ":\n" << cam << std::endl;
}
#endif
}

If you build and execute the sample code, it will produce the following output:

Read camera parameters from cam.json:
Camera parameters for perspective projection without distortion:
px = 801 py = 802
u0 = 325 v0 = 245

The content of the cam.json file is the following:

$ cat cam.json
{"model":"perspectiveWithoutDistortion","px":801.0,"py":802.0,"u0":325.0,"v0":245.0}
Examples
AROgre.cpp, AROgreBasic.cpp, HelloWorldOgre.cpp, HelloWorldOgreAdvanced.cpp, catchAprilTag.cpp, catchGenericTrackerDeterminist.cpp, catchJsonCamera.cpp, grabRealSense2_T265.cpp, manGeometricFeatures.cpp, manServo4PointsDisplay.cpp, manSimu4Dots.cpp, manSimu4Points.cpp, mbot-apriltag-2D-half-vs.cpp, mbot-apriltag-ibvs.cpp, mbot-apriltag-pbvs.cpp, mbtEdgeKltTracking.cpp, mbtEdgeTracking.cpp, mbtGenericTracking.cpp, mbtGenericTracking2.cpp, mbtGenericTrackingDepth.cpp, mbtGenericTrackingDepthOnly.cpp, mbtKltTracking.cpp, perfApriltagDetection.cpp, perfGenericTracker.cpp, photometricMappingVisualServoing.cpp, photometricVisualServoing.cpp, photometricVisualServoingWithoutVpServo.cpp, poseVirtualVS.cpp, saveRealSenseData.cpp, servoAfma4Point2DArtVelocity.cpp, servoAfma4Point2DCamVelocity.cpp, servoAfma4Point2DCamVelocityKalman.cpp, servoAfma62DhalfCamVelocity.cpp, servoAfma6AprilTagIBVS.cpp, servoAfma6AprilTagPBVS.cpp, servoAfma6Cylinder2DCamVelocity.cpp, servoAfma6Cylinder2DCamVelocitySecondaryTask.cpp, servoAfma6Ellipse2DCamVelocity.cpp, servoAfma6FourPoints2DArtVelocity.cpp, servoAfma6FourPoints2DCamVelocityLs_cur.cpp, servoAfma6FourPoints2DCamVelocityLs_cur_integrator.cpp, servoAfma6FourPoints2DCamVelocityLs_des.cpp, servoAfma6Line2DCamVelocity.cpp, servoAfma6MegaposePBVS.cpp, servoAfma6Point2DArtVelocity.cpp, servoAfma6Point2DCamVelocity.cpp, servoAfma6Points2DCamVelocityEyeToHand.cpp, servoAfma6Segment2DCamVelocity.cpp, servoAfma6SquareLines2DCamVelocity.cpp, servoAfma6TwoLines2DCamVelocity.cpp, servoBebop2.cpp, servoBiclopsPoint2DArtVelocity.cpp, servoFlirPtuIBVS.cpp, servoFrankaIBVS.cpp, servoFrankaPBVS.cpp, servoMomentImage.cpp, servoMomentPoints.cpp, servoMomentPolygon.cpp, servoPioneerPanSegment3D.cpp, servoPioneerPoint2DDepth.cpp, servoPioneerPoint2DDepthWithoutVpServo.cpp, servoPololuPtuPoint2DJointVelocity.cpp, servoPtu46Point2DArtVelocity.cpp, servoSimu4Points.cpp, servoSimuAfma6FourPoints2DCamVelocity.cpp, servoSimuCircle2DCamVelocityDisplay.cpp, servoSimuCylinder.cpp, servoSimuCylinder2DCamVelocityDisplay.cpp, servoSimuCylinder2DCamVelocityDisplaySecondaryTask.cpp, servoSimuFourPoints2DCamVelocityDisplay.cpp, servoSimuFourPoints2DPolarCamVelocityDisplay.cpp, servoSimuLine2DCamVelocityDisplay.cpp, servoSimuSphere.cpp, servoSimuSphere2DCamVelocityDisplay.cpp, servoSimuSphere2DCamVelocityDisplaySecondaryTask.cpp, servoSimuSquareLine2DCamVelocityDisplay.cpp, servoSimuViper850FourPoints2DCamVelocity.cpp, servoUniversalRobotsIBVS.cpp, servoUniversalRobotsPBVS.cpp, servoViper650FourPoints2DArtVelocityLs_cur.cpp, servoViper650FourPoints2DCamVelocityLs_cur-SR300.cpp, servoViper650FourPoints2DCamVelocityLs_cur.cpp, servoViper650Point2DCamVelocity.cpp, servoViper850FourPoints2DArtVelocityLs_cur.cpp, servoViper850FourPoints2DArtVelocityLs_des.cpp, servoViper850FourPoints2DCamVelocityLs_cur.cpp, servoViper850FourPointsKinect.cpp, servoViper850Point2DArtVelocity-jointAvoidance-basic.cpp, servoViper850Point2DArtVelocity-jointAvoidance-gpa.cpp, servoViper850Point2DArtVelocity-jointAvoidance-large.cpp, servoViper850Point2DArtVelocity.cpp, servoViper850Point2DCamVelocity.cpp, servoViper850Point2DCamVelocityKalman.cpp, simulateCircle2DCamVelocity.cpp, simulateFourPoints2DCartesianCamVelocity.cpp, simulateFourPoints2DPolarCamVelocity.cpp, testAfma6.cpp, testCameraParametersConversion.cpp, testDisplays.cpp, testFeatureSegment.cpp, testGenericTracker.cpp, testGenericTrackerDepth.cpp, testImageDraw.cpp, testKeyPoint-2.cpp, testKeyPoint-4.cpp, testMbtXmlGenericParser.cpp, testMomentAlpha.cpp, 
testPose.cpp, testRealSense2_D435_align.cpp, testRealSense2_D435_opencv.cpp, testRealSense2_T265_images_odometry.cpp, testRealSense2_T265_images_odometry_async.cpp, testRealSense2_T265_odometry.cpp, testRealSense2_T265_undistort.cpp, testRobotAfma6.cpp, testRobotAfma6Pose.cpp, testRobotViper850.cpp, testRobotViper850Pose.cpp, testTrackDot.cpp, testUndistortImage.cpp, testViper650.cpp, testViper850.cpp, testVirtuoseWithGlove.cpp, testXmlParserCamera.cpp, trackMeLine.cpp, tutorial-apriltag-detector-live-T265-realsense.cpp, tutorial-apriltag-detector-live-rgbd-realsense.cpp, tutorial-apriltag-detector-live-rgbd-structure-core.cpp, tutorial-apriltag-detector-live.cpp, tutorial-apriltag-detector.cpp, tutorial-bridge-opencv-camera-param.cpp, tutorial-detection-object-mbt-deprecated.cpp, tutorial-detection-object-mbt.cpp, tutorial-detection-object-mbt2-deprecated.cpp, tutorial-detection-object-mbt2.cpp, tutorial-draw-frame.cpp, tutorial-flir-ptu-ibvs.cpp, tutorial-grabber-realsense.cpp, tutorial-homography-from-points.cpp, tutorial-hsv-segmentation-pcl-viewer.cpp, tutorial-hsv-segmentation-pcl.cpp, tutorial-ibvs-4pts-display.cpp, tutorial-ibvs-4pts-image-tracking.cpp, tutorial-ibvs-4pts-ogre-tracking.cpp, tutorial-ibvs-4pts-ogre.cpp, tutorial-ibvs-4pts-wireframe-camera.cpp, tutorial-ibvs-4pts-wireframe-robot-afma6.cpp, tutorial-ibvs-4pts-wireframe-robot-viper.cpp, tutorial-image-simulator.cpp, tutorial-matching-keypoint-homography.cpp, tutorial-mb-edge-tracker.cpp, tutorial-mb-generic-tracker-apriltag-rs2.cpp, tutorial-mb-generic-tracker-apriltag-webcam.cpp, tutorial-mb-generic-tracker-full.cpp, tutorial-mb-generic-tracker-live.cpp, tutorial-mb-generic-tracker-read.cpp, tutorial-mb-generic-tracker-rgbd-blender.cpp, tutorial-mb-generic-tracker-rgbd-realsense-json.cpp, tutorial-mb-generic-tracker-rgbd-realsense.cpp, tutorial-mb-generic-tracker-rgbd-structure-core.cpp, tutorial-mb-generic-tracker-rgbd.cpp, tutorial-mb-generic-tracker-save.cpp, tutorial-mb-generic-tracker-stereo-mono.cpp, tutorial-mb-generic-tracker-stereo.cpp, tutorial-mb-generic-tracker.cpp, tutorial-mb-hybrid-tracker.cpp, tutorial-mb-klt-tracker.cpp, tutorial-mb-tracker-full.cpp, tutorial-mb-tracker.cpp, tutorial-megapose-live-single-object-tracking.cpp, tutorial-panda3d-renderer.cpp, tutorial-pf.cpp, tutorial-pose-from-planar-object.cpp, tutorial-pose-from-points-image.cpp, tutorial-pose-from-points-live.cpp, tutorial-pose-from-points-realsense-T265.cpp, tutorial-pose-from-qrcode-image.cpp, tutorial-ukf.cpp, tutorial-undistort.cpp, and wireframeSimulator.cpp.

Definition at line 309 of file vpCameraParameters.h.

Member Enumeration Documentation

◆ vpCameraParametersProjType

Enumerator
perspectiveProjWithoutDistortion 

Perspective projection without distortion model.

perspectiveProjWithDistortion 

Perspective projection with distortion model.

ProjWithKannalaBrandtDistortion 

Projection with Kannala-Brandt distortion model.

Definition at line 315 of file vpCameraParameters.h.

Constructor & Destructor Documentation

◆ vpCameraParameters() [1/5]

vpCameraParameters::vpCameraParameters ( )

Default constructor. By default, a perspective projection without distortion model is set.

See also
init()

Definition at line 67 of file vpCameraParameters.cpp.

References init().

◆ vpCameraParameters() [2/5]

vpCameraParameters::vpCameraParameters ( const vpCameraParameters & c )

Copy constructor

Definition at line 79 of file vpCameraParameters.cpp.

References init().

◆ vpCameraParameters() [3/5]

vpCameraParameters::vpCameraParameters ( double  cam_px,
double  cam_py,
double  cam_u0,
double  cam_v0 
)

Constructor for perspective projection without distortion model

Parameters
cam_px: Pixel size along x axis (horizontal).
cam_py: Pixel size along y axis (vertical).
cam_u0: Principal point coordinate in pixel along x.
cam_v0: Principal point coordinate in pixel along y.
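For example, with illustrative values:

vpCameraParameters cam(600, 600, 320, 240); // px = py = 600, principal point at (320, 240)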

Definition at line 96 of file vpCameraParameters.cpp.

References initPersProjWithoutDistortion().

◆ vpCameraParameters() [4/5]

vpCameraParameters::vpCameraParameters ( double  cam_px,
double  cam_py,
double  cam_u0,
double  cam_v0,
double  cam_kud,
double  cam_kdu 
)

Constructor for perspective projection with distortion model

Parameters
cam_px: Pixel size along x axis (horizontal).
cam_py: Pixel size along y axis (vertical).
cam_u0: Principal point coordinate in pixel along x.
cam_v0: Principal point coordinate in pixel along y.
cam_kud: Undistorted to distorted radial distortion.
cam_kdu: Distorted to undistorted radial distortion.

Definition at line 115 of file vpCameraParameters.cpp.

References initPersProjWithDistortion().

◆ vpCameraParameters() [5/5]

vpCameraParameters::vpCameraParameters ( double  cam_px,
double  cam_py,
double  cam_u0,
double  cam_v0,
const std::vector< double > &  coefficients 
)

Constructor for projection with Kannala-Brandt distortion model

Parameters
cam_px: Pixel size along x axis (horizontal).
cam_py: Pixel size along y axis (vertical).
cam_u0: Principal point coordinate in pixel along x.
cam_v0: Principal point coordinate in pixel along y.
coefficients: Distortion model coefficients.
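For example, with illustrative intrinsics and five Kannala-Brandt distortion coefficients:

std::vector<double> coeffs = { 1.0, -0.01, 0.02, -0.005, 0.001 }; // illustrative coefficients
vpCameraParameters cam(300, 300, 320, 240, coeffs);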

Definition at line 134 of file vpCameraParameters.cpp.

References initProjWithKannalaBrandtDistortion().

◆ ~vpCameraParameters()

vpCameraParameters::~vpCameraParameters ( )
virtual

Destructor that does nothing.

Definition at line 334 of file vpCameraParameters.cpp.

Member Function Documentation

◆ computeFov()

◆ get_K()

vpMatrix vpCameraParameters::get_K ( ) const

Return the camera matrix $K$ given by:

$ K = \left[\begin{array}{ccc} p_x & 0 & u_0 \\ 0 & p_y & v_0 \\ 0 & 0 & 1 \end{array} \right] $

See also
get_K_inverse()

Definition at line 591 of file vpCameraParameters.cpp.

Referenced by vpMbEdgeTracker::downScale(), vpPose::poseFromRectangle(), and vpMbEdgeTracker::upScale().
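For example, a minimal sketch with illustrative parameters:

vpCameraParameters cam(600, 600, 320, 240);
vpMatrix K = cam.get_K();
std::cout << "K:\n" << K << std::endl;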

◆ get_K_inverse()

vpMatrix vpCameraParameters::get_K_inverse ( ) const

Return the inverted camera matrix $K^{-1}$ given by:

$ K^{-1} = \left[\begin{array}{ccc} 1/p_x & 0 & -u_0/p_x \\ 0 & 1/p_y & -v_0/p_y \\ 0 & 0 & 1 \end{array} \right] $

See also
get_K()

Definition at line 616 of file vpCameraParameters.cpp.

◆ get_kdu()

double vpCameraParameters::get_kdu ( ) const
inline

Definition at line 411 of file vpCameraParameters.h.

Referenced by vpCalibration::computeStdDeviation_dist().

◆ get_kud()

◆ get_projModel()

◆ get_px()

double vpCameraParameters::get_px ( ) const
inline
Examples
mbot-apriltag-2D-half-vs.cpp, servoPioneerPanSegment3D.cpp, servoPioneerPoint2DDepth.cpp, servoPioneerPoint2DDepthWithoutVpServo.cpp, servoPololuPtuPoint2DJointVelocity.cpp, testKeyPoint-2.cpp, tutorial-apriltag-detector-live-T265-realsense.cpp, tutorial-bridge-opencv-camera-param.cpp, tutorial-detection-object-mbt2-deprecated.cpp, tutorial-detection-object-mbt2.cpp, tutorial-matching-keypoint-homography.cpp, tutorial-mb-generic-tracker-full.cpp, and tutorial-mb-generic-tracker-save.cpp.

Definition at line 404 of file vpCameraParameters.h.

Referenced by vpFeatureLuminance::buildFrom(), vpHomography::collineation2homography(), vpCalibration::computeCalibration(), vpCalibration::computeCalibrationMulti(), vpMbtDistanceLine::computeInteractionMatrixError(), vpMbtDistanceCylinder::computeInteractionMatrixError(), vpCircle::computeIntersectionPoint(), vpKeyPoint::computePose(), vpCalibration::computeStdDeviation(), vpCalibration::computeStdDeviation_dist(), vpMbEdgeTracker::computeVVSFirstPhase(), vpMbEdgeTracker::computeVVSInit(), vpMeterPixelConversion::convertEllipse(), vpPixelMeterConversion::convertEllipse(), vpCalibration::displayGrid(), vpMomentObject::fromImage(), vpHomography::homography2collineation(), vpImageTools::initUndistortMap(), vpWireFrameSimulator::setExternalCameraParameters(), vpSimulator::setExternalCameraParameters(), vpWireFrameSimulator::setInternalCameraParameters(), vpSimulator::setInternalCameraParameters(), vpMegaPose::setIntrinsics(), vpPanda3DRenderParameters::setupPandaCamera(), vpImageTools::undistort(), and vpAROgre::updateCameraProjection().

◆ get_px_inverse()

double vpCameraParameters::get_px_inverse ( ) const
inline

◆ get_py()

◆ get_py_inverse()

double vpCameraParameters::get_py_inverse ( ) const
inline

◆ get_u0()

◆ get_v0()

◆ getFovNormals()

std::vector<vpColVector> vpCameraParameters::getFovNormals ( ) const
inline

Get the list of the normals corresponding to planes describing the field of view.

  • vector[0] : Left Normal.
  • vector[1] : Right Normal.
  • vector[2] : Up Normal.
  • vector[3] : Down Normal.
See also
computeFov()
Returns
List of the normals.

Definition at line 396 of file vpCameraParameters.h.

Referenced by vpPolygon3D::computePolygonClipped().

◆ getHorizontalFovAngle()

double vpCameraParameters::getHorizontalFovAngle ( ) const
inline

Get the horizontal angle in radian of the field of view.

Returns
FOV horizontal angle computed with px and width.
See also
computeFov(), getVerticalFovAngle()

Definition at line 361 of file vpCameraParameters.h.

◆ getKannalaBrandtDistortionCoefficients()

std::vector<double> vpCameraParameters::getKannalaBrandtDistortionCoefficients ( ) const
inline

Definition at line 412 of file vpCameraParameters.h.

Referenced by vpImageTools::initUndistortMap().

◆ getVerticalFovAngle()

double vpCameraParameters::getVerticalFovAngle ( ) const
inline

Get the vertical angle in radian of the field of view.

Returns
FOV vertical angle computed with py and height.
See also
computeFov(), getHorizontalFovAngle()

Definition at line 376 of file vpCameraParameters.h.

◆ init() [1/2]

void vpCameraParameters::init ( )

Basic initialization with the default parameters.

Examples
AROgre.cpp, AROgreBasic.cpp, and tutorial-ibvs-4pts-image-tracking.cpp.

Definition at line 147 of file vpCameraParameters.cpp.

References vpException::divideByZeroError.

Referenced by vpCameraParameters(), and vpVirtualGrabber::vpVirtualGrabber().

◆ init() [2/2]

void vpCameraParameters::init ( const vpCameraParameters & c )

Initialization from another vpCameraParameters object.

Definition at line 339 of file vpCameraParameters.cpp.

◆ initFromCalibrationMatrix()

void vpCameraParameters::initFromCalibrationMatrix ( const vpMatrix & K )

Initialise the camera from a calibration matrix. Using a calibration matrix leads to a camera without distortion.

The matrix K given as parameter must have the following form:

$ K = \left(\begin{array}{ccc} p_x & 0 & u_0 \\ 0 & p_y & v_0 \\ 0 & 0 & 1 \end{array} \right) $

Parameters
K: the 3-by-3 calibration matrix
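A minimal sketch with illustrative values:

vpMatrix K(3, 3); // initialized to zero
K[0][0] = 600.; K[0][2] = 320.;
K[1][1] = 600.; K[1][2] = 240.;
K[2][2] = 1.;
vpCameraParameters cam;
cam.initFromCalibrationMatrix(K);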

Definition at line 355 of file vpCameraParameters.cpp.

References vpException::badValue, vpException::dimensionError, vpArray2D< Type >::getCols(), vpArray2D< Type >::getRows(), and initPersProjWithoutDistortion().

Referenced by vpMbEdgeTracker::downScale(), and vpMbEdgeTracker::upScale().

◆ initFromFov()

void vpCameraParameters::initFromFov ( const unsigned int &  w,
const unsigned int &  h,
const double &  hfov,
const double &  vfov 
)

Initialize the camera model without distortion from the image dimensions and the camera field of view.

Parameters
w: Image width.
h: Image height.
hfov: Camera horizontal field of view angle expressed in radians.
vfov: Camera vertical field of view angle expressed in radians.

The following sample code shows how to use this function:

#include <iostream>

#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpImage.h>
#include <visp3/core/vpMath.h>
#ifdef ENABLE_VISP_NAMESPACE
using namespace VISP_NAMESPACE_NAME;
#endif
int main()
{
vpImage<unsigned char> I(480, 640);
vpCameraParameters cam;
double hfov = vpMath::rad(56);
double vfov = vpMath::rad(43);
cam.initFromFov(I.getWidth(), I.getHeight(), hfov, vfov);
std::cout << cam << std::endl;
std::cout << "Field of view (horizontal: " << vpMath::deg(cam.getHorizontalFovAngle())
<< " and vertical: " << vpMath::deg(cam.getVerticalFovAngle()) << " degrees)" << std::endl;
}

It produces the following output:

Camera parameters for perspective projection without distortion:
px = 601.832 py = 609.275
u0 = 320 v0 = 240
Field of view (horizontal: 56 and vertical: 43 degrees)

Definition at line 409 of file vpCameraParameters.cpp.

References computeFov(), and perspectiveProjWithoutDistortion.

◆ initPersProjWithDistortion()

void vpCameraParameters::initPersProjWithDistortion ( double  cam_px,
double  cam_py,
double  cam_u0,
double  cam_v0,
double  cam_kud,
double  cam_kdu 
)

Initialization with specific parameters using perspective projection with distortion model.

Parameters
cam_px: Pixel size along x axis (horizontal).
cam_py: Pixel size along y axis (vertical).
cam_u0: Principal point coordinate in pixel along x.
cam_v0: Principal point coordinate in pixel along y.
cam_kud: Undistorted to distorted radial distortion.
cam_kdu: Distorted to undistorted radial distortion.

The following sample code shows how to use this function:

#include <iostream>

#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpImage.h>
#include <visp3/core/vpMath.h>
#ifdef ENABLE_VISP_NAMESPACE
using namespace VISP_NAMESPACE_NAME;
#endif
int main()
{
vpImage<unsigned char> I(480, 640);
vpCameraParameters cam;
double u0 = I.getWidth() / 2.;
double v0 = I.getHeight() / 2.;
double px = 600;
double py = 600;
double kud = -0.19;
double kdu = 0.20;
cam.initPersProjWithDistortion(px, py, u0, v0, kud, kdu);
cam.computeFov(I.getWidth(), I.getHeight());
std::cout << cam << std::endl;
std::cout << "Field of view (horizontal: " << vpMath::deg(cam.getHorizontalFovAngle())
<< " and vertical: " << vpMath::deg(cam.getVerticalFovAngle())
<< " degrees)" << std::endl;
}

It produces the following output:

Camera parameters for perspective projection with distortion:
px = 600 py = 600
u0 = 320 v0 = 240
kud = -0.19
kdu = 0.2
Field of view (horizontal: 56.14497387 and vertical: 43.60281897 degrees)
Examples
catchJsonCamera.cpp, testCameraParametersConversion.cpp, testUndistortImage.cpp, testXmlParserCamera.cpp, and tutorial-undistort.cpp.

Definition at line 274 of file vpCameraParameters.cpp.

References vpException::divideByZeroError, and perspectiveProjWithDistortion.

Referenced by vpRealSense2::getCameraParameters(), vpRealSense::getCameraParameters(), vpAfma6::getCameraParameters(), vpViper650::getCameraParameters(), vpViper850::getCameraParameters(), vpKinect::start(), and vpCameraParameters().

◆ initPersProjWithoutDistortion()

void vpCameraParameters::initPersProjWithoutDistortion ( double  cam_px,
double  cam_py,
double  cam_u0,
double  cam_v0 
)

Initialization with specific parameters using perspective projection without distortion model.

Parameters
cam_px: Pixel size along x axis (horizontal).
cam_py: Pixel size along y axis (vertical).
cam_u0: Principal point coordinate in pixel along x.
cam_v0: Principal point coordinate in pixel along y.

The following sample code shows how to use this function:

#include <iostream>

#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpImage.h>
#include <visp3/core/vpMath.h>
#ifdef ENABLE_VISP_NAMESPACE
using namespace VISP_NAMESPACE_NAME;
#endif
int main()
{
vpImage<unsigned char> I(480, 640);
vpCameraParameters cam;
double u0 = I.getWidth() / 2.;
double v0 = I.getHeight() / 2.;
double px = 600;
double py = 600;
cam.initPersProjWithoutDistortion(px, py, u0, v0);
cam.computeFov(I.getWidth(), I.getHeight());
std::cout << cam << std::endl;
std::cout << "Field of view (horizontal: " << vpMath::deg(cam.getHorizontalFovAngle())
<< " and vertical: " << vpMath::deg(cam.getVerticalFovAngle())
<< " degrees)" << std::endl;
}

It produces the following output:

Camera parameters for perspective projection without distortion:
px = 600 py = 600
u0 = 320 v0 = 240
Field of view (horizontal: 56.145 and vertical: 43.6028 degrees)
Examples
catchAprilTag.cpp, catchGenericTrackerDeterminist.cpp, catchJsonCamera.cpp, mbot-apriltag-2D-half-vs.cpp, mbot-apriltag-ibvs.cpp, mbot-apriltag-pbvs.cpp, mbtEdgeKltTracking.cpp, mbtEdgeTracking.cpp, mbtGenericTracking.cpp, mbtGenericTracking2.cpp, mbtGenericTrackingDepth.cpp, mbtGenericTrackingDepthOnly.cpp, mbtKltTracking.cpp, perfGenericTracker.cpp, servoBebop2.cpp, servoPioneerPanSegment3D.cpp, servoPioneerPoint2DDepth.cpp, servoPioneerPoint2DDepthWithoutVpServo.cpp, testCameraParametersConversion.cpp, testGenericTracker.cpp, testGenericTrackerDepth.cpp, testKeyPoint-2.cpp, testKeyPoint-4.cpp, testMbtXmlGenericParser.cpp, testXmlParserCamera.cpp, tutorial-apriltag-detector-live-T265-realsense.cpp, tutorial-apriltag-detector-live.cpp, tutorial-apriltag-detector.cpp, tutorial-detection-object-mbt-deprecated.cpp, tutorial-detection-object-mbt.cpp, tutorial-detection-object-mbt2-deprecated.cpp, tutorial-detection-object-mbt2.cpp, tutorial-draw-frame.cpp, tutorial-mb-edge-tracker.cpp, tutorial-mb-generic-tracker-apriltag-webcam.cpp, tutorial-mb-generic-tracker-full.cpp, tutorial-mb-generic-tracker-live.cpp, tutorial-mb-generic-tracker-rgbd.cpp, tutorial-mb-generic-tracker-save.cpp, tutorial-mb-generic-tracker-stereo-mono.cpp, tutorial-mb-generic-tracker.cpp, tutorial-mb-hybrid-tracker.cpp, tutorial-mb-klt-tracker.cpp, tutorial-mb-tracker-full.cpp, tutorial-mb-tracker.cpp, tutorial-pf.cpp, and tutorial-ukf.cpp.

Definition at line 202 of file vpCameraParameters.cpp.

References vpException::divideByZeroError, and perspectiveProjWithoutDistortion.

Referenced by vpRealSense2::getCameraParameters(), vpRealSense::getCameraParameters(), vpAfma6::getCameraParameters(), vpViper650::getCameraParameters(), vpViper850::getCameraParameters(), initFromCalibrationMatrix(), and vpCameraParameters().

◆ initProjWithKannalaBrandtDistortion()

void vpCameraParameters::initProjWithKannalaBrandtDistortion ( double  cam_px,
double  cam_py,
double  cam_u0,
double  cam_v0,
const std::vector< double > &  coefficients 
)

Initialization with specific parameters using Kannala-Brandt distortion model

Parameters
cam_px: Pixel size along x axis (horizontal).
cam_py: Pixel size along y axis (vertical).
cam_u0: Principal point coordinate in pixel along x.
cam_v0: Principal point coordinate in pixel along y.
coefficients: Distortion coefficients.
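For example, with illustrative values:

vpCameraParameters cam;
std::vector<double> coeffs = { 1.0, -0.01, 0.02, -0.005, 0.001 }; // illustrative coefficients
cam.initProjWithKannalaBrandtDistortion(300, 300, 320, 240, coeffs);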
Examples
catchJsonCamera.cpp, and testXmlParserCamera.cpp.

Definition at line 306 of file vpCameraParameters.cpp.

References vpException::divideByZeroError, and ProjWithKannalaBrandtDistortion.

Referenced by vpRealSense2::getCameraParameters(), and vpCameraParameters().

◆ isFovComputed()

bool vpCameraParameters::isFovComputed ( ) const
inline

Indicates whether the fov has been computed.

See also
computeFov()
Returns
True if the fov has been computed, False otherwise.

Definition at line 350 of file vpCameraParameters.h.

Referenced by vpPolygon3D::computePolygonClipped().

◆ operator!=()

bool vpCameraParameters::operator!= ( const vpCameraParameters & c ) const

False if the two objects are absolutely identical.

Definition at line 516 of file vpCameraParameters.cpp.

◆ operator=()

vpCameraParameters & vpCameraParameters::operator= ( const vpCameraParameters & cam )

Copy operator.

Definition at line 427 of file vpCameraParameters.cpp.

◆ operator==()

bool vpCameraParameters::operator== ( const vpCameraParameters & c ) const

True if the two objects are absolutely identical.

Definition at line 454 of file vpCameraParameters.cpp.

References vpMath::equal().

◆ printParameters()

Friends And Related Function Documentation

◆ from_json

void from_json ( const nlohmann::json &  j,
vpCameraParameters & cam 
)
friend

Deserialize a JSON object into camera parameters. The minimal required properties are:

  • Pixel size: px, py
  • Principal point: u0, v0

If a projection model (vpCameraParameters::vpCameraParametersProjType) is supplied, then other parameters may be expected:

  • In the case of perspective projection with distortion, kud and kdu must be supplied.
  • In the case of Kannala-Brandt distortion, the list of coefficients must be supplied.

An example of a JSON object representing a camera is:

{
"px": 300.0,
"py": 300.0,
"u0": 120.5,
"v0": 115.0,
"model": "perspectiveWithDistortion", // one of ["perspectiveWithoutDistortion", "perspectiveWithDistortion", "kannalaBrandtDistortion"]. If omitted, camera is assumed to have no distortion
"kud": 0.5, // required since "model" == perspectiveWithDistortion
"kdu": 0.5
}
Parameters
j: The JSON object to deserialize.
cam: The modified camera.

Definition at line 520 of file vpCameraParameters.h.

◆ operator<<

VISP_EXPORT std::ostream& operator<< ( std::ostream &  os,
const vpCameraParameters & cam 
)
friend

Print on the output stream os the camera parameters.

Parameters
os: Output stream.
cam: Camera parameters.

Definition at line 682 of file vpCameraParameters.cpp.

◆ to_json

void to_json ( nlohmann::json &  j,
const vpCameraParameters & cam 
)
friend

Converts camera parameters into a JSON representation.

See also
from_json() for more information on the content.
Parameters
j: The resulting JSON object.
cam: The camera to serialize.

Definition at line 467 of file vpCameraParameters.h.

◆ vpMeterPixelConversion

friend class vpMeterPixelConversion
friend

Definition at line 311 of file vpCameraParameters.h.

◆ vpPixelMeterConversion

friend class vpPixelMeterConversion
friend

Definition at line 312 of file vpCameraParameters.h.