FeatureBuilder

class FeatureBuilder

Bases: pybind11_object

Class that defines conversion between trackers and visual features.

Methods

__init__

create

Overloaded function.

Inherited Methods

Operators

__doc__

__init__

__module__

Attributes

__annotations__

__init__(*args, **kwargs)
static create(*args, **kwargs)

Overloaded function.

  1. create(s: visp._visp.visual_features.FeaturePoint, cam: visp._visp.core.CameraParameters, d: visp._visp.blob.Dot) -> None

Create a vpFeaturePoint thanks to a vpDot and the parameters of the camera. The vpDot contains only the pixel coordinates of the point in an image. Thus this method uses the camera parameters to compute the meter coordinates \(x\) and \(y\) in the image plane. Those coordinates are stored in the vpFeaturePoint .

Warning

It is not possible to compute the depth \(Z\) of the point in the camera frame from a vpDot . This coordinate is needed by vpFeaturePoint to compute the interaction matrix, so it must be computed outside this function.

The code below shows how to initialize a vpFeaturePoint visual feature. First, we initialize \(x\) and \(y\), and lastly we set the 3D depth \(Z\) of the point, which generally results from a pose estimation.

vpImage<unsigned char> I; // Image container
vpCameraParameters cam;   // Default intrinsic camera parameters
vpDot dot;               // Dot tracker

vpFeaturePoint s;    // Point feature
...
// Track the dot
dot.track(I);

// Initialize the x,y visual feature
vpFeatureBuilder::create(s, cam, dot);

// A pose estimation is requested to initialize Z, the depth of the
// point in the camera frame.
double Z = 1; // Depth of the point in meters
...
s.set_Z(Z);
Parameters:
s

Visual feature \((x, y)\) to initialize. Be aware that the 3D depth \(Z\) required to compute the interaction matrix is not initialized by this function.

cam

The parameters of the camera used to acquire the image containing the vpDot .

d

The vpDot used to create the vpFeaturePoint .
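
The pixel-to-meter conversion this overload performs can be sketched outside of ViSP. This is only an illustration of the pinhole model, assuming intrinsics \((p_x, p_y, u_0, v_0)\) and no distortion; it is not the ViSP implementation:

```python
def pixel_to_meter(u, v, px, py, u0, v0):
    """Convert pixel coordinates (u, v) into normalized image-plane
    coordinates (x, y) in meters (pinhole model, no distortion)."""
    x = (u - u0) / px
    y = (v - v0) / py
    return x, y

# The principal point maps to the image-plane origin.
x, y = pixel_to_meter(320.0, 240.0, px=600.0, py=600.0, u0=320.0, v0=240.0)
```

Note that the depth \(Z\) plays no role in this conversion, which is why it must be set separately with set_Z().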

  2. create(s: visp._visp.visual_features.FeaturePoint, cam: visp._visp.core.CameraParameters, d: visp._visp.blob.Dot2) -> None

Create a vpFeaturePoint thanks to a vpDot2 and the parameters of the camera. The vpDot2 contains only the pixel coordinates of the point in an image. Thus this method uses the camera parameters to compute the meter coordinates \(x\) and \(y\) in the image plane. Those coordinates are stored in the vpFeaturePoint .

Warning

It is not possible to compute the depth \(Z\) of the point in the camera frame from a vpDot2 . This coordinate is needed by vpFeaturePoint to compute the interaction matrix, so it must be computed outside this function.

The code below shows how to initialize a vpFeaturePoint visual feature. First, we initialize \(x\) and \(y\), and lastly we set the 3D depth \(Z\) of the point, which generally results from a pose estimation.

vpImage<unsigned char> I; // Image container
vpCameraParameters cam;   // Default intrinsic camera parameters
vpDot2 dot;               // Dot tracker

vpFeaturePoint s;    // Point feature
...
// Track the dot
dot.track(I);

// Initialize the x,y visual feature
vpFeatureBuilder::create(s, cam, dot);

// A pose estimation is requested to initialize Z, the depth of the
// point in the camera frame.
double Z = 1; // Depth of the point in meters
...
s.set_Z(Z);
Parameters:
s

The feature point.

cam

The parameters of the camera used to acquire the image containing the vpDot2 .

d

The vpDot2 used to create the vpFeaturePoint .

  3. create(s: visp._visp.visual_features.FeaturePoint, cam: visp._visp.core.CameraParameters, t: visp._visp.core.ImagePoint) -> None

Create a vpFeaturePoint thanks to a vpImagePoint and the parameters of the camera. The vpImagePoint contains only the pixel coordinates of the point in an image. Thus this method uses the camera parameters to compute the meter coordinates \(x\) and \(y\) in the image plane. Those coordinates are stored in the vpFeaturePoint .

Warning

It is not possible to compute the depth \(Z\) of the point in the camera frame from a vpImagePoint . This coordinate is needed by vpFeaturePoint to compute the interaction matrix, so it must be computed outside this function.

The code below shows how to initialize a vpFeaturePoint visual feature. First, we initialize \(x\) and \(y\), and lastly we set the 3D depth \(Z\) of the point, which generally results from a pose estimation.

vpImage<unsigned char> I; // Image container
vpCameraParameters cam;   // Default intrinsic camera parameters
vpImagePoint iP;               // the point in the image

vpFeaturePoint s;    // Point feature
...
// Set the point coordinates in the image (here the coordinates are given in
// the (i,j) frame)
iP.set_i(0);
iP.set_j(0);

// Initialize the x,y visual feature
vpFeatureBuilder::create(s, cam, iP);

// A pose estimation is requested to initialize Z, the depth of the
// point in the camera frame.
double Z = 1; // Depth of the point in meters
...
s.set_Z(Z);
Parameters:
s

The feature point.

cam

The parameters of the camera used to acquire the image containing the point.

t

The vpImagePoint used to create the vpFeaturePoint .

  4. create(s: visp._visp.visual_features.FeaturePoint, p: visp._visp.core.Point) -> None

Create a vpFeaturePoint thanks to a vpPoint . This method uses the point coordinates \(x\) and \(y\) in the image plane to set the visual feature parameters. The value of the depth \(Z\) in the camera frame is also computed thanks to the coordinates in the camera frame which are stored in vpPoint .

Warning

To be sure that the vpFeaturePoint is well initialized, you have to be sure that at least the point coordinates in the image plane and in the camera frame are computed and stored in the vpPoint .

Parameters:
s

The feature point.

p

The vpPoint used to create the vpFeaturePoint .

  5. create(s: visp._visp.visual_features.FeaturePoint, goodCam: visp._visp.core.CameraParameters, wrongCam: visp._visp.core.CameraParameters, p: visp._visp.core.Point) -> None

Create a vpFeaturePoint thanks to a vpPoint . In this method noise is introduced during the initialization of the vpFeaturePoint . This method uses the point coordinates \(x\) and \(y\) in the image plane to set the visual feature parameters. The value of the depth \(Z\) in the camera frame is also computed thanks to the coordinates in the camera frame which are stored in vpPoint .

This function intends to introduce noise after the initialization of the parameters. Cartesian \((x,y)\) coordinates are first converted to pixel coordinates in the image using the goodCam camera parameters. Then, the pixel coordinates of the point are converted back to Cartesian coordinates \((x^{'},y^{'})\) using the noisy camera parameters wrongCam . These last coordinates are stored in the vpFeaturePoint .

Warning

To be sure that the vpFeaturePoint is well initialized, you have to be sure that at least the point coordinates in the image plane and in the camera frame are computed and stored in the vpPoint .

Parameters:
s

The feature point.

goodCam

Camera parameters used to introduce noise. These parameters are used to convert the Cartesian coordinates of the point p in the image plane into pixel coordinates.

wrongCam

Camera parameters used to introduce noise. These parameters are used to convert the pixel coordinates of the point back into Cartesian coordinates in the image plane.

p

The vpPoint used to create the vpFeaturePoint .
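
The noise-introduction round trip described above can be sketched with plain Python (hypothetical helper names; pinhole model without distortion, intrinsics packed as (px, py, u0, v0)):

```python
def meter_to_pixel(x, y, cam):
    """Project image-plane coordinates in meters to pixel coordinates."""
    px, py, u0, v0 = cam
    return u0 + x * px, v0 + y * py

def pixel_to_meter(u, v, cam):
    """Convert pixel coordinates back to image-plane coordinates in meters."""
    px, py, u0, v0 = cam
    return (u - u0) / px, (v - v0) / py

def add_intrinsics_noise(x, y, good_cam, wrong_cam):
    """Project (x, y) to pixels with the good intrinsics, then convert
    back to meters with the wrong ones, as the overload above does."""
    u, v = meter_to_pixel(x, y, good_cam)
    return pixel_to_meter(u, v, wrong_cam)

good = (600.0, 600.0, 320.0, 240.0)
wrong = (590.0, 610.0, 322.0, 238.0)  # slightly perturbed intrinsics
x_noisy, y_noisy = add_intrinsics_noise(0.1, -0.05, good, wrong)
```

With goodCam equal to wrongCam the round trip is the identity; the noise comes only from the mismatch between the two parameter sets.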

  6. create(s: visp._visp.visual_features.FeatureSegment, cam: visp._visp.core.CameraParameters, d1: visp._visp.blob.Dot, d2: visp._visp.blob.Dot) -> None

Initialize a segment feature out of two vpDot trackers and camera parameters.

Parameters:
s

Visual feature to initialize.

cam

The parameters of the camera used to acquire the image containing the point.

d1

The dot corresponding to the first point of the segment.

d2

The dot corresponding to the second point of the segment.

  7. create(s: visp._visp.visual_features.FeatureSegment, cam: visp._visp.core.CameraParameters, d1: visp._visp.blob.Dot2, d2: visp._visp.blob.Dot2) -> None

Initialize a segment feature out of two vpDot2 trackers and camera parameters.

Parameters:
s

Visual feature to initialize.

cam

The parameters of the camera used to acquire the image containing the point.

d1

The dot corresponding to the first point of the segment.

d2

The dot corresponding to the second point of the segment.

  8. create(s: visp._visp.visual_features.FeatureSegment, cam: visp._visp.core.CameraParameters, ip1: visp._visp.core.ImagePoint, ip2: visp._visp.core.ImagePoint) -> None

Initialize a segment feature out of image points and camera parameters.

Parameters:
s

Visual feature to initialize.

cam

The parameters of the camera used to acquire the image containing the point.

ip1

The image point corresponding to the first point of the segment.

ip2

The image point corresponding to the second point of the segment.

  9. create(s: visp._visp.visual_features.FeatureSegment, P1: visp._visp.core.Point, P2: visp._visp.core.Point) -> None

Build a segment visual feature from two points.

Parameters:
s

Visual feature to initialize.

P1

First point defining the segment. The point must contain its 3D coordinates in the camera frame (cP) and its projected coordinates in the image plane (p).

P2

Second point defining the segment. The point must contain its 3D coordinates in the camera frame (cP) and its projected coordinates in the image plane (p).
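
vpFeatureSegment is commonly parameterized by the segment's center, length and orientation in the image plane. As an illustration only (the exact parameterization used by ViSP may differ), those quantities can be computed from the two projected points like this:

```python
import math

def segment_params(x1, y1, x2, y2):
    """Center (xc, yc), length l and orientation alpha of the 2D
    segment joining (x1, y1) and (x2, y2) in the image plane."""
    xc = 0.5 * (x1 + x2)
    yc = 0.5 * (y1 + y2)
    l = math.hypot(x2 - x1, y2 - y1)
    alpha = math.atan2(y2 - y1, x2 - x1)
    return xc, yc, l, alpha
```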

  10. create(s: visp._visp.visual_features.FeaturePointPolar, cam: visp._visp.core.CameraParameters, dot: visp._visp.blob.Dot) -> None

Initialize a point feature with polar coordinates \((\rho,\theta)\) using the coordinates of the point in pixels obtained by image processing. This point is the center of gravity of a dot tracked using vpDot . Using the camera parameters, the pixel coordinates of the dot are first converted to Cartesian \((x,y)\) coordinates in meters in the image plane, and then to polar coordinates by:

\[\rho = \sqrt{x^2+y^2} \hbox{,}\; \; \theta = \arctan \frac{y}{x}\]

Warning

This function does not initialize \(Z\) , which is required to compute the interaction matrix by vpFeaturePointPolar::interaction() .

The code below shows how to initialize a vpFeaturePointPolar visual feature. First, we initialize \(\rho\) and \(\theta\) , and lastly we set the 3D depth \(Z\) of the point, which generally results from a pose estimation.

vpImage<unsigned char> I; // Image container
vpCameraParameters cam;   // Default intrinsic camera parameters
vpDot dot;                // Dot tracker

vpFeaturePointPolar s;    // Point feature with polar coordinates
...
// Track the dot
dot.track(I);

// Initialize rho,theta visual feature
vpFeatureBuilder::create(s, cam, dot);

// A pose estimation is requested to initialize Z, the depth of the
// point in the camera frame.
double Z = 1; // Depth of the point in meters
...
s.set_Z(Z);
Parameters:
s

Visual feature \((\rho,\theta)\) to initialize. Be aware that the 3D depth \(Z\) required to compute the interaction matrix is not initialized by this function.

cam

Camera parameters.

dot

Tracked dot. The center of gravity corresponds to the coordinates of the point in the image plane.
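
The Cartesian-to-polar conversion used by these polar overloads is just the formula above. A minimal sketch (using atan2, which handles the \(x = 0\) case that a bare \(\arctan(y/x)\) would not):

```python
import math

def cartesian_to_polar(x, y):
    """Polar coordinates (rho, theta) of the image-plane point (x, y)."""
    rho = math.hypot(x, y)      # sqrt(x^2 + y^2)
    theta = math.atan2(y, x)    # arctan(y / x), quadrant-aware
    return rho, theta
```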

  11. create(s: visp._visp.visual_features.FeaturePointPolar, cam: visp._visp.core.CameraParameters, dot: visp._visp.blob.Dot2) -> None

Initialize a point feature with polar coordinates \((\rho,\theta)\) using the coordinates of the point in pixels obtained by image processing. This point is the center of gravity of a dot tracked using vpDot2 . Using the camera parameters, the pixel coordinates of the dot are first converted to Cartesian \((x,y)\) coordinates in meters in the image plane, and then to polar coordinates by:

\[\rho = \sqrt{x^2+y^2} \hbox{,}\; \; \theta = \arctan \frac{y}{x}\]

Warning

This function does not initialize \(Z\) , which is required to compute the interaction matrix by vpFeaturePointPolar::interaction() .

The code below shows how to initialize a vpFeaturePointPolar visual feature. First, we initialize \(\rho\) and \(\theta\) , and lastly we set the 3D depth \(Z\) of the point, which generally results from a pose estimation.

vpImage<unsigned char> I; // Image container
vpCameraParameters cam;   // Default intrinsic camera parameters
vpDot2 dot;               // Dot tracker

vpFeaturePointPolar s;    // Point feature with polar coordinates
...
// Track the dot
dot.track(I);

// Initialize rho,theta visual feature
vpFeatureBuilder::create(s, cam, dot);

// A pose estimation is requested to initialize Z, the depth of the
// point in the camera frame.
double Z = 1; // Depth of the point in meters
...
s.set_Z(Z);
Parameters:
s

Visual feature \((\rho,\theta)\) to initialize. Be aware that the 3D depth \(Z\) required to compute the interaction matrix is not initialized by this function.

cam

Camera parameters.

dot

Tracked dot. The center of gravity corresponds to the coordinates of the point in the image plane.

  12. create(s: visp._visp.visual_features.FeaturePointPolar, cam: visp._visp.core.CameraParameters, iP: visp._visp.core.ImagePoint) -> None

Initialize a point feature with polar coordinates \((\rho,\theta)\) using the coordinates of the point in pixels obtained by image processing. The point coordinates are stored in a vpImagePoint . Using the camera parameters, the pixel coordinates of the point are first converted to Cartesian \((x,y)\) coordinates in meters in the image plane, and then to polar coordinates by:

\[\rho = \sqrt{x^2+y^2} \hbox{,}\; \; \theta = \arctan \frac{y}{x}\]

Warning

This function does not initialize \(Z\) , which is required to compute the interaction matrix by vpFeaturePointPolar::interaction() .

The code below shows how to initialize a vpFeaturePointPolar visual feature. First, we initialize \(\rho\) and \(\theta\) , and lastly we set the 3D depth \(Z\) of the point, which generally results from a pose estimation.

vpImage<unsigned char> I; // Image container
vpCameraParameters cam;   // Default intrinsic camera parameters
vpImagePoint iP;               // the point in the image

vpFeaturePointPolar s;    // Point feature with polar coordinates
...
// Set the point coordinates in the image (here the coordinates are given in
// the (i,j) frame)
iP.set_i(0);
iP.set_j(0);

// Initialize rho,theta visual feature
vpFeatureBuilder::create(s, cam, iP);

// A pose estimation is requested to initialize Z, the depth of the
// point in the camera frame.
double Z = 1; // Depth of the point in meters
...
s.set_Z(Z);
Parameters:
s

Visual feature \((\rho,\theta)\) to initialize. Be aware that the 3D depth \(Z\) required to compute the interaction matrix is not initialized by this function.

cam

Camera parameters.

iP

The vpImagePoint used to create the vpFeaturePointPolar .

  13. create(s: visp._visp.visual_features.FeaturePointPolar, p: visp._visp.core.Point) -> None

Initialize a point feature with polar coordinates \((\rho,\theta)\) using the coordinates of the point \((x,y,Z)\) , where \((x,y)\) correspond to the perspective projection of the point in the image plane and \(Z\) the 3D depth of the point in the camera frame. The values of \((x,y,Z)\) are expressed in meters. From the coordinates in the image plane, the polar coordinates are computed by:

\[\rho = \sqrt{x^2+y^2} \hbox{,}\; \; \theta = \arctan \frac{y}{x}\]
Parameters:
s

Visual feature \((\rho,\theta)\) and \(Z\) to initialize.

p

A point with Cartesian \((x,y)\) coordinates in the image plane corresponding to the camera perspective projection, and with 3D depth \(Z\) .

  14. create(s: visp._visp.visual_features.FeaturePointPolar, goodCam: visp._visp.core.CameraParameters, wrongCam: visp._visp.core.CameraParameters, p: visp._visp.core.Point) -> None

Initialize a point feature with polar coordinates \((\rho,\theta)\) using the coordinates of the point \((x,y,Z)\) , where \((x,y)\) correspond to the perspective projection of the point in the image plane and \(Z\) the 3D depth of the point in the camera frame. The values of \((x,y,Z)\) are expressed in meters.

This function intends to introduce noise in the conversion from Cartesian to polar coordinates. Cartesian \((x,y)\) coordinates are first converted to pixel coordinates in the image using the goodCam camera parameters. Then, the pixel coordinates of the point are converted back to Cartesian coordinates \((x^{'},y^{'})\) using the noisy camera parameters wrongCam . From these new coordinates in the image plane, the polar coordinates are computed by:

\[\rho = \sqrt{x^2+y^2} \hbox{,}\; \; \theta = \arctan \frac{y}{x}\]
Parameters:
s

Visual feature \((\rho,\theta)\) and \(Z\) to initialize.

goodCam

Camera parameters used to introduce noise. These parameters are used to convert the Cartesian coordinates of the point p in the image plane into pixel coordinates.

wrongCam

Camera parameters used to introduce noise. These parameters are used to convert the pixel coordinates of the point back into Cartesian coordinates in the image plane.

p

A point with Cartesian \((x,y)\) coordinates in the image plane corresponding to the camera perspective projection, and with 3D depth \(Z\) .

  15. create(s: visp._visp.visual_features.FeaturePoint3D, p: visp._visp.core.Point) -> None

Initialize a 3D point feature using the coordinates of the point \((X,Y,Z)\) in the camera frame. The values of \((X,Y,Z)\) are expressed in meters.

Warning

To be sure that the vpFeaturePoint3D is well initialized, you have to be sure that at least the point coordinates in the camera frame are computed and stored in the vpPoint .

Parameters:
s

Visual feature to initialize.

p

The vpPoint used to create the vpFeaturePoint3D .

  16. create(s: visp._visp.visual_features.FeatureLine, l: visp._visp.core.Line) -> None

Initialize a line feature thanks to a vpLine . A vpFeatureLine contains the parameters \((\rho,\theta)\) which are expressed in meters. It also contains the parameters of a plane equation \((A,B,C,D)\) . vpLine contains the parameters of two planes; the one with the biggest \(D\) parameter is copied into the vpFeatureLine parameters.

Parameters:
s

Visual feature to initialize.

l

The vpLine used to create the vpFeatureLine .

  17. create(s: visp._visp.visual_features.FeatureLine, c: visp._visp.core.Cylinder, line: int) -> None

Initialize a line feature thanks to a vpCylinder . A vpFeatureLine contains the parameters \((\rho,\theta)\) which are expressed in meters. It also contains the parameters of a plane equation \((A,B,C,D)\) . These parameters are computed thanks to the parameters that are contained in vpCylinder . It is possible to choose which edge of the cylinder to use to initialize the vpFeatureLine .

Parameters:
s

Visual feature to initialize.

c

The vpCylinder used to create the line feature.

line

The cylinder edge used to create the line feature. It can be vpCylinder::line1 or vpCylinder::line2 .

  18. create(s: visp._visp.visual_features.FeatureLine, cam: visp._visp.core.CameraParameters, mel: visp._visp.me.MeLine) -> None

Initialize a line feature thanks to a vpMeLine and the parameters of the camera. A vpFeatureLine contains the parameters \((\rho,\theta)\) which are expressed in meters. In vpMeLine these parameters are given in pixels. The conversion is done thanks to the camera parameters.

Warning

vpFeatureLine also contains the parameters of a plane equation \((A,B,C,D)\) . These parameters are needed to compute the interaction matrix but cannot be computed thanks to a vpMeLine . You have to compute and set these parameters outside the function.

The code below shows how to initialize a vpFeatureLine visual feature. First, we initialize \((\rho,\theta)\) , and lastly we set the parameters of the plane, which generally result from a pose estimation.

vpImage<unsigned char> I; // Image container
vpCameraParameters cam;   // Default intrinsic camera parameters
vpMeLine line;            // Moving-edges line tracker

vpFeatureLine s;    // Line feature
...
// Track the line
line.track(I);

// Initialize rho,theta visual feature
vpFeatureBuilder::create(s, cam, line);

// A pose estimation is requested to initialize A, B, C and D, the
// parameters of the plane equation.
double A = 1;
double B = 1;
double C = 1;
double D = 1;
...
s.setABCD(A,B,C,D);
Parameters:
s

Visual feature to initialize.

cam

The parameters of the camera used to acquire the image containing the line.

mel

The tracked vpMeLine used to create the vpFeatureLine .

  19. create(s: visp._visp.visual_features.FeatureEllipse, c: visp._visp.core.Circle) -> None

Initialize an ellipse feature thanks to a vpCircle . The vpFeatureEllipse is initialized thanks to the parameters of the circle in the camera frame and in the image plane. All the parameters are given in meters.

Warning

To be sure that the vpFeatureEllipse is well initialized, you have to be sure that at least the circle coordinates in the image plane and in the camera frame are computed and stored in the vpCircle .

Parameters:
s

Visual feature to initialize.

c

The vpCircle used to create the vpFeatureEllipse .

  20. create(s: visp._visp.visual_features.FeatureEllipse, sphere: visp._visp.core.Sphere) -> None

Initialize an ellipse feature thanks to a vpSphere . The vpFeatureEllipse is initialized thanks to the parameters of the sphere in the camera frame and in the image plane. All the parameters are given in meters.

Warning

To be sure that the vpFeatureEllipse is well initialized, you have to be sure that at least the sphere coordinates in the image plane and in the camera frame are computed and stored in the vpSphere .

Parameters:
s

Visual feature to initialize.

sphere

The vpSphere used to create the vpFeatureEllipse .

  21. create(s: visp._visp.visual_features.FeatureEllipse, cam: visp._visp.core.CameraParameters, blob: visp._visp.blob.Dot) -> None

Initialize an ellipse feature thanks to a vpDot and camera parameters. The vpFeatureEllipse is initialized thanks to the parameters of the dot given in pixels. The camera parameters are used to convert the pixel parameters to parameters given in meters.

Warning

With a vpDot there is no information about 3D parameters. Thus the parameters \((A,B,C)\) cannot be set. You have to compute and initialize them outside the method.

Parameters:
s

Visual feature to initialize.

cam

The parameters of the camera used to acquire the image containing the vpDot .

blob

The blob used to create the vpFeatureEllipse .

  22. create(s: visp._visp.visual_features.FeatureEllipse, cam: visp._visp.core.CameraParameters, blob: visp._visp.blob.Dot2) -> None

Initialize an ellipse feature thanks to a vpDot2 and camera parameters. The vpFeatureEllipse is initialized thanks to the parameters of the dot given in pixels. The camera parameters are used to convert the pixel parameters to parameters given in meters.

Warning

With a vpDot2 there is no information about 3D parameters. Thus the parameters \((A,B,C)\) cannot be set. You have to compute and initialize them outside the method.

Parameters:
s

Visual feature to initialize.

cam

The parameters of the camera used to acquire the image containing the vpDot2 .

blob

The blob used to create the vpFeatureEllipse .

  23. create(s: visp._visp.visual_features.FeatureEllipse, cam: visp._visp.core.CameraParameters, ellipse: visp._visp.me.MeEllipse) -> None

Initialize an ellipse feature thanks to a vpMeEllipse and camera parameters. The vpFeatureEllipse is initialized thanks to the parameters of the ellipse given in pixels. The camera parameters are used to convert the pixel parameters to parameters given in meters in the image plane.

Warning

With a vpMeEllipse there is no information about 3D parameters. Thus the parameters \((A,B,C)\) cannot be set. You have to compute and initialize them outside the method.

Parameters:
s

Visual feature to initialize.

cam

The parameters of the camera used to acquire the image containing the vpMeEllipse

ellipse

The tracked vpMeEllipse used to create the vpFeatureEllipse .

  24. create(s: visp._visp.visual_features.FeatureVanishingPoint, p: visp._visp.core.Point, select: int = (vpFeatureVanishingPoint::selectX()|vpFeatureVanishingPoint::selectY())) -> None

Initialize a vpFeatureVanishingPoint thanks to a vpPoint . The vpFeatureVanishingPoint is initialized thanks to the parameters of the point in the image plane. All the parameters are given in meters.

Parameters:
s

Visual feature to initialize; either \({\bf s} = (x, y)\) or \({\bf s} = (1/\rho, \alpha)\) depending on the select parameter.

p

The vpPoint with updated \((x, y)\) coordinates in the image plane that are used to create the vpFeatureVanishingPoint .

select

Use either vpFeatureVanishingPoint::selectX() or vpFeatureVanishingPoint::selectY() to build the \({\bf s} = (x, y)\) visual feature, or rather vpFeatureVanishingPoint::selectOneOverRho() or vpFeatureVanishingPoint::selectAlpha() to build the \({\bf s} = (1/\rho, \alpha)\) visual feature.

  25. create(s: visp._visp.visual_features.FeatureVanishingPoint, l1: visp._visp.visual_features.FeatureLine, l2: visp._visp.visual_features.FeatureLine, select: int = (vpFeatureVanishingPoint::selectX()|vpFeatureVanishingPoint::selectY())) -> None

Initialize a vpFeatureVanishingPoint thanks to two vpFeatureLine . The vpFeatureVanishingPoint is initialized thanks to the coordinates of the intersection point in the image plane. All the parameters are given in meters.

Warning

An exception is thrown if the two lines are parallel when Cartesian coordinates \({\bf s} = (x, y)\) are used.

Parameters:
s

Visual feature to initialize; either \({\bf s} = (x, y)\) or \({\bf s} = (1/\rho, \alpha)\) depending on the select parameter.

l1

The first vpFeatureLine used to compute the vanishing point.

l2

The second vpFeatureLine used to compute the vanishing point.

select

Use either vpFeatureVanishingPoint::selectX() or vpFeatureVanishingPoint::selectY() to build the \({\bf s} = (x, y)\) visual feature, or rather vpFeatureVanishingPoint::selectOneOverRho() or vpFeatureVanishingPoint::selectAlpha() to build the \({\bf s} = (1/\rho, \alpha)\) visual feature.

  26. create(s: visp._visp.visual_features.FeatureVanishingPoint, l1: visp._visp.core.Line, l2: visp._visp.core.Line, select: int = (vpFeatureVanishingPoint::selectX()|vpFeatureVanishingPoint::selectY())) -> None

Initialize a vpFeatureVanishingPoint thanks to two vpLine . The vpFeatureVanishingPoint is initialized thanks to the coordinates of the intersection point in the image plane. All the parameters are given in meters.

Warning

An exception is thrown if the two lines are parallel when Cartesian coordinates \({\bf s} = (x, y)\) are used.

Parameters:
s

Visual feature to initialize; either \({\bf s} = (x, y)\) or \({\bf s} = (1/\rho, \alpha)\) depending on the select parameter.

l1

The first vpLine used to compute the vanishing point.

l2

The second vpLine used to compute the vanishing point.

select

Use either vpFeatureVanishingPoint::selectX() or vpFeatureVanishingPoint::selectY() to build the \({\bf s} = (x, y)\) visual feature, or rather vpFeatureVanishingPoint::selectOneOverRho() or vpFeatureVanishingPoint::selectAlpha() to build the \({\bf s} = (1/\rho, \alpha)\) visual feature.

  27. create(s: visp._visp.visual_features.FeatureVanishingPoint, cam: visp._visp.core.CameraParameters, line1_ip1: visp._visp.core.ImagePoint, line1_ip2: visp._visp.core.ImagePoint, line2_ip1: visp._visp.core.ImagePoint, line2_ip2: visp._visp.core.ImagePoint, select: int) -> None

Initialize a vpFeatureVanishingPoint thanks to two lines, each defined by two image points whose coordinates are given in pixels. The vpFeatureVanishingPoint is initialized thanks to the coordinates of the intersection point in the image plane. The camera parameters are used to convert the pixel coordinates to meters.

Parameters:
s

Visual feature to initialize; either \({\bf s} = (x, y)\) or \({\bf s} = (1/\rho, \alpha)\) depending on the select parameter.

cam

Camera parameters used to convert image point coordinates from pixels to meters in the image plane.

line1_ip1

First image point, with pixel coordinates, defining the first line.

line1_ip2

Second image point, with pixel coordinates, defining the first line.

line2_ip1

First image point, with pixel coordinates, defining the second line.

line2_ip2

Second image point, with pixel coordinates, defining the second line.

select

Use either vpFeatureVanishingPoint::selectX() or vpFeatureVanishingPoint::selectY() to build the \({\bf s} = (x, y)\) visual feature, or rather vpFeatureVanishingPoint::selectOneOverRho() or vpFeatureVanishingPoint::selectAlpha() to build the \({\bf s} = (1/\rho, \alpha)\) visual feature.
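
The intersection computation behind these vanishing-point overloads can be sketched with homogeneous coordinates: the line through two points is their cross product, and the intersection of two lines is again a cross product. This is plain projective geometry, not the ViSP API; a near-zero third coordinate corresponds to the parallel-lines exception mentioned above:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def vanishing_point(p1, p2, p3, p4, eps=1e-12):
    """Intersection (x, y) of the line (p1, p2) and the line (p3, p4),
    each point given as (x, y) image-plane coordinates."""
    l1 = cross((*p1, 1.0), (*p2, 1.0))  # homogeneous line through p1, p2
    l2 = cross((*p3, 1.0), (*p4, 1.0))  # homogeneous line through p3, p4
    xw, yw, w = cross(l1, l2)           # homogeneous intersection
    if abs(w) < eps:
        raise ValueError("lines are parallel: vanishing point at infinity")
    return xw / w, yw / w
```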