FeatureSegment

class FeatureSegment(self, normalized: bool = False)

Bases: BasicFeature

Class that defines 2D segment visual features. This class allows considering two sets of visual features:

  • the non normalized features \({\bf s} = (x_c, y_c, l, \alpha)\) where \((x_c,y_c)\) are the coordinates of the segment center, \(l\) the segment length and \(\alpha\) the orientation of the segment with respect to the \(x\) axis.

  • or the normalized features \({\bf s} = (x_n, y_n, l_n, \alpha)\) with \(x_n = x_c/l\) , \(y_n = y_c/l\) and \(l_n = 1/l\) .

The feature set is selected either at construction time, through the normalized constructor argument, or afterwards using setNormalized(bool).
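A minimal sketch of choosing between the two feature sets from Python (the public import path is an assumption based on the qualified name visp._visp.visual_features.FeatureSegment used in the signatures below):

from visp.visual_features import FeatureSegment  # assumed public import path

s = FeatureSegment()         # non normalized features (x_c, y_c, l, alpha), the default
s_n = FeatureSegment(True)   # normalized features (x_n, y_n, l_n, alpha)

# The feature set can also be changed after construction
s.setNormalized(True)
print(s.isNormalized(), s_n.isNormalized())  # True True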

Default constructor that builds an empty segment visual feature.

Parameters:
normalized: bool = False

If true, use normalized features \({\bf s} = (x_n, y_n, l_n, \alpha)\). If false, use non normalized features \({\bf s} = (x_c, y_c, l, \alpha)\).

Methods

__init__

Default constructor that builds an empty segment visual feature.

buildFrom

Build a segment visual feature from two points and their Z coordinates.

display

Overloaded function.

error

Overloaded function.

getAlpha

Get the value of \(\alpha\) which represents the orientation of the segment.

getL

Get the length of the segment.

getXc

Get the x coordinate of the segment center in the image plane.

getYc

Get the y coordinate of the segment center in the image plane.

getZ1

Get the value of \(Z_1\) which represents the Z coordinate in the camera frame of the 3D point that corresponds to the segment first point.

getZ2

Get the value of \(Z_2\) which represents the Z coordinate in the camera frame of the 3D point that corresponds to the segment second point.

init

Initialise the memory space required for the segment visual feature.

interaction

Compute and return the interaction matrix \(L\) associated to a subset of the possible features \({\bf s} = (x_c, y_c, l, \alpha)\) or \({\bf s} = (x_n, y_n, l_n, \alpha)\) .

isNormalized

Indicates if the normalized features are considered.

print

Print to stdout the values of the current visual feature \(s\) .

selectAlpha

Function used to select the \(\alpha\) subfeature.

selectL

Function used to select the \(l\) or \(l_n\) subfeature.

selectXc

Function used to select the \(x_c\) or \(x_n\) subfeature.

selectYc

Function used to select the \(y_c\) or \(y_n\) subfeature.

setAlpha

Set the value of \(\alpha\) which represents the orientation of the segment in the image plane.

setL

Set the value of the segment length in the image plane.

setNormalized

Set the kind of feature to consider.

setXc

Set the value of the x coordinate of the segment center in the image plane.

setYc

Set the value of the y coordinate of the segment center in the image plane.

setZ1

Set the value of \(Z_1\) which represents the Z coordinate in the camera frame of the 3D point that corresponds to the segment first point.

setZ2

Set the value of \(Z_2\) which represents the Z coordinate in the camera frame of the 3D point that corresponds to the segment second point.

Inherited Methods

user

BasicFeatureSelect

Selector used to indicate which visual features to consider.

setFlags

Set feature flags to true to prevent a warning when re-computing the interaction matrix without having updated the feature.

FEATURE_ALL

getDimension

Get the feature vector dimension.

getDeallocate

dimension_s

Return the dimension of the feature vector \(\bf s\) .

setDeallocate

selectAll

Select all the features.

get_s

Get the feature vector \(\bf s\) .

BasicFeatureDeallocatorType

Indicates who should deallocate the feature.

vpServo

Operators

__doc__

__init__

Default constructor that builds an empty segment visual feature.

__module__

Attributes

FEATURE_ALL

__annotations__

user

vpServo

class BasicFeatureDeallocatorType(self, value: int)

Bases: pybind11_object

Indicates who should deallocate the feature.

Values:

  • user

  • vpServo

__and__(self, other: object) object
__eq__(self, other: object) bool
__ge__(self, other: object) bool
__getstate__(self) int
__gt__(self, other: object) bool
__hash__(self) int
__index__(self) int
__init__(self, value: int)
__int__(self) int
__invert__(self) object
__le__(self, other: object) bool
__lt__(self, other: object) bool
__ne__(self, other: object) bool
__or__(self, other: object) object
__rand__(self, other: object) object
__ror__(self, other: object) object
__rxor__(self, other: object) object
__setstate__(self, state: int) None
__xor__(self, other: object) object
property name : str
class BasicFeatureSelect(self, value: int)

Bases: pybind11_object

Selector used to indicate which visual features to consider.

Values:

  • user

  • vpServo

__and__(self, other: object) object
__eq__(self, other: object) bool
__ge__(self, other: object) bool
__getstate__(self) int
__gt__(self, other: object) bool
__hash__(self) int
__index__(self) int
__init__(self, value: int)
__int__(self) int
__invert__(self) object
__le__(self, other: object) bool
__lt__(self, other: object) bool
__ne__(self, other: object) bool
__or__(self, other: object) object
__rand__(self, other: object) object
__ror__(self, other: object) object
__rxor__(self, other: object) object
__setstate__(self, state: int) None
__xor__(self, other: object) object
property name : str
__init__(self, normalized: bool = False)

Default constructor that builds an empty segment visual feature.

Parameters:
normalized: bool = False

If true, use normalized features \({\bf s} = (x_n, y_n, l_n, \alpha)\). If false, use non normalized features \({\bf s} = (x_c, y_c, l, \alpha)\).

buildFrom(self, x1: float, y1: float, Z1: float, x2: float, y2: float, Z2: float) None

Build a segment visual feature from two points and their Z coordinates.

Depending on the feature set that is considered, the features \({\bf s} = (x_c, y_c, l, \alpha)\) or \({\bf s} = (x_n, y_n, l_n, \alpha)\) are computed from the two points using the following formulae:

\[x_c = \frac{x_1 + x_2}{2} \]
\[y_c = \frac{y_1 + y_2}{2} \]
\[l = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2} \]
\[\alpha = \arctan\left(\frac{y_1 - y_2}{x_1 - x_2}\right) \]
Parameters:
x1: float

x coordinate of the first point in the image plane.

y1: float

y coordinate of the first point in the image plane.

Z1: float

Depth of the first point in the camera frame.

x2: float

x coordinate of the second point in the image plane.

y2: float

y coordinate of the second point in the image plane.

Z2: float

Depth of the second point in the camera frame.
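The snippet below is a small sanity check of these formulae against the getters documented further down, for the non normalized feature set (same assumed import path as in the class-level sketch):

import math
from visp.visual_features import FeatureSegment  # assumed public import path

x1, y1, Z1 = 0.3, 0.2, 1.0   # first point: image-plane coordinates and depth
x2, y2, Z2 = 0.1, 0.1, 1.0   # second point

s = FeatureSegment()          # non normalized feature set
s.buildFrom(x1, y1, Z1, x2, y2, Z2)

print(s.getXc(), (x1 + x2) / 2)                        # x_c = (x_1 + x_2) / 2
print(s.getYc(), (y1 + y2) / 2)                        # y_c = (y_1 + y_2) / 2
print(s.getL(), math.hypot(x1 - x2, y1 - y2))          # l = sqrt((x_1 - x_2)^2 + (y_1 - y_2)^2)
print(s.getAlpha(), math.atan((y1 - y2) / (x1 - x2)))  # alpha from the arctan formula (x_1 > x_2 here)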

dimension_s(self) int

Return the dimension of the feature vector \(\bf s\) .

display(*args, **kwargs)

Overloaded function.

  1. display(self: visp._visp.visual_features.FeatureSegment, cam: visp._visp.core.CameraParameters, I: visp._visp.core.ImageGray, color: visp._visp.core.Color = vpColor::green, thickness: int = 1) -> None

Displays a segment representing the feature on a grayscale image. The two limiting points are displayed in cyan and yellow.

Parameters:
cam

Camera parameters.

I

Image.

color

Color to use for the segment.

thickness

Thickness of the feature representation.

  2. display(self: visp._visp.visual_features.FeatureSegment, cam: visp._visp.core.CameraParameters, I: visp._visp.core.ImageRGBa, color: visp._visp.core.Color = vpColor::green, thickness: int = 1) -> None

Displays a segment representing the feature on a RGBa image. The two limiting points are displayed in cyan and yellow.

Parameters:
cam

Camera parameters.

I

Image.

color

Color to use for the segment.

thickness

Thickness of the feature representation.

error(*args, **kwargs)

Overloaded function.

  1. error(self: visp._visp.visual_features.FeatureSegment, s_star: visp._visp.visual_features.BasicFeature, select: int = FEATURE_ALL) -> visp._visp.core.ColVector

Computes the error between the current and the desired visual features from a subset of the possible features \({\bf s} = (x_c, y_c, l, \alpha)\) or \({\bf s} = (x_n, y_n, l_n, \alpha)\) .

For the angular component \(\alpha\) , we define the error as \(\alpha \ominus \alpha^*\) , where \(\ominus\) is modulo \(2\pi\) subtraction.

Parameters:
s_star

Desired 2D segment feature.

select

The error can be computed for a selection of a subset of the possible segment features.

  • To compute the error for all four parameters, use vpBasicFeature::FEATURE_ALL. In that case the error vector is a 4-dimensional column vector.

  • To compute the error for only one subfeature of the \({\bf s} = (x_c, y_c, l, \alpha)\) or \({\bf s} = (x_n, y_n, l_n, \alpha)\) feature set, use one of the following functions: selectXc(), selectYc(), selectL(), selectAlpha().

Returns:

The error between the current and the desired visual feature.

  2. error(self: visp._visp.visual_features.BasicFeature, s_star: visp._visp.visual_features.BasicFeature, select: int = FEATURE_ALL) -> visp._visp.core.ColVector

Compute the error between two visual features from a subset of the possible features.
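As a sketch, computing the error from Python for the full feature set and for the \((l, \alpha)\) subset (same assumed import path as in the earlier examples):

from visp.visual_features import FeatureSegment  # assumed public import path

s = FeatureSegment()       # current visual feature
s_star = FeatureSegment()  # desired visual feature
s.buildFrom(0.3, 0.2, 1.0, 0.1, 0.1, 1.0)
s_star.buildFrom(0.2, 0.1, 1.0, 0.0, 0.0, 1.0)

e_all = s.error(s_star)    # FEATURE_ALL by default: 4-dimensional column vector
e_la = s.error(s_star, FeatureSegment.selectL() | FeatureSegment.selectAlpha())  # 2-dimensional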

getAlpha(self) float

Get the value of \(\alpha\) which represents the orientation of the segment.

Returns:

The value of \(\alpha\) .

getDeallocate(self) visp._visp.visual_features.BasicFeature.BasicFeatureDeallocatorType
getDimension(self, select: int = FEATURE_ALL) int

Get the feature vector dimension.

getL(self) float

Get the length of the segment.

Returns:

If normalized features are used, return \(l_n = 1 / l\) . Otherwise return \(l\) .

getXc(self) float

Get the x coordinate of the segment center in the image plane.

Returns:

If normalized features are used, return \(x_n = x_c / l\) . Otherwise return \(x_c\) .

getYc(self) float

Get the y coordinate of the segment center in the image plane.

Returns:

If normalized features are used, return \(y_n = y_c / l\) . Otherwise return \(y_c\) .

getZ1(self) float

Get the value of \(Z_1\) which represents the Z coordinate in the camera frame of the 3D point that corresponds to the segment first point.

Returns:

The value of the depth \(Z_1\) .

getZ2(self) float

Get the value of \(Z_2\) which represents the Z coordinate in the camera frame of the 3D point that corresponds to the segment second point.

Returns:

The value of the depth \(Z_2\) .

get_s(self, select: int = FEATURE_ALL) visp._visp.core.ColVector

Get the feature vector \(\bf s\) .

init(self) None

Initialise the memory space required for the segment visual feature.

interaction(self, select: int = FEATURE_ALL) visp._visp.core.Matrix

Compute and return the interaction matrix \(L\) associated to a subset of the possible features \({\bf s} = (x_c, y_c, l, \alpha)\) or \({\bf s} = (x_n, y_n, l_n, \alpha)\) .

The interaction matrix of the non normalized feature set is of the following form:

\[\begin{split}{\bf L} = \left[ \begin{array}{c} L_{x_c} \\L_{y_c} \\L_{l} \\L_{\alpha} \end{array} \right] = \left[ \begin{array}{cccccc} -\lambda_2 & 0 & \lambda_2 x_c - \lambda_1 l \frac{\cos \alpha}{4} & x_c y_c + l^2 \frac{\cos \alpha \sin \alpha}{4} & -(1 + {x_{c}}^{2} + l^2 \frac{\cos^2\alpha}{4}) & y_c \\0 & -\lambda_2 & \lambda_2 y_c - \lambda_1 l \frac{\sin \alpha}{4} & 1 + {y_{c}}^{2} + l^2 \frac{\sin^2 \alpha}{4} & -x_c y_c-l^2 \frac{\cos \alpha \sin \alpha}{4} & -x_c \\\lambda_1 \cos \alpha & \lambda_1 \sin \alpha & \lambda_2 l - \lambda_1 (x_c \cos \alpha + y_c \sin \alpha) & l (x_c \cos \alpha \sin \alpha + y_c (1 + \sin^2 \alpha)) & -l (x_c (1 + \cos^2 \alpha)+y_c \cos \alpha \sin \alpha) & 0 \\-\lambda_1 \frac{\sin \alpha}{l} & \lambda_1 \frac{\cos \alpha}{l} & \lambda_1 \frac{x_c \sin \alpha - y_c \cos \alpha}{l} & -x_c \sin^2 \alpha + y_c \cos \alpha \sin \alpha & x_c \cos \alpha \sin \alpha - y_c \cos^2 \alpha & -1 \end{array} \right] \end{split}\]

with \(\lambda_1 = \frac{Z_1 - Z_2}{Z_1 Z_2}\) and \(\lambda_2 = \frac{Z_1 + Z_2}{2 Z_1 Z_2}\) where \(Z_i\) are the depths of the points.
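For instance, when both points lie at the same depth \(Z_1 = Z_2 = Z\), these coefficients reduce to \(\lambda_1 = 0\) and \(\lambda_2 = 1/Z\).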

The code below shows how to compute the interaction matrix associated to the visual feature \({\bf s} = (x_c, y_c, l, \alpha)\) .

#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpMatrix.h>
#include <visp3/core/vpPoint.h>
#include <visp3/visual_features/vpFeatureSegment.h>

int main()
{
  // Define two 3D points in the object frame
  vpPoint p1(.1, .1, 0.), p2(.3, .2, 0.);

  // Define the camera pose wrt the object
  vpHomogeneousMatrix cMo (0, 0, 1, 0, 0, 0); // Z=1 meter
  // Compute the coordinates of the points in the camera frame
  p1.changeFrame(cMo);
  p2.changeFrame(cMo);
  // Compute the coordinates of the points in the image plane by perspective projection
  p1.project(); p2.project();

  // Build the segment visual feature
  vpFeatureSegment s;
  s.buildFrom(p1.get_x(), p1.get_y(), p1.get_Z(), p2.get_x(), p2.get_y(), p2.get_Z());

  // Compute the interaction matrix
  vpMatrix L = s.interaction( vpBasicFeature::FEATURE_ALL );
}

In this case, L is a 4 by 6 matrix.

It is also possible to build the interaction matrix associated with one of the possible features. The code below shows how to modify the previous code to consider \(s = (l, \alpha)\) as the visual feature.

vpMatrix L = s.interaction( vpFeatureSegment::selectL() | vpFeatureSegment::selectAlpha() );

In that case, L is a 2 by 6 matrix.
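From Python, a sketch of the same computation (the import path is an assumption, as in the earlier examples; the image-plane coordinates and depths correspond to the projected points of the C++ example above):

from visp.visual_features import FeatureSegment  # assumed public import path

s = FeatureSegment()
s.buildFrom(0.1, 0.1, 1.0, 0.3, 0.2, 1.0)  # (x, y, Z) of the two projected points

L_all = s.interaction()                    # FEATURE_ALL by default: a 4 by 6 matrix
L_la = s.interaction(FeatureSegment.selectL() | FeatureSegment.selectAlpha())  # a 2 by 6 matrix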

Parameters:
select: int = FEATURE_ALL

Selection of a subset of the possible segment features.

  • To compute the interaction matrix for all four features \((x_c, y_c, l, \alpha)\) or \((x_n, y_n, l_n, \alpha)\), use vpBasicFeature::FEATURE_ALL. In that case the dimension of the interaction matrix is \([4 \times 6]\).

  • To compute the interaction matrix for only one of these features, use one of the following functions: selectXc(), selectYc(), selectL(), selectAlpha(). In that case, the returned interaction matrix is of dimension \([1 \times 6]\).

Returns:

The interaction matrix computed from the segment features.

isNormalized(self) bool

Indicates if the normalized features are considered.

print(self, select: int = FEATURE_ALL) None

Print to stdout the values of the current visual feature \(s\) .

s.print();

produces the following output:

vpFeatureSegment: (xc = -0.255634; yc = -0.13311; l = 0.105005; alpha = 92.1305 deg)

while

s.print( vpFeatureSegment::selectL() | vpFeatureSegment::selectAlpha() );

produces the following output:

vpFeatureSegment: (l = 0.105005; alpha = 92.1305 deg)
Parameters:
select: int = FEATURE_ALL

Selection of a subset of the possible segment features ( \(x_c\) , \(y_c\) , \(l\) , \(\alpha\) ).
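The equivalent calls from Python (s being a FeatureSegment built beforehand; import path assumed as in the earlier sketches):

s.print()                                                         # all four parameters
s.print(FeatureSegment.selectL() | FeatureSegment.selectAlpha())  # only l and alpha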

static selectAll() int

Select all the features.

static selectAlpha() int

Function used to select the \(\alpha\) subfeature.

This function is to be used in conjunction with interaction() in order to compute the interaction matrix associated with the \(\alpha\) feature.

See the interaction() method for a usage example.

This function is also useful in the vpServo class to indicate that only a subset of the visual feature is used in the control law:

vpFeatureSegment s, s_star; // Current and desired visual feature
vpServo task;
...
// Add only the alpha subset feature from a segment to the task
task.addFeature(s, s_star, vpFeatureSegment::selectAlpha());

Note

See selectXc() , selectYc() , selectL()
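A Python counterpart of this snippet might look as follows; it assumes the servo task class is exposed as visp.vs.Servo with an addFeature() method mirroring vpServo::addFeature, which is not documented on this page:

from visp.visual_features import FeatureSegment  # assumed public import path
from visp.vs import Servo                        # assumed location of the servo task class

s, s_star = FeatureSegment(), FeatureSegment()   # current and desired visual feature
task = Servo()
# ... build s and s_star ...
# Add only the alpha subset feature from the segment to the task
task.addFeature(s, s_star, FeatureSegment.selectAlpha())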

static selectL() int

Function used to select the \(l\) or \(l_n\) subfeature.

This function is to be used in conjunction with interaction() in order to compute the interaction matrix associated with the \(l\) or \(l_n\) feature.

See the interaction() method for a usage example.

This function is also useful in the vpServo class to indicate that only a subset of the visual feature is used in the control law:

vpFeatureSegment s, s_star; // Current and desired visual feature
vpServo task;
...
// Add only the l subset feature from a segment to the task
task.addFeature(s, s_star, vpFeatureSegment::selectL());

Note

See selectXc() , selectYc() , selectAlpha()

static selectXc() int

Function used to select the \(x_c\) or \(x_n\) subfeature.

This function is to be used in conjunction with interaction() in order to compute the interaction matrix associated with the \(x_c\) or \(x_n\) feature.

See the interaction() method for a usage example.

This function is also useful in the vpServo class to indicate that only a subset of the visual feature is used in the control law:

vpFeatureSegment s, s_star; // Current and desired visual feature
vpServo task;
...
// Add only the xc subset feature from a segment to the task
task.addFeature(s, s_star, vpFeatureSegment::selectXc());

Note

See selectYc() , selectL() , selectAlpha()

static selectYc() int

Function used to select the \(y_c\) or \(y_n\) subfeature.

This function is to be used in conjunction with interaction() in order to compute the interaction matrix associated with the \(y_c\) or \(y_n\) feature.

See the interaction() method for a usage example.

This function is also useful in the vpServo class to indicate that only a subset of the visual feature is used in the control law:

vpFeatureSegment s, s_star; // Current and desired visual feature
vpServo task;
...
// Add only the yc subset feature from a segment to the task
task.addFeature(s, s_star, vpFeatureSegment::selectYc());

Note

See selectXc() , selectL() , selectAlpha()

setAlpha(self, val: float) None

Set the value of \(\alpha\) which represents the orientation of the segment in the image plane. It is one parameter of the visual feature \(s\) .

Parameters:
val: float

\(\alpha\) value to set.

setDeallocate(self, d: visp._visp.visual_features.BasicFeature.BasicFeatureDeallocatorType) None
setFlags(self) None

Set feature flags to true to prevent a warning when re-computing the interaction matrix without having updated the feature.

setL(self, val: float) None

Set the value of the segment length in the image plane. It is one parameter of the visual feature \(s\) .

Parameters:
val: float

Value to set, that is either equal to \(l_n= 1/l\) when normalized features are considered, or equal to \(l\) otherwise.

setNormalized(self, normalized: bool) None

Set the kind of feature to consider.

Parameters:
normalized: bool

If true, use normalized features \({\bf s} = (x_n, y_n, l_n, \alpha)\). If false, use non normalized features \({\bf s} = (x_c, y_c, l, \alpha)\).

setXc(self, val: float) None

Set the value of the x coordinate of the segment center in the image plane. It is one parameter of the visual feature \(s\) .

Parameters:
val: float

Value to set, that is either equal to \(x_n = x_c/l\) when normalized features are considered, or equal to \(x_c\) otherwise.

setYc(self, val: float) None

Set the value of the y coordinate of the segment center in the image plane. It is one parameter of the visual feature \(s\) .

Parameters:
val: float

Value to set, that is either equal to \(y_n = y_c/l\) when normalized features are considered, or equal to \(y_c\) otherwise.

setZ1(self, val: float) None

Set the value of \(Z_1\) which represents the Z coordinate in the camera frame of the 3D point that corresponds to the segment first point.

This value is requested to compute the interaction matrix.

Parameters:
val: float

\(Z_1\) value to set.

setZ2(self, val: float) None

Set the value of \(Z_2\) which represents the Z coordinate in the camera frame of the 3D point that corresponds to the segment second point.

This value is requested to compute the interaction matrix.

Parameters:
val: float

\(Z_2\) value to set.