KeyPoint

class KeyPoint(*args, **kwargs)

Bases: BasicKeyPoint

Class that allows keypoint detection (and descriptor extraction) and matching thanks to the OpenCV library. To enable this class, OpenCV should be installed. Installation instructions are provided here: https://visp.inria.fr/3rd_opencv .

This class makes it easy to use different types of detectors, extractors and matchers. The classical SIFT and SURF keypoints can be used, as well as ORB, FAST, etc. keypoints, depending on the version of OpenCV you use.

Note

Due to some patents, SIFT and SURF are packaged in an external module, called the nonfree module in OpenCV versions before 3.0.0 and xfeatures2d from 3.0.0 onward. You have to check that you have the corresponding module to use SIFT and SURF.

The goal of this class is to provide a tool to match reference keypoints from a reference image (or train keypoints in OpenCV terminology) and detected keypoints from a current image (or query keypoints in OpenCV terminology).

If you supply the 3D coordinates corresponding to the 2D coordinates of the reference keypoints, you can also estimate the pose of the object by matching a set of detected keypoints in the current image with the reference keypoints. A sketch of this pose estimation use case is given after the examples below.

If you use this class, the first thing you have to do is to build the reference keypoints by detecting keypoints in a reference image which contains the object to detect. Then you match keypoints detected in a current image with those detected in the reference image by calling the matchPoint() methods. You can access the lists of matched points thanks to the methods getMatchedPointsInReferenceImage() and getMatchedPointsInCurrentImage(). These two methods return a list of matched points: the nth element of the first list is matched with the nth element of the second list. For easy compatibility with OpenCV terminology, getTrainKeyPoints() gives you access to the list of keypoints detected in the train images (or reference images) and getQueryKeyPoints() gives you access to the list of keypoints detected in a query image (or current image). The method getMatches() gives you access to a list of cv::DMatch with the correspondence between the index of the train keypoints and the index of the query keypoints.

The following small example shows how to use the class to do the matching between current and reference keypoints.

#include <visp3/core/vpImage.h>
#include <visp3/vision/vpKeyPoint.h>

int main()
{
#if (VISP_HAVE_OPENCV_VERSION >= 0x020300)
  vpImage<unsigned char> Ireference;
  vpImage<unsigned char> Icurrent;

  vpKeyPoint::vpFilterMatchingType filterType = vpKeyPoint::ratioDistanceThreshold;
  vpKeyPoint keypoint("ORB", "ORB", "BruteForce-Hamming", filterType);

  // First grab the reference image Ireference
  // Add your code to load the reference image in Ireference

  // Build the reference ORB points.
  keypoint.buildReference(Ireference);

  // Then grab another image which represents the current image Icurrent

  // Match points between the reference points and the ORB points computed in the current image.
  keypoint.matchPoint(Icurrent);

  // Display the matched points
  keypoint.display(Ireference, Icurrent);
#endif

  return 0;
}
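
To access the matched point pairs once matchPoint() has been called, you can use the inherited accessors documented below. The following lines are a minimal sketch that could be appended to the example above:

  // Iterate over the matched point pairs
  unsigned int nbMatched = keypoint.getMatchedPointNumber();
  for (unsigned int i = 0; i < nbMatched; i++) {
    vpImagePoint referencePoint, currentPoint;
    // Copy the coordinates of the i-th matched reference/current pair
    keypoint.getMatchedPoints(i, referencePoint, currentPoint);
  }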

It is also possible to build the reference keypoints in a region of interest (ROI) of an image and find keypoints to match in only a part of the current image. The following small example shows how to do this:

#include <visp3/core/vpDisplay.h>
#include <visp3/core/vpImage.h>
#include <visp3/vision/vpKeyPoint.h>

int main()
{
#if (VISP_HAVE_OPENCV_VERSION >= 0x020300)
  vpImage<unsigned char> Ireference;
  vpImage<unsigned char> Icurrent;

  vpKeyPoint::vpFilterMatchingType filterType = vpKeyPoint::ratioDistanceThreshold;
  vpKeyPoint keypoint("ORB", "ORB", "BruteForce-Hamming", filterType);

  //First grab the reference image Ireference
  //Add your code to load the reference image in Ireference

  //Select a part of the image by clicking on two points which define a rectangle
  vpImagePoint corners[2];
  for (int i=0 ; i < 2 ; i++) {
    vpDisplay::getClick(Ireference, corners[i]);
  }

  //Build the reference ORB points.
  int nbrRef;
  unsigned int height, width;
  height = (unsigned int)(corners[1].get_i() - corners[0].get_i());
  width = (unsigned int)(corners[1].get_j() - corners[0].get_j());
  nbrRef = keypoint.buildReference(Ireference, corners[0], height, width);

  //Then grab another image which represents the current image Icurrent

  //Select a part of the image by clicking on two points which define a rectangle
  for (int i=0 ; i < 2 ; i++) {
    vpDisplay::getClick(Icurrent, corners[i]);
  }

  //Match points between the reference points and the ORB points computed in the current image.
  int nbrMatched;
  height = (unsigned int)(corners[1].get_i() - corners[0].get_i());
  width = (unsigned int)(corners[1].get_j() - corners[0].get_j());
  nbrMatched = keypoint.matchPoint(Icurrent, corners[0], height, width);

  //Display the matched points
  keypoint.display(Ireference, Icurrent);
#endif

  return 0;
}
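
As mentioned earlier, when the 3D coordinates corresponding to the reference keypoints are known, the pose of the object can also be estimated. The following minimal sketch illustrates the idea; it assumes a calibrated camera and relies on the matchPoint() overload of the underlying C++ vpKeyPoint class that takes camera parameters and outputs the estimated pose (see also tutorial-detection-object):

  vpCameraParameters cam;   // camera parameters, assumed calibrated beforehand
  vpHomogeneousMatrix cMo;  // estimated pose of the object in the camera frame
  double error, elapsedTime;
  // keypoint.buildReference(...) must have been called with known 3D coordinates
  if (keypoint.matchPoint(Icurrent, cam, cMo, error, elapsedTime)) {
    // cMo now holds the pose computed from the 2D/3D correspondences
  }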

This class is also described in tutorial-matching.

Overloaded function.

  1. __init__(self: visp._visp.vision.KeyPoint, detectorType: visp._visp.vision.KeyPoint.FeatureDetectorType, descriptorType: visp._visp.vision.KeyPoint.FeatureDescriptorType, matcherName: str, filterType: visp._visp.vision.KeyPoint.FilterMatchingType = ratioDistanceThreshold) -> None

  2. __init__(self: visp._visp.vision.KeyPoint, detectorName: str = ORB, extractorName: str = ORB, matcherName: str = BruteForce-Hamming, filterType: visp._visp.vision.KeyPoint.FilterMatchingType = ratioDistanceThreshold) -> None

  3. __init__(self: visp._visp.vision.KeyPoint, detectorNames: list[str], extractorNames: list[str], matcherName: str = BruteForce, filterType: visp._visp.vision.KeyPoint.FilterMatchingType = ratioDistanceThreshold) -> None

Methods

__init__

Overloaded function.

buildReference

Overloaded function.

compute3D

Overloaded function.

createImageMatching

Overloaded function.

detect

Overloaded function.

display

Overloaded function.

displayMatching

Overloaded function.

getCovarianceMatrix

Get the covariance matrix when estimating the pose using the Virtual Visual Servoing approach.

getDetectionTime

Get the elapsed time to compute the keypoint detection.

getDetector

Overloaded function.

getDetectorNames

Get the feature detector name associated to the type.

getExtractionTime

Get the elapsed time to compute the keypoint extraction.

getExtractor

Overloaded function.

getExtractorNames

Get the feature descriptor extractor name associated to the type.

getImageFormat

Get the image format to use when saving training images.

getMatchQueryToTrainKeyPoints

Get the list of pairs with the correspondence between the matched query and train keypoints.

getMatcher

Get the matcher pointer.

getMatches

Get the list of matches (correspondences between the indexes of the detected keypoints and the train keypoints).

getMatchingTime

Get the elapsed time to compute the matching.

getNbImages

Get the number of train images.

getObjectPoints

Overloaded function.

getPoseTime

Get the elapsed time to compute the pose.

getQueryDescriptors

Get the descriptors matrix for the query keypoints.

getQueryKeyPoints

Overloaded function.

getRansacInliers

Get the list of Ransac inliers.

getRansacOutliers

Get the list of Ransac outliers.

getTrainDescriptors

Get the train descriptors matrix.

getTrainKeyPoints

Overloaded function.

getTrainPoints

Overloaded function.

initMatcher

Initialize a matcher based on its name.

insertImageMatching

Overloaded function.

loadConfigFile

Load configuration parameters from an XML config file.

loadLearningData

Load learning data saved on disk.

match

Match keypoints based on distance between their descriptors.

matchPoint

Overloaded function.

reset

Reset the instance as if a new vpKeyPoint variable were declared.

saveLearningData

Save the learning data in a file in XML or binary mode.

setCovarianceComputation

Set if the covariance matrix has to be computed in the Virtual Visual Servoing approach.

setDetectionMethod

setDetector

Overloaded function.

setDetectors

Set and initialize a list of detectors designated by their names detectorNames.

setExtractor

Overloaded function.

setExtractors

Set and initialize a list of extractors designated by their names extractorNames.

setFilterMatchingType

setImageFormat

setMatcher

Set and initialize a matcher designated by its name matcherName.

setMatchingFactorThreshold

Set the factor value for the filtering method: constantFactorDistanceThreshold.

setMatchingRatioThreshold

Set the ratio value for the filtering method: ratioDistanceThreshold.

setMaxFeatures

Set maximum number of keypoints to extract.

setRansacConsensusPercentage

Set the percentage value for defining the cardinality of the consensus group.

setRansacFilterFlag

Set filter flag for RANSAC pose estimation.

setRansacIteration

Set the maximum number of iterations for the Ransac pose estimation method.

setRansacMinInlierCount

Set the minimum number of inliers for the Ransac pose estimation method.

setRansacParallel

Use or not the multithreaded version of the Ransac pose estimation.

setRansacParallelNbThreads

Set the number of threads to use for the multithreaded RANSAC pose estimation.

setRansacReprojectionError

Set the maximum reprojection error (in pixels) to determine if a point is an inlier or not.

setRansacThreshold

Set the maximum error (in meters) to determine if a point is an inlier or not.

setUseAffineDetection

Set if multiple affine transformations must be used to detect and extract keypoints.

setUseMatchTrainToQuery

Set if we want to match the train keypoints to the query keypoints.

setUseRansacConsensusPercentage

Set the flag to choose between a percentage value of inliers for the cardinality of the consensus group or a minimum number.

setUseRansacVVS

Set the flag to choose between the OpenCV or ViSP Ransac pose estimation function.

setUseSingleMatchFilter

Set the flag to filter matches where multiple query keypoints are matched to the same train keypoint.

Inherited Methods

getReferenceImagePointsList

Return the vector of reference image points.

getIndexInAllReferencePointList

Get the nth matched reference point index in the complete list of reference points.

getMatchedReferencePoints

Return the indexes of the matched reference points associated with the current image points.

getCurrentImagePointsList

Return the vector of current image points.

getReferencePointNumber

Get the number of reference points.

getMatchedPointNumber

Get the number of matched points.

referenceBuilt

Indicate whether the reference has been built or not.

getReferencePoint

Get the nth reference point.

getMatchedPoints

Get the nth pair of reference and current points which have been matched.

Operators

__doc__

__init__

Overloaded function.

__module__

Attributes

DESCRIPTOR_AKAZE

DESCRIPTOR_BRISK

DESCRIPTOR_KAZE

DESCRIPTOR_ORB

DESCRIPTOR_SIFT

DESCRIPTOR_TYPE_SIZE

DETECTOR_AGAST

DETECTOR_AKAZE

DETECTOR_BRISK

DETECTOR_FAST

DETECTOR_GFTT

DETECTOR_KAZE

DETECTOR_MSER

DETECTOR_ORB

DETECTOR_SIFT

DETECTOR_SimpleBlob

DETECTOR_TYPE_SIZE

__annotations__

constantFactorDistanceThreshold

detectionScore

detectionThreshold

jpgImageFormat

noFilterMatching

pgmImageFormat

pngImageFormat

ppmImageFormat

ratioDistanceThreshold

stdAndRatioDistanceThreshold

stdDistanceThreshold

class DetectionMethodType(self, value: int)

Bases: pybind11_object

Predefined constant for the detection method type.

Values:

  • detectionThreshold: Detection based on a threshold.

  • detectionScore: Detection based on a score.

__and__(self, other: object) object
__eq__(self, other: object) bool
__ge__(self, other: object) bool
__getstate__(self) int
__gt__(self, other: object) bool
__hash__(self) int
__index__(self) int
__init__(self, value: int)
__int__(self) int
__invert__(self) object
__le__(self, other: object) bool
__lt__(self, other: object) bool
__ne__(self, other: object) bool
__or__(self, other: object) object
__rand__(self, other: object) object
__ror__(self, other: object) object
__rxor__(self, other: object) object
__setstate__(self, state: int) None
__xor__(self, other: object) object
property name : str
class FeatureDescriptorType(self, value: int)

Bases: pybind11_object

Predefined constant for descriptor extraction type.

Values:

  • DESCRIPTOR_ORB: ORB descriptor.

  • DESCRIPTOR_BRISK: BRISK descriptor.

  • DESCRIPTOR_SIFT: SIFT descriptor.

  • DESCRIPTOR_SURF: SURF descriptor.

  • DESCRIPTOR_KAZE: KAZE descriptor.

  • DESCRIPTOR_AKAZE: AKAZE descriptor.

  • DESCRIPTOR_TYPE_SIZE: Number of descriptors available.

__and__(self, other: object) object
__eq__(self, other: object) bool
__ge__(self, other: object) bool
__getstate__(self) int
__gt__(self, other: object) bool
__hash__(self) int
__index__(self) int
__init__(self, value: int)
__int__(self) int
__invert__(self) object
__le__(self, other: object) bool
__lt__(self, other: object) bool
__ne__(self, other: object) bool
__or__(self, other: object) object
__rand__(self, other: object) object
__ror__(self, other: object) object
__rxor__(self, other: object) object
__setstate__(self, state: int) None
__xor__(self, other: object) object
property name : str
class FeatureDetectorType(self, value: int)

Bases: pybind11_object

Predefined constant for the feature detector type.

Values:

  • DETECTOR_AGAST: AGAST detector.

  • DETECTOR_AKAZE: AKAZE detector.

  • DETECTOR_BRISK: BRISK detector.

  • DETECTOR_FAST: FAST detector.

  • DETECTOR_GFTT: GFTT detector.

  • DETECTOR_KAZE: KAZE detector.

  • DETECTOR_MSER: MSER detector.

  • DETECTOR_ORB: ORB detector.

  • DETECTOR_SIFT: SIFT detector.

  • DETECTOR_SimpleBlob: SimpleBlob detector.

  • DETECTOR_TYPE_SIZE: Number of detectors available.

__and__(self, other: object) object
__eq__(self, other: object) bool
__ge__(self, other: object) bool
__getstate__(self) int
__gt__(self, other: object) bool
__hash__(self) int
__index__(self) int
__init__(self, value: int)
__int__(self) int
__invert__(self) object
__le__(self, other: object) bool
__lt__(self, other: object) bool
__ne__(self, other: object) bool
__or__(self, other: object) object
__rand__(self, other: object) object
__ror__(self, other: object) object
__rxor__(self, other: object) object
__setstate__(self, state: int) None
__xor__(self, other: object) object
property name : str
class FilterMatchingType(self, value: int)

Bases: pybind11_object

Predefined constant for the matching filter type.

Values:

  • constantFactorDistanceThreshold: Keep matches whose descriptor distance is below the minimal distance multiplied by a constant factor.

  • stdDistanceThreshold: Keep matches whose descriptor distance is below the minimal distance plus the standard deviation.

  • ratioDistanceThreshold: Keep matches that are discriminated enough (the distance ratio between the two nearest neighbors is below a threshold).

  • stdAndRatioDistanceThreshold: Keep matches that satisfy both the standard deviation and the ratio conditions.

  • noFilterMatching: No filtering.

__and__(self, other: object) object
__eq__(self, other: object) bool
__ge__(self, other: object) bool
__getstate__(self) int
__gt__(self, other: object) bool
__hash__(self) int
__index__(self) int
__init__(self, value: int)
__int__(self) int
__invert__(self) object
__le__(self, other: object) bool
__lt__(self, other: object) bool
__ne__(self, other: object) bool
__or__(self, other: object) object
__rand__(self, other: object) object
__ror__(self, other: object) object
__rxor__(self, other: object) object
__setstate__(self, state: int) None
__xor__(self, other: object) object
property name : str
class ImageFormatType(self, value: int)

Bases: pybind11_object

Predefined constant for the image format to use when saving the training images.

Values:

  • jpgImageFormat: Save images in JPG format.

  • pngImageFormat: Save images in PNG format.

  • ppmImageFormat: Save images in PPM format.

  • pgmImageFormat: Save images in PGM format.

__and__(self, other: object) object
__eq__(self, other: object) bool
__ge__(self, other: object) bool
__getstate__(self) int
__gt__(self, other: object) bool
__hash__(self) int
__index__(self) int
__init__(self, value: int)
__int__(self) int
__invert__(self) object
__le__(self, other: object) bool
__lt__(self, other: object) bool
__ne__(self, other: object) bool
__or__(self, other: object) object
__rand__(self, other: object) object
__ror__(self, other: object) object
__rxor__(self, other: object) object
__setstate__(self, state: int) None
__xor__(self, other: object) object
property name : str
__init__(*args, **kwargs)

Overloaded function.

  1. __init__(self: visp._visp.vision.KeyPoint, detectorType: visp._visp.vision.KeyPoint.FeatureDetectorType, descriptorType: visp._visp.vision.KeyPoint.FeatureDescriptorType, matcherName: str, filterType: visp._visp.vision.KeyPoint.FilterMatchingType = ratioDistanceThreshold) -> None

  2. __init__(self: visp._visp.vision.KeyPoint, detectorName: str = ORB, extractorName: str = ORB, matcherName: str = BruteForce-Hamming, filterType: visp._visp.vision.KeyPoint.FilterMatchingType = ratioDistanceThreshold) -> None

  3. __init__(self: visp._visp.vision.KeyPoint, detectorNames: list[str], extractorNames: list[str], matcherName: str = BruteForce, filterType: visp._visp.vision.KeyPoint.FilterMatchingType = ratioDistanceThreshold) -> None

buildReference(*args, **kwargs)

Overloaded function.

  1. buildReference(self: visp._visp.vision.KeyPoint, I: visp._visp.core.ImageGray) -> int

Build the reference keypoints list.

Parameters:
I

Input reference image.

Returns:

The number of detected keypoints in the image I.

  2. buildReference(self: visp._visp.vision.KeyPoint, I: visp._visp.core.ImageGray, iP: visp._visp.core.ImagePoint, height: int, width: int) -> int

Build the reference keypoints list in a region of interest in the image.

Parameters:
I

Input reference image.

iP

Position of the top-left corner of the region of interest.

height

Height of the region of interest.

width

Width of the region of interest.

Returns:

The number of detected keypoints in the current image I.

  3. buildReference(self: visp._visp.vision.KeyPoint, I: visp._visp.core.ImageGray, rectangle: visp._visp.core.Rect) -> int

Build the reference keypoints list in a region of interest in the image.

Parameters:
I

Input image.

rectangle

Rectangle of the region of interest.

Returns:

The number of detected keypoints in the current image I.

  4. buildReference(self: visp._visp.vision.KeyPoint, I: visp._visp.core.ImageGray, trainKeyPoints: list[cv::KeyPoint], points3f: list[cv::Point3_<float>], append: bool = false, class_id: int = -1) -> tuple[int, list[cv::KeyPoint], list[cv::Point3_<float>]]

Build the reference keypoints list and compute the 3D positions corresponding to the keypoint locations.

Parameters:
I

Input image.

trainKeyPoints

List of the train keypoints.

points3f

Output list of the 3D positions corresponding to the keypoint locations.

append

If true, append the supplied train keypoints to those already present.

class_id

The class id to be set to the input cv::KeyPoint if != -1.

Returns:

A tuple containing:

  • The number of detected keypoints in the current image I.

  • trainKeyPoints: List of the train keypoints.

  • points3f: Output list of the 3D positions corresponding to the keypoint locations.

  5. buildReference(self: visp._visp.vision.KeyPoint, I: visp._visp.core.ImageGray, trainKeyPoints: list[cv::KeyPoint], trainDescriptors: cv::Mat, points3f: list[cv::Point3_<float>], append: bool = false, class_id: int = -1) -> int

Build the reference keypoints list and compute the 3D positions corresponding to the keypoint locations.

Parameters:
I

Input image.

trainKeyPoints

List of the train keypoints.

trainDescriptors

List of the train descriptors.

points3f

List of the 3D positions corresponding to the keypoint locations.

append

If true, append the supplied train keypoints to those already present.

class_id

The class id to be set to the input cv::KeyPoint if != -1.

Returns:

The number of keypoints in the current image I.

  6. buildReference(self: visp._visp.vision.KeyPoint, I_color: visp._visp.core.ImageRGBa) -> int

Build the reference keypoints list.

Parameters:
I_color

Input reference image.

Returns:

The number of detected keypoints in the image I.

  7. buildReference(self: visp._visp.vision.KeyPoint, I_color: visp._visp.core.ImageRGBa, iP: visp._visp.core.ImagePoint, height: int, width: int) -> int

Build the reference keypoints list in a region of interest in the image.

Parameters:
I_color

Input reference image.

iP

Position of the top-left corner of the region of interest.

height

Height of the region of interest.

width

Width of the region of interest.

Returns:

The number of detected keypoints in the current image I.

  8. buildReference(self: visp._visp.vision.KeyPoint, I_color: visp._visp.core.ImageRGBa, rectangle: visp._visp.core.Rect) -> int

Build the reference keypoints list in a region of interest in the image.

Parameters:
I_color

Input image.

rectangle

Rectangle of the region of interest.

Returns:

The number of detected keypoints in the current image I.

  9. buildReference(self: visp._visp.vision.KeyPoint, I_color: visp._visp.core.ImageRGBa, trainKeyPoints: list[cv::KeyPoint], points3f: list[cv::Point3_<float>], append: bool = false, class_id: int = -1) -> tuple[int, list[cv::KeyPoint], list[cv::Point3_<float>]]

Build the reference keypoints list and compute the 3D positions corresponding to the keypoint locations.

Parameters:
I_color

Input image.

trainKeyPoints

List of the train keypoints.

points3f

Output list of the 3D positions corresponding to the keypoint locations.

append

If true, append the supplied train keypoints to those already present.

class_id

The class id to be set to the input cv::KeyPoint if != -1.

Returns:

A tuple containing:

  • The number of detected keypoints in the current image I.

  • trainKeyPoints: List of the train keypoints.

  • points3f: Output list of the 3D positions corresponding to the keypoint locations.

  10. buildReference(self: visp._visp.vision.KeyPoint, I_color: visp._visp.core.ImageRGBa, trainKeyPoints: list[cv::KeyPoint], trainDescriptors: cv::Mat, points3f: list[cv::Point3_<float>], append: bool = false, class_id: int = -1) -> int

Build the reference keypoints list and compute the 3D positions corresponding to the keypoint locations.

Parameters:
I_color

Input image.

trainKeyPoints

List of the train keypoints.

trainDescriptors

List of the train descriptors.

points3f

List of the 3D positions corresponding to the keypoint locations.

append

If true, append the supplied train keypoints to those already present.

class_id

The class id to be set to the input cv::KeyPoint if != -1.

Returns:

The number of detected keypoints in the current image I.

static compute3D(*args, **kwargs)

Overloaded function.

  1. compute3D(candidate: cv::KeyPoint, roi: list[visp._visp.core.Point], cam: visp._visp.core.CameraParameters, cMo: visp._visp.core.HomogeneousMatrix, point: cv::Point3_<float>) -> None

Compute the 3D coordinate in the world/object frame, given the 2D image coordinate, under the assumption that the point is located on a plane whose equation is known in the camera frame. The Z-coordinate is retrieved according to the proportional relationship between the plane equation expressed in the normalized camera frame (derived from the image coordinate) and the same plane equation expressed in the camera frame.

Parameters:
candidate

Keypoint for which we want to compute the 3D coordinate.

roi

List of 3D points in the camera frame representing a planar face.

cam

Camera parameters.

cMo

Homogeneous matrix between the world and the camera frames.

point

Computed 3D coordinate in the world/object frame.

  2. compute3D(candidate: visp._visp.core.ImagePoint, roi: list[visp._visp.core.Point], cam: visp._visp.core.CameraParameters, cMo: visp._visp.core.HomogeneousMatrix, point: visp._visp.core.Point) -> None

Compute the 3D coordinate in the world/object frame, given the 2D image coordinate, under the assumption that the point is located on a plane whose equation is known in the camera frame. The Z-coordinate is retrieved according to the proportional relationship between the plane equation expressed in the normalized camera frame (derived from the image coordinate) and the same plane equation expressed in the camera frame.

Parameters:
candidate

vpImagePoint for which we want to compute the 3D coordinate.

roi

List of 3D points in the camera frame representing a planar face.

cam

Camera parameters.

cMo

Homogeneous matrix between the world and the camera frames.

point

Computed 3D coordinate in the world/object frame.
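
As an illustration, here is a minimal sketch of the second overload in C++; the roi, cam and cMo values are assumed to be available from elsewhere (e.g. from a model-based tracker), and the candidate coordinates are illustrative:

  std::vector<vpPoint> roi;          // 3D points in the camera frame of a planar face, filled beforehand
  vpCameraParameters cam;            // camera parameters
  vpHomogeneousMatrix cMo;           // homogeneous matrix between the world and the camera frames
  vpImagePoint candidate(120, 160);  // hypothetical 2D point lying on the planar face
  vpPoint point;
  vpKeyPoint::compute3D(candidate, roi, cam, cMo, point);
  // point now holds the computed 3D coordinate in the world/object frame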

createImageMatching(*args, **kwargs)

Overloaded function.

  1. createImageMatching(self: visp._visp.vision.KeyPoint, IRef: visp._visp.core.ImageGray, ICurrent: visp._visp.core.ImageGray, IMatching: visp._visp.core.ImageGray) -> None

Initialize the size of the matching image (case of a side-by-side matching between IRef and ICurrent).

Parameters:
IRef

Reference image.

ICurrent

Current image.

IMatching

Image matching.

  2. createImageMatching(self: visp._visp.vision.KeyPoint, ICurrent: visp._visp.core.ImageGray, IMatching: visp._visp.core.ImageGray) -> None

Initialize the matching image with an appropriate size according to the number of training images. Used to display the matching of keypoints detected in the current image with those detected in multiple training images.

Parameters:
ICurrent

Current image.

IMatching

Image initialized with appropriate size.

  3. createImageMatching(self: visp._visp.vision.KeyPoint, IRef: visp._visp.core.ImageGray, ICurrent: visp._visp.core.ImageRGBa, IMatching: visp._visp.core.ImageRGBa) -> None

Initialize the size of the matching image (case of a side-by-side matching between IRef and ICurrent).

Parameters:
IRef

Reference image.

ICurrent

Current image.

IMatching

Image matching.

  4. createImageMatching(self: visp._visp.vision.KeyPoint, ICurrent: visp._visp.core.ImageRGBa, IMatching: visp._visp.core.ImageRGBa) -> None

Initialize the matching image with an appropriate size according to the number of training images. Used to display the matching of keypoints detected in the current image with those detected in multiple training images.

Parameters:
ICurrent

Current image.

IMatching

Image initialized with appropriate size.

detect(*args, **kwargs)

Overloaded function.

  1. detect(self: visp._visp.vision.KeyPoint, I: visp._visp.core.ImageGray, keyPoints: list[cv::KeyPoint], rectangle: visp._visp.core.Rect = vpRect()) -> list[cv::KeyPoint]

Detect keypoints in the image.

Parameters:
I

Input image.

keyPoints

Output list of the detected keypoints.

rectangle

Optional rectangle of the region of interest.

Returns:

A tuple containing:

  • keyPoints: Output list of the detected keypoints.

  2. detect(self: visp._visp.vision.KeyPoint, I_color: visp._visp.core.ImageRGBa, keyPoints: list[cv::KeyPoint], rectangle: visp._visp.core.Rect = vpRect()) -> list[cv::KeyPoint]

Detect keypoints in the image.

Parameters:
I_color

Input image.

keyPoints

Output list of the detected keypoints.

rectangle

Optional rectangle of the region of interest.

Returns:

A tuple containing:

  • keyPoints: Output list of the detected keypoints.

  3. detect(self: visp._visp.vision.KeyPoint, matImg: cv::Mat, keyPoints: list[cv::KeyPoint], mask: cv::Mat) -> list[cv::KeyPoint]

Detect keypoints in the image.

Parameters:
matImg

Input image.

keyPoints

Output list of the detected keypoints.

mask

Optional 8-bit integer mask to detect only where mask[i][j] != 0.

Returns:

A tuple containing:

  • keyPoints: Output list of the detected keypoints.

  4. detect(self: visp._visp.vision.KeyPoint, I: visp._visp.core.ImageGray, keyPoints: list[cv::KeyPoint], elapsedTime: float, rectangle: visp._visp.core.Rect = vpRect()) -> tuple[list[cv::KeyPoint], float]

Detect keypoints in the image.

Parameters:
I

Input image.

keyPoints

Output list of the detected keypoints.

elapsedTime

Elapsed time.

rectangle

Optional rectangle of the region of interest.

Returns:

A tuple containing:

  • keyPoints: Output list of the detected keypoints.

  • elapsedTime: Elapsed time.

  5. detect(self: visp._visp.vision.KeyPoint, I_color: visp._visp.core.ImageRGBa, keyPoints: list[cv::KeyPoint], elapsedTime: float, rectangle: visp._visp.core.Rect = vpRect()) -> tuple[list[cv::KeyPoint], float]

Detect keypoints in the image.

Parameters:
I_color

Input image.

keyPoints

Output list of the detected keypoints.

elapsedTime

Elapsed time.

rectangle

Optional rectangle of the region of interest.

Returns:

A tuple containing:

  • keyPoints: Output list of the detected keypoints.

  • elapsedTime: Elapsed time.

  6. detect(self: visp._visp.vision.KeyPoint, matImg: cv::Mat, keyPoints: list[cv::KeyPoint], elapsedTime: float, mask: cv::Mat) -> tuple[list[cv::KeyPoint], float]

Detect keypoints in the image.

Parameters:
matImg

Input image.

keyPoints

Output list of the detected keypoints.

elapsedTime

Elapsed time.

mask

Optional 8-bit integer mask to detect only where mask[i][j] != 0.

Returns:

A tuple containing:

  • keyPoints: Output list of the detected keypoints.

  • elapsedTime: Elapsed time.
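
For instance, here is a minimal sketch of the first overload in C++, restricting the detection to a region of interest (the rectangle coordinates are illustrative):

  vpImage<unsigned char> I;          // input image, loaded beforehand
  vpKeyPoint keypoint("ORB", "ORB", "BruteForce-Hamming");
  std::vector<cv::KeyPoint> keyPoints;
  vpRect rectangle(0, 0, 320, 240);  // hypothetical region of interest
  keypoint.detect(I, keyPoints, rectangle);
  // keyPoints is filled with the keypoints detected inside rectangle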

display(*args, **kwargs)

Overloaded function.

  1. display(self: visp._visp.vision.KeyPoint, IRef: visp._visp.core.ImageGray, ICurrent: visp._visp.core.ImageGray, size: int = 3) -> None

Display the reference and the detected keypoints in the images.

Parameters:
IRef

Input reference image.

ICurrent

Input current image.

size

Size of the displayed cross.

  2. display(self: visp._visp.vision.KeyPoint, ICurrent: visp._visp.core.ImageGray, size: int = 3, color: visp._visp.core.Color = vpColor::green) -> None

Display the reference keypoints.

Parameters:
ICurrent

Input current image.

size

Size of the displayed crosses.

color

Color of the crosses.

  3. display(self: visp._visp.vision.KeyPoint, IRef: visp._visp.core.ImageRGBa, ICurrent: visp._visp.core.ImageRGBa, size: int = 3) -> None

Display the reference and the detected keypoints in the images.

Parameters:
IRef

Input reference image.

ICurrent

Input current image.

size

Size of the displayed cross.

  4. display(self: visp._visp.vision.KeyPoint, ICurrent: visp._visp.core.ImageRGBa, size: int = 3, color: visp._visp.core.Color = vpColor::green) -> None

Display the reference keypoints.

Parameters:
ICurrent

Input current image.

size

Size of the displayed crosses.

color

Color of the crosses.

displayMatching(*args, **kwargs)

Overloaded function.

  1. displayMatching(self: visp._visp.vision.KeyPoint, IRef: visp._visp.core.ImageGray, IMatching: visp._visp.core.ImageGray, crossSize: int, lineThickness: int = 1, color: visp._visp.core.Color = vpColor::green) -> None

Display the matching lines between the detected keypoints and those detected in one training image.

Parameters:
IRef

Reference image, used to determine the x-offset.

IMatching

Resulting image matching.

crossSize

Size of the displayed crosses.

lineThickness

Thickness of the displayed lines.

color

Color to use; if none, a random color is picked for each matched pair.

  2. displayMatching(self: visp._visp.vision.KeyPoint, ICurrent: visp._visp.core.ImageGray, IMatching: visp._visp.core.ImageGray, ransacInliers: list[visp._visp.core.ImagePoint] = [], crossSize: int = 3, lineThickness: int = 1) -> None

Display the matching between keypoints detected in the current image and those detected in the multiple training images. Also display the RANSAC inliers if the list is supplied.

Parameters:
ICurrent

Current image.

IMatching

Resulting matching image.

ransacInliers

List of Ransac inliers or empty list if not available.

crossSize

Size of the displayed crosses.

lineThickness

Thickness of the displayed line.

  3. displayMatching(self: visp._visp.vision.KeyPoint, IRef: visp._visp.core.ImageGray, IMatching: visp._visp.core.ImageRGBa, crossSize: int, lineThickness: int = 1, color: visp._visp.core.Color = vpColor::green) -> None

Display the matching lines between the detected keypoints and those detected in one training image.

Parameters:
IRef

Reference image, used to determine the x-offset.

IMatching

Resulting image matching.

crossSize

Size of the displayed crosses.

lineThickness

Thickness of the displayed lines.

color

Color to use; if none, a random color is picked for each matched pair.

  4. displayMatching(self: visp._visp.vision.KeyPoint, IRef: visp._visp.core.ImageRGBa, IMatching: visp._visp.core.ImageRGBa, crossSize: int, lineThickness: int = 1, color: visp._visp.core.Color = vpColor::green) -> None

Display the matching lines between the detected keypoints and those detected in one training image.

Parameters:
IRef

Reference image, used to determine the x-offset.

IMatching

Resulting image matching.

crossSize

Size of the displayed crosses.

lineThickness

Thickness of the displayed lines.

color

Color to use; if none, a random color is picked for each matched pair.

  5. displayMatching(self: visp._visp.vision.KeyPoint, ICurrent: visp._visp.core.ImageRGBa, IMatching: visp._visp.core.ImageRGBa, ransacInliers: list[visp._visp.core.ImagePoint] = [], crossSize: int = 3, lineThickness: int = 1) -> None

Display the matching between keypoints detected in the current image and those detected in the multiple training images. Also display the RANSAC inliers if the list is supplied.

Parameters:
ICurrent

Current image.

IMatching

Resulting matching image.

ransacInliers

List of Ransac inliers or empty list if not available.

crossSize

Size of the displayed crosses.

lineThickness

Thickness of the displayed line.
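
Together with createImageMatching() and insertImageMatching() documented in this page, here is a minimal sketch of the side-by-side display pipeline in C++ (the display initialization of IMatching is omitted):

  vpImage<unsigned char> IRef, ICurrent, IMatching;
  // ... build the reference from IRef and call keypoint.matchPoint(ICurrent) ...
  keypoint.createImageMatching(IRef, ICurrent, IMatching); // allocate the side-by-side image
  keypoint.insertImageMatching(IRef, ICurrent, IMatching); // copy both images into IMatching
  // once IMatching is displayed, draw the matching lines with a cross size of 3
  keypoint.displayMatching(IRef, IMatching, 3);
  vpDisplay::flush(IMatching);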

getCovarianceMatrix(self) visp._visp.core.Matrix

Get the covariance matrix when estimating the pose using the Virtual Visual Servoing approach.

Warning

The compute covariance flag has to be true if you want to compute the covariance matrix.

Note

See setCovarianceComputation

getCurrentImagePointsList(self) list[visp._visp.core.ImagePoint]

Return the vector of current image points.

Returns:

Vector of the current image points.

getDetectionTime(self) float

Get the elapsed time to compute the keypoint detection.

Returns:

The elapsed time.

getDetector(*args, **kwargs)

Overloaded function.

  1. getDetector(self: visp._visp.vision.KeyPoint, type: visp._visp.vision.KeyPoint.FeatureDetectorType) -> cv::Ptr<cv::Feature2D>

  2. getDetector(self: visp._visp.vision.KeyPoint, name: str) -> cv::Ptr<cv::Feature2D>

Parameters:
name

Name of the detector.

Returns:

The detector, or nullptr if the name passed as parameter does not exist.

getDetectorNames(self) dict[visp._visp.vision.KeyPoint.FeatureDetectorType, str]

Get the feature detector name associated to the type.

getExtractionTime(self) float

Get the elapsed time to compute the keypoint extraction.

Returns:

The elapsed time.

getExtractor(*args, **kwargs)

Overloaded function.

  1. getExtractor(self: visp._visp.vision.KeyPoint, type: visp._visp.vision.KeyPoint.FeatureDescriptorType) -> cv::Ptr<cv::Feature2D>

  2. getExtractor(self: visp._visp.vision.KeyPoint, name: str) -> cv::Ptr<cv::Feature2D>

Parameters:
name

Name of the descriptor extractor.

Returns:

The descriptor extractor, or nullptr if the name passed as parameter does not exist.

getExtractorNames(self) dict[visp._visp.vision.KeyPoint.FeatureDescriptorType, str]

Get the feature descriptor extractor name associated to the type.

getImageFormat(self) visp._visp.vision.KeyPoint.ImageFormatType

Get the image format to use when saving training images.

Returns:

The image format.

getIndexInAllReferencePointList(self, indexInMatchedPointList: int) int

Get the nth matched reference point index in the complete list of reference points.

In the code below, referencePoint1 and referencePoint2 correspond to the same matched reference point.

vpKeyPoint keypoint;

// Here the code to compute the reference points and the current points.
vpImagePoint referencePoint1;
vpImagePoint currentPoint;
// Get the first matched points (indexes start at 0)
keypoint.getMatchedPoints(0, referencePoint1, currentPoint);

vpImagePoint referencePoint2;
const vpImagePoint* referencePointsList = keypoint.getAllPointsInReferenceImage();
// Get the first matched reference point index in the complete reference point list
int index = keypoint.getIndexInAllReferencePointList(0);
// Get the first matched reference point
referencePoint2 = referencePointsList[index];

getMatchQueryToTrainKeyPoints(self) list[tuple[cv::KeyPoint, cv::KeyPoint]]

Get the list of pairs with the correspondence between the matched query and train keypoints.

Returns:

The list of pairs with the correspondence between the matched query and train keypoints.

getMatchedPointNumber(self) int

Get the number of matched points.

Returns:

The number of matched points.

getMatchedPoints(self, index: int, referencePoint: visp._visp.core.ImagePoint, currentPoint: visp._visp.core.ImagePoint) None

Get the nth pair of matched reference and current points. These points are copied into the vpImagePoint instances given as arguments.

Parameters:
index: int

The index of the desired pair of reference and current points. The index must be between 0 and the number of matched points - 1.

referencePoint: visp._visp.core.ImagePoint

The coordinates of the desired reference point are copied here.

currentPoint: visp._visp.core.ImagePoint

The coordinates of the desired current point are copied here.

getMatchedReferencePoints(self) list[int]

Return the indexes of the matched reference points associated with the current image points. The ith element of the vector is the index of the reference image point matching the ith current image point.

Returns:

The vector of matching index.

getMatcher(self) cv::Ptr<cv::DescriptorMatcher>

Get the matcher pointer.

Returns:

The matcher pointer.

getMatches(self) list[cv::DMatch]

Get the list of matches (correspondences between the indexes of the detected keypoints and the train keypoints).

Returns:

The list of matches.

getMatchingTime(self) float

Get the elapsed time to compute the matching.

Returns:

The elapsed time.

getNbImages(self) int

Get the number of train images.

Returns:

The number of train images.

getObjectPoints(*args, **kwargs)

Overloaded function.

  1. getObjectPoints(self: visp._visp.vision.KeyPoint, objectPoints: list[cv::Point3_<float>]) -> list[cv::Point3_<float>]

Get the 3D coordinates of the object points matched (the corresponding 3D coordinates in the object frame of the keypoints detected in the current image after the matching).

Parameters:
objectPoints

List of 3D coordinates in the object frame.

Returns:

A tuple containing:

  • objectPoints: List of 3D coordinates in the object frame.

  1. getObjectPoints(self: visp._visp.vision.KeyPoint, objectPoints: list[visp._visp.core.Point]) -> list[visp._visp.core.Point]

Get the 3D coordinates of the object points matched (the corresponding 3D coordinates in the object frame of the keypoints detected in the current image after the matching).

Parameters:
objectPoints

List of 3D coordinates in the object frame.

Returns:

A tuple containing:

  • objectPoints: List of 3D coordinates in the object frame.

getPoseTime(self) float

Get the elapsed time to compute the pose.

Returns:

The elapsed time.

getQueryDescriptors(self) cv::Mat

Get the descriptors matrix for the query keypoints.

Returns:

Matrix with descriptor values in each row, one row per query keypoint.

getQueryKeyPoints(*args, **kwargs)

Overloaded function.

  1. getQueryKeyPoints(self: visp._visp.vision.KeyPoint, keyPoints: list[cv::KeyPoint], matches: bool = true) -> list[cv::KeyPoint]

Get the query keypoints list in OpenCV type.

Parameters:
keyPoints

List of query keypoints (or keypoints detected in the current image).

matches

If false, return the list of all query keypoints extracted in the current image. If true, return only the query keypoints that have a match.

Returns:

A tuple containing:

  • keyPoints: List of query keypoints (or keypoints detected in the current image).

  2. getQueryKeyPoints(self: visp._visp.vision.KeyPoint, keyPoints: list[visp._visp.core.ImagePoint], matches: bool = true) -> list[visp._visp.core.ImagePoint]

Get the query keypoints list in ViSP type.

Parameters:
keyPoints

List of query keypoints (or keypoints detected in the current image).

matches

If false, return the list of all query keypoints extracted in the current image. If true, return only the query keypoints that have a match.

Returns:

A tuple containing:

  • keyPoints: List of query keypoints (or keypoints detected in the current image).

getRansacInliers(self) list[visp._visp.core.ImagePoint]

Get the list of Ransac inliers.

Returns:

The list of Ransac inliers.

getRansacOutliers(self) list[visp._visp.core.ImagePoint]

Get the list of Ransac outliers.

Returns:

The list of Ransac outliers.

getReferenceImagePointsList(self) list[visp._visp.core.ImagePoint]

Return the vector of reference image points.

Returns:

Vector of reference image points.

getReferencePoint(self, index: int, referencePoint: visp._visp.core.ImagePoint) None

Get the nth reference point. This point is copied into the vpImagePoint instance given as argument.

Parameters:
index: int

The index of the desired reference point. The index must be between 0 and the number of reference points - 1.

referencePoint: visp._visp.core.ImagePoint

The coordinates of the desired reference point are copied there.

getReferencePointNumber(self) int

Get the number of reference points.

Returns:

The number of reference points.

getTrainDescriptors(self) cv::Mat

Get the train descriptors matrix.

Returns:

Matrix with descriptor values in each row, one row per train keypoint (or reference keypoint).

getTrainKeyPoints(*args, **kwargs)

Overloaded function.

  1. getTrainKeyPoints(self: visp._visp.vision.KeyPoint, keyPoints: list[cv::KeyPoint]) -> list[cv::KeyPoint]

Get the train keypoints list in OpenCV type.

Parameters:
keyPoints

List of train keypoints (or reference keypoints).

Returns:

A tuple containing:

  • keyPoints: List of train keypoints (or reference keypoints).

  2. getTrainKeyPoints(self: visp._visp.vision.KeyPoint, keyPoints: list[visp._visp.core.ImagePoint]) -> list[visp._visp.core.ImagePoint]

Get the train keypoints list in ViSP type.

Parameters:
keyPoints

List of train keypoints (or reference keypoints).

Returns:

A tuple containing:

  • keyPoints: List of train keypoints (or reference keypoints).

getTrainPoints(*args, **kwargs)

Overloaded function.

  1. getTrainPoints(self: visp._visp.vision.KeyPoint, points: list[cv::Point3_<float>]) -> list[cv::Point3_<float>]

Get the train points (the 3D coordinates in the object frame) list in OpenCV type.

Parameters:
points

List of train points (or reference points).

Returns:

A tuple containing:

  • points: List of train points (or reference points).

  2. getTrainPoints(self: visp._visp.vision.KeyPoint, points: list[visp._visp.core.Point]) -> list[visp._visp.core.Point]

Get the train points (the 3D coordinates in the object frame) list in ViSP type.

Parameters:
points

List of train points (or reference points).

Returns:

A tuple containing:

  • points: List of train points (or reference points).

initMatcher(self, matcherName: str) None

Initialize a matcher based on its name.

Parameters:
matcherName: str

Name of the matcher (e.g. BruteForce, FlannBased).

insertImageMatching(*args, **kwargs)

Overloaded function.

  1. insertImageMatching(self: visp._visp.vision.KeyPoint, IRef: visp._visp.core.ImageGray, ICurrent: visp._visp.core.ImageGray, IMatching: visp._visp.core.ImageGray) -> None

Insert a reference image and a current image side-by-side.

Parameters:
IRef

Reference image.

ICurrent

Current image.

IMatching

Matching image for displaying all the matching between the query keypoints and those detected in the training images.

  2. insertImageMatching(self: visp._visp.vision.KeyPoint, ICurrent: visp._visp.core.ImageGray, IMatching: visp._visp.core.ImageGray) -> None

Insert the different training images in the matching image.

Parameters:
ICurrent

Current image.

IMatching

Matching image for displaying all the matching between the query keypoints and those detected in the training images.

  3. insertImageMatching(self: visp._visp.vision.KeyPoint, IRef: visp._visp.core.ImageRGBa, ICurrent: visp._visp.core.ImageRGBa, IMatching: visp._visp.core.ImageRGBa) -> None

Insert a reference image and a current image side-by-side.

Parameters:
IRef

Reference image.

ICurrent

Current image.

IMatching

Matching image for displaying all the matching between the query keypoints and those detected in the training images.

  4. insertImageMatching(self: visp._visp.vision.KeyPoint, ICurrent: visp._visp.core.ImageRGBa, IMatching: visp._visp.core.ImageRGBa) -> None

Insert the different training images in the matching image.

Parameters:
ICurrent

Current image.

IMatching

Matching image for displaying all the matching between the query keypoints and those detected in the training images.

loadConfigFile(self, configFile: str) None

Load configuration parameters from an XML config file.

Parameters:
configFile: str

Path to the XML config file.

loadLearningData(self, filename: str, binaryMode: bool = false, append: bool = false) None

Load learning data saved on disk.

Parameters:
filename: str

Path of the learning file.

binaryMode: bool = false

If true, the learning file is in a binary mode, otherwise it is in XML mode.

append: bool = false

If true, concatenate the learning data, otherwise reset the variables.

match(self: visp._visp.vision.KeyPoint, trainDescriptors: cv::Mat, queryDescriptors: cv::Mat, matches: list[cv::DMatch], elapsedTime: float) tuple[list[cv::DMatch], float]

Match keypoints based on distance between their descriptors.

Parameters:
trainDescriptors

Train descriptors (or reference descriptors).

queryDescriptors

Query descriptors.

matches

Output list of matches.

elapsedTime

Elapsed time.

Returns:

A tuple containing:

  • matches: Output list of matches.

  • elapsedTime: Elapsed time.
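
A minimal sketch of this low-level matching in C++, assuming match() is publicly accessible as documented here and that the descriptor matrices have been computed beforehand:

  cv::Mat trainDescriptors, queryDescriptors; // descriptor matrices computed elsewhere
  std::vector<cv::DMatch> matches;
  double elapsedTime;
  keypoint.match(trainDescriptors, queryDescriptors, matches, elapsedTime);
  // matches[i].trainIdx and matches[i].queryIdx index the matched descriptor rows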

matchPoint(*args, **kwargs)

Overloaded function.

  1. matchPoint(self: visp._visp.vision.KeyPoint, I: visp._visp.core.ImageGray) -> int

Match keypoints detected in the image with those built in the reference list.

Parameters:
I

Input current image.

Returns:

The number of matched keypoints.

  2. matchPoint(self: visp._visp.vision.KeyPoint, I: visp._visp.core.ImageGray, iP: visp._visp.core.ImagePoint, height: int, width: int) -> int

Match keypoints detected in a region of interest of the image with those built in the reference list.

Parameters:
I

Input image.

iP

Coordinate of the top-left corner of the region of interest.

height

Height of the region of interest.

width

Width of the region of interest.

Returns:

The number of matched keypoints.

  3. matchPoint(self: visp._visp.vision.KeyPoint, I: visp._visp.core.ImageGray, rectangle: visp._visp.core.Rect) -> int

Match keypoints detected in a region of interest of the image with those built in the reference list.

Parameters:
I

Input image.

rectangle

Rectangle of the region of interest.

Returns:

The number of matched keypoints.

  4. matchPoint(self: visp._visp.vision.KeyPoint, queryKeyPoints: list[cv::KeyPoint], queryDescriptors: cv::Mat) -> int

Match query keypoints with those built in the reference list using buildReference().

Parameters:
queryKeyPoints

List of the query keypoints.

queryDescriptors

List of the query descriptors.

Returns:

The number of matched keypoints.

  5. matchPoint(self: visp._visp.vision.KeyPoint, I_color: visp._visp.core.ImageRGBa) -> int

Match keypoints detected in the image with those built in the reference list.

Parameters:
I_color

Input current image.

Returns:

The number of matched keypoints.

  6. matchPoint(self: visp._visp.vision.KeyPoint, I_color: visp._visp.core.ImageRGBa, iP: visp._visp.core.ImagePoint, height: int, width: int) -> int

Match keypoints detected in a region of interest of the image with those built in the reference list.

Parameters:
I_color

Input image.

iP

Coordinate of the top-left corner of the region of interest.

height

Height of the region of interest.

width

Width of the region of interest.

Returns:

The number of matched keypoints.

  7. matchPoint(self: visp._visp.vision.KeyPoint, I_color: visp._visp.core.ImageRGBa, rectangle: visp._visp.core.Rect) -> int

Match keypoints detected in a region of interest of the image with those built in the reference list.

Parameters:
I_color

Input image.

rectangle

Rectangle of the region of interest.

Returns:

The number of matched keypoints.

referenceBuilt(self) bool

Indicate whether the reference has been built or not.

Returns:

True if the reference of the current instance has been built.

reset(self) None

Reset the instance as if a new vpKeyPoint variable were declared.

saveLearningData(self, filename: str, binaryMode: bool = false, saveTrainingImages: bool = true) None

Save the learning data in a file in XML or binary mode.

Parameters:
filename: str

Path of the save file.

binaryMode: bool = false

If true, the data are saved in binary mode, otherwise in XML mode.

saveTrainingImages: bool = true

If true, save also the training images on disk.
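
A minimal sketch in C++ of a save and reload cycle with loadLearningData() documented above (the file name is illustrative):

  // After building the reference on one or more training images:
  keypoint.saveLearningData("learning_data.bin", true); // binary mode, training images saved too

  // Later, possibly in another program:
  vpKeyPoint keypoint2;
  keypoint2.loadLearningData("learning_data.bin", true); // restore the reference keypoints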

setCovarianceComputation(self, flag: bool) None

Set if the covariance matrix has to be computed in the Virtual Visual Servoing approach.

Parameters:
flag: bool

True if the covariance has to be computed, false otherwise.

setDetectionMethod(self, method: visp._visp.vision.KeyPoint.DetectionMethodType) None
setDetector(*args, **kwargs)

Overloaded function.

  1. setDetector(self: visp._visp.vision.KeyPoint, detectorType: visp._visp.vision.KeyPoint.FeatureDetectorType) -> None

  2. setDetector(self: visp._visp.vision.KeyPoint, detectorName: str) -> None

Set and initialize a detector designated by its name detectorName.

Parameters:
detectorName

Name of the detector.

setDetectors(self, detectorNames: list[str]) None

Set and initialize a list of detectors designated by their names detectorNames.

Parameters:
detectorNames: list[str]

List of detector names.

setExtractor(*args, **kwargs)

Overloaded function.

  1. setExtractor(self: visp._visp.vision.KeyPoint, extractorType: visp._visp.vision.KeyPoint.FeatureDescriptorType) -> None

  2. setExtractor(self: visp._visp.vision.KeyPoint, extractorName: str) -> None

Set and initialize a descriptor extractor designated by its name extractorName.

Parameters:
extractorName

Name of the extractor.

setExtractors(self, extractorNames: list[str]) None

Set and initialize a list of extractors designated by their names extractorNames.

Parameters:
extractorNames: list[str]

List of extractor names.

setFilterMatchingType(self, filterType: visp._visp.vision.KeyPoint.FilterMatchingType) None
setImageFormat(self, imageFormat: visp._visp.vision.KeyPoint.ImageFormatType) None
setMatcher(self, matcherName: str) None

Set and initialize a matcher designated by its name matcherName. The different matchers are:

  • BruteForce (it uses L2 distance)

  • BruteForce-L1

  • BruteForce-Hamming

  • BruteForce-Hamming(2)

  • FlannBased

L1 and L2 norms are preferable choices for SIFT and SURF descriptors; NORM_HAMMING should be used with ORB, BRISK and BRIEF; NORM_HAMMING2 should be used with ORB when WTA_K == 3 or 4.

Parameters:
matcherName: str

Name of the matcher.
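
For example, here is a minimal configuration sketch in C++ combining setDetectors(), setExtractor(), setMatcher() and the matching filter; the chosen names and threshold are illustrative:

  vpKeyPoint keypoint;
  std::vector<std::string> detectorNames;
  detectorNames.push_back("FAST");
  detectorNames.push_back("ORB");
  keypoint.setDetectors(detectorNames);      // detect keypoints with both FAST and ORB
  keypoint.setExtractor("ORB");              // binary descriptor
  keypoint.setMatcher("BruteForce-Hamming"); // Hamming norm suits binary descriptors
  keypoint.setFilterMatchingType(vpKeyPoint::ratioDistanceThreshold);
  keypoint.setMatchingRatioThreshold(0.85);  // illustrative ratio value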

setMatchingFactorThreshold(self, factor: float) None

Set the factor value for the filtering method: constantFactorDistanceThreshold.

Parameters:
factor: float

Factor value

setMatchingRatioThreshold(self, ratio: float) None

Set the ratio value for the filtering method: ratioDistanceThreshold.

Parameters:
ratio: float

Ratio value (]0 ; 1])

setMaxFeatures(self, maxFeatures: int) None

Set maximum number of keypoints to extract.

Warning

This functionality is only available for ORB and SIFT extractors.

Parameters:
maxFeatures: int

Maximum number of keypoints to extract. Set -1 to use default values.

setRansacConsensusPercentage(self, percentage: float) None

Set the percentage value for defining the cardinality of the consensus group.

Parameters:
percentage: float

Percentage value (]0 ; 100])

setRansacFilterFlag(self, flag: visp._visp.vision.Pose.RANSAC_FILTER_FLAGS) None

Set filter flag for RANSAC pose estimation.

setRansacIteration(self, nbIter: int) None

Set the maximum number of iterations for the Ransac pose estimation method.

Parameters:
nbIter: int

Maximum number of iterations for the Ransac pose estimation method.

setRansacMinInlierCount(self, minCount: int) None

Set the minimum number of inliers for the Ransac pose estimation method.

Parameters:
minCount: int

Minimum number of inliers for the consensus group.

setRansacParallel(self, parallel: bool) None

Use or not the multithreaded version of the Ransac pose estimation.

Note

Needs C++11 or higher.

setRansacParallelNbThreads(self, nthreads: int) None

Set the number of threads to use for the multithreaded RANSAC pose estimation.

Note

See setRansacParallel

Parameters:
nthreads: int

Number of threads; if 0, the number of CPU threads will be determined automatically.

setRansacReprojectionError(self, reprojectionError: float) None

Set the maximum reprojection error (in pixels) to determine if a point is an inlier or not.

Parameters:
reprojectionError: float

Maximum reprojection error in pixels (used by the OpenCV function).

setRansacThreshold(self, threshold: float) None

Set the maximum error (in meters) to determine if a point is an inlier or not.

Parameters:
threshold: float

Maximum error in meters (used by the ViSP function).
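
The RANSAC-related setters documented here can be combined before matching; a minimal sketch in C++ (the values are illustrative, not recommendations):

  vpKeyPoint keypoint;
  keypoint.setUseRansacVVS(true);              // use the ViSP pose estimation function
  keypoint.setRansacIteration(200);            // maximum number of Ransac iterations
  keypoint.setRansacReprojectionError(2.0);    // pixels, used by the OpenCV function
  keypoint.setRansacThreshold(0.005);          // meters, used by the ViSP function
  keypoint.setUseRansacConsensusPercentage(true);
  keypoint.setRansacConsensusPercentage(20.0); // at least 20% of inliers in the consensus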

setUseAffineDetection(self, useAffine: bool) None

Set if multiple affine transformations must be used to detect and extract keypoints.

Parameters:
useAffine: bool

True to use multiple affine transformations, false otherwise.

setUseMatchTrainToQuery(self, useMatchTrainToQuery: bool) None

Set if we want to match the train keypoints to the query keypoints.

Parameters:
useMatchTrainToQuery: bool

True to match the train keypoints to the query keypoints.

setUseRansacConsensusPercentage(self, usePercentage: bool) None

Set the flag to choose between a percentage value of inliers for the cardinality of the consensus group or a minimum number.

Parameters:
usePercentage: bool

True to use a percentage of inliers, otherwise use a fixed minimum number of inliers.

setUseRansacVVS(self, ransacVVS: bool) None

Set the flag to choose between the OpenCV or ViSP Ransac pose estimation function.

Parameters:
ransacVVS: bool

True to use the ViSP function, otherwise use the OpenCV function.

setUseSingleMatchFilter(self, singleMatchFilter: bool) None

Set the flag to filter matches where multiple query keypoints are matched to the same train keypoint.

Parameters:
singleMatchFilter: bool

True to use the single match filter.