Dot2

class Dot2(*args, **kwargs)

Bases: Tracker

This tracker is meant to track a blob (connected pixels sharing the same gray level) on a vpImage .

The underlying algorithm is based on a binarization of the image followed by a contour detection using Freeman chain coding to determine the blob characteristics (location, moments, size…).

The binarization is done using gray level minimum and maximum values that define the admissible gray levels of the blob. You can specify these levels with setGrayLevelMin() and setGrayLevelMax() ; they can also be set automatically through setGrayLevelPrecision() . The algorithm can track white objects on a black background and vice versa.

When a blob is found, some tests are done to see if it is valid:

  • A blob is considered ellipsoidal by default. The found blob can be rejected if its shape is not ellipsoidal. To decide whether the shape is ellipsoidal, the algorithm considers an inner and an outer ellipse. Sampled points on these two ellipses should have the right gray levels: along the inner ellipse the sampled points should have gray levels within the minimum and maximum bounds, while on the outer ellipse the gray levels should lie outside these bounds. To set the percentage of sampled points that must have the right levels use setEllipsoidBadPointsPercentage() . The distance between the inner ellipse and the blob contour, as well as the distance between the blob contour and the outer ellipse, is fixed by setEllipsoidShapePrecision() . If you want to track a non-ellipsoidal shape and turn off this validation test, call setEllipsoidShapePrecision(0).

  • The width, height and surface of the blob are compared to the corresponding values of the previous blob. If they differ too much, the blob can be rejected. To set the admissible variation, use setSizePrecision() (a configuration sketch follows this list).
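As a rough illustration of these settings, the following minimal sketch configures the gray level bounds and the validity tests on a blob tracker. It assumes the Python bindings are exposed as visp.blob.Dot2 (public alias of visp._visp.blob.Dot2); the threshold values are arbitrary.

from visp.blob import Dot2  # assumed public alias of visp._visp.blob.Dot2

blob = Dot2()
blob.setGrayLevelMin(200)                  # admissible gray levels of the blob: [200, 255]
blob.setGrayLevelMax(255)                  # i.e. track a white blob on a dark background
blob.setEllipsoidBadPointsPercentage(0.1)  # tolerate 10% of badly sampled ellipse points
blob.setEllipsoidShapePrecision(0.65)      # relax the ellipsoid shape test (0 disables it)
blob.setSizePrecision(0.65)                # admissible width/height/surface variation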

Note that track() and searchDotsInArea() are the most important features of this class.

  • track() estimates the current position of the dot using its previous position, then tries to compute the new parameters of the dot. If everything goes well, tracking succeeds; otherwise the dot is searched in a window around its last position.

  • searchDotsInArea() enables finding dots similar to this dot in a window. It is used when basic tracking of the dot failed, but can also be used to find a certain type of dots in the full image.

The following sample code available in tutorial-blob-tracker-live-firewire.cpp shows how to grab images from a firewire camera, track a blob and display the tracking results.

#include <visp3/core/vpConfig.h>
#ifdef VISP_HAVE_MODULE_SENSOR
#include <visp3/sensor/vp1394CMUGrabber.h>
#include <visp3/sensor/vp1394TwoGrabber.h>
#endif
#include <visp3/blob/vpDot2.h>
#include <visp3/gui/vpDisplayGDI.h>
#include <visp3/gui/vpDisplayOpenCV.h>
#include <visp3/gui/vpDisplayX.h>

#if defined(HAVE_OPENCV_VIDEOIO)
#include <opencv2/videoio.hpp>
#endif

int main()
{
#if (defined(VISP_HAVE_DC1394) || defined(VISP_HAVE_CMU1394) || defined(HAVE_OPENCV_VIDEOIO)) &&             \
    (defined(VISP_HAVE_X11) || defined(VISP_HAVE_GDI) || defined(VISP_HAVE_OPENCV))
  vpImage<unsigned char> I; // Create a gray level image container

#if defined(VISP_HAVE_DC1394)
  vp1394TwoGrabber g(false);
  g.open(I);
#elif defined(VISP_HAVE_CMU1394)
  vp1394CMUGrabber g;
  g.open(I);
#elif defined(HAVE_OPENCV_VIDEOIO)
  cv::VideoCapture g(0); // open the default camera
  if (!g.isOpened()) {   // check if we succeeded
    std::cout << "Failed to open the camera" << std::endl;
    return EXIT_FAILURE;
  }
  cv::Mat frame;
  g >> frame; // get a new frame from camera
  vpImageConvert::convert(frame, I);
#endif

#if defined(VISP_HAVE_X11)
  vpDisplayX d(I, 0, 0, "Camera view");
#elif defined(VISP_HAVE_GDI)
  vpDisplayGDI d(I, 0, 0, "Camera view");
#elif defined(HAVE_OPENCV_HIGHGUI)
  vpDisplayOpenCV d(I, 0, 0, "Camera view");
#endif

  vpDot2 blob;
  blob.setGraphics(true);
  blob.setGraphicsThickness(2);

  vpImagePoint germ;
  bool init_done = false;

  while (1) {
    try {
#if defined(VISP_HAVE_DC1394) || defined(VISP_HAVE_CMU1394)
      g.acquire(I);
#elif defined(HAVE_OPENCV_VIDEOIO)
      g >> frame;
      vpImageConvert::convert(frame, I);
#endif
      vpDisplay::display(I);

      if (!init_done) {
        vpDisplay::displayText(I, vpImagePoint(10, 10), "Click in the blob to initialize the tracker", vpColor::red);
        if (vpDisplay::getClick(I, germ, false)) {
          blob.initTracking(I, germ);
          init_done = true;
        }
      }
      else {
        blob.track(I);
      }

      vpDisplay::flush(I);
    }
    catch (...) {
      init_done = false;
    }
  }
#endif
}

A line by line explanation of the previous example is provided in tutorial-tracking-blob.
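For users of these Python bindings, the following minimal sketch reproduces the initialize-then-track logic of the example above on a single image read from disk. The module aliases (visp.core, visp.blob, visp.io), the ImageIo.read() call, the ImagePoint(i, j) constructor, the seed point and the file name image.pgm are assumptions, not part of the tutorial.

from visp.core import ImageGray, ImagePoint
from visp.blob import Dot2
from visp.io import ImageIo        # assumed alias of vpImageIo

I = ImageGray()                    # gray level image container
ImageIo.read(I, "image.pgm")       # hypothetical input image

blob = Dot2()
blob.setGraphics(True)             # draw the blob contour and cog in overlay when a display is attached
blob.setGraphicsThickness(2)

germ = ImagePoint(100, 120)        # assumed seed point lying inside the blob
blob.initTracking(I, germ)         # segment the blob around the seed and compute its parameters
blob.track(I)                      # on the next images of a sequence, simply call track()

print("center of gravity:", blob.getCog())
print("width x height   :", blob.getWidth(), "x", blob.getHeight())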

This other example, available in tutorial-blob-auto-tracker.cpp, first shows how to detect in the first image all the blobs that match some characteristics in terms of size, area and gray level. It then shows how to track all the detected dots.

#include <visp3/blob/vpDot2.h>
#include <visp3/gui/vpDisplayGDI.h>
#include <visp3/gui/vpDisplayOpenCV.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/io/vpImageIo.h>

int main()
{
  try {
    bool learn = false;
    vpImage<unsigned char> I; // Create a gray level image container

    vpImageIo::read(I, "./target.pgm");

#if defined(VISP_HAVE_X11)
    vpDisplayX d(I, 0, 0, "Camera view");
#elif defined(VISP_HAVE_GDI)
    vpDisplayGDI d(I, 0, 0, "Camera view");
#elif defined(HAVE_OPENCV_HIGHGUI)
    vpDisplayOpenCV d(I, 0, 0, "Camera view");
#else
    std::cout << "No image viewer is available..." << std::endl;
#endif
    vpDisplay::display(I);
    vpDisplay::flush(I);

    vpDot2 blob;
    if (learn) {
      // Learn the characteristics of the blob to auto detect
      blob.setGraphics(true);
      blob.setGraphicsThickness(1);
      blob.initTracking(I);
      blob.track(I);
      std::cout << "Blob characteristics: " << std::endl;
      std::cout << " width : " << blob.getWidth() << std::endl;
      std::cout << " height: " << blob.getHeight() << std::endl;
#if VISP_VERSION_INT > VP_VERSION_INT(2, 7, 0)
      std::cout << " area: " << blob.getArea() << std::endl;
#endif
      std::cout << " gray level min: " << blob.getGrayLevelMin() << std::endl;
      std::cout << " gray level max: " << blob.getGrayLevelMax() << std::endl;
      std::cout << " grayLevelPrecision: " << blob.getGrayLevelPrecision() << std::endl;
      std::cout << " sizePrecision: " << blob.getSizePrecision() << std::endl;
      std::cout << " ellipsoidShapePrecision: " << blob.getEllipsoidShapePrecision() << std::endl;
    }
    else {
      // Set blob characteristics for the auto detection
      blob.setWidth(50);
      blob.setHeight(50);
#if VISP_VERSION_INT > VP_VERSION_INT(2, 7, 0)
      blob.setArea(1700);
#endif
      blob.setGrayLevelMin(0);
      blob.setGrayLevelMax(30);
      blob.setGrayLevelPrecision(0.8);
      blob.setSizePrecision(0.65);
      blob.setEllipsoidShapePrecision(0.65);
    }

    std::list<vpDot2> blob_list;
    blob.searchDotsInArea(I, 0, 0, I.getWidth(), I.getHeight(), blob_list);

    if (learn) {
      // The blob that is tracked by initTracking() is not in the list of
      // auto detected blobs. We add it:
      blob_list.push_back(blob);
    }
    std::cout << "Number of auto detected blob: " << blob_list.size() << std::endl;
    std::cout << "A click to exit..." << std::endl;

    while (1) {
      vpDisplay::display(I);

      for (std::list<vpDot2>::iterator it = blob_list.begin(); it != blob_list.end(); ++it) {
        (*it).setGraphics(true);
        (*it).setGraphicsThickness(3);
        (*it).track(I);
      }

      vpDisplay::flush(I);

      if (vpDisplay::getClick(I, false))
        break;

      vpTime::wait(40);
    }
  } catch (const vpException &e) {
    std::cout << "Catch an exception: " << e << std::endl;
  }
}

A line by line explanation of this last example is also provided in tutorial-tracking-blob, section tracking_blob_tracking.
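A minimal Python counterpart of this auto-detection flow is sketched below, with the same module-path assumptions as the previous sketch; the blob characteristics and the image path come from the C++ example above.

from visp.core import ImageGray
from visp.blob import Dot2
from visp.io import ImageIo        # assumed alias of vpImageIo

I = ImageGray()
ImageIo.read(I, "./target.pgm")

blob = Dot2()
# Characteristics of the blobs to detect (values from the C++ example)
blob.setWidth(50)
blob.setHeight(50)
blob.setArea(1700)
blob.setGrayLevelMin(0)
blob.setGrayLevelMax(30)
blob.setGrayLevelPrecision(0.8)
blob.setSizePrecision(0.65)
blob.setEllipsoidShapePrecision(0.65)

# In the Python bindings searchDotsInArea() returns the list of detected dots
blob_list = blob.searchDotsInArea(I, 0, 0, I.getWidth(), I.getHeight())
print("Number of auto detected blobs:", len(blob_list))

for dot in blob_list:
    dot.track(I)                   # track each detected blob on the following images
    print(dot.getCog())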

Note

See vpDot

Overloaded function.

  1. __init__(self: visp._visp.blob.Dot2) -> None

Default constructor. Just does basic default initialization.

  2. __init__(self: visp._visp.blob.Dot2, ip: visp._visp.core.ImagePoint) -> None

Constructor that initializes the coordinates of the gravity center of the dot to the image point ip . The rest is the same as for the default constructor.

Parameters:
ip

An image point with sub-pixel coordinates.

  3. __init__(self: visp._visp.blob.Dot2, twinDot: visp._visp.blob.Dot2) -> None

Copy constructor.

Methods

__init__

Overloaded function.

defineDots

Wrapper for the defineDots method, see the C++ ViSP documentation.

display

Display in overlay the dot edges and center of gravity.

displayDot

Overloaded function.

getArea

Return the area of the dot.

getBBox

Return the dot bounding box.

getCog

Return the location of the dot center of gravity.

getDistance

Return the distance between the two center of dots.

getEdges

Overloaded function.

getEllipsoidBadPointsPercentage

Get the percentage of sampled points that are considered non-conforming in terms of gray level on the inner and the outer ellipses.

getEllipsoidShapePrecision

Return the precision of the ellipsoid shape of the dot.

getFreemanChain

Gets the list of Freeman chain codes used to turn around the dot counterclockwise.

getGamma

getGrayLevelMax

Return the color level of pixels inside the dot.

getGrayLevelMin

Return the color level of pixels inside the dot.

getGrayLevelPrecision

Return the precision of the gray level of the dot.

getHeight

Return the height of the dot.

getMaxSizeSearchDistPrecision

Return the precision of the search maximum distance to get the starting point on a dot border.

getMeanGrayLevel

Return the mean gray level value of the dot.

getPolygon

Return a vpPolygon made from the edges of the dot.

getSizePrecision

Return the precision of the size of the dot.

getWidth

Return the width of the dot.

get_nij

Gets the second order normalized centered moment \(n_{ij}\) as a 3-dim vector containing \(n_{20}, n_{11}, n_{02}\) such as \(n_{ij} = \mu_{ij}/m_{00}\)

initTracking

Overloaded function.

print

searchDotsInArea

Overloaded function.

setArea

Set the area of the dot.

setCog

Initialize the dot coordinates with ip .

setComputeMoments

Activates the dot's moments computation.

setEllipsoidBadPointsPercentage

Set the percentage of sampled points that are considered non-conforming in terms of gray level on the inner and the outer ellipses.

setEllipsoidShapePrecision

Indicates if the dot should have an ellipsoid shape to be valid.

setGraphics

Activates the display of the border of the dot during the tracking.

setGraphicsThickness

Modify the default thickness that is set to 1 of the drawings in overlay when setGraphics() is enabled.

setGrayLevelMax

Set the color level of pixels surrounding the dot.

setGrayLevelMin

Set the color level of the dot to search a dot in a region of interest.

setGrayLevelPrecision

Set the precision of the gray level of the dot.

setHeight

Set the height of the dot.

setMaxSizeSearchDistPrecision

Set the precision of the search maximum distance to get the starting point on a dot border.

setSizePrecision

Set the precision of the size of the dot.

setWidth

Set the width of the dot.

track

Overloaded function.

trackAndDisplay

Wrapper for the trackAndDisplay method, see the C++ ViSP documentation.

Inherited Methods

get_p

Return object parameters expressed in the 2D image plane computed by perspective projection.

get_cP

Return object parameters expressed in the 3D camera frame.

cP

p

cPAvailable

Operators

__doc__

__init__

Overloaded function.

__module__

__repr__

Attributes

__annotations__

cP

cPAvailable

p

__init__(*args, **kwargs)

Overloaded function.

  1. __init__(self: visp._visp.blob.Dot2) -> None

Default constructor. Just does basic default initialization.

  2. __init__(self: visp._visp.blob.Dot2, ip: visp._visp.core.ImagePoint) -> None

Constructor that initializes the coordinates of the gravity center of the dot to the image point ip . The rest is the same as for the default constructor.

Parameters:
ip

An image point with sub-pixel coordinates.

  3. __init__(self: visp._visp.blob.Dot2, twinDot: visp._visp.blob.Dot2) -> None

Copy constructor.
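A short sketch of the three overloads, assuming the image point class is exposed as visp.core.ImagePoint with an (i, j) constructor:

from visp.blob import Dot2
from visp.core import ImagePoint

d1 = Dot2()                          # default construction
d2 = Dot2(ImagePoint(50.5, 60.2))    # cog initialized to sub-pixel image coordinates
d3 = Dot2(d2)                        # copy construction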

static defineDots(dots: list[visp._visp.blob.Dot2], dotFile: str, I: visp._visp.core.ImageGray, color: visp._visp.core.Color, trackDot: bool = True) -> visp._visp.core.Matrix

Wrapper for the defineDots method, see the C++ ViSP documentation.

display(self, I: visp._visp.core.ImageGray, color: visp._visp.core.Color, thickness: int = 1) -> None

Display in overlay the dot edges and center of gravity.

Parameters:
I: visp._visp.core.ImageGray

Image.

color: visp._visp.core.Color

The color used for the display.

static displayDot(*args, **kwargs)

Overloaded function.

  1. displayDot(I: visp._visp.core.ImageGray, cog: visp._visp.core.ImagePoint, edges_list: list[visp._visp.core.ImagePoint], color: visp._visp.core.Color, thickness: int = 1) -> None

Display the dot center of gravity and its list of edges.

Parameters:
I

The image used as background.

cog

The center of gravity.

edges_list

The list of edges;

color

Color used to display the dot.

thickness

Thickness of the dot.

  2. displayDot(I: visp._visp.core.ImageRGBa, cog: visp._visp.core.ImagePoint, edges_list: list[visp._visp.core.ImagePoint], color: visp._visp.core.Color, thickness: int = 1) -> None

Display the dot center of gravity and its list of edges.

Parameters:
I

The image used as background.

cog

The center of gravity.

edges_list

The list of edges;

color

Color used to display the dot.

thickness

Thickness of the dot.

getArea(self) -> float

Return the area of the dot.

The area of the dot is also given by \(|m00|\) .

getBBox(self) -> visp._visp.core.Rect

Return the dot bounding box.

Note

See getWidth() , getHeight()

getCog(self) -> visp._visp.core.ImagePoint

Return the location of the dot center of gravity.

Returns:

The coordinates of the center of gravity.

getDistance(self, distantDot: visp._visp.blob.Dot2) -> float

Return the distance between the two center of dots.

getEdges(*args, **kwargs)

Overloaded function.

  1. getEdges(self: visp._visp.blob.Dot2) -> list[visp._visp.core.ImagePoint]

Return the list of all the image points on the dot border.

Returns:

A tuple containing:

  • edges_list: The list of all the image points on the dot border. This list is updated after a call to track() .

  2. getEdges(self: visp._visp.blob.Dot2) -> list[visp._visp.core.ImagePoint]

Return the list of all the image points on the dot border.

Returns:

The list of all the image points on the dot border. This list is updated after a call to track() .

getEllipsoidBadPointsPercentage(self) -> float

Get the percentage of sampled points that are considered non-conforming in terms of gray level on the inner and the outer ellipses.

Note

See setEllipsoidBadPointsPercentage()

getEllipsoidShapePrecision(self) -> float

Return the precision of the ellipsoid shape of the dot. It is a double precision float whose value is in [0,1]: 1 means full precision, whereas values close to 0 indicate a very poor precision.

Note

See setEllipsoidShapePrecision()

getFreemanChain(self) -> list[int]

Gets the list of Freeman chain codes used to turn around the dot counterclockwise.

Returns:

A tuple containing:

  • freeman_chain: List of Freeman chain codes, each in [0, …, 7]:

  • 0 : right

  • 1 : top right

  • 2 : top

  • 3 : top left

  • 4 : left

  • 5 : down left

  • 6 : down

  • 7 : down right

getGamma(self) -> float
getGrayLevelMax(self) -> int

Return the color level of pixels inside the dot.

Note

See getGrayLevelMin()

getGrayLevelMin(self) -> int

Return the color level of pixels inside the dot.

Note

See getGrayLevelMax()

getGrayLevelPrecision(self) -> float

Return the precision of the gray level of the dot. It is a double precision float whose value is in [0,1]: 1 means full precision, whereas values close to 0 indicate a very poor precision.

getHeight(self) -> float

Return the height of the dot.

Note

See getWidth()

getMaxSizeSearchDistPrecision(self) -> float

Return the precision of the search maximum distance to get the starting point on a dot border. It is a double precision float whose value is in [0.05,1]: 1 means full precision, whereas values close to 0 indicate a very poor precision.

getMeanGrayLevel(self) -> float
Returns:

The mean gray level value of the dot.

getPolygon(self) -> visp._visp.core.Polygon
Returns:

a vpPolygon made from the edges of the dot.

getSizePrecision(self) -> float

Return the precision of the size of the dot. It is a double precision float whose value is in [0.05,1]: 1 means full precision, whereas values close to 0 indicate a very poor precision.

getWidth(self) -> float

Return the width of the dot.

Note

See getHeight()

get_cP(self) -> visp._visp.core.ColVector

Return object parameters expressed in the 3D camera frame.

get_nij(self) -> visp._visp.core.ColVector

Gets the second order normalized centered moment \(n_{ij}\) as a 3-dim vector containing \(n_{20}, n_{11}, n_{02}\) such as \(n_{ij} = \mu_{ij}/m_{00}\)

Note

See getCog() , getArea()

Returns:

The 3-dim vector containing \(n_{20}, n_{11}, n_{02}\) .

get_p(self) -> visp._visp.core.ColVector

Return object parameters expressed in the 2D image plane computed by perspective projection.

initTracking(*args, **kwargs)

Overloaded function.

  1. initTracking(self: visp._visp.blob.Dot2, I: visp._visp.core.ImageGray, size: int = 0) -> None

Initialize the tracking with a mouse click on the image and update the dot characteristics (center of gravity, moments) by a call to track() .

Wait for a user click in a white area of the image. The clicked pixel will be the starting point from which the dot will be tracked.

To get center of gravity of the dot, see getCog() . To compute the moments see setComputeMoments() . To get the width or height of the dot, call getWidth() and getHeight() . The area of the dot is given by getArea() .

If no valid dot is found in the window, an exception is thrown.

Note

See track()

Parameters:
I

Image.

size

Size of the dot to track.

  2. initTracking(self: visp._visp.blob.Dot2, I: visp._visp.core.ImageGray, ip: visp._visp.core.ImagePoint, size: int = 0) -> None

Initialize the tracking for a dot supposed to be located at ip and update the dot characteristics (center of gravity, moments) by a call to track() .

To get center of gravity of the dot, see getCog() . To compute the moments see setComputeMoments() .

If no valid dot is found in the window, an exception is thrown.

Parameters:
I

Image to process.

ip

Location of the starting point from which the dot will be tracked in the image.

size

Size of the dot to track.

  3. initTracking(self: visp._visp.blob.Dot2, I: visp._visp.core.ImageGray, ip: visp._visp.core.ImagePoint, gray_lvl_min: int, gray_lvl_max: int, size: int = 0) -> None

Initialize the tracking for a dot supposed to be located at ip and update the dot characteristics (center of gravity, moments) by a call to track() .

The sub-pixel coordinates of the dot are updated. To get the center of gravity coordinates of the dot, use getCog() . To compute the moments use setComputeMoments(true) before a call to initTracking() .

If no valid dot is found in the window, an exception is thrown.

Note

See track() , getCog()

Parameters:
I

Image to process.

ip

Location of the starting point from which the dot will be tracked in the image.

gray_lvl_min

Minimum gray level threshold used to segment the dot; value between 0 and 255.

gray_lvl_max

Maximum gray level threshold used to segment the dot; value between 0 and 255. gray_lvl_max should be greater than gray_lvl_min .

size

Size of the dot to track.
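For instance, the third overload can seed the tracker at a known location with explicit gray level thresholds. In this sketch the point coordinates and the [50, 200] bounds are arbitrary, and blob and I are the Dot2 and ImageGray objects of the earlier sketches:

from visp.core import ImagePoint

# Seed the tracker at (i=120, j=85) and segment pixels whose gray level lies in [50, 200]
blob.initTracking(I, ImagePoint(120, 85), 50, 200)
blob.track(I)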

print(self: visp._visp.blob.Dot2, os: std::ostream) -> None
searchDotsInArea(*args, **kwargs)

Overloaded function.

  1. searchDotsInArea(self: visp._visp.blob.Dot2, I: visp._visp.core.ImageGray, area_u: int, area_v: int, area_w: int, area_h: int) -> list[visp._visp.blob.Dot2]

Look for a list of dots matching this dot's parameters within a region of interest defined by a rectangle in the image. The rectangle upper-left coordinates are given by ( area_u , area_v ). The size of the rectangle is given by area_w and area_h .

Warning

Allocates memory for the list of vpDot2 returned by this method. Deallocation has to be done by yourself, see searchDotsInArea()

Note

See searchDotsInArea(vpImage<unsigned char>& I, std::list<vpDot2> &)

Parameters:
I

Image to process.

area_u

Coordinate (column) of the upper-left area corner.

area_v

Coordinate (row) of the upper-left area corner.

area_w

Width of the area in which a dot is searched.

area_h

Height of the area in which a dot is searched.

Returns:

A tuple containing:

  • niceDots: List of the dots that are found.

  2. searchDotsInArea(self: visp._visp.blob.Dot2, I: visp._visp.core.ImageGray) -> list[visp._visp.blob.Dot2]

Look for a list of dots matching this dot's parameters within the entire image.

Warning

Allocates memory for the list of dots returned by this method. Deallocation has to be done by yourself.

Before calling this method, the characteristics of the dots to find have to be set, for example:

vpDot2 d;

// Set dot characteristics for the auto detection
d.setWidth(24);
d.setHeight(23);
d.setArea(412);
d.setGrayLevelMin(160);
d.setGrayLevelMax(255);
d.setGrayLevelPrecision(0.8);
d.setSizePrecision(0.65);
d.setEllipsoidShapePrecision(0.65);

To search dots in the whole image:

std::list<vpDot2> list_d;
d.searchDotsInArea(I, 0, 0, I.getWidth(), I.getHeight(), list_d);

The number of dots found in the area is given by:

std::cout << list_d.size();

To parse all the dots:

std::list<vpDot2>::iterator it;
for (it = list_d.begin(); it != list_d.end(); ++it) {
    vpDot2 tmp_d = *it;
}

Note

See searchDotsInArea ( vpImage<unsigned char> &, int, int, unsigned int, unsigned int, std::list<vpDot2> &)

Parameters:
I

Image.

Returns:

A tuple containing:

  • niceDots: List of the dots that are found.
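In the Python bindings the niceDots output parameter becomes the return value, so the whole-image search of the C++ snippet above reduces to the following sketch (blob and I as in the earlier examples):

dots = blob.searchDotsInArea(I)        # search matching dots in the whole image
print(len(dots), "dots found")
for d in dots:
    print(d.getCog(), d.getWidth(), d.getHeight())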

setArea(self, area: float) -> None

Set the area of the dot. This is meant to be used to search a dot in a region of interest.

Note

See setWidth() , setHeight() , setSizePrecision()

setCog(self, ip: visp._visp.core.ImagePoint) -> None

Initialize the dot coordinates with ip .

setComputeMoments(self, activate: bool) -> None

Activates the dot's moments computation.

Computed moments are vpDot::m00 , vpDot::m10 , vpDot::m01 , vpDot::m11 , vpDot::m20 , vpDot::m02 and second order centered moments vpDot::mu11 , vpDot::mu20 , vpDot::mu02 computed with respect to the blob centroid.

The coordinates of the region's centroid (u, v) can be computed from the moments by \(u=\frac{m_{10}}{m_{00}}\) and \(v=\frac{m_{01}}{m_{00}}\) .

Parameters:
activate: bool

true, if you want to compute the moments. If false, moments are not computed.
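A brief sketch of enabling the moments before tracking and reading the normalized centered moments afterwards; blob, I and germ are the objects from the earlier sketches, and indexing the returned ColVector with [] is an assumption about the bindings:

blob.setComputeMoments(True)           # must be enabled before initTracking()/track()
blob.initTracking(I, germ)
blob.track(I)

nij = blob.get_nij()                   # ColVector containing n20, n11, n02
n20, n11, n02 = nij[0], nij[1], nij[2]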

setEllipsoidBadPointsPercentage(self, percentage: float = 0.0) -> None

Set the percentage of sampled points that are considered non-conforming in terms of gray level on the inner and the outer ellipses. Points located on the inner ellipse should have the same gray level as the blob, while points located on the outer ellipse should have a different gray level.

Parameters:
percentage: float = 0.0

Percentage of points sampled with a bad gray level on the inner and outer ellipses that is admissible. 0 means that all the points should have the right level, while a value of 1 means that all the points can have a bad gray level.

setEllipsoidShapePrecision(self, ellipsoidShapePrecision: float) -> None

Indicates if the dot should have an ellipsoid shape to be valid.

  • 1 means full precision, whereas values close to 0 show a very bad accuracy.

  • Values lower or equal to 0 are brought back to 0.

  • Values higher than 1 are brought back to 1. To track a non-ellipsoidal shape use setEllipsoidShapePrecision(0).

The following example shows how to track a blob with a strong constraint on its ellipsoidal shape. The tracking will fail if the shape is not ellipsoidal.

vpDot2 dot;
dot.setEllipsoidShapePrecision(0.9); // to track a blob with a strong constraint on an ellipsoid shape
...
dot.track();

This other example shows how to remove any constraint on the shape. Here the tracker will be able to track any shape, including square or rectangular shapes.

vpDot2 dot;
dot.setEllipsoidShapePrecision(0.); // to track a blob without any constraint on the shape
...
dot.track();

Note

See getEllipsoidShapePrecision()

setGraphics(self, activate: bool) -> None

Activates the display of the border of the dot during the tracking. The default thickness of the overlaid drawings can be modified using setGraphicsThickness() .

Warning

To effectively display the dot graphics a call to vpDisplay::flush() is needed.

Note

See setGraphicsThickness()

Parameters:
activate: bool

If true, the border of the dot will be painted. false to turn off border painting.

setGraphicsThickness(self, t: int) -> None

Modify the default thickness that is set to 1 of the drawings in overlay when setGraphics() is enabled.

Note

See setGraphics()

setGrayLevelMax(self, max: int) -> None

Set the color level of pixels surrounding the dot. This is meant to be used to search a dot in a region of interest.

Note

See setGrayLevelMin() , setGrayLevelPrecision()

Parameters:
max: int

Intensity level of a dot to search in a region of interest.

setGrayLevelMin(self, min: int) -> None

Set the color level of the dot to search a dot in a region of interest. This level will be used to know if a pixel in the image belongs to the dot or not. Only pixels with higher level can belong to the dot. If the level is lower than the minimum level for a dot, set the level to MIN_IN_LEVEL.

Note

See setGrayLevelMax() , setGrayLevelPrecision()

Parameters:
min: int

Color level of a dot to search in a region of interest.

setGrayLevelPrecision(self, grayLevelPrecision: float) -> None

Set the precision of the gray level of the dot.

Note

See setGrayLevelMin() , setGrayLevelMax()

setHeight(self, height: float) -> None

Set the height of the dot. This is meant to be used to search a dot in an area.

Note

See setWidth() , setArea() , setSizePrecision()

setMaxSizeSearchDistPrecision(self, maxSizeSearchDistancePrecision: float) -> None

Set the precision of the search maximum distance to get the starting point on a dot border. Too low a value means a large search area.

setSizePrecision(self, sizePrecision: float) -> None

Set the precision of the size of the dot. Used to test the validity of the dot.

Note

See setWidth() , setHeight() , setArea()

setWidth(self, width: float) -> None

Set the width of the dot. This is meant to be used to search a dot in an area.

Note

See setHeight() , setArea() , setSizePrecision()

track(*args, **kwargs)

Overloaded function.

  1. track(self: visp._visp.blob.Dot2, I: visp._visp.core.ImageGray, canMakeTheWindowGrow: bool = true) -> None

Try to locate the dot in the image:

  • First, estimate the new position of the dot, using its previous position.

  • Then compute the center of gravity (surface, width, height) of the tracked entity from the Freeman chain elements.

  • If the dot is lost (estimated point too dark, too much surface change,…), search the dot in a window around the previous position.

  • If no valid dot is found in the window, an exception is thrown.

To get the center of gravity of the dot, call getCog() . To get the width or height of the dot, call getWidth() and getHeight() . The area of the dot is given by getArea() .

To compute all the inertia moments associated to the dot see setComputeMoments() .

To get the pixels coordinates on the dot boundary, see getList_u() and getList_v().

Parameters:
I

Image.

canMakeTheWindowGrow

if true, the size of the searching area is increased if the blob is not found, otherwise it stays the same. Default value is true.

  2. track(self: visp._visp.blob.Dot2, I: visp._visp.core.ImageGray, cog: visp._visp.core.ImagePoint, canMakeTheWindowGrow: bool = true) -> None

Track and get the new dot coordinates. See track() for a more complete description.

The behavior of this method is similar to the following code:

vpDot2 d;
d.track(I);
vpImagePoint cog = d.getCog();

Note

See track()

Parameters:
I

Image to process.

canMakeTheWindowGrow

if true, the size of the searching area is increased if the blob is not found, otherwise it stays the same. Default value is true.
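The second overload fills the ImagePoint passed as cog, avoiding a separate getCog() call. A sketch, assuming ImagePoint has a default constructor in the bindings and blob and I are as in the earlier sketches:

from visp.core import ImagePoint

cog = ImagePoint()
blob.track(I, cog)                     # cog now holds the new center of gravity
print(cog)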

static trackAndDisplay(dots: list[visp._visp.blob.Dot2], I: visp._visp.core.ImageGray, cogs: list[visp._visp.core.ImagePoint], desiredCogs: list[visp._visp.core.ImagePoint] | None) -> None

Wrapper for the trackAndDisplay method, see the C++ ViSP documentation.