Tutorial: Image-based visual servo

Introduction

The aim of all vision-based control schemes is to minimize an error ${\bf e}(t)$, which is typically defined by ${\bf e}(t) = {\bf s}[{\bf m}(t), {\bf a}]-{\bf s}^*$.

Traditional image-based control schemes use the image-plane coordinates of a set of points to define the set of visual features $\bf s$. The image measurements $\bf m$ are usually the pixel coordinates of the set of image points (but this is not the only possible choice), and the camera intrinsic parameters $\bf a$ are used to go from image measurements expressed in pixels to the features.

For a 3D point with coordinates ${\bf X} = (X,Y,Z)$ in the camera frame, which projects in the image as a 2D point with coordinates ${\bf x} = (x,y)$, we have:

\[ \left\{\begin{array}{l} x = X/Z = (u - u_0)/p_x\\ y = Y/Z = (v - v_0)/p_y \end{array}\right. \]

where ${\bf m}=(u,v)$ gives the coordinates of the image point expressed in pixel units, and ${\bf a}=(u_0,v_0, p_x,p_y)$ is the set of camera intrinsic parameters: $u_0$ and $v_0$ are the coordinates of the principal point, while $p_x$ and $p_y$ are the ratios between the focal length and the pixel size along each axis.
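
In ViSP this pixel-to-meter conversion is implemented by the vpPixelMeterConversion class. The short sketch below is not part of the tutorial sources and uses arbitrary illustrative values; it simply shows the conversion from ${\bf m}=(u,v)$ to ${\bf x}=(x,y)$:

#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpPixelMeterConversion.h>

int main()
{
  // Intrinsics a = (u0, v0, px, py); the constructor takes (px, py, u0, v0). Values are illustrative.
  vpCameraParameters cam(840, 840, 320, 240);
  double u = 400, v = 250; // image measurement m = (u, v) in pixels
  double x, y;             // normalized coordinates x = (x, y)
  vpPixelMeterConversion::convertPoint(cam, u, v, x, y);
  // Without distortion: x = (u - u0)/px and y = (v - v0)/py
  return 0;
}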

Let the spatial velocity of the camera be denoted by $ {\bf v}_c = (v_c, \omega_c)$, with $ v_c $ the instantaneous linear velocity of the origin of the camera frame and $ \omega_c $ the instantaneous angular velocity of the camera frame. The relationship between $ \dot{\bf x} $, the time variation of the feature $\bf s = x$, and the camera velocity $ {\bf v}_c $ is given by

\[ \dot{\bf s} = {\bf L_x} {\bf v}_c\]

where the interaction matrix $ {\bf L_x}$ is given by

\[ {\bf L_x} = \left[\begin{array}{cccccc} -1/Z & 0 & x/Z & xy & -(1+x^2) & y \\ 0 & -1/Z & y/Z & 1+y^2 & -xy & -x \end{array}\right]\]
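
As an illustration only (vpServo builds this matrix internally from the features, so the tutorial never writes it by hand), the following sketch fills the $2 \times 6$ matrix for one point using a vpMatrix:

#include <visp3/core/vpMatrix.h>

// Interaction matrix of an image point (x, y) observed at depth Z
vpMatrix pointInteractionMatrix(double x, double y, double Z)
{
  vpMatrix L(2, 6);
  L[0][0] = -1 / Z;    L[0][1] = 0;             L[0][2] = x / Z;
  L[0][3] = x * y;     L[0][4] = -(1 + x * x);  L[0][5] = y;
  L[1][0] = 0;         L[1][1] = -1 / Z;        L[1][2] = y / Z;
  L[1][3] = 1 + y * y; L[1][4] = -x * y;        L[1][5] = -x;
  return L;
}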

Considering $ {\bf v}_c $ as the input to the robot controller, and trying to ensure for instance an exponential decoupled decrease of the error, we obtain

\[ {\bf v}_c = -\lambda {\bf L}_{\bf x}^{+} {\bf e} \]

Note that all the material (source code and images) described in this tutorial is part of the ViSP source code (in the tutorial/visual-servoing/ibvs folder) and can be found at https://github.com/lagadic/visp/tree/master/tutorial/visual-servoing/ibvs.

Basic IBVS simulation

The following example available in tutorial-ibvs-4pts.cpp shows how to use ViSP to implement an IBVS simulation using 4 points as visual features.

#include <visp3/core/vpConfig.h>
#include <visp3/robot/vpSimulatorCamera.h>
#include <visp3/visual_features/vpFeatureBuilder.h>
#include <visp3/vs/vpServo.h>

int main()
{
#ifdef ENABLE_VISP_NAMESPACE
  using namespace VISP_NAMESPACE_NAME;
#endif
  try {
    // Desired and initial camera positions wrt the object frame
    vpHomogeneousMatrix cdMo(0, 0, 0.75, 0, 0, 0);
    vpHomogeneousMatrix cMo(0.15, -0.1, 1., vpMath::rad(10), vpMath::rad(-10), vpMath::rad(50));

    // Four 3D points defining a 20cm by 20cm square target
    vpPoint point[4];
    point[0].setWorldCoordinates(-0.1, -0.1, 0);
    point[1].setWorldCoordinates(0.1, -0.1, 0);
    point[2].setWorldCoordinates(0.1, 0.1, 0);
    point[3].setWorldCoordinates(-0.1, 0.1, 0);

    // Eye-in-hand task; interaction matrix computed from the current features
    vpServo task;
    task.setServo(vpServo::EYEINHAND_CAMERA);
    task.setInteractionMatrixType(vpServo::CURRENT);
    task.setLambda(0.5);

    // Current (p) and desired (pd) point features
    vpFeaturePoint p[4], pd[4];
    for (unsigned int i = 0; i < 4; i++) {
      point[i].track(cdMo);
      vpFeatureBuilder::create(pd[i], point[i]);
      point[i].track(cMo);
      vpFeatureBuilder::create(p[i], point[i]);
      task.addFeature(p[i], pd[i]);
    }

    // Free flying camera simulator
    vpHomogeneousMatrix wMc, wMo;
    vpSimulatorCamera robot;
    robot.setSamplingTime(0.040);
    robot.getPosition(wMc);
    wMo = wMc * cMo;

    std::cout << "Start visual-servoing loop..." << std::endl;
    for (unsigned int iter = 0; iter < 150; iter++) {
      robot.getPosition(wMc);
      cMo = wMc.inverse() * wMo;
      for (unsigned int i = 0; i < 4; i++) {
        point[i].track(cMo);
        vpFeatureBuilder::create(p[i], point[i]);
      }
      // Compute the control law and apply the resulting velocity to the camera
      vpColVector v = task.computeControlLaw();
      robot.setVelocity(vpRobot::CAMERA_FRAME, v);
    }
    std::cout << "Stop visual-servoing loop" << std::endl;
  }
  catch (const vpException &e) {
    std::cout << "Catch an exception: " << e << std::endl;
  }
}

Now we give a line by line description of the source:

#include <visp3/visual_features/vpFeatureBuilder.h>

This header is a common entry point for the classes that implement visual features; in our case it gives access to vpFeaturePoint, which allows us to handle ${\bf x} = (x,y)$ described in the Introduction.
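
As a side note, a vpFeaturePoint simply stores the normalized coordinates $(x,y)$ and the depth $Z$ needed by the interaction matrix. The small sketch below, with illustrative values, sets them by hand; in the tutorial this is done for us by vpFeatureBuilder::create():

#include <visp3/visual_features/vpFeaturePoint.h>

int main()
{
  vpFeaturePoint s;
  s.set_x(0.1);   // normalized coordinate x = X/Z
  s.set_y(-0.05); // normalized coordinate y = Y/Z
  s.set_Z(1.2);   // depth Z used by the interaction matrix
  return 0;
}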

#include <visp3/vs/vpServo.h>

Include the header of the vpServo class that implements the control law

\[ {\bf v}_c = -\lambda {\bf L}_{\bf x}^{+} {\bf e} \]

described in the Introduction.

#include <visp3/robot/vpSimulatorCamera.h>

Include the header of the vpSimulatorCamera class that makes it possible to simulate a 6-dof free-flying camera.

Then in the main() function, we define the desired and initial position of the camera as two homogeneous matrices; cdMo refers to ${^{c^*}}{\bf M}_o$ and cMo to ${^{c}}{\bf M}_o$.

vpHomogeneousMatrix cdMo(0, 0, 0.75, 0, 0, 0);
vpHomogeneousMatrix cMo(0.15, -0.1, 1., vpMath::rad(10), vpMath::rad(-10), vpMath::rad(50));

Then we define four 3D points that represent the corners of a 20cm by 20cm square.

vpPoint point[4] ;
point[0].setWorldCoordinates(-0.1,-0.1, 0);
point[1].setWorldCoordinates( 0.1,-0.1, 0);
point[2].setWorldCoordinates( 0.1, 0.1, 0);
point[3].setWorldCoordinates(-0.1, 0.1, 0);

The instantiation of the visual servo task is done with the next lines. We initialize the task as an eye in hand visual servo. Resulting velocities computed by the controller are those that should be applied in the camera frame: $ {\bf v}_c $. The interaction matrix will be computed from the current visual features. Thus they need to be updated at each iteration of the control loop. Finally, the constant gain $ \lambda$ is set to 0.5.
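
vpServo task;
task.setServo(vpServo::EYEINHAND_CAMERA);
task.setInteractionMatrixType(vpServo::CURRENT);
task.setLambda(0.5);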

It is now time to define four visual features as points in the image-plane. To this end we instantiate the vpFeaturePoint class. The current point feature ${\bf s}$ is implemented in p[i]. The desired point feature ${\bf s}^*$ is implemented in pd[i].

vpFeaturePoint p[4], pd[4] ;

Each feature is obtained by computing the position of the 3D points in the corresponding camera frame, and then by applying the perspective projection. Once current and desired features are created, they are added to the visual servo task.

for (unsigned int i = 0 ; i < 4 ; i++) {
point[i].track(cdMo);
vpFeatureBuilder::create(pd[i], point[i]);
point[i].track(cMo);
vpFeatureBuilder::create(p[i], point[i]);
task.addFeature(p[i], pd[i]);
}

For the simulation we first need to create two homogeneous transformations, wMc and wMo, that define respectively the position of the camera and the position of the object in the world frame.
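
vpHomogeneousMatrix wMc, wMo;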

Secondly, we create an instance of our free-flying camera. Here we also set the sampling time to 0.040 seconds. When a velocity is applied to the camera, this time will be used by the exponential map to determine the next position of the camera.

vpSimulatorCamera robot;
robot.setSamplingTime(0.040);

Finally, from the initial position wMc of the camera and the position of the object previously fixed in the camera frame cMo, we compute the position of the object in the world frame wMo. Since in our simulation the object is static, wMo will remain unchanged.

robot.getPosition(wMc);
wMo = wMc * cMo;

Now we can enter the visual servo loop. When a velocity is applied to our free-flying camera, the position of the camera frame wMc will evolve wrt the world frame. From this position we compute the position of the object in the new camera frame.

robot.getPosition(wMc);
cMo = wMc.inverse() * wMo;

The current visual features are then updated by projecting the 3D points in the image-plane associated with the new camera location cMo.

for (unsigned int i = 0 ; i < 4 ; i++) {
point[i].track(cMo);
vpFeatureBuilder::create(p[i], point[i]);
}

Finally, the velocity skew $ {\bf v}_c $ is computed.
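
vpColVector v = task.computeControlLaw();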

This 6-dimension velocity vector is then applied to the camera.
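
robot.setVelocity(vpRobot::CAMERA_FRAME, v);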

Before exiting the program, we free all the memory by killing the task.

task.kill();

The next Tutorial: Real-time curves plotter tool shows how to modify the previous example to plot some curves that illustrate the visual servo behavior.

IBVS simulation with basic viewers

The previous IBVS simulation can be modified easily to introduce a basic internal and external camera viewer. This is implemented in tutorial-ibvs-4pts-display.cpp and given below:

#include <visp3/core/vpConfig.h>
#include <visp3/gui/vpDisplayGDI.h>
#include <visp3/gui/vpDisplayOpenCV.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/gui/vpProjectionDisplay.h>
#include <visp3/robot/vpSimulatorCamera.h>
#include <visp3/visual_features/vpFeatureBuilder.h>
#include <visp3/vs/vpServo.h>
#include <visp3/vs/vpServoDisplay.h>
#ifdef ENABLE_VISP_NAMESPACE
using namespace VISP_NAMESPACE_NAME;
#endif
void display_trajectory(const vpImage<unsigned char> &I, std::vector<vpPoint> &point, const vpHomogeneousMatrix &cMo,
const vpCameraParameters &cam);
void display_trajectory(const vpImage<unsigned char> &I, std::vector<vpPoint> &point, const vpHomogeneousMatrix &cMo,
const vpCameraParameters &cam)
{
static std::vector<vpImagePoint> traj[4];
vpImagePoint cog;
for (unsigned int i = 0; i < 4; i++) {
// Project the point at the given camera position
point[i].project(cMo);
vpMeterPixelConversion::convertPoint(cam, point[i].get_x(), point[i].get_y(), cog);
traj[i].push_back(cog);
}
for (unsigned int i = 0; i < 4; i++) {
for (unsigned int j = 1; j < traj[i].size(); j++) {
vpDisplay::displayLine(I, traj[i][j - 1], traj[i][j], vpColor::green);
}
}
}
int main()
{
try {
vpHomogeneousMatrix cdMo(0, 0, 0.75, 0, 0, 0);
vpHomogeneousMatrix cMo(0.15, -0.1, 1., vpMath::rad(10), vpMath::rad(-10), vpMath::rad(50));
std::vector<vpPoint> point;
point.push_back(vpPoint(-0.1, -0.1, 0));
point.push_back(vpPoint(0.1, -0.1, 0));
point.push_back(vpPoint(0.1, 0.1, 0));
point.push_back(vpPoint(-0.1, 0.1, 0));
vpServo task;
task.setServo(vpServo::EYEINHAND_CAMERA);
task.setInteractionMatrixType(vpServo::CURRENT);
task.setLambda(0.5);
vpFeaturePoint p[4], pd[4];
for (unsigned int i = 0; i < 4; i++) {
point[i].track(cdMo);
vpFeatureBuilder::create(pd[i], point[i]);
point[i].track(cMo);
vpFeatureBuilder::create(p[i], point[i]);
task.addFeature(p[i], pd[i]);
}
vpHomogeneousMatrix wMc, wMo;
vpSimulatorCamera robot;
robot.setSamplingTime(0.040);
robot.getPosition(wMc);
wMo = wMc * cMo;
vpImage<unsigned char> Iint(480, 640, 255);
vpImage<unsigned char> Iext(480, 640, 255);
#if defined(VISP_HAVE_X11)
vpDisplayX displayInt(Iint, 0, 0, "Internal view");
vpDisplayX displayExt(Iext, 670, 0, "External view");
#elif defined(VISP_HAVE_GDI)
vpDisplayGDI displayInt(Iint, 0, 0, "Internal view");
vpDisplayGDI displayExt(Iext, 670, 0, "External view");
#elif defined(HAVE_OPENCV_HIGHGUI)
vpDisplayOpenCV displayInt(Iint, 0, 0, "Internal view");
vpDisplayOpenCV displayExt(Iext, 670, 0, "External view");
#else
std::cout << "No image viewer is available..." << std::endl;
#endif
#if defined(VISP_HAVE_DISPLAY)
vpProjectionDisplay externalview;
for (unsigned int i = 0; i < 4; i++)
externalview.insert(point[i]);
#endif
vpCameraParameters cam(840, 840, Iint.getWidth() / 2, Iint.getHeight() / 2);
vpHomogeneousMatrix cextMo(0, 0, 3, 0, 0, 0);
while (1) {
robot.getPosition(wMc);
cMo = wMc.inverse() * wMo;
for (unsigned int i = 0; i < 4; i++) {
point[i].track(cMo);
vpFeatureBuilder::create(p[i], point[i]);
}
vpColVector v = task.computeControlLaw();
robot.setVelocity(vpRobot::CAMERA_FRAME, v);
vpDisplay::display(Iint);
vpDisplay::display(Iext);
display_trajectory(Iint, point, cMo, cam);
vpServoDisplay::display(task, cam, Iint, vpColor::green, vpColor::red);
#if defined(VISP_HAVE_DISPLAY)
externalview.display(Iext, cextMo, cMo, cam, vpColor::red, true);
#endif
vpDisplay::flush(Iint);
vpDisplay::flush(Iext);
// A click to exit
if (vpDisplay::getClick(Iint, false) || vpDisplay::getClick(Iext, false))
break;
vpTime::wait(robot.getSamplingTime() * 1000);
}
}
catch (const vpException &e) {
std::cout << "Catch an exception: " << e << std::endl;
}
}

The result of this program is visible in the next videos. The first one shows the internal camera view with the straight line trajectory of each point in the image. The second one provides an external view that shows the 3D camera trajectory to reach the desired position.

We now explain the new lines that were introduced.

First, we add the headers of the classes that are used to introduce the viewers and some display functionality. vpProjectionDisplay is the class that handles the external view from a given camera position, while vpServoDisplay overlays the position of the current and desired features in the internal camera view.

#include <visp3/gui/vpDisplayX.h>
#include <visp3/gui/vpDisplayGDI.h>
#include <visp3/gui/vpProjectionDisplay.h>
#include <visp3/vs/vpServoDisplay.h>

Secondly, we introduce the function display_trajectory() that displays the trajectory of the current features in the image. From the 3D coordinates of the points given in the object frame, we compute their respective position in the camera frame, then we apply the perspective projection before retrieving their sub-pixel positions in the image thanks to the camera parameters. The successive sub-pixel positions are stored in a vector named traj and displayed as a green trajectory.

void display_trajectory(const vpImage<unsigned char> &I, std::vector<vpPoint> &point,
const vpHomogeneousMatrix &cMo, const vpCameraParameters &cam)
{
int thickness = 1;
static std::vector<vpImagePoint> traj[4];
vpImagePoint cog;
for (unsigned int i=0; i<4; i++) {
// Project the point at the given camera position
point[i].project(cMo);
vpMeterPixelConversion::convertPoint(cam, point[i].get_x(), point[i].get_y(), cog);
traj[i].push_back(cog);
}
for (unsigned int i=0; i<4; i++) {
for (unsigned int j=1; j<traj[i].size(); j++) {
vpDisplay::displayLine(I, traj[i][j-1], traj[i][j], vpColor::green, thickness);
}
}
}

We then enter the main() where we create two images that will be displayed in a window. The first image Iint is dedicated to the internal camera view. It shows the content of the images seen by the simulated camera. The second image Iext corresponds to the image seen by an external camera that observes the simulated camera.

vpImage<unsigned char> Iint(480, 640, 255) ;
vpImage<unsigned char> Iext(480, 640, 255) ;
#if defined(VISP_HAVE_X11)
vpDisplayX displayInt(Iint, 0, 0, "Internal view");
vpDisplayX displayExt(Iext, 670, 0, "External view");
#elif defined(VISP_HAVE_GDI)
vpDisplayGDI displayInt(Iint, 0, 0, "Internal view");
vpDisplayGDI displayExt(Iext, 670, 0, "External view");
#else
std::cout << "No image viewer is available..." << std::endl;
#endif

We create an instance of the vpProjectionDisplay class. This class is only available if at least one of the displays (X11, GDI, OpenCV, GTK, D3D9) is available; that is why we use the VISP_HAVE_DISPLAY macro. We then insert the points used to define the target. Later, the 3D coordinates of these points defined in the object frame will be used and projected into Iext.

#if defined(VISP_HAVE_DISPLAY)
vpProjectionDisplay externalview;
for (unsigned int i = 0 ; i < 4 ; i++)
externalview.insert(point[i]) ;
#endif

We initialize the intrinsic camera parameters that are used in display_trajectory() to determine the pixel positions of the visual features in the image.

vpCameraParameters cam(840, 840, Iint.getWidth()/2, Iint.getHeight()/2);

We also set the position of the external camera that will observe the evolution of the simulated camera during the servo.

vpHomogeneousMatrix cextMo(0,0,3, 0,0,0);

Finally, at each iteration of the servo loop we update the viewers:

display_trajectory(Iint, point, cMo, cam);
#if defined(VISP_HAVE_DISPLAY)
externalview.display(Iext, cextMo, cMo, cam, vpColor::red, true);
#endif

IBVS simulation with wireframe viewers

The IBVS simulation presented in section Basic IBVS simulation can also be modified to introduce wireframe internal and external camera viewers. This is implemented in tutorial-ibvs-4pts-wireframe-camera.cpp and given below:

#include <visp3/core/vpConfig.h>
#include <visp3/gui/vpDisplayGDI.h>
#include <visp3/gui/vpDisplayOpenCV.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/gui/vpProjectionDisplay.h>
#include <visp3/robot/vpSimulatorCamera.h>
#include <visp3/robot/vpWireFrameSimulator.h>
#include <visp3/visual_features/vpFeatureBuilder.h>
#include <visp3/vs/vpServo.h>
#include <visp3/vs/vpServoDisplay.h>
#ifdef ENABLE_VISP_NAMESPACE
using namespace VISP_NAMESPACE_NAME;
#endif
void display_trajectory(const vpImage<unsigned char> &I, std::vector<vpPoint> &point, const vpHomogeneousMatrix &cMo,
const vpCameraParameters &cam);
void display_trajectory(const vpImage<unsigned char> &I, std::vector<vpPoint> &point, const vpHomogeneousMatrix &cMo,
const vpCameraParameters &cam)
{
static std::vector<vpImagePoint> traj[4];
vpImagePoint cog;
for (unsigned int i = 0; i < 4; i++) {
// Project the point at the given camera position
point[i].project(cMo);
vpMeterPixelConversion::convertPoint(cam, point[i].get_x(), point[i].get_y(), cog);
traj[i].push_back(cog);
}
for (unsigned int i = 0; i < 4; i++) {
for (unsigned int j = 1; j < traj[i].size(); j++) {
vpDisplay::displayLine(I, traj[i][j - 1], traj[i][j], vpColor::green);
}
}
}
int main()
{
try {
vpHomogeneousMatrix cdMo(0, 0, 0.75, 0, 0, 0);
vpHomogeneousMatrix cMo(0.15, -0.1, 1., vpMath::rad(10), vpMath::rad(-10), vpMath::rad(50));
std::vector<vpPoint> point(4);
point[0].setWorldCoordinates(-0.1, -0.1, 0);
point[1].setWorldCoordinates(0.1, -0.1, 0);
point[2].setWorldCoordinates(0.1, 0.1, 0);
point[3].setWorldCoordinates(-0.1, 0.1, 0);
vpServo task;
task.setServo(vpServo::EYEINHAND_CAMERA);
task.setInteractionMatrixType(vpServo::CURRENT);
task.setLambda(0.5);
vpFeaturePoint p[4], pd[4];
for (unsigned int i = 0; i < 4; i++) {
point[i].track(cdMo);
vpFeatureBuilder::create(pd[i], point[i]);
point[i].track(cMo);
vpFeatureBuilder::create(p[i], point[i]);
task.addFeature(p[i], pd[i]);
}
vpHomogeneousMatrix wMc, wMo;
vpSimulatorCamera robot;
robot.setSamplingTime(0.040);
robot.getPosition(wMc);
wMo = wMc * cMo;
vpImage<unsigned char> Iint(480, 640, 0);
vpImage<unsigned char> Iext(480, 640, 0);
#if defined(VISP_HAVE_X11)
vpDisplayX displayInt(Iint, 0, 0, "Internal view");
vpDisplayX displayExt(Iext, 670, 0, "External view");
#elif defined(VISP_HAVE_GDI)
vpDisplayGDI displayInt(Iint, 0, 0, "Internal view");
vpDisplayGDI displayExt(Iext, 670, 0, "External view");
#elif defined(HAVE_OPENCV_HIGHGUI)
vpDisplayOpenCV displayInt(Iint, 0, 0, "Internal view");
vpDisplayOpenCV displayExt(Iext, 670, 0, "External view");
#else
std::cout << "No image viewer is available..." << std::endl;
#endif
vpCameraParameters cam(840, 840, Iint.getWidth() / 2, Iint.getHeight() / 2);
vpHomogeneousMatrix cextMo(0, 0, 3, 0, 0, 0);
vpWireFrameSimulator sim;
sim.initScene(vpWireFrameSimulator::PLATE, vpWireFrameSimulator::D_STANDARD);
sim.setCameraPositionRelObj(cMo);
sim.setDesiredCameraPosition(cdMo);
sim.setExternalCameraPosition(cextMo);
sim.setInternalCameraParameters(cam);
sim.setExternalCameraParameters(cam);
while (1) {
robot.getPosition(wMc);
cMo = wMc.inverse() * wMo;
for (unsigned int i = 0; i < 4; i++) {
point[i].track(cMo);
vpFeatureBuilder::create(p[i], point[i]);
}
vpColVector v = task.computeControlLaw();
robot.setVelocity(vpRobot::CAMERA_FRAME, v);
// Update the wireframe simulator with the new camera position
sim.setCameraPositionRelObj(cMo);
vpDisplay::display(Iint);
vpDisplay::display(Iext);
sim.getInternalImage(Iint);
sim.getExternalImage(Iext);
display_trajectory(Iint, point, cMo, cam);
vpDisplay::flush(Iint);
vpDisplay::flush(Iext);
// A click in the internal view to exit
if (vpDisplay::getClick(Iint, false))
break;
vpTime::wait(1000 * robot.getSamplingTime());
}
}
catch (const vpException &e) {
std::cout << "Catch an exception: " << e << std::endl;
}
}

The result of this program is visible in the next videos. The first one shows the internal wireframe camera view with the straight line trajectory of each point in the image. The second one provides an external wireframe view that shows the 3D camera trajectory to reach the desired position.

We now explain the new lines that were introduced.

First we include the header of the wireframe simulator.

#include <visp3/robot/vpWireFrameSimulator.h>

Then in the main(), we create an instance of the simulator. First we initialize the object in the scene. We recall that the target is a 20cm by 20cm square. This is exactly an object handled by the simulator and defined as a vpWireFrameSimulator::PLATE. By vpWireFrameSimulator::D_STANDARD we indicate that the object displayed at the desired position is also a PLATE. Secondly, we initialize the camera position wrt the object using cMo, and also the desired camera position, the one the camera has to reach at the end of the servo, using cdMo. Next we initialize the position of a static external camera that will observe the simulated camera during the servoing using cextMo. All these poses are defined wrt the object frame. Finally, we set the intrinsic camera parameters used for the internal and external viewers.
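
vpWireFrameSimulator sim;
sim.initScene(vpWireFrameSimulator::PLATE, vpWireFrameSimulator::D_STANDARD);
sim.setCameraPositionRelObj(cMo);
sim.setDesiredCameraPosition(cdMo);
sim.setExternalCameraPosition(cextMo);
sim.setInternalCameraParameters(cam);
sim.setExternalCameraParameters(cam);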

At each iteration of the servo loop, we update the wireframe simulator with the new camera position.
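
sim.setCameraPositionRelObj(cMo);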

Then we update the drawings in the internal and external viewers associated to their respective images.

sim.getInternalImage(Iint);
sim.getExternalImage(Iext);

Note here that the same kind of simulation could be achieved with a Viper arm or a gantry robot. The programs that do these simulations are provided in tutorial-ibvs-4pts-wireframe-robot-viper.cpp and in tutorial-ibvs-4pts-wireframe-robot-afma6.cpp.

The next video illustrates the behavior of the same IBVS implemented in tutorial-ibvs-4pts-wireframe-robot-viper.cpp on a Viper arm.

IBVS simulation with image processing

Note
In this section we assume that you are familiar with blob tracking in ViSP (vpDot2 class).

The IBVS simulation presented in section Basic IBVS simulation can be modified to introduce image processing that tracks the point features using a blob tracker. This is implemented in tutorial-ibvs-4pts-image-tracking.cpp. The code is given below:

#include <visp3/core/vpConfig.h>
#include <visp3/gui/vpDisplayGDI.h>
#include <visp3/gui/vpDisplayOpenCV.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/io/vpImageIo.h>
#include <visp3/robot/vpImageSimulator.h>
#include <visp3/robot/vpSimulatorCamera.h>
#include <visp3/visual_features/vpFeatureBuilder.h>
#include <visp3/vs/vpServo.h>
#include <visp3/vs/vpServoDisplay.h>
#ifdef ENABLE_VISP_NAMESPACE
using namespace VISP_NAMESPACE_NAME;
#endif
void display_trajectory(const vpImage<unsigned char> &I, const std::vector<vpDot2> &dot);
class vpVirtualGrabber
{
public:
vpVirtualGrabber(const std::string &filename, const vpCameraParameters &cam) : sim_(), target_(), cam_()
{
// The target is a 20cm by 20cm square
// Initialise the 3D coordinates of the target corners
for (int i = 0; i < 4; i++)
X_[i].resize(3);
// Top left Top right Bottom right Bottom left
X_[0][0] = -0.1;
X_[1][0] = 0.1;
X_[2][0] = 0.1;
X_[3][0] = -0.1;
X_[0][1] = -0.1;
X_[1][1] = -0.1;
X_[2][1] = 0.1;
X_[3][1] = 0.1;
X_[0][2] = 0;
X_[1][2] = 0;
X_[2][2] = 0;
X_[3][2] = 0;
vpImageIo::read(target_, filename);
// Initialize the image simulator
cam_ = cam;
sim_.setInterpolationType(vpImageSimulator::BILINEAR_INTERPOLATION);
sim_.init(target_, X_);
}
void acquire(vpImage<unsigned char> &I, const vpHomogeneousMatrix &cMo)
{
sim_.setCleanPreviousImage(true);
sim_.setCameraPosition(cMo);
sim_.getImage(I, cam_);
}
private:
vpColVector X_[4]; // 3D coordinates of the target corners
vpImageSimulator sim_; // image simulator used to project the target
vpImage<unsigned char> target_; // image of the target
vpCameraParameters cam_; // camera parameters
};
void display_trajectory(const vpImage<unsigned char> &I, const std::vector<vpDot2> &dot)
{
static std::vector<vpImagePoint> traj[4];
for (unsigned int i = 0; i < 4; i++) {
traj[i].push_back(dot[i].getCog());
}
for (unsigned int i = 0; i < 4; i++) {
for (unsigned int j = 1; j < traj[i].size(); j++) {
vpDisplay::displayLine(I, traj[i][j - 1], traj[i][j], vpColor::green);
}
}
}
int main()
{
#if defined(VISP_HAVE_X11) || defined(VISP_HAVE_GDI) || defined(VISP_HAVE_OPENCV)
try {
vpHomogeneousMatrix cdMo(0, 0, 0.75, 0, 0, 0);
vpHomogeneousMatrix cMo(0.15, -0.1, 1., vpMath::rad(10), vpMath::rad(-10), vpMath::rad(50));
vpImage<unsigned char> I(480, 640, 255);
vpCameraParameters cam(840, 840, I.getWidth() / 2, I.getHeight() / 2);
std::vector<vpPoint> point;
point.push_back(vpPoint(-0.1, -0.1, 0));
point.push_back(vpPoint(0.1, -0.1, 0));
point.push_back(vpPoint(0.1, 0.1, 0));
point.push_back(vpPoint(-0.1, 0.1, 0));
vpServo task;
task.setServo(vpServo::EYEINHAND_CAMERA);
task.setInteractionMatrixType(vpServo::CURRENT);
task.setLambda(0.5);
vpVirtualGrabber g("./target_square.pgm", cam);
g.acquire(I, cMo);
#if defined(VISP_HAVE_X11)
vpDisplayX d(I, 0, 0, "Current camera view");
#elif defined(VISP_HAVE_GDI)
vpDisplayGDI d(I, 0, 0, "Current camera view");
#elif defined(HAVE_OPENCV_HIGHGUI)
vpDisplayOpenCV d(I, 0, 0, "Current camera view");
#else
std::cout << "No image viewer is available..." << std::endl;
#endif
vpDisplay::display(I);
vpDisplay::displayText(I, 10, 10, "Click in the 4 dots to initialise the tracking and start the servo", vpColor::red);
vpDisplay::flush(I);
vpFeaturePoint p[4], pd[4];
std::vector<vpDot2> dot(4);
for (unsigned int i = 0; i < 4; i++) {
point[i].track(cdMo);
vpFeatureBuilder::create(pd[i], point[i]);
dot[i].setGraphics(true);
dot[i].initTracking(I);
vpFeatureBuilder::create(p[i], cam, dot[i].getCog());
task.addFeature(p[i], pd[i]);
}
vpHomogeneousMatrix wMc, wMo;
vpSimulatorCamera robot;
robot.setSamplingTime(0.040);
robot.getPosition(wMc);
wMo = wMc * cMo;
for (;;) {
robot.getPosition(wMc);
cMo = wMc.inverse() * wMo;
g.acquire(I, cMo);
vpDisplay::display(I);
for (unsigned int i = 0; i < 4; i++) {
dot[i].track(I);
vpFeatureBuilder::create(p[i], cam, dot[i].getCog());
vpColVector cP;
point[i].changeFrame(cMo, cP);
p[i].set_Z(cP[2]);
}
vpColVector v = task.computeControlLaw();
robot.setVelocity(vpRobot::CAMERA_FRAME, v);
display_trajectory(I, dot);
vpServoDisplay::display(task, cam, I, vpColor::green, vpColor::red);
vpDisplay::flush(I);
if (vpDisplay::getClick(I, false))
break;
vpTime::wait(robot.getSamplingTime() * 1000);
}
}
catch (const vpException &e) {
std::cout << "Catch an exception: " << e << std::endl;
}
#endif
}

The result of this program is visible in the next video.

We now explain the new lines that were introduced.

First we create a white image in which the target will be projected prior to the image processing.

vpImage<unsigned char> I(480, 640, 255);

Since we will have to convert pixel coordinates to visual features expressed in meters, we need to initialize the intrinsic camera parameters.

vpCameraParameters cam(840, 840, I.getWidth()/2, I.getHeight()/2);

To retrieve the simulated image that depends on the simulated camera position, we create an instance of a virtual grabber. Its goal is to project the image of the target ./target_square.pgm at a given camera position cMo, and then to retrieve the corresponding image I. Note here that the centers of gravity of the blobs in the .pgm file define a 20cm by 20cm square.

vpVirtualGrabber g("./target_square.pgm", cam);
g.acquire(I, cMo);

The current visual features are computed using a vpDot2 blob tracker. Once the four trackers are instantiated, we wait for a mouse click in each blob to initialize the tracker. Then we compute the current visual features $(x,y)$ from the camera parameters and the cog position of each blob.

for (unsigned int i = 0 ; i < 4 ; i++) {
...
dot[i].setGraphics(true);
dot[i].initTracking(I);
vpFeatureBuilder::create(p[i], cam, dot[i].getCog());
...
}

In the visual servo loop, at each iteration we get a new image of the target.

g.acquire(I, cMo);

We track each blob and update the values of the current visual features as previously.

for (unsigned int i = 0 ; i < 4 ; i++) {
dot[i].track(I);
vpFeatureBuilder::create(p[i], cam, dot[i].getCog());

As described in the Introduction, since the interaction matrix $\bf L_x $ depends on $(x,y)$ but also on $Z$ the depth of the feature, we need to update $Z$. This is done by projecting each 3D point of the target in the camera frame using:

point[i].changeFrame(cMo, cP) ;
p[i].set_Z(cP[2]);
}

Next tutorial

You are now ready to see the Tutorial: Visual servo simulation on a pioneer-like unicycle robot and Tutorial: How to boost your visual servo control law.