Visual Servoing Platform  version 3.6.1 under development (2024-12-17)
Tutorial: How to boost your visual servo control law

Introduction

This tutorial gives some hints to boost your visual servo control law in order to speed up the time to convergence.

Note that all the material (source code and images) described in this tutorial is part of the ViSP source code (in the tutorial/visual-servoing/ibvs folder) and can be found at https://github.com/lagadic/visp/tree/master/tutorial/visual-servoing/ibvs.

To illustrate this tutorial, let us consider the example tutorial-ibvs-4pts-plotter.cpp introduced in Tutorial: Image-based visual servo. This example considers an image-based visual servoing scheme using four points as visual features.

In the general case, considering $ \dot {\bf q} $ as the input velocities to the robot controller, the control laws provided in the vpServo class lead to the following control law: $ \dot {\bf q} = \pm \lambda {{\bf \widehat J}_e}^+ {\bf e}$, where the sign is negative for an eye-in-hand servo and positive for an eye-to-hand servo, $\lambda$ is a constant gain, $ {\bf \widehat J}_e$ is an estimation of the task Jacobian and $\bf e $ is the error to regulate to zero. As described in [3], this control law ensures an exponential decoupled decrease of the error: ${\dot {\bf e}} = -\lambda {\bf e}$.
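To see why the error decreases exponentially, one can plug the command into the error dynamics ${\dot {\bf e}} = {\bf J}_e {\dot {\bf q}}$. Assuming, in the eye-in-hand case, a perfect estimation of a full-rank task Jacobian (so that ${\bf J}_e {{\bf \widehat J}_e}^+ = {\bf I}$), this gives:

\[ {\dot {\bf e}} = {\bf J}_e {\dot {\bf q}} = - \lambda {\bf J}_e {{\bf \widehat J}_e}^+ {\bf e} = - \lambda {\bf e} \]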

This behavior is illustrated with the next figure, where we see the exponential decrease of the eight visual features (x and y for each point) and the corresponding six velocities that are applied to the robot controller. As a consequence, velocities are high when the error is large, and very low when the error is small, near convergence. At the beginning, we can also notice velocity discontinuities, with velocities jumping from zero to high values in a single iteration.

Convergence in 191 iterations with a constant gain.

This behavior can be reproduced by running the tutorial-ibvs-4pts-plotter.cpp example. Hereafter we recall the important lines of code used to compute the control law:

vpServo task;
task.setLambda(0.5); // Set the constant gain value
for (unsigned int i = 0; i < 4; i++) {
  ...
  task.addFeature(p[i], pd[i]); // Add visual features to the task
}
while (1) {
  for (unsigned int i = 0; i < 4; i++) {
    ...
    vpFeatureBuilder::create(p[i], point[i]); // Update the visual features used in the task
  }
  vpColVector v = task.computeControlLaw(); // Compute the control law
}

Using an adaptive gain

As implemented in tutorial-ibvs-4pts-plotter-gain-adaptive.cpp, it is possible to make the gain $ \lambda $ adaptive, so that it depends on the infinity norm of the error to regulate. Using an adaptive gain rather than a constant one reduces the convergence time. In that case the gain becomes:

\[ \lambda (x) = (\lambda_0 - \lambda_\infty) e^{ -\frac{ \lambda'_0}{\lambda_0 - \lambda_\infty}x} + \lambda_\infty \]

where:

  • $ x = ||{\bf e}||_{\infty} $ is the infinity norm of the error to consider.
  • $\lambda_0 = \lambda(0)$ is the gain at 0, that is for very small values of $||{\bf e}||$
  • $\lambda_\infty = \lim_{||{\bf e}|| \rightarrow \infty} \lambda(||{\bf e}||)$ is the gain at infinity, that is for very high values of $||{\bf e}||$
  • $\lambda'_0$ is the slope of $\lambda$ at $||{\bf e}|| = 0$

The impact of the adaptive gain is illustrated in the next figure. During the servo, the velocities applied to the controller are higher, especially when the visual error ${\bf e}$ is small. But as in the previous section, using an adaptive gain doesn't ensure continuous velocities, especially at the first iteration.

Convergence in 91 iterations with an adaptive gain.

This behavior can be reproduced by running the tutorial-ibvs-4pts-plotter-gain-adaptive.cpp example. Compared to the previous code given in the Introduction and available in tutorial-ibvs-4pts-plotter.cpp, hereafter we give the new lines of code that were introduced to use an adaptive gain:

vpAdaptiveGain lambda(4, 0.4, 30); // lambda(0)=4, lambda(oo)=0.4 and lambda'(0)=30
task.setLambda(lambda);

How to tune adaptive gain

To adjust the adaptive gain to your current servoing task, you need to proceed step by step:

  • First, switch back to a constant gain by replacing
    task.setLambda(lambda_adapt);
    by
    task.setLambda(lambda);
    where lambda is now a constant double value.

  • In order to tune the first parameter $\lambda_0 = \lambda(0)$, which corresponds to the gain when the error is close to zero, place the robot close to the final desired position of the servoing task. Then, gradually increase lambda (start with lambda = 1.0) until you observe robot oscillations. A good value for $\lambda_0$ is slightly lower than the lambda for which oscillations start to occur.

  • For the second parameter $\lambda_\infty = \lim_{||{\bf e}|| \rightarrow \infty} \lambda(||{\bf e}||)$, which corresponds to the gain when the error is very high, move the robot further away from the target in order to get a large visual servoing error. Set lambda to a small value, like 0.1, and increase it gradually until the vision process is no longer able to track your features, or until the robot becomes dangerous because its velocity is too high.

  • The last value, $\lambda'_0$, is the slope of the curve $ \lambda = f(s-s^*) $ at $ s-s^* = 0 $. You can keep it at 30.

Continuous sequencing

As implemented in tutorial-ibvs-4pts-plotter-continuous-gain-adaptive.cpp, it is also possible to ensure continuous sequencing in order to avoid velocity discontinuities. This behavior is achieved by introducing an additional term in the general form of the control law. This term comes from the task sequencing approach described in [30], equation (17). It allows computing continuous velocities by avoiding abrupt changes in the command.

The form of the control law considered here is the following:

\[ {\bf \dot q} = \pm \lambda {{\bf \widehat J}_e}^+ {\bf e} \mp \lambda {{\bf \widehat J}_{e(0)}}^+ {{\bf e}(0)} \exp(-\mu t) \]

where :

  • ${\bf \dot q}$ is the resulting continuous velocity command to apply to the robot controller.
  • the sign of the control law depends on the eye-in-hand or eye-to-hand configuration.
  • ${\bf \widehat J}_e$ is the task Jacobian.
  • $\bf e = (s-s^*)$ is the error to regulate.
  • $t$ is the time.
  • $\mu$ is a gain. We recommend setting this value to 4.
  • ${\bf \widehat J}_{e(0)}^+ {\bf e}(0)$ is the value of ${\bf \widehat J}_e^+ {\bf e}$ when $t=0$.

The effect of continuous sequencing is illustrated in the next figure where during the first iterations velocities are starting from zero.

Convergence in 98 iterations with an adaptive gain and continuous sequencing.

This behavior can be reproduced by running the tutorial-ibvs-4pts-plotter-continuous-gain-adaptive.cpp example. Compared to the previous code given in Using an adaptive gain and available in tutorial-ibvs-4pts-plotter-gain-adaptive.cpp, hereafter we give the new line of code that was introduced to ensure continuous sequencing:

vpColVector v = task.computeControlLaw(iter*robot.getSamplingTime());

Next tutorial

You are now ready to see the Tutorial: PBVS with Panda 7-dof robot from Franka Emika that will show how to use adaptive gain and task sequencing on a real robot.