Servo¶
- class Servo(*args, **kwargs)¶
Bases:
pybind11_object
Class required to compute the visual servoing control law described in [3] and [4] .
Warning
To avoid potential memory leaks, it is mandatory to call explicitly the kill() function to destroy the task. Otherwise, the destructor ~vpServo() launches a vpServoException::notKilledProperly exception.
To learn how to use this class, we suggest first following the tutorial-ibvs. The tutorial-simu-robot-pioneer and tutorial-boost-vs are also useful for advanced usage of this class.
The example below shows how to build a position-based visual servo from 3D visual features \(s=({^{c^*}}t_c,\theta u)\) . In that case, we have \(s^* = 0\) . Let us denote \(\theta u\) the angle/axis parametrization of the rotation \({^{c^*}}R_c\) . Moreover, \({^{c^*}}t_c\) and \({^{c^*}}R_c\) represent respectively the translation and the rotation between the desired camera frame and the current one obtained by pose estimation (see vpPose class).
#include <visp3/core/vpColVector.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpMatrix.h>
#include <visp3/visual_features/vpFeatureThetaU.h>
#include <visp3/visual_features/vpFeatureTranslation.h>
#include <visp3/vs/vpServo.h>

int main()
{
  // Creation of an homogeneous matrix that represents the displacement
  // the camera has to achieve to move from the desired camera frame
  // to the current one
  vpHomogeneousMatrix cdMc;

  // ... cdMc is here the result of a pose estimation

  // Creation of the current visual feature s = (c*_t_c, ThetaU)
  vpFeatureTranslation s_t(vpFeatureTranslation::cdMc);
  vpFeatureThetaU s_tu(vpFeatureThetaU::cdRc);
  // Set the initial values of the current visual feature s = (c*_t_c, ThetaU)
  s_t.buildFrom(cdMc);
  s_tu.buildFrom(cdMc);

  // Build the desired visual feature s* = (0,0)
  vpFeatureTranslation s_star_t(vpFeatureTranslation::cdMc); // Default initialization to zero
  vpFeatureThetaU s_star_tu(vpFeatureThetaU::cdRc);          // Default initialization to zero

  vpColVector v; // Camera velocity
  double error;  // Task error

  // Creation of the visual servo task.
  vpServo task;

  // Visual servo task initialization
  // - Camera is mounted on the robot end-effector and velocities are
  //   computed in the camera frame
  task.setServo(vpServo::EYEINHAND_CAMERA);
  // - Interaction matrix is computed with the current visual features s
  task.setInteractionMatrixType(vpServo::CURRENT);
  // - Set the constant gain to 1
  task.setLambda(1);
  // - Add current and desired translation feature
  task.addFeature(s_t, s_star_t);
  // - Add current and desired ThetaU feature for the rotation
  task.addFeature(s_tu, s_star_tu);

  // Visual servoing loop. The objective is here to update the visual
  // features s = (c*_t_c, ThetaU), compute the control law and apply
  // it to the robot
  do {
    // ... cdMc is here the result of a pose estimation

    // Update the current visual feature s
    s_t.buildFrom(cdMc);  // Update translation visual feature
    s_tu.buildFrom(cdMc); // Update ThetaU visual feature

    v = task.computeControlLaw();          // Compute camera velocity skew
    error = (task.getError()).sumSquare(); // error = s^2 - s_star^2
  } while (error > 0.0001); // Stop the task when current and desired visual features are close
}
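The stopping criterion of the loop above relies on the first-order decay of the task error: with the control law \(v = -\lambda e\) applied perfectly, the error evolves as \(\dot{\bf e} = -\lambda {\bf e}\) and shrinks exponentially. The following plain-Python sketch (not the ViSP bindings; the function name simulate_servo is made up for illustration) simulates that decay with the same 1e-4 squared-error threshold as the example:

```python
import math

def simulate_servo(e0, lam=1.0, dt=0.01, threshold=1e-4):
    """Simulate an ideal servo loop where each error component decays
    at rate lam; return (iterations, final error vector)."""
    e = list(e0)
    iterations = 0
    while sum(x * x for x in e) > threshold:
        # Ideal closed loop: e <- e + dt * (-lam * e)
        e = [x + dt * (-lam * x) for x in e]
        iterations += 1
    return iterations, e

iters, e_final = simulate_servo([0.1, -0.2, 0.05], lam=1.0)
# The loop terminates once sumSquare of the error drops below 1e-4;
# a larger gain lam makes it terminate in fewer iterations.
```

A larger constant gain shortens convergence in this idealized setting, which is why setLambda() values closer to 1 give faster but more aggressive behavior.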
Overloaded function.
__init__(self: visp._visp.vs.Servo) -> None
Default constructor that initializes the following settings:
No control law is specified. The user has to call setServo() to specify the control law.
In the control law, the interaction matrix \({\widehat {\bf L}}_e\) is computed with the desired features \({\bf s}^*\) . Using setInteractionMatrixType() you can also compute the interaction matrix with the current visual features, or from the mean \(\left({\widehat {\bf L}}_s + {\widehat {\bf L}}_{s^*}\right)/2\) .
In the control law the pseudo-inverse will be used. The method setInteractionMatrixType() allows using the transpose instead.
Warning
By default, the threshold used to compute the pseudo-inverse is set to 1e-6. Advanced users can modify this value using setPseudoInverseThreshold() .
__init__(self: visp._visp.vs.Servo, servo_type: visp._visp.vs.Servo.ServoType) -> None
Constructor that allows choosing the visual servoing control law.
The other settings are the following:
In the control law, the interaction matrix \({\widehat {\bf L}}_e\) is computed with the desired features \({\bf s}^*\) . Using setInteractionMatrixType() you can also compute the interaction matrix with the current visual features, or from the mean \(\left({\widehat {\bf L}}_s + {\widehat {\bf L}}_{s^*}\right)/2\) .
In the control law the pseudo-inverse will be used. The method setInteractionMatrixType() allows using the transpose instead.
- Parameters:
- servo_type
Visual servoing control law.
Methods
Overloaded function.
Overloaded function.
Overloaded function.
Compute the error \(\bf e =(s - s^*)\) between the current set of visual features \(\bf s\) and the desired set of visual features \(\bf s^*\) .
Compute and return the interaction matrix related to the set of visual features.
Return the task dimension.
Return the error \(\bf e = (s - s^*)\) between the current set of visual features \(\bf s\) and the desired set of visual features \(\bf s^*\) .
Return the projection operator \({\bf I}-{\bf W}^+{\bf W}\) .
Return the interaction matrix \(L\) used to compute the task jacobian \(J_1\) .
Return the large projection operator.
Return pseudo-inverse threshold used to test the singular values.
Return the visual servo type.
Return the task jacobian \(J\) .
Return the pseudo inverse of the task jacobian \(J\) .
Return the rank of the task jacobian.
Get task singular values.
Return the projection operator \({\bf W}^+{\bf W}\) .
Return the velocity twist matrix used to transform a velocity skew vector from end-effector frame into the camera frame.
Return the velocity twist matrix used to transform a velocity skew vector from robot fixed frame (also called world or base frame) into the camera frame.
Return the robot jacobian expressed in the end-effector frame.
Return the robot jacobian expressed in the robot fixed frame (also called world or base frame).
Return the velocity twist matrix used to transform a velocity skew vector from robot end-effector frame into the fixed frame (also called world or base frame).
Task destruction.
Overloaded function.
Compute and return the secondary task vector for joint limit avoidance [33] using the new large projection operator (see equation(24) in the paper [32] ).
Set a 6-dim column vector representing the degrees of freedom that are controlled in the camera frame.
Set a variable which enables computing the interaction matrix at each iteration.
Overloaded function.
Set the value of the parameter \(\mu\) used to ensure the continuity of the velocities computed using computeControlLaw(double) .
Set the pseudo-inverse threshold used to test the singular values.
Overloaded function.
Overloaded function.
Set the robot jacobian expressed in the end-effector frame.
Set the robot jacobian expressed in the robot fixed frame (also called world or base frame).
Overloaded function.
Test if all the initializations are correct.
Test if all the updates are correct.
Inherited Methods
Operators
__doc__
Overloaded function.
__module__
Attributes
ALL
CONTROLLER
CURRENT
DESIRED
ERROR_VECTOR
EYEINHAND_CAMERA
EYEINHAND_L_cVe_eJe
EYETOHAND_L_cVe_eJe
EYETOHAND_L_cVf_fJe
EYETOHAND_L_cVf_fVe_eJe
FEATURE_CURRENT
FEATURE_DESIRED
GAIN
INTERACTION_MATRIX
J1
J1p
L
MEAN
MINIMUM
NONE
PSEUDO_INVERSE
TRANSPOSE
USER_DEFINED
__annotations__
desiredFeatureList
e
e1
error
featureList
featureSelectionList
interactionMatrixType
inversionType
lambda
q_dot
rankJ1
s
sStar
servoType
signInteractionMatrix
v
- class ServoInversionType(self, value: int)¶
Bases:
pybind11_object
Choice of the method used to invert the interaction matrix in the control law.
Values:
PSEUDO_INVERSE: Use the pseudo-inverse of the interaction matrix (default).
TRANSPOSE: Use the transpose of the interaction matrix.
- class ServoIteractionMatrixType(self, value: int)¶
Bases:
pybind11_object
Choice of the visual features used to compute the interaction matrix.
Values:
CURRENT: Interaction matrix computed with the current visual features \(\bf s\) .
DESIRED: Interaction matrix computed with the desired visual features \({\bf s}^*\) (default).
MEAN: Interaction matrix computed as the mean \(\left({\widehat {\bf L}}_s + {\widehat {\bf L}}_{s^*}\right)/2\) .
USER_DEFINED: Interaction matrix provided by the user.
- class ServoPrintType(self, value: int)¶
Bases:
pybind11_object
Choice of the information to print.
Values:
ALL: Print all the task information.
CONTROLLER: Print the type of controller law.
ERROR_VECTOR: Print the error vector \(\bf e = (s-s^*)\) .
FEATURE_CURRENT: Print the current features \(\bf s\) .
FEATURE_DESIRED: Print the desired features \({\bf s}^*\) .
GAIN: Print the gain \(\lambda\) .
INTERACTION_MATRIX: Print the interaction matrix.
MINIMUM: Same as vpServo::vpServoPrintType::ERROR_VECTOR.
- class ServoType(self, value: int)¶
Bases:
pybind11_object
Choice of the visual servoing control law.
Values:
NONE: No control law specified.
EYEINHAND_CAMERA: Eye-in-hand configuration with velocities computed in the camera frame.
EYEINHAND_L_cVe_eJe: Eye-in-hand configuration with velocities computed in the joint space, using the interaction matrix \(L\) , the twist matrix \({^c}{\bf V}_e\) and the robot jacobian \({^e}{\bf J}_e\) .
EYETOHAND_L_cVe_eJe: Eye-to-hand configuration using \(L\) , \({^c}{\bf V}_e\) and \({^e}{\bf J}_e\) .
EYETOHAND_L_cVf_fJe: Eye-to-hand configuration using \(L\) , \({^c}{\bf V}_f\) and \({^f}{\bf J}_e\) .
EYETOHAND_L_cVf_fVe_eJe: Eye-to-hand configuration using \(L\) , \({^c}{\bf V}_f\) , \({^f}{\bf V}_e\) and \({^e}{\bf J}_e\) .
- __init__(*args, **kwargs)¶
Overloaded function.
__init__(self: visp._visp.vs.Servo) -> None
Default constructor that initializes the following settings:
No control law is specified. The user has to call setServo() to specify the control law.
In the control law, the interaction matrix \({\widehat {\bf L}}_e\) is computed with the desired features \({\bf s}^*\) . Using setInteractionMatrixType() you can also compute the interaction matrix with the current visual features, or from the mean \(\left({\widehat {\bf L}}_s + {\widehat {\bf L}}_{s^*}\right)/2\) .
In the control law the pseudo-inverse will be used. The method setInteractionMatrixType() allows using the transpose instead.
Warning
By default, the threshold used to compute the pseudo-inverse is set to 1e-6. Advanced users can modify this value using setPseudoInverseThreshold() .
__init__(self: visp._visp.vs.Servo, servo_type: visp._visp.vs.Servo.ServoType) -> None
Constructor that allows choosing the visual servoing control law.
The other settings are the following:
In the control law, the interaction matrix \({\widehat {\bf L}}_e\) is computed with the desired features \({\bf s}^*\) . Using setInteractionMatrixType() you can also compute the interaction matrix with the current visual features, or from the mean \(\left({\widehat {\bf L}}_s + {\widehat {\bf L}}_{s^*}\right)/2\) .
In the control law the pseudo-inverse will be used. The method setInteractionMatrixType() allows using the transpose instead.
- Parameters:
- servo_type
Visual servoing control law.
- addFeature(*args, **kwargs)¶
Overloaded function.
addFeature(self: visp._visp.vs.Servo, s_cur: visp._visp.visual_features.BasicFeature, s_star: visp._visp.visual_features.BasicFeature, select: int = vpBasicFeature::FEATURE_ALL) -> None
Add a new set of 2 features \(\bf s\) and \({\bf s}^*\) in the task.
The following sample code explains how to use this method to add a visual feature point \((x,y)\) :
vpFeaturePoint s, s_star;
...
vpServo task;
task.addFeature(s, s_star);
For example to use only the \(x\) visual feature, the previous code becomes:
vpFeaturePoint s, s_star;
...
vpServo task;
task.addFeature(s, s_star, vpFeaturePoint::selectX());
- Parameters:
- s_cur
Current visual feature denoted \(\bf s\) .
- s_star
Desired visual feature denoted \({\bf s}^*\) .
- select
Feature selector. By default all the features in s and s_star are used, but it is possible to specify which one is used in case of multiple features.
addFeature(self: visp._visp.vs.Servo, s_cur: visp._visp.visual_features.BasicFeature, select: int = vpBasicFeature::FEATURE_ALL) -> None
Add a new feature \(\bf s\) in the task. The desired visual feature denoted \({\bf s}^*\) is equal to zero.
The following sample code explains how to use this method to add a \(\theta {\bf u} =(\theta u_x, \theta u_y, \theta u_z)\) feature:
vpFeatureThetaU s(vpFeatureThetaU::cRcd);
...
vpServo task;
task.addFeature(s);
For example to use only the \(\theta u_x\) feature, the previous code becomes:
vpFeatureThetaU s(vpFeatureThetaU::cRcd);
...
vpServo task;
task.addFeature(s, vpFeatureThetaU::selectTUx);
- Parameters:
- s_cur
Current visual feature denoted \(\bf s\) .
- select
Feature selector. By default all the features in s are used, but it is possible to specify which one is used in case of multiple features.
- computeControlLaw(*args, **kwargs)¶
Overloaded function.
computeControlLaw(self: visp._visp.vs.Servo) -> visp._visp.core.ColVector
Compute the control law specified using setServo() . See vpServo::vpServoType for more details concerning the control laws that are available. The tutorial-ibvs and tutorial-boost-vs are also useful to illustrate the usage of this function.
The general form of the control law is the following:
\[{\bf \dot q} = \pm \lambda {{\bf \widehat J}_e}^+ {\bf e} \]where :
\({\bf \dot q}\) is the resulting velocity command to apply to the robot.
the sign of the control law depends on the eye in hand or eye to hand configuration.
\(\bf J\) is the Jacobian of the task. It is a function of the interaction matrix and of the robot Jacobian.
\(\bf e = (s-s^*)\) is the error to regulate.
To ensure continuous sequencing the computeControlLaw(double) function can be used. It will ensure that the velocities that are computed are continuous.
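For a full-rank square task Jacobian, the pseudo-inverse in \({\bf \dot q} = -\lambda {{\bf \widehat J}_e}^+ {\bf e}\) reduces to the plain matrix inverse. The following hand-rolled 2x2 sketch (plain Python, not the ViSP API; inv2x2 and control_law are illustrative names) shows the eye-in-hand form of the law numerically:

```python
def inv2x2(J):
    """Inverse of a full-rank 2x2 matrix given as nested lists."""
    (a, b), (c, d) = J
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def control_law(J, e, lam=0.5):
    """v = -lam * J^+ * e, where J^+ == J^-1 for a full-rank square J."""
    Jp = inv2x2(J)
    return [-lam * (Jp[i][0] * e[0] + Jp[i][1] * e[1]) for i in range(2)]

# With J = I the command is simply -lam * e.
v = control_law([[1.0, 0.0], [0.0, 1.0]], [0.2, -0.4], lam=0.5)
```

Note how a larger entry on the Jacobian diagonal scales down the corresponding velocity component, since the inverse divides by it.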
computeControlLaw(self: visp._visp.vs.Servo, t: float) -> visp._visp.core.ColVector
Compute the control law specified using setServo() . See vpServo::vpServoType for more details concerning the control laws that are available. The tutorial-boost-vs is also useful to illustrate the usage of this function.
To the general form of the control law given in computeControlLaw() , we add here an additional term that comes from the task sequencing approach described in [28] equation (17). This additional term allows computing continuous velocities by avoiding abrupt changes in the command.
The form of the control law considered here is the following:
\[{\bf \dot q} = \pm \lambda {{\bf \widehat J}_e}^+ {\bf e} \mp \lambda {{\bf \widehat J}_{e(0)}}^+ {{\bf e}(0)} \exp(-\mu t) \]where :
\({\bf \dot q}\) is the resulting continuous velocity command to apply to the robot.
the sign of the control law depends on the eye in hand or eye to hand configuration.
\(\bf J\) is the Jacobian of the task. It is a function of the interaction matrix and of the robot Jacobian.
\(\bf e = (s-s^*)\) is the error to regulate.
\(t\) is the time given as parameter of this method.
\(\mu\) is a gain that is set by default to 4 and that could be modified using setMu() .
\({\bf \widehat J}_{e(0)}^+ {\bf e}(0)\) is the value of \({\bf \widehat J}_e^+ {\bf e}\) when \(t=0\) . This value is internally stored either at the first call of this method, or when t parameter is set to 0.
- Parameters:
- t
Time in seconds. When set to zero, \({{\bf \widehat J}_{e(0)}}^+ {{\bf e}(0)}\) is refreshed internally.
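The effect of the sequencing term can be checked on a scalar case: at \(t=0\), while \({\bf \widehat J}_e^+ {\bf e}\) still equals its stored initial value, the two terms cancel and the commanded velocity starts at exactly zero; the memory term then decays with rate \(\mu\). A minimal scalar sketch (plain Python, not the ViSP API; sequenced_velocity is an illustrative name, Jp_e stands for \({\bf \widehat J}_e^+ {\bf e}\)):

```python
import math

def sequenced_velocity(Jp_e, Jp_e0, t, lam=1.0, mu=4.0):
    """Scalar form of v(t) = -lam * Jp_e + lam * Jp_e0 * exp(-mu * t),
    with mu defaulting to 4 as in vpServo."""
    return -lam * Jp_e + lam * Jp_e0 * math.exp(-mu * t)

v_start = sequenced_velocity(Jp_e=0.3, Jp_e0=0.3, t=0.0)  # continuous start: exactly 0.0
v_later = sequenced_velocity(Jp_e=0.3, Jp_e0=0.3, t=2.0)  # memory term has nearly vanished
```

After a couple of seconds the command is indistinguishable from the plain law \(-\lambda {\bf \widehat J}_e^+ {\bf e}\).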
computeControlLaw(self: visp._visp.vs.Servo, t: float, e_dot_init: visp._visp.core.ColVector) -> visp._visp.core.ColVector
Compute the control law specified using setServo() . See vpServo::vpServoType for more details concerning the control laws that are available.
To the general form of the control law given in computeControlLaw() , we add here an additional term that comes from the task sequencing approach described in [28] equation (17). This additional term allows computing continuous velocities by avoiding abrupt changes in the command.
The form of the control law considered here is the following:
\[{\bf \dot q} = \pm \lambda {{\bf \widehat J}_e}^+ {\bf e} + \left({\bf \dot e}(0) \mp \lambda {{\bf \widehat J}_{e(0)}}^+ {{\bf e}(0)}\right) \exp(-\mu t) \]where :
\({\bf \dot q}\) is the resulting continuous velocity command to apply to the robot.
the sign of the control law depends on the eye in hand or eye to hand configuration.
\(\bf J\) is the Jacobian of the task. It is a function of the interaction matrix and of the robot Jacobian.
\(\bf e = (s-s^*)\) is the error to regulate.
\(t\) is the time given as parameter of this method.
\(\mu\) is a gain that is set by default to 4 and that could be modified using setMu() .
\({\bf \widehat J}_{e(0)}^+ {\bf e}(0)\) is the value of \({\bf \widehat J}_e^+ {\bf e}\) when \(t=0\) . This value is internally stored either at the first call of this method, or when t parameter is set to 0.
- Parameters:
- t
Time in seconds. When set to zero, \({{\bf \widehat J}_{e(0)}}^+ {{\bf e}(0)}\) is refreshed internally.
- e_dot_init
Initial value of \({\bf \dot e}(0)\) .
- computeError(self) visp._visp.core.ColVector ¶
Compute the error \(\bf e =(s - s^*)\) between the current set of visual features \(\bf s\) and the desired set of visual features \(\bf s^*\) .
- Returns:
The error vector \(\bf e\) .
- computeInteractionMatrix(self) visp._visp.core.Matrix ¶
Compute and return the interaction matrix related to the set of visual features.
- Returns:
The interaction matrix \({\widehat {\bf L}}_e\) used in the control law specified using setServo() .
- getError(self) visp._visp.core.ColVector ¶
Return the error \(\bf e = (s - s^*)\) between the current set of visual features \(\bf s\) and the desired set of visual features \(\bf s^*\) . The error vector is updated after a call of computeError() or computeControlLaw() .
vpServo task;
...
vpColVector v = task.computeControlLaw(); // Compute the velocity corresponding to the visual servoing
vpColVector e = task.getError();          // Get the error vector
- getI_WpW(self) visp._visp.core.Matrix ¶
Return the projection operator \({\bf I}-{\bf W}^+{\bf W}\) . This operator is updated after a call of computeControlLaw() .
vpServo task;
...
vpColVector v = task.computeControlLaw(); // Compute the velocity corresponding to the visual servoing
vpMatrix I_WpW = task.getI_WpW();         // Get the projection operator
Note
See getWpW()
- getInteractionMatrix(self) visp._visp.core.Matrix ¶
Return the interaction matrix \(L\) used to compute the task jacobian \(J_1\) . The interaction matrix is updated after a call to computeInteractionMatrix() or computeControlLaw() .
vpServo task;
...
vpColVector v = task.computeControlLaw(); // Compute the velocity corresponding to the visual servoing
vpMatrix L = task.getInteractionMatrix(); // Get the interaction matrix used to compute v
Note
See getTaskJacobian()
- getLargeP(self) visp._visp.core.Matrix ¶
Return the large projection operator. This operator is updated after a call of computeControlLaw() .
vpServo task;
...
vpColVector v = task.computeControlLaw(); // Compute the velocity corresponding to the visual servoing
vpMatrix P = task.getP();                 // Get the large projection operator
Note
See getP()
- getPseudoInverseThreshold(self) float ¶
Return pseudo-inverse threshold used to test the singular values. If a singular value is lower than this threshold we consider that the matrix is not full rank.
Note
See setPseudoInverseThreshold()
- getServoType(self) visp._visp.vs.Servo.ServoType ¶
Return the visual servo type.
- getTaskJacobian(self) visp._visp.core.Matrix ¶
Return the task jacobian \(J\) . The task jacobian is updated after a call of computeControlLaw() .
In the general case, the task jacobian is given by \({\bf J} = {\widehat {\bf L}} {^c}{\bf V}_a {^a}{\bf J}_e\) .
vpServo task;
...
vpColVector v = task.computeControlLaw(); // Compute the velocity corresponding to the visual servoing
vpMatrix J = task.getTaskJacobian();      // Get the task jacobian used to compute v
Note
See getTaskJacobianPseudoInverse() , getInteractionMatrix()
- getTaskJacobianPseudoInverse(self) visp._visp.core.Matrix ¶
Return the pseudo inverse of the task jacobian \(J\) .
In the general case, the task jacobian is given by \({\bf J} = {\widehat {\bf L}} {^c}{\bf V}_a {^a}{\bf J}_e\) .
The task jacobian and its pseudo inverse are updated after a call of computeControlLaw() .
Note
See getTaskJacobian()
- Returns:
Pseudo inverse \({J}^{+}\) of the task jacobian.
vpServo task;
...
vpColVector v = task.computeControlLaw();            // Compute the velocity corresponding to the visual servoing
vpMatrix Jp = task.getTaskJacobianPseudoInverse();   // Get the pseudo inverse of the task jacobian used to compute v
- getTaskRank(self) int ¶
Return the rank of the task jacobian. The rank is updated after a call of computeControlLaw() .
vpServo task;
...
vpColVector v = task.computeControlLaw(); // Compute the velocity corresponding to the visual servoing
unsigned int rank = task.getTaskRank();   // Get the rank of the task jacobian
- getTaskSingularValues(self) visp._visp.core.ColVector ¶
Get task singular values.
- Returns:
Singular values obtained when computing the pseudo inverse of the task jacobian.
- getWpW(self) visp._visp.core.Matrix ¶
Return the projection operator \({\bf W}^+{\bf W}\) . This operator is updated after a call of computeControlLaw() .
When the dimension of the task is equal to the number of available degrees of freedom, \({\bf W^+W = I}\) .
vpServo task;
...
vpColVector v = task.computeControlLaw(); // Compute the velocity corresponding to the visual servoing
vpMatrix WpW = task.getWpW();             // Get the projection operator
Note
See getI_WpW()
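The two operators split the dof space: \({\bf W}^+{\bf W}\) spans the directions constrained by the task, while \({\bf I}-{\bf W}^+{\bf W}\) spans the free directions available to a secondary task. A hand-computed sketch for a 1-dimensional task on 2 dof (plain Python, not the ViSP API), with W = [1 0] so that its pseudo-inverse is [1 0]^T:

```python
W = [[1.0, 0.0]]     # 1x2: the task constrains only the first dof
Wp = [[1.0], [0.0]]  # 2x1 pseudo-inverse of W

# W^+ W : 2x2 operator that keeps the constrained dof
WpW = [[Wp[i][0] * W[0][j] for j in range(2)] for i in range(2)]
# I - W^+ W : projector onto the free (null-space) dof
I_WpW = [[(1.0 if i == j else 0.0) - WpW[i][j] for j in range(2)] for i in range(2)]
# Only the second dof survives I - W^+W, so a secondary task can
# only produce motion along that free direction.
```

When the task dimension equals the number of dof, \({\bf W}^+{\bf W}={\bf I}\) and the null-space projector vanishes, leaving no room for a secondary task.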
- get_cVe(self) visp._visp.core.VelocityTwistMatrix ¶
Return the velocity twist matrix used to transform a velocity skew vector from end-effector frame into the camera frame.
- get_cVf(self) visp._visp.core.VelocityTwistMatrix ¶
Return the velocity twist matrix used to transform a velocity skew vector from robot fixed frame (also called world or base frame) into the camera frame.
- get_eJe(self) visp._visp.core.Matrix ¶
Return the robot jacobian expressed in the end-effector frame.
- get_fJe(self) visp._visp.core.Matrix ¶
Return the robot jacobian expressed in the robot fixed frame (also called world or base frame).
- get_fVe(self) visp._visp.core.VelocityTwistMatrix ¶
Return the velocity twist matrix used to transform a velocity skew vector from robot end-effector frame into the fixed frame (also called world or base frame).
- kill(self) None ¶
Task destruction. Kill the current and desired visual feature lists.
This function is called in the destructor. Since ViSP > 3.3.0 it is no longer mandatory to call kill() explicitly.
vpServo task;
vpFeatureThetaU s;
...
task.addFeature(s); // Add current ThetaU feature
task.kill();        // This call is no longer mandatory since it is achieved in the destructor
- print(self: visp._visp.vs.Servo, display_level: visp._visp.vs.Servo.ServoPrintType = ALL, os: std::ostream) None ¶
- secondaryTask(*args, **kwargs)¶
Overloaded function.
secondaryTask(self: visp._visp.vs.Servo, de2dt: visp._visp.core.ColVector, useLargeProjectionOperator: bool = false) -> visp._visp.core.ColVector
Compute and return the secondary task vector according to the classic projection operator \({\bf I-W^+W}\) (see equation(7) in the paper [30] ) or the new large projection operator (see equation(24) in the paper [32] ).
If the classic projection operator is used ( useLargeProjectionOperator = false, the default value) this function returns:
\[({\bf I-W^+W})\frac{\partial {\bf e_2}}{\partial t} \]Note that the secondary task vector then needs to be added to the primary task, which can in the general case be written as:
\[-\lambda {\bf W^+W {\widehat {\bf J}}_e^+({\bf s-s^*})} \]Otherwise, if the new large projection operator is used ( useLargeProjectionOperator = true ) this function returns:
\[{\bf P}\frac{\partial {\bf e_2}}{\partial t} \]where
\[{\bf P} =\bar{\lambda }\left ( \left \| {\bf e} \right \| \right ){\bf P}_{ \left \| {\bf e } \right \| } \left ( 1 - \bar{\lambda }\left ( \left \| {\bf e } \right \| \right ) \right ) \left ( {\bf I-W^+W}\right ) \]with
\[{\bf P}_{\left \| {\bf e } \right \| } = I_{n} - \frac{1}{{\bf e }^\top {\bf J_{{\bf e }} } {\bf J_{{\bf e }}^\top }{\bf e }}{\bf J_{{\bf e }}^\top }{\bf e }{\bf e }^\top{\bf J_{{\bf e }} } \]Warning
computeControlLaw() must be called prior to this function since it updates the projection operators.
The following sample code shows how to use this method to compute a secondary task using the classic projection operator:
vpColVector v; // Velocity applied to the robot
vpColVector de2dt;
vpServo task;
...
v = task.computeControlLaw();   // Compute the primary task
v += task.secondaryTask(de2dt); // Compute and add the secondary task using the classical projection operator
The following sample code shows how to use this method to compute a secondary task using the large projection operator:
vpColVector v; // Velocity applied to the robot
vpColVector de2dt;
vpServo task;
...
v = task.computeControlLaw();         // Compute the primary task
v += task.secondaryTask(de2dt, true); // Compute and add the secondary task using the large projection operator
Note
See computeControlLaw()
- Parameters:
- de2dt
Value of \(\frac{\partial {\bf e_2}}{\partial t}\) the derivative of the secondary task \({\bf e}_2\) .
- useLargeProjectionOperator
If true, the large projection operator is used; if false (default), the classic one.
- Returns:
The secondary task vector.
secondaryTask(self: visp._visp.vs.Servo, e2: visp._visp.core.ColVector, de2dt: visp._visp.core.ColVector, useLargeProjectionOperator: bool = false) -> visp._visp.core.ColVector
Compute and return the secondary task vector according to the classic projection operator \({\bf I-W^+W}\) (see equation(7) in the paper [30] ) or the new large projection operator (see equation(24) in the paper [32] ).
If the classic projection operator is used ( useLargeProjectionOperator = false, the default value) this function returns:
\[-\lambda ({\bf I-W^+W}) {\bf e_2} + ({\bf I-W^+W})\frac{\partial {\bf e_2}}{\partial t} \]Note that the secondary task vector then needs to be added to the primary task, which can in the general case be written as:
\[-\lambda {\bf W^+W {\widehat {\bf J}}_e^+({\bf s-s^*})} \]Otherwise, if the new large projection operator is used ( useLargeProjectionOperator = true ) this function returns:
\[-\lambda {\bf P} {\bf e_2} + {\bf P}\frac{\partial {\bf e_2}}{\partial t} \]where
\[{\bf P} =\bar{\lambda }\left ( \left \| {\bf e} \right \| \right ){\bf P}_{ \left \| {\bf e } \right \| } \left ( 1 - \bar{\lambda }\left ( \left \| {\bf e } \right \| \right ) \right ) \left ( {\bf I-W^+W}\right ) \]with
\[{\bf P}_{\left \| {\bf e } \right \| } = I_{n} - \frac{1}{{\bf e }^\top {\bf J_{{\bf e }} } {\bf J_{{\bf e }}^\top }{\bf e }}{\bf J_{{\bf e }}^\top }{\bf e }{\bf e }^\top{\bf J_{{\bf e }} } \]Warning
computeControlLaw() must be called prior to this function since it updates the projection operators.
The following sample code shows how to use this method to compute a secondary task using the classical projection operator:
vpColVector v; // Velocity applied to the robot
vpColVector e2;
vpColVector de2dt;
vpServo task;
...
v = task.computeControlLaw();       // Compute the primary task
v += task.secondaryTask(e2, de2dt); // Compute and add the secondary task using the classical projection operator
The following sample code shows how to use this method to compute a secondary task using the large projection operator:
vpColVector v; // Velocity applied to the robot
vpColVector e2;
vpColVector de2dt;
vpServo task;
...
v = task.computeControlLaw();             // Compute the primary task
v += task.secondaryTask(e2, de2dt, true); // Compute and add the secondary task using the large projection operator
Note
See computeControlLaw()
- Parameters:
- e2
Value of the secondary task \({\bf e}_2\) .
- de2dt
Value of \(\frac{\partial {\bf e_2}}{\partial t}\) the derivative of the secondary task \({\bf e}_2\) .
- useLargeProjectionOperator
If true, the large projection operator is used; if false (default), the classic one.
- Returns:
The secondary task vector.
- secondaryTaskJointLimitAvoidance(self, q: visp._visp.core.ColVector, dq: visp._visp.core.ColVector, qmin: visp._visp.core.ColVector, qmax: visp._visp.core.ColVector, rho: float = 0.1, rho1: float = 0.3, lambda_tune: float = 0.7) visp._visp.core.ColVector ¶
Compute and return the secondary task vector for joint limit avoidance [33] using the new large projection operator (see equation(24) in the paper [32] ). The robot avoids the joint limits very smoothly even when the main task constrains all the robot degrees of freedom.
vpServo task;
vpColVector qmin;
vpColVector qmax;
vpColVector q;
vpColVector dq;
// Fill vector qmin and qmax with min and max limits of the joints (same joint order as vector q).
// Update vector of joint positions q and velocities dq.
...
// Compute the velocity corresponding to the visual servoing
vpColVector v = task.computeControlLaw();
// Compute and add the secondary task for the joint limit avoidance
// using the large projection operator
v += task.secondaryTaskJointLimitAvoidance(q, dq, qmin, qmax);
- Parameters:
- q: visp._visp.core.ColVector¶
Actual joint positions vector
- dq: visp._visp.core.ColVector¶
Actual joint velocities vector
- qmin: visp._visp.core.ColVector¶
Vector containing the low limit value of each joint in the chain.
- qmax: visp._visp.core.ColVector¶
Vector containing the high limit value of each joint in the chain.
- rho: float = 0.1¶
Tuning parameter \({\left [ 0,\frac{1}{2} \right]}\) used to define the safe configuration for the joint. When the joint angle value crosses the max or min boundaries ( \({ q_{l_{0}}^{max} }\) and \({q_{l_{0}}^{min}}\) ) the secondary task is activated gradually.
- rho1: float = 0.3¶
Tuning parameter \({\left ] 0,1 \right ]}\) used to compute the external boundaries ( \({q_{l_{1}}^{max}}\) and \({q_{l_{1}}^{min}}\) ) for the joint limits. There the secondary task is completely activated with the highest gain.
- lambda_tune: float = 0.7¶
value \({\left [ 0,1 \right ]}\) used to tune the difference in magnitude between the absolute value of the elements of the primary task and the elements of the secondary task. (See equation (17) [33] )
- setCameraDoF(self, dof: visp._visp.core.ColVector) None ¶
Set a 6-dim column vector representing the degrees of freedom that are controlled in the camera frame. When set to 1, all the 6 dof are controlled.
The following example shows how to use this function to control only wx, wy like a pan/tilt:
#include <visp3/visual_features/vpFeaturePoint.h>
#include <visp3/vs/vpServo.h>

int main()
{
  vpServo servo;
  servo.setServo(vpServo::EYEINHAND_CAMERA);
  vpFeaturePoint s, sd;
  servo.addFeature(s, sd);

  vpColVector dof(6, 1);
  dof[0] = 0; // turn off vx
  dof[1] = 0; // turn off vy
  dof[2] = 0; // turn off vz
  dof[5] = 0; // turn off wz
  servo.setCameraDoF(dof);

  while (1) {
    // vpFeatureBuilder::create(s, ...); // update current feature
    vpColVector v = servo.computeControlLaw(); // compute control law
    // only v[3] and v[4] corresponding to wx and wy are different from 0
  }
}
- Parameters:
- dof: visp._visp.core.ColVector¶
Degrees of freedom to control in the camera frame. Below we give the correspondence between the index of the vector and the considered dof:
dof[0] = 1 if translation along X is controlled, 0 otherwise;
dof[1] = 1 if translation along Y is controlled, 0 otherwise;
dof[2] = 1 if translation along Z is controlled, 0 otherwise;
dof[3] = 1 if rotation along X is controlled, 0 otherwise;
dof[4] = 1 if rotation along Y is controlled, 0 otherwise;
dof[5] = 1 if rotation along Z is controlled, 0 otherwise.
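The net effect of the mask is that every velocity component whose dof entry is 0 comes out as zero, as in the pan/tilt example where only wx (v[3]) and wy (v[4]) remain active. A minimal sketch of that end result (plain Python, not the ViSP API; internally ViSP disables the corresponding part of the interaction matrix before inversion rather than zeroing the output, so apply_dof_mask is only an illustrative name for the observable behavior):

```python
def apply_dof_mask(v, dof):
    """Zero out the velocity components that are not controlled."""
    return [vi if di == 1 else 0.0 for vi, di in zip(v, dof)]

dof = [0, 0, 0, 1, 1, 0]  # keep only wx (index 3) and wy (index 4)
v = apply_dof_mask([0.1, 0.2, 0.3, 0.4, 0.5, 0.6], dof)
# v == [0.0, 0.0, 0.0, 0.4, 0.5, 0.0]
```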
- setForceInteractionMatrixComputation(self, force_computation: bool) None ¶
Enable or disable recomputation of the interaction matrix at each iteration of the servo loop.
When the interaction matrix is computed from the desired features \({\bf s}^*\), which are in general constant, the interaction matrix \({\widehat {\bf L}}_{s^*}\) is computed only at the first iteration of the servo loop. However, when the desired features are time dependent \({{\bf s}(t)}^*\) or otherwise varying, the interaction matrix needs to be recomputed at each iteration. This method forces the computation of \({\widehat {\bf L}}\) in that case.
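The caching trade-off can be made concrete with the standard interaction matrix of a normalized image point from the IBVS literature. With a constant \({\bf s}^*\) the matrix can be computed once before the loop; a time-varying \({{\bf s}(t)}^*\) requires recomputing it every iteration, which is what this flag forces. A minimal NumPy sketch (not the ViSP API itself):

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Standard interaction matrix of a normalized image point (x, y)
    observed at depth Z (classical IBVS result)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

# Constant desired feature: compute the matrix once, before the loop
L_star = point_interaction_matrix(0.0, 0.0, 1.0)

for t in range(3):
    # Time-varying desired feature s*(t): recompute at every iteration,
    # which is the behavior this flag forces
    x_star = 0.01 * t
    L_star_t = point_interaction_matrix(x_star, 0.0, 1.0)
```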
- setInteractionMatrixType(self, interactionMatrixType: visp._visp.vs.Servo.ServoIteractionMatrixType, interactionMatrixInversion: visp._visp.vs.Servo.ServoInversionType = PSEUDO_INVERSE) None ¶
- setLambda(*args, **kwargs)¶
Overloaded function.
setLambda(self: visp._visp.vs.Servo, c: float) -> None
Set the gain \(\lambda\) used in the control law (see vpServo::vpServoType ) to a constant value.
Using an adaptive gain instead can reduce the convergence time; see setLambda(const vpAdaptiveGain&) .
- Parameters:
- c
Constant gain. Values generally lie between 0.1 and 1. The higher the gain, the higher the velocities that may be applied to the robot.
setLambda(self: visp._visp.vs.Servo, gain_at_zero: float, gain_at_infinity: float, slope_at_zero: float) -> None
Set the gain \(\lambda\) used in the control law (see vpServo::vpServoType ) as adaptive. The value of \(\lambda\) used in computeControlLaw() depends on the infinity norm of the task Jacobian.
Using an adaptive gain rather than a constant gain reduces the convergence time.
For more details on these parameters see vpAdaptiveGain class.
- Parameters:
- gain_at_zero
the expected gain when \(x=0\) : \(\lambda(0)\) .
- gain_at_infinity
the expected gain when \(x=\infty\) : \(\lambda(\infty)\) .
- slope_at_zero
the expected slope of \(\lambda(x)\) when \(x=0\) : \({\dot \lambda}(0)\) .
setLambda(self: visp._visp.vs.Servo, l: visp._visp.vs.AdaptiveGain) -> None
Set the gain \(\lambda\) used in the control law (see vpServo::vpServoType ) as adaptive. The value of \(\lambda\) used in computeControlLaw() depends on the infinity norm of the task Jacobian.
Using an adaptive gain rather than a constant gain reduces the convergence time.
Note
See vpAdaptiveGain
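The three parameters above define the exponential gain schedule \(\lambda(x) = (\lambda_0 - \lambda_\infty) e^{-\frac{{\dot \lambda}(0)}{\lambda_0 - \lambda_\infty} x} + \lambda_\infty\), where \(x\) is the infinity norm of the task Jacobian (see vpAdaptiveGain for the exact definition). The sketch below evaluates this formula in plain Python; the default parameter values are illustrative, not ViSP defaults.

```python
import math

def adaptive_gain(x, gain_at_zero=4.0, gain_at_infinity=0.4, slope_at_zero=30.0):
    """Adaptive gain schedule:
    lambda(x) = (l0 - linf) * exp(-(slope / (l0 - linf)) * x) + linf
    A large gain near convergence (x -> 0) speeds up the end of the motion,
    while a small gain for large x keeps the initial velocities bounded."""
    l0, linf, slope = gain_at_zero, gain_at_infinity, slope_at_zero
    return (l0 - linf) * math.exp(-(slope / (l0 - linf)) * x) + linf
```

At \(x=0\) the gain equals `gain_at_zero`, and it decays monotonically toward `gain_at_infinity` as \(x\) grows.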
- setMu(self, mu_: float) None ¶
Set the value of the parameter \(\mu\) used to ensure the continuity of the velocities computed using computeControlLaw(double) .
A recommended value is 4.
- setPseudoInverseThreshold(self, pseudo_inverse_threshold: float) None ¶
Set the pseudo-inverse threshold used to test the singular values. If a singular value is below this threshold, the matrix is considered not to be full rank.
Note
See getPseudoInverseThreshold()
- setServo(self, servo_type: visp._visp.vs.Servo.ServoType) None ¶
- set_cVe(*args, **kwargs)¶
Overloaded function.
set_cVe(self: visp._visp.vs.Servo, cVe_: visp._visp.core.VelocityTwistMatrix) -> None
Set the velocity twist matrix used to transform a velocity skew vector from the end-effector frame into the camera frame.
set_cVe(self: visp._visp.vs.Servo, cMe: visp._visp.core.HomogeneousMatrix) -> None
Set the velocity twist matrix used to transform a velocity skew vector from the end-effector frame into the camera frame.
- set_cVf(*args, **kwargs)¶
Overloaded function.
set_cVf(self: visp._visp.vs.Servo, cVf_: visp._visp.core.VelocityTwistMatrix) -> None
Set the velocity twist matrix used to transform a velocity skew vector from the robot fixed frame (also called the world or base frame) into the camera frame.
set_cVf(self: visp._visp.vs.Servo, cMf: visp._visp.core.HomogeneousMatrix) -> None
Set the velocity twist matrix used to transform a velocity skew vector from the robot fixed frame (also called the world or base frame) into the camera frame.
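Each of these setters accepts either the velocity twist matrix directly or a homogeneous matrix from which it is built. The standard construction from a rotation \({\bf R}\) and translation \({\bf t}\) is \({\bf V} = \begin{bmatrix} {\bf R} & [{\bf t}]_\times {\bf R} \\ {\bf 0} & {\bf R} \end{bmatrix}\), which is presumably what the homogeneous-matrix overloads compute internally. A NumPy sketch of this construction:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]x such that skew(t) @ u == np.cross(t, u)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def velocity_twist(R, t):
    """6x6 velocity twist matrix built from a transform (R, t):
        V = [[R, [t]x R],
             [0,      R]]
    It maps a velocity skew vector (v, w) expressed in one frame
    into another frame offset by (R, t)."""
    V = np.zeros((6, 6))
    V[:3, :3] = R
    V[:3, 3:] = skew(t) @ R
    V[3:, 3:] = R
    return V

# Pure rotation about z, seen from a frame offset by 1 m along x:
# the angular velocity is unchanged, but it induces a linear velocity -y.
V = velocity_twist(np.eye(3), np.array([1.0, 0.0, 0.0]))
v_out = V @ np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.0])
```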
- set_eJe(self, eJe_: visp._visp.core.Matrix) None ¶
Set the robot Jacobian expressed in the end-effector frame.
- set_fJe(self, fJe_: visp._visp.core.Matrix) None ¶
Set the robot Jacobian expressed in the robot fixed frame (also called the world or base frame).
- set_fVe(*args, **kwargs)¶
Overloaded function.
set_fVe(self: visp._visp.vs.Servo, fVe_: visp._visp.core.VelocityTwistMatrix) -> None
Set the velocity twist matrix used to transform a velocity skew vector from the robot end-effector frame into the fixed frame (also called the world or base frame).
set_fVe(self: visp._visp.vs.Servo, fMe: visp._visp.core.HomogeneousMatrix) -> None
Set the velocity twist matrix used to transform a velocity skew vector from the robot end-effector frame into the fixed frame (also called the world or base frame).