Visual Servoing Platform  version 3.6.1 under development (2024-11-14)
servoSimuPoint2DhalfCamVelocity2.cpp

Simulation of a 2 1/2 D visual servoing (x,y,log Z, theta U)

/****************************************************************************
*
* ViSP, open source Visual Servoing Platform software.
* Copyright (C) 2005 - 2023 by Inria. All rights reserved.
*
* This software is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
* See the file LICENSE.txt at the root directory of this source
* distribution for additional information about the GNU GPL.
*
* For using ViSP with software that can not be combined with the GNU
* GPL, please contact Inria about acquiring a ViSP Professional
* Edition License.
*
* See https://visp.inria.fr for more information.
*
* This software was developed at:
* Inria Rennes - Bretagne Atlantique
* Campus Universitaire de Beaulieu
* 35042 Rennes Cedex
* France
*
* If you have questions regarding the use of this file, please contact
* Inria at visp@inria.fr
*
* This file is provided AS IS with NO WARRANTY OF ANY KIND, INCLUDING THE
* WARRANTY OF DESIGN, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
*
* Description:
* Simulation of a 2 1/2 D visual servoing using theta U visual features.
*
*****************************************************************************/
#include <stdio.h>
#include <stdlib.h>
#include <visp3/core/vpConfig.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpMath.h>
#include <visp3/core/vpPoint.h>
#include <visp3/io/vpParseArgv.h>
#include <visp3/robot/vpSimulatorCamera.h>
#include <visp3/visual_features/vpFeatureBuilder.h>
#include <visp3/visual_features/vpFeaturePoint.h>
#include <visp3/visual_features/vpFeatureThetaU.h>
#include <visp3/visual_features/vpGenericFeature.h>
#include <visp3/vs/vpServo.h>
// List of allowed command line options
#define GETOPTARGS "h"
#ifdef ENABLE_VISP_NAMESPACE
using namespace VISP_NAMESPACE_NAME;
#endif
void usage(const char *name, const char *badparam);
bool getOptions(int argc, const char **argv);
void usage(const char *name, const char *badparam)
{
fprintf(stdout, "\n\
Simulation of a 2 1/2 D visual servoing (x,y,log Z, theta U):\n\
- eye-in-hand control law,\n\
- velocity computed in the camera frame,\n\
- without display.\n\
\n\
SYNOPSIS\n\
%s [-h]\n",
name);
fprintf(stdout, "\n\
OPTIONS: Default\n\
\n\
-h\n\
Print the help.\n");
if (badparam)
fprintf(stdout, "\nERROR: Bad parameter [%s]\n", badparam);
}
bool getOptions(int argc, const char **argv)
{
const char *optarg_;
int c;
while ((c = vpParseArgv::parse(argc, argv, GETOPTARGS, &optarg_)) > 1) {
switch (c) {
case 'h':
usage(argv[0], nullptr);
return false;
default:
usage(argv[0], optarg_);
return false;
}
}
if ((c == 1) || (c == -1)) {
// standalone param or error
usage(argv[0], nullptr);
std::cerr << "ERROR: " << std::endl;
std::cerr << " Bad argument " << optarg_ << std::endl << std::endl;
return false;
}
return true;
}
int main(int argc, const char **argv)
{
#if (defined(VISP_HAVE_LAPACK) || defined(VISP_HAVE_EIGEN3) || defined(VISP_HAVE_OPENCV))
try {
// Read the command line options
if (getOptions(argc, argv) == false) {
return EXIT_FAILURE;
}
std::cout << std::endl;
std::cout << "-------------------------------------------------------" << std::endl;
std::cout << " simulation of a 2 1/2 D visual servoing " << std::endl;
std::cout << "-------------------------------------------------------" << std::endl;
std::cout << std::endl;
// In this example we will simulate a visual servoing task.
// In simulation, we have to define the scene frame Ro and the
// camera frame Rc.
// The camera location is given by a homogeneous matrix cMo that
// describes the position of the scene or object frame in the camera frame.
vpServo task;
// sets the initial camera location
// we give the camera location as a size 6 vector (3 translations in meters
// and 3 rotations in theta-U representation)
vpPoseVector c_r_o(0.1, 0.2, 2, vpMath::rad(20), vpMath::rad(10), vpMath::rad(50));
// this pose vector is then transformed in a 4x4 homogeneous matrix
vpHomogeneousMatrix cMo(c_r_o);
// We define a robot
// The vpSimulatorCamera class implements the simplest robot: a free-flying
// camera whose state is just defined by its location
vpSimulatorCamera robot;
// Compute the position of the object in the world frame
vpHomogeneousMatrix wMc, wMo;
robot.getPosition(wMc);
wMo = wMc * cMo;
// Now that the current camera position has been defined,
// let us define the desired camera location.
// It is defined by cdMo
// sets the desired camera location
vpPoseVector cd_r_o(0, 0, 1, vpMath::rad(0), vpMath::rad(0), vpMath::rad(0));
vpHomogeneousMatrix cdMo(cd_r_o);
//----------------------------------------------------------------------
// A 2 1/2 D visual servoing can be defined by
// - the position of a point x,y
// - the difference between this point depth and a desired depth,
//   modeled by log(Z/Zd), to be regulated to 0
// - the rotation cdMc that the camera has to realize
// Let us now define the current value of these features.
// Since we simulate, we have to define a 3D point that will be
// forward-projected to define the current position x,y of the
// reference point
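// To make the regulated quantities explicit, the stacked feature vector is
//   s  = ( x, y, log(Z/Zd), theta u )
// and its desired value is
//   s* = ( x*, y*, 0, 0 )
// so the task error e = s - s* drives the camera to the desired pose.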
//------------------------------------------------------------------
// First feature (x,y)
//------------------------------------------------------------------
// Let oP be this ... point,
// a vpPoint class has three main members
// .oP : 3D coordinates in the scene frame
// .cP : 3D coordinates in the camera frame
// .p  : 2D coordinates in the image plane
//------------------------------------------------------------------
// sets the point coordinates in the world frame
vpPoint point(0, 0, 0);
// computes the point coordinates in the camera frame (cP) and its
// 2D coordinates (p)
point.track(cMo);
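// (track(cMo) performs the frame change cP = cMo * oP followed by the
// perspective projection x = X/Z, y = Y/Z, so both cP and p are updated)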
// We also define (again by forward projection) the desired position
// of this point according to the desired camera position
vpPoint pointd(0, 0, 0);
pointd.track(cdMo);
// Nevertheless, a vpPoint is not a feature, this is just a "tracker"
// from which the features are built.
// A feature is just defined by a vector s, a way to compute the
// interaction matrix and the error, and if required a (or a vector of)
// 3D information.
// For a point (x,y), ViSP implements the vpFeaturePoint class.
// We now define a feature for (x,y) (and for (x*,y*))
vpFeaturePoint p, pd;
// and we initialize the vector s=(x,y) of p from the tracker point.
// The Z coordinate in p is also initialized; it will be used to compute
// the interaction matrix
vpFeatureBuilder::create(p, point);
vpFeatureBuilder::create(pd, pointd);
//------------------------------------------------------------------
// Second feature log(Z/Zd)
// not necessary to project twice (reuse p)
// This case is interesting since this visual feature has not
// been predefined in ViSP.
// In such a case we have a generic feature class vpGenericFeature.
// We will have to define
// the vector s : .set_s(...)
// the interaction matrix Ls : .setInteractionMatrix(...)
// log(Z/Zd) is then a size 1 vector logZ
vpGenericFeature logZ(1);
// initialized to s = log(Z/Zd)
// Let us note that here we use point and pointd; it's not necessary
// to forward project twice (it's already done)
logZ.set_s(log(point.get_Z() / pointd.get_Z()));
// This visual feature has to be regulated to zero
//------------------------------------------------------------------
// 3rd feature ThetaU
// The thetaU feature is defined below; tu represents the rotation that the
// camera has to realize. The complete displacement is defined by cdMc:
//------------------------------------------------------------------
vpHomogeneousMatrix cdMc;
// compute the rotation that the camera has to achieve
cdMc = cdMo * cMo.inverse();
// from this displacement, we extract the rotation cdRc represented by
// the angle theta and the rotation axis u
vpFeatureThetaU tu(vpFeatureThetaU::cdRc);
tu.buildFrom(cdMc);
// This visual feature has to be regulated to zero
// sets the desired rotation (always zero!)
// since s is the rotation that the camera has to realize
//------------------------------------------------------------------
// Let us now build the task itself
//------------------------------------------------------------------
// define the task
// - we want an eye-in-hand control law
// - robot is controlled in the camera frame
// we choose to control the robot in the camera frame
task.setServo(vpServo::EYEINHAND_CAMERA);
// Interaction matrix is computed with the current value of s
task.setInteractionMatrixType(vpServo::CURRENT);
// we build the task by "stacking" the visual features
// previously defined
task.addFeature(p, pd);
task.addFeature(logZ);
task.addFeature(tu);
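// Note that the three features stack into a 6-dimensional vector (2 + 1 + 3),
// so the resulting interaction matrix is square (6x6) and all 6 camera
// degrees of freedom are constrained.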
// addFeature(X,Xd) means X should be regulated to Xd
// addFeature(X) means that X should be regulated to 0
// some features such as vpFeatureThetaU MUST be regulated to zero
// (otherwise, it will result in an error at execution level)
// set the gain
task.setLambda(1);
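// With these settings, computeControlLaw() applies the classical law
//   v = -lambda * L^+ * (s - s*)
// where L is the interaction matrix evaluated at the current feature values
// (CURRENT option above) and L^+ denotes its pseudo-inverse.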
// Display task information
task.print();
//------------------------------------------------------------------
// And now the closed loop
vpColVector v;
unsigned int iter = 0;
// loop
while (iter++ < 200) {
std::cout << "---------------------------------------------" << iter << std::endl;
// get the robot position
robot.getPosition(wMc);
// Compute the position of the object frame in the camera frame
cMo = wMc.inverse() * wMo;
// update the features
point.track(cMo);
vpFeatureBuilder::create(p, point);
cdMc = cdMo * cMo.inverse();
tu.buildFrom(cdMc);
// there is no predefined feature for log(Z/Zd); we explicitly build
// the related interaction matrix
logZ.set_s(log(point.get_Z() / pointd.get_Z()));
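// The closed form used below can be derived from the 3D point motion:
// since d(log Z)/dt = Zdot/Z and Zdot = -vz - wx*y*Z + wy*x*Z, we get
//   L_logZ = [ 0  0  -1/Z  -y  x  0 ]
// which is exactly what is filled in LlogZ.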
vpMatrix LlogZ(1, 6);
LlogZ[0][0] = LlogZ[0][1] = LlogZ[0][5] = 0;
LlogZ[0][2] = -1 / p.get_Z();
LlogZ[0][3] = -p.get_y();
LlogZ[0][4] = p.get_x();
logZ.setInteractionMatrix(LlogZ);
// compute the control law
v = task.computeControlLaw();
// send the camera velocity to the controller
robot.setVelocity(vpRobot::CAMERA_FRAME, v);
std::cout << "|| s - s* || = " << (task.getError()).sumSquare() << std::endl;
}
// Display task information
task.print();
// Final camera location
std::cout << cMo << std::endl;
return EXIT_SUCCESS;
}
catch (const vpException &e) {
std::cout << "Catch a ViSP exception: " << e << std::endl;
return EXIT_FAILURE;
}
#else
(void)argc;
(void)argv;
std::cout << "Cannot run this example: install Lapack, Eigen3 or OpenCV" << std::endl;
return EXIT_SUCCESS;
#endif
}