SimulatorCamera

class SimulatorCamera(self)

Bases: RobotSimulator

Class that defines the simplest robot: a free flying camera.

This free-flying camera has 6 dof: 3 in translation and 3 in rotation. It evolves like a gantry robot with respect to a world frame. This class is similar to the vpRobotCamera class except that here the position of the robot is given as the transformation from world frame to camera frame, wMc. This representation is more intuitive than the one implemented in vpRobotCamera, where the transformation from camera frame to world frame, cMw, is considered.

For this particular simulated robot, the end-effector and camera frames coincide, which means that the cMe transformation is equal to the identity.

The robot Jacobian expressed in the end-effector frame \({^e}{\bf J}_e\) is also set to the identity (see get_eJe()).

The following code shows how to control this robot in position and velocity.

#include <iostream>

#include <visp3/core/vpMath.h>
#include <visp3/robot/vpSimulatorCamera.h>

int main()
{
  vpHomogeneousMatrix wMc;
  vpSimulatorCamera robot;

  robot.getPosition(wMc); // Position of the camera in the world frame
  std::cout << "Default position of the camera in the world frame wMc:\n" << wMc << std::endl;

  wMc[2][3] = 1.; // Camera frame is 1 meter along z axis in front of the world frame
  robot.setPosition(wMc); // Set the new position of the camera in the world frame
  std::cout << "New position of the camera in the world frame wMc:\n" << wMc << std::endl;

  robot.setSamplingTime(0.100); // Modify the default sampling time to 0.1 second
  robot.setMaxTranslationVelocity(1.); // vx, vy and vz max set to 1 m/s
  robot.setMaxRotationVelocity(vpMath::rad(90)); // wx, wy and wz max set to 90 deg/s

  vpColVector v(6);
  v = 0;
  v[2] = 1.; // set v_z to 1 m/s
  robot.setVelocity(vpRobot::CAMERA_FRAME, v);
  // The robot has moved by 0.1 meter along the z axis during one sampling period
  robot.getPosition(wMc); // Position of the camera in the world frame
  std::cout << "New position of the camera wMc:\n" << wMc << std::endl;
}
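
Since this page documents the Python bindings, here is a rough Python transcription of the same sequence. It is only a sketch: the public import paths (visp.core, visp.robot) and the HomogeneousMatrix(tx, ty, tz, tux, tuy, tuz) constructor are assumptions that may need adjusting to your installation.

import math

from visp.core import ColVector, HomogeneousMatrix  # assumed public import path
from visp.robot import SimulatorCamera               # assumed public import path

robot = SimulatorCamera()

wMc = robot.getPosition()  # position of the camera in the world frame
print("Default position of the camera in the world frame wMc:\n", wMc)

# Place the camera 1 meter along the z axis of the world frame
# (the (tx, ty, tz, tux, tuy, tuz) constructor is assumed from the C++ API)
wMc = HomogeneousMatrix(0.0, 0.0, 1.0, 0.0, 0.0, 0.0)
robot.setPosition(wMc)
print("New position of the camera in the world frame wMc:\n", wMc)

robot.setSamplingTime(0.100)                    # sampling time set to 0.1 second
robot.setMaxTranslationVelocity(1.0)            # vx, vy and vz max set to 1 m/s
robot.setMaxRotationVelocity(math.radians(90))  # wx, wy and wz max set to 90 deg/s

v = ColVector(6, 0.0)  # 6-dim velocity skew vector, initialized to zero
v[2] = 1.0             # set v_z to 1 m/s
robot.setVelocity(SimulatorCamera.CAMERA_FRAME, v)

# After one sampling period the camera has moved by 0.1 meter along the z axis
wMc = robot.getPosition()
print("New position of the camera wMc:\n", wMc)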

To see how this class can be used to achieve a visual servoing simulation, you can follow tutorial-ibvs.

Default constructor that sets the transformation between world frame and camera frame to identity.

Methods

__init__ : Default constructor that sets the transformation between world frame and camera frame to identity.

getPosition : Overloaded function.

get_cVe : Get the twist transformation from camera frame to end-effector frame.

get_eJe : Get the robot Jacobian expressed in the end-effector frame.

setPosition : Set the robot position in the world frame.

setVelocity

Inherited Methods

STATE_ACCELERATION_CONTROL

RobotStateType : Robot states.

setMaxTranslationVelocity : Set the maximal translation velocity that can be sent to the robot during velocity control.

getRobotState

setVerbose

END_EFFECTOR_FRAME

CAMERA_FRAME

saturateVelocities : Saturate velocities.

ARTICULAR_FRAME

JOINT_STATE

TOOL_FRAME

setMaxRotationVelocity : Set the maximal rotation velocity that can be sent to the robot during velocity control.

getNDof : Return the number of degrees of freedom of the robot.

STATE_STOP

STATE_POSITION_CONTROL

getMaxTranslationVelocity : Get the maximal translation velocity that can be sent to the robot during velocity control.

setRobotState

getSamplingTime : Return the sampling time.

ControlFrameType : Robot control frames.

getMaxRotationVelocity : Get the maximal rotation velocity that can be sent to the robot during velocity control.

REFERENCE_FRAME

STATE_FORCE_TORQUE_CONTROL

setSamplingTime : Set the sampling time.

STATE_VELOCITY_CONTROL

MIXT_FRAME

Operators

__doc__

__init__ : Default constructor that sets the transformation between world frame and camera frame to identity.

__module__

Attributes

ARTICULAR_FRAME

CAMERA_FRAME

END_EFFECTOR_FRAME

JOINT_STATE

MIXT_FRAME

REFERENCE_FRAME

STATE_ACCELERATION_CONTROL

STATE_FORCE_TORQUE_CONTROL

STATE_POSITION_CONTROL

STATE_STOP

STATE_VELOCITY_CONTROL

TOOL_FRAME

__annotations__

class ControlFrameType(self, value: int)

Bases: pybind11_object

Robot control frames.

Values:

  • REFERENCE_FRAME: Corresponds to a fixed reference frame attached to the robot structure.

  • ARTICULAR_FRAME: Corresponds to the joint state. This value is deprecated. You should rather use vpRobot::JOINT_STATE .

  • JOINT_STATE: Corresponds to the joint state.

  • END_EFFECTOR_FRAME: Corresponds to robot end-effector frame.

  • CAMERA_FRAME: Corresponds to a frame attached to the camera mounted on the robot end-effector.

  • TOOL_FRAME: Corresponds to a frame attached to the tool (camera, gripper…) mounted on the robot end-effector. This value is equal to vpRobot::CAMERA_FRAME .

  • MIXT_FRAME: Corresponds to a “virtual” frame where translations are expressed in the reference frame, and rotations in the camera frame.

__and__(self, other: object) -> object
__eq__(self, other: object) -> bool
__ge__(self, other: object) -> bool
__getstate__(self) -> int
__gt__(self, other: object) -> bool
__hash__(self) -> int
__index__(self) -> int
__init__(self, value: int)
__int__(self) -> int
__invert__(self) -> object
__le__(self, other: object) -> bool
__lt__(self, other: object) -> bool
__ne__(self, other: object) -> bool
__or__(self, other: object) -> object
__rand__(self, other: object) -> object
__ror__(self, other: object) -> object
__rxor__(self, other: object) -> object
__setstate__(self, state: int) -> None
__xor__(self, other: object) -> object
property name : str
class RobotStateType(self, value: int)

Bases: pybind11_object

Robot states.

Values:

  • STATE_STOP: Stops robot motion, in particular in velocity and acceleration control.

  • STATE_VELOCITY_CONTROL: Initialize the velocity controller.

  • STATE_POSITION_CONTROL: Initialize the position controller.

  • STATE_ACCELERATION_CONTROL: Initialize the acceleration controller.

  • STATE_FORCE_TORQUE_CONTROL: Initialize the force/torque controller.

__and__(self, other: object) -> object
__eq__(self, other: object) -> bool
__ge__(self, other: object) -> bool
__getstate__(self) -> int
__gt__(self, other: object) -> bool
__hash__(self) -> int
__index__(self) -> int
__init__(self, value: int)
__int__(self) -> int
__invert__(self) -> object
__le__(self, other: object) -> bool
__lt__(self, other: object) -> bool
__ne__(self, other: object) -> bool
__or__(self, other: object) -> object
__rand__(self, other: object) -> object
__ror__(self, other: object) -> object
__rxor__(self, other: object) -> object
__setstate__(self, state: int) -> None
__xor__(self, other: object) -> object
property name : str
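
As a hedged illustration of how these states can be used from Python (import path assumed, as in the sketch above), one can switch the simulated robot to velocity control and query its current state:

from visp.robot import SimulatorCamera  # assumed public import path

robot = SimulatorCamera()
robot.setRobotState(SimulatorCamera.STATE_VELOCITY_CONTROL)  # setRobotState returns a RobotStateType
print(robot.getRobotState())  # expected to report the velocity-control state
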
__init__(self)

Default constructor that sets the transformation between world frame and camera frame to identity.

getMaxRotationVelocity(self) -> float

Get the maximal rotation velocity that can be sent to the robot during velocity control.

Returns:

Maximum rotation velocity expressed in rad/s.

getMaxTranslationVelocity(self) -> float

Get the maximal translation velocity that can be sent to the robot during velocity control.

Returns:

Maximum translational velocity expressed in m/s.

getNDof(self) -> int

Return the number of degrees of freedom of the robot.

getPosition(*args, **kwargs)

Overloaded function.

  1. getPosition(self: visp._visp.robot.SimulatorCamera) -> visp._visp.core.HomogeneousMatrix

Return the camera position in the world frame.

  2. getPosition(self: visp._visp.robot.SimulatorCamera, wMc: visp._visp.core.HomogeneousMatrix) -> None

Get the camera position in the world frame.

  3. getPosition(self: visp._visp.robot.SimulatorCamera, frame: visp._visp.robot.Robot.ControlFrameType, q: visp._visp.core.ColVector) -> None
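
A minimal sketch of the first two overloads, assuming the same import paths as in the earlier example:

from visp.robot import SimulatorCamera  # assumed public import path

robot = SimulatorCamera()
wMc = robot.getPosition()  # overload 1: returns the camera pose as a HomogeneousMatrix
robot.getPosition(wMc)     # overload 2: fills an existing HomogeneousMatrix in place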

getRobotState(self) -> visp._visp.robot.Robot.RobotStateType
getSamplingTime(self) -> float

Return the sampling time.

Returns:

Sampling time in seconds used to compute the robot displacement from the velocity applied to the robot during this time.

get_cVe(self, cVe: visp._visp.core.VelocityTwistMatrix) -> None

Get the twist transformation from camera frame to end-effector frame. This transformation allows converting a velocity expressed in the end-effector frame into the camera frame.

Parameters:
cVe: visp._visp.core.VelocityTwistMatrix

Twist transformation. Here this transformation is equal to the identity since the camera frame and the end-effector frame are at the same location.

get_eJe(self, eJe: visp._visp.core.Matrix) -> None

Get the robot Jacobian expressed in the end-effector frame. For this simple robot the Jacobian is the identity.
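
Both quantities can be retrieved as in the following sketch, with the import paths again assumed:

from visp.core import Matrix, VelocityTwistMatrix  # assumed public import path
from visp.robot import SimulatorCamera             # assumed public import path

robot = SimulatorCamera()

cVe = VelocityTwistMatrix()
robot.get_cVe(cVe)  # filled with the identity: camera and end-effector frames coincide
print(cVe)

eJe = Matrix()
robot.get_eJe(eJe)  # filled with the identity Jacobian
print(eJe)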

static saturateVelocities(v_in: visp._visp.core.ColVector, v_max: visp._visp.core.ColVector, verbose: bool = False) -> visp._visp.core.ColVector

Saturate velocities.

The code below shows how to use this static method in order to saturate a velocity skew vector.

#include <iostream>

#include <visp3/core/vpMath.h>
#include <visp3/robot/vpRobot.h>

int main()
{
  // Set a velocity skew vector
  vpColVector v(6);
  v[0] = 0.1;               // vx in m/s
  v[1] = 0.2;               // vy
  v[2] = 0.3;               // vz
  v[3] = vpMath::rad(10);   // wx in rad/s
  v[4] = vpMath::rad(-10);  // wy
  v[5] = vpMath::rad(20);   // wz

  // Set the maximal allowed velocities
  vpColVector v_max(6);
  for (int i=0; i<3; i++)
    v_max[i] = 0.3;             // in translation (m/s)
  for (int i=3; i<6; i++)
    v_max[i] = vpMath::rad(10); // in rotation (rad/s)

  // Compute the saturated velocity skew vector
  vpColVector v_sat = vpRobot::saturateVelocities(v, v_max, true);

  std::cout << "v    : " << v.t() << std::endl;
  std::cout << "v max: " << v_max.t() << std::endl;
  std::cout << "v sat: " << v_sat.t() << std::endl;

  return 0;
}
Parameters:
v_in: visp._visp.core.ColVector

Vector of input velocities to saturate. Translation velocities should be expressed in m/s while rotation velocities in rad/s.

v_max: visp._visp.core.ColVector

Vector of maximal allowed velocities. Maximal translation velocities should be expressed in m/s while maximal rotation velocities in rad/s.

verbose: bool = False

Print a message indicating which axis causes the saturation.

Returns:

Saturated velocities.
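
For the Python bindings, a rough counterpart of the C++ snippet above might look like this; the import paths and the ColVector(size, value) constructor are assumptions:

import math

from visp.core import ColVector         # assumed public import path
from visp.robot import SimulatorCamera  # assumed public import path

# Velocity skew vector: 3 translations (m/s) followed by 3 rotations (rad/s)
v = ColVector(6, 0.0)
v[0], v[1], v[2] = 0.1, 0.2, 0.3
v[3], v[4], v[5] = math.radians(10), math.radians(-10), math.radians(20)

# Maximal allowed velocities
v_max = ColVector(6, 0.0)
for i in range(3):
    v_max[i] = 0.3               # translation limit in m/s
for i in range(3, 6):
    v_max[i] = math.radians(10)  # rotation limit in rad/s

# Saturate; verbose=True prints which axis caused the saturation
v_sat = SimulatorCamera.saturateVelocities(v, v_max, True)
print("v sat:", v_sat)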

setMaxRotationVelocity(self, maxVr: float) -> None

Set the maximal rotation velocity that can be sent to the robot during velocity control.

setMaxTranslationVelocity(self, maxVt: float) -> None

Set the maximal translation velocity that can be sent to the robot during velocity control.

setPosition(self, wMc: visp._visp.core.HomogeneousMatrix) -> None

Set the robot position in the world frame.

Parameters:
wMc: visp._visp.core.HomogeneousMatrix

Transformation from world frame to camera frame.

setRobotState(self, newState: visp._visp.robot.Robot.RobotStateType) -> visp._visp.robot.Robot.RobotStateType
setSamplingTime(self, delta_t: float) -> None

Set the sampling time.

Parameters:
delta_t: float

Sampling time in seconds used to compute the robot displacement from the velocity applied to the robot during this time.

setVelocity(self, frame: visp._visp.robot.Robot.ControlFrameType, vel: visp._visp.core.ColVector) -> None
setVerbose(self, verbose: bool) -> None