RobotWireFrameSimulator

class RobotWireFrameSimulator

Bases: RobotSimulator

This class serves as a base class for creating robot simulators.

It therefore gathers the parameters and methods needed to build a simulator. Several methods are pure virtual, which means they are specific to each robot, for example the computation of the geometric model.

Warning

This class uses threading capabilities. Thus on Unix-like platforms, the libpthread third-party library needs to be installed. On Windows, the native threading capabilities are used.

Methods

__init__

getExternalCameraParameters

Get the parameters of the virtual external camera.

getExternalCameraPosition

Get the external camera's position relative to the world reference frame.

getInternalView

Overloaded function.

get_cMo

Get the pose between the object and the robot's camera.

get_fMo

Get the pose between the object and the fixed world frame.

initScene

Overloaded function.

setCameraColor

Set the color used to display the camera in the external view.

setConstantSamplingTimeMode

Set the flag used to force the sampling time in the thread computing the robot's displacement to a constant value; see setSamplingTime() .

setCurrentViewColor

Set the color used to display the object at the current position in the robot's camera view.

setDesiredCameraPosition

Set the desired position of the robot's camera relative to the object.

setDesiredViewColor

Set the color used to display the object at the desired position in the robot's camera view.

setDisplayRobotType

setExternalCameraPosition

Set the external camera point of view.

setGraphicsThickness

Specify the thickness of the graphics drawings.

setSamplingTime

Overloaded function.

setSingularityManagement

Set the parameter which enables or disables the singularity management.

setVerbose

Activate extra printing when the robot reaches joint limits.

set_fMo

Set the pose between the object and the fixed world frame.

Inherited Methods

STATE_FORCE_TORQUE_CONTROL

TOOL_FRAME

setMaxTranslationVelocity

Set the maximal translation velocity that can be sent to the robot during a velocity control.

STATE_VELOCITY_CONTROL

saturateVelocities

Saturate velocities.

REFERENCE_FRAME

getMaxTranslationVelocity

Get the maximal translation velocity that can be sent to the robot during a velocity control.

ControlFrameType

Robot control frames.

getPosition

RobotStateType

Robot states.

getNDof

Return robot degrees of freedom number.

getMaxRotationVelocity

Get the maximal rotation velocity that can be sent to the robot during a velocity control.

setRobotState

STATE_STOP

STATE_POSITION_CONTROL

STATE_ACCELERATION_CONTROL

CAMERA_FRAME

setMaxRotationVelocity

Set the maximal rotation velocity that can be sent to the robot during a velocity control.

END_EFFECTOR_FRAME

ARTICULAR_FRAME

getRobotState

MIXT_FRAME

getSamplingTime

Return the sampling time.

JOINT_STATE

Operators

__doc__

__init__

__module__

Attributes

ARTICULAR_FRAME

CAMERA_FRAME

END_EFFECTOR_FRAME

I

JOINT_STATE

MIXT_FRAME

MODEL_3D

MODEL_DH

REFERENCE_FRAME

STATE_ACCELERATION_CONTROL

STATE_FORCE_TORQUE_CONTROL

STATE_POSITION_CONTROL

STATE_STOP

STATE_VELOCITY_CONTROL

TOOL_FRAME

__annotations__

class ControlFrameType(self, value: int)

Bases: pybind11_object

Robot control frames.

Values:

  • REFERENCE_FRAME: Corresponds to a fixed reference frame attached to the robot structure.

  • ARTICULAR_FRAME: Corresponds to the joint state. This value is deprecated. You should rather use vpRobot::JOINT_STATE .

  • JOINT_STATE: Corresponds to the joint state.

  • END_EFFECTOR_FRAME: Corresponds to the robot end-effector frame.

  • CAMERA_FRAME: Corresponds to a frame attached to the camera mounted on the robot end-effector.

  • TOOL_FRAME: Corresponds to a frame attached to the tool (camera, gripper…) mounted on the robot end-effector. This value is equal to vpRobot::CAMERA_FRAME .

  • MIXT_FRAME: Corresponds to a “virtual” frame where translations are expressed in the reference frame, and rotations in the camera frame.
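The MIXT_FRAME convention above can be illustrated with a minimal pure-Python sketch (the `mixt_frame_velocity` helper below is illustrative only and not part of the ViSP API): a mixed-frame velocity simply stacks a translation expressed in the reference frame on top of a rotation expressed in the camera frame.

```python
def mixt_frame_velocity(f_v, c_w):
    """Assemble a 6-dim velocity skew following the MIXT_FRAME convention:
    translations f_v (m/s) are expressed in the reference frame, while
    rotations c_w (rad/s) are expressed in the camera frame."""
    assert len(f_v) == 3 and len(c_w) == 3
    return list(f_v) + list(c_w)

# Move along the reference-frame x axis while rotating about the camera z axis.
v = mixt_frame_velocity([0.1, 0.0, 0.0], [0.0, 0.0, 0.2])
```
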

__and__(self, other: object) object
__eq__(self, other: object) bool
__ge__(self, other: object) bool
__getstate__(self) int
__gt__(self, other: object) bool
__hash__(self) int
__index__(self) int
__init__(self, value: int)
__int__(self) int
__invert__(self) object
__le__(self, other: object) bool
__lt__(self, other: object) bool
__ne__(self, other: object) bool
__or__(self, other: object) object
__rand__(self, other: object) object
__ror__(self, other: object) object
__rxor__(self, other: object) object
__setstate__(self, state: int) None
__xor__(self, other: object) object
property name : str
class DisplayRobotType(self, value: int)

Bases: pybind11_object

Values:

  • MODEL_3D

  • MODEL_DH

__and__(self, other: object) object
__eq__(self, other: object) bool
__ge__(self, other: object) bool
__getstate__(self) int
__gt__(self, other: object) bool
__hash__(self) int
__index__(self) int
__init__(self, value: int)
__int__(self) int
__invert__(self) object
__le__(self, other: object) bool
__lt__(self, other: object) bool
__ne__(self, other: object) bool
__or__(self, other: object) object
__rand__(self, other: object) object
__ror__(self, other: object) object
__rxor__(self, other: object) object
__setstate__(self, state: int) None
__xor__(self, other: object) object
property name : str
class RobotStateType(self, value: int)

Bases: pybind11_object

Robot states.

Values:

  • STATE_STOP: Stops robot motion, especially in velocity and acceleration control.

  • STATE_VELOCITY_CONTROL: Initialize the velocity controller.

  • STATE_POSITION_CONTROL: Initialize the position controller.

  • STATE_ACCELERATION_CONTROL: Initialize the acceleration controller.

  • STATE_FORCE_TORQUE_CONTROL: Initialize the force/torque controller.

__and__(self, other: object) object
__eq__(self, other: object) bool
__ge__(self, other: object) bool
__getstate__(self) int
__gt__(self, other: object) bool
__hash__(self) int
__index__(self) int
__init__(self, value: int)
__int__(self) int
__invert__(self) object
__le__(self, other: object) bool
__lt__(self, other: object) bool
__ne__(self, other: object) bool
__or__(self, other: object) object
__rand__(self, other: object) object
__ror__(self, other: object) object
__rxor__(self, other: object) object
__setstate__(self, state: int) None
__xor__(self, other: object) object
property name : str
__init__(*args, **kwargs)
getExternalCameraParameters(self) visp._visp.core.CameraParameters

Get the parameters of the virtual external camera.

Returns:

The camera parameters.

getExternalCameraPosition(self) visp._visp.core.HomogeneousMatrix

Get the external camera’s position relative to the world reference frame.

Returns:

The pose of the main external camera relative to the world reference frame.

getInternalView(*args, **kwargs)

Overloaded function.

  1. getInternalView(self: visp._visp.robot.RobotWireFrameSimulator, I: visp._visp.core.ImageRGBa) -> None

Get the view of the robot’s camera.

Depending on the initialisation method you used, the current position and possibly the desired position of the object are displayed.

Warning

The objects are displayed using overlays. The image I is not modified.

  2. getInternalView(self: visp._visp.robot.RobotWireFrameSimulator, I: visp._visp.core.ImageGray) -> None

Get the view of the robot’s camera.

Depending on the initialisation method you used, the current position and possibly the desired position of the object are displayed.

Warning

The objects are displayed using overlays. The image I is not modified.

getMaxRotationVelocity(self) float

Get the maximal rotation velocity that can be sent to the robot during a velocity control.

Returns:

Maximum rotation velocity expressed in rad/s.

getMaxTranslationVelocity(self) float

Get the maximal translation velocity that can be sent to the robot during a velocity control.

Returns:

Maximum translational velocity expressed in m/s.

getNDof(self) int

Return robot degrees of freedom number.

getPosition(self, frame: visp._visp.robot.Robot.ControlFrameType) visp._visp.core.ColVector
getRobotState(self) visp._visp.robot.Robot.RobotStateType
getSamplingTime(self) float

Return the sampling time.

Returns:

Sampling time in seconds used to compute the robot displacement from the velocity applied to the robot during this time.

get_cMo(self) visp._visp.core.HomogeneousMatrix

Get the pose between the object and the robot’s camera.

Returns:

The pose between the object and the robot’s camera.

get_fMo(self) visp._visp.core.HomogeneousMatrix

Get the pose between the object and the fixed world frame.

Returns:

The pose between the object and the fixed world frame.
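The two getters above are related through the camera-to-world transform: with cMf the pose of the world frame in the camera frame, cMo = cMf * fMo. A minimal sketch of that composition using plain 4x4 nested lists (not the ViSP HomogeneousMatrix type; the poses below are made-up values):

```python
def matmul4(a, b):
    """Multiply two 4x4 homogeneous matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# World->object pose: object translated 1 m along the world x axis.
fMo = [[1, 0, 0, 1.0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
# Camera->world pose: camera 2 m along x, so the world origin is at x = -2 m
# in the camera frame (kept rotation-free for simplicity).
cMf = [[1, 0, 0, -2.0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

cMo = matmul4(cMf, fMo)  # object at x = -1 m in the camera frame
```
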

initScene(*args, **kwargs)

Overloaded function.

  1. initScene(self: visp._visp.robot.RobotWireFrameSimulator, obj: visp._visp.robot.WireFrameSimulator.SceneObject, desiredObject: visp._visp.robot.WireFrameSimulator.SceneDesiredObject) -> None

  2. initScene(self: visp._visp.robot.RobotWireFrameSimulator, obj: str, desiredObject: str) -> None

Initialize the display. It allows you to choose the type of scene used to display the object at the current position and at the desired position.

You can use any scene you want: provide the path to a .bnd scene file. It is also possible to use a VRML (.wrl) file.

Parameters:
obj

Path to the scene file you want to use for the object at the current position.

desiredObject

Path to the scene file you want to use for the object at the desired position.

  3. initScene(self: visp._visp.robot.RobotWireFrameSimulator, obj: visp._visp.robot.WireFrameSimulator.SceneObject) -> None

  4. initScene(self: visp._visp.robot.RobotWireFrameSimulator, obj: str) -> None

Initialize the display. It allows you to choose the type of scene used to display the object at the current position. The object at the desired position is not displayed.

You can use any scene you want: provide the path to a .bnd scene file or a VRML (.wrl) file.

Parameters:
obj

Path to the scene file you want to use.

static saturateVelocities(v_in: visp._visp.core.ColVector, v_max: visp._visp.core.ColVector, verbose: bool = False) visp._visp.core.ColVector

Saturate velocities.

The code below shows how to use this static method in order to saturate a velocity skew vector.

#include <iostream>

#include <visp3/robot/vpRobot.h>

#ifdef ENABLE_VISP_NAMESPACE
using namespace VISP_NAMESPACE_NAME;
#endif

int main()
{
  // Set a velocity skew vector
  vpColVector v(6);
  v[0] = 0.1;               // vx in m/s
  v[1] = 0.2;               // vy
  v[2] = 0.3;               // vz
  v[3] = vpMath::rad(10);   // wx in rad/s
  v[4] = vpMath::rad(-10);  // wy
  v[5] = vpMath::rad(20);   // wz

  // Set the maximal allowed velocities
  vpColVector v_max(6);
  for (int i=0; i<3; i++)
    v_max[i] = 0.3;             // in translation (m/s)
  for (int i=3; i<6; i++)
    v_max[i] = vpMath::rad(10); // in rotation (rad/s)

  // Compute the saturated velocity skew vector
  vpColVector v_sat = vpRobot::saturateVelocities(v, v_max, true);

  std::cout << "v    : " << v.t() << std::endl;
  std::cout << "v max: " << v_max.t() << std::endl;
  std::cout << "v sat: " << v_sat.t() << std::endl;

  return 0;
}
Parameters:
v_in: visp._visp.core.ColVector

Vector of input velocities to saturate. Translation velocities should be expressed in m/s while rotation velocities in rad/s.

v_max: visp._visp.core.ColVector

Vector of maximal allowed velocities. Maximal translation velocities should be expressed in m/s while maximal rotation velocities in rad/s.

verbose: bool = False

Print a message indicating which axis causes the saturation.

Returns:

Saturated velocities.
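The scaling behind this method can be sketched in pure Python with the same numbers as the C++ example above. This is an illustrative reimplementation, not the ViSP code itself: it assumes the whole vector is scaled uniformly by the largest overshoot ratio so the direction of motion is preserved, which is what makes the result a valid saturated skew.

```python
import math

def saturate_velocities(v_in, v_max):
    """Uniformly scale v_in so that no component exceeds its bound in
    magnitude; scaling all components by the same factor preserves the
    direction of the velocity skew. (Sketch; ViSP's implementation may
    differ in details such as verbosity handling.)"""
    scale = max(abs(v) / vm for v, vm in zip(v_in, v_max))
    if scale <= 1.0:
        return list(v_in)          # nothing exceeds its bound
    return [v / scale for v in v_in]

v = [0.1, 0.2, 0.3,                                   # vx, vy, vz in m/s
     math.radians(10), math.radians(-10), math.radians(20)]  # wx, wy, wz
v_max = [0.3] * 3 + [math.radians(10)] * 3

v_sat = saturate_velocities(v, v_max)  # wz overshoots 2x, so everything halves
```
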

setCameraColor(self, col: visp._visp.core.Color) None

Set the color used to display the camera in the external view.

Parameters:
col: visp._visp.core.Color

The desired color.

setConstantSamplingTimeMode(self, _constantSamplingTimeMode: bool) None

Set the flag used to force the sampling time in the thread computing the robot’s displacement to a constant value; see setSamplingTime() . It may be useful if the main thread (computing the features) is very time-consuming. False by default.

Parameters:
_constantSamplingTimeMode: bool

The new value of the constantSamplingTimeMode flag.

setCurrentViewColor(self, col: visp._visp.core.Color) None

Set the color used to display the object at the current position in the robot’s camera view.

Parameters:
col: visp._visp.core.Color

The desired color.

setDesiredCameraPosition(self, cdMo_: visp._visp.core.HomogeneousMatrix) None

Set the desired position of the robot’s camera relative to the object.

Parameters:
cdMo_: visp._visp.core.HomogeneousMatrix

The desired pose of the camera.

setDesiredViewColor(self, col: visp._visp.core.Color) None

Set the color used to display the object at the desired position in the robot’s camera view.

Parameters:
col: visp._visp.core.Color

The desired color.

setDisplayRobotType(self, dispType: visp._visp.robot.RobotWireFrameSimulator.DisplayRobotType) None
setExternalCameraPosition(self, camMf_: visp._visp.core.HomogeneousMatrix) None

Set the external camera point of view.

Parameters:
camMf_: visp._visp.core.HomogeneousMatrix

The pose of the external camera relative to the world reference frame.

setGraphicsThickness(self, thickness: int) None

Specify the thickness of the graphics drawings.

setMaxRotationVelocity(self, maxVr: float) None

Set the maximal rotation velocity that can be sent to the robot during a velocity control.

setMaxTranslationVelocity(self, maxVt: float) None

Set the maximal translation velocity that can be sent to the robot during a velocity control.

setRobotState(self, newState: visp._visp.robot.Robot.RobotStateType) visp._visp.robot.Robot.RobotStateType
setSamplingTime(*args, **kwargs)

Overloaded function.

  1. setSamplingTime(self: visp._visp.robot.RobotWireFrameSimulator, delta_t: float) -> None

Set the sampling time.

Since the wireframe simulator is threaded, the sampling time cannot be lower than vpTime::getMinTimeForUsleepCall() / 1000 seconds.

Parameters:
delta_t

Sampling time in seconds used to compute the robot displacement from the velocity applied to the robot during this time.

  1. setSamplingTime(self: visp._visp.robot.RobotSimulator, delta_t: float) -> None

Set the sampling time.

Parameters:
delta_t

Sampling time in seconds used to compute the robot displacement from the velocity applied to the robot during this time.
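The role of the sampling time can be illustrated with a one-line explicit Euler step (a sketch, not the simulator's actual integration code): at each thread iteration the commanded velocity is applied for delta_t seconds.

```python
def integrate_position(q, v, delta_t):
    """One simulation step: displace each joint/axis by its commanded
    velocity applied during the sampling time delta_t (explicit Euler)."""
    return [qi + vi * delta_t for qi, vi in zip(q, v)]

q = [0.0, 0.5]                       # current positions
v = [0.2, -0.1]                      # commanded velocities (units/s)
q = integrate_position(q, v, 0.040)  # 40 ms sampling time
```
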

setSingularityManagement(self, sm: bool) None

Set the parameter which enables or disables the singularity management.

setVerbose(self, verbose: bool) None

Activate extra printing when the robot reaches joint limits.

set_fMo(self, fMo_: visp._visp.core.HomogeneousMatrix) None

Set the pose between the object and the fixed world frame.

Parameters:
fMo_: visp._visp.core.HomogeneousMatrix

The pose between the object and the fixed world frame.