RobotAfma4

class RobotAfma4(self, verbose: bool = True)

Bases: Afma4, Robot

Control of Irisa’s cylindrical robot named Afma4.

Implementation of the vpRobot class in order to control Irisa’s Afma4 robot. This robot is a cylindrical robot with five degrees of freedom, only four of which are actuated (see the vpAfma4 documentation). It was manufactured in 1995 by the French Afma-Robots company. In 2008, the low-level controller was changed to a more recent Adept technology based on the MotionBlox controller. A FireWire camera is mounted on the end-effector to allow eye-in-hand visual servoing. This camera is controlled through the vp1394TwoGrabber class. A Servolens lens is attached to the camera. It allows controlling the focal length, the iris and the focus through a serial link. The lens can be controlled using the vpServolens class.

This class allows controlling the Afma4 cylindrical robot in position and velocity:

  • in the joint space ( vpRobot::ARTICULAR_FRAME ),

  • in the fixed reference frame ( vpRobot::REFERENCE_FRAME ),

  • in the camera frame ( vpRobot::CAMERA_FRAME ).

The mixed frame ( vpRobot::MIXT_FRAME ), where translations are expressed in the reference frame and rotations in the camera frame, is not implemented. The end-effector frame ( vpRobot::END_EFFECTOR_FRAME ) is not implemented either.

Translations are expressed in meters for positions and in m/s for velocities. Rotations are expressed in radians for positions and in rad/s for rotation velocities.

The Denavit-Hartenberg representation as well as the direct and inverse kinematics models are given and implemented in the vpAfma4 class.

Warning

Ctrl-C, segmentation faults and other system errors are caught by this class in order to stop the robot.

To communicate with the robot, you may first create an instance of this class by calling the default constructor:

vpRobotAfma4 robot;

This initializes the robot kinematics with the eMc extrinsic camera parameters.

To control the robot in position, you may set the controller to position control and then send the position to reach in a specific frame, as here in the joint space:

vpColVector q(4);
// Set a joint position
q[0] =  M_PI/2; // X axis, in radian
q[1] =  0.2;    // Y axis, in meter
q[2] = -M_PI/2; // A axis, in radian
q[3] =  M_PI/8; // B axis, in radian

// Initialize the controller to position control
robot.setRobotState(vpRobot::STATE_POSITION_CONTROL);

// Moves the robot in the joint space
robot.setPosition(vpRobot::ARTICULAR_FRAME, q);

The robot moves to the specified position with the default positioning velocity vpRobotAfma4::defaultPositioningVelocity . The setPositioningVelocity() method allows changing the maximal velocity used to reach the desired position.

// Set the max velocity to 40%
robot.setPositioningVelocity(40);

// Moves the robot in the joint space
robot.setPosition(vpRobot::ARTICULAR_FRAME, q);

To control the robot in velocity, you may set the controller to velocity control and then send the velocities. To end the velocity control and stop the robot, you have to set the controller to the stop state. Here is an example of velocity control in the joint space:

vpColVector qvel(4);
// Set a joint velocity
qvel[0] = M_PI/8; // X axis, in rad/s
qvel[1] = 0.2;    // Y axis, in m/s
qvel[2] = 0;      // A axis, in rad/s
qvel[3] = M_PI/8; // B axis, in rad/s

// Initialize the controller to velocity control
robot.setRobotState(vpRobot::STATE_VELOCITY_CONTROL);

while (...) {
  // Apply a velocity in the joint space
  robot.setVelocity(vpRobot::ARTICULAR_FRAME, qvel);

  // Compute new velocities qvel...
}

// Stop the robot
robot.setRobotState(vpRobot::STATE_STOP);

It is also possible to measure the robot's current position with the getPosition() method and its current velocities with the getVelocity() method.

For convenience, joint positions can also be read from and written to a position file with the readPosFile() and savePosFile() methods.
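For Python users of these bindings, the sketch below mirrors the C++ examples above: a joint-space position move followed by a short joint-space velocity loop. It is only a minimal sketch, assuming the real robot is connected and that the public import paths visp.robot and visp.core expose the classes documented on this page; only methods and enum values listed here are used.

import math

# Assumed public import paths for the ViSP Python bindings
from visp.core import ColVector
from visp.robot import RobotAfma4

robot = RobotAfma4()  # connects to the low-level controller (requires the real robot)

# --- Position control in the joint space ---
q = ColVector(4)
q[0] = math.pi / 2   # X axis, in radians
q[1] = 0.2           # Y axis, in meters
q[2] = -math.pi / 2  # A axis, in radians
q[3] = math.pi / 8   # B axis, in radians

robot.setRobotState(RobotAfma4.STATE_POSITION_CONTROL)
robot.setPositioningVelocity(40)  # 40% of the maximal velocity
robot.setPosition(RobotAfma4.ARTICULAR_FRAME, q)

# --- Velocity control in the joint space ---
qvel = ColVector(4)
qvel[0] = math.pi / 8  # X axis, in rad/s
qvel[1] = 0.2          # Y axis, in m/s

robot.setRobotState(RobotAfma4.STATE_VELOCITY_CONTROL)
for _ in range(100):
    robot.setVelocity(RobotAfma4.ARTICULAR_FRAME, qvel)

# Stop the robot
robot.setRobotState(RobotAfma4.STATE_STOP)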

The only available constructor.

This constructor calls init() to initialise the connection with the MotionBlox low-level controller, power on the robot and wait 1 second before returning, to be sure the initialisation is done.

It also sets the robot state to vpRobot::STATE_STOP .

Methods

__init__

The only available constructor.

getDisplacement

Get the robot displacement since the last call of this method.

getPosition

Overloaded function.

getPositioningVelocity

Get the maximal velocity percentage used for a position control.

getPowerState

Get the robot power state, indicating whether power is on or off.

getTime

Returns the robot controller current time (in seconds) since the last robot power on.

getVelocity

Overloaded function.

get_cMe

Overloaded function.

get_cVe

Overloaded function.

get_cVf

Overloaded function.

get_eJe

Overloaded function.

get_fJe

Overloaded function.

init

Overloaded function.

move

Moves the robot to the joint position specified in the filename.

powerOff

Power off the robot.

powerOn

Power on the robot.

readPosFile

Read joint positions in a specific Afma4 position file.

savePosFile

Save joint (articular) positions in a specific Afma4 position file.

setPosition

Overloaded function.

setPositioningVelocity

Set the maximal velocity percentage to use for a position control.

setRobotState

Overloaded function.

setVelocity

stopMotion

Stop the robot and set the robot state to vpRobot::STATE_STOP .

Inherited Methods

STATE_ACCELERATION_CONTROL

RobotStateType

Robot control states.

setMaxTranslationVelocity

Set the maximal translation velocity that can be sent to the robot during a velocity control.

getRobotState

setVerbose

END_EFFECTOR_FRAME

CAMERA_FRAME

get_fMe

Compute the forward kinematics (direct geometric model) as an homogeneous matrix.

saturateVelocities

Saturate velocities.

ARTICULAR_FRAME

get_fJe_inverse

Get the inverse jacobian.

JOINT_STATE

TOOL_FRAME

setMaxRotationVelocity

Set the maximal rotation velocity that can be sent to the robot during a velocity control.

STATE_STOP

getForwardKinematics

Compute the forward kinematics (direct geometric model) as an homogeneous matrix.

get_fMc

Overloaded function.

STATE_POSITION_CONTROL

getJointMax

Get max joint values.

getMaxTranslationVelocity

Get the maximal translation velocity that can be sent to the robot during a velocity control.

ControlFrameType

Robot control frames.

njoint

getMaxRotationVelocity

Get the maximal rotation velocity that can be sent to the robot during a velocity control.

REFERENCE_FRAME

STATE_FORCE_TORQUE_CONTROL

STATE_VELOCITY_CONTROL

getNDof

Return robot degrees of freedom number.

MIXT_FRAME

getJointMin

Get min joint values.

Operators

__doc__

__init__

The only available constructor.

__module__

Attributes

ARTICULAR_FRAME

CAMERA_FRAME

END_EFFECTOR_FRAME

JOINT_STATE

MIXT_FRAME

REFERENCE_FRAME

STATE_ACCELERATION_CONTROL

STATE_FORCE_TORQUE_CONTROL

STATE_POSITION_CONTROL

STATE_STOP

STATE_VELOCITY_CONTROL

TOOL_FRAME

__annotations__

defaultPositioningVelocity

njoint

class ControlFrameType(self, value: int)

Bases: pybind11_object

Robot control frames.

Values:

  • REFERENCE_FRAME: Corresponds to a fixed reference frame attached to the robot structure.

  • ARTICULAR_FRAME: Corresponds to the joint state. This value is deprecated. You should rather use vpRobot::JOINT_STATE .

  • JOINT_STATE: Corresponds to the joint state.

  • END_EFFECTOR_FRAME: Corresponds to robot end-effector frame.

  • CAMERA_FRAME: Corresponds to a frame attached to the camera mounted on the robot end-effector.

  • TOOL_FRAME: Corresponds to a frame attached to the tool (camera, gripper…) mounted on the robot end-effector. This value is equal to vpRobot::CAMERA_FRAME .

  • MIXT_FRAME: Corresponds to a “virtual” frame where translations are expressed in the reference frame, and rotations in the camera frame.

__and__(self, other: object) -> object
__eq__(self, other: object) -> bool
__ge__(self, other: object) -> bool
__getstate__(self) -> int
__gt__(self, other: object) -> bool
__hash__(self) -> int
__index__(self) -> int
__init__(self, value: int)
__int__(self) -> int
__invert__(self) -> object
__le__(self, other: object) -> bool
__lt__(self, other: object) -> bool
__ne__(self, other: object) -> bool
__or__(self, other: object) -> object
__rand__(self, other: object) -> object
__ror__(self, other: object) -> object
__rxor__(self, other: object) -> object
__setstate__(self, state: int) -> None
__xor__(self, other: object) -> object
property name : str
class RobotStateType(self, value: int)

Bases: pybind11_object

Robot control states.

Values:

  • STATE_STOP: Stops robot motion, especially in velocity and acceleration control.

  • STATE_VELOCITY_CONTROL: Initialize the velocity controller.

  • STATE_POSITION_CONTROL: Initialize the position controller.

  • STATE_ACCELERATION_CONTROL: Initialize the acceleration controller.

  • STATE_FORCE_TORQUE_CONTROL: Initialize the force/torque controller.

__and__(self, other: object) -> object
__eq__(self, other: object) -> bool
__ge__(self, other: object) -> bool
__getstate__(self) -> int
__gt__(self, other: object) -> bool
__hash__(self) -> int
__index__(self) -> int
__init__(self, value: int)
__int__(self) -> int
__invert__(self) -> object
__le__(self, other: object) -> bool
__lt__(self, other: object) -> bool
__ne__(self, other: object) -> bool
__or__(self, other: object) -> object
__rand__(self, other: object) -> object
__ror__(self, other: object) -> object
__rxor__(self, other: object) -> object
__setstate__(self, state: int) -> None
__xor__(self, other: object) -> object
property name : str
__init__(self, verbose: bool = True)

The only available constructor.

This constructor calls init() to initialise the connection with the MotionBlox low-level controller, power on the robot and wait 1 second before returning, to be sure the initialisation is done.

It also sets the robot state to vpRobot::STATE_STOP .

getDisplacement(self, frame: visp._visp.robot.Robot.ControlFrameType, displacement: visp._visp.core.ColVector) -> None

Get the robot displacement since the last call of this method.

Warning

This functionality is not implemented in the Cartesian space for the moment. It is only available in the joint space ( vpRobot::ARTICULAR_FRAME ).

In camera or reference frame, rotations are expressed with the Euler Rxyz representation.

Parameters:
frame: visp._visp.robot.Robot.ControlFrameType

The frame in which the measured displacement is expressed.

displacement: visp._visp.core.ColVector

The measured displacement since the last call of this method. The dimension of displacement is always 4, the number of joints. Translations are expressed in meters, rotations in radians.
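A minimal Python sketch of this method, under the same import assumptions as the sketch in the class description, polls the joint-space displacement accumulated between two calls; it assumes the ColVector argument is filled in place, as in the C++ API:

from visp.core import ColVector
from visp.robot import RobotAfma4

robot = RobotAfma4()

d = ColVector(4)  # filled in place with the 4 joint displacements
robot.getDisplacement(RobotAfma4.ARTICULAR_FRAME, d)  # reference call
# ... let the robot move for a while ...
robot.getDisplacement(RobotAfma4.ARTICULAR_FRAME, d)  # displacement since the previous call
dq = [d[i] for i in range(4)]  # X (rad), Y (m), A (rad), B (rad)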

getForwardKinematics(self, q: visp._visp.core.ColVector) -> visp._visp.core.HomogeneousMatrix

Compute the forward kinematics (direct geometric model) as an homogeneous matrix.

By forward kinematics we mean here the position and the orientation of the camera relative to the base frame, given the articular positions of the four joints.

This method is the same as get_fMc(const vpColVector & q).

Note

See get_fMc(const vpColVector & q)

Note

See getInverseKinematics()

Parameters:
q: visp._visp.core.ColVector

Articular position of the four joints: q[0] corresponds to the first rotation (joint 1 with value \(q_1\) ) of the turret around the vertical axis, while q[1] corresponds to the vertical translation (joint 2 with value \(q_2\) ), while q[2] and q[3] correspond to the pan and tilt of the camera (respectively joint 4 and 5 with values \(q_4\) and \(q_5\) ). Rotations q[0], q[2] and q[3] are expressed in radians. The translation q[1] is expressed in meters.

Returns:

The homogeneous matrix corresponding to the direct geometric model which expresses the transformation between the fixed frame and the camera frame ( \({^f}M_c\) ) with:

\[{^f}M_c = {^f}M_e * {^e}M_c \]
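As a minimal Python sketch (same import assumptions as the sketch in the class description), the direct geometric model can be evaluated for an arbitrary joint vector without moving the robot:

import math

from visp.core import ColVector
from visp.robot import RobotAfma4

robot = RobotAfma4()

q = ColVector(4)
q[0] = math.pi / 4  # turret rotation q1, in radians
q[1] = 0.1          # vertical translation q2, in meters
q[2] = 0.0          # camera pan q4, in radians
q[3] = 0.0          # camera tilt q5, in radians

fMc = robot.getForwardKinematics(q)  # homogeneous transformation fMc = fMe * eMc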
getJointMax(self) -> visp._visp.core.ColVector

Get max joint values.

Returns:

Maximal joint values for the 4 dof X, Y, A, B. Translation Y is expressed in meters. Rotations X, A and B in radians.

getJointMin(self) -> visp._visp.core.ColVector

Get min joint values.

Returns:

Minimal joint values for the 4 dof X, Y, A, B. Translation Y is expressed in meters. Rotations X,A and B in radians.

getMaxRotationVelocity(self) -> float

Get the maximal rotation velocity that can be sent to the robot during a velocity control.

Returns:

Maximum rotation velocity expressed in rad/s.

getMaxTranslationVelocity(self) -> float

Get the maximal translation velocity that can be sent to the robot during a velocity control.

Returns:

Maximum translational velocity expressed in m/s.

getNDof(self) -> int

Return robot degrees of freedom number.

getPosition(*args, **kwargs)

Overloaded function.

  1. getPosition(self: visp._visp.robot.RobotAfma4, frame: visp._visp.robot.Robot.ControlFrameType, position: visp._visp.core.ColVector) -> None

  2. getPosition(self: visp._visp.robot.RobotAfma4, frame: visp._visp.robot.Robot.ControlFrameType, position: visp._visp.core.ColVector, timestamp: float) -> float

  3. getPosition(self: visp._visp.robot.Robot, frame: visp._visp.robot.Robot.ControlFrameType) -> visp._visp.core.ColVector

getPositioningVelocity(self) -> float

Get the maximal velocity percentage used for a position control.

Note

See setPositioningVelocity()

getPowerState(self) -> bool

Get the robot power state, indicating whether power is on or off.

Note

See powerOn() , powerOff()

Returns:

true if power is on, false if power is off.

getRobotState(self) -> visp._visp.robot.Robot.RobotStateType
getTime(self) -> float

Returns the robot controller current time (in seconds) since the last robot power on.

getVelocity(*args, **kwargs)

Overloaded function.

  1. getVelocity(self: visp._visp.robot.RobotAfma4, frame: visp._visp.robot.Robot.ControlFrameType, velocity: visp._visp.core.ColVector) -> None

  2. getVelocity(self: visp._visp.robot.RobotAfma4, frame: visp._visp.robot.Robot.ControlFrameType, velocity: visp._visp.core.ColVector, timestamp: float) -> float

  3. getVelocity(self: visp._visp.robot.RobotAfma4, frame: visp._visp.robot.Robot.ControlFrameType) -> visp._visp.core.ColVector

  4. getVelocity(self: visp._visp.robot.RobotAfma4, frame: visp._visp.robot.Robot.ControlFrameType, timestamp: float) -> tuple[visp._visp.core.ColVector, float]
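From Python, the overloads that return the vector directly are the most convenient. A minimal sketch (same import assumptions as the sketch in the class description) reads the current joint position and velocity, and the controller time with getTime() ; the timestamped overloads can be used instead when the measure and its timestamp must come from the same call:

from visp.robot import RobotAfma4

robot = RobotAfma4()

q = robot.getPosition(RobotAfma4.ARTICULAR_FRAME)      # current joint position (X, Y, A, B)
qdot = robot.getVelocity(RobotAfma4.ARTICULAR_FRAME)   # current joint velocity
t = robot.getTime()                                     # controller time, in seconds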

get_cMe(*args, **kwargs)

Overloaded function.

  1. get_cMe(self: visp._visp.robot.RobotAfma4, cMe: visp._visp.core.HomogeneousMatrix) -> None

Get the geometric transformation between the camera frame and the end-effector frame. This transformation is constant and corresponds to the extrinsic camera parameters estimated by calibration.

Parameters:
cMe

Transformation between the camera frame and the end-effector frame.

  2. get_cMe(self: visp._visp.robot.Afma4, cMe: visp._visp.core.HomogeneousMatrix) -> None

Get the geometric transformation between the camera frame and the end-effector frame. This transformation is constant and corresponds to the extrinsic camera parameters estimated by hand or by calibration.

Parameters:
cMe

Transformation between the camera frame and the end-effector frame.

get_cVe(*args, **kwargs)

Overloaded function.

  1. get_cVe(self: visp._visp.robot.RobotAfma4, cVe: visp._visp.core.VelocityTwistMatrix) -> None

Get the twist transformation from the camera frame to the end-effector frame. This transformation allows transforming a velocity expressed in the end-effector frame into the camera frame.

Parameters:
cVe

Twist transformation.

  2. get_cVe(self: visp._visp.robot.Afma4, cVe: visp._visp.core.VelocityTwistMatrix) -> None

Get the twist transformation from the camera frame to the end-effector frame. This transformation allows transforming a velocity expressed in the end-effector frame into the camera frame.

Parameters:
cVe

Twist transformation.

get_cVf(*args, **kwargs)

Overloaded function.

  1. get_cVf(self: visp._visp.robot.RobotAfma4, cVf: visp._visp.core.VelocityTwistMatrix) -> None

Get the twist transformation from the camera frame to the reference frame. This transformation allows transforming a velocity expressed in the reference frame into the camera frame.

Parameters:
cVf

Twist transformation.

  2. get_cVf(self: visp._visp.robot.Afma4, q: visp._visp.core.ColVector, cVf: visp._visp.core.VelocityTwistMatrix) -> None

Get the twist transformation from the camera frame to the reference frame. This transformation allows transforming a velocity expressed in the reference frame into the camera frame.

Parameters:
q

Articular position of the four joints: q[0] corresponds to the first rotation (joint 1 with value \(q_1\) ) of the turret around the vertical axis, while q[1] corresponds to the vertical translation (joint 2 with value \(q_2\) ), while q[2] and q[3] correspond to the pan and tilt of the camera (respectively joint 4 and 5 with values \(q_4\) and \(q_5\) ). Rotations q[0], q[2] and q[3] are expressed in radians. The translation q[1] is expressed in meters.

cVf

Twist transformation.

get_eJe(*args, **kwargs)

Overloaded function.

  1. get_eJe(self: visp._visp.robot.RobotAfma4, eJe: visp._visp.core.Matrix) -> None

Get the robot jacobian expressed in the end-effector frame. To have access to the analytic form of this jacobian, see vpAfma4::get_eJe() .

To compute eJe, we communicate with the low-level controller to get the articular joint position of the robot.

Note

See vpAfma4::get_eJe()

Parameters:
eJe

Robot jacobian expressed in the end-effector frame.

  2. get_eJe(self: visp._visp.robot.Afma4, q: visp._visp.core.ColVector, eJe: visp._visp.core.Matrix) -> None

Get the robot jacobian expressed in the end-effector frame:

\[\begin{split}{^e}J_e = \left[\begin{array}{cccc} -c_5(a_1c_4+d_3s_4) & -s_5 & 0 & 0 \\s_5(a_1c_4+d_3s_4) & -c_5 & 0 & 0 \\a_1s_4-d_3c_4 & 0 & 0 & 0 \\-s_5 & 0 & -s_5 & 0 \\-c_5 & 0 & -c_5 & 0 \\0 & 0 & 0 & 1 \\\end{array} \right] \end{split}\]

Note

See get_fJe()

Parameters:
q

Articular position of the four joints: q[0] corresponds to the first rotation (joint 1 with value \(q_1\) ) of the turret around the vertical axis, while q[1] corresponds to the vertical translation (joint 2 with value \(q_2\) ), while q[2] and q[3] correspond to the pan and tilt of the camera (respectively joint 4 and 5 with values \(q_4\) and \(q_5\) ). Rotations q[0], q[2] and q[3] are expressed in radians. The translation q[1] is expressed in meters.

eJe

Robot jacobian expressed in the end-effector frame, with:

\[\begin{split}{^e}J_e = \left[\begin{array}{cc} {^f}R_e^T & 0_{3 \times 3} \\0_{3 \times 3} & {^f}R_e^T \\\end{array} \right] {^f}J_e\end{split}\]

get_fJe(*args, **kwargs)

Overloaded function.

  1. get_fJe(self: visp._visp.robot.RobotAfma4, fJe: visp._visp.core.Matrix) -> None

Get the robot jacobian expressed in the robot reference frame, also called fixed frame. To have access to the analytic form of this jacobian, see vpAfma4::get_fJe() .

To compute fJe, we communicate with the low-level controller to get the articular joint position of the robot.

Note

See vpAfma4::get_fJe()

Parameters:
fJe

Robot jacobian expressed in the reference frame.

  2. get_fJe(self: visp._visp.robot.Afma4, q: visp._visp.core.ColVector, fJe: visp._visp.core.Matrix) -> None

Get the robot jacobian expressed in the robot reference frame, also called fixed frame:

\[\begin{split}{^f}J_e = \left[\begin{array}{cccc} -a_1s_1-d_3c_1 & 0 & 0 & 0 \\a_1c_1-d_3s_1 & 0 & 0 & 0 \\0 & 1 & 0 & 0 \\0 & 0 & 0 & c_{14} \\0 & 0 & 0 & s_{14} \\1 & 0 & 1 & 0 \\\end{array} \right] \end{split}\]

Note

See get_eJe() and get_fJe_inverse()

Parameters:
q

Articular position of the four joints: q[0] corresponds to the first rotation (joint 1 with value \(q_1\) ) of the turret around the vertical axis, while q[1] corresponds to the vertical translation (joint 2 with value \(q_2\) ), while q[2] and q[3] correspond to the pan and tilt of the camera (respectively joint 4 and 5 with values \(q_4\) and \(q_5\) ). Rotations q[0], q[2] and q[3] are expressed in radians. The translation q[1] is expressed in meters.

fJe

Robot jacobian expressed in the robot reference frame.

get_fJe_inverse(self, q: visp._visp.core.ColVector, fJe_inverse: visp._visp.core.Matrix) -> None

Get the inverse jacobian.

\[\begin{split}{^f}J_e^+ = \left[\begin{array}{cccccc} -(a_1s_1+d_3c_1)/(a_1^2+d_3^2) & (a_1c_1-d_3s_1)/(a_1^2+d_3^2) & 0&0&0&0 \\0 & 0 & 1 & 0 & 0 & 0 \\(a_1s_1+d_3c_1)/(a_1^2+d_3^2) & -(a_1c_1-d_3s_1)/(a_1^2+d_3^2) & 0&0&0&1 \\0 & 0 & 0 & c_{14} & s_{14} & 0 \\\end{array} \right] \end{split}\]

Note

See get_eJe() and get_fJe()

Parameters:
q: visp._visp.core.ColVector

Articular position of the four joints: q[0] corresponds to the first rotation (joint 1 with value \(q_1\) ) of the turret around the vertical axis, while q[1] corresponds to the vertical translation (joint 2 with value \(q_2\) ), while q[2] and q[3] correspond to the pan and tilt of the camera (respectively joint 4 and 5 with values \(q_4\) and \(q_5\) ). Rotations q[0], q[2] and q[3] are expressed in radians. The translation q[1] is expressed in meters.

fJe_inverse: visp._visp.core.Matrix

Inverse robot jacobian expressed in the robot reference frame.
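A possible use of this inverse jacobian, sketched in Python under the same import assumptions as the sketch in the class description, is to map a desired velocity twist expressed in the reference frame to joint velocities. The Matrix * ColVector product used at the end is assumed to be bound; if it is not, the product can be computed element-wise.

from visp.core import ColVector, Matrix
from visp.robot import RobotAfma4

robot = RobotAfma4()

# Current joint position, needed to evaluate the jacobian
q = robot.getPosition(RobotAfma4.ARTICULAR_FRAME)

# Inverse jacobian fJe^+ (4 x 6), filled in place
fJe_inv = Matrix()
robot.get_fJe_inverse(q, fJe_inv)

# Desired velocity twist in the reference frame: [vx, vy, vz, wx, wy, wz]
v = ColVector(6)
v[2] = 0.05  # 5 cm/s along the vertical axis

qdot = fJe_inv * v  # joint velocities, usable with setVelocity(ARTICULAR_FRAME, qdot)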

get_fMc(*args, **kwargs)

Overloaded function.

  1. get_fMc(self: visp._visp.robot.Afma4, q: visp._visp.core.ColVector) -> visp._visp.core.HomogeneousMatrix

Compute the forward kinematics (direct geometric model) as an homogeneous matrix.

By forward kinematics we mean here the position and the orientation of the camera relative to the base frame given the articular positions of all the four joints.

This method is the same as getForwardKinematics(const vpColVector & q).

Note

See getForwardKinematics(const vpColVector & q)

Parameters:
q

Articular position of the four joints: q[0] corresponds to the first rotation (joint 1 with value \(q_1\) ) of the turret around the vertical axis, while q[1] corresponds to the vertical translation (joint 2 with value \(q_2\) ), while q[2] and q[3] correspond to the pan and tilt of the camera (respectively joint 4 and 5 with values \(q_4\) and \(q_5\) ). Rotations q[0], q[2] and q[3] are expressed in radians. The translation q[1] is expressed in meters.

Returns:

The homogeneous matrix corresponding to the direct geometric model which expresses the transformation between the fixed frame and the camera frame ( \({^f}M_c\) ) with:

\[{^f}M_c = {^f}M_e * {^e}M_c \]
  2. get_fMc(self: visp._visp.robot.Afma4, q: visp._visp.core.ColVector, fMc: visp._visp.core.HomogeneousMatrix) -> None

Compute the forward kinematics (direct geometric model) as an homogeneous matrix.

By forward kinematics we mean here the position and the orientation of the camera relative to the base frame given the articular positions of all the four joints.

Parameters:
q

Articular position of the four joints: q[0] corresponds to the first rotation (joint 1 with value \(q_1\) ) of the turret around the vertical axis, while q[1] corresponds to the vertical translation (joint 2 with value \(q_2\) ), while q[2] and q[3] correspond to the pan and tilt of the camera (respectively joint 4 and 5 with values \(q_4\) and \(q_5\) ). Rotations q[0], q[2] and q[3] are expressed in radians. The translation q[1] is expressed in meters.

fMc

The homogeneous matrix corresponding to the direct geometric model which expresses the transformation between the fixed frame and the camera frame ( \({^f}M_c\) ) with:

\[{^f}M_c = {^f}M_e * {^e}M_c\]

get_fMe(self, q: visp._visp.core.ColVector, fMe: visp._visp.core.HomogeneousMatrix) -> None

Compute the forward kinematics (direct geometric model) as an homogeneous matrix.

By forward kinematics we mean here the position and the orientation of the end-effector with respect to the base frame, given the articular positions of the four actuated joints.

\[\begin{split}{^f}M_e = \left[\begin{array}{cccc} c_1s_4c_5+s_1c_4c_5 & -c_1s_4s_5-s_1c_4s_5 & c_1c_4-s_1s_4 & a_1c_1-d_3s_1 \\s_1s_4c_5-c_1c_4c_5 & -s_1s_4s_5+c_1c_4s_5 & s_1c_4+c_1s_4 & a_1s_1+d_3c_1 \\-s_5 & -c_5 & 0 & d_4+q_2 \\0 & 0 & 0 & 1 \\\end{array} \right] \end{split}\]
Parameters:
q: visp._visp.core.ColVector

Articular position of the four joints: q[0] corresponds to the first rotation (joint 1 with value \(q_1\) ) of the turret around the vertical axis, while q[1] corresponds to the vertical translation (joint 2 with value \(q_2\) ), while q[2] and q[3] correspond to the pan and tilt of the camera (respectively joint 4 and 5 with values \(q_4\) and \(q_5\) ). Rotations q[0], q[2] and q[3] are expressed in radians. The translation q[1] is expressed in meters.

fMe: visp._visp.core.HomogeneousMatrix

The homogeneous matrix corresponding to the direct geometric model which expresses the transformation between the fixed frame and the end-effector frame ( \({^f}M_e\) ) with

init(*args, **kwargs)

Overloaded function.

  1. init(self: visp._visp.robot.RobotAfma4) -> None

Initialise the connection with the MotionBlox low-level controller, power on the robot and wait 1 second before returning, to be sure the initialisation is done.

  2. init(self: visp._visp.robot.Afma4) -> None

Does nothing for the moment.

move(self, filename: str) -> None

Moves the robot to the joint position specified in the filename. The positioning velocity is set to 10% of the robot maximal velocity.

Note

See readPosFile

Parameters:
filename: str

File containing a joint position.

powerOff(self) -> None

Power off the robot.

Note

See powerOn() , getPowerState()

powerOn(self) -> None

Power on the robot.

Note

See powerOff() , getPowerState()

static readPosFile(filename: str, q: visp._visp.core.ColVector) -> bool

Read joint positions in a specific Afma4 position file.

This position file has to start with a header. The four joint positions are given after the “R:” keyword. The first value corresponds to the rotation of the turret X (joint 1) expressed in degrees, the second to the vertical translation Y (joint 2) expressed in meters, and the last two to the pan and tilt rotations of the camera A and B (joints 4 and 5) expressed in degrees to be more representative for the user. These values are then converted into radians and meters in q . A line starting with the character “#” indicates a comment.

A typical content of such a file is given below:

#AFMA4 - Position - Version 2.01
# file: "myposition.pos "
#
# R: X Y A B
# Joint position: X : rotation of the turret in degrees (joint 1)
#                 Y : vertical translation in meters (joint 2)
#                 A : pan rotation of the camera in degrees (joint 4)
#                 B : tilt rotation of the camera in degrees (joint 5)
#

R: 45 0.3 -20 30

The code below shows how to read a position from a file and move the robot to this position.

vpRobotAfma4 robot;
vpColVector q;        // Joint position
robot.readPosFile("myposition.pos", q); // Set the joint position from the file
robot.setRobotState(vpRobot::STATE_POSITION_CONTROL);

robot.setPositioningVelocity(5); // Positioning velocity set to 5%
robot.setPosition(vpRobot::ARTICULAR_FRAME, q); // Move to the joint position

Note

See savePosFile()

Parameters:
filename: str

Name of the position file to read.

q: visp._visp.core.ColVector

Joint positions: X, Y, A, B. The translation Y is expressed in meters, while the rotations X, A and B are in radians.

Returns:

true if the positions were successfully read from the file, false if an error occurs.
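A Python counterpart of the C++ snippet above, under the same import assumptions as the sketch in the class description, could look like this (the file name is only an illustration):

from visp.core import ColVector
from visp.robot import RobotAfma4

robot = RobotAfma4()

q = ColVector(4)  # filled in place from the file
if RobotAfma4.readPosFile("myposition.pos", q):  # static method
    robot.setRobotState(RobotAfma4.STATE_POSITION_CONTROL)
    robot.setPositioningVelocity(5)  # positioning velocity set to 5%
    robot.setPosition(RobotAfma4.ARTICULAR_FRAME, q)  # move to the joint position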

static saturateVelocities(v_in: visp._visp.core.ColVector, v_max: visp._visp.core.ColVector, verbose: bool = False) -> visp._visp.core.ColVector

Saturate velocities.

The code below shows how to use this static method in order to saturate a velocity skew vector.

#include <iostream>

#include <visp3/core/vpColVector.h>
#include <visp3/core/vpMath.h>
#include <visp3/robot/vpRobot.h>

int main()
{
  // Set a velocity skew vector
  vpColVector v(6);
  v[0] = 0.1;               // vx in m/s
  v[1] = 0.2;               // vy
  v[2] = 0.3;               // vz
  v[3] = vpMath::rad(10);   // wx in rad/s
  v[4] = vpMath::rad(-10);  // wy
  v[5] = vpMath::rad(20);   // wz

  // Set the maximal allowed velocities
  vpColVector v_max(6);
  for (int i=0; i<3; i++)
    v_max[i] = 0.3;             // in translation (m/s)
  for (int i=3; i<6; i++)
    v_max[i] = vpMath::rad(10); // in rotation (rad/s)

  // Compute the saturated velocity skew vector
  vpColVector v_sat = vpRobot::saturateVelocities(v, v_max, true);

  std::cout << "v    : " << v.t() << std::endl;
  std::cout << "v max: " << v_max.t() << std::endl;
  std::cout << "v sat: " << v_sat.t() << std::endl;

  return 0;
}
Parameters:
v_in: visp._visp.core.ColVector

Vector of input velocities to saturate. Translation velocities should be expressed in m/s while rotation velocities in rad/s.

v_max: visp._visp.core.ColVector

Vector of maximal allowed velocities. Maximal translation velocities should be expressed in m/s while maximal rotation velocities in rad/s.

verbose: bool = False

Print a message indicating which axis causes the saturation.

Returns:

Saturated velocities.

static savePosFile(filename: str, q: visp._visp.core.ColVector) -> bool

Save joint (articular) positions in a specific Afma4 position file.

This position file starts with a header on the first line. After conversion of the rotations to degrees, the joint position q is written on a line starting with the keyword “R: “. See the readPosFile() documentation for an example of such a file.

Note

See readPosFile()

Parameters:
filename: str

Name of the position file to create.

q: visp._visp.core.ColVector

Joint positions: X, Y, A, B. The translation Y is expressed in meters, while the rotations X, A and B are in radians.

Returns:

true if the positions were successfully saved in the file, false if an error occurs.
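For instance, the current joint position can be dumped to a file and replayed later with readPosFile() or move() . A minimal Python sketch, same import assumptions as the sketch in the class description, with an illustrative file name:

from visp.robot import RobotAfma4

robot = RobotAfma4()

q = robot.getPosition(RobotAfma4.ARTICULAR_FRAME)  # current joint position
if not RobotAfma4.savePosFile("afma4_current.pos", q):  # static method
    print("Could not write afma4_current.pos")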

setMaxRotationVelocity(self, maxVr: float) -> None

Set the maximal rotation velocity that can be sent to the robot during a velocity control.

setMaxTranslationVelocity(self, maxVt: float) -> None

Set the maximal translation velocity that can be sent to the robot during a velocity control.

setPosition(*args, **kwargs)

Overloaded function.

  1. setPosition(self: visp._visp.robot.RobotAfma4, frame: visp._visp.robot.Robot.ControlFrameType, position: visp._visp.core.ColVector) -> None

  2. setPosition(self: visp._visp.robot.RobotAfma4, frame: visp._visp.robot.Robot.ControlFrameType, q1: float, q2: float, q4: float, q5: float) -> None

  3. setPosition(self: visp._visp.robot.RobotAfma4, filename: str) -> None

Move to an absolute joint position with a given percentage of max velocity. The robot state is set to position control. The percentage of max velocity is set with setPositioningVelocity() . The position to reach is defined in the position file.

This method has the same behavior as the sample code given below:

vpColVector q;

robot.readPosFile("MyPositionFilename.pos", q);
robot.setRobotState(vpRobot::STATE_POSITION_CONTROL);
robot.setPosition(vpRobot::ARTICULAR_FRAME, q);

Note

See setPositioningVelocity()

Parameters:
filename

Name of the position file to read. The readPosFile() documentation shows a typical content of such a position file.
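The three overloads map directly to Python. The sketch below (same import assumptions as the sketch in the class description) shows a move given a joint vector, a move given the four joint values, and a move read from a position file; the file name is only an illustration.

import math

from visp.core import ColVector
from visp.robot import RobotAfma4

robot = RobotAfma4()
robot.setRobotState(RobotAfma4.STATE_POSITION_CONTROL)
robot.setPositioningVelocity(20)  # 20% of the maximal velocity

# 1. Joint vector (X, Y, A, B)
q = ColVector(4)
q[0] = math.pi / 4
robot.setPosition(RobotAfma4.ARTICULAR_FRAME, q)

# 2. Individual joint values q1 (rad), q2 (m), q4 (rad), q5 (rad)
robot.setPosition(RobotAfma4.ARTICULAR_FRAME, math.pi / 4, 0.1, 0.0, 0.0)

# 3. Position file (see readPosFile() for the expected format)
robot.setPosition("myposition.pos")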

setPositioningVelocity(self, velocity: float) -> None

Set the maximal velocity percentage to use for a position control.

The default positioning velocity is defined by vpRobotAfma4::defaultPositioningVelocity . This method allows changing this default positioning velocity.

vpColVector q(4);
q = 0; // position in meters and radians

vpRobotAfma4 robot;

robot.setRobotState(vpRobot::STATE_POSITION_CONTROL);

// Set the max velocity to 20%
robot.setPositioningVelocity(20);

// Moves the robot to the joint position [0,0,0,0]
robot.setPosition(vpRobot::ARTICULAR_FRAME, q);

Note

See getPositioningVelocity()

Parameters:
velocity: float

Percentage of the maximal velocity. Values should be in ]0:100].

setRobotState(*args, **kwargs)

Overloaded function.

  1. setRobotState(self: visp._visp.robot.RobotAfma4, newState: visp._visp.robot.Robot.RobotStateType) -> visp._visp.robot.Robot.RobotStateType

Change the robot state.

Parameters:
newState

New requested robot state.

  2. setRobotState(self: visp._visp.robot.Robot, newState: visp._visp.robot.Robot.RobotStateType) -> visp._visp.robot.Robot.RobotStateType

setVelocity(self, frame: visp._visp.robot.Robot.ControlFrameType, velocity: visp._visp.core.ColVector) -> None
setVerbose(self, verbose: bool) -> None
stopMotion(self) -> None

Stop the robot and set the robot state to vpRobot::STATE_STOP .