Visual Servoing Platform
version 3.3.0 under development (2020-02-17)
This tutorial focuses on the estimation of the homogeneous transformation between the robot end-effector and the camera frame. As a use case, we will consider in this tutorial a Panda robot in its research version from Franka Emika, equipped with an Intel Realsense SR300 camera mounted on its end-effector. The principle of this extrinsic calibration applies equally to any other robot equipped with any other camera attached to the robot end-effector.
Let us consider:
- fMe, the homogeneous transformation between the robot base frame and the end-effector frame,
- cMo, the homogeneous transformation between the camera frame and the chessboard frame,
- eMc, the homogeneous transformation between the end-effector and the camera frame, which is the transformation we want to estimate.
The calibration process described in this tutorial consists in three stages:
1. acquiring couples of fMe poses and corresponding images of a chessboard,
2. computing the cMo pose of the chessboard for each image,
3. estimating the eMc transformation from these couples of poses.
Note that all the material (source code) described in this tutorial is part of the ViSP source code (in the tutorial/calibration
folder) and can be downloaded using the following command:
In order to compute the pose of the chessboard from its image, we need the camera intrinsic parameters. Depending on the device, these parameters may be part of the device SDK or firmware. This is for example the case for the SR300 camera considered in this tutorial, whose intrinsic parameters can be retrieved using vpRealSense2::getCameraParameters().
If you have another camera, or if you want a better estimation than the factory parameters, you may follow Tutorial: Camera intrinsic calibration. Otherwise you can skip this section.
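For reference, the intrinsic parameters end up stored in an XML file following ViSP's vpXmlParserCamera layout. The fragment below is indicative of that layout; the numeric values are placeholders, not actual SR300 parameters:

```xml
<?xml version="1.0"?>
<root>
  <camera>
    <name>Camera</name>
    <image_width>640</image_width>
    <image_height>480</image_height>
    <model>
      <!-- px, py: focal lengths in pixels; u0, v0: principal point -->
      <type>perspectiveProjWithoutDistortion</type>
      <px>600.0</px>
      <py>600.0</py>
      <u0>320.0</u0>
      <v0>240.0</v0>
    </model>
  </camera>
</root>
```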
Download and print a black and white chessboard [OpenCV_Chessboard.pdf].
Glue the chessboard on a flat surface and put it under the camera.
The objective here is to complete step 1. by acquiring couples of fMe poses and the corresponding images of the chessboard. To this end, move the camera attached to the robot to different positions; at least 8 to 10 positions are required. To define a good set of positions, imagine a half sphere over the chessboard and select positions that discretise as much as possible the whole half-sphere surface. At each position the whole chessboard should be visible, as large as possible in the image.
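The half-sphere heuristic above can be sketched as follows. This is a hypothetical helper (not part of the tutorial binaries) that samples candidate camera positions on a half sphere centered on the chessboard; orienting the camera toward the chessboard at each position is left to the operator:

```python
import math

def half_sphere_viewpoints(radius, n_rings=2, n_per_ring=4):
    """Sample candidate camera positions (x, y, z) on a half sphere of
    the given radius centered on the chessboard (z = 0 plane, z up)."""
    points = [(0.0, 0.0, radius)]  # directly above the chessboard
    for i in range(1, n_rings + 1):
        # Elevation decreases from near-vertical toward the horizon
        elevation = math.pi / 2 - i * (math.pi / 2) / (n_rings + 1)
        for j in range(n_per_ring):
            azimuth = 2 * math.pi * j / n_per_ring
            x = radius * math.cos(elevation) * math.cos(azimuth)
            y = radius * math.cos(elevation) * math.sin(azimuth)
            z = radius * math.sin(elevation)
            points.append((x, y, z))
    return points

# 1 top view + 2 rings of 4 = 9 candidate positions at 0.3 m
viewpoints = half_sphere_viewpoints(0.3)
```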
If you have a Franka robot, follow Configure Ethernet and Connect to Franka desk instructions to power on the Panda robot. Then if this is not already done, follow Install Franka library and Configure and build ViSP.
Connect the Realsense SR300 camera to the computer, put the chessboard in the camera field of view, enter the tutorial/calibration
folder and run the tutorial-franka-acquire-calib-data
binary to acquire the images and the corresponding robot end-effector poses:
$ cd tutorial/calibration
$ ./tutorial-franka-acquire-calib-data
By default the robot controller IP is 192.168.1.1
. If your Franka has another IP (let's say 10.0.0.2), use the --ip
option:
$ ./tutorial-franka-acquire-calib-data --ip 10.0.0.2
Click with the left mouse button to acquire data. It records the following outputs:
- camera.xml: XML file that contains the intrinsic camera parameters extracted from the camera firmware
- image-<number>.png + pose-fMe-<number>.yaml with number starting from 1. pose-fMe-<number>.yaml
is the pose of the end-effector expressed in the robot base frame, while image-<number>.png
is the image captured at the corresponding robot position.
Move the robot to another position such that the chessboard remains visible in the image and repeat the data acquisition with a left mouse click. We recommend acquiring data at 8 to 10 different robot positions.
A right mouse click ends this step and quits the binary.
This is the output when 8 different positions are considered:
The source code corresponding to the binary is available in tutorial-franka-acquire-calib-data.cpp. If your setup is different, it can easily be adapted to your robot or camera.
Here we will complete step 2. by computing for each image the corresponding pose of the chessboard, using the camera intrinsic parameters recorded in the camera.xml
file.
To this end, run the tutorial-chessboard-pose
binary to compute the pose of the chessboard with respect to the camera frame. Here we consider that the size of each chessboard square is 0.027 by 0.027 meter. Modify the --square_size
option according to your chessboard.
$ ./tutorial-chessboard-pose --square_size 0.027 --input image-%d.png
It produces as output the corresponding poses in pose-cMo-<number>.yaml
files.
The source code corresponding to the binary is available in tutorial-chessboard-pose.cpp.
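The --square_size option defines the 3D model of the chessboard corners used for pose computation: each inner corner has known coordinates on the z = 0 plane of the chessboard frame. A minimal sketch of how such object points are built (the 9x6 inner-corner count is a hypothetical example; use your own board's dimensions):

```python
def chessboard_points(n_cols, n_rows, square_size):
    """3D coordinates (in meters) of the inner chessboard corners,
    expressed in the chessboard frame (all corners lie on z = 0)."""
    return [(j * square_size, i * square_size, 0.0)
            for i in range(n_rows) for j in range(n_cols)]

# 9x6 inner corners with 0.027 m squares, as in the command line above
points = chessboard_points(9, 6, 0.027)
```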
The final step 3. consists in estimating the eMc transformation from the couples of fMe and cMo poses.
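Since the chessboard and the robot base do not move during acquisition, the product fMe_i · eMc · cMo_i equals the constant chessboard pose fMo for every robot position i; this is the constraint the hand-eye calibration exploits. Below is a small pure-Python sanity check of this relation on synthetic data (all poses and numeric values are hypothetical, not produced by the tutorial binaries):

```python
import math

def matmul(A, B):
    """Product of two 4x4 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def inverse(T):
    """Inverse of a 4x4 homogeneous transformation: [R^T, -R^T t]."""
    R = [row[:3] for row in T[:3]]
    t = [T[i][3] for i in range(3)]
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    ti = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    return [Rt[0] + [ti[0]], Rt[1] + [ti[1]], Rt[2] + [ti[2]],
            [0.0, 0.0, 0.0, 1.0]]

def rot_z(theta, tx, ty, tz):
    """Homogeneous transform: rotation of theta about z + translation."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0, tx], [s, c, 0.0, ty],
            [0.0, 0.0, 1.0, tz], [0.0, 0.0, 0.0, 1.0]]

eMc = rot_z(0.1, 0.03, 0.0, 0.05)  # hypothetical ground-truth extrinsics
fMo = rot_z(0.0, 0.5, 0.2, 0.0)    # fixed chessboard pose in base frame

residuals = []
for theta, x in [(0.3, 0.4), (1.2, 0.6), (-0.7, 0.5)]:
    fMe = rot_z(theta, x, 0.1, 0.4)               # simulated robot pose
    cMo = matmul(inverse(matmul(fMe, eMc)), fMo)  # what the camera sees
    fMo_check = matmul(matmul(fMe, eMc), cMo)     # must reproduce fMo
    residuals.append(max(abs(fMo_check[i][j] - fMo[i][j])
                         for i in range(4) for j in range(4)))
assert max(residuals) < 1e-9  # constant across all positions
```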
Complete the calibration by running the tutorial-hand-eye-calibration
binary with the number of poses to consider:
$ ./tutorial-hand-eye-calibration --ndata <number of images or robot poses>
It produces as output the end-effector to camera frame transformation in eMc.yaml
and eMc.txt
files.
The result of the calibration is available in the eMc.yaml
file that contains the eMc transformation as a pose vector, with the translation in meters and the rotation as a θu axis-angle vector with values in radians.
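Such a pose vector (3 translation components plus a θu axis-angle rotation) can be turned back into a 4x4 homogeneous matrix with the Rodrigues formula. A minimal pure-Python sketch of that conversion (an illustration of the representation, not ViSP's own implementation):

```python
import math

def pose_vector_to_matrix(tx, ty, tz, tux, tuy, tuz):
    """Convert a pose vector (translation in meters, theta-u rotation in
    radians) to a 4x4 homogeneous matrix via the Rodrigues formula."""
    theta = math.sqrt(tux * tux + tuy * tuy + tuz * tuz)
    if theta < 1e-12:
        R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    else:
        ux, uy, uz = tux / theta, tuy / theta, tuz / theta
        c, s = math.cos(theta), math.sin(theta)
        v = 1.0 - c
        R = [[ux * ux * v + c,      ux * uy * v - uz * s, ux * uz * v + uy * s],
             [ux * uy * v + uz * s, uy * uy * v + c,      uy * uz * v - ux * s],
             [ux * uz * v - uy * s, uy * uz * v + ux * s, uz * uz * v + c]]
    return [R[0] + [tx], R[1] + [ty], R[2] + [tz], [0.0, 0.0, 0.0, 1.0]]

# Example: 90 degree rotation about z, 0.1 m translation along x
T = pose_vector_to_matrix(0.1, 0.0, 0.0, 0.0, 0.0, math.pi / 2)
```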
The source code corresponding to the binary is available in tutorial-hand-eye-calibration.cpp.
You are now ready to follow Tutorial: PBVS with Panda 7-dof robot from Franka Emika.