RealSense2¶
- class RealSense2(self)¶
Bases:
pybind11_object
This class provides a lightweight wrapper over the Intel librealsense2 library (https://github.com/IntelRealSense/librealsense). It allows capturing data from Intel RealSense cameras.
Note
Supported devices for Intel® RealSense™ SDK 2.0:
Intel® RealSense™ Camera D400-Series
Intel® RealSense™ Developer Kit SR300
Intel® RealSense™ Tracking Camera T265 (librealsense2 version > 2.31.0)
The vpRealSense2 class is enabled when the librealsense2 3rd party library is successfully installed.
Moreover, if Point Cloud Library (PCL) 3rd party is installed, we also propose interfaces to retrieve point cloud as pcl::PointCloud<pcl::PointXYZ> or pcl::PointCloud<pcl::PointXYZRGB> data structures.
Warning
Notice that the usage of this class requires compiler and library support for the ISO C++ 2011 standard. This support is enabled by default in ViSP when supported by the compiler. Hereafter we give an example of a CMakeLists.txt file that allows building sample-realsense.cpp, which uses the vpRealSense2 class.
cmake_minimum_required(VERSION 3.10)
project(sample)
find_package(VISP REQUIRED)
include_directories(${VISP_INCLUDE_DIRS})
add_executable(sample-realsense sample-realsense.cpp)
target_link_libraries(sample-realsense ${VISP_LIBRARIES})
To acquire images from the RealSense color camera and convert them into grey level images, a good starting point is the following code, which corresponds to the content of sample-realsense.cpp:
#include <visp3/gui/vpDisplayGDI.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/sensor/vpRealSense2.h>

#ifdef ENABLE_VISP_NAMESPACE
using namespace VISP_NAMESPACE_NAME;
#endif

int main()
{
  vpRealSense2 rs;
  rs.open();

  vpImage<unsigned char> I(rs.getIntrinsics(RS2_STREAM_COLOR).height, rs.getIntrinsics(RS2_STREAM_COLOR).width);

#ifdef VISP_HAVE_X11
  vpDisplayX d(I);
#elif defined(VISP_HAVE_GDI)
  vpDisplayGDI d(I);
#endif

  while (true) {
    rs.acquire(I);
    vpDisplay::display(I);
    vpDisplay::flush(I);
    if (vpDisplay::getClick(I, false))
      break;
  }
  return 0;
}
If you want to acquire color images, in the previous sample replace:
vpImage<unsigned char> I(rs.getIntrinsics(RS2_STREAM_COLOR).height, rs.getIntrinsics(RS2_STREAM_COLOR).width);
by
vpImage<vpRGBa> I(rs.getIntrinsics(RS2_STREAM_COLOR).height, rs.getIntrinsics(RS2_STREAM_COLOR).width);
If you are interested in the point cloud and if ViSP is built with PCL support, you can start from the following example, where the PCL library is used to visualize the point cloud:
#include <pcl/visualization/cloud_viewer.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <visp3/sensor/vpRealSense2.h>

#ifdef ENABLE_VISP_NAMESPACE
using namespace VISP_NAMESPACE_NAME;
#endif

int main()
{
  vpRealSense2 rs;
  rs.open();

  pcl::PointCloud<pcl::PointXYZRGB>::Ptr pointcloud(new pcl::PointCloud<pcl::PointXYZRGB>);
  rs.acquire(nullptr, nullptr, nullptr, pointcloud);

  pcl::visualization::PCLVisualizer::Ptr viewer(new pcl::visualization::PCLVisualizer("3D Viewer"));
  pcl::visualization::PointCloudColorHandlerRGBField<pcl::PointXYZRGB> rgb(pointcloud);
  viewer->setBackgroundColor(0, 0, 0);
  viewer->initCameraParameters();
  viewer->setCameraPosition(0, 0, -0.5, 0, -1, 0);

  while (true) {
    rs.acquire(nullptr, nullptr, nullptr, pointcloud);

    static bool update = false;
    if (!update) {
      viewer->addPointCloud<pcl::PointXYZRGB>(pointcloud, rgb, "sample cloud");
      viewer->setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 1, "sample cloud");
      update = true;
    }
    else {
      viewer->updatePointCloud<pcl::PointXYZRGB>(pointcloud, rgb, "sample cloud");
    }

    viewer->spinOnce(30);
  }
  return 0;
}
If you want to change the default stream parameters, refer to the librealsense2 rs2::config documentation. The following code captures the color stream at 1920x1080 and 30 Hz:
#include <visp3/gui/vpDisplayGDI.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/sensor/vpRealSense2.h>

#ifdef ENABLE_VISP_NAMESPACE
using namespace VISP_NAMESPACE_NAME;
#endif

int main()
{
  vpRealSense2 rs;
  rs2::config config;
  config.enable_stream(RS2_STREAM_COLOR, 1920, 1080, RS2_FORMAT_RGBA8, 30);
  config.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);
  config.enable_stream(RS2_STREAM_INFRARED, 640, 480, RS2_FORMAT_Y8, 30);
  rs.open(config);

  vpImage<vpRGBa> Ic(rs.getIntrinsics(RS2_STREAM_COLOR).height, rs.getIntrinsics(RS2_STREAM_COLOR).width);
  vpImage<unsigned char> Ii(rs.getIntrinsics(RS2_STREAM_INFRARED).height, rs.getIntrinsics(RS2_STREAM_INFRARED).width);

#ifdef VISP_HAVE_X11
  vpDisplayX dc(Ic, 0, 0, "Color");
  vpDisplayX di(Ii, 100, 100, "Infrared");
#elif defined(VISP_HAVE_GDI)
  vpDisplayGDI dc(Ic, 0, 0, "Color");
  vpDisplayGDI di(Ii, 100, 100, "Infrared");
#endif

  while (true) {
    rs.acquire((unsigned char *)Ic.bitmap, nullptr, nullptr, Ii.bitmap);
    vpDisplay::display(Ic);
    vpDisplay::display(Ii);
    vpDisplay::flush(Ic);
    vpDisplay::flush(Ii);
    if (vpDisplay::getClick(Ic, false) || vpDisplay::getClick(Ii, false))
      break;
  }
  return 0;
}
This other example shows how to get the depth stream aligned on the color stream:
#include <visp3/core/vpImageConvert.h>
#include <visp3/gui/vpDisplayGDI.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/sensor/vpRealSense2.h>

#ifdef ENABLE_VISP_NAMESPACE
using namespace VISP_NAMESPACE_NAME;
#endif

int main()
{
  vpRealSense2 rs;
  rs2::config config;
  config.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_RGBA8, 30);
  config.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);
  config.enable_stream(RS2_STREAM_INFRARED, 640, 480, RS2_FORMAT_Y8, 30);
  rs.open(config);

  vpImage<vpRGBa> Ic(rs.getIntrinsics(RS2_STREAM_COLOR).height, rs.getIntrinsics(RS2_STREAM_COLOR).width);
  vpImage<uint16_t> Id_raw(rs.getIntrinsics(RS2_STREAM_DEPTH).height, rs.getIntrinsics(RS2_STREAM_DEPTH).width);
  vpImage<vpRGBa> Id(rs.getIntrinsics(RS2_STREAM_DEPTH).height, rs.getIntrinsics(RS2_STREAM_DEPTH).width);

#ifdef VISP_HAVE_X11
  vpDisplayX dc(Ic, 0, 0, "Color");
  vpDisplayX dd(Id, 100, 100, "Depth aligned to color");
#elif defined(VISP_HAVE_GDI)
  vpDisplayGDI dc(Ic, 0, 0, "Color");
  vpDisplayGDI dd(Id, 100, 100, "Depth aligned to color");
#endif

  rs2::align align_to(RS2_STREAM_COLOR);

  while (true) {
    rs.acquire((unsigned char *)Ic.bitmap, (unsigned char *)Id_raw.bitmap, nullptr, nullptr, &align_to);
    vpImageConvert::createDepthHistogram(Id_raw, Id);
    vpDisplay::display(Ic);
    vpDisplay::display(Id);
    vpDisplay::flush(Ic);
    vpDisplay::flush(Id);
    if (vpDisplay::getClick(Ic, false) || vpDisplay::getClick(Id, false))
      break;
  }
  return 0;
}
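To build an intuition for what vpImageConvert::createDepthHistogram does in the loop above, here is a simplified pure-Python stand-in (not ViSP's actual implementation): raw uint16 depth values are mapped to display intensities through a cumulative histogram, so the depth range actually present in the frame spreads over the full grey range.

```python
# Simplified sketch of a cumulative-histogram depth-to-display mapping.
# A raw value of 0 means "no depth" and stays black.

def depth_to_display(raw, levels=65536):
    """Map raw uint16 depth samples to 0..255 display intensities."""
    hist = [0] * levels
    for v in raw:
        hist[v] += 1
    # cumulative histogram over non-zero (valid) depth values
    cum = [0] * levels
    total = 0
    for i in range(1, levels):
        total += hist[i]
        cum[i] = total
    if total == 0:
        return [0] * len(raw)
    # each valid depth gets an intensity proportional to its cumulative rank
    return [0 if v == 0 else round(255 * cum[v] / total) for v in raw]

print(depth_to_display([0, 100, 100, 200, 300]))
```

The cumulative mapping ensures the farthest valid depth always maps to 255, regardless of how narrow the measured depth range is.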
References to rs2::pipeline_profile and rs2::pipeline can be retrieved as follows (rs.open() must be called beforehand):
rs2::pipeline_profile &profile = rs.getPipelineProfile();
rs2::pipeline &pipeline = rs.getPipeline();
Information about the sensor can be printed with:
#include <visp3/sensor/vpRealSense2.h>

#ifdef ENABLE_VISP_NAMESPACE
using namespace VISP_NAMESPACE_NAME;
#endif

int main()
{
  vpRealSense2 rs;
  rs.open();
  std::cout << "RealSense sensor characteristics: \n" << rs << std::endl;
  return 0;
}
It is also possible to use several RealSense sensors at the same time. In that case, you need to create a vpRealSense2 object for each device and use vpRealSense2::enable_device(const std::string &serial_number) to select the device explicitly by its serial number. An example is provided in tutorial-grabber-multiple-realsense.cpp.
Note
Additional information can be found in the librealsense wiki.
Default constructor.
Methods
Default constructor.
librealsense documentation: Stop the pipeline streaming.
Return the camera parameters corresponding to a specific stream.
Get depth scale value used to convert all the uint16_t values contained in a depth frame into a distance in meter.
Get intrinsic parameters corresponding to the stream.
Get the value used when the depth at pixel (u, v) in the depth map is invalid for the point cloud.
Get the maximum Z value (used to discard bad reconstructed depth for pointcloud).
Get a reference to rs2::pipeline .
Get a reference to rs2::pipeline_profile .
Get the product line of the device being used.
Alias for the operator<< operator.
Get the extrinsic transformation from one stream to another.
Overloaded function.
Set the value used when the depth at pixel (u, v) in the depth map is invalid for the point cloud.
Set the maximum Z value (used to discard bad reconstructed depth for pointcloud).
- __init__(self)¶
Default constructor.
- close(self) None ¶
librealsense documentation: Stop the pipeline streaming. The pipeline stops delivering samples to the attached computer vision modules and processing blocks, stops the device streaming, and releases the device resources used by the pipeline. It is the application's responsibility to release any frame reference it owns. The method takes effect only after start() was called, otherwise an exception is raised.
- getCameraParameters(self, stream: rs2_stream, type: visp._visp.core.CameraParameters.CameraParametersProjType, index: int = -1) visp._visp.core.CameraParameters ¶
Return the camera parameters corresponding to a specific stream. This function has to be called after open().
Note
See getIntrinsics()
- Parameters:
- stream: rs2_stream¶
Stream for which camera intrinsic parameters are returned.
- type: visp._visp.core.CameraParameters.CameraParametersProjType¶
Indicates if the model should include distortion parameters or not.
- index: int = -1¶
Index of the camera in a T265 device: 1 for left, 2 for right; otherwise -1 (default).
- getDepthScale(self) float ¶
Get depth scale value used to convert all the uint16_t values contained in a depth frame into a distance in meter.
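As a minimal illustration of how the depth scale is used, the sketch below converts raw uint16 depth samples to meters. The scale of 0.001 is an assumed value typical of D400-series devices; in real code, query it with getDepthScale() after open().

```python
# Sketch: converting raw 16-bit depth values to distances in meters.
# depth_scale = 0.001 is an assumption for illustration; query the
# device with rs.getDepthScale() in practice.

def depth_to_meters(raw_depth, depth_scale):
    """Convert raw uint16 depth samples to distances in meters."""
    return [v * depth_scale for v in raw_depth]

depth_scale = 0.001                 # assumed; device-specific in reality
raw_row = [0, 500, 1250]            # raw uint16 samples from a depth frame
print(depth_to_meters(raw_row, depth_scale))  # 0 means "no depth measured"
```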
- getIntrinsics(self, stream: rs2_stream, index: int = -1) rs2_intrinsics ¶
Get intrinsic parameters corresponding to the stream. This function has to be called after open().
Note
See getCameraParameters()
- getInvalidDepthValue(self) float ¶
Get the value used when the depth at pixel (u, v) in the depth map is invalid for the point cloud. For instance, the Point Cloud Library (PCL) uses NaN values for points where the depth is invalid.
- getMaxZ(self) float ¶
Get the maximum Z value (used to discard bad reconstructed depth for pointcloud).
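To illustrate how the invalid-depth value and the max-Z threshold interact when a point cloud is built, here is a hedged pure-Python sketch; the 8 m threshold is a made-up value (see setMaxZ()/getMaxZ() for the actual setting), and NaN is used as the invalid marker, as PCL does.

```python
import math

# Sketch: pixels with no depth, or with Z beyond the maximum, receive the
# "invalid depth" value instead of a reconstructed coordinate. The
# threshold below is an assumption for illustration.

def filter_point(z_meters, max_z, invalid_value=math.nan):
    """Return z if it is a usable depth, else the invalid marker."""
    if z_meters <= 0.0 or z_meters > max_z:
        return invalid_value
    return z_meters

max_z = 8.0   # assumed maximum Z in meters
samples = [0.0, 0.5, 4.2, 12.0]
print([filter_point(z, max_z) for z in samples])
```

Discarding far reconstructions this way avoids keeping points whose depth estimate is dominated by sensor noise.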
- getPipeline(self) rs2::pipeline ¶
Get a reference to rs2::pipeline.
- getPipelineProfile(self) rs2::pipeline_profile ¶
Get a reference to rs2::pipeline_profile.
- getProductLine(self) str ¶
Get the product line of the device being used. This function needs librealsense > 2.31.0; otherwise it returns "unknown".
- getSensorInfo(self) str ¶
Alias for the operator<< operator. Return sensor information such as:
device info
supported options
stream profiles
intrinsics / extrinsics
[…]
- getTransformation(self: visp._visp.sensor.RealSense2, from: rs2_stream, to: rs2_stream, from_index: int = -1) visp._visp.core.HomogeneousMatrix ¶
Get the extrinsic transformation from one stream to another. This function has to be called after open().
- Parameters:
- from
Source stream from which the extrinsic transformation is computed.
- to
Destination stream toward which the extrinsic transformation is computed.
- from_index
Index of the stream used to compute the transformation: 1 for left to right, 2 for right to left; otherwise -1 (default).
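The HomogeneousMatrix returned here is a 4x4 homogeneous transformation, so two extrinsics can be chained by matrix multiplication. Below is a minimal pure-Python sketch of that composition; the matrices are illustrative made-up values, not real device extrinsics.

```python
# Sketch: composing two 4x4 homogeneous transformations, as one would
# chain extrinsics between streams. Values below are fabricated examples.

def matmul4(A, B):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# T_color_depth: pure translation of 15 mm along x (made-up baseline)
T_color_depth = [[1, 0, 0, 0.015],
                 [0, 1, 0, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 1]]
# T_depth_ir: identity, i.e. depth and infrared share the same frame here
T_depth_ir = [[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]]

# Chaining: color <- depth <- infrared gives color <- infrared
T_color_ir = matmul4(T_color_depth, T_depth_ir)
print(T_color_ir[0][3])   # composed x-translation
```

In ViSP code the same composition is written with the * operator of vpHomogeneousMatrix.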
- open(*args, **kwargs)¶
Overloaded function.
open(self: visp._visp.sensor.RealSense2, cfg: rs2::config) -> bool
Open access to the RealSense device and start streaming.
open(self: visp._visp.sensor.RealSense2, cfg: rs2::config, callback: Callable[[rs2::frame], None]) -> bool
Open access to the RealSense device and start streaming.
- Parameters:
- cfg
A rs2::config with requested filters on the pipeline configuration. By default no filters are applied.
- callback
Stream callback, can be any callable object accepting rs2::frame. The callback is invoked immediately once a frame is ready.