Rendering a 3D scene with Panda3D

Introduction

In the context of providing a render-based tracker, this tutorial introduces a new, easy-to-use renderer based on Panda3D.

This renderer can output:

  • A color image, with support for textures and lighting
  • A depth image
  • Normal maps
    • In world space
    • In camera space

It only supports camera models with no distortion.

It is also possible to compute camera clipping values, depending on the pose of an object in the camera frame. This ensures that the depth buffer is as accurate as possible when considering this object.

Below is a set of renders for a textured cube object.

Multi-output rendering is performed via the vpPanda3DRendererSet class, which duplicates the scene across its subrenderers and synchronizes changes to objects and to the camera. Each subrenderer implements a specific type of render: geometric (vpPanda3DGeometryRenderer), color-based (vpPanda3DRGBRenderer), etc. They all inherit from vpPanda3DBaseRenderer, which implements the basic functions of a Panda3D renderer.
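To make this composition pattern concrete, here is a minimal sketch that uses only classes and methods detailed later in this tutorial (renderParams stands for a vpPanda3DRenderParameters object, described below). A single call on the set, such as setting the camera pose, is propagated to every registered subrenderer:

vpPanda3DRendererSet renderer(renderParams);
renderer.addSubRenderer(std::make_shared<vpPanda3DRGBRenderer>());
renderer.addSubRenderer(std::make_shared<vpPanda3DGeometryRenderer>(vpPanda3DGeometryRenderer::vpRenderType::CAMERA_NORMALS));
renderer.initFramework();
// Propagated by the set to the RGB and geometry subrenderers alike
renderer.setCameraPose(vpHomogeneousMatrix(0.0, 0.0, -5.0, 0.0, 0.0, 0.0));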

Panda3D installation

Installation on Ubuntu

  • Installers for Ubuntu are available on the download page.
  • Hereafter you will find the instructions to build and install Panda3D from source on Ubuntu 22.04:
    $ sudo apt install python3-pip
    $ mkdir -p $VISP_WS/3rdparty/panda3d
    $ cd $VISP_WS/3rdparty/panda3d
    $ git clone https://github.com/panda3d/panda3d
    $ cd panda3d
    $ python3 makepanda/makepanda.py --everything --installer --no-egl --no-gles --no-gles2 --no-opencv --threads $(nproc)
    At this point you can either:
    1. install the produced Debian package (recommended) with
      $ sudo dpkg -i panda3d1.11_1.11.0_amd64.deb
    2. use the Panda3D libraries located in the built folder without installing the Debian package panda3d1.11_1.11.0_amd64.deb, but in that case you need to set the LD_LIBRARY_PATH environment variable:
      $ export LD_LIBRARY_PATH=$VISP_WS/3rdparty/panda3d/panda3d/built/lib:$LD_LIBRARY_PATH
      Without setting LD_LIBRARY_PATH you may experience the following error when running a binary that uses Panda3D capabilities:
      $ ./tutorial-panda3d-renderer
      ./tutorial-panda3d-renderer: error while loading shared libraries: libp3dtoolconfig.so.1.11: cannot open shared object file: No such file or directory
  • To build ViSP with Panda3D support when the Debian package panda3d1.11_1.11.0_amd64.deb is installed as described in option (1), there is nothing specific to do; just run cmake as usual (a quick sanity check of the resulting build is sketched at the end of this section):
    $ cd $VISP_WS/visp-build
    $ cmake ../visp
    $ make -j$(nproc)
  • It is also possible to build ViSP with Panda3D support without installing the Debian package panda3d1.11_1.11.0_amd64.deb, as described in option (2):
    • By setting Panda3D_DIR cmake var to the Panda3D cloned folder
      $ cd $VISP_WS/visp-build
      $ cmake ../visp -DPanda3D_DIR=$VISP_WS/3rdparty/panda3d/panda3d
      $ make -j$(nproc)
    • By setting Panda3D_DIR environment variable
      $ export Panda3D_DIR=$VISP_WS/3rdparty/panda3d/panda3d
      $ cd $VISP_WS/visp-build
      $ cmake ../visp
      $ make -j$(nproc)
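  • Whichever option you chose, you can quickly check that the resulting ViSP build has Panda3D support. The minimal program below is only an illustrative sketch; it relies solely on the VISP_HAVE_PANDA3D macro defined in visp3/core/vpConfig.h:
    #include <iostream>
    #include <visp3/core/vpConfig.h>

    int main()
    {
    #if defined(VISP_HAVE_PANDA3D)
      std::cout << "This ViSP build has Panda3D support." << std::endl;
      return 0;
    #else
      std::cout << "Panda3D support is missing; check the cmake configuration output." << std::endl;
      return 1;
    #endif
    }
    The cmake configuration output should also list Panda3D among the detected third-party libraries.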

Installation on macOS

  • Installers for macOS are available on the download page.
    Note
    For the latest Panda3D 1.10.14 SDK there is an installer for macOS 10.9+ that is only compatible with the x86_64 architecture. If you are using a Mac with an M1 or M2 chip, there is no Panda3D SDK available yet for the arm64 architecture. The solution is to build Panda3D from source.
  • Hereafter you will find the instructions to build Panda3D from source on macOS.
    • On macOS, you will need to download a set of precompiled third-party packages in order to compile Panda3D. Navigate to the Panda3D download page, select the latest SDK (in our case SDK 1.10.14), and under the </> Source Code section, download Thirdparty tools for macOS (in our case panda3d-1.10.14-tools-mac.tar.gz).
    • Extract third-party tools for macOS from downloaded archive
      $ cd ~/Downloads
      $ tar xvzf panda3d-1.10.14-tools-mac.tar.gz
    • Once done, clone Panda3D:
      $ mkdir -p $VISP_WS/3rdparty/panda3d
      $ cd $VISP_WS/3rdparty/panda3d
      $ git clone https://github.com/panda3d/panda3d
      $ cd panda3d
    • Move the downloaded third-party tools into the Panda3D source code folder
      $ mv ~/Downloads/panda3d-1.10.14/thirdparty .
    • Build Panda3D from source
      $ python3 makepanda/makepanda.py --everything --installer --no-egl --no-gles --no-gles2 --no-opencv --no-python --threads $(sysctl -n hw.logicalcpu)
  • At this point you can either:
    1. install the produced Panda3D-1.11.0-py3.9.dmg file (recommended) just by double-clicking on it. In the installer window, don't forget to enable the C++ Header Files check box before pressing the installation button. After that you have to set the DYLD_LIBRARY_PATH environment variable:
      $ export DYLD_LIBRARY_PATH=/Library/Developer/Panda3D/lib:$DYLD_LIBRARY_PATH
    2. or use the Panda3D libraries located in the built folder without installing the .dmg file, but in that case you need to set the DYLD_LIBRARY_PATH environment variable:
      $ export DYLD_LIBRARY_PATH=$VISP_WS/3rdparty/panda3d/panda3d/built/lib:$DYLD_LIBRARY_PATH
      Without setting DYLD_LIBRARY_PATH you may experience the following error when running a binary that uses Panda3D capabilities:
      $ ./tutorial-panda3d-renderer
      dyld[257]: Library not loaded: @loader_path/../lib/libpanda.1.11.dylib
  • Now to build ViSP with Panda3D support when the .dmg file Panda3D-1.11.0-py3.9.dmg is installed, you can just run cmake as usual. Note that PCL is not compatible with Panda3D, which is why we disable PCL usage here (see Segfault: :framework(error): Unable to create window).
    $ cd $VISP_WS/visp-build
    $ cmake ../visp -DUSE_PCL=OFF
    $ make -j$(sysctl -n hw.logicalcpu)
  • It is also possible to build ViSP with Panda3D support without installing the .dmg file:
    • By setting Panda3D_DIR cmake var to the Panda3D cloned folder
      $ cd $VISP_WS/visp-build
      $ cmake ../visp -DUSE_PCL=OFF -DPanda3D_DIR=$VISP_WS/3rdparty/panda3d/panda3d
      $ make -j$(sysctl -n hw.logicalcpu)
    • Or by setting Panda3D_DIR environment variable
      $ export Panda3D_DIR=$VISP_WS/3rdparty/panda3d/panda3d
      $ cd $VISP_WS/visp-build
      $ cmake ../visp -DUSE_PCL=OFF
      $ make -j$(sysctl -n hw.logicalcpu)

Installation on Windows

  • Installers for Windows are available on the download page.

Using Panda3D for rendering

An example that shows how to exploit Panda3D in ViSP to render a color image with support for textures and lighting, a depth image, normals in object space and in camera space is given in tutorial-panda3d-renderer.cpp.

To start rendering, we first instantiate a vpPanda3DRendererSet. This object allows rendering multiple modalities (color, depth, etc.) in a single pass. To add different rendering modalities, we use subclasses that are registered to the renderer set. Internally, each subrenderer has its own scene: the renderer set synchronizes everything when the state changes (i.e., an object is added, an object is moved, or the camera parameters change).

A Panda3D renderer should be instantiated with a vpPanda3DRenderParameters object. This object defines:

  • The camera intrinsics (see vpCameraParameters). As of now, only parameters for a distortion-free model are supported.
  • The image resolution
  • The near and far clipping plane values. Object parts that are too close (less than the near clipping value) or too far (greater than the far clipping value) will not be rendered.

The creation of the renderer set can be found below

double factor = 1.0;
vpPanda3DRenderParameters renderParams(vpCameraParameters(600 * factor, 600 * factor, 320 * factor, 240 * factor), int(480 * factor), int(640 * factor), 0.01, 10.0);
unsigned h = renderParams.getImageHeight(), w = renderParams.getImageWidth();
vpPanda3DRendererSet renderer(renderParams);
renderer.setRenderParameters(renderParams);
renderer.setVerticalSyncEnabled(false);
renderer.setAbortOnPandaError(true);
if (debug) {
  renderer.enableDebugLog();
}

To actually render color, normals, etc., we need to define subrenderers:

std::shared_ptr<vpPanda3DGeometryRenderer> geometryRenderer = std::make_shared<vpPanda3DGeometryRenderer>(vpPanda3DGeometryRenderer::vpRenderType::OBJECT_NORMALS);
std::shared_ptr<vpPanda3DGeometryRenderer> cameraRenderer = std::make_shared<vpPanda3DGeometryRenderer>(vpPanda3DGeometryRenderer::vpRenderType::CAMERA_NORMALS);
std::shared_ptr<vpPanda3DRGBRenderer> rgbRenderer = std::make_shared<vpPanda3DRGBRenderer>();
std::shared_ptr<vpPanda3DRGBRenderer> rgbDiffuseRenderer = std::make_shared<vpPanda3DRGBRenderer>(false);
std::shared_ptr<vpPanda3DLuminanceFilter> grayscaleFilter = std::make_shared<vpPanda3DLuminanceFilter>("toGrayscale", rgbRenderer, false);
std::shared_ptr<vpPanda3DCanny> cannyFilter = std::make_shared<vpPanda3DCanny>("canny", grayscaleFilter, true, 10.f);

The different subrenderers are:

  • vpPanda3DGeometryRenderer instances allow retrieving 3D information about the object: the surface normals in the object or camera frame, as well as the depth information.
  • vpPanda3DRGBRenderer objects perform the traditional color rendering. Lighting interaction can be disabled, as is the case for the second renderer (diffuse only).
  • Post-processing renderers, such as vpPanda3DLuminanceFilter or vpPanda3DCanny, operate on the output image of another renderer. They can be used to further process the output data and can be chained together. In this case, the chain vpPanda3DLuminanceFilter -> vpPanda3DGaussianBlur -> vpPanda3DCanny performs a Canny edge detection (without hysteresis) on a blurred, grayscale image. Note that the snippet above feeds the Canny filter directly from the luminance filter; a sketch of a chain with an explicit blur stage is given below.
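The following sketch is only illustrative: the vpPanda3DGaussianBlur constructor arguments are assumed to mirror those of the other filters shown above (a name, the input renderer, and whether its output should be kept), and the filter name "blur" is an arbitrary choice.

std::shared_ptr<vpPanda3DLuminanceFilter> grayscaleFilter = std::make_shared<vpPanda3DLuminanceFilter>("toGrayscale", rgbRenderer, false);
std::shared_ptr<vpPanda3DGaussianBlur> blurFilter = std::make_shared<vpPanda3DGaussianBlur>("blur", grayscaleFilter, false); // arguments assumed: name, input renderer, keep output
std::shared_ptr<vpPanda3DCanny> cannyFilter = std::make_shared<vpPanda3DCanny>("canny", blurFilter, true, 10.f);

Each stage of such a chain must then be registered with addSubRenderer, in the order of the chain, as is done for the grayscale and Canny filters in the next snippet.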

For these subrenderers to actually be useful, they should be added to the main renderer:

renderer.addSubRenderer(geometryRenderer);
renderer.addSubRenderer(cameraRenderer);
renderer.addSubRenderer(rgbRenderer);
if (showLightContrib) {
  renderer.addSubRenderer(rgbDiffuseRenderer);
}
if (showCanny) {
  renderer.addSubRenderer(grayscaleFilter);
  renderer.addSubRenderer(cannyFilter);
}
std::cout << "Initializing Panda3D rendering framework" << std::endl;
renderer.initFramework();
Warning
Once they have been added, a call to vpPanda3DBaseRenderer::initFramework() should be performed. Otherwise, no rendering will be performed and objects will not be loaded.

Here you will find the code used to configure the scene:

NodePath object = renderer.loadObject(objectName, modelPath);
renderer.addNodeToScene(object);
vpPanda3DAmbientLight alight("Ambient", vpRGBf(0.2f));
renderer.addLight(alight);
vpPanda3DPointLight plight("Point", vpRGBf(1.0f), vpColVector({ 0.3, -0.4, -0.2 }), vpColVector({ 0.0, 0.0, 1.0 }));
renderer.addLight(plight);
vpPanda3DDirectionalLight dlight("Directional", vpRGBf(2.0f), vpColVector({ 1.0, 1.0, 0.0 }));
renderer.addLight(dlight);
if (!backgroundPath.empty()) {
  vpImage<vpRGBa> background;
  vpImageIo::read(background, backgroundPath);
  rgbRenderer->setBackgroundImage(background);
}
rgbRenderer->printStructure();
renderer.setCameraPose(vpHomogeneousMatrix(0.0, 0.0, -5.0, 0.0, 0.0, 0.0));

We start by loading the object to render with vpPanda3DBaseRenderer::loadObject, followed by vpPanda3DBaseRenderer::addNodeToScene. For the Color-based renderer, we add lights to shade our object. Different light types are supported, reusing the available Panda3D features.

Once the scene is set up, we can start rendering. This will be performed in a loop.

The first step shown is the following:

float nearV = 0, farV = 0;
geometryRenderer->computeNearAndFarPlanesFromNode(objectName, nearV, farV, true);
renderParams.setClippingDistance(nearV, farV);
renderer.setRenderParameters(renderParams);

Each frame, we compute the values of the clipping planes, and update the rendering properties. This will ensure that the target object is visible. Depending on your use case, this may not be necessary.

Once this is done, we can call upon Panda3D to render the object with

renderer.renderFrame();
Note
Under the hood, all subrenderers rely on the same Panda3D "framework": calling renderFrame on one will call it for the others.

To use the renders, we must convert them to ViSP images. To do so, each subrenderer defines its own getRender method, which performs the conversion from a Panda3D texture to the relevant ViSP data type.

For each render type, we start by getting the correct renderer via vpPanda3DRendererSet::getRenderer, then call its getRender method.

renderer.getRenderer<vpPanda3DGeometryRenderer>(geometryRenderer->getName())->getRender(normalsImage, depthImage);
renderer.getRenderer<vpPanda3DGeometryRenderer>(cameraRenderer->getName())->getRender(cameraNormalsImage);
renderer.getRenderer<vpPanda3DRGBRenderer>(rgbRenderer->getName())->getRender(colorImage);
if (showLightContrib) {
  renderer.getRenderer<vpPanda3DRGBRenderer>(rgbDiffuseRenderer->getName())->getRender(colorDiffuseOnly);
}
if (showCanny) {
  renderer.getRenderer<vpPanda3DCanny>()->getRender(cannyRawData);
}

Now that we have retrieved the images, we can display them. To do so, we leverage utility functions defined beforehand (see the full code for more information). This may be required in cases where the data cannot be directly displayed. For instance, normals are encoded as three 32-bit floats with components in [-1, 1], but displays require colors to be represented as 8-bit unsigned characters. The same goes for the depth render, which is mapped back to the 0-255 range, although its values are unbounded.

displayNormals(normalsImage, normalDisplayImage);
displayNormals(cameraNormalsImage, cameraNormalDisplayImage);
displayDepth(depthImage, depthDisplayImage, nearV, farV);
if (showLightContrib) {
  displayLightDifference(colorImage, colorDiffuseOnly, lightDifference);
}
if (showCanny) {
  displayCanny(cannyRawData, cannyImage);
}
vpDisplay::display(colorImage);
vpDisplay::displayText(colorImage, 15, 15, "Click to quit", vpColor::red);

Finally, we use the snippet below to move the object, updating the scene. To obtain a constant velocity, the displacement is computed by integrating the velocity over the time that has elapsed between the frame's start and end.

const double delta = (afterAll - beforeRender) / 1000.0;
const vpHomogeneousMatrix wTo = renderer.getNodePose(objectName);
const vpHomogeneousMatrix oToo = vpExponentialMap::direct(vpColVector({ 0.0, 0.0, 0.0, 0.0, vpMath::rad(20.0), 0.0 }), delta);
renderer.setNodePose(objectName, wTo * oToo);

Tutorial full code

The full code of tutorial-panda3d-renderer.cpp is given below.

#include <iostream>
#include <visp3/core/vpConfig.h>
#if defined(VISP_HAVE_PANDA3D) && defined(VISP_HAVE_DISPLAY) && defined(VISP_HAVE_MODULE_IO)
#include <visp3/core/vpException.h>
#include <visp3/core/vpExponentialMap.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/gui/vpDisplayGDI.h>
#include <visp3/gui/vpDisplayD3D.h>
#include <visp3/gui/vpDisplayOpenCV.h>
#include <visp3/gui/vpDisplayGTK.h>
#include <visp3/io/vpParseArgv.h>
#include <visp3/io/vpImageIo.h>
#include <visp3/ar/vpPanda3DRGBRenderer.h>
#include <visp3/ar/vpPanda3DGeometryRenderer.h>
#include <visp3/ar/vpPanda3DRendererSet.h>
#include <visp3/ar/vpPanda3DCommonFilters.h>
#ifdef ENABLE_VISP_NAMESPACE
using namespace VISP_NAMESPACE_NAME;
#endif
void displayNormals(const vpImage<vpRGBf> &normalsImage,
vpImage<vpRGBa> &normalDisplayImage)
{
#if defined(_OPENMP)
#pragma omp parallel for
#endif
for (unsigned int i = 0; i < normalsImage.getSize(); ++i) {
normalDisplayImage.bitmap[i].R = static_cast<unsigned char>((normalsImage.bitmap[i].R + 1.0) * 127.5f);
normalDisplayImage.bitmap[i].G = static_cast<unsigned char>((normalsImage.bitmap[i].G + 1.0) * 127.5f);
normalDisplayImage.bitmap[i].B = static_cast<unsigned char>((normalsImage.bitmap[i].B + 1.0) * 127.5f);
}
vpDisplay::display(normalDisplayImage);
vpDisplay::flush(normalDisplayImage);
}
void displayDepth(const vpImage<float> &depthImage,
vpImage<unsigned char> &depthDisplayImage, float nearV, float farV)
{
#if defined(_OPENMP)
#pragma omp parallel for
#endif
for (unsigned int i = 0; i < depthImage.getSize(); ++i) {
float val = std::max(0.f, (depthImage.bitmap[i] - nearV) / (farV - nearV));
depthDisplayImage.bitmap[i] = static_cast<unsigned char>(val * 255.f);
}
vpDisplay::display(depthDisplayImage);
vpDisplay::flush(depthDisplayImage);
}
void displayLightDifference(const vpImage<vpRGBa> &colorImage, const vpImage<vpRGBa> &colorDiffuseOnly, vpImage<unsigned char> &lightDifference)
{
#if defined(_OPENMP)
#pragma omp parallel for
#endif
for (unsigned int i = 0; i < colorImage.getSize(); ++i) {
float I1 = 0.299 * colorImage.bitmap[i].R + 0.587 * colorImage.bitmap[i].G + 0.114 * colorImage.bitmap[i].B;
float I2 = 0.299 * colorDiffuseOnly.bitmap[i].R + 0.587 * colorDiffuseOnly.bitmap[i].G + 0.114 * colorDiffuseOnly.bitmap[i].B;
lightDifference.bitmap[i] = static_cast<unsigned char>(round(abs(I1 - I2)));
}
vpDisplay::display(lightDifference);
vpDisplay::flush(lightDifference);
}
void displayCanny(const vpImage<vpRGBf> &cannyRawData,
vpImage<unsigned char> &canny)
{
#if defined(_OPENMP)
#pragma omp parallel for
#endif
for (unsigned int i = 0; i < cannyRawData.getSize(); ++i) {
const vpRGBf &px = cannyRawData.bitmap[i];
canny.bitmap[i] = 255 * (px.R * px.R + px.G * px.G > 0);
//canny.bitmap[i] = static_cast<unsigned char>(127.5f + 127.5f * atan(px.B));
}
vpDisplay::display(canny);
for (unsigned int i = 0; i < canny.getHeight(); i += 8) {
for (unsigned int j = 0; j < canny.getWidth(); j += 8) {
bool valid = (pow(cannyRawData[i][j].R, 2.f) + pow(cannyRawData[i][j].G, 2.f)) > 0;
if (!valid) continue;
float angle = cannyRawData[i][j].B;
unsigned x = j + 10 * cos(angle);
unsigned y = i + 10 * sin(angle);
// Draw the edge orientation as an arrow from (i, j) to (y, x); styling arguments left to their defaults
vpDisplay::displayArrow(canny, i, j, y, x, vpColor::green);
}
}
vpDisplay::flush(canny);
}
int main(int argc, const char **argv)
{
bool stepByStep = false;
bool debug = false;
bool showLightContrib = false;
bool showCanny = false;
char *modelPathCstr = nullptr;
char *backgroundPathCstr = nullptr;
vpParseArgv::vpArgvInfo argTable[] =
{
{"-model", vpParseArgv::ARGV_STRING, (char *) nullptr, (char *)&modelPathCstr,
"Path to the model to load."},
{"-background", vpParseArgv::ARGV_STRING, (char *) nullptr, (char *)&backgroundPathCstr,
"Path to the background image to load for the rgb renderer."},
{"-step", vpParseArgv::ARGV_CONSTANT_BOOL, (char *) nullptr, (char *)&stepByStep,
"Show frames step by step."},
{"-specular", vpParseArgv::ARGV_CONSTANT_BOOL, (char *) nullptr, (char *)&showLightContrib,
"Show frames step by step."},
{"-canny", vpParseArgv::ARGV_CONSTANT_BOOL, (char *) nullptr, (char *)&showCanny,
"Show frames step by step."},
{"-debug", vpParseArgv::ARGV_CONSTANT_BOOL, (char *) nullptr, (char *)&debug,
"Show Opengl/Panda3D debug message."},
{"-h", vpParseArgv::ARGV_HELP, (char *) nullptr, (char *) nullptr,
"Print the help."},
{(char *) nullptr, vpParseArgv::ARGV_END, (char *) nullptr, (char *) nullptr, (char *) nullptr} };
// Read the command line options
if (vpParseArgv::parse(&argc, argv, argTable,
vpParseArgv::ARGV_NO_LEFTOVERS | vpParseArgv::ARGV_NO_DEFAULTS)) {
return (false);
}
if (PStatClient::is_connected()) {
PStatClient::disconnect();
}
std::string host = ""; // Empty = default config var value
int port = -1; // -1 = default config var value
if (!PStatClient::connect(host, port)) {
std::cout << "Could not connect to PStat server." << std::endl;
}
std::string modelPath;
if (modelPathCstr) {
modelPath = modelPathCstr;
}
else {
modelPath = "data/suzanne.bam";
}
std::string backgroundPath;
if (backgroundPathCstr) {
backgroundPath = backgroundPathCstr;
}
const std::string objectName = "object";
double factor = 1.0;
vpPanda3DRenderParameters renderParams(vpCameraParameters(600 * factor, 600 * factor, 320 * factor, 240 * factor), int(480 * factor), int(640 * factor), 0.01, 10.0);
unsigned h = renderParams.getImageHeight(), w = renderParams.getImageWidth();
vpPanda3DRendererSet renderer(renderParams);
renderer.setRenderParameters(renderParams);
renderer.setVerticalSyncEnabled(false);
renderer.setAbortOnPandaError(true);
if (debug) {
renderer.enableDebugLog();
}
std::shared_ptr<vpPanda3DGeometryRenderer> geometryRenderer = std::make_shared<vpPanda3DGeometryRenderer>(vpPanda3DGeometryRenderer::vpRenderType::OBJECT_NORMALS);
std::shared_ptr<vpPanda3DGeometryRenderer> cameraRenderer = std::make_shared<vpPanda3DGeometryRenderer>(vpPanda3DGeometryRenderer::vpRenderType::CAMERA_NORMALS);
std::shared_ptr<vpPanda3DRGBRenderer> rgbRenderer = std::make_shared<vpPanda3DRGBRenderer>();
std::shared_ptr<vpPanda3DRGBRenderer> rgbDiffuseRenderer = std::make_shared<vpPanda3DRGBRenderer>(false);
std::shared_ptr<vpPanda3DLuminanceFilter> grayscaleFilter = std::make_shared<vpPanda3DLuminanceFilter>("toGrayscale", rgbRenderer, false);
std::shared_ptr<vpPanda3DCanny> cannyFilter = std::make_shared<vpPanda3DCanny>("canny", grayscaleFilter, true, 10.f);
renderer.addSubRenderer(geometryRenderer);
renderer.addSubRenderer(cameraRenderer);
renderer.addSubRenderer(rgbRenderer);
if (showLightContrib) {
renderer.addSubRenderer(rgbDiffuseRenderer);
}
if (showCanny) {
renderer.addSubRenderer(grayscaleFilter);
renderer.addSubRenderer(cannyFilter);
}
std::cout << "Initializing Panda3D rendering framework" << std::endl;
renderer.initFramework();
NodePath object = renderer.loadObject(objectName, modelPath);
renderer.addNodeToScene(object);
vpPanda3DAmbientLight alight("Ambient", vpRGBf(0.2f));
renderer.addLight(alight);
vpPanda3DPointLight plight("Point", vpRGBf(1.0f), vpColVector({ 0.3, -0.4, -0.2 }), vpColVector({ 0.0, 0.0, 1.0 }));
renderer.addLight(plight);
vpPanda3DDirectionalLight dlight("Directional", vpRGBf(2.0f), vpColVector({ 1.0, 1.0, 0.0 }));
renderer.addLight(dlight);
if (!backgroundPath.empty()) {
vpImage<vpRGBa> background;
vpImageIo::read(background, backgroundPath);
rgbRenderer->setBackgroundImage(background);
}
rgbRenderer->printStructure();
renderer.setCameraPose(vpHomogeneousMatrix(0.0, 0.0, -5.0, 0.0, 0.0, 0.0));
std::cout << "Creating display and data images" << std::endl;
vpImage<vpRGBf> normalsImage;
vpImage<vpRGBf> cameraNormalsImage;
vpImage<vpRGBf> cannyRawData;
vpImage<float> depthImage;
vpImage<vpRGBa> colorImage(h, w);
vpImage<vpRGBa> colorDiffuseOnly(h, w);
vpImage<unsigned char> lightDifference(h, w);
vpImage<unsigned char> cannyImage(h, w);
vpImage<vpRGBa> normalDisplayImage(h, w);
vpImage<vpRGBa> cameraNormalDisplayImage(h, w);
vpImage<unsigned char> depthDisplayImage(h, w);
#if defined(VISP_HAVE_GTK)
using DisplayCls = vpDisplayGTK;
#elif defined(VISP_HAVE_X11)
using DisplayCls = vpDisplayX;
#elif defined(HAVE_OPENCV_HIGHGUI)
using DisplayCls = vpDisplayOpenCV;
#elif defined(VISP_HAVE_GDI)
using DisplayCls = vpDisplayGDI;
#elif defined(VISP_HAVE_D3D9)
using DisplayCls = vpDisplayD3D;
#endif
unsigned int padding = 80;
DisplayCls dNormals(normalDisplayImage, 0, 0, "normals in object space");
DisplayCls dNormalsCamera(cameraNormalDisplayImage, 0, h + padding, "normals in camera space");
DisplayCls dDepth(depthDisplayImage, w + padding, 0, "depth");
DisplayCls dColor(colorImage, w + padding, h + padding, "color");
DisplayCls dImageDiff;
if (showLightContrib) {
dImageDiff.init(lightDifference, w * 2 + padding, 0, "Specular/reflectance contribution");
}
DisplayCls dCanny;
if (showCanny) {
dCanny.init(cannyImage, w * 2 + padding, h + padding, "Canny");
}
renderer.renderFrame();
bool end = false;
std::vector<double> renderTime, fetchTime, displayTime;
while (!end) {
float nearV = 0, farV = 0;
geometryRenderer->computeNearAndFarPlanesFromNode(objectName, nearV, farV, true);
renderParams.setClippingDistance(nearV, farV);
renderer.setRenderParameters(renderParams);
const double beforeRender = vpTime::measureTimeMs();
renderer.renderFrame();
const double beforeFetch = vpTime::measureTimeMs();
renderer.getRenderer<vpPanda3DGeometryRenderer>(geometryRenderer->getName())->getRender(normalsImage, depthImage);
renderer.getRenderer<vpPanda3DGeometryRenderer>(cameraRenderer->getName())->getRender(cameraNormalsImage);
renderer.getRenderer<vpPanda3DRGBRenderer>(rgbRenderer->getName())->getRender(colorImage);
if (showLightContrib) {
renderer.getRenderer<vpPanda3DRGBRenderer>(rgbDiffuseRenderer->getName())->getRender(colorDiffuseOnly);
}
if (showCanny) {
renderer.getRenderer<vpPanda3DCanny>()->getRender(cannyRawData);
}
const double beforeConvert = vpTime::measureTimeMs();
displayNormals(normalsImage, normalDisplayImage);
displayNormals(cameraNormalsImage, cameraNormalDisplayImage);
displayDepth(depthImage, depthDisplayImage, nearV, farV);
if (showLightContrib) {
displayLightDifference(colorImage, colorDiffuseOnly, lightDifference);
}
if (showCanny) {
displayCanny(cannyRawData, cannyImage);
}
vpDisplay::display(colorImage);
vpDisplay::displayText(colorImage, 15, 15, "Click to quit", vpColor::red);
if (stepByStep) {
vpDisplay::displayText(colorImage, 50, 15, "Next frame: space", vpColor::red);
}
if (vpDisplay::getClick(colorImage, false)) {
end = true;
}
vpDisplay::flush(colorImage);
const double endDisplay = vpTime::measureTimeMs();
renderTime.push_back(beforeFetch - beforeRender);
fetchTime.push_back(beforeConvert - beforeFetch);
displayTime.push_back(endDisplay - beforeConvert);
std::string s;
if (stepByStep) {
bool next = false;
while (!next) {
vpDisplay::getKeyboardEvent(colorImage, s, true);
if (s == " ") {
next = true;
}
}
}
const double afterAll = vpTime::measureTimeMs();
const double delta = (afterAll - beforeRender) / 1000.0;
const vpHomogeneousMatrix wTo = renderer.getNodePose(objectName);
const vpHomogeneousMatrix oToo = vpExponentialMap::direct(vpColVector({ 0.0, 0.0, 0.0, 0.0, vpMath::rad(20.0), 0.0 }), delta);
renderer.setNodePose(objectName, wTo * oToo);
}
if (renderTime.size() > 0) {
std::cout << "Render time: " << vpMath::getMean(renderTime) << "ms +- " << vpMath::getStdev(renderTime) << "ms" << std::endl;
std::cout << "Panda3D -> vpImage time: " << vpMath::getMean(fetchTime) << "ms +- " << vpMath::getStdev(fetchTime) << "ms" << std::endl;
std::cout << "Display time: " << vpMath::getMean(displayTime) << "ms +- " << vpMath::getStdev(displayTime) << "ms" << std::endl;
}
return 0;
}
#else
int main()
{
std::cerr << "Recompile ViSP with Panda3D as a third party to run this tutorial" << std::endl;
return EXIT_FAILURE;
}
#endif

Running the tutorial

  • Once ViSP is built, you may run the tutorial by:
    $ cd $VISP_WS/visp-build
    $ ./tutorial/ar/tutorial-panda3d-renderer
    It loads the object located by default in the tutorial/ar/data/suzanne.bam file.
  • You should see something similar to the following video

Known issues

Library not loaded: libpanda.1.11.dylib

This error occurs on macOS.

% cd $VISP_WS/visp-build/tutorial/ar/
% ./tutorial-panda3d-renderer
dyld[1795]: Library not loaded: @loader_path/../lib/libpanda.1.11.dylib
Referenced from: <0D61FFE0-73FA-3053-8D8D-8912BFF16E36> /Users/fspindle/soft/visp/visp_ws/test-pr/visp-SamFlt/visp-build/tutorial/ar/tutorial-panda3d-renderer
Reason: tried: '/Users/fspindle/soft/visp/visp_ws/test-pr/visp-SamFlt/visp-build/tutorial/ar/../lib/libpanda.1.11.dylib' (no such file)
zsh: abort ./tutorial-panda3d-renderer

It occurs when you didn't carefully follow the instructions mentioned in the Installation on macOS section.

A quick fix is to add the path to the library in DYLD_LIBRARY_PATH env var:

$ export DYLD_LIBRARY_PATH=/Library/Developer/Panda3D/lib:$DYLD_LIBRARY_PATH

Segfault: :framework(error): Unable to create window

This error occurs on macOS.

% cd $VISP_WS/visp-build/tutorial/ar/
% ./tutorial-panda3d-renderer
Initializing Panda3D rendering framework
Known pipe types:
CocoaGLGraphicsPipe
(all display modules loaded.)
:framework(error): Unable to create window.
zsh: segmentation fault ./tutorial-panda3d-renderer

This issue is probably due to the EIGEN_MAX_ALIGN_BYTES and HAVE_PNG macro redefinitions that occur when building ViSP with Panda3D support:

$ cd visp-build
$ make
...
[100%] Building CXX object tutorial/ar/CMakeFiles/tutorial-panda3d-renderer.dir/tutorial-panda3d-renderer.cpp.o
In file included from $VISP_WS/visp/tutorial/ar/tutorial-panda3d-renderer.cpp:17:
In file included from $VISP_WS/visp/modules/ar/include/visp3/ar/vpPanda3DRGBRenderer.h:39:
In file included from $VISP_WS/visp/modules/ar/include/visp3/ar/vpPanda3DBaseRenderer.h:42:
In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/pandaFramework.h:17:
In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/pandabase.h:21:
In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/dtoolbase.h:22:
$VISP_WS/3rdparty/panda3d/panda3d/built/include/dtool_config.h:40:9: warning: 'HAVE_PNG' macro redefined [-Wmacro-redefined]
#define HAVE_PNG 1
^
/opt/homebrew/include/pcl-1.14/pcl/pcl_config.h:53:9: note: previous definition is here
#define HAVE_PNG
^
In file included from $VISP_WS/visp/tutorial/ar/tutorial-panda3d-renderer.cpp:17:
In file included from $VISP_WS/visp/modules/ar/include/visp3/ar/vpPanda3DRGBRenderer.h:39:
In file included from $VISP_WS/visp/modules/ar/include/visp3/ar/vpPanda3DBaseRenderer.h:42:
In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/pandaFramework.h:17:
In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/pandabase.h:21:
In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/dtoolbase.h:22:
$VISP_WS/3rdparty/panda3d/panda3d/built/include/dtool_config.h:64:9: warning: 'HAVE_ZLIB' macro redefined [-Wmacro-redefined]
#define HAVE_ZLIB 1
^
/opt/homebrew/include/pcl-1.14/pcl/pcl_config.h:55:9: note: previous definition is here
#define HAVE_ZLIB
^
In file included from $VISP_WS/visp/tutorial/ar/tutorial-panda3d-renderer.cpp:17:
In file included from $VISP_WS/visp/modules/ar/include/visp3/ar/vpPanda3DRGBRenderer.h:39:
In file included from $VISP_WS/visp/modules/ar/include/visp3/ar/vpPanda3DBaseRenderer.h:42:
In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/pandaFramework.h:17:
In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/pandabase.h:21:
$VISP_WS/3rdparty/panda3d/panda3d/built/include/dtoolbase.h:432:9: warning: 'EIGEN_MAX_ALIGN_BYTES' macro redefined [-Wmacro-redefined]
#define EIGEN_MAX_ALIGN_BYTES MEMORY_HOOK_ALIGNMENT
^
/opt/homebrew/include/eigen3/Eigen/src/Core/util/ConfigureVectorization.h:175:11: note: previous definition is here
#define EIGEN_MAX_ALIGN_BYTES EIGEN_IDEAL_MAX_ALIGN_BYTES
^
3 warnings generated.
[100%] Linking CXX executable tutorial-panda3d-renderer
[100%] Built target tutorial-panda3d-renderer

The workaround consists in disabling PCL usage during ViSP configuration:

$ cd $VISP_WS/visp-build
$ cmake ../visp -DUSE_PCL=OFF
$ make -j$(sysctl -n hw.logicalcpu)