Visual Servoing Platform  version 3.1.0 under development (2017-09-20)
Tutorial: Image frame grabbing

Introduction

In this tutorial you will learn how to grab images with ViSP, either from cameras or from a video stream.

All the material (source code and videos) described in this tutorial is part of the ViSP source code and can be downloaded using the following command:

$ svn export https://github.com/lagadic/visp.git/trunk/tutorial/grabber

Images from PointGrey cameras

Since ViSP 3.0.0, we provide the vpFlyCaptureGrabber class, a wrapper over the PointGrey FlyCapture SDK that allows grabbing images from any PointGrey camera. This grabber was tested under Ubuntu and Windows with the following cameras:

  • Flea3 USB 3.0 cameras (FL3-U3-32S2M-CS, FL3-U3-13E4C-C)
  • Flea2 firewire camera (FL2-03S2C)
  • Dragonfly2 firewire camera (DR2-COL)

It should also work with GigE PGR cameras.

The following example, also available in tutorial-grabber-flycapture.cpp, shows how to use vpFlyCaptureGrabber to capture grey level images from a PointGrey camera under Ubuntu or Windows. It supposes that a window renderer (libX11 on Ubuntu or GDI on Windows) and the FlyCapture SDK 3rd party are available through ViSP.

#include <visp3/sensor/vpFlyCaptureGrabber.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/gui/vpDisplayGDI.h>
#include <visp3/core/vpImage.h>
int main()
{
#ifdef VISP_HAVE_FLYCAPTURE
try {
vpImage<unsigned char> I; // Create a gray level image container
vpFlyCaptureGrabber g; // Create a grabber based on FlyCapture SDK third party lib
try {
g.setShutter(true); // Turn auto shutter on
g.setGain(true); // Turn auto gain on
g.setVideoModeAndFrameRate(FlyCapture2::VIDEOMODE_1280x960Y8, FlyCapture2::FRAMERATE_60);
}
catch(...) { // If settings are not available just catch the exception to continue with default settings
}
g.open(I);
std::cout << "Image size: " << I.getWidth() << " " << I.getHeight() << std::endl;
#if defined(VISP_HAVE_X11)
vpDisplayX d(I);
#elif defined(VISP_HAVE_GDI)
vpDisplayGDI d(I);
#else
std::cout << "No image viewer is available..." << std::endl;
#endif
while(1) {
g.acquire(I);
vpDisplay::displayText(I, 15, 15, "Click to quit", vpColor::red);
if (vpDisplay::getClick(I, false))
break;
}
}
catch(vpException &e) {
std::cout << "Catch an exception: " << e.getStringMessage() << std::endl;
}
#endif
}

Hereafter we explain the source code.

First an instance of the frame grabber is created.

vpFlyCaptureGrabber g; // Create a grabber based on FlyCapture SDK third party lib

Once the grabber is created, we turn auto shutter and auto gain on and set the camera image size, color coding, and framerate. These settings are enclosed in a try/catch so that the program can continue with default settings if one of them is not supported by the camera.

try {
g.setShutter(true); // Turn auto shutter on
g.setGain(true); // Turn auto gain on
g.setVideoModeAndFrameRate(FlyCapture2::VIDEOMODE_1280x960Y8, FlyCapture2::FRAMERATE_60);
}
catch(...) { // If settings are not available just catch the exception to continue with default settings
}

Then the grabber is initialized using:

g.open(I);

From now on, the grey level image I is initialized with the size corresponding to the grabber settings.

Then we enter a while loop where image acquisition is simply done by:

g.acquire(I);

This image is then displayed using the libX11 or GDI renderer.

We wait for a non-blocking mouse click event to break the while loop before ending the program.

vpDisplay::displayText(I, 15, 15, "Click to quit", vpColor::red);
if (vpDisplay::getClick(I, false))
break;

Images from firewire cameras

The next example, also available in tutorial-grabber-1394.cpp, shows how to use a framegrabber to acquire grey level images from a firewire camera under Unix. It supposes that the libX11 and libdc1394-2 3rd parties are available.

#include <visp3/sensor/vp1394TwoGrabber.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/core/vpImage.h>
int main()
{
#ifdef VISP_HAVE_DC1394
try {
vpImage<unsigned char> I; // Create a gray level image container
bool reset = true; // Enable bus reset during construction (default)
vp1394TwoGrabber g(reset); // Create a grabber based on libdc1394-2.x third party lib
g.open(I);
std::cout << "Image size: " << I.getWidth() << " " << I.getHeight() << std::endl;
#ifdef VISP_HAVE_X11
vpDisplayX d(I);
#else
std::cout << "No image viewer is available..." << std::endl;
#endif
while(1) {
g.acquire(I);
if (vpDisplay::getClick(I, false))
break;
}
}
catch(vpException &e) {
std::cout << "Catch an exception: " << e << std::endl;
}
#endif
}

Hereafter we explain the new lines that are introduced.

First an instance of the frame grabber is created. During construction, a bus reset is sent. If you don't want to reset the firewire bus, just set reset to false.

vp1394TwoGrabber g(reset); // Create a grabber based on libdc1394-2.x third party lib

Once the grabber is created, we could also set the camera image size, color coding, and framerate using vp1394TwoGrabber::setVideoMode() and vp1394TwoGrabber::setFramerate().

Note that here you can specify some other settings such as the firewire transmission speed. For a more complete list of settings see vp1394TwoGrabber class.

g.setIsoTransmissionSpeed(vp1394TwoGrabber::vpISO_SPEED_800);

Then the grabber is initialized using:

g.open(I);

From now on, the grey level image I is initialized with the size corresponding to the grabber settings.

Then we enter a while loop where image acquisition is simply done by:

g.acquire(I);

We wait for a non-blocking mouse click event to break the while loop before ending the program.

if (vpDisplay::getClick(I, false))
break;

In the previous example we used the vp1394TwoGrabber class that works with firewire cameras under Unix. Under Windows, you may use the vp1394CMUGrabber class instead. A similar example is provided in tutorial-grabber-CMU1394.cpp.

Images from other cameras

If you want to grab images from a USB camera under Unix, you may use the vpV4l2Grabber class. To this end, libv4l should be installed. An example is provided in tutorial-grabber-v4l2.cpp.

It is also possible to grab images using OpenCV. You may find an example in tutorial-grabber-opencv.cpp.

Images from a video stream

With ViSP it is also possible to get images from an input video stream. Supported formats are *.avi, *.mp4, *.mov, *.ogv, *.flv and many others. To this end we exploit the OpenCV 3rd party.

The example below, available in tutorial-video-reader.cpp, shows how to read an MPEG video stream.

Warning
We recall that this example works only if ViSP was built with OpenCV support.
#include <visp3/gui/vpDisplayGDI.h>
#include <visp3/gui/vpDisplayOpenCV.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/core/vpTime.h>
#include <visp3/io/vpVideoReader.h>
int main(int argc, char** argv)
{
#if (VISP_HAVE_OPENCV_VERSION >= 0x020100)
try {
std::string videoname = "video.mpg";
for (int i=1; i<argc; i++) {
if (std::string(argv[i]) == "--name" && i+1 < argc)
videoname = std::string(argv[i+1]);
else if (std::string(argv[i]) == "--help") {
std::cout << "\nUsage: " << argv[0] << " [--name <video name>] [--help]\n" << std::endl;
return 0;
}
}
vpImage<vpRGBa> I; // Create a color image container
vpVideoReader g; // Create a video reader
g.setFileName(videoname);
g.open(I);
std::cout << "video name: " << videoname << std::endl;
std::cout << "video framerate: " << g.getFramerate() << "Hz" << std::endl;
std::cout << "video dimension: " << I.getWidth() << " " << I.getHeight() << std::endl;
#ifdef VISP_HAVE_X11
vpDisplayX d(I);
#elif defined(VISP_HAVE_GDI)
vpDisplayGDI d(I);
#elif defined(VISP_HAVE_OPENCV)
vpDisplayOpenCV d(I);
#else
std::cout << "No image viewer is available..." << std::endl;
#endif
vpDisplay::setTitle(I, "Video reader");
while (! g.end() ) {
double t = vpTime::measureTimeMs();
g.acquire(I);
if (vpDisplay::getClick(I, false)) break;
vpTime::wait(t, 1000. / g.getFramerate());
}
}
catch(vpException &e) {
std::cout << e.getMessage() << std::endl;
}
#else
(void)argc;
(void)argv;
std::cout << "Install OpenCV and rebuild ViSP to use this example." << std::endl;
#endif
}

We now explain the new lines that were introduced.

#include <visp3/core/vpTime.h>
#include <visp3/io/vpVideoReader.h>

Include the header of the vpTime class that allows measuring time, and of the vpVideoReader class that allows reading a video stream.

Create an instance of a video reader:

vpVideoReader g; // Create a video reader

g.setFileName(videoname);

Set the name of the video stream. Here videoname corresponds to a video file location. For example, we provide the file video.mpg located in the same folder as the executable.

The vpVideoReader class can also handle a sequence of images. For example, to read the following images:

% ls *.png
image0000.png
image0001.png
image0002.png
image0003.png
image0004.png
...

you may use the following:

g.setFileName("./image%04d.png");

where you specify that each image number is coded with 4 digits. Here, we will use libpng or OpenCV to read PNG images. Supported image formats are PPM, PGM, PNG and JPEG.

Then as for any other grabber, you have to initialize the frame grabber using:

g.open(I);

Then we enter the while loop that runs until the last image of the stream is reached:

while (! g.end() ) {

To get the next image in the stream, we just use:

g.acquire(I);

To synchronize the video decoding with the video framerate, we measure the beginning time of each loop iteration:

double t = vpTime::measureTimeMs();

The synchronization is achieved by waiting, from the beginning of the iteration, until the framerate period expressed in milliseconds has elapsed:

vpTime::wait(t, 1000. / g.getFramerate());

Next tutorial

You are now ready to continue with the next tutorial.