This tutorial assumes that you have followed the Tutorial: How to create a basic iOS application that uses ViSP.
Introduction
In this tutorial you will learn how to do simple image processing on iOS devices with ViSP. The application loads a color image (monkey.png) and lets the user visualize this image in grey level, its gradients, or its Canny edges, on the iOS simulator or on a real device.
In ViSP, images are handled by the vpImage class. In iOS, however, image rendering has to be done with the UIImage class provided by the UIKit framework. In this tutorial we provide the functions that convert a vpImage to a UIImage and vice versa.
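To fix ideas, here is a minimal sketch of the vpImage API used throughout this tutorial; the 480 x 640 size is an arbitrary example:

#include <visp3/core/vpImage.h>
#include <visp3/core/vpImageConvert.h>

int main()
{
  vpImage<vpRGBa> color(480, 640);      // color image with 480 rows and 640 columns
  vpImage<unsigned char> gray;          // grey level image, resized by the conversion
  vpImageConvert::convert(color, gray); // RGBa to grey level conversion
  return 0;
}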
Note that all the material (source code and image) described in this tutorial is part of the ViSP source code (in the tutorial/ios/StartedImageProc folder) and can be found at https://github.com/lagadic/visp/tree/master/tutorial/ios/StartedImageProc.
StartedImageProc application
Let us consider the Xcode project named StartedImageProc that is part of the ViSP source code and located in $VISP_WS/tutorial/ios/StartedImageProc. This project is an Xcode "Single View Application" in which we renamed ViewController.m into ViewController.mm, introduced minor modifications in ViewController.h and added the monkey.png image.
To open this application, if you followed Tutorial: Installation from prebuilt packages for iOS devices, simply run:
$ cd $HOME/framework
Then download the content of https://github.com/lagadic/visp/tree/master/tutorial/ios/StartedImageProc and run:
$ open StartedImageProc -a Xcode
or, if you already downloaded ViSP following Tutorial: Installation from source for iOS devices, run:
$ open ~/framework/visp/tutorial/ios/StartedImageProc -a Xcode
Here you should see something similar to:
Once opened, you just have to drag & drop the ViSP and OpenCV frameworks available in $HOME/framework/ios if you followed Tutorial: Installation from prebuilt packages for iOS devices. In the dialog box, enable the "Copy items if needed" check box to add visp3.framework and opencv2.framework to the project.
Now you should be able to build and run your application.
Image conversion functions
The Xcode project StartedImageProc contains the ImageConversion.h and ImageConversion.mm files that implement the functions to convert a UIImage to a ViSP vpImage and vice versa.
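For reference, and based on the declarations found in ImageConversion.h of the tutorial sources (check that file for the exact prototypes), the four conversion entry points look like this:

#import <UIKit/UIKit.h>
#include <visp3/core/vpImage.h>
#include <visp3/core/vpImageConvert.h>

// UIImage to ViSP image conversions
vpImage<vpRGBa> vpImageColorFromUIImage(UIImage *image);
vpImage<unsigned char> vpImageGrayFromUIImage(UIImage *image);

// ViSP image to UIImage conversions
UIImage *UIImageFromVpImageColor(const vpImage<vpRGBa> &I);
UIImage *UIImageFromVpImageGray(const vpImage<unsigned char> &I);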
UIImage to color vpImage
The following function, implemented in ImageConversion.mm, shows how to convert a UIImage into a vpImage<vpRGBa> instantiated as a color image.
vpImage<vpRGBa> vpImageColorFromUIImage(UIImage *image)
{
  CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);

  if (CGColorSpaceGetModel(colorSpace) == kCGColorSpaceModelMonochrome) {
    NSLog(@"Input UIImage is grayscale");
    vpImage<unsigned char> gray(image.size.height, image.size.width); // 8 bits per component, 1 channel

    CGContextRef contextRef = CGBitmapContextCreate(gray.bitmap,       // pointer to data
                                                    image.size.width,  // width of bitmap
                                                    image.size.height, // height of bitmap
                                                    8,                 // bits per component
                                                    image.size.width,  // bytes per row
                                                    colorSpace,        // colorspace
                                                    kCGImageAlphaNone |
                                                    kCGBitmapByteOrderDefault); // bitmap info flags
    CGContextDrawImage(contextRef, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);
    CGContextRelease(contextRef);

    vpImage<vpRGBa> color;
    vpImageConvert::convert(gray, color); // grey level to RGBa conversion

    return color;
  }
  else {
    NSLog(@"Input UIImage is color");
    vpImage<vpRGBa> color(image.size.height, image.size.width); // 8 bits per component, 4 channels

    colorSpace = CGColorSpaceCreateDeviceRGB();

    CGContextRef contextRef = CGBitmapContextCreate(color.bitmap,         // pointer to data
                                                    image.size.width,     // width of bitmap
                                                    image.size.height,    // height of bitmap
                                                    8,                    // bits per component
                                                    4 * image.size.width, // bytes per row
                                                    colorSpace,           // colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // bitmap info flags
    CGContextDrawImage(contextRef, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);
    CGContextRelease(contextRef);

    return color;
  }
}
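As a usage sketch, assuming monkey.png is part of the application bundle as in this project, the conversion can be called from a method of ViewController.mm like this:

UIImage *img = [UIImage imageNamed:@"monkey.png"];
vpImage<vpRGBa> I = vpImageColorFromUIImage(img);
NSLog(@"Converted color image: %u x %u", I.getHeight(), I.getWidth());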
UIImage to gray vpImage
The following function, implemented in ImageConversion.mm, shows how to convert a UIImage into a vpImage<unsigned char> instantiated as a grey level image.
vpImage<unsigned char> vpImageGrayFromUIImage(UIImage *image)
{
  CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);

  if (CGColorSpaceGetModel(colorSpace) == kCGColorSpaceModelMonochrome) {
    NSLog(@"Input UIImage is grayscale");
    vpImage<unsigned char> gray(image.size.height, image.size.width); // 8 bits per component, 1 channel

    CGContextRef contextRef = CGBitmapContextCreate(gray.bitmap,       // pointer to data
                                                    image.size.width,  // width of bitmap
                                                    image.size.height, // height of bitmap
                                                    8,                 // bits per component
                                                    image.size.width,  // bytes per row
                                                    colorSpace,        // colorspace
                                                    kCGImageAlphaNone |
                                                    kCGBitmapByteOrderDefault); // bitmap info flags
    CGContextDrawImage(contextRef, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);
    CGContextRelease(contextRef);

    return gray;
  } else {
    NSLog(@"Input UIImage is color");
    vpImage<vpRGBa> color(image.size.height, image.size.width); // 8 bits per component, 4 channels

    colorSpace = CGColorSpaceCreateDeviceRGB();

    CGContextRef contextRef = CGBitmapContextCreate(color.bitmap,         // pointer to data
                                                    image.size.width,     // width of bitmap
                                                    image.size.height,    // height of bitmap
                                                    8,                    // bits per component
                                                    4 * image.size.width, // bytes per row
                                                    colorSpace,           // colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // bitmap info flags
    CGContextDrawImage(contextRef, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);
    CGContextRelease(contextRef);

    vpImage<unsigned char> gray;
    vpImageConvert::convert(color, gray); // RGBa to grey level conversion

    return gray;
  }
}
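A similar usage sketch for the grey level case, again assuming monkey.png is in the bundle:

UIImage *img = [UIImage imageNamed:@"monkey.png"];
vpImage<unsigned char> I = vpImageGrayFromUIImage(img); // grey level ViSP image, ready for processing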
Color vpImage to UIImage
The following function, implemented in ImageConversion.mm, shows how to convert a color vpImage<vpRGBa> into a UIImage.
UIImage *UIImageFromVpImageColor(const vpImage<vpRGBa> &I)
{
  NSData *data = [NSData dataWithBytes:I.bitmap length:I.getSize()*4]; // 4 bytes per RGBa pixel

  CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
  CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

  CGImageRef imageRef = CGImageCreate(I.getWidth(),     // width
                                      I.getHeight(),    // height
                                      8,                // bits per component
                                      8 * 4,            // bits per pixel
                                      4 * I.getWidth(), // bytes per row
                                      colorSpace,
                                      kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                      provider,
                                      nullptr,          // decode array
                                      false,            // should interpolate
                                      kCGRenderingIntentDefault);

  UIImage *finalImage = [UIImage imageWithCGImage:imageRef];

  CGImageRelease(imageRef);
  CGDataProviderRelease(provider);
  CGColorSpaceRelease(colorSpace);

  return finalImage;
}
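A hedged usage sketch: once converted back, the UIImage can be displayed in a UIImageView. Here imageView is assumed to be an outlet declared in ViewController.h; the name is illustrative:

vpImage<vpRGBa> I = vpImageColorFromUIImage([UIImage imageNamed:@"monkey.png"]);
// ... image processing on I ...
self.imageView.image = UIImageFromVpImageColor(I); // back to UIImage for rendering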
Gray vpImage to UIImage
The following function, implemented in ImageConversion.mm, shows how to convert a grey level vpImage<unsigned char> into a UIImage.
UIImage *UIImageFromVpImageGray(const vpImage<unsigned char> &I)
{
  NSData *data = [NSData dataWithBytes:I.bitmap length:I.getSize()]; // 1 byte per pixel

  CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
  CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

  CGImageRef imageRef = CGImageCreate(I.getWidth(),  // width
                                      I.getHeight(), // height
                                      8,             // bits per component
                                      8,             // bits per pixel
                                      I.getWidth(),  // bytes per row
                                      colorSpace,
                                      kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                      provider,
                                      nullptr,       // decode array
                                      false,         // should interpolate
                                      kCGRenderingIntentDefault);

  UIImage *finalImage = [UIImage imageWithCGImage:imageRef];

  CGImageRelease(imageRef);
  CGDataProviderRelease(provider);
  CGColorSpaceRelease(colorSpace);

  return finalImage;
}
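Putting both directions together, the behaviour of the "convert to gray" button can be sketched as follows; imageView is again an assumed outlet, and the actual handler in ViewController.mm may differ:

UIImage *img = [UIImage imageNamed:@"monkey.png"];
vpImage<unsigned char> gray = vpImageGrayFromUIImage(img); // UIImage to grey level vpImage
self.imageView.image = UIImageFromVpImageGray(gray);       // back to UIImage for display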
Application output
- Now we are ready to build the "StartedImageProc" application using the Xcode "Product > Build" menu.
- Note: at this step you may get the build issue iOS error: libxml/parser.h not found. Just follow the link to see how to fix this issue.
- Once built, if you run the StartedImageProc application on your device, you should see the following screenshots.
- Pressing the "load image" button gives the following result:
- Pressing the "convert to gray" button gives the following result:
- Pressing the "compute gradient" button gives the following result:
- Pressing the "canny detector" button gives the following result (a sketch of how the gradient and Canny images could be produced with vpImageFilter is given after this list):
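The gradient and Canny results shown above could for instance be obtained with vpImageFilter. The sketch below only illustrates one possible way to produce them; the filter parameters (Gaussian size 5, Canny threshold 15, Sobel aperture 3) are illustrative values, not necessarily the ones used by the application, and imageView is an assumed outlet:

#include <visp3/core/vpImageConvert.h>
#include <visp3/core/vpImageFilter.h>

vpImage<unsigned char> gray = vpImageGrayFromUIImage([UIImage imageNamed:@"monkey.png"]);

// Gradient along x, computed in double precision then rescaled to 8 bits for display
vpImage<double> dIx;
vpImageFilter::getGradX(gray, dIx);
vpImage<unsigned char> gradient;
vpImageConvert::convert(dIx, gradient);

// Canny edge map
vpImage<unsigned char> canny;
vpImageFilter::canny(gray, canny, 5, 15, 3); // Gaussian filter size, Canny threshold, Sobel aperture

self.imageView.image = UIImageFromVpImageGray(canny); // display the edge map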
Known issues
iOS error: libxml/parser.h not found
Follow the iOS error: libxml/parser.h not found link if you get this issue.
Next tutorial
You are now ready to see the Tutorial: AprilTag marker detection on iOS.