
Creating Test Images and Comparing UIImages


I’ve been working on this app which relates to my obsession with color. It’s an image processing app, and you can see some pictures made with it on our Tumblr.

This involved learning how to take images apart and put them back together, rewriting a lot of stuff in C for performance, etc. But one of the other problems I faced was how to test things involving images: how do I create test images, and how do I compare them?

Creating Test Images

The simplest way to do this is to draw the image into a graphics context. This is far from performant, so it isn’t really viable for much beyond small test images, but it does work.

I have three little helper functions that create some test images that I can work with.


// Make an image of a given size, filled with a single color.
+ (UIImage *)createTestImageWithWidth:(CGFloat)width
                               height:(CGFloat)height
                                color:(UIColor *)color {
  CGRect rect = CGRectMake(0, 0, width, height);
  UIGraphicsBeginImageContext(rect.size);
  [color set];
  UIRectFill(rect);
  UIImage *testImage = UIGraphicsGetImageFromCurrentImageContext();
  UIGraphicsEndImageContext();
  return testImage;
}
// Create a 3×3 image alternating between two colors.
+ (UIImage *)createTestImageNineQuadrantsWithColor1:(UIColor *)color1
                                             color2:(UIColor *)color2 {
  // Fill a 3×3 checkerboard: color1 where x + y is even, color2 where odd.
  CGRect rect = CGRectMake(0, 0, 3.0, 3.0);
  UIGraphicsBeginImageContext(rect.size);
  [color1 set];
  UIRectFill(CGRectMake(0, 0, 1, 1));
  UIRectFill(CGRectMake(2, 0, 1, 1));
  UIRectFill(CGRectMake(1, 1, 1, 1));
  UIRectFill(CGRectMake(0, 2, 1, 1));
  UIRectFill(CGRectMake(2, 2, 1, 1));
  [color2 set];
  UIRectFill(CGRectMake(1, 0, 1, 1));
  UIRectFill(CGRectMake(0, 1, 1, 1));
  UIRectFill(CGRectMake(2, 1, 1, 1));
  UIRectFill(CGRectMake(1, 2, 1, 1));
  UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
  UIGraphicsEndImageContext();
  return image;
}
// Create a 2×2 image with each quadrant a different color.
+ (UIImage *)createTestImageWithFourColors {
  // Create an image with 4 quadrants of color.
  CGRect rect = CGRectMake(0, 0, 2.0, 2.0);
  UIGraphicsBeginImageContext(rect.size);
  [[UIColor redColor] set];
  UIRectFill(CGRectMake(0, 0, 1, 1));
  [[UIColor greenColor] set];
  UIRectFill(CGRectMake(1, 0, 1, 1));
  [[UIColor blueColor] set];
  UIRectFill(CGRectMake(0, 1, 1, 1));
  [[UIColor blackColor] set];
  UIRectFill(CGRectMake(1, 1, 1, 1));
  UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
  UIGraphicsEndImageContext();
  return image;
}

ImageDrawer.m

The other thing I have is a function that turns an array of UIColors into an image. This is a bit more complicated, but helpful for some tests.


// Create an image from an array of UIColors.
+ (UIImage *)createImageWithPixelData:(NSArray *)pixelData
                                width:(int)width
                               height:(int)height {
  CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
  // Add 1 for the alpha channel.
  size_t numberOfComponents = CGColorSpaceGetNumberOfComponents(colorSpace) + 1;
  size_t bitsPerComponent = 8;
  size_t bytesPerPixel = (bitsPerComponent * numberOfComponents) / 8;
  size_t bytesPerRow = bytesPerPixel * width;
  uint8_t *rawData = (uint8_t *)calloc([pixelData count] * numberOfComponents,
                                       sizeof(uint8_t));
  CGContextRef context = CGBitmapContextCreate(rawData,
                                               width,
                                               height,
                                               bitsPerComponent,
                                               bytesPerRow,
                                               colorSpace,
                                               (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
  CGColorSpaceRelease(colorSpace);
  int byteIndex = 0;
  for (int index = 0; index < [pixelData count]; index += 1) {
    CGFloat r, g, b, a;
    BOOL convert = [[pixelData objectAtIndex:index] getRed:&r green:&g blue:&b alpha:&a];
    if (!convert) {
      // TODO(cate): Handle this.
      NSLog(@"Failed, continue");
    }
    // Note: the context is premultiplied-alpha, so writing the raw
    // components is only exact for fully opaque colors.
    rawData[byteIndex] = r * 255;
    rawData[byteIndex + 1] = g * 255;
    rawData[byteIndex + 2] = b * 255;
    rawData[byteIndex + 3] = a * 255;
    byteIndex += 4;
  }
  CGImageRef imageRef = CGBitmapContextCreateImage(context);
  UIImage *newImage = [UIImage imageWithCGImage:imageRef];
  CGContextRelease(context);
  CGImageRelease(imageRef);
  free(rawData);  // The context has been released, so the buffer is safe to free.
  return newImage;
}

ImageMaker.m

Comparing Images

This leads me to the question of comparing images. For my purposes (and the app is heavily focused on colors), I can determine whether things have worked by comparing two color arrays. I could compare the rawData directly, but I want to abstract it away a bit to make my tests clearer. So I have another function that is basically the inverse of the one above: it extracts an array of pixel colors from an image.

Turning images into arrays of UIColors and vice versa is so-so performance-wise, and UIColors have a huge space overhead compared to the rawData array. It’s fine for testing, very small images, or a proof of concept, but not much more than that.


// Turn an image into an array of UIColors.
+ (NSArray *)pixelsForImage:(UIImage *)image {
  NSUInteger width = [image size].width;
  NSUInteger height = [image size].height;
  NSUInteger count = width * height;
  NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];
  CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
  // Add 1 for the alpha channel.
  size_t numberOfComponents = CGColorSpaceGetNumberOfComponents(colorSpace) + 1;
  size_t bitsPerComponent = 8;
  size_t bytesPerPixel = (bitsPerComponent * numberOfComponents) / 8;
  size_t bytesPerRow = bytesPerPixel * width;
  uint8_t *rawData = (uint8_t *)calloc(count * numberOfComponents, sizeof(uint8_t));
  CGContextRef context = CGBitmapContextCreate(rawData,
                                               width,
                                               height,
                                               bitsPerComponent,
                                               bytesPerRow,
                                               colorSpace,
                                               (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
  CGColorSpaceRelease(colorSpace);
  CGImageRef cgImage = [image CGImage];
  CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
  int byteIndex = 0;
  for (int i = 0; i < count; ++i) {
    // Components come back premultiplied by alpha; for the opaque test
    // images here that makes no difference.
    CGFloat red = (rawData[byteIndex] * 1.0) / 255.0;
    CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
    CGFloat blue = (rawData[byteIndex + 2] * 1.0) / 255.0;
    CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
    byteIndex += 4;
    UIColor *color = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
    [result addObject:color];
  }
  CGContextRelease(context);
  free(rawData);
  return result;
}

Then with two arrays I can just loop through and compare.


- (void)compareColorArrayRGBs:(NSArray *)array toExpected:(NSArray *)expected {
  XCTAssertEqual([expected count], [array count]);
  for (int i = 0; i < [expected count]; i++) {
    UIColor *color = [array objectAtIndex:i];
    UIColor *expectedColor = [expected objectAtIndex:i];
    CGFloat r, g, b, a;
    CGFloat eR, eG, eB, eA;
    [color getRed:&r green:&g blue:&b alpha:&a];
    [expectedColor getRed:&eR green:&eG blue:&eB alpha:&eA];
    XCTAssertEqualWithAccuracy(r, eR, 0.005);
    XCTAssertEqualWithAccuracy(g, eG, 0.005);
    XCTAssertEqualWithAccuracy(b, eB, 0.005);
    XCTAssertEqualWithAccuracy(a, eA, 0.005);
  }
}

ImageAsserts.m

5 replies on “Creating Test Images and Comparing UIImages”

After a little tweaking it also works for NSImage on macOS. Passing both arrays to XCTAssertEqualObjects() does the comparison without any extra effort. Nice solution!

Comments are closed.