When creating a static universal framework we face one rather annoying problem: how do we get our pretty images into the app bundle of the project our code will be used in?
Contrary to what you might be used to on the Windows platform, there is no built-in method of embedding graphics files into an app binary. Because of this you see well-known SDKs like FBConnect ship a resource bundle together with their libraries. To use these you have to add both the library/framework and the graphics bundle to your project.
Bundles are basically just folders that have been given the .bundle extension. This hides their contents from lazy clicking, but you can still look inside in Terminal or by right-clicking and choosing “Show Package Contents”. This opens the bundle like a folder and you can edit its contents.
Now for the longest time I have wanted to package library and SDK code in neat frameworks that you simply drag & drop into a target project. I managed to build two libraries and glue them together so that the same library can be used when building for the Simulator and for the device. Then guest author Netytan demonstrated how you can hack a bundle project into producing a framework instead. The graphics problem was literally the only open loop left to close.
Until today …
Because if you think about it, a graphics file is just a collection of bytes, just like a static NSString variable or a static C-style array. The only problem is that graphics files contain not only string-safe character values but also newlines and other control characters, because each byte can be any value from 0 to 255.
The first method of encoding the data in a way that does not destroy your source files is the same one used for transporting binary data over HTTP. Base64 works by taking only 6 bits of every 8 and mapping the resulting numbers (0-63) to a table of string-safe characters. That means every 3 bytes turn into 4 characters, at the cost of a third more bytes.
For the MobFox framework I used this method. All I needed was a Base64 encoder and decoder, which I found in the form of an NSData category by Matt Gallagher. Matt’s encoder inserts line breaks, but an NSString literal in source code has to sit on a single line, so I had to disable the wrapping.
I wrote a small command line utility that loaded the image data, base64-encoded it and wrote it to another file. The resulting string can be transformed into a UIImage like so:
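A minimal sketch of that transformation, assuming Matt Gallagher’s NSData+Base64 category (its dataFromBase64String: class method) and an illustrative constant name for the embedded string:

```
// NSData+Base64 is Matt Gallagher's category; the constant name is illustrative
#import "NSData+Base64.h"

static NSString *kCloseButtonBase64 = @"iVBORw0KGgo..."; // truncated for the example

NSData *imageData = [NSData dataFromBase64String:kCloseButtonBase64];
UIImage *image = [UIImage imageWithData:imageData];
```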
Fairly straightforward, I think. With the help of the aforementioned category extension on NSData we decode the string, which we can then feed directly to imageWithData:.
The second method that I was made aware of is way geekier but, once you understand it, works just as well.
Binary C-Style Array
In pure C, an array is just a block of memory whose size is the number of elements times the size of one element. There are no methods or any of the conveniences we have with NSArray. It just so happens that C allows you to initialize an array with curly braces containing the values that you want written into that memory.
I had originally chosen Base64 because I thought the encoding would be easier, but it turns out there is a tool already installed on your Mac that generates such C-style array code for you. All we need to do is pipe its output into a header file.
xxd -i close.png > close.png.h
xxd hex-dumps the specified file, and -i selects C include file style as the output format. The output contains such a C-style array plus an unsigned int variable holding the length of the array, which in C we have to keep track of ourselves. Both names are automatically derived from the file name: close_png and close_png_len. We can then add the resulting file to our project.
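For illustration, the generated close.png.h looks roughly like this. The first eight bytes shown are the standard PNG signature; the remaining values (and the short length) are made up, since a real close.png would of course contain many more bytes:

```c
/* approximation of what `xxd -i close.png` emits; a real file would be much longer */
unsigned char close_png[] = {
  0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a, 0x00, 0x00, 0x00, 0x0d
};
unsigned int close_png_len = 12;
```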
In our project the use of the bytes is just as simple.
// --- top of file
#import "close.png.h"

// --- in some method

// pointer to image data and length
unsigned char *pngBytes = close_png;
NSUInteger pngLength = close_png_len;

// make NSData and UIImage from it
NSData *pngData = [NSData dataWithBytesNoCopy:pngBytes length:pngLength freeWhenDone:NO];
UIImage *image = [UIImage imageWithData:pngData];
Since the memory for the image data array is allocated when the binary is loaded, we don’t need to make another copy. Instead we “transform” it into an NSData object via the dataWithBytesNoCopy:length:freeWhenDone: method. For the same reason we pass NO so that NSData won’t try to free the memory when it is deallocated.
This method has a couple of advantages over the Base64 approach. The size used in memory is exactly the same as on disk, since no encoding is necessary. And you don’t need any extra decoding method.
I was able to figure out this mechanism by looking at this commit to AQGridView by Alan Quartermain, where you can see how he replaced PNG files loaded from disk with this approach. There is also a small script that you might want to use in your project to run on multiple files and end up with just a single header.
Once you know how, embedding any kind of resource into your binaries is quite easy.
There is one disadvantage that we have not touched on yet: Retina. With embedding you no longer get the smart resolution handling of imageNamed:. So you have to make a choice: either you embed both resolutions, or you include only the high-resolution image and manually specify the size in screen points for your image views.
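If you go with only the high-resolution image, a sketch of the manual sizing might look like this. Note that imageWithData: gives the image a scale of 1.0, so its size comes back in pixels rather than points, and we halve it ourselves (pngData here is the NSData created from the embedded bytes):

```
// image decoded from embedded @2x data has scale 1.0, so image.size is in pixels
UIImage *image = [UIImage imageWithData:pngData];
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];

// halve the pixel dimensions to get the intended size in screen points
imageView.frame = CGRectMake(0, 0, image.size.width / 2.0, image.size.height / 2.0);
```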
There is little benefit in doing this for regular apps, but for static libraries it lets you eliminate the headache of external resources. Because embedding always makes your binaries fatter, you have to exercise good judgement as to where it makes sense. But generally speaking, if you only have a handful of small images, then the convenience for frameworks easily outweighs the extra work.