Avoiding Image Decompression Sickness

When starting to work on our iCatalog.framework I stumbled upon an annoying problem, the same one you will face if you ever need to work with large images. "Large" meaning a resolution sufficient to cover the entire screen of an iPad, or potentially double that (horizontally and vertically) when dealing with Retina resolution on a future iPad.

Imagine you have a UIScrollView that displays UIImageViews for the individual pages of a catalog or magazine style app. As soon as even one pixel of the following page comes on screen you instantiate (or reuse) a UIImageView and pop it into the scroll view's content area. That works quite well in the Simulator, but when you test it on the device you find that every time you page to the next page there is a noticeable delay.

This delay results from the fact that images need to be decompressed from their file incarnation before they can be rendered on screen. Unfortunately UIImage does this decompression at the latest possible moment, i.e. when the image is about to be displayed. Since adding a new view to the view hierarchy has to occur on the main thread, so does the decompression and subsequent rendering of the image. This is where the annoying stutter or pause comes from. You can see the same effect in App Store apps where scrolling stutters whenever a new image appears on screen.

You basically have two main choices for an image format on disk: JPEG and PNG. Apple generally recommends that you use PNGs for your user interface graphics because when building your app they get optimized by an open source tool named pngcrush. This changes the PNGs such that they can be decompressed and rendered faster on iOS devices, by making them easier to digest for the specific hardware.

The first iPad magazines, like Wired, used PNGs as the format in which they transported the individual magazine pages, causing one edition of Wired to weigh in at upwards of 500 MB. Just because PNGs are automatically optimized for you does not mean they are the apex of wisdom for every purpose. That's all nice and dandy for images that you can bundle with your app, but what about images that have to be downloaded over the internet?

There are distinct advantages and disadvantages to both formats. Let's review: PNGs can have an alpha channel, JPEGs cannot. PNGs use lossless compression, JPEGs let you choose a quality anywhere between 0 and 100%. So if you need the alpha channel – how transparent each pixel is – you are stuck with PNG. But if you don't need a pixel-perfect rendition of your image, you can go with the perceptually optimized JPEG, which basically omits information you don't see anyway. For most kinds of images you can go with around 60-70% quality without any visual artifacts spoiling the fidelity. If you have "sharp pixels", like text, you might want to go higher; for photos you can choose a lower setting.

Looking at memory, an image takes up multiple chunks of it:

1) space on disk, or while being transferred over the internet (compressed)
2) uncompressed space, typically width * height * 4 bytes (RGBA)
3) when displayed in a view, the view itself also needs space for the layer backing store

There's an optimization possible for 1), because instead of copying the compressed bits into memory they can also be mapped there. NSData has the ability to pretend that some space on disk is in memory. When it is accessed, the bytes are actually read from disk rather than from RAM.
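As a rough illustration of that mapping option, here is a minimal Swift sketch; the file path is a placeholder and Data is simply the Swift bridge of NSData. The .mappedIfSafe option asks for the file to be mapped into virtual memory rather than copied into RAM:

```swift
import UIKit

// Hypothetical location of a downloaded page image
let fileURL = URL(fileURLWithPath: "/path/to/page1.jpg")

// .mappedIfSafe maps the file into virtual memory instead of copying it,
// so the compressed bytes are only paged in from disk when they are read
if let mappedData = try? Data(contentsOf: fileURL, options: .mappedIfSafe),
   let image = UIImage(data: mappedData) {
    // At this point the image is still compressed; the expensive
    // decompression only happens once it is first drawn or displayed
    print("loaded \(image.size) from \(mappedData.count) mapped bytes")
}
```

Note that this only saves the copy of the compressed bytes; the decompressed bitmap still has to exist in RAM once the image is drawn.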
CGImage is rumored to know by itself which loading strategy is more efficient. UIImage is basically just a wrapper around CGImage.

Then there's the question of "How fast can I get these pixels on screen?" The answer is made up of three main time intervals:

1) time to alloc/init the UIImage with the data on disk
2) time to decompress the bits into an uncompressed format
3) time to transfer the uncompressed bits to a CGContext, potentially resizing, blending and anti-aliasing them

To be able to give a definitive answer to a scientific question, we need to take measurements. We need to benchmark. (A rough timing sketch for these three intervals follows at the end of this section.)

Basic Assumptions and Ingredients

I made a benchmark app that I ran on several iOS devices I had handy. Since I also want to compare crushed versus non-crushed PNGs, I needed to start with a couple of source images that Xcode would crush for me. It would have been nice to also try out different sizes dynamically, but at present there is no way to run pngcrush on the device. So I started with 5 resolutions of the same photo of a flower. Of course a true geek would have contrived multiple different images representing both highly compressible, …
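To make the three intervals above concrete, here is a minimal Swift timing sketch. It is not the actual benchmark app: the file URL is a placeholder, UIGraphicsImageRenderer stands in for whatever context the real benchmark draws into, and the second and third intervals are measured together because the first draw is what forces the decompression:

```swift
import UIKit
import QuartzCore

// Rough sketch: times (1) init from disk and (2)+(3) decompress + draw.
// The URL is a placeholder; the real benchmark uses several flower photos.
func timeImageDisplay(fileURL: URL) {
    // 1) alloc/init the UIImage with the data on disk
    var start = CACurrentMediaTime()
    guard let image = UIImage(contentsOfFile: fileURL.path) else { return }
    print("init:", CACurrentMediaTime() - start)

    // 2) + 3) drawing into an offscreen context forces the decompression
    // that UIImage otherwise defers until the image appears on screen,
    // and includes the blit into a CGContext
    start = CACurrentMediaTime()
    let renderer = UIGraphicsImageRenderer(size: image.size)
    _ = renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
    print("decompress + draw:", CACurrentMediaTime() - start)
}
```

For meaningful numbers you would run this on a device rather than the Simulator, off the main thread, and average over multiple runs; the sketch only shows where each interval begins and ends.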