
DTLoupe – Reverse-engineering Apple’s Loupes

I am working on a CoreText-based rich text editor at the moment. That means employing two primary technologies: the UITextInput protocol for integrating with the system keyboard and input handling, and CoreText for rendering the formatted text. Unfortunately Apple forgot to add selection and loupe mechanics to UITextInput, so we have to build these ourselves if we want the same look and feel as the built-in text views.

To get selection handling and a loupe, developers currently go down one of two paths: either they contort UIWebView with fancy JavaScript, or they struggle with implementing their own code. Both approaches lead to a wide variety of loupes and selection mechanics that look and behave differently from the originals. I have contacted Apple by all means available to me, and I am hoping that an official method to get the selection mechanics and loupe will arrive down the road.

But until there is one, let me present an interim solution for this problem. It is a component I call DTLoupe, and it has many potential applications besides selecting text in an editor, such as providing a magnifier in contexts where pinch-to-zoom does not make sense.

Apple’s loupe consists of several parts that are combined with a zoomed rendering of the view around your finger. The layers, from bottom to top, are (see the sketch after the list):

  • a “lo” image forms the base
  • a mask image cuts out the inner shape of the loupe
  • a 1.25x enlarged rendering of the view is put into this space
  • a “high” image adds the shine and general frame
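
To make the layering concrete, here is a minimal sketch of the compositing order in drawRect:. The class name, the artwork file names and the targetView/touchPoint properties are placeholders of mine, not the actual DTLoupe code:

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    // hypothetical stand-in for the real loupe view: targetView is the view
    // being magnified, touchPoint the magnified spot in its coordinate system
    @interface LoupeSketchView : UIView
    @property (nonatomic, assign) UIView *targetView;
    @property (nonatomic, assign) CGPoint touchPoint;
    @end

    @implementation LoupeSketchView

    @synthesize targetView, touchPoint;

    - (void)drawRect:(CGRect)rect
    {
        CGContextRef ctx = UIGraphicsGetCurrentContext();

        // placeholder artwork names; use the PNGs your designer provides
        UIImage *loImage   = [UIImage imageNamed:@"loupe-lo"];
        UIImage *maskImage = [UIImage imageNamed:@"loupe-mask"];
        UIImage *hiImage   = [UIImage imageNamed:@"loupe-hi"];

        // 1. the "lo" image forms the base
        [loImage drawInRect:self.bounds];

        CGContextSaveGState(ctx);

        // 2. clip to the inner shape; CGContextClipToMask expects CG's
        //    flipped coordinate system, so flip, clip, then flip back
        CGContextTranslateCTM(ctx, 0, CGRectGetHeight(self.bounds));
        CGContextScaleCTM(ctx, 1.0, -1.0);
        CGContextClipToMask(ctx, self.bounds, maskImage.CGImage);
        CGContextScaleCTM(ctx, 1.0, -1.0);
        CGContextTranslateCTM(ctx, 0, -CGRectGetHeight(self.bounds));

        // 3. draw the target view 1.25x enlarged so that touchPoint ends
        //    up in the center of the loupe
        CGContextTranslateCTM(ctx, CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
        CGContextScaleCTM(ctx, 1.25, 1.25);
        CGContextTranslateCTM(ctx, -touchPoint.x, -touchPoint.y);
        [targetView.layer renderInContext:ctx];

        CGContextRestoreGState(ctx);

        // 4. the "hi" image adds the shine and general frame on top
        [hiImage drawInRect:self.bounds];
    }

    @end

A real implementation additionally calls [self setNeedsDisplay] on every touch move so that the magnified content follows the finger.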

Apple seems to prefer a masking image over a clipping path. We believe this is because masking, being essentially a pixel-by-pixel operation, can be done entirely on the GPU and is therefore probably much faster, whereas clipping needs extra logic to decide whether each pixel is still inside the permitted area.
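
For comparison, the two techniques in isolation (a sketch; ctx, bounds and maskImage as in the snippet above):

    // mask image: the mask's per-pixel alpha decides what shows through
    CGContextClipToMask(ctx, bounds, maskImage.CGImage);

    // clipping path: the geometry decides which pixels may be touched
    CGContextAddEllipseInRect(ctx, bounds);
    CGContextClip(ctx);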

Quite a bit of experimentation was necessary to get it looking right, including the show and hide animations. Thankfully there is an amazing project on GitHub, UIKit-Artwork-Extractor, which helped us learn “how they did it” by letting us inspect the images that make up the whole effect. As a side note: Apple uses PNGs extensively for all sorts of UI elements, so you might want to do the same, especially if you have a designer who can provide all these individual items for you to assemble.
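
If you do, remember to ship each piece of artwork in both resolutions; since iOS 4, imageNamed: picks the @2x variant automatically on retina displays (file names are again placeholders):

    // the bundle contains loupe-lo.png and loupe-lo@2x.png
    UIImage *loImage = [UIImage imageNamed:@"loupe-lo"];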

My friend Michael Kaye has been hard at work researching all the nuts and bolts, and the result is coming along very nicely. A couple of other projects provided some inspiration, like the OmniGroup framework. If you now ask me “why did you re-invent the wheel if OmniUI already has a loupe?”, the answer is simple: it neither looks nor behaves like the original. On top of that, they never updated their solution for retina displays, which makes it impossible to use in a modern project where retina support is a must-have.

Now for the demo:

Work is ongoing to polish it, but I just had to show off the fabulous work that Michael did over the past few days. Next up is actually hooking up the touch handling for moving the cursor, extending the selection and handling the context menu.
