Assorted Observations

While developing my first own Mac app I've been adding to this blog post whenever something seemed weird to me. Or simply different.

You can definitely see in many instances how some modus operandi on iOS has its roots on the Mac, but Apple had to change things around a bit to accommodate the different UI paradigms on the mobile platform.

CG versus NS

You will find that AppKit uses structs prefixed with NS for everything, even though their contents are identical to the CG versions. In many cases you can assign a CGRect to an NSRect, though that would disturb purists.

There is a plethora of macros and inline functions to move between these worlds. You had better use them, so that somebody reading your code does not get the impression you don't know what you are doing.
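For example, Foundation provides conversion functions such as NSRectToCGRect and NSRectFromCGRect, with equivalents for points and sizes. A minimal sketch:

NSRect nsRect = NSMakeRect(10.0, 20.0, 100.0, 50.0);

// round-trip between the NS and CG worlds
CGRect cgRect = NSRectToCGRect(nsRect);
NSRect backAgain = NSRectFromCGRect(cgRect);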

Upside Down

NSViews have the origin of their coordinate system in the lower left corner of the view. In UIKit we are used to the origin being in the upper left corner. Fortunately there's a simple trick to flip the coordinate system.

In your NSView subclass add this:

- (BOOL)isFlipped
{
	return YES;
}

Henceforth all added subviews will be positioned just like on iOS. Since the default implementation of this method returns NO, you can only make this modification to your own subclasses.

Basic Drawing

Like on iOS you have two ways of drawing stuff. You can use the higher-level Objective-C API of AppKit/UIKit or you can go down to the Quartz level. Internally all drawing gets mapped to Quartz anyway, so if you already have such lower-level code you can easily transfer it, though you might have to be careful about the coordinate system flipping.

To stroke a simple rectangle around the border of an NSView you can do this in AppKit:

- (void)drawRect:(NSRect)dirtyRect
{
	[[NSColor redColor] set];
	[NSBezierPath strokeRect:self.bounds];
}

Or in Quartz:

- (void)drawRect:(NSRect)dirtyRect
{
	CGContextRef context = [[NSGraphicsContext currentContext] graphicsPort];
 
	CGContextSetRGBStrokeColor(context, 1.0, 0, 0, 1.0);
	CGContextStrokeRect(context, self.bounds);
}

The main difference is that you have to obtain the CGContext reference from the current NSGraphicsContext, as opposed to calling UIGraphicsGetCurrentContext. Without isFlipped returning YES, both AppKit and Quartz have their origin in the lower left corner. On iOS, Quartz draws from the lower left corner, UIKit from the upper left corner. You might remember having to flip the coordinate system for Quartz drawing on iOS by means of a transform.
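That flip is typically done like this; a sketch of the usual iOS idiom inside drawRect::

- (void)drawRect:(CGRect)rect
{
	CGContextRef context = UIGraphicsGetCurrentContext();

	// move the origin to the bottom and mirror the y axis so that
	// Quartz coordinates line up with UIKit's upper left origin
	CGContextTranslateCTM(context, 0, self.bounds.size.height);
	CGContextScaleCTM(context, 1.0, -1.0);

	// subsequent Quartz drawing now uses upper-left coordinates
}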

Drop a Shadow

To drop a shadow behind an NSImage you can use the NSShadow class. Similar to NSColor, you can also simply “set” the shadow instance on the graphics context.

- (void)drawRect:(NSRect)dirtyRect
{
	NSShadow *shadow = [[NSShadow alloc] init];
	[shadow setShadowColor: [NSColor colorWithDeviceWhite: 0.0f alpha: 1.0f]];
	[shadow setShadowBlurRadius: 5.0f];
	[shadow setShadowOffset: NSMakeSize(0, -1)];
	[shadow set];
	[_pageImage drawInRect:[self _rectForPage] fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
}

This creates a shadow, and subsequent drawing operations will show it. Be careful however, because on the Mac graphics contexts sometimes get reused.

Clean Up After Yourself

NSPageController creates a snapshot of the view that is going out and the one that is appearing. One thing that was not obvious to me was that it reuses the graphics context used to draw its content view.

The page view would be drawn once, and as soon as you started to move your finger it would create another snapshot reusing the graphics context we had previously set the shadow on. You can see that the gray area in the dashed red box is blurry, and so is the page corner where I am drawing a red line for debugging purposes.

I was puzzled at first because on iOS you don't ever see a graphics context being reused like this. The solution – of course – is to save the graphics state before making changes to it and restore that state when you're done drawing.

- (void)drawRect:(NSRect)dirtyRect
{
	NSGraphicsContext* context = [NSGraphicsContext currentContext];
	[context saveGraphicsState];
 
	[[self shadow] set];
	[_pageImage drawInRect:[self _rectForPage] fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
 
	[context restoreGraphicsState];
}

That fixes the problem nicely.

So we can conclude that it is probably generally good citizenship to clean up changes to the graphics context.

Layer or No Layer?

The red dashed boxes you see in the screenshots are subviews of the large page view. With half an ear I had heard that quite recently OS X gained the ability to also back NSViews with CoreAnimation layers. On iOS this has always been the only way.

You can activate this layer-backing mode by setting the wantsLayer property to YES. I figured that layers must be better, so I turned it on for the subviews.
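Turning it on is a one-liner (boxView standing in for any of the box subviews):

// opt the view into layer-backing; the setting propagates to subviews
[boxView setWantsLayer:YES];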

Now I was seeing another strange effect together with the NSPageController. The snapshot for the pages would only include the boxes that were present on screen when you started the swiping motion on the trackpad.

When I disabled the layer-backing, the snapshots would suddenly include the subviews properly again. If you know why this is, and maybe can explain the advantages of using layer-backing, then please speak up!

Difference in Animation

One explanation for many differences is the simple fact that all views on iOS only ever WERE layer-backed, whereas this has only been an option on OS X since 10.7. You can see many places where stock controls show weird artifacts when used in a layer hierarchy. Any view that you make layer-backed (via setWantsLayer:YES) will automatically pass on the setting to all of its subviews.

Now if you are using some ancient UI controls down the layer hierarchy, like for example NSTextField, there are situations where you see that they are unable to properly draw themselves.

Animations on iOS are usually done by simply setting a property value. This triggers an implicit animation with a duration of 0.25 seconds. If you are resizing a UIImageView like this then CoreAnimation will take care of the animation frames between start and finish. All the while you already see the final value on the property.
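In practice you usually wrap the property change in an animation block; a sketch, with imageView and newFrame as stand-ins:

[UIView animateWithDuration:0.25 animations:^{
	// the model value is set immediately; Core Animation
	// interpolates the in-between frames on its own
	imageView.frame = newFrame;
}];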

On the Mac, animations existed since long before there were CALayers – CA as in Core Animation. There a frame change is actually done by asking a view for its animator proxy and setting the property on that. This will then take care of setting a different frame for each animation step and having the view redraw itself.

- (void)layoutSubviewsAnimated:(BOOL)animated
{
	if (animated)
	{
		// temporarily set the document view frame to the entire container, to avoid flicker at the bottom section
		[[self documentView] setFrame:self.bounds];
 
		[NSAnimationContext beginGrouping];
		[[NSAnimationContext currentContext] setDuration:kDMPaletteContainerAnimationDuration];
		[[NSAnimationContext currentContext] setCompletionHandler:^{
		}];
	}
 
    NSSortDescriptor *sortDescriptor = [NSSortDescriptor sortDescriptorWithKey:@"index" ascending:YES];
    contentSectionViews = [contentSectionViews sortedArrayUsingDescriptors:[NSArray arrayWithObject:sortDescriptor]];
 
 
    [contentSectionViews enumerateObjectsUsingBlock:^(DMPaletteSectionView* paletteSection, NSUInteger idx, BOOL *stop) {
		 if (animated)
		 {
			 [[paletteSection animator] setFrame:[self frameForSectionAtIndex:idx]];
		 }
		 else
		 {
			 paletteSection.frame = [self frameForSectionAtIndex:idx];
		 }
    }];
 
	if (animated)
	{
		[NSAnimationContext endGrouping];
	}
}

It makes me smile to see the possibility of adding a completion handler to the animation group set up here. A sign of new tech infusing the tried and true approach.

If you appreciate this difference in how animations occur on iOS versus traditional OS X, then you begin to understand why Apple could never have pulled off the responsiveness of iOS with animators. Only Core Animation and layer animations allow for doing all sorts of tricks on the GPU to scale the contents of layers/views.

First Responder, WTF?

On iOS you might have made a view first responder a few times before; usually you would do that with a UITextView to have the keyboard show. There we are also resigned to the fact that there is no public method to get the current first responder, but we are content with that, since becomeFirstResponder automatically resigns the current first responder from its status before the new view takes over.

On the Mac it is sufficient for the user to click on any view and it will be made first responder, provided that it responds to acceptsFirstResponder with YES. If not, the click will travel up the responder chain and the next view responding with YES will be it. Since you can have multiple windows visible on the Mac at the same time, each window has its own first responder, which you can inquire from the window.
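In an NSView subclass that looks like this:

- (BOOL)acceptsFirstResponder
{
	return YES;
}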

Setting a first responder in code is also slightly different than on iOS. Here the NSWindow instance provides a method to set the first responder. According to the docs this sends resignFirstResponder to the current one and then – if that was successful – asks the new one to becomeFirstResponder.

[[self mainDocumentWindowController].window makeFirstResponder:pageView];

What I don’t yet understand is why in my experiments becomeFirstResponder often did work, but not always. For example, an NSPageController seems to not relinquish first responder status this way. In my app I had to use makeFirstResponder instead. The only answer I got on Twitter in this regard was along the lines of “NSPageController is a bitch”.

Editing Command Validation

If a UIView can deal with the usual editing commands, like copy:, paste: et al., then you can implement canPerformAction:withSender:, which, depending on the selector passed in the action parameter, informs the system what actions are available. This is used by the system to only show menu items in UIMenuController that somebody can take care of.
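A sketch of what that looks like on iOS, offering only copy: here:

- (BOOL)canPerformAction:(SEL)action withSender:(id)sender
{
	// only show the copy: menu item for this view
	if (action == @selector(copy:))
	{
		return YES;
	}

	return NO;
}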

On the Mac the story is a bit more complicated. You’ll not find a canPerformAction method there. Instead you implement validateUserInterfaceItem:, which passes a generic object that tells you the action selector and an element tag.

- (BOOL)validateUserInterfaceItem:(id <NSValidatedUserInterfaceItem>)anItem
{
	SEL action = [anItem action];
	NSUInteger selectedZoneCount = [[self selectedHotZoneViews] count];
 
	if (action == @selector(selectAll:))
	{
		if ([_hotZoneViews count] > selectedZoneCount)
		{
			return YES;
		}
	}
 
	return NO;
}

You see: same mechanism, but on the Mac this allows for many different kinds of NSValidatedUserInterfaceItems, be they menu items or something else. Apple probably decided on the different name to make it less confusing for the unsuspecting developer.

Conclusion

You can see the parallels between iOS and Mac everywhere you look. There are many classes that are identical on both OSes, but wherever there are different user interface paradigms you usually have an NS and UI class duality going on.

iOS is full of optimizations where you can see Apple cleaning up cruft that had gathered in AppKit classes and only putting the leanest and meanest parts into iOS. Thankfully, with the success of this high-octane approach, the Mac platform apparently benefits from many “back to the Mac” tweaks under the hood.

