When I started out developing iOS apps 11 years ago, I put several apps on the App Store. Over time they became fewer and fewer, as the income from them no longer warranted updating them. The most successful among them was iWoman, which I sold in 2015. My second-most-valuable (in terms of revenue) remained my beloved SpeakerClock, the last app standing.
I had left SpeakerClock online mainly because it kept producing an average of $100 per month, even without me doing anything to it. For that reason, I didn’t want to make it free, but rather kept it at a relatively high price tag of $5, with an In-App Purchase for another $5. I figured: why kill the cow while it still produces some tasty milk?
The other side effect of these price tags was that – I believe – only people who really wanted what the app was offering would actually purchase it. My philosophy with this speaking timer was to have the biggest LED digits possible, with the functionality that supports the speaking style of TED Talks, which historically have defaulted to a maximum length of 18 minutes.
Some crashes introduced by new iOS versions caused me to do small bug-fix releases (for iOS 3 in 2010, iOS 5 in 2011, and iOS 10 in 2017). Looking back at the release notes of those versions, I had made this exact promise:
“We have totally modernised the code base so that we can bring you some exciting new features in the next major release”
I didn’t lie with this statement: a “next major” release would have been version 2.0, but I never dared to bump the version number that high. I only ever increased the third digit of the version number.
Apple did eventually force me to do a new build, when they cracked down on apps which hadn’t been updated in too long a time. And the most recent update they did themselves: when the Apple certificate had expired, they re-signed my app on their servers without any action on my part.
Enter SwiftUI
Over the last few months, I have grown very fond of SwiftUI. Being a developer on Apple platforms for more than a decade made me quite tired of having to keep writing the same MVC code countless times. And that would only get you standard functionality, nothing truly exciting. So I jumped at the chance when one of my clients asked me to implement a new iOS widget in SwiftUI, in the fall of 2020. Apple had made SwiftUI the only way to create such widgets because of SwiftUI’s ability to produce and preserve a static view hierarchy which the system can show to the user at certain points in a timeline without substantial power usage.
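To illustrate that timeline model: a widget hands the system a list of pre-rendered entries, each valid at a point in time, so nothing needs to execute when an entry is displayed. This is only a minimal sketch; the type and entry names are illustrative and not from the actual client project.

```swift
import WidgetKit
import SwiftUI

// Hypothetical entry type: one static snapshot per point in the timeline
struct ClockEntry: TimelineEntry {
    let date: Date
}

struct ClockProvider: TimelineProvider {
    func placeholder(in context: Context) -> ClockEntry {
        ClockEntry(date: Date())
    }

    func getSnapshot(in context: Context, completion: @escaping (ClockEntry) -> Void) {
        completion(ClockEntry(date: Date()))
    }

    func getTimeline(in context: Context, completion: @escaping (Timeline<ClockEntry>) -> Void) {
        // Pre-compute one entry per minute for the next hour; the system
        // renders each static view without waking the app, which is why
        // the view hierarchy has to be expressible in SwiftUI.
        let entries = (0..<60).compactMap { offset in
            Calendar.current.date(byAdding: .minute, value: offset, to: Date())
                .map(ClockEntry.init)
        }
        completion(Timeline(entries: entries, policy: .atEnd))
    }
}
```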
My client was happy with the result, and so I was tasked with the next level of SwiftUI development: implementing a watchOS app, also entirely in SwiftUI. Development was quite similar to the widget, but this time I also needed to deal with user interaction and communication with the iOS counterpart app. That all took a few months longer than the widget, but again increased my SwiftUI skills tremendously.
After having delivered the watch app, I had a little extra time available to do something for myself. I do have some other ideas for apps, but my thoughts turned to SpeakerClock. I figured that this highly custom UI would lend itself nicely to be implemented in SwiftUI.
Paths in Shapes
The most important asset in the legacy code was the drawing of the big red LED digits and how they rearrange themselves between portrait and landscape in a nice animation. So my first SwiftUI view was one that had a Path element with the SwiftUI commands adding the path elements that make up the individual bars of the LED. My first error here was using a GeometryReader to determine the dimensions of the path: the LED digits have a fixed aspect ratio, and the drawing coordinates are based on it.
struct LEDDigit: View
{
    var digit: Int? = nil

    var body: some View
    {
        GeometryReader { proxy in
            let (w, h) = proxy.unitSize

            // top horizontal line
            Path { path in
                path.move(to: CGPoint(x: 24 * w, y: 7 * h))
                path.addLine(to: CGPoint(x: 60 * w, y: 7 * h))
                path.addLine(to: CGPoint(x: 62 * w, y: 10 * h))
                path.addLine(to: CGPoint(x: 57 * w, y: 15 * h))
                path.addLine(to: CGPoint(x: 24 * w, y: 15 * h))
                path.addLine(to: CGPoint(x: 21 * w, y: 10 * h))
                path.closeSubpath()
            }
            .activeLEDEffect(when: [0, 2, 3, 5, 7, 8, 9].contains(digit))

            ...
        }
    }
}
While this produces the correct output, it causes the individual Paths to animate separately when rotating the device. I solved this problem by moving the individual paths’ code into a Shape, where I add the bars based only on whether I am drawing the active or the inactive LED elements. The path(in rect: CGRect) function hands us the required size, so we don’t need a GeometryReader any more.
struct LEDDigitShape: Shape
{
    var digit: Int? = nil
    var isActive: Bool

    func path(in rect: CGRect) -> Path
    {
        let w = rect.size.width / 73
        let h = rect.size.height / 110

        var path = Path()

        // top horizontal line
        if [0, 2, 3, 5, 7, 8, 9].contains(digit) == isActive
        {
            path.move(to: CGPoint(x: 24 * w, y: 7 * h))
            path.addLine(to: CGPoint(x: 60 * w, y: 7 * h))
            path.addLine(to: CGPoint(x: 62 * w, y: 10 * h))
            path.addLine(to: CGPoint(x: 57 * w, y: 15 * h))
            path.addLine(to: CGPoint(x: 24 * w, y: 15 * h))
            path.addLine(to: CGPoint(x: 21 * w, y: 10 * h))
            path.closeSubpath()
        }

        ...
    }
}
It is used like this:
struct LEDDigit: View
{
    var digit: Int? = nil

    var body: some View
    {
        ZStack
        {
            LEDDigitShape(digit: digit, isActive: false)
                .activeLEDEffect(isActive: false)
            LEDDigitShape(digit: digit, isActive: true)
                .activeLEDEffect(isActive: true)
        }
    }
}
The two members of the ZStack draw all the inactive LED elements behind the active ones. It still needs to be two Shapes, because one shape can only have a single drawing style. The inactive elements are simply filled with gray; the active elements are filled with red and have a red glow around them, simulating some radiance.
With this approach a digit is always drawn in its entirety which lends itself to smooth resizing.
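The activeLEDEffect modifier itself is not shown in the article, so the following is only a sketch of what it might look like under the description above: gray fill for inactive segments, red fill plus a red shadow for active ones.

```swift
import SwiftUI

// Hypothetical implementation of activeLEDEffect; the real one is not
// shown in the article. It applies one of the two drawing styles that
// forced the digit to be split into two Shapes.
extension Shape {
    @ViewBuilder
    func activeLEDEffect(isActive: Bool) -> some View {
        if isActive {
            // active: red fill with a soft red glow simulating radiance
            self.fill(Color.red)
                .shadow(color: .red, radius: 6)
        } else {
            // inactive: plain dark gray fill, no glow
            self.fill(Color(white: 0.2))
        }
    }
}
```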
Layout and Orientation Woes
The next step was to aggregate multiple LED digits and lay them out over the screen with different positions for landscape and portrait orientations, with a smooth animation when you rotate the device.
I have basically two layouts:

- Hour digits, colon, minute digits (in an HStack) – in horizontal layout, with the outer sides touching the safe area insets
- A VStack of hour digits and minute digits – in vertical layout
Sounds easy, but my attempts with HStacks and VStacks failed miserably. At the beginning of the rotation animation the digits would always get a very small frame expanding into the final one.
I can only imagine that somehow the SwiftUI layout system doesn’t remember that those are the same views. So I tried giving them static identifiers and I also tried geometry matching. But I couldn’t shake these animation artefacts. There must be some piece missing in my understanding about view identity.
In the end I came back to doing my own layout inside a GeometryReader, setting frame’s width/height and appropriate offsets (i.e. translation) for individual elements. This works very nicely and also lets me have a separate animation for the opacity of the colon.
The colon sticks to the right side of the hour digits and disappears in portrait layout. By ordering the view modifiers in a certain way, I was able to get the effect that the colon fades in with a slight delay.
var body: some View
{
    GeometryReader { proxy in
        let digitSize = self.digitSize(proxy: proxy)
        let colonSize = self.colonSize(proxy: proxy)
        let centeringOffset = self.centeringOffset(proxy: proxy)
        let isLandscape = proxy.isLandscape
        let timerSize = self.timerSize(proxy: proxy)

        Group
        {
            LEDNumber(value: model.countdown.minutes)
                .frame(width: digitSize.width * 2, height: digitSize.height)
                .animation(nil)

            LEDColon()
                .frame(width: colonSize.width, height: colonSize.height)
                .offset(x: digitSize.width * 2, y: 0)
                .animation(nil)
                .opacity(isLandscape ? 1 : 0)
                .animation(isPadOrPhone ? (isLandscape ? .easeInOut.delay(0.2)
                                                       : .easeInOut) : nil)

            LEDNumber(value: model.countdown.seconds)
                .frame(width: digitSize.width * 2, height: digitSize.height)
                .offset(x: isLandscape ? digitSize.width * 2 + colonSize.width : 0,
                        y: isLandscape ? 0 : digitSize.height)
                .animation(nil)
        }
        .offset(x: centeringOffset.width,
                y: centeringOffset.height)
    }
}
You can see that I am specifically disabling animation with .animation(nil) for the most part, because I found that the animation was otherwise always out of sync with the rotation resizing animation. The LED colon, on the other hand, has its own animation with an additional delay of 0.2 seconds.
The second reason why I explicitly disabled animations is that on the Mac version those animations would lag behind the resizing of the app’s window. This resizing also switches between both layouts depending on how you drag the window corner, somewhat like the “responsive design” we know from HTML web pages. More on Mac specifics further down below.
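The layout helpers used in the body above (digitSize, isLandscape, and friends) are not shown in the article. The following is a guess at their shape, only to make the custom-layout idea concrete: everything is derived from the GeometryProxy, so a rotation or window resize simply re-runs the layout math.

```swift
import SwiftUI

// Hypothetical helpers; the actual implementations are not in the article.
extension GeometryProxy {
    // landscape simply means wider than tall
    var isLandscape: Bool { size.width > size.height }
}

// aspect ratio matching the 73 x 110 drawing space of LEDDigitShape
private let digitAspect: CGFloat = 73.0 / 110.0

func digitSize(proxy: GeometryProxy) -> CGSize {
    // assumption: four digits side by side in landscape,
    // two rows of two digits in portrait
    let columns: CGFloat = proxy.isLandscape ? 4 : 2
    let width = proxy.size.width / (columns + 1) // leave room for the colon
    return CGSize(width: width, height: width / digitAspect)
}
```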
Multi-Modal Buttons
Another challenge that had me try multiple approaches concerned the preset buttons (top left) and traffic light buttons (center bottom). These buttons have a different function for a single tap (select) versus a long press (set).
The main problem is that you cannot simply use .onLongPressGesture, because this prevents the normal taps from being handled. One approach is to add a .simultaneousGesture for the long press, but then the tap action is executed right (i.e. “simultaneously”) after the long press action if you lift the finger over the button. The other approach is to use a .highPriorityGesture, which again disables the built-in tap.
I ended up with the following approach, which uses the gesture mask to selectively disable the long press gesture if there is no long press action, and to disable the tap gesture if a long press was detected.
struct LEDButton<Content: View>: View
{
    var action: ()->()
    var longPressAction: (()->())?

    @ViewBuilder var content: ()->Content

    @State fileprivate var didLongPress = false

    var body: some View
    {
        Button(action: {}, label: content)   // must have empty action
            .contentShape(Circle())
            .buttonStyle(PlainButtonStyle()) // needed for Mac
            .simultaneousGesture(LongPressGesture().onEnded({ _ in
                didLongPress = true
                longPressAction?()
                didLongPress = false
            }), including: longPressAction != nil ? .all : .subviews)
            .highPriorityGesture(TapGesture().onEnded({ _ in
                action()
            }), including: didLongPress ? .subviews : .all)
    }
}
This approach uses a custom TapGesture in tandem with the LongPressGesture. A @State variable keeps track of the long press. We do need to reset didLongPress to false, or else all subsequent taps would continue to be ignored. I found that I don’t need a dispatch async for setting it back to false.
I believe the reason is that the first setting of the variable causes the body to be updated, and thus the including: parameter to disable the tap gesture while the long press is in progress. Thus the tap doesn’t fire upon releasing the long press. Good to know: .all enables a gesture and .subviews disables it.
Contrary to other approaches I have seen on the internet, this approach preserves the standard highlighting behavior of Button: while you press a custom button like this, it becomes slightly transparent.
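Using the button then looks something like this. The model methods and the preset index are illustrative assumptions, not code from the app:

```swift
import SwiftUI

// Hypothetical usage of LEDButton for one of the preset buttons:
// a tap selects the preset, a long press stores the current time in it.
struct PresetButtonExample: View {
    let index: Int
    @ObservedObject var model: TimerModel // hypothetical model type

    var body: some View {
        LEDButton(action: { model.selectPreset(index) },
                  longPressAction: { model.storePreset(index) })
        {
            Text("\(index + 1)")
                .font(.title)
        }
    }
}
```

Passing nil for longPressAction would disable the long press gesture entirely via the gesture mask, leaving a plain tappable button.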
A Mac Version – For Free?
The huge promise of SwiftUI is that you get a Mac version of your app for little extra work, effectively “for free”. So I decided to put this to the test and also produce a macOS version. I set the targeted devices to iPhone, iPad, and Mac, and chose “Optimize Interface for Mac” because that sounded like it would give the better result.
This optimized mode caused some issues for my custom buttons, because they got replaced with empty rounded rects, destroying my custom look. You can prevent this modification by adding .buttonStyle(PlainButtonStyle()).
Apart from this, my code really did run quite nicely as a native Mac app. Behind the scenes, though, it is all Mac Catalyst. As I understand it, that means UIKit is still at the helm, just a macOS version of it.
I left the code signing settings alone as I wanted to have users be able to install the Mac and iOS versions with the same purchase. This “universal purchase” is enabled by having the same bundle identifier for both versions.
Some very minor tweaks were required to adjust some minimum and maximum button sizes. Then there was a bug on macOS that stumped me for a while: only on the Mac, tapping in certain spots of my app would cause gestures to stop working. When I then triggered a new layout by resizing the window, everything returned to normal.
My workaround was to attach the pan gesture (for setting the timer) only to the LED digits. This way there is no interference, and all buttons continue to work normally. My guess is that the system gets confused by too many conflicting gestures layered on top of each other.
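Scoping the gesture might look like the following sketch. The adjustTimer method and the points-per-second factor are assumptions for illustration; the article only says the drag gesture was moved onto the digits:

```swift
import SwiftUI

// Hypothetical: attach the drag ("pan") gesture only to the LED digits,
// so it cannot swallow events meant for the buttons elsewhere on screen.
struct TimerDigits: View {
    @ObservedObject var model: TimerModel // hypothetical model type

    var body: some View {
        LEDNumber(value: model.countdown.minutes)
            .gesture(
                DragGesture()
                    .onChanged { value in
                        // assumption: 10 points of horizontal drag = 1 second
                        model.adjustTimer(by: value.translation.width / 10)
                    }
            )
    }
}
```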
A side effect of the Mac version is that you start to attach keyboard shortcuts to buttons. This was also a reason why I wanted to get Button to work with tap and long press, as opposed to making a custom view that is not a button.
let title = "\(index + 1)"

PresetButton()
    .keyboardShortcut(KeyEquivalent(title.first!), modifiers: [.command])
This way you can also trigger the preset buttons with COMMAND plus a number. And not just in the Mac app: it works on iPads with an attached keyboard as well.
That got me thinking that it might be great to allow the space bar to stop/start the timer, like we are used to from video players. For that purpose I have an empty, completely black button behind the LED digits:
Button(action: { model.isTimerActive.toggle() },
       label: {
           Rectangle()
               .foregroundColor(.black)
               .frame(width: timerSize.width, height: timerSize.height)
               .onTapGesture(count: 2) { model.restoreGreenTime() }
       })
.keyboardShortcut(.space, modifiers: [])
.buttonStyle(PlainButtonStyle())
This button lets me add a keyboard shortcut for space that acts the same as a tap. Curiously, having a double-tap gesture attached to the Rectangle() poses no problem.
I submitted the Mac build right after the one for iOS but initially got a shocking rejection:
The user interface of your app is not consistent with the macOS Human Interface Guidelines. Specifically:
We found that the app contains iOS touch control instructions such as tap and swipe.
The reason for that was that I had put back the help screen with a text I had previously written with iOS in mind. I needed to replace mentions of swiping with dragging, and instead of tapping you are clicking. I have hard-coded the text and formatting for now, and with an #if I can switch the text between a version for Mac and one for iOS.
Group
{
    Text("Setting the Timer")
        .font(.headline)
        .padding(.bottom, 5)

    #if targetEnvironment(macCatalyst)
    Text("To adjust the timer, click on the LED digits and drag horizontally.")
        .font(.body)
        .padding(.bottom, 5)
    #else
    Text("To adjust the timer swipe left and right.")
        .font(.body)
        .padding(.bottom, 5)
    #endif
}
Once I had made those changes the Mac app was approved very quickly.
Conclusion
I’ve experienced first-hand how an app can be rewritten in SwiftUI, and the great pleasure that can be had from deleting all your crufty Objective-C code in the process.
SwiftUI is my new love, and this way my app is no longer a “child from another mother”. This restores some enthusiasm in me to finally add some long-promised “exciting new features”. For starters, I am thinking of a watchOS companion app which shows the timer and allows you to remote-control it. Another idea is to store my presets on iCloud so that they are the same on all my devices.
I would love to hear from you what you think about the process of re-implementing parts of apps or even whole apps in SwiftUI.
Also published on Medium.