Projects – Cocoanetics
https://www.cocoanetics.com – Our DNA is written in Swift

Introducing the POEditor.com API and Tool
https://www.cocoanetics.com/2016/11/introducing-the-poeditor-com-api-and-tool/
Tue, 08 Nov 2016

If you want to be successful in international markets you need to localize your app in as many languages as you can. For iWoman we use POEditor.com. As the number of localizations grows it becomes increasingly time-consuming to update them all in your Xcode project, let alone deal with plurals, which are not supported in the XLIFF format.

Today I am showing how I got the update process down to a matter of seconds. I achieved this by writing a POEditor.com API wrapper and building a command line tool, which I call POET (for POEditor Tool), that greatly simplifies the update process.

By writing the API wrapper in Swift 3 I was able to leverage some great new features of the language, in particular generic type aliases. I also used the Result pattern, which I had researched for a previous talk.

The POEditorAPI project is on GitHub and I invite you to check it out and by all means help extend it. So far I have implemented four API functions, but it is easy to add additional ones, mostly by copying, pasting, and adjusting the parameters and expected result types.

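The call pattern behind those four functions can be sketched as follows. This is an illustrative sketch of the Result-plus-generic-typealias approach, not the actual POEditorAPI source; the names `APIResult`, `APICompletion` and `handleResponse` are mine.

```swift
import Foundation

// A minimal Result type of the kind commonly hand-rolled
// before Swift 5 shipped one in the standard library
enum APIResult<T> {
    case success(T)
    case failure(Error)
}

// Swift 3 generic type aliases: one alias covers every completion handler
typealias APICompletion<T> = (APIResult<T>) -> Void

// Hypothetical helper illustrating the shared call pattern. Each API
// function differs only in its form fields and in the expected result type T.
func handleResponse<T>(_ object: Any?, error: Error?,
                       completion: APICompletion<T>) {
    if let error = error {
        completion(.failure(error))
    } else if let typed = object as? T {
        completion(.success(typed))
    } else {
        completion(.failure(CocoaError(.propertyListReadCorrupt)))
    }
}
```

Because the response handling is generic, adding a fifth API function is mostly a matter of choosing a different `T`.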
I am giving a tour of the project and showing it in action in this video:

I am particularly proud of discovering an elegant way of URL-encoding the form fields for the body of the HTTP POST: the URLComponents class does that for us.

import Foundation

extension URLRequest
{
   static func formPost(url: URL, fields: [String: Any]) -> URLRequest
   {
      var request = URLRequest(url: url, timeoutInterval: 10.0)
      request.httpMethod = "POST"
      request.setValue("application/x-www-form-urlencoded",
                       forHTTPHeaderField: "Content-Type")

      // URL-encode form fields
      var components = URLComponents()

      components.queryItems = fields.map { (key, value) -> URLQueryItem in
         return URLQueryItem(name: key, value: "\(value)")
      }

      // create form post data and length
      let data = components.url!.query!.data(using: .utf8)!
      request.httpBody = data
      request.setValue("\(data.count)", forHTTPHeaderField: "Content-Length")

      return request
   }
}

This extension on URLRequest is the core of all interaction with the POEditor API. Since all responses should be JSON, I have a second extension that also takes care of decoding it.

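The decoding extension is along these lines. This is a sketch of the idea, not the exact shipped code; the method name `jsonDictionary` is my own.

```swift
import Foundation

extension Data {
    // Decode this data as a JSON dictionary. A sketch of what the decoding
    // extension does; the actual POEditorAPI code differs in details.
    func jsonDictionary() throws -> [String: Any] {
        let object = try JSONSerialization.jsonObject(with: self, options: [])
        guard let dictionary = object as? [String: Any] else {
            throw CocoaError(.propertyListReadCorrupt)
        }
        return dictionary
    }
}
```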
Command Line Input

POET requires user input on the command line for specifying the API token, selecting a project and specifying the translation export threshold. Turns out, this is easy in Swift 3 as well:

// get token from user if needed
if settings.token == nil
{
   // no api token
   print("Enter POEditor API Token> ", terminator: "")
   settings.token = readLine(strippingNewline: true)
}

Note the terminator parameter on the print function. It defaults to \n, but if you pass an empty string the cursor remains on the same line, awaiting user input.

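The numeric inputs, like the export threshold, work the same way. Keeping the parsing in a small helper makes it easy to validate; this is an illustrative sketch, and `parseThreshold` is a hypothetical name, not POET's actual code.

```swift
// Hypothetical helper – POET's actual implementation may differ.
// Accepts only whole percentages between 0 and 100.
func parseThreshold(_ line: String?) -> Int? {
    guard let line = line, let value = Int(line),
          (0...100).contains(value) else {
        return nil
    }
    return value
}

// Prompt in a loop until the input is valid:
// while settings.threshold == nil {
//     print("Enter export threshold in percent> ", terminator: "")
//     settings.threshold = parseThreshold(readLine(strippingNewline: true))
// }
```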
You cannot embed dynamic frameworks in command line binaries, so I simply added the source of POEditorAPI to the command line utility target. On platforms that do support dynamic frameworks you can embed the iOS or macOS framework instead. It is also available on CocoaPods.

Conclusion

Other well-known tools use web development languages like Ruby or JavaScript. But I live and breathe Swift, and so I wanted to have my own API and command line utility written in my favorite language.

I believe you will find it instructive how I structured the API calls, using the Result pattern, generics and some other modern approaches. I would love to hear your comments, and whether you want to use the project in an app of yours.

Above all, I hope that you – if you are also a POEditor.com user – can automate your update process with the help of POET. What could be more poetical than time savings?

AutoIngest for Mac
https://www.cocoanetics.com/2013/04/autoingest-for-mac/
Fri, 19 Apr 2013

In this week's Friday Labs session I created a Mac status bar app that I call AutoIngest for Mac. It uses my Objective-C re-implementation of Apple's Java-based AutoIngest.java downloader as a static library.


This first version is as functional as I was able to get it. Now I need your help. It is on GitHub, awaiting your input.

My goal for this is to have a convenient status bar utility that automatically downloads all available reports via “The Missing API”. You can select a folder to download the reports into, for example a folder on your Dropbox.


There are a couple more report types which I cannot test because I don't have any Opt-In or Newsstand apps. Technically they would be simple to add, but I don't know yet whether the parameters passed to the downloader are correct.

I spent the most time on the Account tab because I wanted to save the password in the keychain right off the bat.


A cool side-effect of my efforts is that on the Mac you can inspect the entry with Keychain Access.

You can see that this is all quite crude for starters, but if you enter the correct settings the download already works nicely in the background. Because I don't know a better approach, I am changing the status bar menu title during the download; an animated sync symbol (with an Apple logo in the center?) would be awesome.

I put together a quick laundry list of items I'd like your help with:

  • provide a status bar icon
  • animate this icon while downloading is occurring
  • change the “Sync Now” menu option to “Stop Sync” during download
  • provide an app icon for the app
  • add downloading on a timer
  • add counting of how many items were downloaded per type during a sync session
  • report these via the didFinishNotification and output them in the user notification
  • add UI to configure multiple accounts + vendor IDs
  • add support for Opt-In and Newsstand reports (I have no reports there so I cannot test them)
  • add an option to organize the report folder: Vendor_ID/ReportType/ReportSubType/ReportDateType
  • add Sparkle and automatic updating with Developer ID

Additional features really depend on your use case. I specifically have one in mind where you load those synced reports into an iOS app from Dropbox (via my DTReportCore framework), which is why I am leaving the reports zipped. I can unzip them easily with my DTZipArchive class. So if somebody needs an option to save the reports unzipped, I can quickly add that myself.

If you use Dropbox as your storage location then you can even install AutoIngest on multiple Macs. It automatically skips reports you already have, and thus Dropbox becomes your syncing cloud for reports.

My greater goal is for AutoIngest for Mac to become the de facto standard for conveniently downloading sales reports on the Mac, so that you no longer have to rely on third-party tools and entrust your credentials to people outside your company.

MyAppSales Open Letter
https://www.cocoanetics.com/2013/04/myappsales-open-letter/
Sat, 13 Apr 2013

This article is about a suggestion I want to make to you: I want to create a well-documented, brand-new, platform-independent ReportCore framework, and you build apps on top of it to use and sell. Read on for how I plan to make this App Store legal.

I sent out the following email to people on the MyAppSales Google group.

Hi,

I have begun working on a full on rewrite of MyAppSales.

You have probably moved on to some professional reporting service in the meantime, picked up AppViz, or maybe went with the other open source mobile app out there.

The problem I have with these services is that however professional they are, they don’t answer the really important questions that I have:

  • What’s the price elasticity of my apps? How did it change over time?
  • Or asked differently: at what price tier do I make the most profit?
  • For apps where I have a revenue sharing model: how much have we earned so far? What is everyone’s share of the revenue?
  • For a specific app: If I make an app free, will the IAPs make me a larger total profit?

The “other solutions” show pretty charts and maybe convert the local currencies into a single one. But I know from experience that you tend to lose interest in the details if the overall number of “sales for yesterday” never really changes.

Since last month (March 6th, to be exact) Apple offers not only Daily and Weekly sales data via its Autoingest.java interface, it also added Monthly and Yearly reports. On my old account I have sales reports dating back to 2009, when I got started with my own apps.

I rewrote this Java app in Objective-C a while ago, and yesterday I updated it with the new reports: Report Downloading Update

Up until now we had to scrape the Apple website, which is a fragile process, especially with all the AJAX going on. Now the time is finally ripe to ditch this process and use DTITCReportDownloader.

Last but not least, I have learned many things on my way as an iOS and Mac developer over the past 4 years. Much of the code in the current open source MyAppSales version makes me feel sick.

A New Hope

These reasons prompted me to start working on a brand new reporting and analysis app which – because I own this name – I am still calling MyAppSales.

The philosophy, though, will be dramatically different from the predecessor: I would like to give it another go with Apple.

Apple originally rejected MyAppSales because of the scraping of the ITC site. Instead I want to position MyAppSales 2 as an analysis tool with no downloading capability. You would have some mechanism that downloads all available reports via DTITCReportDownloader, either as a script running on your server or as a Mac status bar app. The downloaded reports would be put in a central place that you own, for example Dropbox, WebDAV or an FTP server.

These reports would then be synced via “the cloud”, and each client would have its own database into which it imports them. By making MyAppSales 2 an official app, we could send it native push notifications whenever daily reports are available. So the downloading experience would not differ much from what it is now, with the added benefit that it would not have to happen manually whenever you open the app.

Instead there would be a ReportCore with a CoreData database of all sales by report, which would keep track of which files have already been imported. This is what the analysis would work with, but you keep the txt.gz report files as long as you like. They are the ultimate backup of sales data because they are completely independent of whatever app you choose to use in the future. If you want to move elsewhere you just import this folder of reports and you're golden.

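The import bookkeeping could be sketched as simply as this. The plan above calls for CoreData; a plain set of processed file names shows the idea, and the type name `ReportImporter` and the report file names are hypothetical.

```swift
import Foundation

// Hypothetical sketch of ReportCore's import bookkeeping. The real plan
// is a CoreData store; a set of processed file names illustrates the idea.
struct ReportImporter {
    private(set) var importedFileNames = Set<String>()

    // Returns the file names that still need importing, preserving order,
    // and records them as imported
    mutating func filesToImport(from available: [String]) -> [String] {
        let pending = available.filter { !importedFileNames.contains($0) }
        importedFileNames.formUnion(pending)
        return pending
    }
}
```

Running the same folder through the importer twice would import each report only once, which is what makes the shared Dropbox folder safe for multiple clients.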
Why Am I Mailing You?

I need your help. This reaches the 301 members currently signed up on the MyAppSales mailing list.

At this stage I have the beginnings of some unit tests and the report parser. The next step is to develop the platform-independent ReportCore, and based on this I see one or more iOS and/or Mac apps. We also need a downloader, and for that we need to implement our own Mac Dropbox framework.

I am uncertain whether I would prefer to run this new project on my own GitLab server or have everything on GitHub. I would probably want to leverage some of my commercial components, but I don't like having to add their binary builds to GitHub.

I think I want to have you as a partner. I see myself as providing ReportCore, and partners would implement analysis apps based on and compatible with this component. An app similar to the original MyAppSales would be a definite possibility. This is something that ReportCore should enable with very little code, leveraging CoreData, NSFetchedResultsController and GCD, and taking care of the syncing automatically for you.

I imagine that you would have a Dropbox linking dialog in your own app, and once you have linked it, ReportCore would start importing the reports found there, informing you of its progress. Then it is up to you to give users the kinds of analysis or revenue-splitting tools that you think are useful.

I also know from experience that I am not really good at UI design. But I think I have become really good at API design and building components. In other words, I am good at packaging something developers need to enable functionality in their own apps.

So my plan is this: I (and a small team) would work on ReportCore and provide the syncing mechanism via Dropbox (and other cloud-based file syncing systems) and a notification mechanism (e.g. APN). And you would build apps on top of that.

I don’t believe that there has to be only a single choice for a mobile report viewer. By encapsulating the difficult part and sharing it with many app developers I hope to enable a new ecosystem of apps that are all compatible with each other.

For example, there could be a simple app where you choose an app, a report type and a date range, and it would graph the two price elasticity charts that Michael Jurewitz had in his Cingleton presentation. That's it! A highly specialized tool for a single purpose. No need to bother with setup, downloading etc. The user only links the Dropbox folder where the reports can be found and setup is done.

Another app would help you calculate revenue shares and mail out statements to your app partners. Same process: a quick setup via a view controller, boom, the app would know the share percentage and build statements from any report type you like.

Another app would maybe not use sales reports at all; instead it would focus on providing statistics on the Newsstand or Opt-In reports that are also available to people who have certain kinds of apps on the store. Same process, but instead of sales reports you would ask ReportCore for the other kinds of reports.

So what do you think?

I know that developers are a niche market and you probably won't get rich marketing a multi-faceted sales report app that everybody expects to get for $1. But provided that you don't have to worry about syncing the sales data, don't you think certain scenarios would be well worth $10 or more for an extremely focused niche analysis app? Like the price elasticity app I described above? Certainly developers would be willing to pay much more for apps that help them discover the pricing sweet spot.

Of course I need some form of sponsoring, one-off licensing or share in profits to finance the ongoing development of ReportCore. But we can tailor this totally to your capabilities and interests.

I am looking to see whether the above sparks any sort of interest or response from you. If yes, then we should talk about at what level you would want to collaborate:

  • Co-development of ReportCore
  • Working with me on adapting parts of the Dropbox API for an iOS/Mac framework to be used on both
  • Providing generic reporting apps on iOS or Mac
  • Providing focused niche apps on iOS or Mac
  • Or just being interested in seeing what modern, well documented code looks like

Whatever your interest I am positive that we can find some common ground.

kind regards

Oliver Drobnik

Cocoanetics.com

Report Downloading Update
https://www.cocoanetics.com/2013/04/report-downloading-update/
Fri, 12 Apr 2013

Fascinated by the Cingleton talk given by Michael Jurewitz (full video) on pricing elasticity, I decided on a new project for my Lab Friday. That is the 20% time where we explore something other than the nitty-gritty we work on the rest of the time.

It was in May four years ago that Apple had rejected my original MyAppSales app for scraping Apple's site to get sales data. Shortly thereafter I started a petition for Apple to give us an official API for downloading sales data. The bug report rdar://6807195 has been in status “Open” ever since April 20th, 2009; it's the oldest open Radar on my list.

Apple finally caved (a little) when they released the Autoingest.java tool for downloading some forms of reports in early 2012, which I promptly decompiled and rewrote in a sensible language: Objective-C.

Ever since my first reverse implementation of Autoingest in ObjC, this project has been available open source on GitHub, offering two targets: a Mac command line utility that does exactly the same things as its Java brother, and a static library that you can add to your own custom solution.

The usefulness of Autoingest was quite limited initially because it only offered Daily and Weekly sales reports and some kind of “Opt-In” report – no idea what that is for. Until March 6th, 2013.

Getting back to my idea for a Friday Lab project … I want to analyze my sales data for how elastic demand is at the various prices I have been charging for my apps. To get the sales data I fired up my Autoingest.objc and, much to my surprise, found it still working without problems.

For some reason I googled for the tool’s name and found the iTunes Connect Sales and Trends Guide: App Store which was updated to version 8 last month. The revision history mentions (emphasis mine)

Updates for Sales and Trends redesign. Monthly summary reports for the previous 12 months are now available. Yearly summary reports for all previous years can be downloaded. Graph and sales data can be filtered by free or paid content.

Version 7 of this document had a publication date of November 6th, 2012.

In addition to these items I also found mention of Newsstand reports, which are available in multiple detail levels as well.

Thus my Friday project mutated into an updating session for DTITCReportDownloader. While at it, I revamped many things and added support for the new parameters.

If you build the command line utility you can download the most recent available daily sales report like this. Omit the rightmost date parameter for the latest report, or specify a yyyyMMdd date for a specific one.

./Autoingest you@domain.com secretpassword 81234567 Sales Daily Summary

I also added an awesome looping capability that the Java version doesn’t have. You can have it download all available reports of one type, for example all Yearly reports:

./Autoingest you@domain.com secretpassword 81234567 Sales Yearly Summary ALL

Note that Sales reports only offer the Summary resolution level. Username and password are your Apple ID credentials, and the number beginning with 8 is your vendor ID, which you can find on the portal.

Autoingest puts the downloaded reports in the current folder; if a report file with a given name already exists, it is skipped.

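Combined with the skip-if-present behavior and the ALL looping capability, a small shell loop can mirror every summary report type in one go. This is a sketch; the credentials and the vendor ID below are placeholders, exactly as in the examples above.

```shell
#!/bin/sh
# Mirror all summary sales reports into the current folder.
# Replace the credentials and vendor ID with your own values.
USER="you@domain.com"
PASS="secretpassword"
VENDOR="81234567"

for DATETYPE in Daily Weekly Monthly Yearly; do
    ./Autoingest "$USER" "$PASS" "$VENDOR" Sales "$DATETYPE" Summary ALL
done
```

Because already-downloaded reports are skipped, a script like this can run repeatedly (e.g. from a scheduled job) and only fetch what is new.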
Conclusion

I believe that we iOS and Mac developers should be using Objective-C for our tools, and this is why I keep updating this component. I am now looking for other developers (you?) who are willing to give this tool a try, be it on the command line or via the static library.

I am specifically in need of feedback on the Opt-In and Newsstand reports, as I don't have data for these myself. You don't have to share any sales data with me; just test it and maybe suggest fixes. And I would be excited if you find this useful too.

The next step, now that the source of report data is taken care of, will be to load the reports into a CoreData DB and do some analysis magic. More on that later.

DTXMLRPC for Posting to WordPress
https://www.cocoanetics.com/2013/03/dtxmlrpc-for-posting-to-wordpress/
Fri, 22 Mar 2013

On Fridays I like to do something fresh. Today I wrote an XML-RPC framework. The reason for doing that is that I love Amy Worrall's QuickRadar.

Since I try my best to file good bug reports, I spend much time on them. Being economically minded, I like to reuse the content I produce as much as possible. So when I file a Radar with QuickRadar it goes to Apple's bug reporter and to OpenRadar, and the link is shared on App.net and on Twitter. What was missing, in my opinion, was also getting the text of the bug report into a new WordPress blog post.

You can admire the fruits of my labor in my open source DTXMLRPC framework, which works on iOS and Mac. The DTXMLRPC project is on GitHub. Bear in mind that this is only a single day of work; nevertheless I did my best to comment the headers. Some use cases might not be fully implemented, for example some variable types like base64. I have the code for that already in DTFoundation, so if somebody needs it I can quickly add it.

The WordPress API is XML-RPC throughout, which is why I named the framework after the underlying XML-RPC technology. DTWordpress implements the few basic operations I needed.

To post a new blog post all you need to do is this:

DTWordpress *wordpress = [[DTWordpress alloc] initWithEndpointURL:endpointURL];
wordpress.userName = userName;
wordpress.password = password;
 
NSDictionary *content = @{@"title":title, @"description":description, @"post_type":@"post"};
 
[wordpress newPostWithContent:content shouldPublish:NO completion:^(NSInteger postID, NSError *error) {
   // returns the new post ID or an error
}];

Now, today's project would only be half complete had I not tried to add this to QuickRadar as well.

I forked it, and thanks to its clear and modular API design I was able to add a QRWordpressSubmissionService in under an hour. QuickRadar is obviously built to be easily extendable. I went for the minimum feasible integration for starters.

QuickRadar WordPress Config
I won’t bore you with the details of the integration. Suffice it to say that those QRSubmissionService subclasses have a very nice mechanism for working together, managed by a QRSubmissionController. You can even add hard and soft dependencies – for example, I don't want to post anything if the filing of the Radar itself fails.

Of course you can always make stuff nicer, but I am a firm believer that the first version should be functional first and pretty second. Also, I had seen that there seems to be some UI reworking coming, and I don't want to waste time on UI that will probably change anyway.

As soon as those UI updates are in, we can build a sort of WordPress setup wizard where you only have to enter the blog URL and the rest is auto-detected. Adding categories and tags is also an option for the future. Other blog engines are an option too, since most of them share the WordPress APIs to some degree.

Conclusion

I hope that Amy (and the other QuickRadar users) take a liking to my work; I have submitted a pull request for inclusion in origin master.

If you have some app or project that might benefit from DTXMLRPC or DTWordpress please get in touch, I’d be happy to collaborate on this.

Urban Airship Commander BETA
https://www.cocoanetics.com/2012/07/urban-airship-commander-beta/
Tue, 17 Jul 2012

Imagine yourself implementing APN (Apple Push Notifications) using Urban Airship's API and service. How do you go about testing the push functionality and possibly demonstrating it in front of your client?

The usual approach would be to peruse a web form on the Urban Airship site to send such notifications.

We felt a need to simplify the procedure and make it more fun for developers. So today we’re announcing UAC (Urban Airship Commander) which sets out to do exactly that.

We have identified two kinds of users who would want to use this app on their iPhone: 1) developers who want to set up a few test devices and effortlessly test their APN implementations, and 2) clients of said developers who would use the app to broadcast APNs to their users from a mobile app.

You start by setting up an app similar to how you do it on the Urban Airship web site. You can even set an app icon if you have one on your device. You can configure custom fields that get appended to the push notification and set a friendly short name and default value.

Once the initial setup is done and your UA credentials have been validated, you are good to go. Tap on the app, specify the message and custom values, and send off the message. A log shows all past messages and also allows you to copy a recent message into a new push notification. This is especially useful if, for example, you send the same message (and custom values) every month.

https://www.youtube.com/watch?v=dvGEOU-bg9E

We are now looking for a couple of BETA testers who will try out the app on their own UA implementations. We also hope to gain feedback on what the actual use cases for this app are. Limited BETA spots are available; email me info about how you plan to use the app, and don't forget to include the UDID of the device you want to test it on.

Who Wants Cocoapedia?
https://www.cocoanetics.com/2012/07/who-wants-cocoapedia/
Mon, 09 Jul 2012

Two years ago I was more idealistic than today. That was when I created Cocoapedia (Oct 2010) and thought that if the place was there, the contributors would come. Boy was I naive.

The idea behind Cocoapedia came from an unfortunate run-in I had had with Wikipedia two years earlier. At that time I – similarly enthusiastic – had created a Wikipedia page for myself, only to find it flagged for deletion the day after. That was when I learned that Wikipedia has a set of relevancy criteria that artificially filters the content that can go into it.

I had never gotten a medal, never played a part in “historic, political or newsworthy events”, am no “widely known personality from the entertainment industry”, and nobody has ever called my works excellent. My TV appearances were never in an important function, I did not write two novels (or 4 non-fiction books), and the scandals I have unearthed fell mostly on deaf ears, too.

So I figured, if Wikipedia won’t have us irrelevant iOS developers, then our own Wiki definitely would. So Cocoapedia was born.

At first I kept registrations on an approval basis, which kept the content nice and clean and small. Then I decided to drop the approval for new registrations and instead rely on the same process as Wikipedia: everybody can join, everybody can post. As my popularity on Twitter grew, I created stubs (incomplete beginnings of articles) for several people and invited them to complete those with some biographical info.

But somehow the massive Spam industry got wind of this paradise where everybody could post any keywords and links. Chaos ensued. I started to get more and more spam posts up to a level that I could no longer handle them myself.

Wikipedia’s Pareto Principle

Turns out Wikipedia only works because of the massive number of volunteers who believe in the mission and who police the content rigorously. If you plot the number of articles certain logins have created, you find that most of the articles on Wikipedia come from a relatively small group. The Pareto Principle applies: 80% of articles come from 20% of logins. Or maybe 95% of articles from only 5% of logins. Though these are illustrative numbers, not to be taken literally.

What this means is that for any kind of site or service that thrives on socially created content, you need an enormous user base so that among them you will find the few active people who will carry it. That's lesson number one.

Wikipedia works because it is driven by sufficient numbers of people believing in its cause: to curate encyclopedic knowledge and defend it against commercial interests. Whoever tries to immortalize himself on Wikipedia will find his attempts flagged and deleted before the end of the day.

I was by myself for the policing and mostly for the creation of articles as well. An enormously unfair disadvantage compared to Wikipedia.

Let’s try Brute Force

The next thing I tried was enabling a spam filter with a lengthy keyword list ranging from insurance over cars to sex toys. With mixed success: I was still getting tons of obvious spam registrations, but a large portion of spam articles got blocked. However, the problem with that approach was that I needed to go into a MediaWiki config file and keep adding new spam words.

Worse, as the spam word list grows longer, you start to encroach on the realm of innocent words. For example, developer and book author Chris Adamson complained that “wiki” was blocked because I had put it on the list earlier.

This also did not solve the issue of having to wade ankle-deep through leftover spam accounts that had been registered by a bot but had failed to inject their spammy payload. So that solution was no solution at all and only served to delay the inevitable.

What About (Dot) Me?

I thought that developers would flock to Cocoapedia and use it as a free platform to tell the world about themselves. Sort of like what you would want a journalist who is writing about you to know about you and possibly quote.

One of the strongest motivators on the internet seems to be the sense of ownership of one's own identity. While they were still available, people bought their names as .com domains. With the supply of .com names mostly exhausted, people turned to new TLDs like .me or to offerings like about.me to at least have a few toes in the door.

The stubs I alluded to earlier I created by browsing people's personal websites, copying the most important biographical details from their “about” pages and a list of their apps from what I could find linked on the App Store.

People with home pages have split into two categories over the last few years: some continue to fill their blogs with interesting articles, others found that they preferred to micro-blog on Twitter and Facebook. A smart person once said: “Every tweet is a blog post not written.” I found that this holds true more than ever before.

Often I quip at some tweet by responding to it, or I +1 it by retweeting or favoriting it. Smart people like John Gruber don't do that; instead they quote the tweet on their site and add their comment there. This way they remain the owners of their own statements, as opposed to forfeiting ownership by handing it over to Twitter.

I can relate to people being too busy to keep information about themselves current in more than one location. If there are changes to your biography that you want known (“won the Nobel Prize!”), then you'd have to update your blog, your (newly awarded) Wikipedia page, all your social network profiles, and then – after everything else – the lowly Cocoapedia.

A Solution in Search of a Problem

When evaluating the utility of Cocoapedia I have to immediately admit that its main purpose was for me to have a good feeling about being the owner of tons of information about other people. Not very altruistic, I admit.

And my ego aside, the second main reason was to give other people a platform for publicly stroking their respective egos. As mentioned earlier I could definitely see several people taking much care to include all their public appearances in Cocoapedia. Can you guess which country those come from?

Another idea was that Cocoapedia could possibly become the second site that Google would list if somebody was searching for your name. I just did the search for myself and found my German blog at number one, then Cocoanetics, two articles on Cocoanetics, Xing (where I had a premium account), and then my Cocoapedia page.

However this strategy only works if Google attributes sufficient interestingness to your specific combination of first and last name. That means that because of all my online efforts Google attributes a certain value to this individual named “Oliver Drobnik”. If you spend the same amount of time writing, you will have a similar rank.

Note that this is not PageRank; apparently Google understands the concept of people independently of individual pages or their rank. A bit like an Artificial Intelligence where a Big Brother-esque intelligence tracks people individually.

If you take most other developers who added themselves to Cocoapedia you will find that if you search for their name the Cocoapedia entry is nowhere to be found in the first few pages of search results. The reason being that Google has not “learned” of their importance.

In short Cocoapedia was trying to achieve something that is not possible. At least not any more. Google will not increase your personal rank just because you have a Cocoapedia page. It will only do that if you make sure that there are many instances on the web and social networks that point to you. A many-to-one relationship if you will.

All idealistic ruminations aside, I fail to see any lasting value in Cocoapedia. I don’t make money with it. It only frustrates because nobody wants to play with me. And it gets drowned by spam. All of this gets rid of your idealism in no time.

I give up!

The philosophy that made Apple great is to know what projects to can. It is not sufficient for something to be good or idealistic. It has to be great to be allowed to survive. Especially so if the upkeep of a “hobby” requires substantial resources without any hope of any kind of compensation.

I checked the server stats in the morning and found around 1000 unique visits per day. I’ve grown Cocoanetics to about 2500 uniques per day, and BuySellAds nets me around $100 for banner ads. Do the math! If I plastered Cocoapedia with ads I might net up to $50. So economically Cocoapedia isn’t – it is far from being economical.

There isn’t any universe in which $50 would pay for somebody to manually keep the spam at bay.

And this brings us full circle to why Wikipedia works and Cocoapedia doesn’t. Public wikis can only flourish if they are fueled by identification with their philosophical principles. It takes an army of unpaid volunteers to curate the content. And people only work “pro bono” if they believe in the bono.

Where you lack volunteers you might make up the difference with technology. But I have to admit that I have many shortcomings in this area (MySQL, PHP, MediaWiki plugins). I am an iOS developer at heart and would have loved to hire somebody to be the full-time administrator of something great. But alas, I am not wealthy enough for that to have any chance of turning into reality.

You cannot believe how much pain this causes me. I’ve invested both time and money in Cocoapedia and as such I find it hard to let go. But in trying to become more like Apple I see this as a step on the road to becoming more ruthless when it comes to canceling failed projects.

Wanna Take it Off My Hands?

It might be the case that somebody has better plans with the Cocoapedia name. I can imagine that it would be a nice name for a component store or a tutorial collection or anything else that collects wisdom on iOS/Mac development. Or maybe somebody would love to try his luck with the content already in the database.

Because of this I am not shutting down the site right away, but I’m offering the full package to the highest bidder:

  • Cocoapedia Mediawiki Database, import it into your own Wiki or keep it alive
  • Domain names Cocoapedia.org and Cocoapedia.com – for something non-profit (.org) or commercial (.com). About 1500 unique visitors.
  • Twitter user Cocoapedia to go with it. About 200 followers.

I am collecting all serious offers until Monday July 16th and then I will decide who gets it. If nobody can be found then I will just take down the site and replace it with a screen full of advertisements as any smart domain squatter does.

You can e-mail me or tweet me if you have questions or want to submit an offer. Please, pleeeease!

]]>
https://www.cocoanetics.com/2012/07/who-wants-cocoapedia/feed/ 1 6702
Twitter Curator Demo https://www.cocoanetics.com/2012/05/twitter-curator-demo/ https://www.cocoanetics.com/2012/05/twitter-curator-demo/#comments Thu, 31 May 2012 16:23:40 +0000 http://www.cocoanetics.com/?p=6386 I got so fed up with how YouTube mangles QuickTime screen recordings that I purchased ScreenFlow. I wanted to show off the specialized Twitter search tool that I’m working on besides my regular projects. A “hobby” if you will.

I would very much appreciate it if you would take a few minutes to watch it and then let me hear your thoughts. If you have ideas for improvements, don’t hesitate to let me know.

Primarily this is to make it faster to curate tweets related to iOS and OS X jobs. The idea is that you will be able to combine multiple searches and then apply powerful filters to whittle down the results to a manageable size. Right now you can filter by user and linked domain, and also restrict to certain languages as reported by the language ISO code.

//www.youtube.com/watch?v=ZMZXEjTJj0A

Let me know if this is the sort of research/curation tool that you would buy in addition to your favorite Twitter client. It’s not supposed to replace any of these, but rather supplement for situations where you want to filter the world of Twitter for business or pleasure.


]]>
https://www.cocoanetics.com/2012/05/twitter-curator-demo/feed/ 3 6386
ELO Digital Office GmbH releases iPad Version https://www.cocoanetics.com/2012/03/elo-digital-office-gmbh-releases-ipad-version/ https://www.cocoanetics.com/2012/03/elo-digital-office-gmbh-releases-ipad-version/#comments Mon, 05 Mar 2012 18:36:43 +0000 http://www.cocoanetics.com/?p=6057 For the past several months we were busily working on no small undertaking. Our client ELO Digital Office GmbH in Stuttgart, Germany (maker of the ELO digital document management system) wanted to present a super-charged iPad client just in time for Cebit. And that’s “ELO” as in “Electronic Leitz Office”, not “Electronic Light Orchestra” the band.

So today they announced immediate availability of the dedicated iPad version of ELO. It is a free download, because to really use it you need their ELO server software. But you can still test-drive the client: a test server is available and pre-set when you download the app, so you can try it out for yourself.

The new ELO iClient for iPad is a ground-up rewrite of the client, designed to give users of the ELO server software the best of both worlds. Digital document storage, indexing and search via their powerful server. Touch-based interaction with a beautiful user interface on the iPad, as users have come to expect from Apple devices.

The entire UI had to be custom-made because the task was to combine the visual appeal of the Twitter app with the feature-richness of the ELO server. And you know how it is with software: there is always some small problem that creeps in. But we had to draw a line, about two weeks before launch, and send a version 1.0 to Apple for approval.

Approval for Version 1.0 took 5 days, as almost all do nowadays. But we had set the release time to a future date to be able to control the availability. After the first approval was through we sent them a 1.0.1 version with some minor fixes. The second time around we used the “Time Critical Release” bonus card, via the contact form to get an expedited review.

There are some things that did not make the cut-off in time, but which are almost done. But as it is with the app store you can always submit another point release.

One such item is that we wanted to have the panels move a bit depending on the amount of momentum that you give them when lifting your finger. I’ve seen it and it works beautifully! Another item is that we made the choice to submit a separate app from the iPhone version. We want to unify both apps into a single client that works for iPhone and iPad at the same time. But for that we need to update all the UI for the iPhone and achieve feature-parity. Forcing both apps into one would have been too schizophrenic, simply because the iPad version is a ground-up rewrite.

So we’re going to take sufficient time to polish up the iPhone part and then we can make the current iPad app universal, as originally intended. It is our hope and goal to give ELO’s clients the best possible iOS app for accessing their documents on the go. Even offline. That was one of the most complicated items to get right, but we think we pulled it off.

The greatest part of the experience for us was that all the predictions we made about how Apple would react held true. Thanks Apple, for letting us look good in front of our clients!

]]>
https://www.cocoanetics.com/2012/03/elo-digital-office-gmbh-releases-ipad-version/feed/ 1 6057
Parsing ASN.1 for Certificates and Pleasure https://www.cocoanetics.com/2012/02/parsing-asn-1-for-certificates-and-pleasure/ https://www.cocoanetics.com/2012/02/parsing-asn-1-for-certificates-and-pleasure/#comments Mon, 20 Feb 2012 10:36:40 +0000 http://www.cocoanetics.com/?p=5990 When figuring out how to work with self-signed certificates and helping somebody debug an SSL problem I did a bit of research regarding the certificates being used on iOS and what functions are available to inspect these.

There are two functions provided to get more information from a SecCertificateRef: one to get a meager description, the other to get the entire certificate, encoded in ASN.1. But there you’re stuck, unless you want to mess with compiling and installing OpenSSL.

OpenSSL is the de-facto standard method of decoding certificates on the Mac. For example you can parse a binary ASN.1-encoded (aka DER) certificate as easily as:

openssl asn1parse -in certi.der
   0:d=0  hl=4 l=1086 cons: SEQUENCE
    4:d=1  hl=4 l= 806 cons: SEQUENCE
    8:d=2  hl=2 l=   3 cons: cont [ 0 ]
   10:d=3  hl=2 l=   1 prim: INTEGER           :02
   13:d=2  hl=2 l=  16 prim: INTEGER           :19D031A4DCCFB1691284E00750BA7A23
   31:d=2  hl=2 l=  13 cons: SEQUENCE
   33:d=3  hl=2 l=   9 prim: OBJECT            :sha1WithRSAEncryption
   44:d=3  hl=2 l=   0 prim: NULL
   46:d=2  hl=2 l=  94 cons: SEQUENCE
   48:d=3  hl=2 l=  11 cons: SET
   50:d=4  hl=2 l=   9 cons: SEQUENCE
   52:d=5  hl=2 l=   3 prim: OBJECT            :countryName
   57:d=5  hl=2 l=   2 prim: PRINTABLESTRING   :US
   61:d=3  hl=2 l=  21 cons: SET
   63:d=4  hl=2 l=  19 cons: SEQUENCE
   65:d=5  hl=2 l=   3 prim: OBJECT            :organizationName
   70:d=5  hl=2 l=  12 prim: PRINTABLESTRING   :Thawte, Inc.
...

OpenSSL comes preinstalled on all Macs, but we don’t know what it was that Apple did not like about it enough to omit it on iOS. Was it the “Open”? Was it the license? We will never know.

The above is exactly the same information that Safari shows you if an SSL connection cannot be verified. Wouldn’t it be nice to have this kind of mechanism for our own apps as well? Some clients might want to accept self-signed certificates – but not without notice, rather showing all certificate details to the user like Safari does. With DTASN1Parser this becomes possible.

Another scenario that requires decoding certificates is validating them, for example checking whether the certificate your app is signed with is indeed one for the App Store …

I remember that – a couple of years ago – I wrote an ASN.1 decoder and encoder in C++, but those backups were lost in time. So I scraped together what information about the format I could find. Unfortunately it is just as cryptic as the cryptography that it is used for in our case.

I found and borrowed ideas from two web-based decoders (1, 2), skimmed through a “Layman’s Guide“,  marveled at The Anatomy of a X.509 Certificate, but then gave up on the mother of all ASN1 tutorials: ASN1 Complete.

Getting the Certificate

If you make an NSURLConnection to a server with a self-signed (or shady) certificate you get a callback with an authentication challenge. From its protection space you can get the SecTrust, and from that you may retrieve the certificates used.

- (void)connection:(NSURLConnection *)connection
didReceiveAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge
{
	if ([challenge.protectionSpace.authenticationMethod
		isEqualToString:NSURLAuthenticationMethodServerTrust])
	{
		SecTrustRef secTrust = [challenge.protectionSpace serverTrust];

		SecCertificateRef certi = SecTrustGetCertificateAtIndex(secTrust, 0);
		NSData *data = (__bridge_transfer NSData *)SecCertificateCopyData(certi);
		// .. we have a certificate in DER format!

		NSString *summ = (__bridge_transfer NSString *)SecCertificateCopySubjectSummary(certi);
		// … a useless summary
	}
}

I saved a certificate to disk with this method and – lo and behold! – you can inspect the certificate with QuickLook.

That’s quite convenient, because you have a direct comparison with what data we can decode with DTASN1Parser.

DTASN1Parser walks through the hierarchy contained within the ASN.1 data and emits events whenever it encounters something noteworthy. To build a certificate class with it you would create a parser with the data and then act on the individual events. Here’s the basic structure:

@implementation DTCertificate
{
	NSData *_data;
 
	id _rootContainer;
	id _currentContainer;
	NSMutableArray *_stack;
 
	NSString *currentObjectIdentifier;
	DTASN1Parser *_parser;
}
 
- (id)initWithData:(NSData *)data
{
	self = [super init];
	if (self)
	{
		_parser = [[DTASN1Parser alloc] initWithData:data];
		_parser.delegate = self;
 
		_stack = [[NSMutableArray alloc] init];
 
		if (![_parser parse])
		{
			return nil;
		}
	}
	return self;
}
 
#pragma mark DTASN1ParserDelegate
 
- (void)parser:(DTASN1Parser *)parser foundObjectIdentifier:(NSString *)objIdentifier
{
	currentObjectIdentifier = objIdentifier;
}
 
- (void)parser:(DTASN1Parser *)parser didStartContainerWithType:(DTASN1Type)type
{
	// start a new container
	id previousContainer = _currentContainer;
 
	// we don't care about the sequences, just the structure
	_currentContainer = [[NSMutableArray alloc] init];
	currentObjectIdentifier = nil;
 
	[previousContainer addObject:_currentContainer];
 
	if (!_rootContainer)
	{
		_rootContainer = _currentContainer;
	}
 
	[_stack addObject:_currentContainer];
}
 
- (void)parser:(DTASN1Parser *)parser didEndContainerWithType:(DTASN1Type)type
{
	// remove item from stack
	[_stack removeLastObject];
 
	_currentContainer = [_stack lastObject];
}
 
- (void)addObject:(id)object forIdentifier:(NSString *)identifier
{
	if (identifier)
	{
		NSDictionary *dict = [NSDictionary dictionaryWithObject:object forKey:identifier];
		NSLog(@"%@", dict);
		[_currentContainer addObject:dict];
	}
	else
	{
		[_currentContainer addObject:object];
		NSLog(@"%@", object);
	}
}
 
- (void)parserFoundNull:(DTASN1Parser *)parser
{
	[self addObject:[NSNull null] forIdentifier:currentObjectIdentifier];
}
 
- (void)parser:(DTASN1Parser *)parser foundDate:(NSDate *)date
{
	[self addObject:date forIdentifier:currentObjectIdentifier];
}
 
- (void)parser:(DTASN1Parser *)parser foundString:(NSString *)string
{
	[self addObject:string forIdentifier:currentObjectIdentifier];
}
 
- (void)parser:(DTASN1Parser *)parser foundData:(NSData *)data
{
	[self addObject:data forIdentifier:currentObjectIdentifier];
}
 
- (void)parser:(DTASN1Parser *)parser foundNumber:(NSNumber *)number
{
	[self addObject:number forIdentifier:currentObjectIdentifier];
}
 
@end

The general method of representing certificate elements in ASN.1 is to have an object identifier and a value for it. Those object identifiers are standardized and usually represented as numbers concatenated with dots. You typically have a SET container holding a SEQUENCE whose first element is an object identifier, followed by a string.

    "2.5.4.6" = "US";
    "2.5.4.10" = "Thawte, Inc.";
    "2.5.4.11" = "Domain Validated SSL";
    "2.5.4.3" = "Thawte DV SSL CA";

Here’s a list of the object identifiers you might encounter:

0.2.262.1.10.0,extension
0.2.262.1.10.1.1,signature
1.2.840.113549.1.1,pkcs-1
1.2.840.113549.1.1.1,rsaEncryption
1.2.840.113549.1.1.4,md5withRSAEncryption
1.2.840.113549.1.1.5,sha1withRSAEncryption
1.2.840.113549.1.1.6,rsaOAEPEncryptionSET
1.2.840.113549.1.7,pkcs-7
1.2.840.113549.1.7.1,data
1.2.840.113549.1.7.2,signedData
1.2.840.113549.1.7.3,envelopedData
1.2.840.113549.1.7.4,signedAndEnvelopedData
1.2.840.113549.1.7.5,digestedData
1.2.840.113549.1.7.6,encryptedData
1.2.840.113549.1.7.7,dataWithAttributes
1.2.840.113549.1.7.8,encryptedPrivateKeyInfo
1.2.840.113549.1.9.22.1,x509Certificate(for.PKCS.#12)
1.2.840.113549.1.9.23.1,x509Crl(for.PKCS.#12)
1.2.840.113549.1.9.3,contentType
1.2.840.113549.1.9.4,messageDigest
1.2.840.113549.1.9.5,signingTime
2.16.840.1.113730.1,cert-extension
2.16.840.1.113730.1.1,netscape-cert-type
2.16.840.1.113730.1.12,netscape-ssl-server-name
2.16.840.1.113730.1.13,netscape-comment
2.16.840.1.113730.1.2,netscape-base-url
2.16.840.1.113730.1.3,netscape-revocation-url
2.16.840.1.113730.1.4,netscape-ca-revocation-url
2.16.840.1.113730.1.7,netscape-cert-renewal-url
2.16.840.1.113730.1.8,netscape-ca-policy-url
2.23.42.0,contentType
2.23.42.1,msgExt
2.23.42.10,national
2.23.42.2,field
2.23.42.2.0,fullName
2.23.42.2.1,givenName
2.23.42.2.10,amount
2.23.42.2.2,familyName
2.23.42.2.3,birthFamilyName
2.23.42.2.4,placeName
2.23.42.2.5,identificationNumber
2.23.42.2.6,month
2.23.42.2.7,date
2.23.42.2.7.11,accountNumber
2.23.42.2.7.12,passPhrase
2.23.42.2.8,address
2.23.42.3,attribute
2.23.42.3.0,cert
2.23.42.3.0.0,rootKeyThumb
2.23.42.3.0.1,additionalPolicy
2.23.42.4,algorithm
2.23.42.5,policy
2.23.42.5.0,root
2.23.42.6,module
2.23.42.7,certExt
2.23.42.7.0,hashedRootKey
2.23.42.7.1,certificateType
2.23.42.7.2,merchantData
2.23.42.7.3,cardCertRequired
2.23.42.7.5,setExtensions
2.23.42.7.6,setQualifier
2.23.42.8,brand
2.23.42.9,vendor
2.23.42.9.22,eLab
2.23.42.9.31,espace-net
2.23.42.9.37,e-COMM
2.5.29.1,authorityKeyIdentifier
2.5.29.10,basicConstraints
2.5.29.11,nameConstraints
2.5.29.12,policyConstraints
2.5.29.13,basicConstraints
2.5.29.14,subjectKeyIdentifier
2.5.29.15,keyUsage
2.5.29.16,privateKeyUsagePeriod
2.5.29.17,subjectAltName
2.5.29.18,issuerAltName
2.5.29.19,basicConstraints
2.5.29.2,keyAttributes
2.5.29.20,cRLNumber
2.5.29.21,cRLReason
2.5.29.22,expirationDate
2.5.29.23,instructionCode
2.5.29.24,invalidityDate
2.5.29.25,cRLDistributionPoints
2.5.29.26,issuingDistributionPoint
2.5.29.27,deltaCRLIndicator
2.5.29.28,issuingDistributionPoint
2.5.29.29,certificateIssuer
2.5.29.3,certificatePolicies
2.5.29.30,nameConstraints
2.5.29.31,cRLDistributionPoints
2.5.29.32,certificatePolicies
2.5.29.33,policyMappings
2.5.29.34,policyConstraints
2.5.29.35,authorityKeyIdentifier
2.5.29.36,policyConstraints
2.5.29.37,extKeyUsage
2.5.29.4,keyUsageRestriction
2.5.29.5,policyMapping
2.5.29.6,subtreesConstraint
2.5.29.7,subjectAltName
2.5.29.8,issuerAltName
2.5.29.9,subjectDirectoryAttributes
2.5.4.0,objectClass
2.5.4.1,aliasedEntryName
2.5.4.10,organizationName
2.5.4.10.1,collectiveOrganizationName
2.5.4.11,organizationalUnitName
2.5.4.11.1,collectiveOrganizationalUnitName
2.5.4.12,title
2.5.4.13,description
2.5.4.14,searchGuide
2.5.4.15,businessCategory
2.5.4.16,postalAddress
2.5.4.16.1,collectivePostalAddress
2.5.4.17,postalCode
2.5.4.17.1,collectivePostalCode
2.5.4.18,postOfficeBox
2.5.4.18.1,collectivePostOfficeBox
2.5.4.19,physicalDeliveryOfficeName
2.5.4.19.1,collectivePhysicalDeliveryOfficeName
2.5.4.2,knowledgeInformation
2.5.4.20,telephoneNumber
2.5.4.20.1,collectiveTelephoneNumber
2.5.4.21,telexNumber
2.5.4.21.1,collectiveTelexNumber
2.5.4.22.1,collectiveTeletexTerminalIdentifier
2.5.4.23,facsimileTelephoneNumber
2.5.4.23.1,collectiveFacsimileTelephoneNumber
2.5.4.25,internationalISDNNumber
2.5.4.25.1,collectiveInternationalISDNNumber
2.5.4.26,registeredAddress
2.5.4.27,destinationIndicator
2.5.4.28,preferredDeliveryMehtod
2.5.4.29,presentationAddress
2.5.4.3,commonName
2.5.4.31,member
2.5.4.32,owner
2.5.4.33,roleOccupant
2.5.4.34,seeAlso
2.5.4.35,userPassword
2.5.4.36,userCertificate
2.5.4.37,caCertificate
2.5.4.38,authorityRevocationList
2.5.4.39,certificateRevocationList
2.5.4.4,surname
2.5.4.40,crossCertificatePair
2.5.4.41,name
2.5.4.42,givenName
2.5.4.43,initials
2.5.4.44,generationQualifier
2.5.4.45,uniqueIdentifier
2.5.4.46,dnQualifier
2.5.4.47,enhancedSearchGuide
2.5.4.48,protocolInformation
2.5.4.49,distinguishedName
2.5.4.5,serialNumber
2.5.4.50,uniqueMember
2.5.4.51,houseIdentifier
2.5.4.52,supportedAlgorithms
2.5.4.53,deltaRevocationList
2.5.4.55,clearance
2.5.4.58,crossCertificatePair
2.5.4.6,countryName
2.5.4.7,localityName
2.5.4.7.1,collectiveLocalityName
2.5.4.8,stateOrProvinceName
2.5.4.8.1,collectiveStateOrProvinceName
2.5.4.9,streetAddress
2.5.4.9.1,collectiveStreetAddress
2.5.6.0,top
2.5.6.1,alias
2.5.6.10,residentialPerson
2.5.6.11,applicationProcess
2.5.6.12,applicationEntity
2.5.6.13,dSA
2.5.6.14,device
2.5.6.15,strongAuthenticationUser
2.5.6.16,certificateAuthority
2.5.6.17,groupOfUniqueNames
2.5.6.2,country
2.5.6.21,pkiUser
2.5.6.22,pkiCA
2.5.6.3,locality
2.5.6.4,organization
2.5.6.5,organizationalUnit
2.5.6.6,person
2.5.6.7,organizationalPerson
2.5.6.8,organizationalRole
2.5.6.9,groupOfNames
2.5.8,X.500-Algorithms
2.5.8.1,X.500-Alg-Encryption
2.5.8.1.1,rsa
2.54.1775.2,hashedRootKey
2.54.1775.3,certificateType
2.54.1775.4,merchantData
2.54.1775.5,cardCertRequired
2.54.1775.7,setQualifier
2.54.1775.99,set-data

There’s a catch when dealing with bytes: sometimes an element actually consists of sub-elements. For example the BIT STRING holding the 1.2.840.113549.1.1.1 (rsaEncryption) public key is itself DER-encoded, containing a SEQUENCE of modulus and exponent. I was unable to find documentation on how to know that just from the data – one of the JS decoders explodes this field, the other doesn’t. Worst case you’ll have to know which fields do this and parse them out yourself.

Another thing I did not understand at first is the use of the context fields, represented as [0] and [3] in the JS tools. These are context-specific tags whose meaning is defined by the schema; in X.509, for example, [0] marks the certificate version and [3] the extensions. Their meaning cannot be derived from the data alone.

Conclusion

With DTASN1Parser (a new member of DTFoundation) you can now knock yourself out making nice dialogs that give your users all the certificate information they need to decide whether a certificate is OK to permanently accept. The big advantage of this parser over OpenSSL is that you only have to add two files to your project (or use DTFoundation) and don’t need to mess around with getting OpenSSL compiled for iOS.

If you do set out to create such an Accept Certificate dialog, then get in touch, we should collaborate on that.

DTASN1Parser is quite crude; many special cases are not yet implemented because I didn’t encounter them while working with a DER-encoded certificate. If you have DER data that makes use of these, please let me have it for testing … or better yet, implement it in the open-source project and send me a pull request.

]]>
https://www.cocoanetics.com/2012/02/parsing-asn-1-for-certificates-and-pleasure/feed/ 2 5990
Autoingest.java – in Objective-C https://www.cocoanetics.com/2012/02/autoingest-java-in-objective-c/ https://www.cocoanetics.com/2012/02/autoingest-java-in-objective-c/#comments Mon, 06 Feb 2012 09:31:28 +0000 http://www.cocoanetics.com/?p=5925 We have been lobbying since 2009 to get Apple to publish a proper API for downloading all kinds of reports.

The first reaction we got was a prohibition of ITC scraping. The second was that Apple created the mobile ITC app, which unfortunately lacks any way to get the reports out or see monetary amounts. The third was a half-hearted publishing of a Java class that is able to download daily and weekly sales reports.

This changes today, at least if you are like me and feel uneasy using Java for downloading reports.

The DTITCReportDownloader project is a complete rewrite of the Autoingest Java class in proper Objective-C. This way we iOS and Mac developers can at least download these two kinds of reports without having to have a JVM installed.

Please support our cause by duping Radar rdar://6807195 which is still lacking any kind of response. I was told back in 2009 that “if enough developers wanted it” Apple would finally give us the API we are wishing for.

On GitHub: https://github.com/Cocoanetics/DTITCReportDownloader

]]>
https://www.cocoanetics.com/2012/02/autoingest-java-in-objective-c/feed/ 1 5925
DTCoreText – New Formula! https://www.cocoanetics.com/2012/01/dtcoretext-new-formula/ https://www.cocoanetics.com/2012/01/dtcoretext-new-formula/#respond Wed, 25 Jan 2012 08:02:06 +0000 http://www.cocoanetics.com/?p=5844 I chose this article’s title to try and grab your attention. Well, the product is still the same and does the same. The only difference is one that is under the hood. And as such it is your job – should you choose to accept it – to marvel at the benefits that the new old parsing engine brings us.

Ever since my friends at scribd showed me a first prototype for HTML parsing based on libxml2 I was – admittedly – jealous. This prototype basically worked by having libxml2 parse out the individual paragraphs of an article and then display each paragraph in its own cell of a UITableView. Back then I brushed off the suggestion and went with something I understood: NSScanner.

All the C code necessary to deal with libxml2 seemed overly daunting, so I went with a simple basic structure for the parsing, something like this (pseudocode):

while (there_is_more)
{
   if (we_scanned_a_tag)
   {
      if (is_tag_a)
      {
         // a specific code
      }
      else if (is_tag_b)
      {
         //  b specific code
      }
      else if ...
   }
   else
   {
      // we must be inside a tag
 
      // deal with skipping over a tag that is incomplete, i.e. crap
 
      if (we_scanned_some_text_before_the_next_tag_open)
      {
         // append the text with correct formatting to the output
      }
   }
}

This structure had grown organically as I added support for additional tags, and it went into the initWithHTML category method because that seemed to be the logical place. Little did I think ahead that this approach would preclude having an event-based parser call back into my code, because that would mean adding the event-handling methods to NSAttributedString.

Even the above pseudocode is long; you can imagine how much of a spaghetti the original code became. Much of the problem didn’t actually come from all these tags – those were simply a big if-statement. Complexity came from having to deal with all the special cases where HTML might not be well-formed.

In my open-source genstrings2 I saw how much faster pure C code performs than NSScanner, which served to rekindle my wish to switch to libxml2, as it is also written in low-level, highly optimized C.

I approached libxml2 in several steps:

  1. Jealousy – “Boy wouldn’t it be great, but I’m afraid that this is out of my league”
  2. Announce Intention – I wrote an issue on GitHub hoping for somebody to step forward
  3. Do an Experiment and Document – I googled a bit and put together Part 1 of my libxml HTML tutorial.
  4. Write a Wrapper – For Part 2 of my libxml HTML tutorial I wrote an Objective-C wrapper for libxml2.
  5. Astonishment – The feeling you get when you find that you begin to understand the C-code needed
  6. Benchmark – I removed all string building code and compared the raw parsing performance of both approaches
  7. Transform the Pasta – Moved the code for building the attributed string into an aptly named class and have this driven from events generated by the new HTML parser.

I named the final step that way because if you break up spaghetti code into several logical pieces and then layer those pieces, that’s a different kind of pasta – that’s called lasagne.

For step 6, the comparison, I moved the NSScanner parsing loop into its own class and directly compared the running times on my iPhone 4S, resulting in this tweet:

War&Peace HTML (3.4 MB), NSScanner: 4.264s, libxml2: 1.398s = 3x as fast on single thread, plus latter fixes HTML structure

At this stage it was clear to me that I needed no extra self-convincing, so I went to work in a branch of the project. Most of the work was simply copy/pasting the attributed-string building code into the right place: the tag start, tag end or characters-found event method. This also allowed for omitting some workarounds that were needed to deal with non-well-formed HTML.

The second big BIG advantage of libxml2 is that its HTML parser fixes up the structure for you and adds missing html and body tags so that you end up with a perfect structure. It even adds a </br> right after a <br>. Even though this is completely unnecessary, it makes the HTML look like perfect XML. Much nicer to work with.

While I was doing the migration, new issues and pull requests with fixes started to come in, putting me a bit under stress because I needed to include the fixes in both branches. Which is why I decided to merge the branch back into master at the earliest possible time.

The initWithHTML method has shrunk to a much more manageable size:

- (id)initWithHTML:(NSData *)data options:(NSDictionary *)options documentAttributes:(NSDictionary **)dict
{
	// only with valid data
	if (![data length])
	{
 
		return nil;
	}
 
	DTHTMLAttributedStringBuilder	*stringBuilder = [[DTHTMLAttributedStringBuilder alloc] initWithHTML:data options:options documentAttributes:dict];
 
	[stringBuilder buildString];
 
	return [stringBuilder generatedAttributedString];
}


Benchmark after the Merge

And of course I did some further benchmarking on my iPhone 4S to compare the resulting speed increase. All tests were done by simply adding two NSLogs at the beginning and end of the initWithHTML method and subtracting the timestamps. Note that this is just the time for building the attributed string and does not include layout or drawing.

Demo HTML Snippet 

NSScanner: 50 ms

libxml2: 43 ms

War&Peace ePub HTML 

NSScanner: 10.968 sec

libxml2: 8.796 sec

This is an overall parsing and string-building speed increase of between 14% and 20%.

You might now ask what happened to the 60% increase we saw from comparing just the parsers. The string building still happens on the same thread as the parsing and by itself has many opportunities for optimization.

The next thing I’d like to do is to move the string-building operations onto their own GCD background queue. This way the events coming in from libxml2 would hand off to a queue running on a different thread and parsing would immediately resume. This could easily double the overall scanning performance because you can then make use of the two CPU cores present in more modern iOS devices.

And there are obviously many more optimizations that are now feasible, because you no longer face a daunting monster spaghetti but instead stand a chance of understanding what is happening in DTHTMLAttributedStringBuilder.

Conclusion

The three main advantages of libxml2 over NSScanner are: performance, resilience against non-wellformed HTML, and simpler code through event-based handling of HTML.

This merge now makes it possible again for interested developers to contribute optimizations and new features because the code has become so much simpler to read and understand.

]]>
https://www.cocoanetics.com/2012/01/dtcoretext-new-formula/feed/ 0 5844
C-Code outperforms NSScanner https://www.cocoanetics.com/2012/01/c-code-outperforms-nsscanner/ https://www.cocoanetics.com/2012/01/c-code-outperforms-nsscanner/#comments Sun, 15 Jan 2012 11:02:26 +0000 http://www.cocoanetics.com/?p=5813 Over the past week I’ve been working on DTLocalizableStringScanner, or in short genstrings2. The original genstrings dates back to the NeXTSTEP days. You know how it is: “never change a winning team”. BUT “the good is the enemy of the great”, because if something kind of works, why change it?

Besides the other problems I’ve alluded to in my previous article, genstrings is very slow. Internally it is written in Objective-C, as you can tell from the occasional stack trace when it crashes yet again. But it was created in a time when Macs had only single CPU cores and when we did not have the awesome LLVM with ARC, GCD and multi-threading.

At the time of this writing we have implemented genstrings2 twice: once based extensively on NSScanner, which I became intimately acquainted with while working on DTCoreText. The other is a rewrite of the rewrite that totally ditches NSScanner and does all scanning directly on the individual characters.
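To illustrate the character-level approach, here is a minimal C sketch. The function names are made up for illustration and this is not the actual genstrings2 code; it simply walks UTF-16 code units (what NSString calls unichar) instead of going through NSScanner.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical helper: is this UTF-16 code unit a valid macro-name character? */
static int is_macro_char(uint16_t c)
{
    return (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z') || c == '_';
}

/* Returns the length of the identifier-like run starting at buf[pos]. */
static size_t scan_identifier(const uint16_t *buf, size_t len, size_t pos)
{
    size_t end = pos;
    while (end < len && is_macro_char(buf[end]))
        end++;
    return end - pos;
}
```

There is no object creation, no delegate dispatch and no character-set bookkeeping in the hot loop, which is where the speed difference against NSScanner comes from.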

My feeling is that the lower-level scanning will be orders of magnitude faster, and at the same time it is way more robust, because for non-literal macro parameters it knows how to deal with pairs of parentheses and commas contained in strings. But this just begs to be benchmarked.
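That robustness claim about parentheses and commas can be sketched like this. Again a hypothetical illustration, not the real implementation: a splitter that tracks nesting depth and string literals, so a comma inside a quoted string or a nested call does not break a parameter apart.

```c
/* Counts top-level parameters in a macro's parameter list, ignoring
   commas that sit inside double-quoted strings or nested parentheses. */
static int count_parameters(const char *s)
{
    int depth = 0, in_string = 0, count = 1;
    for (; *s; s++) {
        if (in_string) {
            if (*s == '\\' && s[1]) s++;        /* skip escaped character */
            else if (*s == '"') in_string = 0;
        }
        else if (*s == '"') in_string = 1;
        else if (*s == '(') depth++;
        else if (*s == ')') depth--;
        else if (*s == ',' && depth == 0) count++;
    }
    return count;
}
```

A naive comma split would report three parameters for `@"Hello, World", nil`; this version correctly reports two, which is exactly the case that makes the original genstrings choke.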

In order to have a sufficient amount of code to scan through I downloaded the latest source bundle for Adium. Getting a time for the individual versions is easy with the time command:

time genstrings -q -s AILocalizedString ~/Desktop/adium-1.4.4/Source/*.m
 
real     0m7.872s
user     0m6.848s
sys      0m0.053s

When benchmarking you also need to mention the type of bench you are marking. These numbers come from my 2010 MacBook Air, which is somewhat skewed because its disk I/O times are possibly faster than usual. But we are mostly interested in the parsing efficiency. The benchmark machine has 2 CPU cores (1.6 GHz Intel Core 2 Duo).

You can see in Activity Monitor that the original genstrings is single-threaded. It uses up to 100% of a single core and shows only 1 thread.

The version of genstrings2 using NSScanner clocked in much better: 1.7 seconds, down from 7.9 seconds, almost 5 times as fast.

real     0m1.663s
user     0m2.950s
sys      0m0.070s

Now for the turbo version that does all the scanning without NSScanner. Instead it works with the individual unichar characters that make up the strings. On iOS these are 16 bit because internally all NSStrings use UTF-16.

real     0m2.167s
user     0m3.973s
sys      0m0.074s

Whoa! Hold your horses, why is that slower than my version? My first thought was that maybe NSScanner really is very efficient. But once I fired up Instruments I found the real reason: lots of CFRelease calls were bogging down the turbo scanner.

Looking at the sample distribution in _scanMacro revealed the culprit.

From this it was quite obvious that the creation of the mutable array to hold the scanned macro parameters was taking most of the CPU time in this area of the code. Turns out we were allocating and releasing this parameter array way more often than necessary. We only need to create it once we are sure that this is actually one of the macros we are looking for, not some random word that happens to be made up of macro name characters.
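The principle, deferring object creation until it is certain to be needed, can be demonstrated with a small C sketch. The function names and the allocation counter are made up for illustration; they are not the real scanner code.

```c
#include <stdlib.h>
#include <string.h>

static int allocations = 0;   /* counts how often we allocate */

/* Hypothetical stand-in for the real macro-name lookup. */
static int is_known_macro(const char *word)
{
    return strcmp(word, "NSLocalizedString") == 0;
}

/* Eager version: allocates the parameter array before checking the name. */
static int scan_word_eager(const char *word)
{
    void *params = malloc(10 * sizeof(char *));
    allocations++;
    int hit = is_known_macro(word);
    free(params);
    return hit;
}

/* Deferred version: only allocates once the word is a confirmed macro. */
static int scan_word_deferred(const char *word)
{
    if (!is_known_macro(word))
        return 0;                       /* early out, nothing allocated */
    void *params = malloc(10 * sizeof(char *));
    allocations++;
    free(params);
    return 1;
}
```

Run both over a stream of mostly non-macro words and the deferred version performs a tiny fraction of the allocations, which is exactly the effect Instruments showed here.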

So moving the array creation a few lines down and changing it to an alloc/initWithCapacity (we know that we will probably have fewer than 10 parameters) yielded this result in the benchmark:

real     0m1.385s
user     0m2.045s
sys      0m0.081s

Looking at Instruments again we see even more potential for optimization. If a single line takes a quarter of the CPU time spent in a method, then you have to ask yourself if there is a way to get rid of this work. Now, this validMacroCharacters is already a lazy initialization, so how do we optimize it?

While it is lazy, there is still quite some overhead for the method call. The first time around the character set is initialized; on subsequent calls the method simply returns the initialized ivar. We can easily get around this by moving the initialization of the ivar to the class init and referencing it directly.

Normally this method call plus the if plus returning the ivar would not be a problem, but _scanMacro is called thousands of times, and so – as proven by Instruments – this is an easy optimization. The benchmark improvement is very impressive:

real	0m1.065s
user	0m1.839s
sys	0m0.059s

At this stage the turbo scanner (with two changes) is already almost twice as fast as the NSScanner version and 7 times as fast as the original genstrings. BUT the third time's the charm, and it so happens that I had a stroke of genius.

My train of thought went: if I can defer CPU-intensive work to the latest moment possible, and if I can reduce the number of times it is necessary, what other way could I come up with to bail out of _scanMacro early? After the previous optimizations most CPU time was distributed between creating an NSString object and the containsObject lookup on the macro dictionary. Two items that cannot easily be optimized further.

Upon closer inspection I found that far too often the scanner was finding words that merely happened to be made up of macro-name characters. This resulted in those two statements being called way more often than necessary, even for words that are not plausible macro names because they are too short. In the NSScanner version I had had the optimization to ignore short keywords like “if” and “do”, which look like macro words but are in reality reserved words. But there it did not yield any advantage.

The stroke of genius was to find the lengths of the shortest and longest macro names (also in the init) and then have a simple if comparison to return from _scanMacro if the character sequence is too short for any of our macro names. The results blew me away.

real	0m0.442s
user	0m0.691s
sys	0m0.044s
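The length gate just described can be sketched in a few lines of C. The bounds here are made-up example values; the real code computes the shortest and longest macro name lengths once, in its init.

```c
#include <string.h>

/* Hypothetical bounds, computed once at startup in the real scanner. */
static const size_t kShortestMacroName = 16;
static const size_t kLongestMacroName  = 48;

/* Cheap gate: skip NSString creation and the set lookup entirely for
   words that cannot possibly be one of our macro names. */
static int worth_looking_up(const char *word)
{
    size_t len = strlen(word);
    return len >= kShortestMacroName && len <= kLongestMacroName;
}
```

An integer comparison costs almost nothing, so putting it in front of the expensive object creation pays off on every one of the thousands of non-macro words.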

Now we have truly reached the point where further optimization becomes impractical. Looking in Instruments, the CPU time is now spread much wider; only characterIsMember accumulates more CPU time than the other lines, but this method cannot be optimized further, since that would mean writing a replacement for NSCharacterSet.

Conclusion

Let’s summarize the lessons learned:

  • Good C-code easily outperforms NSScanner
  • Defer creating objects to the latest possible moment
  • Avoid lazy instantiation accessors in tight loops where they end up being called extremely often
  • Try to find simple mathematical or logical abort conditions (quickly processed) to avoid CPU-intensive work where possible

Granted, I could spend more effort on working out the performance problems of the NSScanner version, but I doubt that I could get anywhere near the double-turbo version. The final results thus are:

  • genstrings: 7.9 sec
  • genstrings (NSScanner): 1.7 sec
  • genstrings (optimized turbo): 0.4 sec

The optimized turbo version is almost 20 times as fast as the original, let’s be content with that. 😀

]]>
https://www.cocoanetics.com/2012/01/c-code-outperforms-nsscanner/feed/ 3 5813
genstrings2 https://www.cocoanetics.com/2012/01/genstrings2/ https://www.cocoanetics.com/2012/01/genstrings2/#comments Wed, 04 Jan 2012 16:05:07 +0000 http://www.cocoanetics.com/?p=5797 Actually I am on vacation, but I couldn’t help using the breather to work on a little hobby project. This I shall reveal to you today!

If you localize your apps (iOS or Mac regardless) you are probably used to working with genstrings. Probably painfully working, which is why we built the Mac app Linguan which remote-controls genstrings and merges the results with your previously existing tokens.

But genstrings has several big problems.

The most pressing problem of using genstrings from Linguan is that it is very picky about the syntax of the string macros. If you have a parameter for the macro that is not a literal @”string” then it will crash. Several Linguan users have been stumped by this behavior because this makes it impossible for Linguan to scan their source code.

The second problem is that we don’t have access to the genstrings source code, and so we can neither know what this tool really does nor modernize it to make use of modern multi-processor systems. genstrings is single-threaded.

The third problem is that until now the only way we could call genstrings from within Linguan was basically to open an invisible shell and then read the output files from some hidden temporary output folder. You get genstrings with Xcode, but what if you don’t want to install Xcode on the machine you want to use with Linguan? Having to spawn a process for each source file individually has also proven to be impracticable: instead of 0.3 seconds it would take 12 seconds to scan just the Linguan source files.

If that is any indication, the genstrings man page has not been updated since May 7, 2007. Any substantial improvements to the tool are probably even older than that; somebody from Apple told me that it goes back as far as the NeXT days.

My goal was to reverse-engineer the functionality of genstrings, put that into a static library and thus be able to bring native high-performance string scanning to Linguan without the need for genstrings to be present any more.

My latest Open Source project DTLocalizableStringScanner (on GitHub) contains two targets: one for a static library which we can use for scanning the source files. The other is a re-implementation of a command line tool which can take the place of genstrings.

genstrings2 is built around NSScanner for the scanning and GCD queues for multi-threading. There might be faster ways to scan for the macros, but there is no reason to do that at the expense of the code being easy to read. I’d rather fill up a Grand Central Dispatch queue with one block per source file and thus make full use of multiple CPU cores. It works on as many files in parallel as GCD permits.

You can help hone this into a worthy successor of genstrings by testing it on your own source code and telling me about edge cases that it does not handle properly. For one thing, it should ignore invalid macros instead of crashing like the original does; that’s one immediate advantage you get from symlinking genstrings2 in place of genstrings.

There is one (undocumented) behavior that is not implemented yet. genstrings apparently expands tokens containing lists in the format [one, two, three] into three lines with [one], [two] and [three]. People use this to localize NSPredicateEditor. But that’s an easy exercise that I will do in the next few days.

If there are no show stoppers then DTLocalizableStringScanner will replace genstrings already in Linguan 1.0.3.

]]>
https://www.cocoanetics.com/2012/01/genstrings2/feed/ 5 5797
Decoding Safari’s Pasteboard Format https://www.cocoanetics.com/2011/09/decoding-safaris-pasteboard-format/ https://www.cocoanetics.com/2011/09/decoding-safaris-pasteboard-format/#comments Fri, 02 Sep 2011 15:43:39 +0000 http://www.cocoanetics.com/?p=5386 If you have ever looked at UIPasteboard you might have seen that there are a variety of public types, like colors, images, plain text or URLs. But applications are also free to implement their own types to be put on the pasteboard when the existing types don’t do justice to your content.

One such custom type is being used between Apple’s iOS apps, like mobile Safari, whenever you copy HTML snippets: “Apple Web Archive pasteboard type”. At first glance this looks really secretive because all you can see there is a long string of numbers representing the NSData for it.

Today I am unveiling an Open Source solution for consuming this pasteboard type as well: DTWebArchive. Using this you can let your users copy something from Safari or Mail and paste it into your app while preserving the rich text. I put this code into a new GitHub repository because this project can be very useful to you even if you don’t dabble with CoreText and NSAttributedStrings+HTML.

When I started implementing cut/copy/paste for my DTRichTextEditor I found that text copied from Safari is available on the pasteboard both as plain text and as the above-mentioned type. At that point I thought, “Boy, wouldn’t it be great if we could get at the HTML that was pasted?” Well, now you can.

The reverse-engineering began by inspecting the NSData with a text editor. All I needed was to see the “bplist” magic at the beginning to know that this is in fact a binary plist. I deserialized it and was astonished to find that it is simply an NSDictionary.
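Checking for that magic is trivial. A small C sketch, not part of DTWebArchive: binary property lists begin with the ASCII bytes "bplist" followed by a version, e.g. "bplist00".

```c
#include <string.h>

/* Returns 1 if the data starts with the binary plist magic "bplist". */
static int looks_like_binary_plist(const unsigned char *data, size_t len)
{
    return len >= 8 && memcmp(data, "bplist", 6) == 0;
}
```

Sniffing a few leading bytes like this is a common way to identify a serialization format before handing the data to the real deserializer.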

There are several elements to this archive, the most important one being the main contents in a WebArchive class which has mime type “text/html” and is the pure HTML that we are seeking. In addition to that – if you have copied images as well – you see an array of WebResource elements where each encapsulates a file, typically an image, sometimes a CSS file. Those are basically cached copies. The original HTML still has the web URLs in it, but you can search for the WebResource with the same URL to find the local version.

Finally, if you have IFRAMEs in the main HTML, then each IFRAME will also have a corresponding WebArchive in a separate array. For example, an HTML5 YouTube video would be showing in an IFRAME.

Because WebKit is Open Source you can see for yourself why my cleanroom re-implementation is way more portable. Here’s the source for WebArchive.mm and here for WebResource.mm, which relies heavily on LegacyWebArchive.cpp.

Here’s an example where I copied text with one image.

I decoded this pasteboard item with DTWebArchive and output the HTML.

DTWebArchive *archive = [[DTWebArchive alloc] initWithData:data];
 
DTWebResource *resource = archive.mainResource;
NSString *string = [[NSString alloc] initWithData:resource.data encoding:NSUTF8StringEncoding];
 
NSLog(@"%@", string);
 
[archive release];

This is not entirely correct, because it hard-codes NSUTF8StringEncoding; it should use the actual encoding of the WebArchive contents. But you get the idea.

I copied this into a snippet to look at it with the NSAttributedString+HTML demo project and this is what you see:

(Actually I had to cheat a bit because I found that I first had to fix something in NSAS+HTML that would cause the image to shift upwards.)

The next step for me is to add this project as a submodule to NSAS+HTML and implement convenience methods to convert between attributed strings and web archives. Some convenience method to get the UIImage for a specific URL might be nice as well.

For you this means that you can now add copying and pasting of rich HTML to your applications as well.

]]>
https://www.cocoanetics.com/2011/09/decoding-safaris-pasteboard-format/feed/ 2 5386
Start Floating https://www.cocoanetics.com/2011/07/start-floating/ https://www.cocoanetics.com/2011/07/start-floating/#comments Thu, 28 Jul 2011 16:23:48 +0000 http://www.cocoanetics.com/?p=5261 You might have noticed that I blogged much less during the past 3 months; that was for the most part because almost all of my programming time went into a secret project for Scribd. Something that was finally revealed to the public on July 19th, 2011.

Preempting the next question I am usually asked at this point: “What is Scribd?” Scribd is often described as “the YouTube of Documents”. You can upload and share any kind of document on their network and they have an HTML5 reader that you can embed on your blog.

At the time the official statement was that Scribd is working on a mobile reader for iOS and they needed much more control over the rendering and interactivity of HTML-based content than UIWebView would afford them.

Let me tell you how Scribd has completely ditched UIWebView and is revolutionizing the way you read on your iPhone. Introducing Float.

What is Float? Think of it like this: everything digital that you are reading today (blogs, tweeted articles, articles you have on a reading list, eBooks, documents, etc.) in one amazing mobile native app that displays the content in the most perfect way possible on your device.

What follows here is a look at Float from a technical point of view, which I can provide because I was put in charge of the rich text display. If you are interested in reading more about the user experience itself then have a look at the reviews on all the other major blogs. I especially liked the review on Wired.com, which mentions me by name. 🙂

Only The Best Ingredients

My story with Float started 3 months ago. Sam Soffes – whom I greatly admire – discovered my work on CoreText and got me in contact with Scribd’s CTO Jared Friedman. We both were excited by the prospect of doing something on iOS that had never been done before, both technically as well as for the user experience.

The basic approach that almost everybody in our industry takes when it comes to displaying rich text is to use UIWebViews. This leads to several problems, because Apple does not let us at the inner workings of WebKit which ticks inside of these web views. Instead you have to resort to complicated hacks to find out things like the content size of a web page, or inject JavaScript commands to implement any sort of non-standard communication.

Oh and it is slow to load, takes enormous amounts of memory and is almost impossible to customize visually if you don’t want to swizzle methods or hack the view hierarchy. Long story short: UIWebView sucks. Ok, you get the picture, now let me tell you how the grown-ups do it.

With iOS 3.2 Apple brought CoreText over to the mobile platform. CoreText brings you NSAttributedString, which is basically an NSString where certain ranges of text can have different attributes. These attributes – like a certain font or text color – are contained in NSDictionaries that apply to a certain range. Problem is, Apple forgot to include the initWithHTML method that exists on OS X. There you can easily create an attributed string from an HTML file and render it in a CATextLayer. Not so on iOS, where you have to painstakingly construct the attributed strings yourself. And not just that: CATextLayer exists, but it ignores certain attributes altogether, making it practically unusable.

Enter NSAttributedStrings+HTML, my Open Source project to make it all much easier. It contains three major parts:

  1. A category for NSAttributedString that gives you an initWithHTML
  2. Objective-C wrapper classes to wrap CoreText objects
  3. UI Classes to render this rich text

The main reason why I chose to make it FOSS is that as a single person I can never hope to implement all of HTML. If I had chosen to make this for-pay software, people might have felt entitled to get certain things implemented. Instead, on GitHub, they can make the additions they need and send me a pull request to merge their changes into master, to the benefit of all.

Scribd is a big believer in Open Source, and so you can thank them for sponsoring the project for the past 3 months. Almost all the commits I made during this time came out of that. A few of the most recent ones came from John Engelhart, who famously created the world’s fastest JSON parser: JSONKit. BTW, John recently signed up full-time to work on Scribd’s iOS team. And so can you, they are hiring more than ever, especially since Sam Soffes moved on to new horizons.

Personally I am too old and too married here in Austria to move to San Francisco, so I’ve been working under contract with Scribd. Because of this our contract contains three levels of licensing to cover the various levels of ownership and rights of the source code involved:

  • Open Source – code is owned by me, licensed publicly, Scribd would pay for the continued development necessary to provide the foundation for the other levels
  • Proprietary – there is a great deal of proprietary code I developed on top of the open source. This code is owned by Scribd.
  • Cross-Licensed – a few bits of code are owned by me and developed for use in my rich text editor. These I need to own so that I can sell them in other components, but Scribd has an automatic license.

At this point I probably should give you a demo of the app so that you see what I am talking about.

//www.youtube.com/watch?v=OmIIxXFhBvs

All the rendering you see is in the Open Source project. Everything related to user interaction is the proprietary part. To give some examples of Scribd proprietary tech: dynamic switching between paging and scrolling mode, fast font resizing and style changing, and multi-threaded layout. Imagine me tinkering, programming and experimenting for the better part of two months to get the performance and responsiveness to where it is today.

How Float Got Its Name

Let me also tell you the story of how Float got its name, because this is closely related to this topic. Scribd’s CEO Trip Adler had a certain vision of what the ideal reading experience would look like. He felt that the user/reader should be able to decide for himself whether he preferred scrolling or paging through text. This is what he called “the Floating Reading Experience”. The user should be able to float around without such artificial bounds.

I have to admit – at that time – I was unsure how I would pull that off, secretly I thought Trip was crazy, pardon the pun: tripping. But I kept experimenting, tinkering, threw away a couple of failed approaches until I ended up with what you see today. Lo and behold, it is amazingly close to the original vision which is why Scribd rebranded the app as Float.

I learned one thing from this experience: as an engineer you might consider some ideas that your leaders come up with as crazy and impossible. But if you put enough work into it, even the impossible becomes possible after a while. The “Impossible” takes around 2 months.

There is one other ingredient in Float’s secret sauce that makes this possible. Scribd has developed amazing server-side technology to break apart fixed-structure documents of any kind and make them reflowable. For example, they can dissolve a PDF and provide output where you can change the font size and pagination based on the device you are reading on. If something is too complex to be rendered as pure text (like tables or a signature), then they render an image of it.

This is actually the much greater feat, because on my end CoreText only has to worry about displaying this custom “clean room” HTML, as opposed to having to deal with malformed tags and whatnot. I keep telling people that NSAS+HTML is great if you control the HTML, and this is the perfect example. I know exactly which tags can come out of this reflow process, and this predictability is key to having the display of text work all of the time.

So these are the two main ingredients that make Float unique and miles ahead of any kind of competition. And of course it gives me as a contributor a good feeling to be associated with a company that is in this position.

Try It Out Now!

Only time can tell if users will understand and accept how revolutionary Float actually is. But from a technical perspective I can honestly say that we did all that is humanly possible to make the greatest app (and service) we can. Please try it out now, it’s free. And once you have sufficiently marveled at it, let us know – preferably publicly – how you like it.

]]>
https://www.cocoanetics.com/2011/07/start-floating/feed/ 3 5261
Translating NSTimeZone Geopolitical IDs https://www.cocoanetics.com/2011/03/translating-nstimezone-geopolitical-ids/ https://www.cocoanetics.com/2011/03/translating-nstimezone-geopolitical-ids/#comments Wed, 02 Mar 2011 18:01:59 +0000 http://www.cocoanetics.com/?p=4723

For my new version of Summertime I am building a time zone picker. You can get the known time zone names from the NSTimeZone class, but unfortunately Apple does not give us any localization for these. The localizedName:locale: method gives you localized names of the time zones themselves (e.g. “Pacific Standard Time”) in various formats. But what I found missing is a way to have the geo names localized as well.

If I have my iPhone set to German I want to find my timezone by entering “Wien”, not “Vienna”.

My initial thought was to keep this to myself, but since I only speak German and English I can never hope to have the translations be perfect unless I paid several translators to comb through them. And you know, Google Translate is great, but not 100%. So I started a new Open Source project on GitHub: NSTimeZone+Localization, which aims to remedy this.

I wrote a quick command line tool which passes the knownTimeZoneNames through Google Translate for German, Spanish, French and Dutch, as these are the languages that Summertime is currently localized in. More languages can easily be added the same way, but in general I want to ask you a favor: if you speak any of these languages, please have a quick look to see if you spot any spelling or translation mistakes.

I had a modified version of LKGoogleTranslator handy, which I had to modify again to include my API key. This still uses version 1.0 of the API while 2.0 is the newer one, but since it worked I did not bother.

#import "LKGoogleTranslator.h"
int main (int argc, const char * argv[]) {
    NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
 
	LKGoogleTranslator *translator = [[LKGoogleTranslator alloc] init];
 
	NSMutableString *string = [NSMutableString string];
 
	for (NSString *timezoneName in [NSTimeZone knownTimeZoneNames])
	{
		// replace slashes with commas, works better in Google Translate
		NSString *searchName = [timezoneName stringByReplacingOccurrencesOfString:@"/" withString:@","];
		searchName = [searchName stringByReplacingOccurrencesOfString:@"_" withString:@" "];
 
		NSString *translation = [translator translateText:searchName fromLanguage:@"en" toLanguage:@"nl"];
		translation = [translation stringByReplacingOccurrencesOfString:@"," withString:@"/"];
		translation = [translation stringByReplacingOccurrencesOfString:@"/ " withString:@"/"];
		translation = [translation stringByReplacingOccurrencesOfString:@" " withString:@"_"];
 
		[string appendFormat:@"\"%@\" = \"%@\";\n", timezoneName, translation];
 
		NSLog(@"%@ = %@", timezoneName, translation);
	}
 
	NSLog(@"%@", string);
 
	[translator release];
 
    [pool drain];
    return 0;
}

The resulting string blob I just pasted into strings files. With the help of a small category extension on NSTimeZone you can now get the localized geopolitical name.

#import "NSTimeZone+Translation.h"
 
@implementation NSTimeZone (Translation)
 
- (NSString *)localizedGeoName
{
	return NSLocalizedStringFromTable(self.name, @"LocalizableTimezones", @"NSTimeZone+Translation");
}
 
@end

This works because the localizable strings files for this are called LocalizableTimezones.strings. Instead of the regular NSLocalizedString macro I’m using NSLocalizedStringFromTable with the middle parameter being the name of the strings file minus extension.

You can use this functionality in your apps if in turn you feed back errors to me so that I can correct them. If you want additional languages to be added, let me know. I’ll make the same starting list via Google Translate for any newly requested languages and then you can manually polish them.

]]>
https://www.cocoanetics.com/2011/03/translating-nstimezone-geopolitical-ids/feed/ 6 4723
OpenSource’ing MyAppSales https://www.cocoanetics.com/2011/01/opensourceing-myappsales/ https://www.cocoanetics.com/2011/01/opensourceing-myappsales/#comments Sun, 30 Jan 2011 11:25:25 +0000 http://www.cocoanetics.com/?p=4678 One of the things that people know me for is continuing to develop MyAppSales, my favorite tool to download and keep those elusive sales reports from iTunes Connect. It’s always been a hobby, and until now I’ve allowed people to access the source in exchange for a mandatory donation. This went on for almost two years.

Those donations never made sufficient money for me to pay for professional development. But I felt that I had to ask the approximately 500 people on the Google group for their opinion, as they might see their donation as a purchase and not like the idea of this software now being available for free. Boy, was I wrong. Resoundingly, people voted “+1” for going Open Source.

So here it is. It’s Open. Read on for how I moved the repo from my SVN server to GitHub, including the entire revision history.

I wanted to move the entire history so far onto GitHub so that it was preserved. Fortunately this capability is already built into git. It will get all revisions and convert them into commits.

Create a local folder and init a git repo:

mkdir MyAppSales
cd MyAppSales
git init

Set an SVN URL and fetch the entire history with all revisions; this capability is built right into git. On my first go I made the mistake of importing the project root, including trunk, tags and branches. So I aborted it, since I only wanted the contents of the trunk.

git svn init https://svn.cocoanetics.com/myappsales/trunk
git svn fetch

Set the remote repo and push everything there. The pull was necessary because I had committed and pushed a README before the import.

git remote add origin git@github.com:Cocoanetics/MyAppSales.git
git pull origin master
git push origin master

I almost couldn’t believe how easy it was, but since the upload took only a couple of seconds and produced no error message, I figured that it worked flawlessly. Upon checking the commits section of the online browser I found that, amazingly, the entire commit history was present. It made me smile to see the first import dated March 29, 2009.

One thing to notice is how my user name changed every time I moved to a new server. Commits by “oliver” were done to the first repo, which my friends at Spielhaus hosted. Then I moved to my new server, changing to “odrobnik”. Now on GitHub I’m “Cocoanetics”.

Also it turns out that the project is desperately in need of some cleanup work regarding where files are located in the folders. Class files in the root, images all over the place.

Two more things I had to do: I added a license and a .gitignore file so that git would not push unwanted files. The first entry is created by OS X, build folders we never want, and the last two are user settings which you don’t need either.

.DS_Store
build
*.mode1v3
*.pbxuser

I have several updates to this source still in progress on my Laptop. I’m working on migrating the entire database to CoreData because at present bearable performance is only achieved by caching to compressed files. So this is what I’ll bring to this project next.

It’s also my hope that with the source being now open that this will encourage current and future users to contribute. The repo is located at https://github.com/Cocoanetics/MyAppSales. And of course donations are still very welcome, my PayPal address is oliver@drobnik.com.

]]>
https://www.cocoanetics.com/2011/01/opensourceing-myappsales/feed/ 5 4678
iPhone Landscape Stands – Cheap or DIY https://www.cocoanetics.com/2010/03/iphone-landscape-stands/ https://www.cocoanetics.com/2010/03/iphone-landscape-stands/#respond Thu, 04 Mar 2010 08:07:44 +0000 http://www.drobnik.com/touch/?p=2210 After having released SpeakerClock, a speech countdown clock, I got a couple of suggestions; one was not about the app itself, but about how you could stand the iPhone on its side.

Kevin Jamison: “Now we just need to come up with a simple little stand similar to the iPod Touch uses to hold it in a tilted landscape mode. Maybe rubber coated feet so it won’t slide if you are using a podium.”

I remembered that every once in a while the blogosphere brought to light yet another clever way of solving such problems. A quick search on Google and YouTube yielded a couple of great solutions: some that you can purchase, some that you can make yourself at little to no cost.

First let’s look at the professional solutions which typically cost between $5 and $10. After we’ve set the benchmark we’ll explore how we can achieve landscape stability ourselves. Building something useful out of physical things might be a welcome distraction from hours of coding Cocoa. 🙂

Professional Stands

MovieWedge

from MovieWedge.com

Crabble

from Seskimo:

The Crabble is the only wallet-sized stand to offer an easily and continuously adjustable landscape viewing angle, ranging from 45 to 90 degrees, and, with its rubber claws, it is the only wallet-sized stand to give you that grip.

GoGoStand

from GoGoStand:

The GoGoStand is a plastic folding stand that you can tuck away in your wallet for quick access whenever you need to prop up your phone or gadget. This stand was designed for all generations of iPhones and iPod Touches, but also works with many other phones including the Android G1, Droid, Palm Pre and more.

Travel Stand

from Griffin Technology:

Travel Stand flips open to safely hold your iPhone or iPod touch in landscape mode for comfortable, hands-free viewing of your movies, videos, music, photos and more. When the show’s over, wind your earphones around the included cord wrap and flip the Travel Stand closed. Compact enough to slip into your pocket, Travel Stand is the perfect blend of stand and storage.

Do it Yourself

Paperclip

Mini-DV Tape Box

LEGOs

The maker made the building instructions available at my request.

Paper

$100 Bill

Same folding technique as the paper version, by Enrique Pardo.

Laser-Cut Acrylic

Make Magazine has the guide, all you need is access to a laser cutter…

Credit Card

All you need are a pair of scissors, some tape and an expired credit card. By Rod Porterfield.

Cardboard Sleeve

A cup sleeve that you would get free with coffee at – say – a Starbucks.

Paper Cup

A dixie cup with optional cancer society rubber wristband for additional grip.

Business Card

By far the easiest, most spontaneous possibility, no tools required.

Conclusion

Having reviewed quite a few ideas that other people had, for either sellable products or DIY options, I think I would go for a solution that I could carry amongst my credit cards. There the GoGoStand looks like the best professional solution to me, even though the media is hyping the MovieWedge. But I don’t see that fitting into my wallet.

Amongst the DIY options it depends on whether you have time and tools at hand. The variant with the greatest likelihood of being portable is the one using nothing but a business card. You don’t need any tools for it, and the necessary folding is easier to remember than for the bill or paper versions.

Please comment below with what you’re using, or if you’ve found a version that I have not covered.

]]>
https://www.cocoanetics.com/2010/03/iphone-landscape-stands/feed/ 0 2210
Featured by Apple https://www.cocoanetics.com/2010/01/featured-by-apple/ https://www.cocoanetics.com/2010/01/featured-by-apple/#respond Fri, 15 Jan 2010 18:08:52 +0000 http://www.drobnik.com/touch/?p=1859 It is with a certain pride that I am announcing that I truly was involved with the creation of an app that made it to the top. Well, at least in Austria and Germany, and at least in a category.

I’m referring to Infrared Photography, an app that was contracted out to us by none other than Michael Görmann, a professional photographer in need of a signature app that he could hand out as a sort of business card. Via a short detour he arrived at our virtual doorstep and was promptly served a cute little app.

To give credit where it is due, and to put it more precisely: the programming itself was carried out by my brother-in-law, whom I had the pleasure of helping out here and there and generally keeping a mentoring eye over when it comes to iPhone programming. He was, and still is to this date, an expert in server-side Java, Android and Symbian. But about iPhone development I can proudly say: “He got that from me.” His name is Rene Pirringer and his company is ciqua. We are working closely together on iPhone projects.

Infrared Photography is unique in that it gives you an endless supply of beautiful black & white photos, which were made with expensive infrared equipment and the unmatched eye of Mr. Görmann. So unique, in fact, that Apple approached him and asked him to put a couple more screenshots online because they were planning to feature it. And so they did.

You could see an immediate jump in downloads, for the most part in Germany and Austria where it was featured; it seems those countries are served by the same iTunes team at Apple. Being featured like this glued the app to first place, as you can see here:

We are thankful that this chance found us, and to all of you out there who keep downloading the app. It’s an amazing reference for us.

]]>
https://www.cocoanetics.com/2010/01/featured-by-apple/feed/ 0 1859