Our DNA is written in Swift

Zarra on Locking

In my previous article on multi-context Core Data I introduced a 3-context scheme that Marcus Zarra had shown us in a back room at the 2012 NSConference.

Several people inquired about locking; take, for example, Wim Fikkert:

Thanks for the great article. However, like Tom, I was wondering whether the main context will be blocked whenever you perform a save to disk with the persistent store. I am using the last design pattern, and I keep running into my app locking up. I have done some more searching and came across this article. Perhaps you can comment?

I don’t pretend to come anywhere close to being the Core Data expert that Zarra is, so I went straight to the horse’s mouth and asked him.

Marcus Zarra kindly responded, shedding some light on the matter; emphasis mine.

There is always a lot of confusion around locking. [Editor: DOH!]

Whenever you save to the disk, no matter what context you are in, single or multiple, you are going to “lock” the NSPersistentStore while the write is taking place. That has always been true and most likely always will be true.

What is not true is that the main context gets locked. That is an over-simplification of what happens. If, during a save, you attempt to fetch more data from disk, then you will be blocked. That is the effect people are seeing and screaming about.

Aha, so locking only affects the persistent store (aka the SQLite file). This is the 3-level approach I was referring to:

Here we see that we only have writing access to the PSC (Persistent Store Coordinator) in the background writer context. In my open-source DTDownloadCache I am doing the actual save to disk on a delayed timer that checks whether there are changes and, if there are any, saves them.
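The 3-level stack can be sketched in Swift roughly as follows. This is a minimal sketch, not the article's actual code: the programmatic model and the "Item"/"title" names are placeholders I am assuming for illustration, and an in-memory store stands in for the SQLite file.

```swift
import CoreData

// Minimal programmatic model so this sketch is self-contained;
// in a real app you would load your compiled .xcdatamodeld instead.
// "Item" and "title" are hypothetical names.
let itemEntity = NSEntityDescription()
itemEntity.name = "Item"
let titleAttribute = NSAttributeDescription()
titleAttribute.name = "title"
titleAttribute.attributeType = .stringAttributeType
itemEntity.properties = [titleAttribute]

let model = NSManagedObjectModel()
model.entities = [itemEntity]

let coordinator = NSPersistentStoreCoordinator(managedObjectModel: model)
// An in-memory store stands in for the SQLite file here.
try! coordinator.addPersistentStore(ofType: NSInMemoryStoreType,
                                    configurationName: nil, at: nil, options: nil)

// 1) Background writer context: the only context attached to the
//    coordinator, so saves to disk happen off the main queue.
let writerContext = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
writerContext.persistentStoreCoordinator = coordinator

// 2) Main-queue context for the UI: a child of the writer, so its
//    saves are pushed up in memory, not written to disk.
let mainContext = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
mainContext.parent = writerContext

// 3) Temporary worker context for imports and edits: a child of the
//    main context; its saves land in the main context.
let workerContext = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
workerContext.parent = mainContext
```

Only the writer context ever touches the coordinator, so the store-level lock that a save takes never sits directly on the main queue.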

So what about all those other copies of the data that are already in memory?

However, if you are working with data that is already in memory then you won’t be blocked.

Also, depending on what you are fetching and how your fetch is configured, you can do a fetch that doesn’t block because the data is already in memory. That is harder to accomplish though, so I relegate it to a side point. I should note that the default for an NSFetchRequest is to hit disk no matter what (this is the staleness interval).

The core problem is the locking of the NSPersistentStore. Since that cannot be avoided, it needs to be lessened. To do that you spread it out into smaller hits so that the main (or other) contexts have a chance to get in and retrieve data frequently enough that the user does not get the sensation of “locking”. Small, frequent saves during imports are a common solution.
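A minimal sketch of such small, frequent saves during an import might look like this — assuming a private-queue import context and a hypothetical "Item" entity with a "title" string attribute (neither name is from the article):

```swift
import CoreData

// Sketch of "small/frequent saves" during an import. Call this from
// inside importContext.perform { ... } so the context is used on its
// own queue. Each batched save is a short write — and thus a short
// lock of the store — instead of one long blocking one.
func importTitles(_ titles: [String],
                  into importContext: NSManagedObjectContext,
                  batchSize: Int = 250) throws {
    var pendingCount = 0
    for title in titles {
        let item = NSEntityDescription.insertNewObject(forEntityName: "Item",
                                                       into: importContext)
        item.setValue(title, forKey: "title")
        pendingCount += 1

        if pendingCount >= batchSize {
            try importContext.save()
            importContext.reset() // let imported objects be released from memory
            pendingCount = 0
        }
    }
    // Save whatever is left in the final partial batch.
    if importContext.hasChanges {
        try importContext.save()
    }
}
```

The batch size of 250 is an arbitrary starting point; as the article concludes below, the right frequency is something you have to find experimentally.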

Now reading access is something that might cause a pause when the NSFetchedResultsController cannot find current data in the main MOC. Then it has to ask its parent context for the data, which has to get it from disk.

And of course Zarra also has an alternative, though even he doubts its viability:

Another solution that is a bit hairy is to have *two* NSPersistentStoreCoordinator instances, one for writing and one for reading. However, that gets very complex very fast, and I never recommend it. With a double-PSC design you must tell the primary context when things change, because it has no automatic notification. I mention it for completeness of the answer as opposed to a suggestion. I have yet to see that solution end well 🙂

Let’s recap:

  • writing to disk = locking the store, always
  • accessing data already in memory does not lock within the staleness interval; otherwise it has to go up the chain and eventually get fresh data from disk
  • importing data = small/frequent saves (decoupled as shown above)

My personal conclusion is that you’ll have to experiment with how frequently you save the managed object context that is connected to your persistent store. There might be scenarios where you can get away with saving all the time, and others where you should wait for an opportune moment in the UI interaction when the user won’t notice a slight pause on the main thread.
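Such an "opportune moment" save could be sketched like this, assuming writerContext is the private-queue context attached directly to the persistent store coordinator (performAndWait is used here for clarity; a timer-driven version would use perform instead):

```swift
import CoreData

// Only touch the disk when the writer context actually has pending
// changes — a save with nothing to write means no store lock at all.
func saveToDiskIfNeeded(_ writerContext: NSManagedObjectContext) throws {
    var saveError: Error?
    writerContext.performAndWait {
        guard writerContext.hasChanges else { return } // nothing to write
        do {
            try writerContext.save() // this is the write that locks the store
        } catch {
            saveError = error
        }
    }
    if let error = saveError {
        throw error
    }
}
```

You could call this from a repeating timer, on app backgrounding, or at whatever UI moment your experiments show the user won’t notice.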

Thank you Marcus Zarra for your generously dispensed insight!

Categories: Q&A


  1. Shouldn’t the SQLite store be capable of more granular locking? Why should entity A be locked when writing to entity B? Every database I know of can do table level locking and some even do row level locking. I would have expected this kind of granularity from Core Data with the SQLite store.

  2. Well, SQLite is not a high-performance database engine like MySQL or Oracle. It is a liberally licensed open-source project that keeps all database data in a single file.

  3. SQLite is not client-server, but it does have a pretty good single-user SQL implementation. I’m sure it still stores different tables in different pages of data, and it should be possible to at least implement table-level locking; it would surprise me if it doesn’t. Are you sure it doesn’t do this?

  4. I have no idea about the inner workings of SQLite. Note that locking and blocking might not be the same thing in this context. You’d have to ask the guy who wrote SQLite about that.

  5. I’ve explored different setups, because on big data imports it still chokes. What about attaching two managed object contexts (UI and background writer) to the persistent store directly? Another viable option is to have the UI context connected to the background writer together with other contexts as siblings (not children). But then you have to notify the sibling contexts yourself. Did you test any of these setups?

  6. The best method for big amounts of data is not to import at all, but to generate an SQLite file on the server and transfer that instead. Then you can add it as an additional persistent store.

  7. It is not SQLite that is locking. Another common misconception. It is the NSPersistentStore that is locking (Note, not the NSPersistentStoreCoordinator). SQLite can handle multiple connections, hence the complex solution of having multiple NSPersistentStoreCoordinators. As to why the NSPersistentStore locks, probably because it was deemed the easiest/safest solution at the time. I always recommend filing a radar when you don’t like the way things are currently designed. Your votes matter.

  8. Would love to see someone create a definitive CoreData bootstrap project. Magical Record gets really close to that, but multi context saving and fluid UI are still a headache.

  9. I would not recommend using MagicalRecord. Any time you add a wrapper around Core Data it is going to introduce additional unknowns. IMHO, Core Data is abstracted enough. Learn the Apple frameworks and you will be better off.

    As for a bootstrap project: I have done a few over the past couple of years. There is one on my blog and one in my shared GitHub repository, as well as one in my book.

    Now that it is mentioned, I should probably do an updated one on the blog that uses the new frameworks.

  10. So Marcus, there’s something very disturbing to me. You don’t recommend MR, but all the articles Saul writes about it are on your blog. As much as you appreciate Saul as a human being (so do I), how on earth is he writing about a technology that you (the Cocoa guy) don’t recommend on CIMGF? 🙂 😉 PS: I do use MR on 2 projects, and right now the biggest problem I have is this whole parent/child context craziness 🙂

  11. Why is Magical Record on CIMGF when I don’t agree with it? A few reasons:

    1. I am not perfect, nor am I all-knowing. Someone could write something that I disagree with, and I could be the one in the wrong.
    2. I loathe censorship in any form.
    3. Even though I disagree with it, others may not. Even if I am correct that it is the wrong approach, others can learn from it.

    When Saul first posted about it on CIMGF, I debated internally for a long while before deciding that, since I trusted the developer to do his best, I would not censor what he writes just because I disagree with it or think it is wrong.

  12. How about memory management?

    According to Apple’s documentation for Core Data, newly added objects are not freed from memory until you save:
    “Put another way, just because you fetched an object doesn’t mean it will stay around.
    The exception to this rule is that a managed object context maintains a strong reference to any changed (inserted, deleted, and updated) objects until the pending transaction is committed (with a save:) or discarded (with a reset or rollback).”

    So when you save in a child context, can those child contexts free memory again?
    And when they try to access those managed objects again, do they get their objects from their parent context instead of from the hard drive?
    Plus… when I save inside the background writer MOC, the background writer’s queue will basically block… do consecutive saves pile up? Or what happens when I start a second write?