Several people inquired about locking; take, for example, Wim Fikkert:
Thanks for the great article. However, like Tom, I was wondering if the main context will not be blocked whenever you perform a save to disk with the persistent store. I am using the last design pattern, and I keep running into my app locking up. I have done some more searching and came across this article. Perhaps you can comment?
I don’t pretend to come anywhere close to being the Core Data expert that Zarra is, so I went straight to the horse’s mouth and asked him.
Marcus Zarra kindly responded, shedding some light on the matter; emphasis mine.
There is always a lot of confusion around locking. [Editor: DOH!]
Whenever you save to the disk, no matter what context you are in, single or multiple, you are going to “lock” the NSPersistentStore while the write is taking place. That has always been true and most likely always will be true.
What is not true is that the main context gets locked. That is an over-simplification of what happens. If, during a save, you attempt to fetch more data from disk, then you will be blocked. That is the effect people are seeing and screaming about.
Aha, so locking only affects the persistent store (aka the SQLite file). This is the 3-level approach I was referring to:
Here we see that only the background writer context has write access to the PSC (Persistent Store Coordinator). In my open source DTDownloadCache I do the actual save to disk on a delayed timer that checks whether there are changes and, if there are any, saves them.
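To make that concrete, here is a minimal sketch of such a 3-level stack with a delayed save timer. The names (`_writerContext`, `_autoSave:`) and the 10-second interval are my own illustration, not actual code from DTDownloadCache:

```objc
// Hypothetical 3-level stack; names and interval are illustrative.

- (void)_setupCoreDataStack
{
   // only the writer context talks to the persistent store coordinator
   _writerContext = [[NSManagedObjectContext alloc]
                     initWithConcurrencyType:NSPrivateQueueConcurrencyType];
   _writerContext.persistentStoreCoordinator = _persistentStoreCoordinator;

   // the main context is a child; its -save: only pushes changes up
   // to the writer context, it never locks the store itself
   _mainContext = [[NSManagedObjectContext alloc]
                   initWithConcurrencyType:NSMainQueueConcurrencyType];
   _mainContext.parentContext = _writerContext;

   // check every 10 seconds whether there is anything to write to disk
   _saveTimer = [NSTimer scheduledTimerWithTimeInterval:10.0
                                                 target:self
                                               selector:@selector(_autoSave:)
                                               userInfo:nil
                                                repeats:YES];
}

- (void)_autoSave:(NSTimer *)timer
{
   [_writerContext performBlock:^{
      if (![_writerContext hasChanges])
      {
         return;
      }

      NSError *error = nil;

      if (![_writerContext save:&error])
      {
         NSLog(@"Auto-save failed: %@", error);
      }
   }];
}
```

Because only the writer context is attached to the PSC, the main thread never writes to disk directly; it just pushes its changes down and lets the timer flush them to the store later.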
So what about all those other copies of the data that are already in memory?
However, if you are working with data that is already in memory then you won’t be blocked.
Also, depending on what you are fetching and how your fetch is configured, you can do a fetch that doesn’t block because the data is already in memory. That is harder to accomplish, though, so I relegate it to a side point. I should note that the default for an NSFetchRequest is to hit disk no matter what (this is the staleness interval).
The core problem is the locking of the NSPersistentStore. Since that cannot be avoided it needs to be lessened. To do that you spread it out into smaller hits so that the main (or other) contexts have a chance to get in and retrieve data frequently enough that the user does not get the sensation of “locking”. Small/frequent saves during imports is a common solution.
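A batched import like that could look something like the following sketch; `importContext` (a private-queue context), the entity name and the batch size of 100 are all made up for illustration:

```objc
// import on a private-queue context, saving every 100 objects so the
// store lock is held in many short bursts instead of one long one
NSUInteger const kBatchSize = 100;
NSUInteger count = 0;

for (NSDictionary *record in recordsToImport)
{
   NSManagedObject *object = [NSEntityDescription
                              insertNewObjectForEntityForName:@"Download"
                              inManagedObjectContext:importContext];
   [object setValuesForKeysWithDictionary:record];

   if (++count % kBatchSize == 0)
   {
      NSError *error = nil;
      [importContext save:&error];

      // drop the saved objects from memory before the next batch
      [importContext reset];
   }
}

// save the remainder of the last, partial batch
NSError *error = nil;
[importContext save:&error];
```

Each individual save still locks the store, but only briefly, so the main context gets plenty of opportunities to read in between.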
Reading access is something that might cause a pause, too: when the NSFetchedResultsController cannot find current data in the main MOC, it has to ask its parent context for the data, which in turn has to get it from disk.
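For reference, the staleness interval Zarra mentions is a property of NSManagedObjectContext. A sketch of how it comes into play (the 5-minute value and `someObjectID` are made up):

```objc
// cached row data is considered valid for this many seconds; a fault
// fired within that window can be filled from the coordinator's row
// cache instead of going back to the SQLite file
mainContext.stalenessInterval = 300.0;

// objectWithID: returns a fault without touching the store ...
NSManagedObject *download = [mainContext objectWithID:someObjectID];

// ... and firing it inside the staleness interval can avoid disk I/O
NSString *name = [download valueForKey:@"name"];
```

A regular NSFetchRequest still executes against the store regardless, which is why Zarra calls the non-blocking fetch hard to accomplish.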
And of course Zarra also has an alternative, though even he doubts its viability:
Another solution that is a bit hairy is to have *two* NSPersistentStoreCoordinator instances, one for writing, one for reading. However that gets very complex very fast and I never recommend it. With a double PSC design you must tell the primary context when things change because it has no automatic notification. I mention it for completeness of the answer as opposed to a suggestion. I have yet to see that solution end well 🙂
To summarize:

- writing to disk = locking the store, always
- accessing data already in memory does not block within the staleness interval; otherwise it has to go up the chain and eventually get fresh data from disk
- importing data = small/frequent saves (decoupled as shown above)
My personal conclusion is that you’ll have to experiment with how frequently you call save on the managed object context that is connected to your persistent store. There might be scenarios where you can get away with saving all the time, and others where you should wait for an opportune moment in the UI interaction, when the user won’t notice a slight pause on the main thread.
Thank you Marcus Zarra for your generously dispensed insight!