First of all, it would help to know what exactly MR_findAllInContext: does; a better solution might simply be to make the fetch itself more efficient. What does the predicate look like? Do you set a batch size on the fetch request? Are the attributes you fetch against indexed? How large is the data set? Without these details it is hard to say whether there is a better solution.
That said, your current approach suffers from what seems to be a fairly common misunderstanding of how nested contexts work.
The problem is the context setup. Because you make the background context a child of the main context, everything you do in the background has to go through the main context.
Saving the background context only pushes the changes in its object graph up into the main context, which then has to be saved itself before anything reaches the store. Executing a fetch request in the background context forwards it to the main context, which forwards it to the store's persistent store coordinator (PSC) and synchronously hands the results back down to the background context. Any request in the background context, fetch or save, blocks the parent context and therefore blocks the main thread, just as if you had executed the request directly in the main context.
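For illustration, here is a minimal sketch of that kind of setup in plain Core Data (not MagicalRecord); psc, mainContext and the "Item" entity are placeholder names:

    NSManagedObjectContext *mainContext =
        [[NSManagedObjectContext alloc] initWithConcurrencyType:NSMainQueueConcurrencyType];
    mainContext.persistentStoreCoordinator = psc;

    NSManagedObjectContext *backgroundContext =
        [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
    backgroundContext.parentContext = mainContext; // <-- this is the problem

    [backgroundContext performBlock:^{
        NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Item"];
        NSError *error = nil;
        // Forwarded to the main context and from there to the PSC;
        // the main thread is blocked while this runs.
        NSArray *results = [backgroundContext executeFetchRequest:request error:&error];
        // Pushes changes up into the main context only; the main context
        // still has to save before anything reaches the store.
        [backgroundContext save:&error];
    }];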
Putting a background context behind the main-thread context therefore gains you nothing in terms of performance. Nested contexts are simply not meant to be used that way.
To achieve what you want, you need to execute the fetch request in a context that is independent of the main context, for example a background context attached directly to the PSC. The fetch will still take a lock on the PSC, so a fetch executed in the main context during that time would still block the main thread due to lock contention on the PSC. But at least the main thread is no longer blocked the whole time.
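A sketch of that independent-context approach, with the same placeholder names (the predicate and the returnsObjectsAsFaults setting are illustrative, not prescriptive):

    NSManagedObjectContext *fetchContext =
        [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
    fetchContext.persistentStoreCoordinator = psc; // same PSC as the main context

    [fetchContext performBlock:^{
        NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Item"];
        request.predicate = [NSPredicate predicateWithFormat:@"done == NO"];
        request.returnsObjectsAsFaults = NO; // materialize attribute data during this fetch

        NSError *error = nil;
        NSArray *results = [fetchContext executeFetchRequest:request error:&error];

        // Hand only the object IDs over to the main thread.
        NSMutableArray *objectIDs = [NSMutableArray arrayWithCapacity:results.count];
        for (NSManagedObject *object in results) {
            [objectIDs addObject:object.objectID];
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            // resolve objectIDs in the main context, see below
        });
    }];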
Remember that when you pass the resulting object IDs to the main context, obtain objects with objectWithID: and then access those objects, you are relying on the PSC's row cache still holding the data for them in order for this to be fast. The objects start out as faults, so if the data is no longer in the row cache, Core Data has to go to disk for every single object, which will be very slow. You can check for cache hits and misses with Instruments.
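Continuing the sketch above, resolving those IDs in the main context might look roughly like this; the objects come back as faults, so each attribute access either hits the row cache or goes to disk:

    [mainContext performBlock:^{
        NSMutableArray *objects = [NSMutableArray arrayWithCapacity:objectIDs.count];
        for (NSManagedObjectID *objectID in objectIDs) {
            // objectWithID: always returns an object, but it is a fault.
            NSManagedObject *object = [mainContext objectWithID:objectID];
            [objects addObject:object];
        }
        // Accessing an attribute fires the fault; if the row cache no longer
        // has the data, each access becomes a separate trip to disk.
    }];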