Kevin and Robin's answer is the most accurate. Oscar's answer is pretty close to correct. But neither the GNUstep documentation nor logancautrell's reasons for zones' existence are correct.
Zones were originally created, first NXZone, then NSZone, so that objects allocated from one zone would be relatively contiguous in memory, and that much is true. As it turned out, this does not reduce the amount of memory an application uses; in most cases, it slightly increases it.
The bigger goal was to be able to mass-destroy a set of objects.
For example, if you were to load a complex document into a document-based application, tearing down the object graph when the document was closed could be quite expensive.
Thus, if all the objects for a document were allocated from a single zone, and the allocation metadata for that zone was kept in the zone itself, then destroying all the objects associated with the document would be as cheap as simply destroying the zone (which really was cheap: "here, system, take these pages back" is one function call).
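For illustration, here is a minimal sketch of that intended pattern using the C-level zone API. The zone functions are real Foundation calls; the `Node` struct and the sizes are made up for the example, and on modern systems the zone functions largely forward to the default allocator, so treat this as a historical illustration rather than a recommendation:

```objc
#import <Foundation/Foundation.h>

typedef struct Node {
    int value;
    struct Node *next;
} Node;

int main(void) {
    // One zone per "document". The start size and granularity are just
    // growth hints; YES means the zone may free individual blocks.
    NSZone *docZone = NSCreateZone(4096, 4096, YES);

    // Build the document's graph entirely out of that zone, so the
    // allocations (and the zone's own metadata) share the same pages.
    Node *head = NULL;
    for (int i = 0; i < 1000; i++) {
        Node *n = NSZoneMalloc(docZone, sizeof(Node));
        n->value = i;
        n->next = head;
        head = n;
    }

    // "Closing the document": one call hands the zone's pages back to
    // the system, instead of walking the graph and freeing node by node.
    NSRecycleZone(docZone);
    return 0;
}
```

Objective-C objects were meant to participate in the same scheme through `+allocWithZone:`, which is why that method still sits next to `+alloc` on NSObject.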
It proved unworkable. If a single reference to an object in the zone leaked out of the zone, your app would go BOOM as soon as the document was closed, and there was no way for the object to tell whatever was referring to it to stop. Secondly, this model also fell prey to the resource-scarcity problem often seen in GC'd systems; that is, if the document's object graph held on to non-memory resources, there was no way to clean up those resources efficiently before the zone was destroyed.
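To make the first failure mode concrete, suppose the teardown in the sketch above happened while some longer-lived part of the app still held a pointer into the zone (the `cached` variable here is hypothetical):

```objc
// Imagine an app-wide cache grabbed a node before the document closed:
Node *cached = head;      // pointer escapes the document's zone

NSRecycleZone(docZone);   // document closed, zone torn down

// Under the original "give the pages back" semantics this read is a
// use-after-free and can crash or silently corrupt memory. (Modern
// NSRecycleZone may instead migrate live blocks to the default zone,
// which masks the bug but defeats the cheap-teardown purpose.)
NSLog(@"%d", cached->value);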
In the end, the combination of very little performance win (how often do you really close complex documents?) and all the added fragility made zones a bad idea. It was too late to change the APIs, though, and we are left with the vestiges.
bbum Oct 28 '11 at 10:05