The long-standing consensus in computer-science research on RC versus tracing GC is that tracing gives better throughput, at the cost of longer (worst-case) pause times. (For example, see here, here and here.) Only very recently, in 2013, was a paper published (the last of those three links) presenting an RC-based system that performs on par with, or slightly better than, the best proven tracing GCs in terms of throughput. Needless to say, it has no “real-world” implementations yet.
Here is a tiny benchmark I just ran on my iMac with a 3.1 GHz i5, in the iOS 7.1 64-bit simulator:
    long tenmillion = 10000000;
    NSTimeInterval t;

    // 1. Allocation
    t = [NSDate timeIntervalSinceReferenceDate];
    NSMutableArray *arr = [NSMutableArray arrayWithCapacity:tenmillion];
    for (long i = 0; i < tenmillion; ++i)
        [arr addObject:[NSObject new]];
    NSLog(@"%f seconds: Allocating ten million objects and putting them in an array.",
          [NSDate timeIntervalSinceReferenceDate] - t);

    // 2. Message sends
    t = [NSDate timeIntervalSinceReferenceDate];
    for (NSObject *obj in arr)
        [self doNothingWith:obj];
    NSLog(@"%f seconds: Calling a method on an object ten million times.",
          [NSDate timeIntervalSinceReferenceDate] - t);

    // 3. Pointer assignments
    t = [NSDate timeIntervalSinceReferenceDate];
    NSObject *o;
    for (NSObject *obj in arr)
        o = obj;
    NSLog(@"%f seconds: Setting a pointer ten million times.",
          [NSDate timeIntervalSinceReferenceDate] - t);
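(The -doNothingWith: helper isn't shown in the snippet; presumably it is just an empty method along these lines, so the second loop has a real Objective-C message send to measure:)

    // Assumed no-op helper for the benchmark; intentionally empty.
    - (void)doNothingWith:(NSObject *)obj
    {
    }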
With ARC disabled (-fno-objc-arc) this gives the following:
    2.029345 seconds: Allocating ten million objects and putting them in an array.
    0.047976 seconds: Calling a method on an object ten million times.
    0.006162 seconds: Setting a pointer ten million times.
With ARC enabled, it becomes:
    1.794860 seconds: Allocating ten million objects and putting them in an array.
    0.067440 seconds: Calling a method on an object ten million times.
    0.788266 seconds: Setting a pointer ten million times.
Apparently, allocating objects got somewhat cheaper, while assigning an object pointer became an order of magnitude more expensive; remember, though, that I didn't do any -retain calls in the non-ARC example, and note that you can use __unsafe_unretained should you have a hot spot that assigns object pointers like crazy. Still, if you want to “forget about” memory management and let ARC insert retain/release calls wherever it sees fit, you will generally pay extra CPU cycles again and again, sprinkled across every piece of code that sets a pointer. A tracing GC, by contrast, leaves your code alone and only kicks in at certain points (typically when allocating something), doing its work in one fell swoop. (Of course, the details are vastly more complex in truth, what with generational GC, incremental GC, concurrent GC, etc.)
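To illustrate that escape hatch, here is a minimal sketch (the tmp variable and the loop are mine, not part of the benchmark above) of how __unsafe_unretained opts a hot assignment loop out of ARC's retain/release traffic:

    // ARC emits no retain/release for stores to an __unsafe_unretained
    // variable, so each iteration is a plain pointer write. This is safe
    // here only because 'arr' keeps every object alive during the loop.
    __unsafe_unretained NSObject *tmp;
    for (NSObject *obj in arr)
        tmp = obj;

With a plain __strong local instead, every iteration would retain the new value and release the old one, which is essentially what the 0.788266-second line above is measuring.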
So yes, since Objective-C RC uses atomic retain/release, it's quite expensive, but Objective-C also has much bigger inefficiencies than its use of refcounting. (For example, the fully dynamic/reflective nature of method dispatch, where methods can be swapped out at any time at runtime, keeps the compiler from doing many cross-method optimizations that would require data-flow analysis; objc_msgSend() is always a dynamic call, a “black box” from the point of view of a static analyzer, so to speak.) In general, Objective-C as a language is not exactly the most efficient or optimization-friendly; people call it “the type safety of C combined with the blazing speed of Smalltalk” for a reason. ;-)
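As a hedged illustration of that black box (my own example, reusing the hypothetical -doNothingWith: helper from the benchmark): the classic trick for taking objc_msgSend out of a tight loop is to look up the method's IMP once and call it as a plain C function:

    // Cache the implementation once, then bypass dynamic dispatch on
    // every iteration. Only valid if the receiver's class (and hence
    // the IMP) cannot change while the loop runs.
    SEL sel = @selector(doNothingWith:);
    void (*func)(id, SEL, NSObject *) =
        (void (*)(id, SEL, NSObject *))[self methodForSelector:sel];
    for (NSObject *obj in arr)
        func(self, sel, obj);

This is exactly the kind of optimization the compiler cannot do for you automatically, because the IMP may legitimately change at runtime.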
When writing Objective-C, your own code is usually just glue around well-implemented Apple libraries, which certainly use C, C++, assembly, or whatever else for their hot spots. Your own code rarely needs to be hyper-efficient; when you do have a hot spot, you can make it very fast by dropping down to lower-level constructs, such as plain C-style code within the very same Objective-C method, but that is rarely ever needed. This is why Objective-C can afford the cost of ARC in the general case. I'm still not convinced that tracing GC has inherent problems in resource-constrained environments, and I believe a high-level language could well have worked with those libraries, but RC apparently sits better with Apple/iOS. One has to consider the whole framework stack they have built up so far, and all their legacy libraries, when asking why they didn't go with tracing GC; for instance, I've heard that RC is pretty deeply ingrained in CoreFoundation.
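A minimal sketch of that “drop down to C inside the same method” idea (the method name and workload are hypothetical, purely for illustration; assumes <stdlib.h> for malloc/free):

    // A hot spot rewritten as plain C inside an Objective-C method.
    // A raw malloc'd array avoids per-element objects, ARC traffic,
    // and message sends entirely.
    - (long)sumOfSquaresUpTo:(long)n
    {
        long *values = malloc(n * sizeof(long));
        for (long i = 0; i < n; ++i)
            values[i] = i * i;   // plain memory writes, no objc_msgSend
        long sum = 0;
        for (long i = 0; i < n; ++i)
            sum += values[i];
        free(values);
        return sum;
    }

Nothing in those loops touches the Objective-C runtime, so they compile to roughly the same code a plain .c file would produce.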