Why are automatic properties not inlined by default?

Since properties are just methods under the hood, inlining whatever logic they execute may or may not improve performance - so it's clear why the JIT needs to check whether a method is worth inlining.

However, automatic properties (as I understand it) cannot contain any logic; they simply return or set the value of a backing field. As far as I know, automatic properties are treated by the compiler and the JIT like any other methods.
(Everything below is based on the assumption that the paragraph above is correct.)

Value type properties can behave differently than the field itself, but reference type properties should presumably behave exactly the same as direct access to the backing field.

// Automatic property example
public Object MyObj { get; private set; }

Is there any case where an automatic property of a reference type could behave differently when inlined?
If not, what prevents the compiler or JIT from inlining them automatically?
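For reference, here is roughly what the compiler generates for the auto-property above, written out by hand (the real backing field gets a compiler-generated, unspeakable name like `<MyObj>k__BackingField` that is not valid C#; the class and field names here are just illustrative):

```csharp
// Hand-written equivalent of: public Object MyObj { get; private set; }
public class BackingFieldDemo
{
    private Object _myObj;  // hidden backing field (real name is compiler-generated)

    public Object MyObj
    {
        get { return _myObj; }           // compiled to a get_MyObj() method
        private set { _myObj = value; }  // compiled to a set_MyObj(value) method
    }
}
```

In the IL, every read of `MyObj` is a call to `get_MyObj()` and every write is a call to `set_MyObj()`, exactly as for a hand-written property.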

Note: I understand that the performance gain is likely to be small, especially if the JIT is likely to inline them anyway when they are called often enough. But since the effect can add up across many call sites, it seems logical that such a simple optimization would be applied automatically.

3 answers

However, automatic properties (as I understand it) cannot contain any logic; they simply return or set the value of a backing field. As far as I know, automatic properties are treated by the compiler and the JIT like any other methods.

That automatic properties cannot contain any logic is an implementation detail; no special knowledge of that fact is needed for compilation. In fact, as you say, auto-properties are compiled down to method calls.

Suppose auto-properties were inlined at compile time, and the class and its property were defined in a different assembly. That would mean that whenever you changed the property's implementation, you would have to recompile the consuming application to see the change. This defeats the point of using properties in the first place, which is to let you change the internal implementation without recompiling the consuming application.
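To make the versioning argument concrete, here is a hedged sketch (the type and member names are made up). Because the consumer's IL contains calls to `get_Name`/`set_Name` rather than direct field accesses, dropping in a new version of the library changes the observed behavior without recompiling the consumer:

```csharp
// Library.dll, version 1: a plain auto-property.
public class Widget
{
    public string Name { get; set; }
}

// Library.dll, version 2: same public surface, new internal behavior.
// A consumer compiled against version 1 keeps working, and picks up the
// new logic, because its IL calls set_Name/get_Name rather than a field.
public class Widget
{
    private string _name;
    public string Name
    {
        get { return _name; }
        set { _name = (value ?? "").Trim(); }  // new validation logic
    }
}

// Consumer.exe (compiled once, against version 1):
var w = new Widget();
w.Name = "  hello  ";  // IL: callvirt instance void Widget::set_Name(string)
```

Had the compiler inlined the version-1 accessor into Consumer.exe as a raw field access, the version-2 validation would silently be skipped until the consumer was recompiled.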


EDIT: The JIT compiler doesn't work the way you think it does, which is probably why you don't quite follow what I'm trying to convey above. I've quoted your comment below:

That is another matter, but as I understand it, methods are only tested for inline-worthiness if they are called enough times. Not to mention that the test itself is a performance hit. (Let the size of the performance hit be irrelevant for the moment.)

First, most, if not all, methods are examined to see whether they can be inlined. Second, keep in mind that a method is only ever JITed once, and it is at that time that the JITer determines whether any methods called inside it will be inlined. This can happen before any of your program's code has even executed. So what makes a called method a good candidate for inlining?

The x86 JIT compiler (the x64 and ia64 JITs don't necessarily use the same optimizations) checks several things to determine whether a method is a good inlining candidate, and it is definitely not just the number of times it is called. The article lists things such as whether inlining would make the code smaller, and whether the call site will be executed many times (i.e. in a loop), among others. Each method is optimized on its own, so a method might be inlined at one call site but not at another, as in the loop example. These optimization heuristics are available only to the JIT; the C# compiler simply doesn't have them: it produces IL, not native code. There is a huge difference between the two; IL and the native code produced from it can look completely different.

To summarize: the C# compiler does not inline properties; performance-motivated inlining is left to the JIT.


The JIT compiler inlines trivial properties, including automatic properties. You can read more about how the JIT decides whether to inline a method in this interesting blog post.

Well, the C# compiler does not inline any methods at all. I suspect this is because of how the CLR is designed. Each assembly is designed to be portable from machine to machine. In many cases you can change the internal behavior of a .NET assembly without recompiling all the code that depends on it; it can simply be a drop-in replacement (at least when the types haven't changed). If the code were inlined at compile time, it would violate that (excellent, imo) design, and you would lose that luster.

First, let's talk about inlining in C++. (Full disclosure: I haven't used C++ full time in a while, so my explanations may be vague, rusty, or completely wrong! I rely on my fellow SOers to correct and scold me.)

The C++ inline keyword tells the compiler: "Hey, I'd like you to inline this function, because I think it will improve performance." Unfortunately, it only tells the compiler that you would prefer it inlined; it does not say that it must be.

Perhaps in earlier times, when compilers optimized less than they do now, the compiler would most often compile such a function inline. Over time, however, as compilers grew smarter, compiler writers found that in most cases they were better than the developer at determining when a function should be inlined. For the few cases where that wasn't true, developers could use the seriouslybro_inlineme keyword (officially called __forceinline in VC++).

Now, why would compiler writers do this? Well, inlining a function doesn't always improve performance. While it certainly can, it can also ruin your program's performance if used incorrectly. For example, we all know that one side effect of inlining code is increased code size, or "fat code syndrome" (disclaimer: not a real term). Why is "fat code syndrome" a problem? If you take a look at the article linked above, it explains, in short, that memory is slow, and the bigger your code, the less likely it is to fit in the fastest CPU cache (L1). Eventually it may only fit in main memory, and then inlining has accomplished nothing. Compilers, however, know when these situations can occur, and do their best to prevent them.

Putting that together with your question, let's look at it this way: the C# compiler is like a developer writing code for the JIT compiler, and the JIT is simply smarter (though not a genius). It often knows when inlining will benefit or hurt execution speed. The "senior developer" C# compiler has no idea how inlining a method call might benefit the code at runtime, so it doesn't do it. I guess that actually means the C# compiler is smart, because it leaves the optimization job to those who are better at it: in this case, the JIT compiler.
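Incidentally, .NET does expose a rough analogue of C++'s inline/__forceinline hints: System.Runtime.CompilerServices.MethodImplAttribute. A short sketch (the attribute and both flags are real APIs; AggressiveInlining requires .NET 4.5+, and like C++'s inline it is a hint to the JIT, not a guarantee):

```csharp
using System.Runtime.CompilerServices;

public static class InlineHints
{
    // Ask the JIT to inline this method if at all possible (a hint, not an order).
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static int Square(int x)
    {
        return x * x;
    }

    // Forbid inlining, e.g. to keep a stack frame visible for diagnostics.
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static int Cube(int x)
    {
        return x * x * x;
    }
}
```

Either way the decision is still made at JIT time, per call site, so none of the compile-time versioning problems described above arise.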


Automatic properties are simple: the get/set property methods are generated automatically. As a result, there is nothing special about them in the IL. The C# compiler itself performs only a small number of optimizations.

As for why they are not inlined at compile time: imagine that your type lives in a separate assembly, so the source of that assembly can later change to make the property's get/set arbitrarily complex. As a result, the compiler cannot rely on the simplicity of the get/set code it sees when it first encounters your automatic property while compiling a new assembly that depends on your type.

As you already noted in your question - "especially if the JIT is likely to inline them anyway" - these property methods are indeed likely to be inlined at JIT time.


Source: https://habr.com/ru/post/1399833/

