Are redundant casts optimized away?

I am updating some old code and keep finding instances where the same object is cast repeatedly, every time one of its properties or methods needs to be accessed. Example:

    if (recDate != null && recDate > ((System.Windows.Forms.DateTimePicker)ctrl).MinDate)
    {
        ((System.Windows.Forms.DateTimePicker)ctrl).CustomFormat = "MM/dd/yyyy";
        ((System.Windows.Forms.DateTimePicker)ctrl).Value = recDate;
    }
    else
    {
        ((System.Windows.Forms.DateTimePicker)ctrl).CustomFormat = " ";
    }
    ((System.Windows.Forms.DateTimePicker)ctrl).Format = DateTimePickerFormat.Custom;

My inclination is to fix this monstrosity, but given my limited time, I don't want to touch anything that doesn't affect functionality or performance.
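For reference, the fix I have in mind is simply to cast once into a local variable and reuse it. A minimal sketch, assuming recDate is a nullable DateTime (as the null check suggests):

    // Sketch of the intended cleanup: cast ctrl once, then reuse the local.
    // Assumes recDate is a DateTime? (nullable), which the null check implies.
    var picker = (System.Windows.Forms.DateTimePicker)ctrl;
    if (recDate != null && recDate > picker.MinDate)
    {
        picker.CustomFormat = "MM/dd/yyyy";
        picker.Value = recDate.Value;
    }
    else
    {
        picker.CustomFormat = " ";
    }
    picker.Format = DateTimePickerFormat.Custom;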

So I wonder: are these redundant casts optimized away by the compiler? I tried to figure this out with ildasm on a simplified example, but I'm not familiar with IL and only ended up more confused.

UPDATE

So far, the consensus seems to be that a) no, the casts are not optimized away, but b) while there may be a slight performance penalty, it is unlikely to be noticeable, and c) I should consider fixing them anyway. I've decided to fix them someday if I find the time; meanwhile, I won't worry about them.

Thanks everyone!

4 answers

The casts are not optimized away at the IL level, in either debug or release builds.

A simple C# test:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;

    namespace RedundantCastTest
    {
        class Program
        {
            static object get() { return "asdf"; }

            static void Main(string[] args)
            {
                object obj = get();
                if ((string)obj == "asdf")
                    Console.WriteLine("Equal: {0}, len: {1}", obj, ((string)obj).Length);
            }
        }
    }

The corresponding IL (note the multiple castclass instructions):

    .method private hidebysig static void Main(string[] args) cil managed
    {
        .entrypoint
        .maxstack 3
        .locals init (
            [0] object obj,
            [1] bool CS$4$0000)
        L_0000: nop
        L_0001: call object RedundantCastTest.Program::get()
        L_0006: stloc.0
        L_0007: ldloc.0
        L_0008: castclass string
        L_000d: ldstr "asdf"
        L_0012: call bool [mscorlib]System.String::op_Equality(string, string)
        L_0017: ldc.i4.0
        L_0018: ceq
        L_001a: stloc.1
        L_001b: ldloc.1
        L_001c: brtrue.s L_003a
        L_001e: ldstr "Equal: {0}, len: {1}"
        L_0023: ldloc.0
        L_0024: ldloc.0
        L_0025: castclass string
        L_002a: callvirt instance int32 [mscorlib]System.String::get_Length()
        L_002f: box int32
        L_0034: call void [mscorlib]System.Console::WriteLine(string, object, object)
        L_0039: nop
        L_003a: ret
    }

Nor is it optimized away in the IL of the release build:

    .method private hidebysig static void Main(string[] args) cil managed
    {
        .entrypoint
        .maxstack 3
        .locals init (
            [0] object obj)
        L_0000: call object RedundantCastTest.Program::get()
        L_0005: stloc.0
        L_0006: ldloc.0
        L_0007: castclass string
        L_000c: ldstr "asdf"
        L_0011: call bool [mscorlib]System.String::op_Equality(string, string)
        L_0016: brfalse.s L_0033
        L_0018: ldstr "Equal: {0}, len: {1}"
        L_001d: ldloc.0
        L_001e: ldloc.0
        L_001f: castclass string
        L_0024: callvirt instance int32 [mscorlib]System.String::get_Length()
        L_0029: box int32
        L_002e: call void [mscorlib]System.Console::WriteLine(string, object, object)
        L_0033: ret
    }

Neither case means the casts won't be optimized away when the native code is generated; to know for sure you'd have to look at the actual machine code, i.e. by running ngen and disassembling the result. I'd be very surprised if they weren't optimized away, though.

Regardless, I'll cite The Pragmatic Programmer and the broken-window theory: when you see a broken window, fix it.


A peek at the generated machine code in a Release build shows that the x86 jitter does not optimize the cast away.

You should look at the big picture here, though. You are assigning control properties, and those have a ton of side effects. In the case of DateTimePicker, the assignment causes a message to be sent to the native Windows control, which then goes off and crunches on the message. The cost of the cast is negligible compared to the cost of those side effects. Rewriting the assignments will never make a noticeable difference in speed; at best you'd make the code a fraction of a percent faster.

Go ahead and rewrite the code on a lazy Friday afternoon, but only because it makes the code easier to read. Poorly readable C# code tends to produce poorly optimized machine code as well; that is not a coincidence.


No; FxCop flags this as a performance warning. See the rule's documentation here: http://msdn.microsoft.com/en-us/library/ms182271.aspx

I'd recommend running FxCop on your code if you want to find more things worth fixing.
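To illustrate, here is a minimal sketch of the kind of code that rule (CA1800, "Do not cast unnecessarily", if I recall the number correctly) is aimed at, along with the usual fix. The Describe method is hypothetical, not from the question:

    // Sketch of the pattern the linked rule flags (hypothetical method).
    static void Describe(object item)
    {
        // Flagged: 'item' is type-tested and then cast a second time.
        if (item is string)
            Console.WriteLine(((string)item).Length);

        // Preferred: cast once with 'as' and reuse the local.
        var text = item as string;
        if (text != null)
            Console.WriteLine(text.Length);
    }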


I've never heard of, or seen, the CLR optimizing away redundant casts. Let's try a contrived example:

    using System;
    using System.Diagnostics;

    object number = 5;
    int iterations = 10000000;
    int[] storage = new int[iterations];

    var sw = Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++)
    {
        storage[i] = ((int)number) + 1;
        storage[i] = ((int)number) + 2;
        storage[i] = ((int)number) + 3;
    }
    Console.WriteLine(sw.ElapsedTicks);

    storage = new int[iterations];
    sw = Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++)
    {
        var j = (int)number;
        storage[i] = j + 1;
        storage[i] = j + 2;
        storage[i] = j + 3;
    }
    Console.WriteLine(sw.ElapsedTicks);
    Console.ReadLine();

Running a release build on my machine, I consistently get about 350k ticks for the redundant casts and about 280k ticks for the single hoisted cast. So no, it doesn't look like the CLR optimizes this away.
