Is there a way to make this "GetPropertyName" function faster?

I am currently using an expression tree to get the name of a property.

    public static string GetPropertyName<T, TReturn>(Expression<Func<T, TReturn>> expression)
    {
        MemberExpression body = (MemberExpression)expression.Body;
        return body.Member.Name;
    }

I saw somewhere that people use the string form of the expression (expression.ToString()) as a dictionary key, so that later calls can fetch the property name from the cache instead of walking the expression again. Roughly what I understood it to look like is the sketch below.
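(A minimal sketch of that caching idea; I'm assuming a static ConcurrentDictionary keyed by expression.ToString(), and the names PropertyNameHelper and GetPropertyNameCached are mine, not from wherever I saw it.)

    using System;
    using System.Collections.Concurrent;
    using System.Linq.Expressions;

    public static class PropertyNameHelper
    {
        // Cache keyed by the expression's string form, as described above.
        private static readonly ConcurrentDictionary<string, string> Cache =
            new ConcurrentDictionary<string, string>();

        public static string GetPropertyNameCached<T, TReturn>(Expression<Func<T, TReturn>> expression)
        {
            // Note: expression.ToString() has its own cost, so this only pays off
            // if it is cheaper than casting the body and reading Member.Name.
            return Cache.GetOrAdd(expression.ToString(), _ =>
            {
                MemberExpression body = (MemberExpression)expression.Body;
                return body.Member.Name;
            });
        }
    }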

Does this really help or is there a better way?

2 answers

Reflection: Dodge Common Performance Pitfalls to Craft Speedy Applications

http://msdn.microsoft.com/en-us/magazine/cc163759.aspx

Fast, lightweight reflection APIs

  • typeof
  • Object.GetType
  • typeof == Object.GetType
  • Type equivalence APIs (including the type handle operator overloads)
  • get_Module
  • get_MemberType
  • Some of the IsXX predicate APIs
  • The new token/handle resolution APIs in the .NET Framework 2.0

Costly reflection APIs (a rough timing comparison of the two groups follows this list)

  • The GetXX APIs (returning MethodInfo, PropertyInfo, FieldInfo, etc.)
  • GetCustomAttributes
  • Type.InvokeMember
  • The invocation APIs (MethodInfo.Invoke, FieldInfo.GetValue, etc.)
  • get_Name (the Name property)
  • Activator.CreateInstance
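You can get a feel for the gap yourself with a rough Stopwatch loop like the one below. The iteration count and the particular calls compared (Object.GetType on the cheap side, Type.GetProperty plus get_Name on the costly side) are my own choices for illustration, not from the article, and absolute numbers will vary by runtime.

    using System;
    using System.Diagnostics;
    using System.Reflection;

    class ReflectionCostDemo
    {
        static void Main()
        {
            const int iterations = 1_000_000;
            object sample = "hello";
            long sink = 0; // keeps the JIT from discarding the loop bodies

            // Cheap bucket: Object.GetType (typeof behaves similarly).
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
            {
                sink += sample.GetType().GetHashCode();
            }
            sw.Stop();
            Console.WriteLine($"GetType:            {sw.ElapsedMilliseconds} ms");

            // Costly bucket: a GetXX lookup plus reading a member name.
            sw.Restart();
            for (int i = 0; i < iterations; i++)
            {
                PropertyInfo p = typeof(string).GetProperty("Length");
                sink += p.Name.Length; // get_Name is also on the costly list
            }
            sw.Stop();
            Console.WriteLine($"GetProperty + Name: {sw.ElapsedMilliseconds} ms");
            Console.WriteLine(sink);
        }
    }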

It depends on what you mean by "helps."

The idea behind that kind of caching is that a cache lookup will be faster than examining the expression. My gut reaction is "it will probably be slower", but gut feelings about performance are wrong often enough that we should just ignore them for now.

Caching is a trade-off. You take on memory overhead (the dictionary) plus the time needed to build and populate it, in the hope that ToString plus a dictionary lookup will be enough faster than examining the expression to be worth that cost (and the added complexity). Even then, the faster lookup will not matter at all unless you are doing this in a tight loop somewhere. Is that true in your case? If not, don't worry about it.

Now, if you are writing a general-purpose library, you don't know how its users will call it. Perhaps some of them really will make these calls in a loop, and in that case it might be worth covering yourself and caching; but it would still be a poor decision to implement caching for an unlikely scenario if it makes your more likely scenario worse.

And of course, whether the caching turns out to be a net win or a net loss is something you should always decide by measuring; a small harness like the sketch below is one way to do that.
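A minimal Stopwatch sketch of that measurement, assuming the GetPropertyName from the question and a cached variant like the PropertyNameHelper.GetPropertyNameCached sketched in the question. The iteration count is arbitrary and the results depend heavily on your runtime, so treat it only as a template for your own measurement.

    using System;
    using System.Diagnostics;
    using System.Linq.Expressions;

    class CachingBenchmark
    {
        class Person { public string Name { get; set; } }

        // Copy of the question's helper, included here for comparison.
        static string GetPropertyName<T, TReturn>(Expression<Func<T, TReturn>> expression)
        {
            MemberExpression body = (MemberExpression)expression.Body;
            return body.Member.Name;
        }

        static void Main()
        {
            const int iterations = 100_000;

            // Caveat: the same Expression instance is reused here; real code often
            // builds a fresh expression tree on every call, which changes both sides.
            Expression<Func<Person, string>> expr = p => p.Name;

            // Direct extraction every time (the question's approach).
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
            {
                GetPropertyName(expr);
            }
            sw.Stop();
            Console.WriteLine($"Direct: {sw.ElapsedMilliseconds} ms");

            // Cached by expression.ToString() (the idea being discussed);
            // GetPropertyNameCached is the hypothetical helper sketched in the question.
            sw.Restart();
            for (int i = 0; i < iterations; i++)
            {
                PropertyNameHelper.GetPropertyNameCached(expr);
            }
            sw.Stop();
            Console.WriteLine($"Cached: {sw.ElapsedMilliseconds} ms");
        }
    }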


Source: https://habr.com/ru/post/1342706/

