To paraphrase your question:
Is there a cost to using a local variable?
Since C# and .NET are well designed, I expect that using a local variable as you describe has either no cost or a negligible one, but let me try to back up this expectation with some facts.
The following C# code
if (IsEverythingOk()) { ... }
will be compiled into this (simplified) IL (with optimizations enabled):

call IsEverythingOk
brfalse.s AfterIfBody
... if body
Using a local variable
var ok = IsEverythingOk();
if (ok) { ... }
you get this optimized (and simplified) IL:
call IsEverythingOk
stloc.0
ldloc.0
brfalse.s AfterIfBody
... if body
On the surface this seems a little less efficient, since the return value is stored into a local and then immediately loaded back, but the JIT compiler performs optimizations of its own.
You can see the actual machine code the JIT generates by debugging your application with native code debugging enabled. You must do this with a release build, and you also have to disable the debugger option that suppresses JIT optimization when a module is loaded ("Suppress JIT optimization on module load" in Visual Studio). Now you can put breakpoints in the code you want to examine and view the disassembly. Note that the JIT is something of a black box, and the behavior I see on my machine may differ from what other people see on theirs. With that disclaimer: the machine code I get for both versions of the code is the same (apart from a slight difference in how the call is made):
call IsEverythingOk
test eax,eax
je AfterIfBody
So the JIT optimizes away the redundant IL. In fact, in my initial experiments the IsEverythingOk method returned true, and the JIT was able to optimize the branch away completely. When I then changed the method to return a field, the JIT inlined the call and accessed the field directly.
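For completeness, here is a minimal sketch of the kind of test program described above. The field-backed IsEverythingOk is an assumption on my part, chosen so that the JIT cannot fold the branch away entirely; build it in Release mode and inspect the disassembly at a breakpoint in Main.

using System;

public static class Program
{
    // Backed by a mutable field so the JIT cannot treat the result as a
    // constant and eliminate the branch (assumed setup, not the original test code).
    private static bool _everythingOk = true;

    private static bool IsEverythingOk() => _everythingOk;

    public static void Main()
    {
        // Direct call in the condition.
        if (IsEverythingOk())
        {
            Console.WriteLine("direct");
        }

        // Via a local variable.
        var ok = IsEverythingOk();
        if (ok)
        {
            Console.WriteLine("local");
        }
    }
}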
The bottom line: you should expect the JIT to optimize away at least simple things such as transient local variables, even when the code compiles to some extra IL that looks redundant.
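If you prefer measuring to reading disassembly, a micro-benchmark along these lines (using the BenchmarkDotNet package; this is my addition, not part of the analysis above) should report no measurable difference between the two forms:

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class LocalVariableBenchmark
{
    // Mutable field so the branch is not folded away (assumed setup).
    private bool _flag = true;

    private bool IsEverythingOk() => _flag;

    [Benchmark(Baseline = true)]
    public int Direct()
    {
        int n = 0;
        if (IsEverythingOk()) { n++; }
        return n; // returning the result keeps the work from being eliminated
    }

    [Benchmark]
    public int ViaLocal()
    {
        int n = 0;
        var ok = IsEverythingOk();
        if (ok) { n++; }
        return n;
    }
}

public static class Harness
{
    public static void Main() => BenchmarkRunner.Run<LocalVariableBenchmark>();
}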