Why is Debug.Trace evil?

I have seen many resources strongly discourage programmers from using Debug.Trace in production code because it breaks referential transparency. I still do not fully understand the problem, though, and I cannot find a clear explanation of it.

My understanding is that tracing cannot change the value of an expression (although it can cause expressions to be evaluated in a different order, or force the evaluation of an expression that would otherwise have been lazily skipped). So tracing cannot affect the result of a pure function, and it cannot introduce any errors. Why, then, is it fought against so fiercely? (If any of the assumptions above are wrong, please point it out!) Is this just a philosophical or performance concern, or can it actually introduce a bug somehow?
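For example, a small program like the following (a minimal sketch; the `double` helper is just an illustration, not from any real codebase) shows the behaviour I have in mind:

    import Debug.Trace (trace)

    -- A pure function instrumented with trace: its result is unchanged,
    -- but the message is emitted as a side effect when (and only when)
    -- the traced value is actually demanded.
    double :: Int -> Int
    double n = trace ("double called with " ++ show n) (n * 2)

    main :: IO ()
    main = do
      print (double 21)        -- prints the trace message, then 42
      let unused = double 100  -- never demanded, so no message appears
      print (length [unused])  -- length does not force the element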

When programming in other, less strict languages, I often find it valuable to have production logs of values that help diagnose problems I cannot reproduce locally.

+5
3 answers

"I often find it valuable to have production logs of values that help diagnose problems I cannot reproduce locally."

That can be fine. For example, GHC itself can be run with options that enable various trace functions, and the resulting logs can be useful for investigating bugs. However, output escaping from "pure" code in an uncontrolled way can make it hard to keep things sensibly ordered. For example, the output of -ddump-inlinings and -ddump-rule-rewrites may interleave with other output. So developers get the convenience of working in what looks like pure code, but at the cost of logs that are harder to pick through. It is a tradeoff.

+6

Of course trace foo bar can raise more errors than bar alone: it can (and will) raise whatever errors foo raises!
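For example (a sketch of what that looks like in practice; the "boom" message is made up):

    import Debug.Trace (trace)

    main :: IO ()
    main = do
      -- Forcing the traced value also forces the message so it can be
      -- printed, so an error hiding in the message is raised here
      -- instead of 42 being printed.
      print (trace (error "boom") (42 :: Int))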

But really, that is not the reason to avoid it. The real reason is that you usually want the programmer to have control over the ordering of output. You do not want your debug statement saying "Foo is happening!" to interrupt itself and come out as "Foo is hapBar is happening!pening!", for example; and you do not want a latent debug statement to languish unprinted just because the value it wraps was never needed. The right way to control this ordering is to acknowledge that you are doing IO and reflect that in your types.
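A rough sketch of both problems and of the explicit-IO alternative (the function names here are mine, not from any library):

    import Debug.Trace (trace)

    -- The trace message is tied to demand for the value it wraps, so it
    -- may never appear at all, or may appear at a surprising point
    -- relative to other output.
    lazyDebug :: IO ()
    lazyDebug = do
      let x = trace "computing x" (2 + 2 :: Int)
      putStrLn "before"
      putStrLn "after"    -- x is never used, so "computing x" never prints

    -- Acknowledging the IO makes the ordering part of the program.
    explicitDebug :: IO ()
    explicitDebug = do
      putStrLn "before"
      let x = 2 + 2 :: Int
      putStrLn ("computing x = " ++ show x)
      putStrLn "after"

    main :: IO ()
    main = lazyDebug >> explicitDebug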

+10

Debug.Trace.trace breaks referential transparency.

An expression such as let x = e in x+x should, by referential transparency, be equivalent to e+e, no matter what e is.

but

    let x = trace "x computed!" 12 in x+x

will (probably) print a debug message once, and

 trace "x computed!" 12 + trace "x computed!" 12 

will (probably) print the debug message twice, even though by referential transparency the two expressions should behave identically.
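Here is the same comparison as a complete program (a sketch; the output may vary with compiler and optimization level, which is exactly the point):

    import Debug.Trace (trace)

    main :: IO ()
    main = do
      -- Shared binding: one thunk, so the message (probably) prints once.
      let x = trace "x computed!" (12 :: Int)
      print (x + x)

      -- Substituted form: two independent thunks, so the message
      -- (probably) prints twice.
      print (trace "x computed!" (12 :: Int) + trace "x computed!" (12 :: Int))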

The only pragmatic way out is to regard the output as "a side effect that we should not depend on." We already do this in pure code: we disregard observable "effects" such as elapsed time, memory used, or energy consumed. Pragmatically, we should not rely on an expression consuming exactly 1324 bytes during evaluation, and then write code that breaks when a newer compiler optimizes it further and saves 2 more bytes. In the same way, production code should never rely on trace messages.

(Above, I write "probably" because this is what I believe GHC currently does, but in principle another compiler could optimize let x = e in ... by inlining e, which would duplicate some trace messages.)

+4

Source: https://habr.com/ru/post/1272235/

