In non-optimized code, Rust uses dynamic checks, but it is likely that they will be eliminated in the optimized code.
I looked at the behavior of the following code:
```rust
#[derive(Debug)]
struct A {
    s: String,
}

impl Drop for A {
    fn drop(&mut self) {
        println!("Dropping {:?}", &self);
    }
}

fn flip_coin() -> bool { false }
```
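The discussion below refers to `x`, `y1`, and `y2`, so the original must also have had a `main`; here is a plausible reconstruction (mine, not from the original answer, with the "leaving the inner scope" message inferred from the quotes further down):

```rust
#[allow(unused_variables)]
pub fn main() {
    let y1 = A { s: "y1".to_string() };
    let y2 = A { s: "y2".to_string() };
    let x = if flip_coin() { y1 } else { y2 };
    println!("Leaving the inner scope");
} // x and whichever y was left behind are dropped here
```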
Per your comment on another answer, the drop call for the String that is left behind occurs after the "leaving the inner scope" println!. That matches the simple expectation that the ys' scopes extend to the end of their block.
If you look at the assembly produced without optimization, it seems that the if not only moves either y1 or y2 into x, but also zeroes out whichever variable served as the source of the move. Here is the test:
```asm
.LBB14_8:
        movb    -437(%rbp), %al    # load the result of flip_coin()
        andb    $1, %al
        movb    %al, -177(%rbp)
        testb   $1, -177(%rbp)
        jne     .LBB14_11          # "then" branch: move y1
        jmp     .LBB14_12          # "else" branch: move y2
```
Here is the "then" branch, which moves the y1 String into x. Pay particular attention to the memset call, which zeroes out y1 after the move:
```asm
.LBB14_11:                         # the "then" branch
        xorl    %esi, %esi         # memset fill byte: 0
        movl    $32, %eax
        movl    %eax, %edx         # memset length: 32 bytes
        leaq    -64(%rbp), %rcx    # address of y1
        movq    -64(%rbp), %rdi    # copy y1, one qword at a time...
        movq    %rdi, -176(%rbp)   # ...into x's slot at -176(%rbp)
        movq    -56(%rbp), %rdi
        movq    %rdi, -168(%rbp)
        movq    -48(%rbp), %rdi
        movq    %rdi, -160(%rbp)
        movq    -40(%rbp), %rdi
        movq    %rdi, -152(%rbp)
        movq    %rcx, %rdi         # memset destination: y1
        callq   memset@PLT         # zero y1, clearing its drop flag
        jmp     .LBB14_13
```
(This looks horrible until you realize that all those movq instructions just copy 32 bytes from %rbp-64, which is y1, to %rbp-176, which is x, or at least a temporary that will eventually be x.) Note that it copies 32 bytes, not the 24 you would expect for a Vec (one pointer plus two usizes). That is because Rust adds a hidden "drop flag" to the structure, after the three visible fields, indicating whether the value is live or not.
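You can probe the size yourself; here is a quick check (my addition, not from the original answer). On compilers from after the drop flags were moved out of values, the hidden byte is gone and this prints 24 rather than 32:

```rust
use std::mem::size_of;

struct A { s: String }

fn main() {
    // Under the embedded-flag scheme described here, A was the Vec's three
    // words (pointer, capacity, length) plus a hidden flag byte, padded to
    // 32. A modern compiler keeps the flag in a stack slot instead, so this
    // prints 24 on a 64-bit target.
    println!("size_of::<A>() = {}", size_of::<A>());
}
```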
And here is the else branch, doing the same for y2:
```asm
.LBB14_12:                         # the "else" branch
        xorl    %esi, %esi         # memset fill byte: 0
        movl    $32, %eax
        movl    %eax, %edx         # memset length: 32 bytes
        leaq    -128(%rbp), %rcx   # address of y2
        movq    -128(%rbp), %rdi   # copy y2 into x's slot, as above
        movq    %rdi, -176(%rbp)
        movq    -120(%rbp), %rdi
        movq    %rdi, -168(%rbp)
        movq    -112(%rbp), %rdi
        movq    %rdi, -160(%rbp)
        movq    -104(%rbp), %rdi
        movq    %rdi, -152(%rbp)
        movq    %rcx, %rdi         # memset destination: y2
        callq   memset@PLT         # zero y2, clearing its drop flag
.LBB14_13:
```
This is followed by the code for the "leaving the inner scope" println!, which is painful to behold, so I won't include it here.
Then comes a call to a procedure named glue_drop, once each for y1 and y2. This appears to be a compiler-generated function that takes an A, checks its String's Vec's drop flag, and, if that is set, invokes A's drop method, followed by the drop glue for the String it contains.
If I'm reading this correctly, it's pretty clever: even though it is the A that has the drop method we need to call, Rust knows that it can use ... take a breath ... the drop flag of the Vec inside the String inside the A as the flag that says whether the A needs to be dropped at all.
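To make that concrete, here is a runnable model (entirely my own illustration, not compiler output; FakeA and the explicit flag field are invented) of the logic the generated drop glue performs:

```rust
// A hand-rolled model of the embedded-drop-flag scheme. The real flag was a
// hidden byte the compiler appended to the value; here it is an explicit field.
struct FakeA {
    s: String,
    drop_flag: bool, // stands in for the hidden flag after the visible fields
}

// What "glue_drop" conceptually does: check the flag, run the user-defined
// drop body, drop the contained fields, then clear the flag so that a second
// call (e.g. for a value that was moved out of) is a no-op.
fn glue_drop(a: &mut FakeA) {
    if a.drop_flag {
        println!("Dropping {:?}", a.s); // the body of A's Drop impl
        a.s = String::new();            // the String's own glue frees its buffer
        a.drop_flag = false;            // "zeroing" the value marks it dead
    }
}

fn main() {
    let mut y1 = FakeA { s: "y1".to_string(), drop_flag: true };
    glue_drop(&mut y1); // runs the drop body
    glue_drop(&mut y1); // flag already cleared: does nothing, like a moved-from y
}
```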
Now, when compiled with optimization, inlining and flow analysis ought to recognize situations where a drop is certain to happen (and omit the run-time check), or certain not to happen (and omit the drop altogether). And I believe I have heard of optimizations that duplicate the code following a then/else clause along both paths and then specialize each copy. That would eliminate all the run-time checks in this code (but duplicate the println! call).
As the original poster points out, there is an RFC (RFC 320, "non-zeroing dynamic drop") proposing to remove drop flags from the values themselves and instead associate them with the stack slots holding the values.
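Concretely, the difference might be modeled like this (entirely my sketch; the flag variables are invented, and the real flags would be invisible to the program):

```rust
fn flip_coin() -> bool { false }

fn main() {
    let y1 = String::from("y1");
    let y2 = String::from("y2");

    // Hidden per-slot booleans the compiler would maintain instead of
    // zeroing the moved-from value:
    let y1_live;
    let y2_live;

    let x = if flip_coin() {
        y1_live = false; // y1 moved out
        y2_live = true;  // y2 left behind
        y1
    } else {
        y1_live = true;
        y2_live = false;
        y2
    };

    println!("x = {x:?}");
    // At the end of the scope, conceptually: if y1_live, drop y1; if
    // y2_live, drop y2; x is unconditionally live, so it needs no flag.
    let _ = (y1_live, y2_live);
}
```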
So it's plausible that the optimized code will contain no run-time checks at all. However, I can't bring myself to read the optimized code. Why don't you give it a try?
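If you do, something like `rustc -O --emit=asm main.rs` (filename assumed) writes the optimized assembly to `main.s`; searching it for the memset calls and the drop glue is a quick way to see which checks survive.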