Personally, I'm not convinced that annotations are the right way to track hot spots. For one thing, they'll get ugly in areas with lots of bugs (more @Bug lines than real code!), and for another they'll uselessly clutter places that are unlikely to regress, for example bugs fixed incrementally while some piece of code is first being implemented.
More to the point, a @Bug annotation is only useful if it is used, and used consistently. Enforcing that will be a hassle for everyone involved and will slow people down without giving you enough insight in return, since you have no way of knowing which bug-affected code never got annotated.
A better approach, I'd say, would be some kind of external analysis that looks at which files are touched by bug-fix commits (commit message matching [bB][uU][gG]:? *\d+ or something along those lines) and mines the history that way. You can retroactively cover every past bug fix without adding yet another process for your developers.
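As a rough illustration of what that external analysis could look like, here is a minimal sketch that walks the git log, treats any commit whose subject matches the regex as a bug fix, and tallies how often each file is touched by one. The regex, the helper names, and the "run it from the repo" setup are my own assumptions for the example; adapt them to whatever convention your commit messages actually follow.

```python
import re
import subprocess
from collections import Counter

# Assumed convention: bug-fix commits mention "bug <number>" in the subject.
BUG_PATTERN = re.compile(r"[bB][uU][gG]:? *\d+")

def bugfix_commits(repo="."):
    """Yield the hashes of commits whose subject line looks like a bug fix."""
    log = subprocess.run(
        ["git", "-C", repo, "log", "--format=%H%x09%s"],
        capture_output=True, text=True, check=True).stdout
    for line in log.splitlines():
        sha, _, subject = line.partition("\t")
        if BUG_PATTERN.search(subject):
            yield sha

def files_touched(repo, sha):
    """List the files changed by a single commit."""
    out = subprocess.run(
        ["git", "-C", repo, "diff-tree", "--no-commit-id", "--name-only", "-r", sha],
        capture_output=True, text=True, check=True).stdout
    return [f for f in out.splitlines() if f]

def hotspots(repo="."):
    counts = Counter()
    for sha in bugfix_commits(repo):
        counts.update(files_touched(repo, sha))
    return counts

if __name__ == "__main__":
    # Files with the most bug-fix commits come first.
    for path, n in hotspots().most_common(20):
        print(f"{n:4d}  {path}")
```

If you want something fancier than raw counts, weighting fixes by how recent they are (roughly what the Google write-up below describes) is a small extension on top of this.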
Google has an interesting write-up on this: Bug Prediction at Google.
As for your point that annotations are "stickier" than comments, i.e. that they have a better chance of surviving, I'd also wonder how often that difference matters in practice. What I see more often is bug comments in the code that are no longer informative and stick around longer than they should. If svn|hg|git|whatever blame on the relevant lines shows no commits related to the bug in question, the code has probably been rewritten several times since, yet the comment tags along.
Of course, I'm not saying what you describe never happens; I just wonder how often it does. If in your experience comments do tend to disappear while they would still be useful, then by all means, see whether annotations fare better.
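For what it's worth, that blame check is easy to script. Below is a minimal sketch (Python, git only, run from the repository root) that finds lines carrying a bug-reference comment and flags the ones where the code on the next line no longer blames to a commit mentioning that bug. The comment regex and the "look at the line directly below" heuristic are assumptions made purely for illustration.

```python
import re
import subprocess

# Assumed comment convention, e.g. "// fix for bug 1234" or "# BUG: 42".
BUG_COMMENT = re.compile(r"(?:#|//).*?[bB][uU][gG]:? *(\d+)")

def stale_bug_comments(path):
    """Return (line, bug id, blamed commit) for bug comments whose
    neighbouring code no longer blames to a commit mentioning that bug."""
    stale = []
    with open(path) as f:
        lines = f.readlines()
    for lineno, text in enumerate(lines, start=1):
        m = BUG_COMMENT.search(text)
        if not m or lineno >= len(lines):
            continue
        bug_id = m.group(1)
        code_line = lineno + 1  # crude heuristic: the comment describes the next line
        blame = subprocess.run(
            ["git", "blame", "--porcelain", "-L", f"{code_line},{code_line}", path],
            capture_output=True, text=True, check=True).stdout
        sha = blame.split()[0]
        message = subprocess.run(
            ["git", "show", "-s", "--format=%B", sha],
            capture_output=True, text=True, check=True).stdout
        if bug_id not in message:
            stale.append((lineno, bug_id, sha[:10]))
    return stale

if __name__ == "__main__":
    for lineno, bug_id, sha in stale_bug_comments("src/example.py"):
        print(f"line {lineno}: comment mentions bug {bug_id}, "
              f"but the code below last changed in {sha}")
```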