When you create unit tests, they must all pass. This does not mean you should not test for "failure" cases; it just means that the test itself should pass when the code under test "fails" as expected.
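For illustration, here is a minimal sketch in Python with pytest (the `withdraw` function and its behavior are made up for the example): the test asserts that the expected failure occurs, so the test itself passes.

```python
import pytest

def withdraw(balance, amount):
    """Hypothetical function under test: rejects overdrafts."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_withdraw_rejects_overdraft():
    # The "failure" case: withdrawing more than the balance must raise.
    # pytest.raises makes this test PASS exactly when the error occurs.
    with pytest.raises(ValueError):
        withdraw(balance=100, amount=500)
```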
That way, you do not have to go through a (preferably large) number of tests and manually verify which ones were supposed to pass and which were supposed to fail; having to do that would defeat the goal of automation.
As Mark Rotteveel notes in the comments, just checking that something failed is not always enough; verify that the failure is the correct failure. For example, if you use error codes, where error_code equal to 0 indicates success, and you want to make sure a call fails, do not just check error_code != 0; instead check, for example, that error_code == 19, or whatever matches the expected error code.
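A quick sketch of the difference, keeping the error-code convention from above (the `open_resource` function and the meaning of code 19 are assumptions for the example):

```python
def open_resource(name):
    """Hypothetical function returning an error code (0 = success)."""
    if name == "":
        return 19  # assumed to mean "invalid name" in this made-up scheme
    return 0

def test_open_resource_reports_invalid_name():
    error_code = open_resource("")
    # Too weak: assert error_code != 0  -- that passes for ANY failure.
    # Better: pin down the exact failure you expect.
    assert error_code == 19
```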
Edit
There is one more point I would like to add. Although the final version of the code you deploy should not have failing tests, the best way to ensure you are writing the correct code is to write your tests before writing the rest of the code. Before you make any change to the source code, write a unit test (or, ideally, a few unit tests) that fails (or does not compile) now, but will pass once your change is made. This is a good way to make sure the tests you write verify the right thing. To summarize: your final product should have no failing unit tests, but the development process should include periods when you have written unit tests that do not yet pass.
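As a sketch of that workflow (all names here are hypothetical), you would write a test like this while the function does not exist yet; it fails at first, then passes once you implement the function:

```python
def test_slugify_replaces_spaces_with_hyphens():
    # At the "red" stage, mylib.text.slugify does not exist yet,
    # so this test fails with an ImportError. After you implement
    # slugify, the same test passes unchanged.
    from mylib.text import slugify
    assert slugify("Hello World") == "hello-world"
```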