You create test data in advance that represents the kind of data you will receive in production, and then exercise your code against it, repopulating the table every time you run the test (i.e. in your SetUp() function).
You cannot verify the actual data you will receive in production, no matter what you test. You only verify that the code works as expected for a given scenario. For example, if you load a test table with five rows of blue cars, then you expect your report to show five blue cars when you run the test. You test the report piece by piece, so that by the time you are done, the report as a whole is effectively covered.
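A minimal sketch of that idea, assuming Python's unittest and an in-memory SQLite table; the `cars` table and the `count_cars_by_color` report function are hypothetical stand-ins for your own schema and report code:

```python
import sqlite3
import unittest


def count_cars_by_color(conn, color):
    # Hypothetical "report" under test: counts rows matching a color.
    row = conn.execute(
        "SELECT COUNT(*) FROM cars WHERE color = ?", (color,)
    ).fetchone()
    return row[0]


class BlueCarReportTest(unittest.TestCase):
    def setUp(self):
        # Rebuild the test table before every test so each run starts
        # from the same known data set.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE cars (id INTEGER PRIMARY KEY, color TEXT)")
        self.conn.executemany(
            "INSERT INTO cars (color) VALUES (?)",
            [("blue",)] * 5,
        )
        self.conn.commit()

    def tearDown(self):
        self.conn.close()

    def test_report_shows_five_blue_cars(self):
        # The fixture holds exactly five blue cars, so the report
        # is expected to count exactly five.
        self.assertEqual(count_cars_by_color(self.conn, "blue"), 5)


if __name__ == "__main__":
    unittest.main()
```

The point is not the SQL; it is that the fixture data is known in advance, so the expected result of the report is known before the test runs.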
As a comparison, if you tested a function that expects a positive integer from 1 to 100, would you write 100 tests to cover every single integer? No, you would test a value within the range and then values at the boundaries (e.g. -1, 0, 1, 50, 99, 100, and 101). You do not test, for example, 55, because that test would exercise the same code path as 50.
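A sketch of that boundary-value approach, again with unittest; `is_valid_rating` is a hypothetical function standing in for whatever expects an integer from 1 to 100:

```python
import unittest


def is_valid_rating(value):
    # Hypothetical function under test: True only for 1..100 inclusive.
    return 1 <= value <= 100


class RatingBoundaryTest(unittest.TestCase):
    def test_values_inside_the_range_are_accepted(self):
        # One representative value plus the edges is enough; 55 would
        # follow the same code path as 50, so it adds no coverage.
        for value in (1, 50, 99, 100):
            with self.subTest(value=value):
                self.assertTrue(is_valid_rating(value))

    def test_values_outside_the_range_are_rejected(self):
        for value in (-1, 0, 101):
            with self.subTest(value=value):
                self.assertFalse(is_valid_rating(value))


if __name__ == "__main__":
    unittest.main()
```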
Define your code paths and requirements, then create the appropriate tests for each one. Your tests will reflect your requirements. If the tests pass, the code is an accurate representation of those requirements (and if your requirements are wrong, TDD cannot save you from that).