How to measure the time profile of each django test?

I would like to measure the wall-clock time that each individual test case takes to run.

I suppose wrapping the test runner in timeit would do the job, but before I go down that rabbit hole, is there perhaps a smarter way to do this?

Running the suite under cProfile already gives me profiling output, but nothing really jumps out as terribly bad. I figure my time is best spent on the tests that take the longest.

time python -m cProfile -o keep-p4-serialize.profile manage.py test -v 3 -k --parallel 4 
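The dumped profile can be inspected with the standard-library pstats module to find where the time goes. A minimal sketch, profiling a stand-in function rather than the test suite (the names `slow_square` and `example.profile` are mine; substitute the `keep-p4-serialize.profile` dump from the command above):

```python
import cProfile
import pstats

def slow_square(n):
    # Stand-in for a slow test; in practice you would load the profile
    # that "manage.py test" was run under instead.
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
slow_square(100_000)
profiler.disable()
profiler.dump_stats("example.profile")

# Load the dump and show the 10 entries with the largest cumulative time.
stats = pstats.Stats("example.profile")
stats.sort_stats("cumulative").print_stats(10)
```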

For example:

    test_dependencies (api.tests.TestMetricClasses) ... ok (4.003s)
    test_metrics (api.tests.TestMetricClasses) ... ok (8.329s)
    test_parameters (api.tests.TestMetricClasses) ... ok (0.001s)
2 answers

This gets me what I need (relative timings), though it does not work for parallel tests, and there is certainly room to improve the accuracy of the timings. Nonetheless:

Override the default test runner in settings.py:

 TEST_RUNNER = 'myapp.test_runner.MyTestRunner' 

Then create your own test runner in myapp/test_runner.py:

    from django.test.runner import DiscoverRunner

    class MyTestRunner(DiscoverRunner):
        test_runner = TimedTextTestRunner

That runner, in turn, overrides the underlying TextTestRunner so that it uses a custom result class:

    from unittest.runner import TextTestRunner, TextTestResult

    class TimedTextTestRunner(TextTestRunner):
        resultclass = TimedTextTestResult

A single result object handles many tests, not just one, so we need one clock per test, keyed by the test instance. Record each test's start time in startTest, then print the elapsed time after the success line:

    from time import time

    class TimedTextTestResult(TextTestResult):

        def __init__(self, *args, **kwargs):
            super(TimedTextTestResult, self).__init__(*args, **kwargs)
            self.clocks = dict()

        def startTest(self, test):
            self.clocks[test] = time()
            # Deliberately skip TextTestResult.startTest so we control the output.
            super(TextTestResult, self).startTest(test)
            if self.showAll:
                self.stream.write(self.getDescription(test))
                self.stream.write(" ... ")
                self.stream.flush()

        def addSuccess(self, test):
            # Skip TextTestResult.addSuccess, which would print a plain "ok".
            super(TextTestResult, self).addSuccess(test)
            if self.showAll:
                self.stream.writeln("ok-dokey (%.6fs)" % (time() - self.clocks[test]))
            elif self.dots:
                self.stream.write('.')
                self.stream.flush()

This gives me test reports that look like this:

    test_price_impact (api.tests.TestGroupViews) ... ok-dokey (3.123600s)
    test_realised_spread (api.tests.TestGroupViews) ... ok-dokey (6.894571s)
    test_sqrt_trade_value (api.tests.TestGroupViews) ... ok-dokey (0.147969s)
    test_trade_count_share (api.tests.TestGroupViews) ... ok-dokey (3.124844s)
    test_trade_size (api.tests.TestGroupViews) ... ok-dokey (3.134234s)
    test_value_share (api.tests.TestGroupViews) ... ok-dokey (2.939364s)

nose has a timer plugin that records the wall-clock time of each individual test:

https://github.com/mahmoudimus/nose-timer/tree/master/nosetimer

The nose Cobertura XML report also shows the time spent in each test by default.


For Django specifically, there are a number of simple optimizations that can noticeably reduce test execution time:

  • Use SQLite if you are not depending on database-specific functionality
  • Use the MD5 password hasher (or no password hashing at all)
  • Disable migrations
  • Remove I/O, and isolate logic as much as possible to avoid complex model dependencies
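The first three points can be combined in a test-only settings module. A minimal sketch, assuming a standard Django project layout (the setting names are standard Django; the `DisableMigrations` trick and file name `test_settings.py` are my own conventions, not from the answer):

```python
# test_settings.py -- speed-oriented overrides for running tests only.

# In-memory SQLite: fine as long as no DB-specific features are exercised.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": ":memory:",
    }
}

# MD5 is far cheaper than Django's default PBKDF2 password hasher.
PASSWORD_HASHERS = ["django.contrib.auth.hashers.MD5PasswordHasher"]


class DisableMigrations:
    """Tell Django that every app has no migrations, so test tables are
    created directly from the current models instead of replaying them."""

    def __contains__(self, item):
        return True

    def __getitem__(self, item):
        return None


MIGRATION_MODULES = DisableMigrations()
```

Run the suite with something like `python manage.py test --settings=myproject.test_settings` (module path assumed).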

How many tests are in test_dependencies and test_metrics, and what do those tests do?


Source: https://habr.com/ru/post/1257368/

