I don't know of any dedicated tools for this, but JUnit's @Test annotation has an optional timeout parameter:
The second optional parameter, timeout, causes a test to fail if it takes longer than a specified amount of clock time (measured in milliseconds). The following test fails:
@Test(timeout=100) public void infinity() { while(true); }
So you can write additional unit tests to verify that certain parts work "fast enough." Of course, you first need to decide what the maximum acceptable time for a given task is.
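To make the idea concrete, here is a minimal sketch (not from the original answer) of enforcing a time budget by hand, without JUnit's timeout parameter. The class name, the 200 ms budget, and the sample workload are all illustrative assumptions:

```java
// Sketch: manually enforcing a time budget on a piece of code.
// Useful when you want custom logic around the limit instead of
// JUnit's built-in timeout. All names and numbers are illustrative.
public class TimeBudget {

    // Runs the task and returns the elapsed wall-clock time in milliseconds.
    static long timeMillis(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long elapsed = timeMillis(() -> {
            long sum = 0;                       // stand-in workload
            for (int i = 0; i < 1_000_000; i++) sum += i;
        });
        long budgetMs = 200;                    // chosen maximum, with headroom
        if (elapsed > budgetMs) {
            throw new AssertionError("Too slow: " + elapsed + " ms > " + budgetMs + " ms");
        }
        System.out.println("Within budget: " + elapsed + " ms");
    }
}
```

The same check could of course live inside a JUnit test method instead of main; the point is only that the budget is an explicit number you chose in advance.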
---
If your second question is the one that matters, here are the problems I see:
- Variability depending on the environment in which it runs.
There will always be some variability, but to minimize it I would use Hudson or a similar automated build-and-test server to run the tests, so the environment is the same on every run (though if the machine running Hudson also runs other kinds of jobs, those jobs can still affect the results). You should take this into account when setting the maximum runtime for a test: leave some headroom, so that if the test takes, say, 5% longer than usual, it does not fail immediately.
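As an illustration of the headroom idea, the timeout can be derived from a measured baseline plus a safety margin. This tiny sketch is my own; the 1.5 factor and the names are assumptions, not part of the answer:

```java
// Sketch: deriving a test timeout from a measured baseline runtime
// plus a headroom factor, so normal run-to-run variance does not
// trip the test. The factor 1.5 is an arbitrary illustrative choice.
public class Headroom {

    // baselineMs: typical runtime measured on the build server.
    // headroomFactor: multiplier > 1.0 to absorb normal variance.
    static long timeoutFor(long baselineMs, double headroomFactor) {
        return (long) Math.ceil(baselineMs * headroomFactor);
    }

    public static void main(String[] args) {
        // A task that normally takes 100 ms gets a 150 ms budget.
        System.out.println(timeoutFor(100, 1.5)); // prints 150
    }
}
```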
- How to detect changes, since microbenchmarks in Java have a lot of variance.
Microbenchmarks in Java are rarely reliable, so I would test larger chunks with integration tests (for example, processing a single HTTP request, or whatever fits your application) and measure the total time. If the test fails because it took too long, isolate the offending code by profiling, or measure and report the runtimes of the individual parts of the test during the run to see which part takes the most time.
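The "measure the individual parts" suggestion can be sketched as a small per-phase timer that records how long each step of a larger test takes. This is an illustrative sketch of mine; the phase names and empty bodies stand in for real test steps:

```java
// Sketch: timing the individual phases of a larger integration test,
// so that when the total exceeds its budget you can see which phase
// is the slow one. Phase names and bodies are illustrative.
import java.util.LinkedHashMap;
import java.util.Map;

public class PhaseTimer {
    // Insertion-ordered map: phase name -> elapsed milliseconds.
    private final Map<String, Long> phases = new LinkedHashMap<>();

    void time(String name, Runnable phase) {
        long start = System.nanoTime();
        phase.run();
        phases.put(name, (System.nanoTime() - start) / 1_000_000);
    }

    Map<String, Long> results() { return phases; }

    public static void main(String[] args) {
        PhaseTimer t = new PhaseTimer();
        t.time("parse request", () -> { /* parse the incoming request */ });
        t.time("handle request", () -> { /* run the request handler */ });
        t.results().forEach((name, ms) ->
                System.out.println(name + ": " + ms + " ms"));
    }
}
```

Printed per-phase times like these make it obvious where to point the profiler when the overall test starts missing its budget.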
- If Caliper collects the results, how do I get them out of Caliper so they can be saved in a custom format? Caliper's documentation is insufficient.
Unfortunately, I don't know anything about Caliper.