For example, Reductio (for Java/Scala) and QuickCheck (for Haskell). The kind of infrastructure I have in mind provides "generators" for the built-in data types and lets the programmer define new generators. The programmer then writes a test method that asserts a property by accepting parameters of the corresponding types. The framework generates random data for those parameters and runs the method hundreds of times.
For example, if I implemented a Vector class with an add() method, I could check that my addition commutes. So I would write something like this (in pseudocode):
boolean testAddCommutes(Vector v1, Vector v2) { return v1.add(v2).equals(v2.add(v1)); }
I could run testAddCommutes() on two specific vectors to check whether addition commutes for those inputs. But instead of writing a handful of calls to testAddCommutes, I write a procedure that generates arbitrary Vectors. Given that, the framework can run testAddCommutes on hundreds of different inputs.
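Here is a minimal hand-rolled sketch of the idea in plain Java, without any framework. The `Vec` class, the `arbitraryVec()` generator, and the trial count are all made up for illustration; a real tool like Reductio or QuickCheck would supply the generation and the test loop for you.

```java
import java.util.Random;

// Hypothetical 2-D vector class for illustration (not java.util.Vector).
class Vec {
    final double x, y;
    Vec(double x, double y) { this.x = x; this.y = y; }

    Vec add(Vec o) { return new Vec(x + o.x, y + o.y); }

    @Override public boolean equals(Object o) {
        if (!(o instanceof Vec)) return false;
        Vec v = (Vec) o;
        return x == v.x && y == v.y;
    }
    @Override public int hashCode() {
        return 31 * Double.hashCode(x) + Double.hashCode(y);
    }
}

public class PropertyTest {
    static final Random RNG = new Random();

    // Generator: produces an arbitrary Vec. Integral components keep
    // equals() exact, sidestepping floating-point comparison issues.
    static Vec arbitraryVec() {
        return new Vec(RNG.nextInt(1000), RNG.nextInt(1000));
    }

    // The property under test: addition commutes.
    static boolean testAddCommutes(Vec v1, Vec v2) {
        return v1.add(v2).equals(v2.add(v1));
    }

    public static void main(String[] args) {
        // This loop is the framework's job, done by hand here:
        // run the property on hundreds of randomly generated inputs.
        for (int i = 0; i < 200; i++) {
            Vec v1 = arbitraryVec(), v2 = arbitraryVec();
            if (!testAddCommutes(v1, v2)) {
                System.out.println("FAIL on trial " + i);
                return;
            }
        }
        System.out.println("OK: 200 trials passed");
    }
}
```

A real property-testing framework adds the parts this sketch omits: it discovers properties by their signatures, composes generators for parameter types automatically, and shrinks a failing input to a minimal counterexample instead of just reporting the first failure.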
Does that sound like what you're looking for?