Numerical Regression Testing

I am working on scientific computational code (written in C ++), and in addition to performing unit tests for small components, I would like to do regression testing on some numerical outputs compared to the “known” good “answer from previous changes. There are several Features I would like:

  • Allow comparing numbers with a given tolerance (both for rounding errors and for weaker expectations)
  • The ability to distinguish between ints, double, etc. and ignore text if necessary
  • Well-formatted output to let you know what went wrong and where: in the data table with multiple columns, specify only the column entry that is different
  • Returns EXIT_SUCCESSor EXIT_FAILUREdepending on whether the files match

Are there any good scripts or applications that do this, or will I have to fold my own in Python to read and compare the output files? Of course, I am not the first person with such requirements.

[The following is not significant, but it may influence the decision of what to do. I use CMake and its CTest built-in functionality to test unit tests that use the Google Test platform. I believe it’s not difficult to add a few statements add_custom_commandto mine CMakeLists.txtto invoke any necessary regression software.]

+3
4

Python script, , .

#!/usr/bin/env python

import sys
import re
from optparse import OptionParser
from math import fabs

splitPattern = re.compile(r',|\s+|;')

class FailObject(object):
    def __init__(self, options):
        self.options = options
        self.failure = False

    def fail(self, brief, full = ""):
        print ">>>> ", brief
        if options.verbose and full != "":
            print "     ", full
        self.failure = True


    def exit(self):
        if (self.failure):
            print "FAILURE"
            sys.exit(1)
        else:
            print "SUCCESS"
            sys.exit(0)

def numSplit(line):
    list = splitPattern.split(line)
    if list[-1] == "":
        del list[-1]

    numList = [float(a) for a in list]
    return numList

def softEquiv(ref, target, tolerance):
    if (fabs(target - ref) <= fabs(ref) * tolerance):
        return True

    #if the reference number is zero, allow tolerance
    if (ref == 0.0):
        return (fabs(target) <= tolerance)

    #if reference is non-zero and it failed the first test
    return False

def compareStrings(f, options, expLine, actLine, lineNum):
    ### check that they're a bunch of numbers
    try:
        exp = numSplit(expLine)
        act = numSplit(actLine)
    except ValueError, e:
#        print "It looks like line %d is made of strings (exp=%s, act=%s)." \
#                % (lineNum, expLine, actLine)
        if (expLine != actLine and options.checkText):
            f.fail( "Text did not match in line %d" % lineNum )
        return

    ### check the ranges
    if len(exp) != len(act):
        f.fail( "Wrong number of columns in line %d" % lineNum )
        return

    ### soft equiv on each value
    for col in range(0, len(exp)):
        expVal = exp[col]
        actVal = act[col]
        if not softEquiv(expVal, actVal, options.tol):
            f.fail( "Non-equivalence in line %d, column %d" 
                    % (lineNum, col) )
    return

def run(expectedFileName, actualFileName, options):
    # message reporter
    f = FailObject(options)

    expected  = open(expectedFileName)
    actual    = open(actualFileName)
    lineNum   = 0

    while True:
        lineNum += 1
        expLine = expected.readline().rstrip()
        actLine = actual.readline().rstrip()

        ## check that the files haven't ended,
        #  or that they ended at the same time
        if expLine == "":
            if actLine != "":
                f.fail("Tested file ended too late.")
            break
        if actLine == "":
            f.fail("Tested file ended too early.")
            break

        compareStrings(f, options, expLine, actLine, lineNum)

        #print "%3d: %s|%s" % (lineNum, expLine[0:10], actLine[0:10])

    f.exit()

################################################################################
if __name__ == '__main__':
    parser = OptionParser(usage = "%prog [options] ExpectedFile NewFile")
    parser.add_option("-q", "--quiet",
                      action="store_false", dest="verbose", default=True,
                      help="Don't print status messages to stdout")

    parser.add_option("--check-text",
                      action="store_true", dest="checkText", default=False,
                      help="Verify that lines of text match exactly")

    parser.add_option("-t", "--tolerance",
                      action="store", type="float", dest="tol", default=1.e-15,
                      help="Relative error when comparing doubles")

    (options, args) = parser.parse_args()

    if len(args) != 2:
        print "Usage: numdiff.py EXPECTED ACTUAL"
        sys.exit(1)

    run(args[0], args[1], options)
0
+3

ndiff , : diff, .

0

, , nrtest, . , .

Here is a quick overview. Each test is determined by its input files and the expected output files. After execution, the output files are stored in a portable control directory. The second step then compares this test with the reference standard. A recent update included custom extensions, so you can define comparison functions for your user data.

Hope this helps.

0
source

Source: https://habr.com/ru/post/1711445/


All Articles