I have a number of functions that serve to classify data. Each function is passed the same input. The point of this system is to be able to drop classification functions in or out at will, without needing to adjust anything else.
To do this, I use a classes_in_module function, taken from here. Every classifier defined in a single Python file is then run against each input.
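(For reference, I haven't reproduced the exact implementation from that link, but the classes_in_module helper I mean is roughly along these lines, sketched with the standard inspect module:

import inspect

def classes_in_module(module):
    # Return the classes defined in the module itself,
    # skipping anything merely imported into it.
    return [obj for _, obj in inspect.getmembers(module, inspect.isclass)
            if obj.__module__ == module.__name__]

)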
However, implementing the classifiers as either classes or functions feels kludgy to me. Classes mean instantiation followed by execution, while plain functions lack clean introspection that would let me query a classifier's name or use inheritance to define common values.
Here is an example. First, the class-based implementation:
class AbstractClassifier(object):
    @property
    def name(self):
        return self.__class__.__name__


class ClassifierA(AbstractClassifier):
    def __init__(self, data):
        self.data = data

    def run(self):
        return 1
This can then be used as follows, where classifier_list is the result of calling classes_in_module on a file containing ClassifierA among other classifiers:
result = []
for classifier in classifier_list:
    c = classifier(data)
    result.append(c.run())
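For illustration only, the shared name property is what would let me label each result by the classifier that produced it, something like:

result = {}
for classifier in classifier_list:
    c = classifier(data)
    result[c.name] = c.run()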
However, this seems a little silly. The class is essentially static: it does not need to maintain any state of its own, since it is instantiated once and then discarded. Each classifier is really just a function, but as a plain function I lose the common name property - I would have to use the ugly introspection trick sys._getframe().f_code.co_name and replicate that code in every classifier function. Any other properties shared between the classifiers would be lost as well.
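To make that concrete, the function-based version I am describing would look roughly like this (classifier_a is just an illustrative name), with the introspection line repeated in every classifier:

import sys

def classifier_a(data):
    # Each function has to dig its own name out of the current stack frame.
    name = sys._getframe().f_code.co_name
    return name, 1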
What do you think? Should I just accept this misuse of classes? Or is there a better way?