You will probably want to make your model compatible with scikit-learn so that you can use it with the rest of scikit-learn's functionality. If so, you should read this first: http://scikit-learn.org/stable/developers/contributing.html#rolling-your-own-estimator
In short: scikit-learn has many utilities, such as estimator cloning (the clone() function), meta-algorithms such as GridSearch and Pipeline, and cross-validation. All of them need to be able to read the values of the fields inside your estimator and to change those values (for example, GridSearch must change the parameters inside your estimator before each fit, e.g. the alpha parameter in SGDClassifier). To change the value of a parameter, it has to know the parameter's name. To get the names of all the fields of a class, the get_params method of BaseEstimator (which you inherit, possibly implicitly) requires that every parameter be listed in the class's __init__ method, because it simply introspects the __init__ signature for parameter names (look at BaseEstimator; this is the class that raises this error).
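For illustration, here is a minimal sketch of how get_params and set_params pick the names up from __init__ (the class MyModel and its parameters are invented for this example):

    from sklearn.base import BaseEstimator

    class MyModel(BaseEstimator):
        # get_params discovers parameter names by introspecting this signature
        def __init__(self, alpha=0.0001, n_iter=5):
            self.alpha = alpha
            self.n_iter = n_iter

    m = MyModel(alpha=0.01)
    print(m.get_params())    # {'alpha': 0.01, 'n_iter': 5}
    m.set_params(alpha=0.1)  # this is what GridSearch does before each fit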
So it simply wants you to remove all varargs, i.e.
*args, **kwargs
from __init__. You must list all of your model's parameters explicitly in the __init__ signature and initialize the corresponding fields of the object.
Here is an example: the __init__ method of SGDClassifier, which inherits from BaseSGDClassifier:
    def __init__(self, loss="hinge", penalty='l2', alpha=0.0001, l1_ratio=0.15,
                 fit_intercept=True, n_iter=5, shuffle=True, verbose=0,
                 epsilon=DEFAULT_EPSILON, n_jobs=1, random_state=None,
                 learning_rate="optimal", eta0=0.0, power_t=0.5,
                 class_weight=None, warm_start=False, average=False):
        super(SGDClassifier, self).__init__(
            loss=loss, penalty=penalty, alpha=alpha, l1_ratio=l1_ratio,
            fit_intercept=fit_intercept, n_iter=n_iter, shuffle=shuffle,
            verbose=verbose, epsilon=epsilon, n_jobs=n_jobs,
            random_state=random_state, learning_rate=learning_rate,
            eta0=eta0, power_t=power_t, class_weight=class_weight,
            warm_start=warm_start, average=average)
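Applying the same pattern to your own model might look like the following minimal sketch (MeanThresholdClassifier and its threshold parameter are made up for illustration):

    import numpy as np
    from sklearn.base import BaseEstimator, ClassifierMixin
    from sklearn.model_selection import GridSearchCV

    class MeanThresholdClassifier(BaseEstimator, ClassifierMixin):
        # Every parameter is explicit in the signature -- no *args/**kwargs --
        # and __init__ does nothing but store them.
        def __init__(self, threshold=0.5):
            self.threshold = threshold

        def fit(self, X, y):
            self.classes_ = np.unique(y)
            return self  # nothing to learn in this toy model

        def predict(self, X):
            return (np.asarray(X).mean(axis=1) > self.threshold).astype(int)

    # GridSearch can now clone the estimator and call set_params(threshold=...)
    # precisely because `threshold` appears in the __init__ signature.
    X = np.random.RandomState(0).rand(20, 3)
    y = (X.mean(axis=1) > 0.5).astype(int)
    search = GridSearchCV(MeanThresholdClassifier(),
                          {'threshold': [0.3, 0.5, 0.7]}, cv=2)
    search.fit(X, y)
    print(search.best_params_)

Note that __init__ only stores the parameters; any validation or actual computation belongs in fit, so that clone() and set_params() keep working.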