Most Important / Contributing Features in the Sklearn MLP Classifier

I would like to know whether there is a way to visualize or find the most important / contributing features after fitting the MLP classifier in sklearn.

A simple example:

import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline


data = pd.read_csv('All.csv', header=None)
X, y = data.iloc[:, 0:249].values, data.iloc[:, 249].values

sc = StandardScaler()
mlc = MLPClassifier(activation='relu', random_state=1, nesterovs_momentum=True)
loo = LeaveOneOut()
pipe = make_pipeline(sc, mlc)

parameters = {
    "mlpclassifier__hidden_layer_sizes": [(168,), (126,), (498,), (166,)],
    "mlpclassifier__solver": ('sgd', 'adam'),
    "mlpclassifier__alpha": [0.001, 0.0001],
    "mlpclassifier__learning_rate_init": [0.005, 0.001],
}
clf = GridSearchCV(pipe, parameters, n_jobs=-1, cv=loo)
clf.fit(X, y)

model = clf.best_estimator_
print("the best model and parameters are the following: {}".format(model))
1 answer
Good question. The lack of interpretability of NN models is one of the problems that the ML / NN community is facing.

One recent approach that has attracted attention is the LIME paper (Ribeiro et al., KDD 2016). Here is the relevant excerpt from the abstract:

  • "In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction."

The paper is accompanied by an implementation on GitHub (Python, yay!).

(Note that LIME provides local, per-prediction explanations rather than a single global ranking of feature importances.)
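Beyond LIME, a complementary black-box option not mentioned in the answer is scikit-learn's own permutation importance: shuffle one column at a time and measure how much the fitted model's score drops. A minimal sketch on a toy dataset (standing in for the question's All.csv):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=3, random_state=1)

pipe = make_pipeline(StandardScaler(),
                     MLPClassifier(random_state=1, max_iter=500))
pipe.fit(X, y)

# Shuffle each feature n_repeats times; the mean drop in accuracy is
# that feature's importance for this fitted model.
result = permutation_importance(pipe, X, y, n_repeats=10, random_state=1)
ranking = np.argsort(result.importances_mean)[::-1]
for i in ranking[:5]:
    print(f"feature {i}: {result.importances_mean[i]:.3f}")
```

Unlike LIME's per-prediction view, this gives one global importance per feature, at the cost of extra model evaluations.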


Source: https://habr.com/ru/post/1678884/

