Nov-10-2020, 07:24 PM
I would like to measure the importance / coefficient of each of a modest number of inputs toward a regression output for a neural network. I spent the morning searching for the best way to do this and only found approaches that worked only sometimes or were rather complicated. I am wondering if there is some reason not to just add a delta to the inputs one at a time and rerun the predict function. I wrote the following function to use after standardizing the inputs and fitting a neural network to create the model.
I then used a delta of 0.01. The effects could also be rescaled if I wanted the importance weights to sum to one. It seems to work, so I am just wondering whether this is the preferred way to accomplish the task, and if it isn't, why not.
import numpy

def marginal_effects(Data, model, delta):
    # Estimate each input's marginal effect on the mean prediction
    # by perturbing one input at a time and re-running predict.
    inputcount = Data.shape[1]
    baselevel = model.predict(Data).mean()
    effects = numpy.zeros([inputcount, 1])
    for i in range(inputcount):
        # Add delta to input i only, leaving the other inputs unchanged
        deltaData = Data.copy()
        deltaData[:, i] += delta
        effects[i] = model.predict(deltaData).mean() - baselevel
    # Divide by delta to turn the differences into per-unit effects
    return effects / delta
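For reference, here is a minimal sketch of how the function might be called end to end. The scikit-learn MLPRegressor, the StandardScaler, and the synthetic data are illustrative assumptions, not part of the original setup:

# Minimal usage sketch; the model, scaler, and synthetic data below
# are assumptions for illustration only.
import numpy
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = numpy.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=500)

# Standard normalization of the inputs, then fit the network
Data = StandardScaler().fit_transform(X)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(Data, y)

effects = marginal_effects(Data, model, delta=0.01)
# Rescale the absolute effects so the importance weights sum to one
importance = numpy.abs(effects) / numpy.abs(effects).sum()
print(importance.ravel())

With a small delta this is just a finite-difference estimate of the average partial derivative of the prediction with respect to each input.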