Python Forum

Full Version: Neural Network importance weights / coefficients
I would like to measure the importance / coefficient of each of a modest number of inputs toward a regression output for a neural network. I spent the morning searching for the best way to do this and only found approaches that either worked inconsistently or were rather complicated. I am wondering if there is any reason not to simply add a delta to the inputs one at a time and rerun the predict function. I wrote the following function to use after applying standard normalization to the inputs and training a neural network to create the model.

import numpy

def marginal_effects(Data, model, delta):
    """Approximate the marginal effect of each input on the mean prediction."""
    inputcount = Data.shape[1]
    # Mean prediction on the unperturbed data.
    baselevel = model.predict(Data).mean()
    identity = numpy.identity(inputcount)
    effects = numpy.zeros([inputcount, 1])
    for i in range(inputcount):
        # Build a multiplier that scales column i by (1 + delta)
        # and leaves every other column unchanged.
        addedterm = numpy.zeros([inputcount, inputcount])
        addedterm[i, i] = delta
        multiplier = identity + addedterm
        deltaData = Data.dot(multiplier)
        # Change in the mean prediction caused by perturbing input i.
        effects[i] = model.predict(deltaData).mean() - baselevel
    # Divide by delta for a finite-difference estimate of each effect.
    return effects * (1 / delta)
I then used a delta of 0.01. The effects could also be rescaled if I wanted the importance weights to sum to one. It seems to work, so I am just wondering whether this is the preferred way to accomplish the task, and if it isn't, why not.
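For reference, here is a minimal, self-contained sketch of how I call the function and rescale the effects into importance weights that sum to one. It assumes scikit-learn's MLPRegressor and StandardScaler; the toy data, the network size, and the choice of taking absolute values before rescaling are all just for illustration.

import numpy
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Toy data: three inputs, one regression target driven mostly by the first input.
rng = numpy.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Standardize the inputs, then fit a small neural network.
X_scaled = StandardScaler().fit_transform(X)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_scaled, y)

effects = marginal_effects(X_scaled, model, 0.01)

# Rescale to importance weights that sum to one; absolute values keep
# positive and negative effects from cancelling each other out.
importance = numpy.abs(effects) / numpy.abs(effects).sum()
print(importance)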
I'm not sure weights in the traditional sense go with neural networks. They tend instead to be used as black-box prediction engines.

If you are looking for weights, I would use a linear regression model and get the coefficient matrix.
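A minimal sketch of that suggestion, assuming scikit-learn's LinearRegression and standardized inputs (the toy data below is made up for illustration): with standardized features the fitted coefficients are on a comparable scale and can serve as rough importance weights.

import numpy
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

# Toy data: the target depends mostly on the first feature.
rng = numpy.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Standardize so the coefficients are directly comparable across features.
X_scaled = StandardScaler().fit_transform(X)

reg = LinearRegression().fit(X_scaled, y)
print(reg.coef_)       # one coefficient per input feature
print(reg.intercept_)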