Python Forum

Full Version: How come after some iterations of gradient descent, the error starts growing?
Hello.
I want to fit a logistic curve to some data I have prepared.
The curve has a form of:
[Image: logistic function, f(x) = L / (1 + e^(-k(x - x0)))]
But I have also added a bias term to the function (+b) at the end.
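As a minimal sketch (not the original code, and the parameter names `L`, `k`, `x0`, `b` are assumptions), the model being fitted would look something like this:

```python
import math

def logistic(x, L, k, x0, b):
    """Logistic curve with an added bias term b.

    L  -- the curve's maximum value
    k  -- steepness of the curve
    x0 -- x-value of the midpoint
    b  -- extra bias term added to the standard form
    """
    return L / (1.0 + math.exp(-k * (x - x0))) + b
```

At the midpoint `x = x0` this evaluates to `L / 2 + b`, which is a quick sanity check for the implementation.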

I have calculated the derivatives with wolfram alpha, and hardcoded them.

At first everything seems to work fine, but after roughly 200 iterations the error starts to increase.
Please look at these screenshots:
Now, what could cause such strange behaviour?
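For illustration only (this is not the poster's code): one classic way this symptom appears is a learning rate that is too large, so each update overshoots the minimum and the error grows instead of shrinking. A minimal one-parameter sketch, minimizing the hypothetical loss f(w) = w^2:

```python
def gradient_descent(lr, steps=5, w=1.0):
    """Run a few gradient descent steps on f(w) = w**2 and record the error."""
    errors = []
    for _ in range(steps):
        grad = 2.0 * w        # derivative of w**2
        w = w - lr * grad     # gradient descent update
        errors.append(w * w)  # error after this step
    return errors

small = gradient_descent(lr=0.1)  # error shrinks every step (w -> 0.8 * w)
large = gradient_descent(lr=1.1)  # error grows every step (w -> -1.2 * w)
```

With `lr=1.1` the magnitude of `w` is multiplied by 1.2 at every step, so the error increases monotonically, which is the same qualitative symptom as a diverging curve fit.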

I will post the code, but I wasn't able to produce a more "minimal" example, so it's quite long: