How come after some iterations of gradient descent, the error starts growing?
I want to fit a logistic curve to some data I have prepared.
The curve has the form of the standard logistic function:

f(x) = L / (1 + exp(-k * (x - x0)))

But I have also added a bias term to the function (+b) at the end.

I have calculated the derivatives with Wolfram Alpha and hardcoded them.

At first everything seems to work fine, but after roughly 200 iterations the error value starts to increase.
Please look at these screenshots:
Now, what could cause such strange behaviour?
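My own guess (just an assumption on my part) is that a fixed learning rate can start overshooting the minimum once the parameters get close to it. On a simple quadratic it is easy to see how too large a step makes the error grow every single iteration:

```python
# Assumption: a step size that is too large makes plain gradient
# descent overshoot, so the error grows instead of shrinking.
# Minimal illustration on f(w) = w**2, whose gradient is 2*w.
w, lr = 1.0, 1.1          # any lr > 1.0 diverges for this function
errors = []
for _ in range(5):
    errors.append(w * w)  # current error f(w)
    w -= lr * (2 * w)     # gradient step; |w| grows by a factor of 1.2
print(errors)             # each entry is larger than the previous one
```

Could something similar be happening in my case, even though the error goes down for the first ~200 iterations?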

I will post the code, but I wasn't able to get a more "minimal" example, so it's quite long:
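To show the overall setup, here is a stripped-down sketch of what the script does (an illustrative reconstruction, not my actual code: it assumes squared-error loss, synthetic data, and plain gradient descent with the hand-coded partial derivatives):

```python
# Hypothetical minimal version: fit f(x) = L / (1 + exp(-k*(x - x0))) + b
# to noisy data by gradient descent on the mean squared error.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 100)
y = 2.0 / (1 + np.exp(-1.5 * (x - 0.5))) + 0.3 + rng.normal(0, 0.05, x.size)

L, k, x0, b = 1.0, 1.0, 0.0, 0.0    # initial parameter guesses
lr = 0.01                            # fixed learning rate

for _ in range(2000):
    z = np.exp(-k * (x - x0))
    s = 1.0 / (1.0 + z)              # logistic(k * (x - x0))
    pred = L * s + b
    r = pred - y                     # residuals
    # hand-coded partial derivatives of the mean squared error;
    # s * (1 - s) is the derivative of the sigmoid w.r.t. its argument
    common = 2.0 * r * L * s * (1.0 - s)
    dL  = np.mean(2.0 * r * s)
    dk  = np.mean(common * (x - x0))
    dx0 = np.mean(common * -k)
    db  = np.mean(2.0 * r)
    L, k, x0, b = L - lr * dL, k - lr * dk, x0 - lr * dx0, b - lr * db

print("fitted:", L, k, x0, b, "MSE:", np.mean(r ** 2))
```

The real script differs in the data loading and plotting parts, but the update step is essentially this.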

