May-20-2020, 06:55 PM
That didn't exactly work either, but -
In general, the loss decreases and accuracy increases as you run more training cycles in your model (called epochs). The downside of this is that you can "overfit" your model, so it gets really good at predicting the data in your training set, but when tested on other data the results start getting worse. From your graphs you appear to have hit the "sweet spot", where the accuracy and loss on the test set have flattened out, before they start getting worse.
Make sense?
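By the way, that flattening-out point is exactly what "early stopping" looks for: stop training once the validation loss hasn't improved for a few epochs. A minimal sketch in plain Python (the loss values and `patience` setting here are hypothetical, just to show the idea, not tied to your actual model):

```python
def best_epoch(val_losses, patience=2):
    """Return the 0-based epoch with the lowest validation loss,
    stopping once the loss has failed to improve for `patience` epochs."""
    best_loss, best_i, waited = float("inf"), 0, 0
    for i, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss, best_i, waited = loss, i, 0  # new best: reset the counter
        else:
            waited += 1
            if waited >= patience:  # loss has stopped improving: overfitting likely
                break
    return best_i

# Validation loss decreases, flattens, then rises (overfitting):
val_losses = [0.90, 0.55, 0.40, 0.35, 0.34, 0.36, 0.41]
print(best_epoch(val_losses))  # epoch 4 is the "sweet spot"
```

Most frameworks build this in (e.g. an early-stopping callback that monitors validation loss), so in practice you'd let the library stop training for you rather than eyeballing the graphs.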