Python Forum

Loss and Accuracy Figures
Please don't use links; rather, post your code here using bbcode tags.
[Images: loss and accuracy plots, posted as Google Drive links]
That didn't exactly work either, but -
In general, the loss decreases and the accuracy increases as you train your model for more passes over the data (epochs). The downside is that you can "overfit" your model, so it becomes really good at predicting the data in your training set, but when it's tested on other data the results start getting worse. From your graphs you appear to have hit the "sweet spot": the accuracy and loss on the test set have flattened out but haven't yet started getting worse.
Make sense?
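If you want training to stop automatically around that sweet spot instead of eyeballing the curves, an EarlyStopping callback is the usual trick. Here's a minimal sketch assuming you're using Keras; the toy data, layer sizes, and patience value are placeholders, not your actual model:
[python]
# Minimal sketch: stop training before the model overfits.
# x_train/y_train are toy placeholders; substitute your own arrays.
import numpy as np
import tensorflow as tf

x_train = np.random.rand(500, 20).astype("float32")
y_train = (x_train.sum(axis=1) > 10).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Halt training once validation loss stops improving for 5 epochs in a row,
# which is roughly the "sweet spot" described above.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True
)

history = model.fit(
    x_train, y_train,
    validation_split=0.2,   # hold out 20% of the data for validation
    epochs=100,
    callbacks=[early_stop],
    verbose=0,
)

# history.history holds the per-epoch loss/accuracy for both sets --
# these are the numbers behind plots like the ones you posted.
print(len(history.history["val_loss"]), "epochs actually run")
[/python]
With restore_best_weights=True the model keeps the weights from the epoch with the lowest validation loss, so you don't have to guess where the curves started turning upward.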