overfitting in 03_pytorch_computer_vision #672
Replies: 2 comments
-
I have the same question here. I am assuming we are saying that the model is overfitting because the training accuracy is slightly higher than the test accuracy. This happens in both models, so are we saying that both models are overfitting? However, the difference is only marginal in both cases, so it is not clear how much greater the training accuracy would have to be for the model to be classed as overfitting. Thanks in advance! EDIT: From a little research, I believe my assumption about the training accuracy being higher than the testing accuracy is correct. However, given such a marginal difference in this particular case, I don't believe the drop in accuracy of the non-linear model can be attributed to overfitting.
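To make the "marginal difference" point concrete, here is a minimal sketch of how you could quantify the train/test gap. The accuracy values below are made-up placeholders, not results from the notebook, and the helper name is hypothetical:

```python
# Rough sketch for quantifying the train/test accuracy gap discussed above.
# The numbers below are illustrative placeholders, not notebook results.

def generalization_gap(train_acc: float, test_acc: float) -> float:
    """Return the train-minus-test accuracy gap in percentage points."""
    return train_acc - test_acc

# Hypothetical final-epoch metrics for two models (made up for illustration):
model_0_gap = generalization_gap(train_acc=93.6, test_acc=91.0)
model_1_gap = generalization_gap(train_acc=92.2, test_acc=90.5)

for name, gap in [("model_0", model_0_gap), ("model_1", model_1_gap)]:
    # A gap of a point or two is usually noise; a large and *growing* gap
    # across epochs is the stronger sign of overfitting.
    print(f"{name}: train - test accuracy gap = {gap:.2f} percentage points")
```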
-
@HamidHamedi Yes, loss metrics are generally used to evaluate whether the model is learning effectively on both the training and test data. You should also compare loss metrics after 3-5 epochs (except during fine-tuning or transfer learning), since comparing in the very early epochs is not reliable: the model is only just starting to learn and is still close to random guessing.
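Here is a small, self-contained sketch of that idea: track train and test loss every epoch and only start comparing them once a few epochs have passed. The tiny model and random data are placeholders, not the notebook's FashionMNIST setup:

```python
import torch
from torch import nn

# Placeholder data and model, just to make the loop runnable.
torch.manual_seed(42)
X_train, y_train = torch.randn(200, 10), torch.randint(0, 2, (200,))
X_test, y_test = torch.randn(50, 10), torch.randint(0, 2, (50,))

model = nn.Sequential(nn.Linear(10, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(10):
    # Training step
    model.train()
    train_loss = loss_fn(model(X_train), y_train)
    optimizer.zero_grad()
    train_loss.backward()
    optimizer.step()

    # Evaluation step
    model.eval()
    with torch.inference_mode():
        test_loss = loss_fn(model(X_test), y_test)

    # Skip the noisy first few epochs before judging the train/test gap.
    if epoch >= 3:
        gap = test_loss.item() - train_loss.item()
        print(f"Epoch {epoch}: train={train_loss.item():.4f}, "
              f"test={test_loss.item():.4f}, gap={gap:+.4f}")
```

A test loss that keeps climbing while the train loss keeps falling is the pattern to watch for, rather than any single-epoch difference.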
-
Hello!
Right before 7. Building a CNN model, it is written that "it seems like our model is overfitting on the training data".
But how could we tell there is overfitting just by comparing model_1_results and model_0_results?
As far as I can see, model_1_results shows loss and accuracy for the test data only, but as far as I know, to conclude whether there is overfitting we also need to look at the loss on the training data. Am I right?
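One way to check this would be to run the same evaluation helper on both dataloaders so the train and test metrics can actually be compared side by side. The sketch below assumes an `eval_model`-style helper like the one defined earlier in the notebook (the exact signature here is an assumption and may differ, e.g. it may also take a `device` argument):

```python
# Hypothetical sketch: evaluate the same model on *both* dataloaders so the
# train and test metrics can be compared. `eval_model`, `model_1`, the
# dataloaders, `loss_fn` and `accuracy_fn` are assumed to come from the
# notebook; the signature below is an assumption.

model_1_train_results = eval_model(model=model_1,
                                   data_loader=train_dataloader,
                                   loss_fn=loss_fn,
                                   accuracy_fn=accuracy_fn)

model_1_test_results = eval_model(model=model_1,
                                  data_loader=test_dataloader,
                                  loss_fn=loss_fn,
                                  accuracy_fn=accuracy_fn)

# A train loss noticeably lower than the test loss (or train accuracy
# noticeably higher than test accuracy) is the usual sign of overfitting.
print(model_1_train_results)
print(model_1_test_results)
```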