Following on from the closed issue #35: it is clear why the input and output shapes of the model differ ('VALID' padding), and it should now be possible to create models with identical input and output spatial dimensions via #38.
However, it is unclear how the example notebook works. It appears to create labels (and features/inputs) with spatial dimensions (128, 128), while the model output is (60, 60). How was it possible to train this model when the label and model output shapes differ?
Following the original U-Net architecture, in which only the valid part of each convolution is used to avoid edge artefacts propagating through the network, we have implemented cropped versions of the losses and the metrics.
This means that the labels are cropped to the output size before the loss/metric is calculated. You can find this implementation in this line for the losses, via the cropped_loss function.
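To illustrate the idea (this is a minimal sketch, not the repository's exact code; the wrapper name and signature are assumptions), a cropped loss can centre-crop the labels to the prediction's spatial size before evaluating the wrapped loss, so (128, 128) labels can be trained against a (60, 60) output:

```python
import tensorflow as tf

def cropped_loss(loss_fn):
    """Wrap `loss_fn` so labels are centre-cropped to the prediction's
    spatial shape before the loss is computed (assumes NHWC tensors)."""
    def wrapped(y_true, y_pred):
        target_h = tf.shape(y_pred)[1]
        target_w = tf.shape(y_pred)[2]
        # Half of the excess on each border, e.g. (128 - 60) // 2 = 34.
        off_h = (tf.shape(y_true)[1] - target_h) // 2
        off_w = (tf.shape(y_true)[2] - target_w) // 2
        y_true_cropped = y_true[:, off_h:off_h + target_h,
                                   off_w:off_w + target_w, :]
        return loss_fn(y_true_cropped, y_pred)
    return wrapped

# Usage sketch: labels keep their (128, 128) shape in the dataset;
# only the loss sees the cropped (60, 60) central region.
# model.compile(optimizer="adam",
#               loss=cropped_loss(tf.keras.losses.binary_crossentropy))
```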
Hope this helps. Thanks again for the contribution (keep them coming 😄 )