The bug might be linked to #19403
I get different results from predict_on_batch and from serve (which should use model(x, training=False)), given the way I exported it.
It is probably not a conversion issue.
Another question: Do you get different results calling model.predict or model.predict_on_batch right after training the model, i.e. without going through the export process?
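(For example, a check along these lines, where the tiny model and batch are just stand-ins for the real ones:)

import numpy as np
import keras

# Stand-in model; substitute the actually trained one.
model = keras.Sequential([keras.Input(shape=(8,)), keras.layers.Dense(4)])
x = np.random.rand(2, 8).astype("float32")  # placeholder batch

out_predict = model.predict(x)
out_batch = model.predict_on_batch(x)
out_call = np.asarray(model(x, training=False))

print("predict vs predict_on_batch:", np.max(np.abs(out_predict - out_batch)))
print("predict vs direct call:", np.max(np.abs(out_predict - out_call)))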
Hi @SamanehSaadat, please find here a Colab that is self-contained.
It downloads the models plus the model definition on the fly.
Then it tests with Keras (JAX), the exported tf_model, and ONNX.
The "nice thing" is that locally (on my machine, also on GPU) I even get different results than on Colab.
Hi,
I am exporting a model like this:
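(The original snippet didn't survive formatting here; presumably it was something like Keras 3's model.export, which writes a TF SavedModel with a default serve endpoint. The call below is only a guess that matches the load path used further down:)

import os

# Presumed export call (Keras 3 Model.export); directory matches the load below.
model.export(os.path.join(MODEL_DIRECTORY, "la_ao"))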
Then I load it like:
model = tf.saved_model.load(os.path.join(MODEL_DIRECTORY, "la_ao"))
and I use it like:
model.serve(np.expand_dims(np.array(frame), axis=-1))
I get slightly different results than just using:
model.predict or model.predict_on_batch
Am I missing something silly in the conversion?
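(A minimal round trip showing the comparison, assuming Keras 3 on the TF or JAX backend; the toy model and shapes are placeholders:)

import os
import tempfile
import numpy as np
import keras
import tensorflow as tf

# Toy stand-in for the real model, just to make the round trip concrete.
model = keras.Sequential([keras.Input(shape=(16, 16, 1)), keras.layers.Conv2D(2, 3)])

export_dir = os.path.join(tempfile.mkdtemp(), "la_ao")
model.export(export_dir)  # writes a SavedModel with a default "serve" endpoint
loaded = tf.saved_model.load(export_dir)

frame = np.random.rand(16, 16).astype("float32")  # placeholder frame
batch = np.expand_dims(np.expand_dims(frame, 0), axis=-1)  # shape (1, 16, 16, 1)

serve_out = np.asarray(loaded.serve(batch))
keras_out = model.predict_on_batch(batch)

print("max abs diff:", np.max(np.abs(serve_out - keras_out)))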