Output discrepancy between keras.predict and tf_saved_model #20985

Open · edge7 opened this issue Mar 5, 2025 · 4 comments
@edge7 (Contributor) commented Mar 5, 2025

Hi,
I am exporting a model like this:

    import keras

    # Export the trained model with a single serving endpoint that runs
    # inference (training=False).
    export_archive = keras.export.ExportArchive()
    export_archive.track(model_tra)
    export_archive.add_endpoint(
        name="serve",
        fn=lambda x: model_tra.call(x, training=False),
        input_signature=[
            keras.InputSpec(shape=(None, 3, 320, 320, 1), dtype="float64")
        ],
    )
    export_archive.write_out(
        "/home/edge7/Desktop/projects/ing_edurso/wharfreenet-inference/models/la_ao"
    )

Then I load it like:

    model = tf.saved_model.load(os.path.join(MODEL_DIRECTORY, "la_ao"))

and use it like:

    model.serve(np.expand_dims(np.array(frame), axis=-1))

I get slightly different results than with model.predict or model.predict_on_batch.

Am I missing something silly in the conversion?
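For reference, a minimal way to quantify the gap (a sketch, assuming model_tra is the in-memory Keras model and model is the reloaded SavedModel from the snippets above; the tolerances are illustrative):

    import numpy as np

    # Sketch: compare the Keras model and the reloaded SavedModel endpoint
    # on the same batch. model_tra and model are assumed to be in scope.
    x = np.random.random((1, 3, 320, 320, 1)).astype("float64")

    keras_out = model_tra.predict_on_batch(x)
    serve_out = np.asarray(model.serve(x))

    # Illustrative tolerances; raises if the outputs diverge beyond them.
    np.testing.assert_allclose(keras_out, serve_out, rtol=1e-5, atol=1e-6)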

@edge7 (Contributor, Author) commented Mar 5, 2025

The bug might be linked to #19403.
I get different results between predict_on_batch and serve (which, given the way I exported it, should call model(x, training=False)).
It is probably not a conversion issue; a quick in-process check (below) would confirm that.
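A minimal in-process check, with no export involved (a sketch, assuming model_tra and a batch x are in scope):

    import numpy as np

    # If these two already differ in-process, the export step is not the culprit.
    out_predict = model_tra.predict_on_batch(x)
    out_call = np.asarray(model_tra(x, training=False))

    print("max abs diff:", np.max(np.abs(out_predict - out_call)))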

@dhantule added the keras-team-review-pending (Pending review by a Keras team member) label Mar 12, 2025
@SamanehSaadat (Member) commented Mar 12, 2025

Hi @edge7

Would it be possible to share your model code? Or could you share a repro Colab?

@SamanehSaadat (Member) commented:
Another question: Do you get different results calling model.predict or model.predict_on_batch right after training the model, i.e. without going through the export process?
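That is, something along these lines, run right after training (a sketch; names follow the earlier snippets):

    import numpy as np

    # Pre-export check: compare predict and predict_on_batch on the same batch.
    out_a = model_tra.predict(x, verbose=0)
    out_b = model_tra.predict_on_batch(x)

    print("max abs diff:", np.max(np.abs(out_a - out_b)))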

@edge7 (Contributor, Author) commented Mar 13, 2025

Hi @SamanehSaadat,
Please find here a Colab that is self-contained. It downloads the models plus the model definition on the fly, then tests with Keras (JAX), the TF SavedModel, and ONNX.

The "nice thing" is that locally (on my machine, also on GPU) I get results that differ even from the ones on Colab.

@SamanehSaadat self-assigned this Mar 20, 2025
@SamanehSaadat removed the keras-team-review-pending (Pending review by a Keras team member) label Mar 20, 2025