How do I create an app with my own TensorFlow model #339
MMelQin
started this conversation in
Show and tell
Replies: 1 comment
-
This is a great way to test wrapping a TensorFlow-based model in a custom operator, so that a proven or approved model inference can be reused in the MONAI Framework. In the documentation section Create Operator Classes, a developer can follow the examples there, including the MONAISegInferenceOperator example, which inherits from InferenceOperator.
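The wrapping described above can be sketched as a small operator-style class. This is a hedged illustration only: the actual MONAI Deploy base classes and their decorators vary by SDK version, so a plain Python class with a `compute()` method stands in for the real `Operator`/`InferenceOperator` base here, and `models/my_tf_model` is a hypothetical path.

```python
class TFInferenceOperator:
    """Sketch of an operator wrapping a pre-trained TensorFlow model.

    In a real app this would subclass the SDK's Operator (or
    InferenceOperator) base class; here it is a plain class for clarity.
    """

    def __init__(self, model_path):
        self._model_path = model_path
        self._model = None  # loaded lazily on first compute()

    def _ensure_loaded(self):
        if self._model is None:
            # Deferred import so the app can be inspected without
            # TensorFlow installed.
            import tensorflow as tf
            self._model = tf.keras.models.load_model(self._model_path)

    def compute(self, image):
        """Run inference on one input; mirrors an Operator's compute()."""
        self._ensure_loaded()
        return self._model.predict(image)
```

The lazy load keeps app start-up cheap and defers the TensorFlow dependency until the operator actually executes.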
-
For now, wrap the inference in an operator implementing the base Operator class, or even the InferenceOperator class, with the model embedded as static content in the app itself.
The model file needs to be inside the app folder so that it is picked up by the monai-deploy Packager when it packages the whole app folder; this also means the model path does not need to be supplied to the Packager command line via the -m option. At runtime, the inference operator loads the network from the relative file path, i.e. the path from the app root. Avoid using the model factory, as the factory would have no models to load.
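Resolving the model by its path relative to the app root can look like the following. The `model/saved_model` location is a hypothetical example; anchoring on this file's own directory keeps the path valid both when running locally and inside the packaged MAP container.

```python
from pathlib import Path

# Resolve the model location relative to the app's root (this file's
# directory), rather than relying on the current working directory.
APP_ROOT = Path(__file__).parent.resolve()
MODEL_PATH = APP_ROOT / "model" / "saved_model"  # hypothetical layout

# At inference time the operator would load the network from this path, e.g.:
#   model = tf.keras.models.load_model(str(MODEL_PATH))
```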
There are ways to avoid keeping the model inside the app folder, instead providing it when running the Packager so it is injected into the MONAI App Package, but this requires setting environment variables when running the MAP Docker container. I will go into this a little more later.
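When the model is injected at package time rather than bundled in the app folder, the operator has to discover its location inside the container at runtime. A minimal sketch, assuming the location is communicated through an environment variable: both the variable name `MODEL_PATH` and the fallback directory below are assumptions, so check what your Packager/Runner version actually sets.

```python
import os
from pathlib import Path

# Assumed container-side default; verify against your SDK version.
DEFAULT_MODEL_DIR = "/opt/monai/models"


def resolve_model_dir(env=None):
    """Return the model directory from the environment, with a fallback.

    "MODEL_PATH" is an assumed variable name for illustration.
    """
    env = os.environ if env is None else env
    return Path(env.get("MODEL_PATH", DEFAULT_MODEL_DIR))
```

Passing the environment mapping in explicitly (defaulting to `os.environ`) keeps the lookup easy to test without mutating the process environment.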