* feat: Adding profiling support to the runtime
Signed-off-by: Naren Dasan <[email protected]>
* refactor: A new TRTModule implementation using the internal runtime, which should give TorchScript support for free
Signed-off-by: Naren Dasan <[email protected]>
* feat: let Input generate random tensors following the spec
Signed-off-by: Naren Dasan <[email protected]>
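For illustration, a minimal sketch of generating a random tensor from an Input spec. The `example_tensor()` helper and its `optimization_profile_field` argument are assumptions here, not confirmed by this changelog.

```python
import torch
import torch_tensorrt

# Describe the expected input: dynamic batch dimension, fp16 precision.
spec = torch_tensorrt.Input(
    min_shape=(1, 3, 224, 224),
    opt_shape=(8, 3, 224, 224),
    max_shape=(32, 3, 224, 224),
    dtype=torch.half,
)

# Assumed helper: materialize a random tensor that satisfies the spec,
# here picking the optimization profile's "opt" shape.
example = spec.example_tensor(optimization_profile_field="opt_shape")
print(example.shape, example.dtype)  # torch.Size([8, 3, 224, 224]) torch.float16
```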
* feat!(//core/runtime): Allow the Runtime to use binding names to align I/O
BREAKING CHANGE: This commit contains an ABI version upgrade, meaning
that existing compiled modules will not work with this runtime.
Recompilation with a newer version of Torch-TensorRT will fix this.
This also amends the C++ API to allow users to explicitly set binding names
in the order in which they will be passed in and are expected to be returned.
This change is backwards compatible with the current API.
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
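A hedged sketch of what explicit binding names look like from Python; the `TRTModuleNext` name and the `input_binding_names` / `output_binding_names` arguments are assumptions based on later commits in this log, and the engine path is illustrative.

```python
import torch_tensorrt

# Read back a serialized TensorRT engine produced elsewhere (illustrative path).
with open("resnet_trt.engine", "rb") as f:
    serialized_engine = f.read()

# Assumed constructor arguments: binding names listed in the order the
# tensors will be passed in and are expected to be returned.
trt_mod = torch_tensorrt.TRTModuleNext(
    serialized_engine=serialized_engine,
    name="resnet_trt",
    input_binding_names=["input_0"],
    output_binding_names=["output_0"],
)
```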
* fix(//core/runtime): Resolving some issues with the runtime ABI
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
* feat(//core/runtime): Adding a TRT layer profiler
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
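A minimal sketch of turning the layer profiler on from Python, assuming the runtime module exposes `enable_profiling()` / `disable_profiling()` methods (the names and the results-directory argument are assumptions).

```python
import torch

# Assuming `trt_mod` is the runtime module from the previous sketch.
# Assumed API: toggle the TRT layer profiler and point it at a directory
# where per-layer timing traces are written.
trt_mod.enable_profiling(profiling_results_dir="/tmp/trt_profiles")

x = torch.randn(8, 3, 224, 224).half().cuda()
trt_mod(x)  # layer timings are recorded for this run

trt_mod.disable_profiling()
```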
* feat(//py): Exposed the new runtime in Python
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
* feat(//py/torch_tensorrt/fx): Compliant TRTModule implementation based on the shared Torch-TensorRT runtime
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
* refactor: CUDADevice -> RTDevice for a clearer distinction from the compile-time device
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
* feat(//examples): Demo showing that you can compile using FX and then deploy in TorchScript
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
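A sketch of the demo's shape: because the module is backed by the shared, TorchScript-compatible runtime, it can be traced and saved like any `nn.Module`. Assumes `trt_mod` was produced by the FX path with the shared runtime (see the experimental-runtime sketch further down).

```python
import torch

# The TRT-backed module behaves like a normal nn.Module, so it can be
# traced and serialized as TorchScript.
example_input = torch.randn(8, 3, 224, 224).half().cuda()
scripted = torch.jit.trace(trt_mod, [example_input])
torch.jit.save(scripted, "trt_fx_module.ts")

# The saved module can later be reloaded wherever the Torch-TensorRT
# runtime library is available.
reloaded = torch.jit.load("trt_fx_module.ts")
print(reloaded(example_input).shape)
```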
* refactor(//py/torch_tensorrt): Updates to existing APIs for use in fx
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
* feat(//core/runtime): Encode TRT engine in base64 instead of raw bytes
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
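The idea behind the base64 change, as a generic round-trip sketch (not the library's internal code): raw serialized engines are arbitrary bytes, which do not survive well as string attributes inside TorchScript, whereas base64 keeps them as plain ASCII.

```python
import base64

engine_bytes = open("resnet_trt.engine", "rb").read()  # illustrative path

encoded = base64.b64encode(engine_bytes).decode("ascii")  # safe to store as a string
decoded = base64.b64decode(encoded)                       # recover the original bytes

assert decoded == engine_bytes
```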
* feat(//py/torch_tensorrt/fx): Adding the option to use the experimental
runtime
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
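A hedged sketch of opting the FX frontend into the shared runtime; the entry point and the `use_experimental_fx_rt` flag name are assumptions based on this log, not a confirmed API.

```python
import torch
import torch_tensorrt

# A trivial stand-in model for illustration.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 16, 3), torch.nn.ReLU()).eval().cuda()

# Assumed flag: route the FX frontend onto the shared (experimental)
# Torch-TensorRT runtime instead of the pure-Python fx TRTModule.
trt_mod = torch_tensorrt.compile(
    model,
    ir="fx",
    inputs=[torch_tensorrt.Input((8, 3, 224, 224), dtype=torch.half)],
    enabled_precisions={torch.half},
    use_experimental_fx_rt=True,
)
```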
* fix(//core/runtime): Fixing a bug where an exception thrown in a downstream constructor would cause a segfault
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
* feat(//py/torch_tensorrt/TRTModule): Allow state_dict extraction
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
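With state_dict support, the serialized engine can be checkpointed and restored like ordinary module weights. A minimal sketch, assuming `trt_mod` is a TRTModuleNext instance and that the class can be constructed empty before loading (both assumptions, with illustrative paths).

```python
import torch
import torch_tensorrt

# Save: the serialized engine travels inside the state_dict like a weight.
torch.save(trt_mod.state_dict(), "trt_checkpoint.pt")

# Restore into a fresh module (assumes a default-constructible TRTModuleNext;
# otherwise construct it with the engine first and then load the state dict).
restored = torch_tensorrt.TRTModuleNext()
restored.load_state_dict(torch.load("trt_checkpoint.pt"))
```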
* chore: Addressing merge conflicts
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
* chore: lint
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
* chore: remove print statements
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
* fix: Fix cmake build
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
* refactor: Add a suffix to the new TRTModule class (TRTModuleNext) while it's experimental
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
* docs: Update docs and examples
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
* refactor: Reorder the API since everything but the engine is optional.
Also adds a new destructor to order cleanup.
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>