Releases: microsoft/onnxruntime

ONNX Runtime v1.21.0

08 Mar 05:33
e0b66ca

Announcements

  • No major announcements this release! We've made many small refinements to streamline your ONNX Runtime experience.

GenAI & Advanced Model Features

Enhanced Decoding & Pipeline Support

  • Added "chat mode" support for CPU, GPU, and WebGPU.
  • Provided support for decoder model pipelines.
  • Added support for Java API for MultiLoRA.

API & Compatibility Updates

Bug Fixes for Model Output

  • Fixed Phi series garbage output issues with long prompts.
  • Resolved gibberish issues with top_k on CPU.

Execution & Core Optimizations

Core Refinements

  • Reduced default logger usage for improved efficiency (#23030).
  • Fixed a visibility issue in the threadpool (#23098).

Execution Provider (EP) Updates

General

  • Removed the TVM EP from the source tree (#22827).
  • Marked NNAPI EP for deprecation (following Google's deprecation of NNAPI).
  • Fixed a DLL delay-loading issue that impacted the usability of the WebGPU EP and DirectML EP on Windows (#23111, #23227).

TensorRT EP Improvements

  • Added support for TensorRT 10.8.
  • Assigned DDS ops (NMS, RoiAlign, NonZero) to TensorRT by default.
  • Introduced option trt_op_types_to_exclude to exclude specific ops from TensorRT assignment.
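For illustration, a minimal sketch of the new option from Python; the model path and the excluded-op list are placeholders, not defaults:

```python
import onnxruntime as ort

# Keep the listed op types off TensorRT so they fall back to CUDA/CPU.
# "model.onnx" and the op list below are illustrative placeholders.
session = ort.InferenceSession(
    "model.onnx",
    providers=[
        ("TensorrtExecutionProvider",
         {"trt_op_types_to_exclude": "NonMaxSuppression,NonZero,RoiAlign"}),
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ],
)
```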

CUDA EP Improvements

  • Added a Python API, preload_dlls, to coexist with PyTorch (see the sketch after this list).
  • Miscellaneous enhancements for Flux model inference.
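A hedged sketch of the intended usage, assuming the default arguments load both the CUDA and cuDNN libraries; check help(onnxruntime.preload_dlls) for the exact signature in your version:

```python
import onnxruntime as ort

# Preload the CUDA/cuDNN shared libraries bundled with onnxruntime before
# PyTorch loads its own copies, so both frameworks resolve the same DLLs.
ort.preload_dlls()

import torch  # noqa: E402 -- deliberately imported after preloading
```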

QNN EP Improvements

  • Introduced QNN shared memory support.
  • Improved performance for AI Hub models.
  • Added support for QAIRT/QNN SDK 2.31.
  • Added Python 3.13 package.
  • Miscellaneous bug fixes and enhancements.
  • QNN EP is now built as a shared library/DLL by default. To retain previous build behavior, use build option --use_qnn static_lib.
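For illustration, a sketch of creating a session on the QNN EP with the new shared-memory support enabled. backend_path is a documented QNN EP option; the enable_htp_shared_memory_allocator name is an assumption to verify against the QNN EP documentation:

```python
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=[
        ("QNNExecutionProvider", {
            "backend_path": "QnnHtp.dll",               # HTP (NPU) backend on Windows
            "enable_htp_shared_memory_allocator": "1",  # assumed option name for the shared-memory feature
        }),
        "CPUExecutionProvider",
    ],
)
```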

DirectML EP Support & Upgrades

  • Updated DirectML version from 1.15.2 to 1.15.4 (#22635).

OpenVINO EP Improvements

  • Introduced OpenVINO EP Weights Sharing feature.
  • Added support for various contrib Ops in OVEP:
    • SkipLayerNormalization, MatMulNBits, FusedGemm, FusedConv, EmbedLayerNormalization, BiasGelu, Attention, DynamicQuantizeMatMul, FusedMatMul, QuickGelu, SkipSimplifiedLayerNormalization
  • Miscellaneous bug fixes and improvements.

VitisAI EP Improvements

  • Miscellaneous bug fixes and improvements.

Mobile Platform Enhancements

CoreML Updates

  • Added support for caching generated CoreML models.

Extensions & Tokenizer Improvements

Expanded Tokenizer Support

  • Now supports more tokenizer models, including ChatGLM, Baichuan2, Phi-4, etc.
  • Added full Phi-4 pre/post-processing support for text, vision, and audio.
  • Introduced RegEx pattern loading from tokenizer.json.

Image Codec Enhancements

  • ImageCodec now links to native APIs if available; otherwise, falls back to built-in libraries.

Unified Tokenizer API

  • Introduced a new tokenizer op schema to unify the tokenizer codebase.
  • Added support for loading tokenizer data from a memory blob in the C API.

Infrastructure & Build Improvements

Runtime Requirements

All prebuilt Windows packages now require VC++ Runtime version >= 14.40 (instead of 14.38). If your VC++ runtime version is lower, you may see a crash during ONNX Runtime initialization. See https://github.com/microsoft/STL/wiki/Changelog#vs-2022-1710 for more details.

Updated minimum iOS and Android SDK requirements to align with React Native 0.76:

  • iOS >= 15.1
  • Android API >= 24 (Android 7)

All macOS packages now require macOS version >= 13.3.

CMake File Changes

  • CMake Version: Increased the minimum required CMake version from 3.26 to 3.28.
  • Python Version: Increased the minimum required Python version from 3.8 to 3.10 for building ONNX Runtime from source.
  • Improved VCPKG support.

Added the following cmake options for the WebGPU EP:

  • onnxruntime_USE_EXTERNAL_DAWN
  • onnxruntime_CUSTOM_DAWN_SRC_PATH
  • onnxruntime_BUILD_DAWN_MONOLITHIC_LIBRARY
  • onnxruntime_ENABLE_PIX_FOR_WEBGPU_EP
  • onnxruntime_ENABLE_DAWN_BACKEND_VULKAN
  • onnxruntime_ENABLE_DAWN_BACKEND_D3D12

  • Added cmake option onnxruntime_BUILD_QNN_EP_STATIC_LIB for building QNN EP as a static library.
  • Removed cmake option onnxruntime_USE_PREINSTALLED_EIGEN.

Fixed a build issue with Visual Studio 2022 17.3 (#23911).

Modernized Build Tools

  • Now using VCPKG for most package builds.
  • Upgraded Gradle from 7.x to 8.x.
  • Updated JDK from 11 to 17.
  • Enabled onnxruntime_USE_CUDA_NHWC_OPS by default for CUDA builds.
  • Added support for WASM64 (build from source; no package published).

Dependency Cleanup

  • Removed Google’s nsync from dependencies.

Others

Updated the Node.js installation script to support network proxy usage (#23231).

Web

  • No updates of note.

Contributors

Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:

Changming Sun, Yulong Wang, Tianlei Wu, Jian Chen, Wanming Lin, Adrian Lizarraga, Hector Li, Jiajia Qin, Yifan Li, Edward Chen, Prathik Rao, Jing Fang, shiyi, Vincent Wang, Yi Zhang, Dmitri Smirnov, Satya Kumar Jandhyala, Caroline Zhu, Chi Lo, Justin Chu, Scott McKay, Enrico Galli, Kyle, Ted Themistokleous, dtang317, wejoncy, Bin Miao, Jambay Kinley, Sushanth Rajasankar, Yueqing Zhang, amancini-N, ivberg, kunal-vaishnavi, liqun Fu, Corentin Maravat, Peishen Yan, Preetha Veeramalai, Ranjit Ranjan, Xavier Dupré, amarin16, jzm-intel, kailums, xhcao, A-Satti, Aleksei Nikiforov, Ankit Maheshkar, Javier Martinez, Jianhui Dai, Jie Chen, Jon Campbell, Karim Vadsariya, Michael Tyler, PARK DongHa, Patrice Vignola, Pranav Sharma, Sam Webster, Sophie Schoenmeyer, Ti-Tai Wang, Xu Xing, Yi-Hong Lyu, genmingz@AMD, junchao-zhao, sheetalarkadam, sushraja-msft, Akshay Sonawane, Alexis Tsogias, Ashrit Shetty, Bilyana Indzheva, Chen Feiyue, Christian Larson, David Fan, David Hotham, Dmitry Deshevoy, Frank Dong, Gavin Kinsey, George Wu, Grégoire, Guenther Schmuelling, Indy Zhu, Jean-Michaël Celerier, Jeff Daily, Joshua Lochner, Kee, Malik Shahzad Muzaffar, Matthieu Darbois, Michael Cho, Michael Sharp, Misha Chornyi, Po-Wei (Vincent), Sevag H, Takeshi Watanabe, Wu, Junze, Xiang Zhang, Xiaoyu, Xinpeng Dou, Xinya Zhang, Yang Gu, Yateng Hong, mindest, mingyue, raoanag, saurabh, shaoboyan091, sstamenk, tianf-fff, wonchung-microsoft, xieofxie, zz002

ONNX Runtime v1.20.2 [QNN-only]

12 Feb 22:57
8608bf0

What's new?

Build System & Packages

  • Merge Windows machine pools for Web CI pipeline to reduce maintenance costs (#23243) - @snnn
  • Update boost URL for React Native CI pipeline (#23281) - @jchen351
  • Move ORT Training pipeline to GitHub actions and enable CodeQL scan for the source code (#22543) - @snnn
  • Move Linux GitHub actions to a dedicated machine pool (#22566) - @snnn
  • Update Apple deployment target to iOS 15.1 and macOS 13.3 (#23308) - @snnn
  • Deprecate macOS 12 in packaging pipeline (#23017) - @mszhanyi
  • Remove net8.0-android MAUI target from MAUI test project (#23607) - @carzh

CUDA EP

  • Fixed a use of numeric_limits that caused a compiler error in Visual Studio 2022 v17.12 Preview 5 (#22738, #22868) - @tianleiwu

QNN EP

  • Enable offloading graph input quantization and graph output dequantization to CPU by default. Improves inference latency by reducing the amount of I/O data copied between CPU and NPU. (#23368) - @adrianlizarraga

ONNX Runtime v1.20.1

21 Nov 22:20
5c1b7cc

What's new?

Packaging

  • Rework the native library usage so that a pre-built ORT native package can be easily used (#22345) - @skottmckay
  • Fix Maven Sha256 Checksum Issue (#22600) - @idiskyle

Contributions

Big thank you to the release manager @yf711, along with @adrianlizarraga, @HectorSVC, @jywu-msft, and everyone else who helped to make this patch release process a smooth one!

ONNX Runtime v1.20.0

01 Nov 18:02
c4fb724

Release Manager: @apsonawane

Announcements

  • All ONNX Runtime Training packages have been deprecated. ORT 1.19.2 was the last release for which onnxruntime-training (PyPI), onnxruntime-training-cpu (PyPI), Microsoft.ML.OnnxRuntime.Training (Nuget), onnxruntime-training-c (CocoaPods), onnxruntime-training-objc (CocoaPods), and onnxruntime-training-android (Maven Central) were published.
  • ONNX Runtime packages will stop supporting Python 3.8 and Python 3.9. This decision aligns with NumPy Python version support. To continue using ORT with Python 3.8 and Python 3.9, you can use ORT 1.19.2 and earlier.
  • ONNX Runtime 1.20 CUDA packages will include new dependencies that were not required in 1.19 packages. The following dependencies are new: libcudnn_adv.so.9, libcudnn_cnn.so.9, libcudnn_engines_precompiled.so.9, libcudnn_engines_runtime_compiled.so.9, libcudnn_graph.so.9, libcudnn_heuristic.so.9, libcudnn_ops.so.9, libnvrtc.so.12, and libz.so.1.

Build System & Packages

  • Python 3.13 support is included in PyPI packages.
  • ONNX 1.17 support will be delayed until a future release, but the ONNX version used by ONNX Runtime has been patched to include a shape inference change to the Einsum op.
  • DLLs in the Maven build are now digitally signed (fix for issue reported here).
  • (Experimental) vcpkg support added for the CPU EP. The DML EP does not yet support vcpkg, and other EPs have not been tested.

Core

  • MultiLoRA support.
  • Reduced memory utilization.
    • Fixed alignment that was causing mmap to fail for external weights.
    • Eliminated double allocations when deserializing external weights.
    • Added ability to serialize pre-packed weights so that they don’t cause an increase in memory utilization when the model is loaded.
  • Support for bfloat16 and float8 data types in the Python I/O binding API.

Performance

  • INT4 quantized embedding support on CPU and CUDA EPs.
  • Miscellaneous performance improvements and bug fixes.

EPs

CPU

  • FP16 support for MatMulNbits, Clip, and LayerNormalization ops.

CUDA

  • cuDNN frontend integration for convolution operators.
  • Added support for cuDNN Flash Attention and Lean Attention in the MultiHeadAttention op.

QNN

  • QNN HTP support for weight sharing across multiple ORT inference sessions. (See ORT QNN EP documentation for more information.)
  • Support for QNN SDK 2.27.

OpenVINO

  • Added support for up to OpenVINO 2024.4.1.
  • Compile-time memory optimizations.
  • Enhancement of ORT EPContext Session option for optimized first inference latency.
  • Added remote tensors to ensure direct memory access for inferencing on NPU.

Mobile

  • Improved Android QNN support, including a pre-built Maven package and various performance improvements.
  • FP16 support for ML Program models with CoreML EP.
  • FP16 XNNPACK kernels to provide a fallback option if CoreML is not available at runtime.
  • Initial support for using the native WebGPU EP on Android and iOS. Note: The set of initial operators is limited, and the code is available from the main branch, not ORT 1.20 packages. See #22591 for more information.

Web

  • Quantized embedding support.
  • On-demand weight loading support (reduces Wasm32 heap usage and enables 8B-parameter LLMs).
  • Integrated Intel GPU performance improvements.
  • Opset-21 support (Reshape, Shape, Gelu).

GenAI

  • MultiLoRA support.
  • Generations can now be terminated mid-loop.
  • Logit soft capping support in Group Query Attention (GQA).
  • Additional model support, including Phi-3.5 Vision Multi-Frame, ChatGLM3, and Nemotron-Mini.
  • Python package now available for Mac.
  • Mac / iOS now available in NuGet packages.

Full release notes for ONNX Runtime generate() API v0.5.0 can be found here.

Extensions

  • Tokenization performance improvements.
  • Support for latest Hugging Face tokenization JSON format (transformers>=4.45).
  • Unigram tokenization model support.
  • OpenCV dependency removed from C API build.

Full release notes for ONNX Runtime Extensions v0.13 can be found here.

Olive

  • Olive command line interface (CLI) now available, with support for executing well-defined, concrete workflows without manually creating or editing configs.
  • Additional improvements, including support for YAML-based workflow configs, streamlined DataConfig management, simplified workflow configuration, and more.
  • Llama and Phi-3 model updates, including an updated MultiLoRA example using the ORT generate() API.

Full release notes for Olive v0.7.0 can be found here.

Contributors

Big thank you to the release manager @apsonawane, as well as @snnn, @jchen351, @sheetalarkadam, and everyone else who made this release possible!

Tianlei Wu, Yi Zhang, Yulong Wang, Scott McKay, Edward Chen, Adrian Lizarraga, Wanming Lin, Changming Sun, Dmitri Smirnov, Jian Chen, Jiajia Qin, Jing Fang, George Wu, Caroline Zhu, Hector Li, Ted Themistokleous, mindest, Yang Gu, jingyanwangms, liqun Fu, Adam Pocock, Patrice Vignola, Yueqing Zhang, Prathik Rao, Satya Kumar Jandhyala, Sumit Agarwal, Xu Xing, aciddelgado, duanshengliu, Guenther Schmuelling, Kyle, Ranjit Ranjan, Sheil Kumar, Ye Wang, kunal-vaishnavi, mingyueliuh, xhcao, zz002, 0xdr3dd, Adam Reeve, Arne H Juul, Atanas Dimitrov, Chen Feiyue, Chester Liu, Chi Lo, Erick Muñoz, Frank Dong, Jake Mathern, Julius Tischbein, Justin Chu, Xavier Dupré, Yifan Li, amarin16, anujj, chenduan-amd, saurabh, sfatimar, sheetalarkadam, wejoncy, Akshay Sonawane, AlbertGuan9527, Bin Miao, Christian Bourjau, Claude, Clément Péron, Emmanuel, Enrico Galli, Fangjun Kuang, Hann Wang, Indy Zhu, Jagadish Krishnamoorthy, Javier Martinez, Jeff Daily, Justin Beavers, Kevin Chen, Krishna Bindumadhavan, Lennart Hannink, Luis E. P., Mauricio A Rovira Galvez, Michael Tyler, PARK DongHa, Peishen Yan, PeixuanZuo, Po-Wei (Vincent), Pranav Sharma, Preetha Veeramalai, Sophie Schoenmeyer, Vishnudas Thaniel S, Xiang Zhang, Yi-Hong Lyu, Yufeng Li, goldsteinn, mcollinswisc, mguynn-intc, mingmingtasd, raoanag, shiyi, stsokolo, vraspar, wangshuai09

Full changelog: v1.19.2...v1.20.0

ONNX Runtime v1.19.2

04 Sep 19:33
ffceed9

Announcements

  • ORT 1.19.2 is a small patch release, fixing some broken workflows and introducing bug fixes.

Build System & Packages

  • Fixed the signing of native DLLs.
  • Disabled absl symbolize in Windows Release build to avoid dependency on dbghelp.dll.

Training

  • Restored support for CUDA compute capability 7.0 and 7.5 with CUDA 12, and 6.0 and 6.1 with CUDA 11.
  • Several fixes for training CI pipelines.

Mobile

  • Fixed ArgMaxOpBuilder::AddToModelBuilderImpl() nullptr Node access for CoreML EP.

Generative AI

  • Added CUDA kernel for Phi3 MoE.
  • Added smooth softmax support in CUDA and CPU kernels for the GroupQueryAttention operator.
  • Fixed number of splits calculations in GroupQueryAttention CUDA operator.
  • Enabled causal support in the MultiHeadAttention CUDA operator.

Contributors

@prathikr, @mszhanyi, @edgchen1, @tianleiwu, @wangyems, @aciddelgado, @mindest, @snnn, @baijumeswani, @MaanavD

Thanks to everyone who helped ship this release smoothly!

Full Changelog: v1.19.0...v1.19.2

ONNX Runtime v1.19.0

19 Aug 18:44
26250ae

Announcements

  • Note that the wrong commit was initially tagged with v1.19.0. The final commit has since been correctly tagged: 26250ae. This shouldn't affect much, but sorry for the inconvenience!

Build System & Packages

  • NumPy 2.x support has been added
  • Qualcomm SDK has been upgraded to 2.25
  • ONNX has been upgraded from 1.16 → 1.16.1
  • Default GPU packages use CUDA 12.x and cuDNN 9.x (previously CUDA 11.x/cuDNN 8.x). CUDA 11.x/cuDNN 8.x packages have been moved to the aiinfra VS feed.
  • TensorRT 10.2 support added
  • Introduced Java CUDA 12 packages on Maven.
  • Discontinued support for Xamarin. (Xamarin reached EOL on May 1, 2024)
  • Discontinued support for macOS 11, raising the minimum supported macOS version to 12. (macOS 11 reached EOL in September 2023)
  • Discontinued support for iOS 12, raising the minimum supported iOS version to 13.

Performance

  • Added QDQ support for INT4 quantization in CPU and CUDA Execution Providers
  • Implemented FlashAttention on CPU to improve performance for GenAI prompt cases
  • Improved INT4 performance on CPU (X64, ARM64) and NVIDIA GPUs

Execution Providers

  • TensorRT

    • Updated to support TensorRT 10.2
    • Removed calls to deprecated APIs
    • Enabled refittable embedded engines when the ONNX model is provided as a byte stream
  • CUDA

    • Upgraded cutlass to 3.5.0 for performance improvement of memory efficient attention.
    • Updated MultiHeadAttention and Attention operators to be thread-safe.
    • Added sdpa_kernel provider option to choose the kernel for Scaled Dot-Product Attention (see the sketch after this list).
    • Expanded op support - Tile (bf16)
  • CPU

    • Expanded op support - GroupQueryAttention, SparseAttention (for Phi-3 small)
  • QNN

    • Updated to support QNN SDK 2.25
    • Expanded op support - HardSigmoid, ConvTranspose 3d, Clip (int32 data), Matmul (int4 weights), Conv (int4 weights), prelu (fp16)
    • Expanded fusion support – Conv + Clip/Relu fusion
  • OpenVINO

    • Added support for OpenVINO 2024.3
    • Support for enabling EpContext using session options
  • DirectML

    • Updated DirectML from 1.14.1 → 1.15.1
    • Updated ONNX opset from 17 → 20
    • Opset 19 and Opset 20 are supported with known caveats:
      • Gridsample 20: 5d not supported
      • DeformConv not supported
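A hedged sketch of the sdpa_kernel CUDA EP option mentioned above; the value shown is illustrative, and the accepted values are listed in the CUDA EP documentation:

```python
import onnxruntime as ort

# Select a specific Scaled Dot-Product Attention kernel via the CUDA EP option.
session = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=[("CUDAExecutionProvider", {"sdpa_kernel": "1"}),  # illustrative value
               "CPUExecutionProvider"],
)
```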

Web

  • Updated JavaScript packaging to align with best practices; note that this introduces slight incompatibilities for apps that bundle onnxruntime-web
  • Improved CPU operator coverage for WebNN (now supported by Chrome)

Training

  • No specific updates

GenAI

  • Support for new models: Qwen, Llama 3.1, Gemma 2, and Phi-3 Small
  • Support for building quantized models with the AWQ and GPTQ methods
  • Performance improvements for Intel and Arm CPUs
  • Packaging and language bindings
    • Added Java bindings (build from source)
    • Separated OnnxRuntime.dll and directml.dll out of the GenAI package to improve usability
    • Published packages for Windows Arm
    • Support for Android (build from source)
  • Bug fixes, such as the long-prompt correctness issue for Phi-3.

Extensions

  • Added C APIs for language, vision and audio processors including new FeatureExtractor for Whisper
  • Support for Phi-3 Small Tokenizer and new OpenAI tiktoken format for fast loading of BPE tokenizers
  • Added new CUDA custom operators such as MulSigmoid, Transpose2DCast, ReplaceZero, AddSharedInput and MulSharedInput
  • Enhanced Custom Op Lite API on GPU and fused kernels for DORT
  • Bug fixes, including null bos_token for Qwen2 tokenizer and SentencePiece converted FastTokenizer issue on non-ASCII characters, as well as necessary updates for MSVC 19.40 and numpy 2.0 release

Contributors

Changming Sun, Baiju Meswani, Scott McKay, Edward Chen, Jian Chen, Wanming Lin, Tianlei Wu, Adrian Lizarraga, Chester Liu, Yi Zhang, Yulong Wang, Hector Li, kunal-vaishnavi, pengwa, aciddelgado, Yifan Li, Xu Xing, Yufeng Li, Patrice Vignola, Yueqing Zhang, Jing Fang, Chi Lo, Dmitri Smirnov, mingyueliuh, cloudhan, Yi-Hong Lyu, Ye Wang, Ted Themistokleous, Guenther Schmuelling, George Wu, mindest, liqun Fu, Preetha Veeramalai, Justin Chu, Xiang Zhang, zz002, vraspar, kailums, guyang3532, Satya Kumar Jandhyala, Rachel Guo, Prathik Rao, Maximilian Müller, Sophie Schoenmeyer, zhijiang, maggie1059, ivberg, glen-amd, aamajumder, Xavier Dupré, Vincent Wang, Suryaprakash Shanmugam, Sheil Kumar, Ranjit Ranjan, Peishen Yan, Frank Dong, Chen Feiyue, Caroline Zhu, Adam Louly, Ștefan Talpalaru, zkep, winskuo-quic, wejoncy, vividsnow, vivianw-amd, moyo1997, mcollinswisc, jingyanwangms, Yang Gu, Tom McDonald, Sunghoon, Shubham Bhokare, RuomeiMS, Qingnan Duan, PeixuanZuo, Pavan Goyal, Nikolai Svakhin, KnightYao, Jon Campbell, Johan MEJIA, Jake Mathern, Hans, Hann Wang, Enrico Galli, Dwayne Robinson, Clément Péron, Chip Kerchner, Chen Fu, Carson M, Adam Reeve, Adam Pocock.

Big thank you to everyone who contributed to this release!

Full Changelog: v1.18.1...v1.19.0

ONNX Runtime v1.18.1

28 Jun 00:29
3871274

What's new?

Announcements:

  • ONNX Runtime Python packages now have numpy dependency >=1.21.6, <2.0. Support for numpy 2.0 will be added in a future release.
  • CUDA 12.x ONNX Runtime GPU packages are now built against cuDNN 9.x (1.18.0 packages previously depended on cuDNN 8.x). CUDA 11.x ONNX Runtime GPU packages continue to depend on cuDNN 8.x.
  • Windows packages require installation of Microsoft Visual C++ Redistributable Runtime 14.38 or newer.

TensorRT EP:

  • TensorRT Weightless API integration.
  • Support for TensorRT hardware compatible engines.
  • Support for INT64 types in TensorRT constant layer calibration.
  • Now using latest commit of onnx-tensorrt parser, which includes several issue fixes.
  • Additional TensorRT support and performance improvements.

Packages:

  • Publish CUDA 12 Java packages to Azure DevOps feed.
  • Various packaging pipeline fixes.

This patch release also features various other bug fixes, including a CUDA 12.5 build error fix.

Big thank you to @yf711 for driving this release as the release manager and to all our contributors!

@yf711 @jchen351 @mszhanyi @snnn @wangyems @jywu-msft @skottmckay @chilo-ms @moraxu @kevinch-nv @pengwa @wejoncy @pranavsharma @Craigacp @jslhcl @adrianlizarraga @inisis @jeffbloo @mo-ja @kunal-vaishnavi @sumitsays @neNasko1 @yufenglee @dhruvbird @wangshuai09 @xiaoyu-work @axinging @yuslepukhin @YUNQIUGUO @shubhambhokare1 @fs-eire @afantino951 @tboby @HectorSVC @baijumeswani

ONNX Runtime v1.18.0

21 May 00:28
4573740

Announcements

  • Windows ARM32 support has been dropped at the source code level.
  • Python version >=3.8 is now required for build.bat/build.sh (previously >=3.7). Note: If you have Python version <3.8, you can bypass the tools and use CMake directly.
  • The onnxruntime-mobile Android package and onnxruntime-mobile-c/onnxruntime-mobile-objc iOS cocoapods are being deprecated. Please use the onnxruntime-android Android package, and onnxruntime-c/onnxruntime-objc cocoapods, which support ONNX and ORT format models and all operators and data types. Note: If you require a smaller binary size, a custom build is required. See details on creating a custom Android or iOS package on Custom build | onnxruntime.

Build System & Packages

  • CoreML execution provider now depends on coremltools.
  • Flatbuffers has been upgraded from 1.12.0 → 23.5.26.
  • ONNX has been upgraded from 1.15 → 1.16.
  • EMSDK has been upgraded from 3.1.51 → 3.1.57.
  • Intel neural_speed library has been upgraded from v0.1.1 → v0.3 with several important bug fixes.
  • There is a new onnxruntime_CUDA_MINIMAL CMake option for building ONNX Runtime CUDA execution provider without any operations apart from memcpy ops.
  • Added support for Catalyst for macOS build support.
  • Added initial support for RISC-V and three new build options for it: --rv64, --riscv_toolchain_root, and --riscv_qemu_path.
  • Now you can build TensorRT EP with protobuf-lite instead of the full version of protobuf.
  • Some security-related compile/link flags have been moved from the default setting → new build option: --use_binskim_compliant_compile_flags. Note: All our release binaries are built with this flag, but when building ONNX Runtime from source, this flag defaults to OFF.
  • Windows ARM64 build now depends on PyTorch CPUINFO library.
  • Windows OneCore build now uses “Reverse forwarding” apisets instead of “Direct forwarding”, so onnxruntime.dll in our Nuget packages will depend on kernel32.dll. Note: Windows systems without kernel32.dll need to have reverse forwarders (see API set loader operation - Win32 apps | Microsoft Learn for more information).

Core

  • Added ONNX 1.16 support.
  • Added additional optimizations related to Dynamo-exported models.
  • Improved testing infrastructure for EPs developed as shared libraries.
  • Exposed Reserve() in OrtAllocator to allow custom allocators to work when session.use_device_allocator_for_initializers is specified (see the sketch after this list).
  • Reduced lock contention due to memory allocations.
  • Improved session creation time (graph and graph transformer optimizations).
  • Added new SessionOptions config entry to disable specific transformers and rules.
  • [C# API] Exposed SessionOptions.DisablePerSessionThreads to allow sharing of threadpool between sessions.
  • [Java API] Added CUDA 12 Java support.
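For the Reserve()/initializers item above, a minimal sketch of setting the named session config entry from Python; the model path and provider list are placeholders:

```python
import onnxruntime as ort

so = ort.SessionOptions()
# Route initializer allocations through the device allocator (and its Reserve()
# path) instead of the arena.
so.add_session_config_entry("session.use_device_allocator_for_initializers", "1")

session = ort.InferenceSession("model.onnx", sess_options=so,
                               providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
```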

Performance

  • Improved 4bit quant support:
    • Added HQQ quantization support to improve accuracy.
    • Implemented general GEMM kernel and improved GEMV kernel performance on GPU.
    • Improved GEMM kernel quality and performance on x64.
    • Implemented general GEMM kernel and improved GEMV performance on ARM64.
  • Improved MultiheadAttention performance on CPU.

Execution Providers

  • TensorRT

    • Added support for TensorRT 10.
    • Finalized support for DDS ops.
    • Added Python support for user provided CUDA stream.
    • Fixed various bugs.
  • CUDA

    • Added support for multiple CUDA graphs.
    • Added a provider option to disable TF32 (see the sketch after this list).
    • Added Python support for user-provided CUDA streams.
    • Extended MoE to support Tensor Parallelism and int4 quantization.
    • Fixed bugs in the BatchNorm and TopK kernels.
  • QNN

    • Added support for up to QNN SDK 2.22.
    • Upgraded support from A16W8 → mixed 8/16-bit precision configurability per layer.
    • Added fp16 execution support via enable_htp_fp16 option.
    • Added multiple partition support for QNN context binary.
    • Expanded operator support and fixed various bugs.
    • Added support for per-channel quantized weights for Conv.
    • Integration with Qualcomm’s AIHub.
  • OpenVINO

    • Added support for up to OpenVINO 2024.1.
    • Added support for importing pre-compiled blob as EPContext blob.
    • Separated device and precision as inputs by removing support for device_id in provider options and adding precision as separate CLI option.
    • Deprecated CPU_FP32 and GPU_FP32 terminology and introduced CPU and GPU terminology.
    • AUTO:GPU,CPU will now create only a GPU blob, not a CPU blob.
  • DirectML

    • Additional ONNX operator support: Resize-18 and Resize-19, Col2Im-18, IsNaN-20, IsInf-20, and ReduceMax-20.
    • Additional contrib op support: SimplifiedLayerNormalization, SkipSimplifiedLayerNormalization, QLinearAveragePool, MatMulIntegerToFloat, GroupQueryAttention, DynamicQuantizeMatMul, and QAttention.
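For the TF32 item above, a hedged sketch assuming the provider option is named use_tf32, as documented for the CUDA EP; the model path is a placeholder:

```python
import onnxruntime as ort

# Disable TF32 math on Ampere+ GPUs so float32 matmuls/convs run at full precision.
session = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=[("CUDAExecutionProvider", {"use_tf32": "0"}),
               "CPUExecutionProvider"],
)
```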

Mobile

  • Improved performance of ARM64 4-bit quantization.
  • Added support for building with QNN on Android.
  • Added MacCatalyst support.
  • Added visionOS support.
  • Added initial support for creating ML Program format CoreML models.
  • Added support for 1D Conv and ConvTranspose to XNNPACK EP.

Web

  • Added WebNN EP preview.
  • Improved WebGPU performance (MHA, ROE).
  • Added more WebGPU and WebNN examples.
  • Increased generative model support.
  • Optimized Buffer management to reduce memory footprint.

Training

  • Large Model Training
    • Added optimizations for Dynamo-exported models.
    • Added Mixtral integration using ORT backend.
  • On-Device Training
    • Added support for models >2GB to enable SLM training on edge devices.

GenAI

  • Added additional model support: Phi-3, Gemma, LLama-3.
  • Added DML EP support.
  • Improved tokenizer quality.
  • Improved sampling method and ORT model performance.

Extensions

  • Created Java packaging pipeline and published to Maven repository.
  • Added support for conversion of Huggingface FastTokenizer into ONNX custom operator.
  • Unified the SentencePiece tokenizer with other Byte Pair Encoding (BPE) based tokenizers.
  • Fixed Whisper large model pre-processing bug.
  • Enabled eager execution for custom operator and refactored the header file structure.

Contributors

Yi Zhang, Yulong Wang, Adrian Lizarraga, Changming Sun, Scott McKay, Tianlei Wu, Peng Wang, Hector Li, Edward Chen, Dmitri Smirnov, Patrice Vignola, Guenther Schmuelling, Ye Wang, Chi Lo, Wanming Lin, Xu Xing, Baiju Meswani, Peixuan Zuo, Vincent Wang, Markus Tavenrath, Lei Cao, Kunal Vaishnavi, Rachel Guo, Satya Kumar Jandhyala, Sheil Kumar, Yifan Li, Jiajia Qin, Maximilian Müller, Xavier Dupré, Yi-Hong Lyu, Yufeng Li, Alejandro Cid Delgado, Adam Louly, Prathik Rao, wejoncy, Zesong Wang, Adam Pocock, George Wu, Jian Chen, Justin Chu, Xiaoyu, guyang3532, Jingyan Wang, raoanag, Satya Jandhyala, Hariharan Seshadri, Jiajie Hu, Sumit Agarwal, Peter Mcaughan, Zhijiang Xu, Abhishek Jindal, Jake Mathern, Jeff Bloomfield, Jeff Daily, Linnea May, Phoebe Chen, Preetha Veeramalai, Shubham Bhokare, Wei-Sheng Chin, Yang Gu, Yueqing Zhang, Guangyun Han, inisis, ironman, Ivan Berg, Liqun Fu, Yu Luo, Rui Ren, Sahar Fatima, snadampal, wangshuai09, Zhenze Wang, Andrew Fantino, Andrew Grigorev, Ashwini Khade, Atanas Dimitrov, AtomicVar, Belem Zhang, Bowen Bao, Chen Fu, Dhruv Matani, Fangrui Song, Francesco, Frank Dong, Hans Chen, He Li, Heflin Stephen Raj, Jambay Kinley, Masayoshi Tsutsui, Matttttt, Nanashi, Phoebe Chen, Pranav Sharma, Segev Finer, Sophie Schoenmeyer, TP Boudreau, Ted Themistokleous, Thomas Boby, Xiang Zhang, Yongxin Wang, Zhang Lei, aamajumder, danyue, Duansheng Liu, enximi, fxmarty, kailums, maggie1059, mindest, mo-ja, moyo1997

Big thank you to everyone who contributed to this release!

ONNX Runtime v1.17.3

18 Apr 15:46
56b660f

What's new?

General:

  • Update copying API header files to make Linux logic consistent with Windows (#19736) - @mszhanyi
  • Pin ONNX version to fix DML and Python packaging pipeline exceptions (#20073) - @mszhanyi

Build System & Packages:

  • Fix minimal build with training APIs enabled bug affecting Apple framework (#19858) - @edgchen1

Windows:

  • Fix Windows memory mapping bug affecting some larger models (#19623) - @yufenglee

Kernel Optimizations:

  • Fix GQA and Rotary Embedding bugs affecting some models (#19801, #19874) - @aciddelgado
  • Update replacement of MultiHeadAttention (MHA) and GroupQueryAttention (GQA) (#19882) - @kunal-vaishnavi
  • Add support for packed QKV input and Rotary Embedding with sm<80 using Memory Efficient Attention kernel (#20012) - @aciddelgado

This patch release also includes additional fixes by @spampana95 and @enximi. Big thank you to all our contributors!

ONNX Runtime v1.17.1

27 Feb 18:34
8f5c79c

This patch release includes the following updates:

General

  • Update thread affinity on server so it is only set with auto affinity (#19318) - @ivberg

Build System and Packages

  • Fix a bug breaking the arm64 build by disabling the __cpuid check, since the intrinsic is not available on arm64 (#19574) - @smk2007

Core

  • Add capturestate / rundown ETW support logging for session and provider options (#19397) - @ivberg
  • Restrict L2 cache core check on Intel devices (#19483) - @smk2007

Performance

  • Optimize KahnsTopologicalSort and PriorityNodeCompare to fix performance degradation in session creation time that was affecting many models (#19475) - @smk2007

EPs

  • Enable DirectML on Windows and CUDA on Linux for Node.js binding (#19274) - @jchen351

Training

  • Reduce onnxruntime-training package size so it can be published on PyPI (#19486) - @baijumeswani
  • Update default std flag used during torch extensions compilation (#19516) - @baijumeswani
  • Add ATen fallback support for bicubic interpolation algorithm (#19380) - @prathikr
