Microsoft.ML.OnnxRuntime.Gpu.Windows
1.23.2
Prefix Reserved
.NET CLI: dotnet add package Microsoft.ML.OnnxRuntime.Gpu.Windows --version 1.23.2
Package Manager: NuGet\Install-Package Microsoft.ML.OnnxRuntime.Gpu.Windows -Version 1.23.2
PackageReference: <PackageReference Include="Microsoft.ML.OnnxRuntime.Gpu.Windows" Version="1.23.2" />
Central Package Management (Directory.Packages.props): <PackageVersion Include="Microsoft.ML.OnnxRuntime.Gpu.Windows" Version="1.23.2" />
Central Package Management (project file): <PackageReference Include="Microsoft.ML.OnnxRuntime.Gpu.Windows" />
Paket CLI: paket add Microsoft.ML.OnnxRuntime.Gpu.Windows --version 1.23.2
Script & Interactive: #r "nuget: Microsoft.ML.OnnxRuntime.Gpu.Windows, 1.23.2"
File-based apps: #:package Microsoft.ML.OnnxRuntime.Gpu.Windows@1.23.2
Cake Addin: #addin nuget:?package=Microsoft.ML.OnnxRuntime.Gpu.Windows&version=1.23.2
Cake Tool: #tool nuget:?package=Microsoft.ML.OnnxRuntime.Gpu.Windows&version=1.23.2
About

ONNX Runtime is a cross-platform machine-learning inferencing accelerator.
ONNX Runtime can enable faster customer experiences and lower costs, supporting models from deep-learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine-learning libraries such as scikit-learn, LightGBM, and XGBoost. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable, alongside graph optimizations and transforms.
Learn more at the ONNX Runtime website: https://onnxruntime.ai
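As a quick orientation, the sketch below shows one way to run a model on an NVIDIA GPU from C# with this package. The model path ("model.onnx"), input name ("input"), and tensor shape are placeholders for your own model, not values defined by the package.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

class Program
{
    static void Main()
    {
        // Request the CUDA execution provider on GPU device 0; operators it
        // does not support fall back to the built-in CPU execution provider.
        using var options = new SessionOptions();
        options.AppendExecutionProvider_CUDA(0);

        // "model.onnx", "input", and the 1x3x224x224 shape are placeholders.
        using var session = new InferenceSession("model.onnx", options);

        var input = new DenseTensor<float>(new[] { 1, 3, 224, 224 });
        var feeds = new List<NamedOnnxValue>
        {
            NamedOnnxValue.CreateFromTensor("input", input)
        };

        // Run inference and report each output's element count.
        using var results = session.Run(feeds);
        foreach (var r in results)
            Console.WriteLine($"{r.Name}: {r.AsTensor<float>().Length} values");
    }
}
```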
NuGet Packages
ONNX Runtime Native packages
Microsoft.ML.OnnxRuntime
- Native libraries for all supported platforms
- CPU Execution Provider
- CoreML Execution Provider on macOS/iOS
- XNNPACK Execution Provider on Android/iOS
Microsoft.ML.OnnxRuntime.Gpu
- Windows and Linux
- TensorRT Execution Provider
- CUDA Execution Provider
- CPU Execution Provider
Microsoft.ML.OnnxRuntime.DirectML
- Windows
- DirectML Execution Provider
- CPU Execution Provider
Microsoft.ML.OnnxRuntime.QNN
- 64-bit Windows
- QNN Execution Provider
- CPU Execution Provider
Intel.ML.OnnxRuntime.OpenVino
- 64-bit Windows
- OpenVINO Execution Provider
- CPU Execution Provider
Other packages
Microsoft.ML.OnnxRuntime.Managed
- C# language bindings
Microsoft.ML.OnnxRuntime.Extensions
- Custom operators for pre/post processing on all supported platforms.
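Which execution providers can be enabled depends on the package referenced above; they are requested through SessionOptions before the session is created. A minimal sketch assuming this Gpu.Windows package on a machine with the TensorRT libraries installed ("model.onnx" is a placeholder path); providers are tried in the order they are appended, with the built-in CPU provider as the final fallback:

```csharp
using Microsoft.ML.OnnxRuntime;

// Providers are consulted in registration order when the graph is partitioned.
using var options = new SessionOptions();
options.AppendExecutionProvider_Tensorrt(0); // TensorRT EP on GPU 0 (requires TensorRT libraries)
options.AppendExecutionProvider_CUDA(0);     // CUDA EP takes nodes TensorRT does not claim
// Any remaining nodes run on the built-in CPU execution provider.

using var session = new InferenceSession("model.onnx", options); // placeholder model path
```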
Dependencies
Learn more about Target Frameworks and .NET Standard.
.NETCoreApp 0.0
- Microsoft.ML.OnnxRuntime.Managed (>= 1.23.2)
.NETFramework 0.0
- Microsoft.ML.OnnxRuntime.Managed (>= 1.23.2)
.NETStandard 0.0
- Microsoft.ML.OnnxRuntime.Managed (>= 1.23.2)
NuGet packages (4)
Showing the top 4 NuGet packages that depend on Microsoft.ML.OnnxRuntime.Gpu.Windows:
| Package | Description |
|---|---|
| Microsoft.ML.OnnxRuntime.Gpu | This package contains native shared library artifacts for all supported platforms of ONNX Runtime. |
| KokoroSharp.GPU.Windows | The Gpu.Windows runtime for KokoroSharp: an inference engine for Kokoro TTS with ONNX Runtime, enabling fast and flexible local text-to-speech (fp/quantized) purely via C#. It features segment streaming, voice mixing, linear job scheduling, and optional playback. |
| TensorStack.Providers.CUDA | CUDA GPU backend for ONNX tensor computation. |
| YoloSharpDeploGPU | YOLO model inference with ONNX (currently supports object classification). |
GitHub repositories (1)
Showing the top 1 popular GitHub repository that depends on Microsoft.ML.OnnxRuntime.Gpu.Windows:
| Repository | Description |
|---|---|
| Lyrcaxis/KokoroSharp | Fast local TTS inference engine in C# with ONNX Runtime. Multi-speaker, multi-platform and multilingual. Integrate into your .NET projects using a plug-and-play NuGet package, complete with all voices. |
Version History
| Version | Downloads | Last Updated |
|---|---|---|
| 1.23.2 | 5,111 | 10/25/2025 |
| 1.23.1 | 4,859 | 10/8/2025 |
| 1.23.0 | 3,817 | 9/26/2025 |
| 1.23.0-rc.2 | 174 | 9/21/2025 |
| 1.22.1 | 38,728 | 7/1/2025 |
| 1.22.0 | 15,015 | 5/9/2025 |
| 1.21.2 | 5,681 | 4/24/2025 |
| 1.21.1 | 1,595 | 4/21/2025 |
| 1.21.0 | 39,429 | 3/8/2025 |
| 1.20.1 | 114,895 | 11/21/2024 |
| 1.20.0 | 29,136 | 10/31/2024 |
| 1.19.2 | 136,065 | 9/3/2024 |
| 1.19.1 | 18,964 | 8/21/2024 |
| 1.19.0 | 15,521 | 8/17/2024 |
| 1.19.0-dev-20240812-1833-cc... | 1,855 | 8/13/2024 |
| 1.18.1 | 42,889 | 6/27/2024 |
| 1.18.0 | 28,504 | 5/17/2024 |
| 1.17.3 | 48,053 | 4/10/2024 |
| 1.17.1 | 40,575 | 2/25/2024 |
| 1.17.0 | 33,949 | 1/31/2024 |
Release Def:
Branch: refs/heads/rel-1.23.2
Commit: a83fc4d58cb48eb68890dd689f94f28288cf2278
Build: https://aiinfra.visualstudio.com/Lotus/_build/results?buildId=974988