
`ort` is an open-source Rust binding for ONNX Runtime.

> These docs are for the latest alpha version of `ort`, `2.0.0-rc.9`. This version is production-ready (just not API-stable) and we recommend new & existing projects use it.
`ort` makes it easy to deploy your machine learning models to production via ONNX Runtime, a hardware-accelerated inference engine. With `ort` + ONNX Runtime, you can run almost any ML model (including ResNet, YOLOv8, BERT, LLaMA) on almost any hardware, often far faster than PyTorch, and with the added bonus of Rust's efficiency.

ONNX is an interoperable neural network specification. Your ML framework of choice (PyTorch, TensorFlow, Keras, PaddlePaddle, etc.) turns your model into an ONNX graph comprised of basic operations like `MatMul` or `Add`. This graph can then be converted into a model in another framework, or inferenced directly with ONNX Runtime.

Converting a neural network to a graph representation like ONNX opens the door to more optimizations and broader acceleration hardware support. ONNX Runtime can significantly improve the inference speed/latency of most models and enable acceleration with NVIDIA CUDA & TensorRT, Intel OpenVINO, Qualcomm QNN, Huawei CANN, and much more.
`ort` is the Rust gateway to ONNX Runtime, allowing you to infer your ONNX models via an easy-to-use and ergonomic API. Many commercial, open-source, & research projects use `ort` in some pretty serious production scenarios to boost inference performance:
- Twitter uses `ort` in part of their recommendations system, serving hundreds of millions of requests a day.
- Bloop's semantic code search feature is powered by `ort`.
- SurrealDB's powerful SurrealQL query language supports calling ML models, including ONNX graphs through `ort`.
- Google's Magika file type detection library is powered by `ort`.
- Wasmtime, an open-source WebAssembly runtime, supports ONNX inference for the WASI-NN standard via `ort`.
- `rust-bert` implements many ready-to-use NLP pipelines in Rust, à la Hugging Face Transformers, with both `tch` & `ort` backends.
## Getting started

### Add `ort` to your Cargo.toml
If you have a supported platform (and you probably do), installing `ort` couldn't be any simpler! Just add it to your Cargo dependencies:

```toml
[dependencies]
ort = "=2.0.0-rc.9"
```
### Convert your model
Your model will need to be converted to an ONNX graph before you can use it.
- The awesome folks at Hugging Face have a guide to export 🤗 Transformers models to ONNX with 🤗 Optimum.
- For any PyTorch model: `torch.onnx`
- For scikit-learn models: `sklearn-onnx`
- For TensorFlow, Keras, TFLite, & TensorFlow.js: `tf2onnx`
- For PaddlePaddle: `Paddle2ONNX`
### Load your model

Once you've got a model, load it via `ort` by creating a `Session`:
```rust
use ort::session::{builder::GraphOptimizationLevel, Session};

let model = Session::builder()?
    .with_optimization_level(GraphOptimizationLevel::Level3)?
    .with_intra_threads(4)?
    .commit_from_file("yolov8m.onnx")?;
```
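Not sure what your model's input & output names are? A loaded `Session` exposes them, so you can print them before wiring up inference. A minimal sketch, assuming the `inputs`/`outputs` fields & `ValueType` debug output as in recent 2.0 release candidates:

```rust
use ort::session::Session;

fn main() -> ort::Result<()> {
    let session = Session::builder()?.commit_from_file("yolov8m.onnx")?;

    // List each input & output the model declares, along with its type/shape,
    // so you know which names to pass to `run()` and which outputs to extract.
    for input in &session.inputs {
        println!("input:  {} ({:?})", input.name, input.input_type);
    }
    for output in &session.outputs {
        println!("output: {} ({:?})", output.name, output.output_type);
    }
    Ok(())
}
```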
### Perform inference

Preprocess your inputs, then `run()` the session to perform inference.
```rust
let outputs = model.run(ort::inputs!["image" => image]?)?;
let predictions = outputs["output0"].try_extract_tensor::<f32>()?;
// ...
```
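Putting the pieces together, here's a hedged end-to-end sketch. The input name `"image"`, output name `"output0"`, and the 1×3×640×640 shape are model-specific assumptions (mirroring the YOLOv8-style snippet above), and constructing a tensor from an `ndarray` array requires `ort`'s `ndarray` feature plus the `ndarray` crate:

```rust
use ort::session::{builder::GraphOptimizationLevel, Session};
use ort::value::Tensor;

fn main() -> ort::Result<()> {
    let model = Session::builder()?
        .with_optimization_level(GraphOptimizationLevel::Level3)?
        .with_intra_threads(4)?
        .commit_from_file("yolov8m.onnx")?;

    // A dummy NCHW image tensor; real code would fill this with preprocessed pixels.
    let image = Tensor::from_array(ndarray::Array4::<f32>::zeros((1, 3, 640, 640)))?;

    let outputs = model.run(ort::inputs!["image" => image]?)?;
    let predictions = outputs["output0"].try_extract_tensor::<f32>()?;
    println!("output shape: {:?}", predictions.shape());
    Ok(())
}
```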
More examples & tutorials are available in the `ort` repo!

## Next steps
### Unlock more performance with EPs
Use execution providers to enable hardware acceleration in your app and unlock the full power of your GPU or NPU.
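As a sketch of what that looks like, assuming the CUDA execution provider and `ort`'s `cuda` Cargo feature (other EPs follow the same pattern):

```rust
use ort::execution_providers::CUDAExecutionProvider;
use ort::session::Session;

fn main() -> ort::Result<()> {
    // Ask ONNX Runtime to place ops on CUDA; if registration fails (e.g. no GPU
    // is available), ort falls back to the default CPU execution provider.
    let session = Session::builder()?
        .with_execution_providers([CUDAExecutionProvider::default().build()])?
        .commit_from_file("yolov8m.onnx")?;

    let _ = session;
    Ok(())
}
```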
### Show off your project!

We'd love to see what you've made with `ort`! Show off your project in GitHub Discussions or on our Discord.