decahedron1 079ecb4703 fix: one environment (#542)
Create exactly one (1) environment throughout the entire duration of the
process and release it when it is no longer used. Hopefully the last
time I'll ever have to deal with this.

The old `Weak<Environment>` solution was flawed because it would create
another environment after the old one was released, which is for some
reason not allowed.
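The one-environment approach the commit describes can be sketched in plain Rust with a process-wide singleton. Note this is a minimal sketch: `Environment` here is a hypothetical stand-in struct, not ort's actual type, and the real fix in #542 may be structured differently.

```rust
use std::sync::{Arc, OnceLock};

// Hypothetical stand-in for ort's Environment; the real type wraps an
// ONNX Runtime environment handle.
struct Environment {
    id: u32,
}

// Process-wide singleton: initialized on first use, then lives for the
// entire duration of the process instead of being dropped and recreated.
static ENVIRONMENT: OnceLock<Arc<Environment>> = OnceLock::new();

fn get_environment() -> Arc<Environment> {
    Arc::clone(ENVIRONMENT.get_or_init(|| Arc::new(Environment { id: 1 })))
}

fn main() {
    let a = get_environment();
    let b = get_environment();
    // Every caller gets a handle to the same, single environment.
    assert!(Arc::ptr_eq(&a, &b));
    println!("one environment: id={}", a.id);
}
```

Unlike a `Weak<Environment>`, a `OnceLock<Arc<…>>` can never observe the environment as already dropped, so a second environment is never created.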
💖 Sponsored by
Rime.ai: Authentic AI voice models for enterprise.

ort is a Rust interface for performing hardware-accelerated inference & training on machine learning models in the Open Neural Network Exchange (ONNX) format.

Based on the now-inactive onnxruntime-rs crate, ort is primarily a wrapper for Microsoft's ONNX Runtime library, but offers support for other pure-Rust runtimes.

ort with ONNX Runtime is super quick - and it supports almost any hardware accelerator you can think of. Even so, it's light enough to run on your users' devices.

When you need to deploy a PyTorch/TensorFlow/Keras/scikit-learn/PaddlePaddle model either on-device or in the datacenter, ort has you covered.

📖 Documentation

🤔 Support

🌠 Backers

💖 FOSS projects using ort

Open a PR to add your project here 🌟

  • Text Embeddings Inference (TEI) uses ort to deliver high-performance ONNX Runtime inference for text embedding models.
  • Magika uses ort for neural network-based file type detection.
  • retto uses ort for reliable, fast ONNX inference of PaddleOCR models on desktop and WASM platforms.
  • edge-transformers uses ort for accelerated transformer model inference at the edge.
  • sbv2-api is a fast implementation of Style-BERT-VITS2 text-to-speech using ort.
  • BoquilaHUB uses ort for local AI deployment in biodiversity conservation efforts.
  • CamTrap Detector uses ort to detect animals, humans and vehicles in trail camera imagery.
  • Ortex uses ort for safe ONNX Runtime bindings in Elixir.
  • oar-ocr is a comprehensive OCR library built in Rust, using ort for efficient inference.
  • FastEmbed-rs uses ort for generating vector embeddings and reranking locally.
  • Ahnlich uses ort to power their AI proxy for semantic search applications.
  • Murmure uses ort as its core engine, leveraging NVIDIA Parakeet to deliver fully local, free, private, and cross-platform speech-to-text enhanced with LLM post-processing.
  • Valentinus uses ort to provide embedding model inference inside LMDB.
  • SilentKeys uses ort for fast, on-device real-time dictation with NVIDIA Parakeet and Silero VAD.
Mirrored from GitHub