Mirror of https://github.com/pykeio/ort — synced 2026-04-25 16:34:55 +02:00 at commit 079ecb47034ec8188e3a06fc04f49ec28a6499e8
`ort` is a Rust interface for hardware-accelerated inference and training of machine learning models in the Open Neural Network Exchange (ONNX) format.
Based on the now-inactive `onnxruntime-rs` crate, `ort` is primarily a wrapper around Microsoft's ONNX Runtime library, but it also supports other pure-Rust runtimes.
`ort` with ONNX Runtime is fast and supports almost any hardware accelerator you can think of, yet it's light enough to run on your users' devices.
Whether you need to deploy a PyTorch, TensorFlow, Keras, scikit-learn, or PaddlePaddle model on-device or in the datacenter, `ort` has you covered.
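The typical flow — load an exported ONNX model, feed it a tensor, read the outputs — can be sketched as below. This is a minimal, hedged sketch: the builder-style API shown follows `ort`'s 2.x conventions and has changed between versions, and `model.onnx`, the `input` name, and the tensor shape are hypothetical placeholders, so check the current docs before copying.

```rust
// Hedged sketch of inference with ort; API names follow the 2.x style and may
// differ between versions. "model.onnx" and "input" are hypothetical.
use ort::session::Session;
use ort::value::Tensor;

fn main() -> ort::Result<()> {
    // Load an ONNX model exported from PyTorch, TensorFlow, etc.
    let mut session = Session::builder()?.commit_from_file("model.onnx")?;

    // Build a 1x4 f32 input tensor (shape and data are illustrative).
    let input = Tensor::from_array(([1usize, 4], vec![1.0_f32, 2.0, 3.0, 4.0]))?;

    // Run the model; input names must match the model's graph.
    let outputs = session.run(ort::inputs!["input" => input])?;
    println!("{} output(s)", outputs.len());
    Ok(())
}
```

Session creation, not inference, is usually where execution providers (CUDA, TensorRT, CoreML, etc.) are configured, via additional builder methods on `Session::builder()`.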
📖 Documentation
🤔 Support
🌠 Backers
💖 FOSS projects using ort
Open a PR to add your project here 🌟
- Text Embeddings Inference (TEI) uses `ort` to deliver high-performance ONNX Runtime inference for text embedding models.
- Magika uses `ort` for neural network-based file type detection.
- retto uses `ort` for reliable, fast ONNX inference of PaddleOCR models on Desktop and WASM platforms.
- edge-transformers uses `ort` for accelerated transformer model inference at the edge.
- sbv2-api is a fast implementation of Style-BERT-VITS2 text-to-speech using `ort`.
- BoquilaHUB uses `ort` for local AI deployment in biodiversity conservation efforts.
- CamTrap Detector uses `ort` to detect animals, humans, and vehicles in trail camera imagery.
- Ortex uses `ort` for safe ONNX Runtime bindings in Elixir.
- oar-ocr is a comprehensive OCR library, built in Rust with `ort` for efficient inference.
- FastEmbed-rs uses `ort` for generating vector embeddings and reranking locally.
- Ahnlich uses `ort` to power their AI proxy for semantic search applications.
- Murmure uses `ort` as its core engine, leveraging NVIDIA Parakeet to deliver fully local, free, private, and cross-platform speech-to-text enhanced with LLM post-processing.
- Valentinus uses `ort` to provide embedding model inference inside LMDB.
- SilentKeys uses `ort` for fast, on-device real-time dictation with NVIDIA Parakeet and Silero VAD.
