OpenVINO
| OpenVINO | |
|---|---|
| Developer | Intel Corporation |
| Initial release | May 16, 2018 |
| Stable release | 2025.3 / September 2025[1] |
| Repository | github.com/openvinotoolkit/openvino |
| Written in | C++ |
| Operating system | Cross-platform |
| License | Apache License 2.0 |
| Website | openvino.ai |
| As of | September 2025 |
OpenVINO is an open-source software toolkit developed by Intel for optimizing and deploying deep learning models. It supports several popular model formats[2] and categories, such as large language models, computer vision, and generative AI.
OpenVINO is optimized for Intel hardware, but it also supports ARM/ARM64 processors.[2] It is used in AI sound-processing drivers when paired with Intel's Gaussian & Neural Accelerator (GNA).
Written in C++, it also provides API bindings for C and Python, as well as Node.js (in early preview).
OpenVINO is cross-platform and free for use under Apache License 2.0.[3]
Workflow
The simplest OpenVINO usage involves obtaining a model and running it as is. For the best results, however, a more complete workflow is recommended:[4]
- obtain a model in one of the supported frameworks,
- convert the model to OpenVINO IR using the OpenVINO Converter tool,
- optimize the model, using training-time or post-training options provided by OpenVINO's NNCF (Neural Network Compression Framework),
- execute inference with OpenVINO Runtime, specifying one of several inference modes.
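The steps above can be expressed compactly with the OpenVINO Python API. The following is a minimal sketch rather than an official example: the tiny PyTorch network and its input are placeholders standing in for a real trained model, and the NNCF optimization step is only indicated in a comment.

```python
import torch
import openvino as ov

# Placeholder model: a tiny network standing in for a real trained model.
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

torch_model = TinyNet()
example_input = torch.randn(1, 4)

# 1. Convert the source model to an in-memory OpenVINO model.
ov_model = ov.convert_model(torch_model, example_input=example_input)

# 2. (Optional) Post-training optimization with NNCF would go here, e.g.
#    ov_model = nncf.quantize(ov_model, nncf.Dataset(calib_data, transform_fn))

# 3. Compile for a target device ("CPU", "GPU" or "AUTO") and run inference.
core = ov.Core()
compiled = core.compile_model(ov_model, device_name="AUTO")
result = compiled(example_input.numpy())[compiled.output(0)]
print(result.shape)  # (1, 2)
```

With the device set to "AUTO", the runtime selects an available device at load time instead of requiring an explicit choice.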
OpenVINO model format
[edit]OpenVINO IR[5] is the default format used to run inference. It is saved as a set of two files, *.bin and *.xml, containing weights and topology, respectively. It is obtained by converting a model from one of the supported frameworks, using the application's API or a dedicated converter.
Models of the supported formats may also be used for inference directly, without prior conversion to OpenVINO IR. Such an approach is more convenient but offers fewer optimization options and lower performance, since the conversion is performed automatically before inference. Some pre-converted models can be found in the Hugging Face repository.[6]
The supported model formats are:[7]
- PyTorch
- TensorFlow
- TensorFlow Lite
- ONNX (including formats that may be serialized to ONNX)
- PaddlePaddle
- JAX/Flax
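As an illustration of the direct route described above, a file in one of these formats can be handed straight to the runtime, which converts it on the fly; "model.onnx" below is a placeholder for any locally available ONNX file.

```python
import openvino as ov

core = ov.Core()

# Compile an ONNX file directly; conversion happens automatically before
# inference, which is convenient but skips offline optimization with NNCF.
compiled = core.compile_model("model.onnx", device_name="CPU")
```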
OS support
OpenVINO runs on Windows, Linux and macOS.[8]
References
- ^ "Release Notes for Intel Distribution of OpenVINO toolkit 2025.3". September 2025.
- ^ a b "OpenVINO Compatibility and Support". OpenVINO Documentation. 24 January 2024.
- ^ "License". OpenVINO repository. 16 October 2018.
- ^ "OpenVINO Workflow". OpenVINO Documentation. 25 April 2024.
- ^ "OpenVINO IR". www.docs.openvino.ai. 2 February 2024.
- ^ "Hugging Face OpenVINO Space". Hugging Face.
- ^ "OpenVINO Model Preparation". OpenVINO Documentation. 24 January 2024.
- ^ "System Requirements". OpenVINO Documentation. February 2024.
- Agrawal, Vasu (2019). Ground Up Design of a Multi-modal Object Detection System (PDF) (MSc). Carnegie Mellon University Pittsburgh, PA. Archived (PDF) from the original on 26 January 2020.
- Driaba, Alexander; Gordeev, Aleksei; Klyachin, Vladimir (2019). "Recognition of Various Objects from a Certain Categorical Set in Real Time Using Deep Convolutional Neural Networks" (PDF). Institute of Mathematics and Informational Technologies Volgograd State University. Archived (PDF) from the original on 26 January 2020. Retrieved 26 January 2020.
- Nanjappa, Ashwin (31 May 2019). Caffe2 Quick Start Guide: Modular and scalable deep learning made easy. Packt. pp. 91–98. ISBN 978-1789137750.