diff --git a/docs/OV_Runtime_UG/ShapeInference.md b/docs/OV_Runtime_UG/ShapeInference.md
index 98bfea990eaa1c..57cf41cb3d01d3 100644
--- a/docs/OV_Runtime_UG/ShapeInference.md
+++ b/docs/OV_Runtime_UG/ShapeInference.md
@@ -61,7 +61,7 @@ When using the `reshape` method, you may take one of the approaches:
 
       :fragment: simple_spatials_change
 
-   To do the opposite - to resize input image to match the input shapes of the model, use the :ref:`pre-processing API `.
+   To do the opposite - to resize the input image to match the input shapes of the model, use the :doc:`pre-processing API `.
 
 #. You can express a reshape plan, specifying the input by the port, the index, and the tensor name:
 
@@ -161,7 +161,7 @@ There are other approaches to change model input shapes during the stage of [IR
 
 .. important::
 
-   Shape-changing functionality could be used to turn dynamic model input into a static one and vice versa. Always set static shapes when the shape of data is NOT going to change from one inference to another. Setting static shapes can avoid memory and runtime overheads for dynamic shapes which may vary depending on hardware plugin and model used. For more information, refer to the :ref:`Dynamic Shapes `.
+   Shape-changing functionality can be used to turn a dynamic model input into a static one and vice versa. Always set static shapes when the shape of data is NOT going to change from one inference to another. Setting static shapes can avoid memory and runtime overheads for dynamic shapes, which may vary depending on the hardware plugin and model used. For more information, refer to :doc:`Dynamic Shapes `.
 
 @endsphinxdirective
 
diff --git a/docs/OV_Runtime_UG/migration_ov_2_0/deployment_migration.md b/docs/OV_Runtime_UG/migration_ov_2_0/deployment_migration.md
index acf17ad9fea31c..f5d9a1c4213ca1 100644
--- a/docs/OV_Runtime_UG/migration_ov_2_0/deployment_migration.md
+++ b/docs/OV_Runtime_UG/migration_ov_2_0/deployment_migration.md
@@ -10,7 +10,7 @@ To accomplish that, the 2022.1 release OpenVINO introduced significant changes t
 
 ## The Installer Package Contains OpenVINO™ Runtime Only
 
-Since OpenVINO 2022.1, development tools have been distributed only via [PyPI](https://pypi.org/project/openvino-dev/), and are no longer included in the OpenVINO installer package. For a list of these components, refer to the [installation overview](../../../install_guides/installing-openvino-overview.md) guide. Benefits of this approach include:
+Since OpenVINO 2022.1, development tools have been distributed only via [PyPI](https://pypi.org/project/openvino-dev/), and are no longer included in the OpenVINO installer package. For a list of these components, refer to the [installation overview](../../install_guides/installing-openvino-overview.md) guide. Benefits of this approach include:
 
 * simplification of the user experience - in previous versions, installation and usage of OpenVINO Development Tools differed from one distribution type to another (the OpenVINO installer vs. PyPI),
 * ensuring that dependencies are handled properly via the PIP package manager, and support virtual environments of development tools.
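For reference, the `reshape` workflow that ShapeInference.md describes can be sketched with the OpenVINO Python API. This is a minimal sketch: the model path, input index, and target shape are placeholder values, and it assumes a single-input image model.

```python
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")   # placeholder path

# The reshape plan maps an input (by index, tensor name, or port) to a new shape.
# A fully static shape avoids the overheads of dynamic shapes when the input
# size never changes between inferences.
model.reshape({0: [1, 3, 448, 448]})

compiled_model = core.compile_model(model, "CPU")
```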
diff --git a/docs/dev/build.md b/docs/dev/build.md
index 5eb41a9f117dd2..65c8c0ff43832f 100644
--- a/docs/dev/build.md
+++ b/docs/dev/build.md
@@ -303,7 +303,7 @@ mkdir build && cd build
 ```sh
 cmake -DCMAKE_BUILD_TYPE=Release ..
 ```
-> **Note:** By default OpenVINO CMake scripts try to introspect the system and enable all possible functionality based on that. You can look at the CMake output and see warnings, which show that some functionality is turned off and the corresponding reason, guiding what to do to install additionally to enable unavailable functionality. Additionally, you can change CMake options to enable / disable some functionality, add / remove compilation flags, provide custom version of dependencies like TBB, PugiXML, OpenCV, Protobuf. Please, read [CMake options for custom compilation](CMakeOptionsForCustomCompilation) for this information.
+> **Note:** By default, the OpenVINO CMake scripts introspect the system and enable all functionality available on it. The CMake output contains warnings that show which functionality is turned off and why, so you know what to install additionally to enable it. You can also change CMake options to enable or disable functionality, add or remove compilation flags, and provide custom versions of dependencies such as TBB, PugiXML, OpenCV, and Protobuf. For details, read [CMake Options for Custom Compilation](https://github.com/openvinotoolkit/openvino/wiki/CMakeOptionsForCustomCompilation).
 3. (CMake build) Build OpenVINO project:
 ```sh
 cmake --build . --config Release --jobs=$(nproc --all)
@@ -366,7 +366,7 @@ cd ../openvino
 ```sh
 cmake -DCMAKE_BUILD_TYPE=Release -DOPENVINO_EXTRA_MODULES=../openvino_contrib/modules/arm_plugin ..
 ```
-> **Note:** By default OpenVINO CMake scripts try to introspect the system and enable all possible functionality based on that. You can look at the CMake output and see warnings, which show that some functionality is turned off and the corresponding reason, guiding what to do to install additionally to enable unavailable functionality. Additionally, you can change CMake options to enable / disable some functionality, add / remove compilation flags, provide custom version of dependencies like TBB, PugiXML, OpenCV, Protobuf. Please, read [CMake options for custom compilation](CMakeOptionsForCustomCompilation) for this information.
+> **Note:** By default, the OpenVINO CMake scripts introspect the system and enable all functionality available on it. The CMake output contains warnings that show which functionality is turned off and why, so you know what to install additionally to enable it. You can also change CMake options to enable or disable functionality, add or remove compilation flags, and provide custom versions of dependencies such as TBB, PugiXML, OpenCV, and Protobuf. For details, read [CMake Options for Custom Compilation](https://github.com/openvinotoolkit/openvino/wiki/CMakeOptionsForCustomCompilation).
 4. (CMake build) Build OpenVINO project:
 ```sh
 cmake --build . --config Release --jobs=$(nproc --all)
diff --git a/docs/install_guides/installing-openvino-yocto.md b/docs/install_guides/installing-openvino-yocto.md
index 75ec2a13298153..8810f49b22e3db 100644
--- a/docs/install_guides/installing-openvino-yocto.md
+++ b/docs/install_guides/installing-openvino-yocto.md
@@ -100,7 +100,7 @@ openvino-model-optimizer-dev
 
 ## Additional Resources
 
-- [Troubleshooting Guide](openvino_docs_get_started_guide_troubleshooting_issues.html#yocto-install-issues)
+- [Troubleshooting Guide](@ref yocto-install-issues)
 - [Yocto Project](https://docs.yoctoproject.org/) - official documentation webpage
 - [BitBake Tool](https://docs.yoctoproject.org/bitbake/)
 - [Poky](https://git.yoctoproject.org/poky)
diff --git a/docs/install_guides/troubleshooting-issues.md b/docs/install_guides/troubleshooting-issues.md
index 58bcbfea6058c6..104610f11d157a 100644
--- a/docs/install_guides/troubleshooting-issues.md
+++ b/docs/install_guides/troubleshooting-issues.md
@@ -205,7 +205,8 @@ sudo apt install mokutil
 sudo mokutil --disable-validation
 ```
 
-## Issues with Creating a Yocto Image for OpenVINO
+@anchor yocto-install-issues
+## Issues with Creating a Yocto Image for OpenVINO
 
 ### Error while adding "meta-intel" layer
 
diff --git a/docs/ops/activation/SoftPlus_4.md b/docs/ops/activation/SoftPlus_4.md
index b4c98cd82cf275..9bf733e4d586e6 100644
--- a/docs/ops/activation/SoftPlus_4.md
+++ b/docs/ops/activation/SoftPlus_4.md
@@ -8,8 +8,6 @@
 
 **Detailed description**
 
-*SoftPlus* operation is introduced in this [article](https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.165.6419).
-
 *SoftPlus* performs element-wise activation function on a given input tensor, based on the following mathematical formula:
 
 \f[
diff --git a/docs/ops/internal/AUGRUCell.md b/docs/ops/internal/AUGRUCell.md
index fa58148b714a2e..ed980d826dbb34 100644
--- a/docs/ops/internal/AUGRUCell.md
+++ b/docs/ops/internal/AUGRUCell.md
@@ -6,7 +6,7 @@
 
 **Short description**: *AUGRUCell* represents a single AUGRU Cell (GRU with attentional update gate).
 
-**Detailed description**: The main difference between *AUGRUCell* and [GRUCell](../../../../../docs/ops/sequence/GRUCell_3.md) is the additional attention score input `A`, which is a multiplier for the update gate.
+**Detailed description**: The main difference between *AUGRUCell* and [GRUCell](../../../docs/ops/sequence/GRUCell_3.md) is the additional attention score input `A`, which is a multiplier for the update gate.
 The AUGRU formula is based on the [paper arXiv:1809.03672](https://arxiv.org/abs/1809.03672).
 
 ```
diff --git a/docs/ops/internal/AUGRUSequence.md b/docs/ops/internal/AUGRUSequence.md
index ec940de1ab0e5c..bb4f38b27a28e0 100644
--- a/docs/ops/internal/AUGRUSequence.md
+++ b/docs/ops/internal/AUGRUSequence.md
@@ -6,7 +6,7 @@
 
 **Short description**: *AUGRUSequence* operation represents a series of AUGRU cells (GRU with attentional update gate).
 
-**Detailed description**: The main difference between *AUGRUSequence* and [GRUSequence](../../../../../docs/ops/sequence/GRUSequence_5.md) is the additional attention score input `A`, which is a multiplier for the update gate.
+**Detailed description**: The main difference between *AUGRUSequence* and [GRUSequence](../../../docs/ops/sequence/GRUSequence_5.md) is the additional attention score input `A`, which is a multiplier for the update gate.
 The AUGRU formula is based on the [paper arXiv:1809.03672](https://arxiv.org/abs/1809.03672).
 
 ```
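The attentional update gate that the AUGRUCell and AUGRUSequence descriptions refer to can be illustrated with a small NumPy sketch of a single cell step. It follows the formulation of the referenced paper (arXiv:1809.03672); the gate order, weight layout, and blending convention used here are assumptions for illustration and may differ from the exact operation specification.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def augru_cell(x, h_prev, a, W, R, b):
    """One AUGRU step: a GRU cell whose update gate is scaled by the attention score `a`.

    x: (batch, input_size), h_prev: (batch, hidden_size), a: (batch, 1),
    W: (3*hidden_size, input_size), R: (3*hidden_size, hidden_size), b: (3*hidden_size,),
    stacked in (update, reset, candidate) order -- an assumed layout for this sketch.
    """
    W_u, W_r, W_h = np.split(W, 3)
    R_u, R_r, R_h = np.split(R, 3)
    b_u, b_r, b_h = np.split(b, 3)

    u = sigmoid(x @ W_u.T + h_prev @ R_u.T + b_u)      # update gate
    r = sigmoid(x @ W_r.T + h_prev @ R_r.T + b_r)      # reset gate
    h_cand = np.tanh(x @ W_h.T + (r * h_prev) @ R_h.T + b_h)

    u_att = a * u                                      # attention score scales the update gate
    return (1.0 - u_att) * h_prev + u_att * h_cand
```

A sequence variant then applies this step along the time dimension, consuming one attention score per time step.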
diff --git a/docs/optimization_guide/model_optimization_guide.md b/docs/optimization_guide/model_optimization_guide.md
index 40f13c39325dc1..ca2d463227cacf 100644
--- a/docs/optimization_guide/model_optimization_guide.md
+++ b/docs/optimization_guide/model_optimization_guide.md
@@ -17,11 +17,11 @@
 
 @sphinxdirective
 
-- :ref:`Model Optimizer ` implements most of the optimization parameters to a model by default. Yet, you are free to configure mean/scale values, batch size, RGB vs BGR input channels, and other parameters to speed up preprocess of a model (:ref:`Embedding Preprocessing Computation `).
+- :doc:`Model Optimizer ` applies most of the optimization parameters to a model by default. Yet, you are free to configure mean/scale values, batch size, RGB vs BGR input channels, and other parameters to speed up preprocessing of a model (:doc:`Embedding Preprocessing Computation `).
 
-- :ref:`Post-training Quantization` is designed to optimize inference of deep learning models by applying post-training methods that do not require model retraining or fine-tuning, for example, post-training 8-bit integer quantization.
+- :doc:`Post-training Quantization ` is designed to optimize inference of deep learning models by applying post-training methods that do not require model retraining or fine-tuning, for example, post-training 8-bit integer quantization.
 
-- :ref:`Training-time Optimization`, a suite of advanced methods for training-time model optimization within the DL framework, such as PyTorch and TensorFlow 2.x. It supports methods, like Quantization-aware Training and Filter Pruning. NNCF-optimized models can be inferred with OpenVINO using all the available workflows.
+- :doc:`Training-time Optimization ` is a suite of advanced methods for training-time model optimization within DL frameworks such as PyTorch and TensorFlow 2.x. It supports methods like Quantization-aware Training and Filter Pruning. NNCF-optimized models can be inferred with OpenVINO using all the available workflows.
 
 @endsphinxdirective
 
diff --git a/docs/resources/telemetry_information.md b/docs/resources/telemetry_information.md
index 57dba706ecbcf7..a748ac0e9b048d 100644
--- a/docs/resources/telemetry_information.md
+++ b/docs/resources/telemetry_information.md
@@ -8,7 +8,7 @@ without an explicit consent on your part and will cover only OpenVINO™ usage i
 It does not extend to any other Intel software, hardware, website usage, or other products.
 
 Google Analytics is used for telemetry purposes. Refer to
-:ref:`Google Analytics support` to understand how the data is collected and processed.
+`Google Analytics support `__ to understand how the data is collected and processed.
 
 Enable or disable Telemetry reporting
 ======================================
diff --git a/thirdparty/cnpy/README.md b/thirdparty/cnpy/README.md
index 4f0f42ad0fc8ca..0f00ac55bfdd44 100644
--- a/thirdparty/cnpy/README.md
+++ b/thirdparty/cnpy/README.md
@@ -16,7 +16,7 @@ Loading data written in numpy formats into C++ is equally simple, but requires y
 
 Default installation directory is /usr/local. To specify a different directory, add `-DCMAKE_INSTALL_PREFIX=/path/to/install/dir` to the cmake invocation in step 4.
 
-1. get [cmake](www.cmake.org)
+1. get [cmake](https://cmake.org/)
 2. create a build directory, say $HOME/build
 3. cd $HOME/build
 4. cmake /path/to/cnpy
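The post-training 8-bit quantization flow referenced in model_optimization_guide.md roughly follows the pattern of the POT API samples. The sketch below is an assumption-laden outline, not the samples' exact code: the model paths are placeholders, the calibration loader feeds random data, and the assumed `DataLoader` return convention (`data, annotation`) and function signatures should be checked against the POT API reference before use.

```python
import numpy as np
from openvino.tools.pot import DataLoader, IEEngine, load_model, save_model, create_pipeline

class RandomCalibrationLoader(DataLoader):
    """Toy calibration source: random tensors standing in for a real dataset."""

    def __init__(self, shape=(3, 224, 224), count=300):
        self._shape, self._count = shape, count

    def __len__(self):
        return self._count

    def __getitem__(self, index):
        # Assumed contract: (data, annotation); DefaultQuantization ignores the annotation.
        return np.random.rand(*self._shape).astype(np.float32), None

model_config = {"model_name": "model", "model": "model.xml", "weights": "model.bin"}  # placeholder paths
engine_config = {"device": "CPU"}
algorithms = [{
    "name": "DefaultQuantization",
    "params": {"target_device": "ANY", "preset": "performance", "stat_subset_size": 300},
}]

model = load_model(model_config=model_config)
engine = IEEngine(config=engine_config, data_loader=RandomCalibrationLoader())
pipeline = create_pipeline(algorithms, engine)
compressed_model = pipeline.run(model)
save_model(compressed_model, save_path="optimized")
```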
diff --git a/tools/legacy/benchmark_app/README.md b/tools/legacy/benchmark_app/README.md
index 5e4bebc58992db..5b51d6e71c3225 100644
--- a/tools/legacy/benchmark_app/README.md
+++ b/tools/legacy/benchmark_app/README.md
@@ -53,7 +53,7 @@ Note that the benchmark_app usually produces optimal performance for any device
 ./benchmark_app -m <model> -i <input> -d CPU
 ```
 
-It still may be sub-optimal for some cases, especially for very small networks. For all devices, including the [MULTI device](../../../docs/OV_Runtime_UG/supported_plugins/MULTI.md) it is preferable to use the FP16 IR for the model. If latency of the CPU inference on the multi-socket machines is of concern.
+It still may be sub-optimal in some cases, especially for very small networks. For all devices, including the [MULTI device](../../../docs/OV_Runtime_UG/multi_device.md), it is preferable to use the FP16 IR for the model. The same applies when latency of CPU inference on multi-socket machines is a concern.
 These, as well as other topics are explained in the [Performance Optimization Guide](../../../docs/optimization_guide/dldt_deployment_optimization_guide.md).
 
 Running the application with the `-h` option yields the following usage message:
diff --git a/tools/pot/openvino/tools/pot/api/samples/3d_segmentation/README.md b/tools/pot/openvino/tools/pot/api/samples/3d_segmentation/README.md
index e901cfaa33d68a..b653f646d2a6e7 100644
--- a/tools/pot/openvino/tools/pot/api/samples/3d_segmentation/README.md
+++ b/tools/pot/openvino/tools/pot/api/samples/3d_segmentation/README.md
@@ -1,7 +1,7 @@
-# Quantizatiing 3D Segmentation Model {#pot_example_3d_segmentation_README}
+# Quantizing 3D Segmentation Model {#pot_example_3d_segmentation_README}
 
 This example demonstrates the use of the [Post-training Optimization Tool API](@ref pot_compression_api_README) for the task of quantizing a 3D segmentation model.
-The [Brain Tumor Segmentation](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/brain-tumor-segmentation-0002/brain-tumor-segmentation-0002.md) model from PyTorch* is used for this purpose.
+The [Brain Tumor Segmentation](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/brain-tumor-segmentation-0002) model from PyTorch* is used for this purpose.
 A custom `DataLoader` is created to load images in NIfTI format from [Medical Segmentation Decathlon BRATS 2017](http://medicaldecathlon.com/) dataset for 3D semantic segmentation task and the implementation of Dice Index metric is used for the model evaluation.
 In addition, this example demonstrates how one can use image metadata obtained during image reading and preprocessing to post-process the model raw output.
 The code of the example is available on [GitHub](https://github.com/openvinotoolkit/openvino/tree/master/tools/pot/openvino/tools/pot/api/samples/3d_segmentation).
diff --git a/tools/pot/openvino/tools/pot/api/samples/classification/README.md b/tools/pot/openvino/tools/pot/api/samples/classification/README.md
index 0bb60fc7adc6d0..43f946a9897460 100644
--- a/tools/pot/openvino/tools/pot/api/samples/classification/README.md
+++ b/tools/pot/openvino/tools/pot/api/samples/classification/README.md
@@ -1,7 +1,7 @@
 # Quantizing Image Classification Model {#pot_example_classification_README}
 
 This example demonstrates the use of the [Post-training Optimization Tool API](@ref pot_compression_api_README) for the task of quantizing a classification model.
-The [MobilenetV2](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/mobilenet-v2-1.0-224/mobilenet-v2-1.0-224.md) model from TensorFlow* is used for this purpose.
+The [MobilenetV2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mobilenet-v2-1.0-224) model from TensorFlow* is used for this purpose.
 A custom `DataLoader` is created to load the [ImageNet](http://www.image-net.org/) classification dataset and the implementation of Accuracy at top-1 metric is used for the model evaluation. The code of the example is available on [GitHub](https://github.com/openvinotoolkit/openvino/tree/master/tools/pot/openvino/tools/pot/api/samples/classification).
 
 ## How to prepare the data
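The "Accuracy at top-1" metric mentioned in the classification sample can be expressed as a custom metric class. The sketch below assumes the `Metric` base class exported by `openvino.tools.pot` and the member set used in the API samples (`value`, `avg_value`, `update`, `reset`, `get_attributes`); verify the exact interface against the [Post-training Optimization Tool API](@ref pot_compression_api_README) before relying on it.

```python
import numpy as np
from openvino.tools.pot import Metric

class TopKAccuracy(Metric):
    """Accuracy at top-k over all processed batches (k=1 gives accuracy at top-1)."""

    def __init__(self, top_k=1):
        super().__init__()
        self._top_k = top_k
        self._name = "accuracy@top{}".format(top_k)
        self._matches = []

    @property
    def value(self):
        # Metric value for the last processed batch.
        return {self._name: np.mean(self._matches[-1])}

    @property
    def avg_value(self):
        # Average metric value over all processed batches.
        return {self._name: np.mean(np.concatenate(self._matches))}

    def update(self, output, target):
        # output: list with one (batch, num_classes) array; target: iterable of integer labels.
        predictions = np.argsort(output[0], axis=1)[:, -self._top_k:]
        self._matches.append(
            np.array([float(label in predictions[i]) for i, label in enumerate(target)])
        )

    def reset(self):
        self._matches = []

    def get_attributes(self):
        return {self._name: {"direction": "higher-better", "type": "accuracy"}}
```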
diff --git a/tools/pot/openvino/tools/pot/api/samples/face_detection/README.md b/tools/pot/openvino/tools/pot/api/samples/face_detection/README.md
index 69201af3668f45..f11e0f7998581a 100644
--- a/tools/pot/openvino/tools/pot/api/samples/face_detection/README.md
+++ b/tools/pot/openvino/tools/pot/api/samples/face_detection/README.md
@@ -1,7 +1,7 @@
 # Quantizing Cascaded Face detection Model {#pot_example_face_detection_README}
 
 This example demonstrates the use of the [Post-training Optimization Tool API](@ref pot_compression_api_README) for the task of quantizing a face detection model.
-The [MTCNN](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/mtcnn/mtcnn.md) model from Caffe* is used for this purpose.
+The [MTCNN](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mtcnn) model from Caffe* is used for this purpose.
 A custom `DataLoader` is created to load [WIDER FACE](http://shuoyang1213.me/WIDERFACE/) dataset for a face detection task and the implementation of Recall metric is used for the model evaluation.
 In addition, this example demonstrates how one can implement an engine to infer a cascaded (composite) model that is represented by multiple submodels in an OpenVino™ Intermediate Representation (IR)
diff --git a/tools/pot/openvino/tools/pot/api/samples/segmentation/README.md b/tools/pot/openvino/tools/pot/api/samples/segmentation/README.md
index bda999f8a9fec9..9201e3062d81a2 100644
--- a/tools/pot/openvino/tools/pot/api/samples/segmentation/README.md
+++ b/tools/pot/openvino/tools/pot/api/samples/segmentation/README.md
@@ -1,7 +1,7 @@
 # Quantizing Semantic Segmentation Model {#pot_example_segmentation_README}
 
 This example demonstrates the use of the [Post-training Optimization Tool API](@ref pot_compression_api_README) for the task of quantizing a segmentation model.
-The [DeepLabV3](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/deeplabv3/deeplabv3.md) model from TensorFlow* is used for this purpose.
+The [DeepLabV3](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/deeplabv3) model from TensorFlow* is used for this purpose.
 A custom `DataLoader` is created to load the [Pascal VOC 2012](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/) dataset for semantic segmentation task and the implementation of Mean Intersection Over Union metric is used for the model evaluation.
 The code of the example is available on [GitHub](https://github.com/openvinotoolkit/openvino/tree/master/tools/pot/openvino/tools/pot/api/samples/segmentation).
 
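The Mean Intersection Over Union metric used to evaluate the segmentation sample comes down to a per-class intersection/union ratio averaged over the classes present. A plain NumPy version, independent of the POT interfaces and with a hypothetical `ignore_index` of 255 for unlabeled pixels, looks like this:

```python
import numpy as np

def mean_iou(prediction, target, num_classes, ignore_index=255):
    """Mean IoU for a pair of (H, W) integer label maps."""
    valid = target != ignore_index
    ious = []
    for cls in range(num_classes):
        pred_c = (prediction == cls) & valid
        tgt_c = (target == cls) & valid
        union = np.logical_or(pred_c, tgt_c).sum()
        if union == 0:
            continue  # class absent from both maps; skip so it does not skew the mean
        ious.append(np.logical_and(pred_c, tgt_c).sum() / union)
    return float(np.mean(ious)) if ious else float("nan")
```

Averaging per-image IoU values over the dataset and accumulating global intersection and union counts first are both common conventions; the sample's exact choice is documented in its code.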