DOCS: Fixing broken links in documentation. (#14935)
sgolebiewski-intel authored Jan 5, 2023
1 parent 0d261db commit 3017c8d
Showing 16 changed files with 20 additions and 21 deletions.
4 changes: 2 additions & 2 deletions docs/OV_Runtime_UG/ShapeInference.md
@@ -61,7 +61,7 @@ When using the `reshape` method, you may take one of the approaches:
:fragment: simple_spatials_change


-To do the opposite - to resize input image to match the input shapes of the model, use the :ref:`pre-processing API <openvino_docs_OV_UG_Preprocessing_Overview>`.
+To do the opposite - to resize input image to match the input shapes of the model, use the :doc:`pre-processing API <openvino_docs_OV_UG_Preprocessing_Overview>`.


#. You can express a reshape plan, specifying the input by the port, the index, and the tensor name:
@@ -161,7 +161,7 @@ There are other approaches to change model input shapes during the stage of [IR

.. important::

-    Shape-changing functionality could be used to turn dynamic model input into a static one and vice versa. Always set static shapes when the shape of data is NOT going to change from one inference to another. Setting static shapes can avoid memory and runtime overheads for dynamic shapes which may vary depending on hardware plugin and model used. For more information, refer to the :ref:`Dynamic Shapes <openvino_docs_OV_UG_DynamicShapes>`.
+    Shape-changing functionality could be used to turn dynamic model input into a static one and vice versa. Always set static shapes when the shape of data is NOT going to change from one inference to another. Setting static shapes can avoid memory and runtime overheads for dynamic shapes which may vary depending on hardware plugin and model used. For more information, refer to the :doc:`Dynamic Shapes <openvino_docs_OV_UG_DynamicShapes>`.

@endsphinxdirective
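
The `:ref:` → `:doc:` swaps in this commit follow the standard Sphinx distinction (a sketch; the label name below is hypothetical, the document name is from this commit): `:doc:` resolves a target by document name, while `:ref:` only resolves an explicit label that the target file must declare.

```rst
.. ":doc:" links by document name; no label needs to exist in the target:

:doc:`Dynamic Shapes <openvino_docs_OV_UG_DynamicShapes>`

.. ":ref:" links by label, which the target must declare first:

.. _dynamic-shapes-label:

Dynamic Shapes
##############

:ref:`Dynamic Shapes <dynamic-shapes-label>`
```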

@@ -10,7 +10,7 @@ To accomplish that, the 2022.1 release OpenVINO introduced significant changes t

## The Installer Package Contains OpenVINO™ Runtime Only

-Since OpenVINO 2022.1, development tools have been distributed only via [PyPI](https://pypi.org/project/openvino-dev/), and are no longer included in the OpenVINO installer package. For a list of these components, refer to the [installation overview](../../../install_guides/installing-openvino-overview.md) guide. Benefits of this approach include:
+Since OpenVINO 2022.1, development tools have been distributed only via [PyPI](https://pypi.org/project/openvino-dev/), and are no longer included in the OpenVINO installer package. For a list of these components, refer to the [installation overview](../../install_guides/installing-openvino-overview.md) guide. Benefits of this approach include:

* simplification of the user experience - in previous versions, installation and usage of OpenVINO Development Tools differed from one distribution type to another (the OpenVINO installer vs. PyPI),
* ensuring that dependencies are handled properly via the PIP package manager, and that virtual environments of development tools are supported.
4 changes: 2 additions & 2 deletions docs/dev/build.md
@@ -303,7 +303,7 @@ mkdir build && cd build
```sh
cmake -DCMAKE_BUILD_TYPE=Release ..
```
-> **Note:** By default OpenVINO CMake scripts try to introspect the system and enable all possible functionality based on that. You can look at the CMake output and see warnings, which show that some functionality is turned off and the corresponding reason, guiding what to do to install additionally to enable unavailable functionality. Additionally, you can change CMake options to enable / disable some functionality, add / remove compilation flags, provide custom version of dependencies like TBB, PugiXML, OpenCV, Protobuf. Please, read [CMake options for custom compilation](CMakeOptionsForCustomCompilation) for this information.
+> **Note:** By default OpenVINO CMake scripts try to introspect the system and enable all possible functionality based on that. You can look at the CMake output and see warnings, which show that some functionality is turned off and the corresponding reason, guiding what to do to install additionally to enable unavailable functionality. Additionally, you can change CMake options to enable / disable some functionality, add / remove compilation flags, provide custom version of dependencies like TBB, PugiXML, OpenCV, Protobuf. Please, read [CMake Options for Custom Compilation](https://github.com/openvinotoolkit/openvino/wiki/CMakeOptionsForCustomCompilation) for this information.
3. (CMake build) Build OpenVINO project:
```sh
cmake --build . --config Release --jobs=$(nproc --all)
@@ -366,7 +366,7 @@ cd ../openvino
```sh
cmake -DCMAKE_BUILD_TYPE=Release -DOPENVINO_EXTRA_MODULES=../openvino_contrib/modules/arm_plugin ..
```
-> **Note:** By default OpenVINO CMake scripts try to introspect the system and enable all possible functionality based on that. You can look at the CMake output and see warnings, which show that some functionality is turned off and the corresponding reason, guiding what to do to install additionally to enable unavailable functionality. Additionally, you can change CMake options to enable / disable some functionality, add / remove compilation flags, provide custom version of dependencies like TBB, PugiXML, OpenCV, Protobuf. Please, read [CMake options for custom compilation](CMakeOptionsForCustomCompilation) for this information.
+> **Note:** By default OpenVINO CMake scripts try to introspect the system and enable all possible functionality based on that. You can look at the CMake output and see warnings, which show that some functionality is turned off and the corresponding reason, guiding what to do to install additionally to enable unavailable functionality. Additionally, you can change CMake options to enable / disable some functionality, add / remove compilation flags, provide custom version of dependencies like TBB, PugiXML, OpenCV, Protobuf. Please, read [CMake Options for Custom Compilation](https://github.com/openvinotoolkit/openvino/wiki/CMakeOptionsForCustomCompilation) for this information.
4. (CMake build) Build OpenVINO project:
```sh
cmake --build . --config Release --jobs=$(nproc --all)
2 changes: 1 addition & 1 deletion docs/install_guides/installing-openvino-yocto.md
@@ -100,7 +100,7 @@ openvino-model-optimizer-dev

## Additional Resources

-- [Troubleshooting Guide](openvino_docs_get_started_guide_troubleshooting_issues.html#yocto-install-issues)
+- [Troubleshooting Guide](@ref yocto-install-issues)
- [Yocto Project](https://docs.yoctoproject.org/) - official documentation webpage
- [BitBake Tool](https://docs.yoctoproject.org/bitbake/)
- [Poky](https://git.yoctoproject.org/poky)
3 changes: 2 additions & 1 deletion docs/install_guides/troubleshooting-issues.md
@@ -205,7 +205,8 @@ sudo apt install mokutil
sudo mokutil --disable-validation
```

-## <a name="yocto-install-issues"></a>Issues with Creating a Yocto Image for OpenVINO
+@anchor yocto-install-issues
+## Issues with Creating a Yocto Image for OpenVINO

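The two Yocto-related edits in this commit are the two halves of one Doxygen cross-file link (a sketch; file paths as in this commit): `@anchor` declares a stable label at the heading, and `@ref` resolves it from another page without hard-coding the generated HTML file name.

```markdown
<!-- docs/install_guides/troubleshooting-issues.md: declare the label -->
@anchor yocto-install-issues
## Issues with Creating a Yocto Image for OpenVINO

<!-- docs/install_guides/installing-openvino-yocto.md: link to the label -->
- [Troubleshooting Guide](@ref yocto-install-issues)
```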
### Error while adding "meta-intel" layer

2 changes: 0 additions & 2 deletions docs/ops/activation/SoftPlus_4.md
@@ -8,8 +8,6 @@

**Detailed description**

-*SoftPlus* operation is introduced in this [article](https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.165.6419).

*SoftPlus* performs element-wise activation function on a given input tensor, based on the following mathematical formula:

\f[
2 changes: 1 addition & 1 deletion docs/ops/internal/AUGRUCell.md
@@ -6,7 +6,7 @@

**Short description**: *AUGRUCell* represents a single AUGRU Cell (GRU with attentional update gate).

-**Detailed description**: The main difference between *AUGRUCell* and [GRUCell](../../../../../docs/ops/sequence/GRUCell_3.md) is the additional attention score input `A`, which is a multiplier for the update gate.
+**Detailed description**: The main difference between *AUGRUCell* and [GRUCell](../../../docs/ops/sequence/GRUCell_3.md) is the additional attention score input `A`, which is a multiplier for the update gate.
The AUGRU formula is based on the [paper arXiv:1809.03672](https://arxiv.org/abs/1809.03672).
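
As a sketch of what that multiplier does (symbols follow the cited paper's convention, not necessarily this spec's exact notation): the attention score rescales the update gate before the usual GRU state blend.

```latex
% attention score a_t rescales the GRU update gate u_t
\tilde{u}_t = a_t \cdot u_t
% the hidden state blend then uses the rescaled gate
h_t = (1 - \tilde{u}_t) \circ h_{t-1} + \tilde{u}_t \circ \tilde{h}_t
```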

```
2 changes: 1 addition & 1 deletion docs/ops/internal/AUGRUSequence.md
@@ -6,7 +6,7 @@

**Short description**: *AUGRUSequence* operation represents a series of AUGRU cells (GRU with attentional update gate).

-**Detailed description**: The main difference between *AUGRUSequence* and [GRUSequence](../../../../../docs/ops/sequence/GRUSequence_5.md) is the additional attention score input `A`, which is a multiplier for the update gate.
+**Detailed description**: The main difference between *AUGRUSequence* and [GRUSequence](../../../docs/ops/sequence/GRUSequence_5.md) is the additional attention score input `A`, which is a multiplier for the update gate.
The AUGRU formula is based on the [paper arXiv:1809.03672](https://arxiv.org/abs/1809.03672).

```
6 changes: 3 additions & 3 deletions docs/optimization_guide/model_optimization_guide.md
@@ -17,11 +17,11 @@

@sphinxdirective

-- :ref:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` implements most of the optimization parameters to a model by default. Yet, you are free to configure mean/scale values, batch size, RGB vs BGR input channels, and other parameters to speed up preprocess of a model (:ref:`Embedding Preprocessing Computation <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>`).
+- :doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` implements most of the optimization parameters to a model by default. Yet, you are free to configure mean/scale values, batch size, RGB vs BGR input channels, and other parameters to speed up preprocess of a model (:doc:`Embedding Preprocessing Computation <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>`).

-- :ref:`Post-training Quantization` is designed to optimize inference of deep learning models by applying post-training methods that do not require model retraining or fine-tuning, for example, post-training 8-bit integer quantization.
+- :doc:`Post-training Quantization <pot_introduction>` is designed to optimize inference of deep learning models by applying post-training methods that do not require model retraining or fine-tuning, for example, post-training 8-bit integer quantization.

-- :ref:`Training-time Optimization`, a suite of advanced methods for training-time model optimization within the DL framework, such as PyTorch and TensorFlow 2.x. It supports methods, like Quantization-aware Training and Filter Pruning. NNCF-optimized models can be inferred with OpenVINO using all the available workflows.
+- :doc:`Training-time Optimization <nncf_ptq_introduction>`, a suite of advanced methods for training-time model optimization within the DL framework, such as PyTorch and TensorFlow 2.x. It supports methods, like Quantization-aware Training and Filter Pruning. NNCF-optimized models can be inferred with OpenVINO using all the available workflows.

@endsphinxdirective
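
As a self-contained illustration of the post-training 8-bit idea mentioned above (a sketch of affine quantization, not the POT implementation; the value range is assumed known from calibration):

```python
# Affine 8-bit quantization sketch: map floats in [low, high] to uint8
# codes with a scale and zero point, then map back with a small precision loss.
def quantize(values, low, high):
    scale = (high - low) / 255.0          # one uint8 step in float units
    zero_point = round(-low / scale)      # uint8 code that maps back to 0.0
    q = [max(0, min(255, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(codes, scale, zero_point):
    return [(c - zero_point) * scale for c in codes]

q, scale, zp = quantize([-1.0, 0.0, 0.5, 1.0], low=-1.0, high=1.0)
restored = dequantize(q, scale, zp)
```

Each restored value differs from the original by no more than one quantization step (`scale`); that rounding error is what post-training quantization trades for smaller weights and integer arithmetic.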

2 changes: 1 addition & 1 deletion docs/resources/telemetry_information.md
@@ -8,7 +8,7 @@ without an explicit consent on your part and will cover only OpenVINO™ usage i
It does not extend to any other Intel software, hardware, website usage, or other products.

Google Analytics is used for telemetry purposes. Refer to
-:ref:`Google Analytics support<https://support.google.com/analytics/answer/6004245#zippy=%2Cour-privacy-policy%2Cgoogle-analytics-cookies-and-identifiers%2Cdata-collected-by-google-analytics%2Cwhat-is-the-data-used-for%2Cdata-access>` to understand how the data is collected and processed.
+`Google Analytics support <https://support.google.com/analytics/answer/6004245#zippy=%2Cour-privacy-policy%2Cgoogle-analytics-cookies-and-identifiers%2Cdata-collected-by-google-analytics%2Cwhat-is-the-data-used-for%2Cdata-access>`__ to understand how the data is collected and processed.

Enable or disable Telemetry reporting
======================================
2 changes: 1 addition & 1 deletion thirdparty/cnpy/README.md
@@ -16,7 +16,7 @@ Loading data written in numpy formats into C++ is equally simple, but requires y
Default installation directory is /usr/local.
To specify a different directory, add `-DCMAKE_INSTALL_PREFIX=/path/to/install/dir` to the cmake invocation in step 4.

-1. get [cmake](www.cmake.org)
+1. get [cmake](https://cmake.org/)
2. create a build directory, say $HOME/build
3. cd $HOME/build
4. cmake /path/to/cnpy
2 changes: 1 addition & 1 deletion tools/legacy/benchmark_app/README.md
@@ -53,7 +53,7 @@ Note that the benchmark_app usually produces optimal performance for any device
./benchmark_app -m <model> -i <input> -d CPU
```

-It still may be sub-optimal for some cases, especially for very small networks. For all devices, including the [MULTI device](../../../docs/OV_Runtime_UG/supported_plugins/MULTI.md) it is preferable to use the FP16 IR for the model. If latency of the CPU inference on the multi-socket machines is of concern.
+It still may be sub-optimal for some cases, especially for very small networks. For all devices, including the [MULTI device](../../../docs/OV_Runtime_UG/multi_device.md) it is preferable to use the FP16 IR for the model. If latency of the CPU inference on the multi-socket machines is of concern.
These, as well as other topics are explained in the [Performance Optimization Guide](../../../docs/optimization_guide/dldt_deployment_optimization_guide.md).

Running the application with the `-h` option yields the following usage message:
@@ -1,7 +1,7 @@
# Quantizing 3D Segmentation Model {#pot_example_3d_segmentation_README}

This example demonstrates the use of the [Post-training Optimization Tool API](@ref pot_compression_api_README) for the task of quantizing a 3D segmentation model.
-The [Brain Tumor Segmentation](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/brain-tumor-segmentation-0002/brain-tumor-segmentation-0002.md) model from PyTorch* is used for this purpose.
+The [Brain Tumor Segmentation](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/brain-tumor-segmentation-0002) model from PyTorch* is used for this purpose.
A custom `DataLoader` is created to load images in NIfTI format from [Medical Segmentation Decathlon BRATS 2017](http://medicaldecathlon.com/) dataset for 3D semantic segmentation task
and the implementation of Dice Index metric is used for the model evaluation. In addition, this example demonstrates how one can use image metadata obtained during image reading and
preprocessing to post-process the model raw output. The code of the example is available on [GitHub](https://github.com/openvinotoolkit/openvino/tree/master/tools/pot/openvino/tools/pot/api/samples/3d_segmentation).
@@ -1,7 +1,7 @@
# Quantizing Image Classification Model {#pot_example_classification_README}

This example demonstrates the use of the [Post-training Optimization Tool API](@ref pot_compression_api_README) for the task of quantizing a classification model.
-The [MobilenetV2](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/mobilenet-v2-1.0-224/mobilenet-v2-1.0-224.md) model from TensorFlow* is used for this purpose.
+The [MobilenetV2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mobilenet-v2-1.0-224) model from TensorFlow* is used for this purpose.
A custom `DataLoader` is created to load the [ImageNet](http://www.image-net.org/) classification dataset and the implementation of Accuracy at top-1 metric is used for the model evaluation. The code of the example is available on [GitHub](https://github.com/openvinotoolkit/openvino/tree/master/tools/pot/openvino/tools/pot/api/samples/classification).
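
The custom `DataLoader` mentioned above follows the POT API's sequence-like protocol: `__len__` reports the dataset size and `__getitem__` returns one `(data, annotation)` sample. A minimal sketch (the class name and in-memory samples are hypothetical stand-ins; a real loader would subclass `openvino.tools.pot.DataLoader` and decode ImageNet images from disk):

```python
# Hypothetical loader illustrating the (data, annotation) sample protocol.
class AnnotatedImageLoader:
    def __init__(self, samples):
        self._samples = samples  # list of (image, label) pairs

    def __len__(self):
        return len(self._samples)

    def __getitem__(self, index):
        if index >= len(self._samples):
            raise IndexError(index)
        image, label = self._samples[index]
        return image, label

loader = AnnotatedImageLoader([("img_0", 7), ("img_1", 3)])
```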

## How to prepare the data
@@ -1,7 +1,7 @@
# Quantizing Cascaded Face detection Model {#pot_example_face_detection_README}

This example demonstrates the use of the [Post-training Optimization Tool API](@ref pot_compression_api_README) for the task of quantizing a face detection model.
-The [MTCNN](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/mtcnn/mtcnn.md) model from Caffe* is used for this purpose.
+The [MTCNN](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mtcnn) model from Caffe* is used for this purpose.
A custom `DataLoader` is created to load [WIDER FACE](http://shuoyang1213.me/WIDERFACE/) dataset for a face detection task
and the implementation of Recall metric is used for the model evaluation. In addition, this example demonstrates how one can implement
an engine to infer a cascaded (composite) model that is represented by multiple submodels in an OpenVino&trade; Intermediate Representation (IR)
@@ -1,7 +1,7 @@
# Quantizing Semantic Segmentation Model {#pot_example_segmentation_README}

This example demonstrates the use of the [Post-training Optimization Tool API](@ref pot_compression_api_README) for the task of quantizing a segmentation model.
-The [DeepLabV3](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/deeplabv3/deeplabv3.md) model from TensorFlow* is used for this purpose.
+The [DeepLabV3](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/deeplabv3) model from TensorFlow* is used for this purpose.
A custom `DataLoader` is created to load the [Pascal VOC 2012](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/) dataset for semantic segmentation task
and the implementation of Mean Intersection Over Union metric is used for the model evaluation. The code of the example is available on [GitHub](https://github.com/openvinotoolkit/openvino/tree/master/tools/pot/openvino/tools/pot/api/samples/segmentation).
