Add FAQs and Common Issues doc page #7547

Open · wants to merge 1 commit into main
48 changes: 48 additions & 0 deletions docs/source/getting-started-faqs.md
@@ -0,0 +1,48 @@
# FAQs and Common Issues

This page summarizes frequently asked questions and provides guidance on issues that commonly occur when adopting ExecuTorch.

## Export

### Missing out variants: { _ }

The model likely contains torch custom operators. Custom ops need an ExecuTorch implementation and must be loaded at export time. See the [ExecuTorch Custom Ops Documentation](https://pytorch.org/executorch/main/kernel-library-custom-aten-kernel.html#apis) for details on how to do this.
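As a hedged illustration of defining a custom op so it is visible at export time, the sketch below uses `torch.library`; the `myops::mul3` op and its kernel are hypothetical, and the full ExecuTorch flow (including the required out variant and runtime kernel) is covered in the linked docs.

```python
import torch

# Hypothetical custom op "myops::mul3". Defining and implementing the op via
# torch.library makes it visible to torch.export; a corresponding ExecuTorch
# kernel (and an out variant) is still required for the runtime.
lib = torch.library.Library("myops", "DEF")
lib.define("mul3(Tensor x) -> Tensor")

def mul3_impl(x: torch.Tensor) -> torch.Tensor:
    return x * 3

lib.impl("mul3", mul3_impl, "CompositeExplicitAutograd")
```

After registration, the op is callable as `torch.ops.myops.mul3` and can appear in an exported graph.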

### RuntimeError: PyTorch convert function for op _ not implemented

The model likely contains an operator that is not yet supported on ExecuTorch. In this case, consider searching for or creating an issue on [GitHub](https://github.com/pytorch/executorch/issues).

## Runtime

ExecuTorch error codes are defined in [executorch/runtime/core/error.h](https://github.com/pytorch/executorch/blob/main/runtime/core/error.h).

### Performance Troubleshooting

Ensure the model is delegated. If not targeting a specific accelerator, use the XNNPACK delegate for CPU performance. Undelegated operators typically fall back to the ExecuTorch portable library, which is designed as a platform-independent reference implementation and fallback; it is not optimized for specific hardware and is not intended for performance-sensitive production use.


Additionally, thread counts are a common source of performance issues. While we are working to improve the default behavior, ExecuTorch currently uses as many threads as there are cores. On some heterogeneous mobile SoCs, this can be slow; consider setting the thread count to half the core count, or simply to 4, which may yield a speedup (or maintain parity) on most mobile devices.
Thread count can be set with the following function. Ensure this is done prior to loading or running a model.
```cpp
::executorch::extension::threadpool::get_threadpool()->_unsafe_reset_threadpool(num_threads);
```

We are actively working to improve the out-of-the-box behavior, but the above API can be used as a workaround to improve mobile performance until deeper changes for performant core detection land. Note that this unsafe API is currently the only way to configure the thread count in OSS.
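As a rough, hypothetical illustration of the heuristic above (half the cores, capped at 4), a helper like this could compute the value to pass to `_unsafe_reset_threadpool`; it is not an ExecuTorch API.

```python
import os

def recommended_thread_count() -> int:
    # Half the available cores, capped at 4 and never below 1.
    # Hypothetical helper implementing the heuristic above.
    cores = os.cpu_count() or 4
    return max(1, min(cores // 2, 4))
```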

### Error setting input: 0x10 / Attempted to resize a bounded tensor...

This usually means the inputs provided do not match the shape of the example inputs used during model export. If the model is expected to handle varying size inputs (dynamic shapes), make sure the model export specifies the appropriate bounds. See [Expressing Dynamism](https://pytorch.org/docs/stable/export.html#expressing-dynamism) for more information on specifying dynamic shapes.

### Error 0x14 (Operator Missing)

This usually means that the selective build configuration is incorrect. Ensure that the operator library is generated from the current version of the model and the corresponding `et_operator_library` is a dependency of the app-level `executorch_generated_lib` and the generated lib is linked into the application.

This can also occur if the ExecuTorch portable library does not yet have an implementation of the given ATen operator. In this case, consider searching for or creating an issue on [GitHub](https://github.com/pytorch/executorch/issues).
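As a rough sketch of the intended wiring in Buck (target names are placeholders, and attribute names follow the selective build docs; check them against your ExecuTorch version):

```starlark
# Hypothetical BUCK fragment. The ops list must be regenerated whenever the
# model changes, and the generated lib must be linked into the application.
et_operator_library(
    name = "my_model_ops",
    ops = ["aten::add.out", "aten::mm.out"],  # ops the model actually uses
)

executorch_generated_lib(
    name = "my_generated_lib",
    deps = [":my_model_ops"],
    # plus the kernel library and codegen attributes described in the
    # selective build documentation
)
```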

### Error 0x20 (Not Found)

This error can occur for a few reasons, but the most common is a missing backend target. Ensure the appropriate backend target is linked. For XNNPACK, this is `xnnpack_backend`.
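For a CMake-based build, the fix is typically a link line along these lines (`my_app` is a placeholder target, and the ExecuTorch CMake target names may vary by version):

```cmake
# Hypothetical fragment: link the runtime and the XNNPACK backend so the
# backend registers itself when the application loads.
target_link_libraries(my_app PRIVATE executorch xnnpack_backend)
```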

### Duplicate Kernel Registration Abort

This manifests as a crash with a call stack including ExecuTorch kernel registration and failing with an `et_pal_abort`. This typically means there are multiple `gen_operators_lib` targets linked into the application. There must be only one generated operator library per target, though each model can have its own `gen_selected_ops`/`generate_bindings_for_kernels` call.
2 changes: 1 addition & 1 deletion docs/source/index.rst
@@ -87,7 +87,7 @@ Topics in this section will help you get started with ExecuTorch.
getting-started-setup
export-overview
runtime-build-and-cross-compilation

getting-started-faqs

.. toctree::
:glob: