Missing required positional arguments calling `_output_padding` in `_ConvTransposeNd` from torch #2964
Comments
Which torch version were you using? cc @ttyio
2.0.1, but the same issue occurs after downgrading to 1.13. After a quick search in the commit history, it seems to be linked to a signature change on 2022-04-22: pytorch/pytorch@041e6e7#diff-db28bf59508ce2064dfd833cede78086de03b9567550a1a53f110256385ae7a0R613
Could you please try with pytorch 1.9.1, as https://github.com/NVIDIA/TensorRT/tree/release/8.6/tools/pytorch-quantization mentions? Or use our docker images.
This is an incompatibility caused by the torch upgrade (since torch 1.12). We have fixed it internally but have not yet integrated the fix into the public repo. Will work on this, thanks!
Confirmed the fix will go out publicly in the next monthly release. Could you use the older pytorch version for now? Thanks!
Closing since there is a workaround (WAR) and we will fix this in the next monthly release, thanks!
Description
Missing required arguments when calling `torch.nn.modules.conv._ConvTransposeNd._output_padding` from `pytorch_quantization.nn.modules.quant_conv.QuantConvTranspose1d`, `pytorch_quantization.nn.modules.quant_conv.QuantConvTranspose2d`, and `pytorch_quantization.nn.modules.quant_conv.QuantConvTranspose3d`.

`_ConvTransposeNd`:

Whereas called in `QuantConvTranspose1d`:
Environment
TensorRT Version: *
NVIDIA GPU: *
NVIDIA Driver Version: *
CUDA Version: *
CUDNN Version: *
Operating System:
Python Version (if applicable):
Tensorflow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if so, version):
Relevant Files
Model link:
Steps To Reproduce
Commands or scripts:
Have you tried the latest release?:
Can this model run on other frameworks? For example, run the ONNX model with ONNXRuntime (`polygraphy run <model.onnx> --onnxrt`):