
[Grouped Matmul] Fix PyTorch memory leak when tensors are not contiguous #1928

Triggered via pull request on January 7, 2024, 09:42
Status: Success
Total duration: 15s

Workflow: linting.yml
on: pull_request

Annotations

1 error
mypy: Process completed with exit code 2.
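The PR title refers to non-contiguous inputs to a grouped matrix multiplication. As a rough illustration only (not the actual patch from this PR, whose diff is not shown here), the general pattern is to normalize non-contiguous views with `.contiguous()` before handing them to a kernel that assumes dense storage. The `grouped_matmul` helper below is hypothetical and stands in for whatever fused operator the project uses.

```python
# Illustrative sketch only: not the change made in PR #1928.
# It shows the general pattern of copying non-contiguous inputs
# (e.g. transposed views) before a grouped matrix multiplication.
from typing import List

import torch
from torch import Tensor


def grouped_matmul(inputs: List[Tensor], others: List[Tensor]) -> List[Tensor]:
    # Hypothetical reference implementation: multiply each pair of
    # matrices independently. A fused kernel would typically require
    # contiguous inputs, so non-contiguous views are copied up front.
    inputs = [x if x.is_contiguous() else x.contiguous() for x in inputs]
    others = [w if w.is_contiguous() else w.contiguous() for w in others]
    return [x @ w for x, w in zip(inputs, others)]


if __name__ == "__main__":
    xs = [torch.randn(4, 8).t(), torch.randn(8, 4)]  # first entry is a non-contiguous view
    ws = [torch.randn(4, 16), torch.randn(4, 16)]
    outs = grouped_matmul(xs, ws)
    print([tuple(o.shape) for o in outs])  # [(8, 16), (8, 16)]
```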