[Detector Support] GTX 1660 Super | AttributeError: 'TensorRtDetector' object has no attribute 'outputs' #14204
EDIT: Fixed, I forgot the

Describe the problem you are having
Hi, I'm trying to add a GPU for detection after several months of flawless work on my instance with a CPU detector, to improve my ~70 ms inference speed. I run Frigate in a Docker container on a VM in TrueNAS SCALE and have passed the GPU through the layers, but I am blocked at what seems to be the last link in the chain. The passthrough is working: with the CPU detector, GPU decoding works without issue. It seems this kind of issue has been encountered before, but I'm a C++ developer who has never delved into machine learning / TensorFlow, so I have not found the solution by myself. #5058: that ticket talks about a missing 'libyolo_layer.so'; I have no such file on my volume. Could this be the issue? How can I solve it? I have also tried different models, but I do not really understand what I am doing. What configuration do you recommend with my GPU? Which model? Should I enable or disable FP16? Thank you very much.

Version
0.14.1-f4f3cfa

Frigate config file
Relevant log output
And then the container restarts.

System
AMD Ryzen 7 2700X / GTX 1660 Super

Dockerfile
Config directory
Nvidia SMI
Replies: 1 comment, 3 replies
Please show nvidia-smi output.
Your config is incomplete. You are trying to use TensorRT, but you have not filled in the model config.
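For reference, a minimal sketch of what "filling in the model config" typically looks like for Frigate 0.14's TensorRT detector, based on the project's object-detector documentation. The device index, model name, path, and 320x320 resolution below are illustrative assumptions, not values taken from this thread:

```yaml
detectors:
  tensorrt:
    type: tensorrt
    device: 0  # GPU index (assumption: single-GPU passthrough)

model:
  # Assumption: the -tensorrt image variants generate .trt engine files
  # into the model cache from the YOLO_MODELS environment variable,
  # e.g. YOLO_MODELS=yolov7-320.
  path: /config/model_cache/tensorrt/yolov7-320.trt
  input_tensor: nchw
  input_pixel_format: rgb
  width: 320
  height: 320
```

The model's width/height must match the variant named in YOLO_MODELS. Regarding the FP16 question: the GTX 1660 Super (Turing) does support FP16 operations, so the default FP16 engine generation should work, but verify against the Frigate documentation for your exact version rather than treating this sketch as authoritative.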