Larger recording creation of greater than 10 minutes fails with mqtt disconnect error #797
Comments
@divdaisymuffin the log indicates that the pipeline has been aborted, which suggests it was stopped explicitly - do you know how the stop is being triggered? (See Smart-City-Sample/analytics/common/runva.py, line 120 in b774b2b.)
No, the trigger is not known. It never happens if I keep "max-size-time" less than or equal to 10 minutes, but whenever I increase it beyond that, the failure occurs.
Based on the trace, I believe something in the system is sending a 'stop' / 'kill' signal to the container. Is the storage space of the container big enough to store the files temporarily? Also, rec2db will post and upload the clip - possibly the volume for the local clips is running out of space? I believe this is where the stop would originate from:
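A storage-triggered stop of the kind described could be sketched as follows. This is a hypothetical helper, not the actual Smart-City-Sample code; the function name, the 4 Mbit/s bitrate, and the `/tmp` path are illustrative assumptions:

```python
import shutil

def has_room_for_segment(path, estimated_segment_bytes):
    """Return True if the volume holding `path` can hold one more segment.

    A pipeline supervisor could call this before splitmuxsink starts a new
    segment and stop the pipeline when it returns False.
    """
    usage = shutil.disk_usage(path)
    return usage.free > estimated_segment_bytes

# Illustrative estimate: a 20-minute H.264 segment at ~4 Mbit/s.
needed = 4_000_000 // 8 * 20 * 60   # bytes/s * seconds = 600,000,000 bytes
print(has_room_for_segment("/tmp", needed))
```

With a 500Mi ephemeral-storage limit, a single segment of this size would already exceed the volume, which would be consistent with longer recordings failing while shorter ones succeed.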
I set the recording length to 12 minutes and ran for 8 hours with no problems using the built-in RTSP simulator. Like @nnshah1, I believe something is stopping the pipeline, maybe due to lack of storage resources? I added some instrumentation to the video segment saving code to extract some data - see the table below. Some observations:
I repeated the above experiment with 35-minute recordings for 9 hours, again with the built-in RTSP simulator. No errors were detected and video file segment sizes were consistent (just over 100 MB). I advise the following next steps:
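Instrumentation of the kind described, logging each finished segment's size so anomalies stand out, might look like the sketch below. The helper name and the 20% tolerance are assumptions, not the code actually used in the experiment:

```python
import os

def log_segment(path, expected_bytes, tolerance=0.2):
    """Record a finished segment's size and flag it if it deviates
    more than `tolerance` (as a fraction) from the expected size."""
    size = os.path.getsize(path)
    deviation = abs(size - expected_bytes) / expected_bytes
    status = "OK" if deviation <= tolerance else "ANOMALY"
    print(f"{path}: {size} bytes ({status})")
    return status

# Example with a throwaway file standing in for a recording segment:
with open("segment_000.mp4", "wb") as f:
    f.write(b"\0" * 1000)
log_segment("segment_000.mp4", expected_bytes=1000)
os.remove("segment_000.mp4")
```

Consistent "OK" lines across a long run, as reported above, would argue against a muxer problem and toward an external stop.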
@nnshah1 and @whbruce, I tried the suggestion of increasing the storage limit of the analytics pod's container from 500Mi to 9000Mi. That resolved my error of the pipeline stopping with the mqtt disconnect issue, and when I went inside the Kubernetes analytics pod I could see the recording files present. But as we know, in the next step the mp4 files move from /tmp/rec of the analytics pod to /var/www/mp4/ and are later uploaded to the GUI. The change I made is in analytics.yaml.m4. So, do I now need to increase the storage pod memory limit as well? @whbruce, since it is working fine for you, can you provide your analytics.yaml.m4 file, as well as office-storage.yaml.m4 and cloud-storage.yaml.m4?
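For reference, a storage-limit bump of the kind described might look like this in a Kubernetes container spec. This is only a sketch using the values from the comment above; the exact field names and their placement inside analytics.yaml.m4 may differ:

```yaml
# Sketch of a container resources section (hypothetical placement).
resources:
  limits:
    ephemeral-storage: 9000Mi   # raised from 500Mi so /tmp/rec can hold long segments
  requests:
    ephemeral-storage: 500Mi
```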
Glad to hear the pipeline is no longer aborting. Can you close this issue and open a new one for moving the mp4 files, as that is not related to pipeline operation? For the m4 files I used, see my segment-recording branch.
Thanks for creating #798. Please close this issue, or let us know what still needs to be addressed.
Hi @nnshah1 and @xwu2git
When I provide a recording time of 15 minutes or longer, it always fails with an mqtt disconnect error.
I am attaching the pipeline and the log of the error.
Please have a look.
```json
{
  "name": "object_detection",
  "version": 2,
  "type": "GStreamer",
  "template": "rtspsrc udp-buffer-size=212992 name=source ! queue ! rtph264depay ! h264parse ! video/x-h264 ! tee name=t ! queue ! decodebin ! videoconvert name=\"videoconvert\" ! gvadetect ie-config=CPU_BIND_THREAD=NO model=\"{models[person_detection_2020R2][1][network]}\" model-proc=\"{models[person_detection_2020R2][1][proc]}\" name=\"detection1\" threshold=0.50 ! gvadetect ie-config=CPU_BIND_THREAD=NO model=\"{models[face_detection_adas][1][network]}\" model-proc=\"{models[face_detection_adas][1][proc]}\" name=\"detection\" threshold=0.50 ! gvametaconvert name=\"metaconvert\" ! queue ! gvametapublish name=\"destination\" ! appsink name=appsink t. ! splitmuxsink max-size-time=1200000000000 name=\"splitmuxsink\"",
  "description": "Object Detection Pipeline",
  "parameters": {
    "type": "object",
    "properties": {
      "inference-interval": { "element": "detection", "type": "integer", "minimum": 0, "maximum": 4294967295 },
      "cpu-throughput-streams": { "element": "detection", "type": "string" },
      "n-threads": { "element": "videoconvert", "type": "integer" },
      "nireq": { "element": "detection", "type": "integer", "minimum": 1, "maximum": 64 },
      "recording_prefix": { "type": "string", "default": "recording" }
    }
  }
}
```
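As a sanity check on the pipeline definition: splitmuxsink's max-size-time is expressed in nanoseconds, so the value in the template corresponds to a 20-minute segment, well past the ~10-minute threshold where the failure starts:

```python
# splitmuxsink max-size-time is in nanoseconds.
max_size_time_ns = 1_200_000_000_000   # value from the pipeline template

seconds = max_size_time_ns / 1e9
minutes = seconds / 60
print(minutes)
```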