0.15.0 Beta 7 #16039
Replies: 16 comments 40 replies
-
Good job everyone. The Object Lifecycle pane is amazing. I'll do my best to help achieve the goal of 100% unit test coverage for the HTTP API.
-
So far so good. Fingers crossed no new issues appear. Congratulations on pre-release status.
-
I'm getting "Unable to create container: No such image: ghcr.io/blakeblackshear/frigate:0.15.0-beta6" when trying to pull the image.
-
None of my cameras are loading. Beta 5 was working great. Tons of error messages. Rolled back to Beta 5, no errors.
-
I was able to pull beta 6 with no issues, and Frigate is working fine.
-
The discussion thread is not linked in the beta 7 release; under Releases it still links to beta 6. This discussion has been retitled to beta 7 but still has the beta 6 links at the top. Some cleanup may be needed.
-
Installed the latest commit from last night, and unfortunately the "cannot allocate memory" errors continue to creep up once or twice a day. Not a huge deal breaker, as the system recovers, but it would be good to understand whether there is something I need to do, or that can be done, to avoid them. Intel 8505, 16 GB RAM, around 15 cameras with a dual USB Coral. In general, memory and CPU use are well contained, well under 30-40%.
-
Continuation of #15945 (comment), since that thread is closed.
I tested both v8 and v10 of some popular model with CUDA and TensorRT on a GTX 970. I found that the inference time with TensorRT is 10 ms faster (35 ms) than with CUDA (45 ms). Is this normal, or am I missing some configuration? Thank you. FYI, I also tested the onnx detector with TensorRT and the TensorRT detector with the same model; they have the same inference time.
-
Can I use the API of different models on each camera? I want to compare the descriptions and limitations of each different API.
-
The system is rebooting really quickly! Good work!
-
My Frigate keeps restarting. PC specs: i7 7700 / AMD RX470 / 32GB DDR4, running through Caddy and Podman. Error:
Caddyfile:
-
I think when I upgraded to the 0.15 beta, my recording retention configuration changed. I probably had an unusual retention setup, but in my case a lot of alerts and detections were deleted. I had backups, so nothing was lost. I think this is what happened... Also, will we be able to set the retention period by object again in the future? It seems that is no longer possible? Thanks.
Before upgrade:
After upgrade:
-
Got it. Before, I was able to have a default of 3 days, which included dog, cat, and bird, then 21 days for car and 30 days for person. I'm using alerts for person now, and detections for car and everything else. It would be good to get some of the retention granularity back. I liked being able to discard everything but person and car after 3 days. I get thousands of cars a day, so grouping them in alerts with persons didn't work well. Thanks.
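For anyone mapping an old per-object setup onto the new schema, a minimal sketch of the 0.15 record retention config is below. The key names follow the beta docs; the day values simply mirror the hypothetical setup described above.

```yaml
record:
  enabled: true
  retain:
    days: 3        # baseline recording retention (e.g. dog, cat, bird footage)
    mode: motion
  alerts:
    retain:
      days: 30     # review items classified as alerts (e.g. person)
  detections:
    retain:
      days: 21     # review items classified as detections (e.g. car)
```

Note that retention is now controlled per review-item category (alerts/detections) rather than per object label.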
-
Feel like I'm beating a dead horse, but is there any reason this wouldn't fire? USPS just came by, and this didn't fire at all.
-
Beta Documentation: https://deploy-preview-13787--frigate-docs.netlify.app/
Images
ghcr.io/blakeblackshear/frigate:0.15.0-beta7
ghcr.io/blakeblackshear/frigate:0.15.0-beta7-standard-arm64
ghcr.io/blakeblackshear/frigate:0.15.0-beta7-tensorrt
ghcr.io/blakeblackshear/frigate:0.15.0-beta7-tensorrt-jp4
ghcr.io/blakeblackshear/frigate:0.15.0-beta7-tensorrt-jp5
ghcr.io/blakeblackshear/frigate:0.15.0-beta7-rk
ghcr.io/blakeblackshear/frigate:0.15.0-beta7-rocm
ghcr.io/blakeblackshear/frigate:0.15.0-beta7-h8l
Changes since beta 5 and 6
Major Changes for 0.15.0
Breaking Changes
There are several breaking changes in this release. Frigate will attempt to update the configuration automatically, but in some cases manual changes may be required. It is always recommended to back up your current config and your database (the frigate.db file) before upgrading:
- If shm_size is too low, a warning will be printed in the log stating that it needs to be increased.
- The record config has been refactored to allow for direct control of how long alerts and detections are retained. These values will be automatically populated from your current config, but you may want to adjust the values that are set after updating. See the updated docs here and ensure your updated config retains the footage you want it to.
- Users may need to update the hwaccel preset they are using (preset-vaapi may now need to be preset-intel-qsv-h264 or preset-intel-qsv-h265) if camera feeds are not functioning correctly after upgrading. This may need to be adjusted on a per-camera basis. If a qsv preset is not working properly, you may still need to use a preset-vaapi preset or revert to the previous ffmpeg version as described below.
- To revert to the previous ffmpeg version, set path: "5.0" in your ffmpeg: config entry. For example:
    ffmpeg:
      path: "5.0"
- ffmpeg is no longer part of $PATH. In most cases this is handled automatically. Users using exec streams will need to add the full path for ffmpeg, which in most cases will be /usr/lib/ffmpeg/7.0/bin/ffmpeg. If you have bin: ffmpeg defined, it needs to be removed.
- The model config under a detector has been simplified to just model_path as a string value. This change will be handled automatically through config migration.
Explore
The new Explore pane in Frigate 0.15 makes it easy to explore every object tracked by Frigate. It offers a variety of filters and supports keyword and phrase-based text search, searching for similar images, and searching through descriptive text generated by AI models.
The default Explore pane shows a summary of your most recent tracked objects organized by label. Clicking the small arrow icon at the end of the list will bring you to an infinitely scrolling grid view. The grid view can also be set as the default by changing the view type from the Settings button in the top right corner of the pane.
The Explore pane also serves as the new way to submit images to Frigate+. Filters can be applied to only display tracked objects with snapshots that have not been submitted to Frigate+. The left/right arrow keys on the keyboard allow quick navigation between tracked object snapshots when looking at the Tracked Object Details pane from the grid view.
AI/ML Search
Frigate 0.15 introduces two powerful search features: Semantic Search and GenAI Search. Semantic Search can be enabled on its own, while GenAI Search works in addition to Semantic Search.
Semantic Search
Semantic Search uses a CLIP model to generate embeddings (numerical representations of images) for the thumbnails of your tracked objects, enabling searches based on text descriptions or visual similarity. This is all done locally.
For instance, if Frigate detects and tracks a car, you can use similarity search to see other instances where Frigate detected and tracked that same car. You can also quickly search your tracked objects using an "image caption" approach. Searching for "red car driving on a residential street" or "person in a blue shirt walking on the sidewalk at dawn" or even "a person wearing a black t-shirt with the word 'SPORT' on it" will produce some stunning results.
Semantic Search works by running an AI model locally on your system. Small or underpowered systems like a Raspberry Pi will not run Semantic Search reliably, or at all. A dedicated GPU and 16GB of RAM are recommended for best performance.
See the Semantic Search docs for system requirements, setup instructions, and usage tips.
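Under the hood, "similar images" search boils down to comparing embedding vectors. Below is a minimal illustration (not Frigate's actual code) of cosine similarity, the standard metric for comparing CLIP-style embeddings; the two-dimensional vectors are toy values for demonstration.

```python
import math

def cosine_similarity(a, b):
    """Return the cosine similarity of two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Thumbnails whose embeddings score closest to the query embedding rank first.
query = [1.0, 0.0]
scores = {
    "red_car": cosine_similarity([0.9, 0.1], query),
    "person": cosine_similarity([0.1, 0.9], query),
}
best = max(scores, key=scores.get)  # -> "red_car"
```

Real embeddings have hundreds of dimensions, but the ranking principle is the same.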
Generative AI
GenAI Search employs generative AI models to create descriptive text for the thumbnails of your tracked objects; these descriptions are stored in the Frigate database to enhance future searches. Supported providers include Google Gemini, Ollama, and OpenAI, so you can choose whether to send data to the cloud or use a locally hosted provider.
See the GenAI docs for setup instructions and use case suggestions.
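As a sketch, enabling GenAI with a locally hosted Ollama provider might look roughly like this; the URL and model name here are just example values, so check the GenAI docs for the current options.

```yaml
genai:
  enabled: true
  provider: ollama
  base_url: http://localhost:11434   # example: a local Ollama instance
  model: llava                       # example: a vision-capable model
```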
Improved Tools for Debugging
Review Item Details Pane
A new Review Item Details pane can be viewed by clicking / tapping on the gray chip on a review item in the Review pane. This shows more information about the review item as well as thumbnails or snapshots for individual objects (if enabled). The pane also provides links to share the review item, download it, submit images to Frigate+, view object lifecycles, and more.
Object Lifecycle Pane
The Recordings Timeline from Frigate 0.13 has been improved upon and returns in 0.15 as the Object Lifecycle, viewable in the Review Details pane as well as the new Explore page. The new pane shows the significant moments during an object's lifecycle: when it was first seen, when it entered a zone, when it became stationary, etc. It also provides information about the object's area and size ratio to assist in configuring Frigate to tune out false positives.
Native Notifications
Frigate now supports notifications using the WebPush protocol. This allows Frigate to deliver timely, secure notifications to devices that have registered to receive them in the Frigate settings. Currently, notifications are delivered for all review items marked as alerts. More options for native notifications will be supported in the future.
See the notifications docs.
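Turning notifications on is a small config change plus registering each device in the Frigate UI; a sketch, with a placeholder contact address (per the notifications docs, the email is used as the WebPush/VAPID contact):

```yaml
notifications:
  enabled: true
  email: "admin@example.com"   # placeholder: used as the WebPush contact
```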
New Object Detectors
ONNX
ONNX is an open model standard which allows a single model format to run on different types of GPUs. The default, tensorrt, and rocm Frigate build variants include GPU support for efficient object detection via ONNX models, simplifying configuration and supporting more models. There are no ONNX object detection models included by default.
AMD MiGraphX
Support has been added for AMD GPUs via ROCm and MiGraphX. Currently there is no default included model for this detector.
Hailo-8
Support has been added for the Hailo-8 and Hailo-8L hardware for object detection on both arm64 and amd64 platforms.
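The new detectors above are enabled through the detectors config. A minimal sketch for the ONNX variant is below; since no ONNX model is bundled, the model path is a hypothetical bring-your-own file, and the exact model parameters depend on the model you supply.

```yaml
detectors:
  onnx:
    type: onnx

model:
  path: /config/model_cache/yolo_nas_s.onnx   # hypothetical: bring your own ONNX model
```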
Other UI Changes
Other Backend Changes
This discussion was created from the release 0.15.0 Beta 6.