CF for VMs supports various log types, which in practice are the app logs themselves combined with the logs written by the platform components related to an app. Examples are access logs, staging logs, app restarts, rescheduling and similar. All these logs are stored in Log Cache and are available through the Log Cache API. It would be nice to have all these app and app-lifecycle logs available, but we need to figure out how and where to get them from.
I've tested and compared what kind of logs are printed by cf logs in CF-for-VMs and in Korifi during/after different cf lifecycle events. Note: cf logs effectively calls the Log Cache v1/read API for the given app with a limit=200 parameter.
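Just to make the shape of that call concrete, here is a minimal sketch of the request cf logs effectively issues (the host name and app GUID are placeholders; a real request also needs an OAuth token header):

```go
// Sketch of the Log Cache v1 read call that cf logs relies on.
// Host and GUID are placeholders, not real values.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	appGUID := "my-app-guid" // placeholder
	url := fmt.Sprintf("https://log-cache.example.com/api/v1/read/%s?limit=200", appGUID)

	resp, err := http.Get(url) // real requests also need an Authorization header
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // JSON along the lines of {"envelopes": {"batch": [...]}}
}
```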
During my tests so far, the cf push output in Korifi is pretty similar, if not identical, to the one on CF-for-VMs. For the other commands like cf restage and cf restart, Korifi only outputs the app logs; the other platform-related logs have to be fetched from the pod events (kubectl get events filtered for the pod). In Korifi we would effectively have to combine data from various places to get a cf logs output as close as possible to what we have on CF-for-VMs.
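As a rough sketch (hypothetical names, assuming client-go), "combining data from various places" for one app instance could look like this: container stdout/stderr plus the pod's lifecycle events, which is what `cf logs` and `kubectl get events` give us today in two separate steps.

```go
// Rough sketch: merge container logs and pod events for one app instance.
// Function and package names are illustrative, not Korifi's actual code.
package logs

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func CollectInstanceLogs(ctx context.Context, cs kubernetes.Interface, ns, podName string) ([]string, error) {
	var out []string

	// App logs: the pod's container stdout/stderr tail.
	tail := int64(200)
	raw, err := cs.CoreV1().Pods(ns).GetLogs(podName, &corev1.PodLogOptions{TailLines: &tail}).DoRaw(ctx)
	if err != nil {
		return nil, err
	}
	out = append(out, string(raw))

	// Platform logs: pod lifecycle events (restarts, rescheduling, probes, ...).
	events, err := cs.CoreV1().Events(ns).List(ctx, metav1.ListOptions{
		FieldSelector: fmt.Sprintf("involvedObject.name=%s", podName),
	})
	if err != nil {
		return nil, err
	}
	for _, e := range events.Items {
		out = append(out, fmt.Sprintf("[%s] %s: %s", e.LastTimestamp.Time, e.Reason, e.Message))
	}

	return out, nil
}
```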
The other aspect to think about is that CF for VMs follows the Loggregator API format and its observability metadata; if we want to keep the same format, we need to decide where and how to get that data from.
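For reference, this is roughly the envelope shape we would need to populate if we keep that format; the field names follow the Loggregator v2 envelope as I remember it, so treat this as an approximation rather than the canonical definition:

```go
// Approximate shape of a Loggregator v2 log envelope (field names from memory).
package envelope

type Envelope struct {
	Timestamp  int64             `json:"timestamp"`   // nanoseconds since epoch
	SourceID   string            `json:"source_id"`   // app GUID in CF terms
	InstanceID string            `json:"instance_id"` // app instance index / pod
	Tags       map[string]string `json:"tags"`        // e.g. source_type
	Log        *Log              `json:"log,omitempty"`
}

type Log struct {
	Payload []byte `json:"payload"`
	Type    string `json:"type"` // OUT or ERR
}
```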
Regarding the api/v1/query endpoint: it is a Prometheus-compatible API, so if we can make the metrics available via Prometheus, we only need to map the API endpoint to it.
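In the simplest case that mapping could just be a proxy; here is a minimal sketch assuming a Prometheus-compatible server at a placeholder in-cluster address (auth, source-id rewriting etc. left out):

```go
// Minimal sketch: forward /api/v1/query to a Prometheus-compatible backend.
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	prom, _ := url.Parse("http://prometheus.metrics.svc:9090") // assumed address
	proxy := httputil.NewSingleHostReverseProxy(prom)

	mux := http.NewServeMux()
	mux.Handle("/api/v1/query", proxy) // Prometheus serves the same path
	http.ListenAndServe(":8080", mux)
}
```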
I guess /api/v1/meta would be out of scope if we simply read the logs from the k8s API server. We could add something there later if needed.
One important thing to think about is how to collect and merge the logs and metrics into a single output (API) in case an app has multiple instances (a workload with multiple pods).
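The core of that problem is just ordering: entries collected per pod have to be merged into one timestamp-ordered stream before they can be served through a single read endpoint. A tiny sketch with illustrative names:

```go
// Sketch: merge per-instance log entries into one timestamp-ordered stream.
package merge

import "sort"

type Entry struct {
	Timestamp int64  // nanoseconds, like the envelope timestamp
	Instance  string // pod / instance the line came from
	Line      string
}

func MergeInstances(perInstance map[string][]Entry) []Entry {
	var all []Entry
	for _, entries := range perInstance {
		all = append(all, entries...)
	}
	sort.Slice(all, func(i, j int) bool { return all[i].Timestamp < all[j].Timestamp })
	return all
}
```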
For the implementation of the API we could take a look at log-cache-release and the log-cache cf CLI plugin. It will be interesting to check and decide whether a simple API facade on top of the k8s API would be enough, or whether we need a central cache component that collects everything and serves the data via the Log Cache API.
btw. I'm one of the maintainers of Loggregator, CF's logging and metrics stack, and I want to help ;)
Dev Notes
- api/v1/read log cache endpoint in order to get logs for an app (see the facade sketch below)
- api/v1/query in the log cache handler, and make the process stats repository loop back to this endpoint
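For the first note, a very rough facade sketch of what an /api/v1/read/{guid} handler could look like: it answers in Log Cache's response shape but reads from whatever repository we back it with (pod logs, events, or a future cache). The LogRepo interface, the routing pattern and the response field names are assumptions, not Korifi's actual types.

```go
// Facade sketch for /api/v1/read/{guid} (hypothetical types, Go 1.22+ ServeMux).
package handlers

import (
	"encoding/json"
	"net/http"
	"strconv"
)

type LogRepo interface {
	GetAppLogs(appGUID string, limit int) ([]map[string]any, error)
}

type LogCacheHandler struct {
	repo LogRepo
}

func (h LogCacheHandler) Read(w http.ResponseWriter, r *http.Request) {
	guid := r.PathValue("guid") // Go 1.22+ path wildcard

	limit := 100
	if l, err := strconv.Atoi(r.URL.Query().Get("limit")); err == nil {
		limit = l
	}

	envs, err := h.repo.GetAppLogs(guid, limit)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	// Log Cache wraps results in {"envelopes": {"batch": [...]}}.
	json.NewEncoder(w).Encode(map[string]any{
		"envelopes": map[string]any{"batch": envs},
	})
}

func Register(mux *http.ServeMux, h LogCacheHandler) {
	mux.HandleFunc("GET /api/v1/read/{guid}", h.Read)
}
```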