I found the following answer from a colleague:
If your application is a high-volume production system that generates logs exceeding several hundred KiB/s, you should consider switching to another logging method.
For example, Google Kubernetes Engine (GKE) provides a default log throughput of at least 100 KiB/s per node, which can scale up to 10 MiB/s on underutilized nodes with sufficient CPU resources. However, at higher throughputs there is a risk of log loss.
Check your log throughput in the Metrics Explorer; based on that, a rough recommendation is:
| Log Throughput | Recommended Approach |
|---|---|
| < 100 KiB/s per node | Console logging (`ConsoleAppender`) |
| 100 KiB/s – 500 KiB/s | Buffered/asynchronous file-based logging |
| > 500 KiB/s | Direct API integration or optimized agents |
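To illustrate the middle row, here is a minimal sketch of buffered/asynchronous logging in plain Java: callers enqueue messages into a bounded in-memory buffer without blocking on I/O, and a background thread drains the buffer to the sink. The class and method names here are illustrative, not a real logging-framework API; in practice you would use your framework's async appender (e.g. Logback's `AsyncAppender`) rather than hand-rolling this.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only: a bounded queue decouples application
// threads from log I/O. When the queue is full, messages are dropped
// (one possible back-pressure policy; blocking is the other).
public class AsyncBufferedLogger implements AutoCloseable {
    private final BlockingQueue<String> buffer;
    private final Thread writer;
    private final Appendable sink;
    private volatile boolean running = true;

    public AsyncBufferedLogger(Appendable sink, int capacity) {
        this.sink = sink;
        this.buffer = new ArrayBlockingQueue<>(capacity);
        this.writer = new Thread(this::drain, "log-writer");
        this.writer.start();
    }

    /** Enqueue a log line; returns false if the buffer is full and the line was dropped. */
    public boolean log(String message) {
        return buffer.offer(message);
    }

    // Background thread: drain the queue and write each line to the sink.
    private void drain() {
        try {
            while (running || !buffer.isEmpty()) {
                String msg = buffer.poll(100, TimeUnit.MILLISECONDS);
                if (msg != null) {
                    sink.append(msg).append('\n');
                }
            }
        } catch (Exception e) {
            Thread.currentThread().interrupt();
        }
    }

    /** Flush remaining messages and stop the writer thread. */
    @Override
    public void close() throws InterruptedException {
        running = false;
        writer.join();
    }

    public static void main(String[] args) throws Exception {
        StringBuilder out = new StringBuilder();
        try (AsyncBufferedLogger logger = new AsyncBufferedLogger(out, 1024)) {
            logger.log("request handled");
            logger.log("cache miss");
        }
        System.out.print(out);
    }
}
```

The design point is that the application thread only pays the cost of an enqueue, so bursts above the node's sustained log throughput are absorbed by the buffer instead of stalling request handling; sizing the buffer and choosing a drop-vs-block policy are the main tuning knobs.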