Re: Kubernetes, cgroups v2 and OOM killer - how to avoid? - Mailing list pgsql-general
| From | Joe Conway |
|---|---|
| Subject | Re: Kubernetes, cgroups v2 and OOM killer - how to avoid? |
| Date | |
| Msg-id | 2992983d-b237-4cc0-91d7-e3bc8de25006@joeconway.com |
| In response to | Kubernetes, cgroups v2 and OOM killer - how to avoid? (Ancoron Luciferis <ancoron.luciferis@googlemail.com>) |
| Responses | Postgres_fdw- User Mapping with md5-hashed password |
| List | pgsql-general |
On 4/8/25 13:58, Ancoron Luciferis wrote:
> On 2025-04-07 15:21, Joe Conway wrote:
>> On 4/5/25 07:53, Ancoron Luciferis wrote:
>>> I've been investigating this topic every now and then, but to this day
>>> I have not come to a setup that consistently leads to a PostgreSQL
>>> backend process receiving an allocation error instead of being killed
>>> externally by the OOM killer.
>>>
>>> Why is this a problem for me? Because while applications are accessing
>>> their DBs (multiple services having their own DBs, some high-frequency),
>>> the whole server goes into recovery and kills all backends/connections.
>>>
>>> While my applications are written to tolerate that, it also means that
>>> at that time, esp. for the high-frequency apps, events pile up, which
>>> then leads to a burst as soon as connectivity is restored. This in turn
>>> leads to peaks in resource usage in other places (event store,
>>> in-memory buffers from apps, ...), which sometimes leads to a series of
>>> OOM killer events being triggered, just because some analytics query
>>> went overboard.
>>>
>>> Ideally, I'd find a configuration that only terminates one backend but
>>> leaves the others working.
>>>
>>> I am wondering whether there is any way to receive a real ENOMEM inside
>>> a cgroup as soon as I try to allocate beyond its memory.max, instead of
>>> relying on the OOM killer.
>>>
>>> I know the recommendation is to set vm.overcommit_memory to 2, but
>>> that affects all workloads on the host, including critical infra like
>>> the kubelet, CNI, CSI, monitoring, ...
>>>
>>> I have already gone through and tested the obvious:
>>>
>>> https://www.postgresql.org/docs/current/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT
>>
>> Importantly, vm.overcommit_memory set to 2 only matters when memory is
>> constrained at the host level.
>>
>> As soon as you are running in a cgroup with a hard memory limit,
>> vm.overcommit_memory is irrelevant.
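(To make that concrete: inside the container the hard limit is just a file, so you can check it directly. A minimal sketch, assuming cgroup v2 is mounted at /sys/fs/cgroup; the `headroom` helper is only an illustration, not a real tool:)

```shell
#!/bin/sh
# Inside the pod, the hard limit is visible no matter what the host
# sysctls say. These paths assume a cgroup v2 (unified) hierarchy:
#   cat /sys/fs/cgroup/memory.max      -> hard limit in bytes, or "max"
#   cat /sys/fs/cgroup/memory.current  -> current usage in bytes

# headroom LIMIT CURRENT -- bytes left before the OOM killer fires
# (illustrative helper; LIMIT may be the literal string "max")
headroom() {
    if [ "$1" = "max" ]; then
        echo "unlimited"
    else
        echo $(( $1 - $2 ))
    fi
}

headroom max 536870912          # -> unlimited
headroom 1073741824 536870912   # -> 536870912
```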
>> You can have terabytes of free memory on the host, but if cgroup memory
>> usage exceeds memory.limit (cgv1) or memory.max (cgv2), the OOM killer
>> will pick the process in the cgroup with the highest oom_score and
>> whack it.
>>
>> Unfortunately there is no equivalent to vm.overcommit_memory within the
>> cgroup.
>>
>>> And yes, I know that Linux cgroups v2 memory.max is not an actual hard
>>> limit:
>>>
>>> https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html#memory-interface-files
>>
>> Read that again -- memory.max *is* a hard limit (same as memory.limit
>> in cgv1):
>>
>> "memory.max
>>
>> A read-write single value file which exists on non-root cgroups. The
>> default is “max”.
>>
>> Memory usage hard limit. This is the main mechanism to limit memory
>> usage of a cgroup. If a cgroup’s memory usage reaches this limit and
>> can’t be reduced, the OOM killer is invoked in the cgroup."
>
> Yes, I know it says "hard limit", but then any app can still go beyond
> it (might just be on me here to assume that any "hard limit" implies an
> actual error when trying to go beyond). The OOM killer then will kick
> in eventually, but not in any way that any process inside the cgroup
> could prevent. So there is no signal the app could react to saying
> "hey, you just went beyond what you're allowed, please adjust before I
> kill you".

No, that really is a hard limit, and the OOM killer is *really* fast.
Once the limit is hit there is no time to intervene.

The soft limit (memory.high) is the one you want for that. Or you can
monitor PSI and try to anticipate problems, but that is difficult at
best. If you want to see how that is done, check out senpai:

https://github.com/facebookincubator/senpai/blob/main/README.md

>> If you want a soft limit, use memory.high:
>>
>> "memory.high
>>
>> A read-write single value file which exists on non-root cgroups. The
>> default is “max”.
>>
>> Memory usage throttle limit.
>> If a cgroup’s usage goes over the high
>> boundary, the processes of the cgroup are throttled and put under
>> heavy reclaim pressure.
>>
>> Going over the high limit never invokes the OOM killer and under
>> extreme conditions the limit may be breached. The high limit should
>> be used in scenarios where an external process monitors the limited
>> cgroup to alleviate heavy reclaim pressure."
>>
>> You want to be using memory.high rather than memory.max.
>
> Hm, so solely relying on reclaim? I think that'll just get the whole
> cgroup into ultra-slow mode and would not actually prevent too much
> memory allocation. While this may work out just fine for the PostgreSQL
> instance, it'll for sure have effects on the other workloads on the
> same node (which I apparently have: more PG instances).
>
> Apparently, I also don't see a way to even try this out in a Kubernetes
> environment, since there doesn't seem to be a way to set this field
> through some workload manifest field.

Yeah, that part I have no idea about. I quit looking at kubernetes
related things about 3 years ago. Although, this link seems to indicate
there is a way, related to how it does QoS:

https://kubernetes.io/blog/2023/05/05/qos-memory-resources/#:~:text=memory.high%20formula

>> Also, I don't know what kubernetes recommends these days, but it used
>> to require you to disable swap. In more recent versions of kubernetes
>> you are able to run with swap enabled, but I have no idea what the
>> default is -- make sure you run with swap enabled.
>
> Yes, this is what I wanna try out next.

Seriously -- this is *way* more than half the battle. If you do nothing
else, be sure to do this...

-- 
Joe Conway
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com
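For completeness: memory.high and the PSI stats discussed above are plain files under the cgroup directory, so they are easy to experiment with outside of kubernetes first. A minimal sketch -- the cgroup path `pgtest`, the 6 GiB value, and the `psi_avg10` helper are all assumptions for illustration:

```shell
#!/bin/sh
# Sketch: set a throttle limit on a (hypothetical) cgroup and watch PSI.
# Assumes a writable cgroup v2 dir; values are examples, not recommendations:
#   echo $((6 * 1024*1024*1024)) > /sys/fs/cgroup/pgtest/memory.high
#   cat /sys/fs/cgroup/pgtest/memory.pressure
# memory.pressure lines look like:
#   some avg10=1.23 avg60=0.45 avg300=0.10 total=12345

# psi_avg10 LINE -- extract the avg10 value from a PSI stat line
# (illustrative helper, e.g. to feed an alert threshold)
psi_avg10() {
    printf '%s\n' "$1" | sed -n 's/.*avg10=\([0-9.]*\).*/\1/p'
}

psi_avg10 "some avg10=1.23 avg60=0.45 avg300=0.10 total=12345"   # -> 1.23
```

Rising avg10/avg60 on the "some" line is the early-warning signal an external monitor (like senpai) reacts to before reclaim pressure becomes a stall.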