>From: Vladimir Sitnikov [mailto:sitnikov.vladimir@gmail.com]
>
>Robert> This email thread is really short on clear demonstrations that X
>Robert> or Y is useful.
>
>It is useful when the whole database does **not** crash, isn't it?
>
>Case A (==current PostgreSQL mode): syscache grows, then the OOM killer chimes in, kills
>the database process, and that leads to a complete cluster failure (all other PG
>processes terminate themselves).
>
>Case B (==limit syscache to 10MiB or whatever, as Tsunakawa, Takayuki
>asks): a single ill-behaved process works a bit slower and/or consumes more CPU
>than the other ones. The whole DB is still alive.
>
>I'm quite sure "case B" is much better for the end users and for the database
>administrators.
>
>So, +1 to Tsunakawa, Takayuki: it would be great if there were a way to limit the
>memory consumption of a single process (e.g. syscache, work_mem, etc.).
>
>Robert> However, memory usage is quite unpredictable. It depends on how
>Robert> many backends are active
>
>The number of backends can be limited by ensuring proper limits at the application
>connection pool level and/or pgbouncer and/or things like that.
>
>Robert>how many copies of work_mem and/or maintenance_work_mem are in
>Robert>use
>
>There might be other patches to cap the total use of
>work_mem/maintenance_work_mem.
>
>Robert>I don't think we
>Robert> can say that just imposing a limit on the size of the system
>Robert>caches is going to be enough to reliably prevent an out of
>Robert>memory condition
>
>The fewer possibilities there are for OOM, the better. Quite often it is much better to fail
>a single SQL statement than to kill all the DB processes.
Yeah, I agree. This limit would be useful for such an extreme situation.
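
As an aside, on the point about capping the number of backends at the pooler: a minimal pgbouncer sketch might look like the following. All names and values here are illustrative assumptions, not a recommendation.

```ini
; pgbouncer.ini -- illustrative values only
[databases]
; "mydb" and the host/port are placeholders for your environment
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction
max_client_conn = 200    ; cap on incoming client connections
default_pool_size = 20   ; cap on server (backend) connections per db/user pair
```

With such a cap in place, the worst-case number of PostgreSQL backends (and hence the aggregate syscache/work_mem footprint) becomes bounded, which complements a per-process syscache limit.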
Regards,
Takeshi Ideriha