Thread: mem context is not reset between extended stats
Memory allocation appeared to be O(1) WRT the number of statistics objects,
which I did not expect.  This is true in v13 (and probably back to v10).

It seems to work fine to reset the memory context within the loop, so long
as the statslist is allocated in the parent context.

  DROP TABLE t;
  CREATE TABLE t AS SELECT i, i+1 AS a, i+2 AS b, i+3 AS c, i+4 AS d, i+5 AS e
    FROM generate_series(1,99999)i;
  SELECT format('CREATE STATISTICS sta%s (ndistinct) ON a,(1+b),(2+c),(3+d),(4+e) FROM t', a)
    FROM generate_series(1,9)a\gexec
  SET log_statement_stats=on; SET client_min_messages=debug; ANALYZE t;
  => 369432 kB max resident size

  SELECT format('CREATE STATISTICS sta%s (ndistinct) ON a,b,c,d,e FROM t', a)
    FROM generate_series(1,33)a\gexec
  SET log_statement_stats=on; SET client_min_messages=debug; ANALYZE t;
  => 1284368 kB max resident size
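For readers following along, the shape of the proposed fix is roughly the following sketch against PostgreSQL's MemoryContext API (the context name, loop variable, and elided build step are illustrative, not the committed code):

    /* Build each statistics object in a child context that is reset
     * between iterations, so per-object allocations don't accumulate.
     * The statslist itself must be allocated in the parent context,
     * or the reset would destroy it. */
    MemoryContext cxt = AllocSetContextCreate(CurrentMemoryContext,
                                              "building statistics",
                                              ALLOCSET_DEFAULT_SIZES);
    MemoryContext oldcxt = MemoryContextSwitchTo(cxt);

    foreach(lc, statslist)          /* list lives in the parent context */
    {
        StatExtEntry *stat = (StatExtEntry *) lfirst(lc);

        /* ... build ndistinct / dependencies / MCV for this object ... */

        /* release everything allocated while building this object */
        MemoryContextReset(cxt);
    }

    MemoryContextSwitchTo(oldcxt);
    MemoryContextDelete(cxt);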
On 9/15/21 10:09 PM, Justin Pryzby wrote:
> Memory allocation appeared to be O(1) WRT the number of statistics objects,
> which I did not expect.  This is true in v13 (and probably back to v10).
>
> It seems to work fine to reset the memory context within the loop, so long
> as the statslist is allocated in the parent context.

Yeah, and I agree this fix seems reasonable.  Thanks for looking!

In principle we don't expect too many extended statistics on a single table, but building a single statistics object may use quite a bit of memory, so it makes sense to release it early.

But while playing with this a bit more, I discovered a much worse issue. Consider this:

  create table t (a text, b text, c text, d text, e text, f text, g text, h text);

  insert into t select x, x, x, x, x, x, x, x from (
    select md5(mod(i,100)::text) as x from generate_series(1,30000) s(i)) foo;

  create statistics s (dependencies) on a, b, c, d, e, f, g, h from t;

  analyze t;

This ends up eating insane amounts of memory - on my laptop it eats ~2.5GB and then crashes with OOM.  This happens because each call to dependency_degree does build_sorted_items, which detoasts the values.  And resetting the context can't fix that, because this happens while building a single statistics object.

IMHO the right fix is to run dependency_degree in a separate context, and reset it after each dependency.  This releases the detoasted values, which are otherwise hard to deal with.

This does not mean we should not do what your patch does too.  That addresses various other "leaks" (for example MCV calls build_sorted_items too, but only once, so it does not have this same issue).

These issues exist pretty much since PG10, which is where extended stats were introduced, so we'll have to backpatch it.  But there's no rush and I don't want to interfere with rc1 at the moment.

Attached are two patches - 0001 is your patch (seems fine, but I looked only very briefly) and 0002 is the context reset I proposed.
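A rough sketch of that second fix, for readers following along (the generator call and the elided dependency_degree arguments are illustrative; see dependencies.c for the real code):

    /* Run each dependency_degree() computation in a context that is
     * reset after each dependency, so the values detoasted inside
     * build_sorted_items() are released promptly. */
    MemoryContext cxt = AllocSetContextCreate(CurrentMemoryContext,
                                              "dependency degree",
                                              ALLOCSET_DEFAULT_SIZES);

    while ((dependency = DependencyGenerator_next(state)) != NULL)
    {
        MemoryContext oldcxt = MemoryContextSwitchTo(cxt);

        /* calls build_sorted_items(), which detoasts the sampled values */
        degree = dependency_degree(/* arguments elided */);

        MemoryContextSwitchTo(oldcxt);

        /* drop the detoasted values and other per-dependency allocations */
        MemoryContextReset(cxt);
    }

    MemoryContextDelete(cxt);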
regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Tue, Sep 21, 2021 at 02:15:45AM +0200, Tomas Vondra wrote:
> On 9/15/21 10:09 PM, Justin Pryzby wrote:
> > Memory allocation appeared to be O(1) WRT the number of statistics objects,
> > which I did not expect.  This is true in v13 (and probably back to v10).

Of course I meant to say that it's O(N) and not O(1) :)

> In principle we don't expect too many extended statistics on a single table,

Yes, but note that expression statistics make it more reasonable to have multiple extended stats objects.  I noticed this while testing a patch to build (I think) 7 stats objects on each of our current month's partitions. autovacuum was repeatedly killed on this VM after using 2+GB RAM, probably in part because there were multiple autovacuum workers handling the most recent batch of inserted tables.

First, I tried to determine what specifically was leaking so badly, and eventually converged to this patch.  Maybe there are additional subcontexts which would be useful, but the minimum is to reset between objects.

> These issues exist pretty much since PG10, which is where extended stats
> were introduced, so we'll have to backpatch it.  But there's no rush and I
> don't want to interfere with rc1 at the moment.

Ack that.  It'd be *nice* if the fix were included in v14.0, but I don't know the rules about what can change after rc1.

> Attached are two patches - 0001 is your patch (seems fine, but I looked only
> very briefly) and 0002 is the context reset I proposed.

I noticed there seems to be a 3rd patch available, which might either be junk for testing or a cool new feature I'll hear about later ;)

> From 204f4602b218ec13ac1e3fa501a7f94adc8a4ea1 Mon Sep 17 00:00:00 2001
> From: Tomas Vondra <tomas.vondra@postgresql.org>
> Date: Tue, 21 Sep 2021 01:14:11 +0200
> Subject: [PATCH 1/3] reset context

cheers,
-- 
Justin
On 9/21/21 3:37 AM, Justin Pryzby wrote:
> On Tue, Sep 21, 2021 at 02:15:45AM +0200, Tomas Vondra wrote:
>> On 9/15/21 10:09 PM, Justin Pryzby wrote:
>>> Memory allocation appeared to be O(1) WRT the number of statistics objects,
>>> which I did not expect.  This is true in v13 (and probably back to v10).
>
> Of course I meant to say that it's O(N) and not O(1) :)

Sure, I got that ;-)

>> In principle we don't expect too many extended statistics on a single table,
>
> Yes, but note that expression statistics make it more reasonable to have
> multiple extended stats objects.  I noticed this while testing a patch to
> build (I think) 7 stats objects on each of our current month's partitions.
> autovacuum was repeatedly killed on this VM after using 2+GB RAM, probably
> in part because there were multiple autovacuum workers handling the most
> recent batch of inserted tables.
>
> First, I tried to determine what specifically was leaking so badly, and
> eventually converged to this patch.  Maybe there are additional subcontexts
> which would be useful, but the minimum is to reset between objects.

Agreed.  I don't think there's much we could release, given the current design, because we evaluate (and process) all expressions at once.  We might evaluate/process them one by one (and release the memory), but only when no other statistics kinds are requested.  That seems pretty futile.

>> These issues exist pretty much since PG10, which is where extended stats
>> were introduced, so we'll have to backpatch it.  But there's no rush and I
>> don't want to interfere with rc1 at the moment.
>
> Ack that.  It'd be *nice* if the fix were included in v14.0, but I don't
> know the rules about what can change after rc1.

IMO this is a bugfix, and I'll get it into 14.0 (and backpatch).  But I don't want to interfere with the rc1 tagging and release, so I'll do that later this week.
>> Attached are two patches - 0001 is your patch (seems fine, but I looked only
>> very briefly) and 0002 is the context reset I proposed.
>
> I noticed there seems to be a 3rd patch available, which might either be junk
> for testing or a cool new feature I'll hear about later ;)

Haha!  Nope, that was just an experiment with doubling the repalloc() sizes in functional dependencies, instead of growing them in tiny chunks.  But it does not make a measurable difference, so I haven't included it.

regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Hi,

I've pushed both of these patches, with some minor tweaks (freeing the statistics list, and deleting the new context), and backpatched them all the way to 10.

Thanks for the report & patch, Justin!

regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company