I wrote:
> Amit Langote <amitlangote09@gmail.com> writes:
>> On Tue, Dec 24, 2019 at 10:59 AM Amit Langote <amitlangote09@gmail.com> wrote:
>>> Btw, does the memory leakage fix in this patch address any of the
>>> pending concerns that were discussed on the "hyrax vs.
>>> RelationBuildPartitionDesc" thread earlier this year[1]?
>>> [1] https://www.postgresql.org/message-id/flat/3800.1560366716%40sss.pgh.pa.us#092b6b4f6bf75d2f3f90ef6a3b3eab5b
>> I thought about this a little and I think it *does* address the main
>> complaint in the above thread.
It occurred to me to also recheck the original complaint in that thread,
which was poor behavior in CLOBBER_CACHE_ALWAYS builds. I didn't have
the patience to run a full CCA test, but I did run update.sql, which
we previously established was sufficient to show the problem. There's
no apparent memory bloat, either with HEAD or with the patch. I also
see the runtime (for update.sql on its own) dropping from about
474 sec in HEAD to 457 sec with the patch, or roughly a 3.5% improvement.
So that indicates we're actually saving a noticeable amount of work,
not just postponing it, at least under CCA scenarios where relcache
entries get flushed a lot.
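
(For anyone wanting to reproduce this: a CCA build is just an ordinary
build with the cache-clobber symbol defined at compile time, e.g. via
CPPFLAGS or in src/include/pg_config_manual.h. That's the long-standing
mechanism, nothing specific to this patch.)

    /*
     * Force a cache flush at each plausible opportunity, to expose
     * cache-flush hazards.  Expect the regression tests to run orders
     * of magnitude slower with this defined.
     */
    #define CLOBBER_CACHE_ALWAYS
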
I also tried to measure update.sql's runtime in a regular debug build
(not CCA). I get pretty repeatable results of 279 ms on HEAD vs 273 ms
with the patch, or about a 2% overall savings. That's at the very limit
of what I'd consider a reproducible difference, but it still seems to
be real. So that's evidence that forcing the partition data to be
loaded immediately rather than on-demand is a loser on performance
grounds, quite apart from the recursion concerns that prompted this
patch.
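
To be concrete about what "on-demand" means here, the shape of the
change is just the classic lazy-initialization pattern. Here's a
self-contained sketch (the names mimic relcache style, but they're
illustrative stand-ins, not the patch's actual code):

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative stand-ins for a relcache entry and its partition
     * data; these are not the real PostgreSQL definitions. */
    typedef struct PartitionDescData { int nparts; } PartitionDescData;
    typedef struct RelationData
    {
        PartitionDescData *rd_partdesc;  /* NULL until first requested */
    } RelationData;

    /* The expensive work (catalog scans and so on) happens here, and
     * only when somebody actually asks for the data. */
    static PartitionDescData *
    build_partition_desc(void)
    {
        PartitionDescData *pd = malloc(sizeof(PartitionDescData));
        pd->nparts = 42;        /* placeholder for real catalog lookups */
        return pd;
    }

    /* Lazy accessor: entries that get flushed before anyone needs the
     * partition data never pay for building it, which is presumably
     * where the CCA-case savings come from. */
    static PartitionDescData *
    get_partition_desc(RelationData *rel)
    {
        if (rel->rd_partdesc == NULL)
            rel->rd_partdesc = build_partition_desc();
        return rel->rd_partdesc;
    }

    int
    main(void)
    {
        RelationData rel = {NULL};
        printf("nparts = %d\n", get_partition_desc(&rel)->nparts);
        free(rel.rd_partdesc);
        return 0;
    }

The flip side is that the first access after a flush pays the full
build cost, which is why it's reassuring that the non-CCA numbers
above still show a net win rather than merely deferred work.
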
That naturally leads one to wonder whether forcing other relcache
substructures (triggers, rules, etc.) to be loaded immediately isn't
a loser as well. I still feel we're overdue to redesign how all of
this works and come up with a more uniform, less fragile/ad-hoc
approach. But I don't have the time or interest to do that right now.
Anyway, I've run out of reasons not to commit this patch, so I'll
go do that.
regards, tom lane