Re: PostgreSQL (9.3 and 9.6) eats all memory when using many tables - Mailing list pgsql-bugs

From: Tom Lane
Subject: Re: PostgreSQL (9.3 and 9.6) eats all memory when using many tables
Msg-id: 30233.1465842485@sss.pgh.pa.us
In response to: Re: PostgreSQL (9.3 and 9.6) eats all memory when using many tables (Jeff Janes <jeff.janes@gmail.com>)
List: pgsql-bugs
Jeff Janes <jeff.janes@gmail.com> writes:
> On Mon, Jun 13, 2016 at 6:36 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> While we have no direct experience with limiting the plancache size,
>> I'd expect a pretty similar issue there: a limit will either do nothing
>> except impose substantial bookkeeping overhead (if it's more than the
>> number of plans in your working set) or it will result in a performance
>> disaster from cache thrashing (if it's less).

> We don't need to keep a LRU list or do a clock sweep or anything.  We
> could go really simple and just toss the whole thing into /dev/null
> when it gets too large, and start over.

Color me skeptical as heck.  To the extent that you do have locality
of reference, this would piss it away.
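
To make that objection concrete, here is a minimal sketch in C of the
"toss the whole thing when it gets too large" policy under discussion.
This is not PostgreSQL code; names such as plan_cache_insert and
MAX_CACHED_PLANS are hypothetical.  What it illustrates is that once the
cap is hit, hot entries are discarded along with cold ones, so whatever
locality of reference existed has to be rebuilt from scratch.

#include <stdlib.h>
#include <string.h>

#define MAX_CACHED_PLANS 1000       /* hypothetical cap */

typedef struct CachedEntry
{
    char   *query_key;              /* lookup key, e.g. the query text */
    void   *plan;                   /* the cached plan tree */
    struct CachedEntry *next;
} CachedEntry;

static CachedEntry *cache_head = NULL;
static int          cache_count = 0;

static void
plan_cache_flush_all(void)
{
    /* Toss the whole thing: hot and cold entries alike. */
    CachedEntry *e = cache_head;

    while (e != NULL)
    {
        CachedEntry *next = e->next;

        free(e->query_key);
        free(e->plan);              /* assume the plan is one allocation */
        free(e);
        e = next;
    }
    cache_head = NULL;
    cache_count = 0;
}

static void
plan_cache_insert(const char *query_key, void *plan)
{
    CachedEntry *e;

    /* No LRU list, no clock sweep: just start over when we hit the cap. */
    if (cache_count >= MAX_CACHED_PLANS)
        plan_cache_flush_all();

    e = malloc(sizeof(CachedEntry));
    e->query_key = strdup(query_key);
    e->plan = plan;
    e->next = cache_head;
    cache_head = e;
    cache_count++;
}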

Also, you can't just flush the plan cache altogether, not for PREPARE'd
statements and not for internally-prepared ones either, because there
are references being held for both of those.  You could drop the plan
tree, certainly, but that only goes so far in terms of reducing the
amount of space needed.  Dropping more than that risks subtle semantic
changes, and would break API expectations of external PLs too.
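
As a rough illustration of that distinction (hypothetical names again, not
the actual plancache.c structures), the sketch below shows an entry that a
PREPARE'd or internally-prepared statement pins via a reference count: the
entry itself cannot be freed while references are held, but its plan tree
can be dropped and rebuilt from the retained query source on next use.

#include <stdlib.h>

typedef struct CachedStmt
{
    char   *query_string;       /* enough to rebuild the plan later */
    void   *plan_tree;          /* NULL once the plan has been dropped */
    int     refcount;           /* held by PREPARE'd / internal statements */
} CachedStmt;

/* Drop only the (potentially large) plan tree; keep the entry itself. */
static void
drop_plan_tree(CachedStmt *stmt)
{
    free(stmt->plan_tree);
    stmt->plan_tree = NULL;     /* will be replanned on next execution */
}

/* The entry can only be freed once nothing references it anymore. */
static void
release_cached_stmt(CachedStmt *stmt)
{
    if (--stmt->refcount == 0)
    {
        free(stmt->query_string);
        free(stmt->plan_tree);
        free(stmt);
    }
}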

            regards, tom lane
