On 18.06.2013 15:48, Heikki Linnakangas wrote:
> On 18.06.2013 14:27, MauMau wrote:
>> The cause of the memory increase appears to be CacheMemoryContext. When
>> I attached to postgres with gdb and ran "call
>> MemoryContextStats(TopMemoryContext)" several times, the size of
>> CacheMemoryContext kept increasing.
>
> Hmm. I could repeat this, and it seems that the catcache for
> pg_statistic accumulates negative cache entries. Those slowly take up
> the memory.
Digging a bit deeper, this is a rather common problem with negative
catcache entries. In general, nothing stops you from polluting the cache
with as many negative entries as you like: just run "SELECT * FROM
table_that_doesnt_exist" with as many distinct non-existent table names
as you want. Those entries are useful, at least in theory; they speed up
throwing the error the next time you try to query the same non-existent
table.
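For instance, something like this (the table names are made up) leaves
one negative catcache entry behind per distinct name:

SELECT * FROM no_such_table_1; -- ERROR: relation does not exist
SELECT * FROM no_such_table_2; -- each distinct name adds another
SELECT * FROM no_such_table_3; -- negative entry that nothing evicts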
But there is a crucial difference in this case: the system created a
negative cache entry for the pg_statistic row of the table, but once the
relation is dropped, that cache entry, keyed on the relation's OID, is
totally useless. It should be removed.
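To illustrate, I believe a sequence like this reproduces the leak (the
table and column names are made up; the WHERE clause is there to make
the planner probe pg_statistic for column statistics):

CREATE TABLE leaky (i int);
SELECT * FROM leaky WHERE i = 0; -- planner probes pg_statistic, finds
                                 -- nothing, caches a negative entry
DROP TABLE leaky;                -- the entry, keyed on the table's OID,
                                 -- is now orphaned

Repeat that, and because each CREATE TABLE assigns a new OID, every
iteration adds a fresh negative entry that can never be matched again.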
We have this problem with a few other catcaches too, which have what is
effectively a foreign-key relationship with another catalog. For
example, the RELNAMENSP catcache is keyed on (pg_class.relname,
pg_class.relnamespace), yet its negative entries are not cleaned up when
the schema is dropped. If you execute this repeatedly in a session:
CREATE SCHEMA foo;
SELECT * FROM foo.invalid; -- throws an error
DROP SCHEMA foo;
it will leak similarly to the original test case, but this time the leak
is into the RELNAMENSP catcache.
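If you want to watch it happen, here's a quick driver for the above
(just a sketch; the loop count is arbitrary, and I'm assuming an error
trapped in a subtransaction still leaves the negative entry behind,
since nothing generates an invalidation for it):

DO $$
BEGIN
  FOR i IN 1 .. 100000 LOOP
    CREATE SCHEMA foo;
    BEGIN
      PERFORM 1 FROM foo.invalid; -- fails, leaving a negative entry
    EXCEPTION WHEN undefined_table THEN
      NULL; -- swallow the error and keep looping
    END;
    DROP SCHEMA foo;
  END LOOP;
END;
$$;

Running MemoryContextStats(TopMemoryContext) from gdb before and after,
as in the original report, should show CacheMemoryContext growing.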
To fix that, I think we'll need to teach the catalog cache about the
relationships between the caches, so that dropping the referenced object
also flushes any negative entries that depend on it.
- Heikki