On 13.06.2023 10:55 AM, Kyotaro Horiguchi wrote:
> At Tue, 13 Jun 2023 09:55:36 +0300, Konstantin Knizhnik <knizhnik@garret.ru> wrote in
>> A Postgres backend is "thick" not because of a large number of local
>> variables, but because of local caches: the catalog cache, relation
>> cache, prepared statements cache, ...
>> If they are not rewritten, then a backend may still consume a lot of
>> memory even if it is a thread rather than a process.
>> But threads simplify development of global caches, although it can
>> also be done with DSM.
> With the process model, that local stuff is flushed out upon
> reconnection. If we switch to the thread model, we will need an
> expiration mechanism for it.
We already have an invalidation mechanism. It will also be used in the
case of a shared cache, but we will not need to send invalidations to all
backends. I do not completely understand your point.
Right now the caches (for example the catalog cache) are not limited at
all. So if you have a very large database schema, this cache will consume
a lot of memory (multiplied by the number of backends). The fact that it
is flushed out upon reconnection does not help much: what if backends
never disconnect?
With a shared cache we will have to address the same question: should
this cache be limited (with some replacement discipline such as LRU), or
unlimited? For a shared cache the size is less critical, because it is
not multiplied by the number of backends. So we can assume that the
catalog and relation caches should always fit in memory (otherwise
significant rewriting of all Postgres code working with relations would
be needed).
But Postgres also has temporary tables, for which we may need a
backend-local cache in any case. The global temp table patch was not
approved, so we still have to deal with these awful temp tables.
In any case, I do not understand why we need an expiration mechanism for
these caches. If a relation exists, then information about it should be
kept in the cache as long as the relation is alive. If there is not
enough memory to cache information about all relations, then we may need
some replacement algorithm. But I do not think it makes sense to remove
an item from the cache just because it is too old.