Horiguchi-san, Bruce, all,
I hesitate to say this, but I think there are the following problems with the proposed approach:
1) Tries to prune the catalog tuples only when the hash table is about to expand.
If no tuple is found to be eligible for eviction at first and the hash table expands, it becomes difficult for unnecessary or less frequently accessed tuples to be removed, because the interval until the next hash table expansion grows longer and longer -- the hash table doubles in size each time.
For example, if many transactions that create and drop temporary tables and indexes are executed in a short period, the hash table could become large quickly.
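To illustrate the arithmetic with a toy model (not actual catcache code; the initial size and fill factor here are invented for illustration):

```python
# Toy model of problem 1: if pruning only runs just before an
# expansion, and the hash table doubles each time, then the number
# of insertions between pruning opportunities also doubles.
INITIAL_NBUCKETS = 256   # hypothetical starting size
FILL_FACTOR = 1          # assume expansion at 1 entry per bucket

def prune_gaps(n_expansions):
    """Insertions between consecutive pruning opportunities."""
    gaps = []
    nbuckets = INITIAL_NBUCKETS
    for _ in range(n_expansions):
        gaps.append(nbuckets * FILL_FACTOR)
        nbuckets *= 2
    return gaps

# Each pruning opportunity is twice as far away as the previous one,
# so stale tuples survive ever-longer stretches between prunes.
gaps = prune_gaps(5)
```

Under these assumptions the gaps come out as 256, 512, 1024, 2048, 4096 insertions: the chance to evict stale tuples becomes rarer exactly as the table gets bigger.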
2) syscache_prune_min_age is difficult to set to meet contradictory requirements.
e.g., in the above temporary-objects case, the user wants to shorten syscache_prune_min_age so that the catalog tuples for temporary objects are removed. But that is also likely to result in the necessary catalog tuples for non-temporary objects being removed.
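A toy simulation of that trade-off (the idle times and threshold values below are invented for illustration, not measurements):

```python
def prune(entries, prune_min_age):
    """Evict entries that have been idle longer than prune_min_age
    seconds; entries maps a tuple name to its idle time."""
    return {name: idle for name, idle in entries.items()
            if idle <= prune_min_age}

cache = {
    "dropped_temp_table": 120,  # garbage: the temp table is gone
    "hot_pg_proc_row":    300,  # needed: accessed every ~5 minutes
}

short = prune(cache, 60)    # evicts both: the needed tuple is lost too
long_ = prune(cache, 600)   # keeps both: the garbage lingers
```

Because the garbage tuple's idle time can be shorter than a needed tuple's access interval, no single value of the threshold removes the first while keeping the second; age alone cannot distinguish them.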
3) The DBA cannot control the memory usage. It's not predictable.
syscache_memory_target doesn't set a limit on memory usage, despite the impression its name gives. In general, a cache should be able to set an upper limit on its size so that the DBA can manage things within a given amount of memory. I think other PostgreSQL parameters are based on that idea -- shared_buffers, wal_buffers, work_mem, temp_buffers, etc.
4) The memory usage doesn't decrease once allocated.
The normal allocation memory context, aset.c, which CacheMemoryContext uses, doesn't return pfree()d memory to the operating system. Once CacheMemoryContext becomes big, it won't get smaller.
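A minimal sketch of that retention behavior (a generic free-list pool allocator in Python, only an analogy for aset.c, not its actual implementation):

```python
class PoolAllocator:
    """Toy allocator that, like an aset.c-style context, keeps freed
    blocks on a free list for reuse instead of returning them."""
    def __init__(self):
        self.total_reserved = 0   # bytes ever obtained from the "OS"
        self.free_list = []       # freed block sizes, kept for reuse

    def alloc(self, size):
        for i, blk in enumerate(self.free_list):
            if blk >= size:
                return self.free_list.pop(i)   # reuse a freed block
        self.total_reserved += size            # grow; never shrinks
        return size

    def free(self, size):
        self.free_list.append(size)            # retained, not released

pool = PoolAllocator()
blocks = [pool.alloc(1024) for _ in range(100)]
peak = pool.total_reserved
for b in blocks:
    pool.free(b)
# Every block is freed, yet the reservation is unchanged:
still_reserved = pool.total_reserved
```

The pruned tuples' memory becomes reusable within the context, but the high-water mark of the process never comes back down.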
5) Catcaches are managed independently of each other.
Even if there are many unnecessary catalog tuples in one catcache, they are not freed to make room for other
catcaches.
So, why don't we make syscache_memory_target the upper limit on the total size of all catcaches, and revisit the LRU management we had in the past?
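One possible shape for such a limit -- a single LRU list shared by all catcaches under one total-size cap (a sketch only; the real thing would live in catcache.c, and reusing the name syscache_memory_target here is just for illustration):

```python
from collections import OrderedDict

class SharedLRUCache:
    """All catcaches share one LRU list: inserting past the configured
    total size evicts the least recently used tuple, regardless of
    which catcache it belongs to (addressing problem 5 as well)."""
    def __init__(self, memory_target):
        self.memory_target = memory_target
        self.used = 0
        self.entries = OrderedDict()   # (cache_id, key) -> size

    def lookup(self, cache_id, key):
        entry = (cache_id, key)
        if entry in self.entries:
            self.entries.move_to_end(entry)   # mark most recently used
            return True
        return False

    def insert(self, cache_id, key, size):
        self.entries[(cache_id, key)] = size
        self.used += size
        while self.used > self.memory_target:
            _, evicted = self.entries.popitem(last=False)  # evict LRU
            self.used -= evicted

cache = SharedLRUCache(memory_target=300)
cache.insert(1, "tuple_a", 100)
cache.insert(2, "tuple_b", 100)
cache.lookup(1, "tuple_a")        # refreshes tuple_a
cache.insert(2, "tuple_c", 200)   # over the cap: evicts tuple_b
```

With this shape, memory use is bounded and predictable regardless of workload, and an unused tuple in one catcache naturally makes room for hot tuples in another.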
Regards
Takayuki Tsunakawa