>> I've looked at struct vfd and some simple changes to the struct would
>> already cut memory consumption in half. I can look into that.
>>
>> Thoughts?
>
> Looking forward to this.
I'll try to come up with something in the next few days.
> What also bothers me in that space is that if a backend allocates 100K
> entries in the VFD cache, that cache is never shrunk again; it only grows
> (whenever it needs more than its lifetime maximum) until the backend dies.
> Reusing free entries instead of allocating new ones is useful, but a spike
> in file openings can leave a long-lived backend holding far more cache than
> it will ever need again. I don't imagine this is common, though. What do
> you think about this issue, from your experience?
Currently the cache is directly mapped by the VFD index. That means we could only resize down to the maximum used VFD index.
Being able to resize independently of the maximum VFD index would require changing to a hash map like simplehash.h. I can take a look at how invasive such a change would be.
-- David Geier
I've implemented the recommended global stats view for the VFD cache. The implementation is straightforward, as it follows the same cumulative shared statistics infrastructure that pgstat_bgwriter and others use.
Attached is a v2 patch that also contains what David suggested: the global cache size and number of entries in the view.