From: "Merlin Moncure" <mmoncure@gmail.com>
--------------------------------------------------
Oh -- I missed earlier that this was a 32 bit o/s. Well, I'd consider
drastically reducing shared buffers, down to say the 256-512MB range.
Postgres function plans and various other structures -- tables,
attributes, etc. -- are indeed cached and can use up a considerable
amount of memory in pathological cases; this depends largely on the
number of tables/views, the number of functions, and the number of
connections. I briefly looked at the relcache etc. a little while back
on a related complaint, and the takeaway is that the caching is heavy
handed and fairly brute force, but legit and a huge win for most cases.
This stuff lives in the cache memory context, and a couple of users
(not that many) have bumped into high memory usage. Solutions tend to
include:
*) don't rely on a design that requires 100,000 tables
*) use a connection pooler (see the sketch below)
*) reset (reconnect) connections periodically
*) go to a 64 bit o/s
*) reduce shared_buffers for a leaner memory profile (especially on a 32 bit o/s)
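
A minimal sketch of the pooler and shared_buffers points, assuming
pgbouncer sits in front of the database (the database name, pool size,
and lifetime here are illustrative, not from this thread):

    ; pgbouncer.ini (hypothetical fragment)
    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_port = 6432
    pool_mode = transaction    ; many clients share a few server connections
    default_pool_size = 20
    server_lifetime = 600      ; close server connections older than 10 minutes,
                               ; which also releases their per-backend caches

    # postgresql.conf (leaner profile for a 32 bit build)
    shared_buffers = 256MB

The idea is that a small, periodically recycled pool puts an effective
upper bound on how much cache memory the backends can accumulate.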
Like I said, this doesn't really come up that often, but the 'real'
solution in terms of postgres is probably some kind of upper bound on
the amount of cache memory used, plus some intelligence in the cache
implementation. This is tricky stuff, though; so far no credible
proposals have been made, and demand for the feature is not very
high.
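
(For anyone diagnosing this today: recent PostgreSQL versions, 14 and
later, expose per-backend memory contexts through a system view, so
the cache growth can be observed directly. A minimal sketch; the view
and columns are real, the session is illustrative:

    -- from the affected session: how much memory do the relation/catalog
    -- caches currently hold in this backend?
    SELECT name, used_bytes, total_bytes
    FROM pg_backend_memory_contexts
    WHERE name = 'CacheMemoryContext';

Running this before and after touching many tables shows the context
growing as more relations are cached.)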
--------------------------------------------------
Thank you very much. I'm relieved to finally understand the cause. I
will report it to the customer and ask him to consider the following
measures:
* reduce shared_buffers
* run somefunc() and VACUUM in different psql sessions
* process the 100,000 tables in multiple psql sessions (see the sketch below)
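
A rough illustration of the last measure, assuming GNU split and a
database named mydb (both hypothetical). The table list is divided
round-robin across four psql sessions, so no single backend ends up
caching metadata for all 100,000 tables:

    # list the user tables and split them round-robin into 4 chunks
    psql -d mydb -At -c "SELECT tablename FROM pg_tables WHERE schemaname = 'public'" \
      | split -n r/4 - chunk_

    # VACUUM each chunk in its own psql session
    for f in chunk_*; do
      sed 's/.*/VACUUM "&";/' "$f" | psql -d mydb &
    done
    wait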
Regards
MauMau