On 7/28/2005 2:03 PM, Tom Lane wrote:
> Phil Endecott <spam_from_postgresql_general@chezphil.org> writes:
>> For some time I had been trying to work out why every connection to my
>> database resulted in several megabytes of data being written to the
>> disk, however trivial the query. I think I've found the culprit:
>> global/pgstat.stat. This is with 7.4.7.
>
>> This is for a web application which uses a new connection for each CGI
>> request. The server doesn't have a particularly high disk bandwidth and
>> this mysterious activity had been the bottleneck for some time. The
>> system is a little unusual as one of the databases has tens of thousands
>> of tables (though I saw these writes whichever database I connected to).
>
> Well, there's the problem --- the stats subsystem is designed in a way
> that makes it rewrite its entire stats collection on every update.
> That's clearly not going to scale well to a large number of tables.
> Offhand I don't see an easy solution ... Jan, any ideas?
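A toy cost model (not PostgreSQL code, all sizes assumed) makes the scaling problem concrete: if every stats flush rewrites one record per table, total I/O grows with tables × updates, and tens of thousands of tables turn a trivial connection into a multi-megabyte write — matching the symptom reported above.

```python
# Toy model of the full-rewrite stats design. ENTRY_BYTES is an assumed
# per-table record size, chosen only to illustrate the order of magnitude.
ENTRY_BYTES = 100

def full_rewrite_io(num_tables, num_updates):
    # Every update rewrites every table's entry (the 7.4 pgstat.stat behavior).
    return num_tables * ENTRY_BYTES * num_updates

def incremental_io(num_tables, num_updates):
    # A hypothetical incremental scheme would write only the touched entries,
    # independent of the total table count.
    return ENTRY_BYTES * num_updates

# With 50,000 tables, a single flush already rewrites ~5 MB:
print(full_rewrite_io(50_000, 1))  # prints 5000000
```

Under this model the per-connection cost is proportional to the number of tables in the cluster, which is why the writes appeared whichever database was connected to.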
PostgreSQL itself doesn't work too well with tens of thousands of
tables. I don't see much of an easy solution either. The best workaround
I can offer is to move that horror-DB to a separate postmaster with
stats disabled altogether.
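For reference, the separate-postmaster workaround on 7.4 amounts to a second cluster with the stats collector turned off; a sketch (data directory and port are illustrative):

```
# Initialize a dedicated cluster for the table-heavy database
initdb -D /var/lib/pgsql/bigdb

# In /var/lib/pgsql/bigdb/postgresql.conf (PostgreSQL 7.4):
#   stats_start_collector = false   # don't start the stats collector at all
#   port = 5433                     # avoid clashing with the main postmaster

pg_ctl -D /var/lib/pgsql/bigdb start
```

With the collector off, pgstat.stat is never written, at the cost of losing pg_stat_* views for that cluster.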
Jan
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck@Yahoo.com #