On Tue, Sep 14, 2010 at 2:59 PM, Markus Wanner <markus@bluegap.ch> wrote:
> On 09/14/2010 08:41 PM, Robert Haas wrote:
>>
>> To avoid consuming system resources forever if they're not being used.
>
> Well, what timeout would you choose? And how would you justify it against
> the amount of system resources consumed by an idle process sitting there
> waiting for a job?
>
> I'm not against such a timeout, but so far I've felt that unlimited would
> be the best default.
I don't have a specific number in mind. 5 minutes?
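Something along these lines is what I'm picturing -- just a sketch to make
the idea concrete, not code from the patch; IDLE_TIMEOUT_SECS and
get_next_job() are made-up names:

/*
 * Minimal sketch (not PostgreSQL code): how an idle bgworker might decide
 * to exit after a configurable idle timeout, e.g. 5 minutes.
 */
#include <stdbool.h>
#include <time.h>
#include <unistd.h>

#define IDLE_TIMEOUT_SECS (5 * 60)      /* the "5 minutes" suggested above */

static bool get_next_job(void) { return false; }    /* stub: no job pending */

int
main(void)
{
    time_t idle_since = time(NULL);

    for (;;)
    {
        if (get_next_job())
        {
            /* ... process the job ... */
            idle_since = time(NULL);    /* reset the idle clock */
        }
        else if (time(NULL) - idle_since >= IDLE_TIMEOUT_SECS)
        {
            /* Idle too long: exit and give the resources back. */
            break;
        }
        else
            sleep(1);                   /* wait a bit before re-checking */
    }
    return 0;
}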
>> Well, presumably that would be fairly disastrous. I would think,
>> though, that you would not have a min/max number of workers PER
>> DATABASE, but an overall limit on the upper size of the total pool
>
> That already exists (in addition to the other parameters).
Hmm. So what happens if you have 1000 databases with a minimum of 1
worker per database and an overall limit of 10 workers?
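To spell out the arithmetic I'm worried about (hypothetical parameter names,
not the actual GUCs from your patch):

#include <stdio.h>

int
main(void)
{
    int n_databases        = 1000;  /* databases in the cluster */
    int min_workers_per_db = 1;     /* per-database minimum */
    int max_total_workers  = 10;    /* overall pool limit */

    int wanted = n_databases * min_workers_per_db;

    printf("minimums ask for %d workers, but the pool caps out at %d\n",
           wanted, max_total_workers);
    /* => 990 databases can never get their "minimum" worker. */
    return 0;
}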
>> - I
>> can't see any reason to limit the minimum size of the pool, but I
>> might be missing something.
>
> I tried to mimic what others do, for example Apache's pre-fork model. Maybe
> it's just another way of trying to keep overall resource consumption at a
> reasonable level.
>
> The minimum is helpful to eliminate waits for backends starting up. Note
> here that the coordinator can only request to fork one new bgworker at a
> time. It then needs to wait until that new bgworker registers with the
> coordinator before it can request further bgworkers from the postmaster.
> (That's due to a limitation in the communication between the postmaster
> and the coordinator.)
Hmm, I see. That's probably not helpful for autovacuum, but I can see
it being useful for replication. I still think maybe we ought to try
to crack the nut of allowing backends to rebind to a different
database. That would simplify things here a good deal, although then
again maybe it's too complex to be worth it.
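If I'm understanding the serialization you describe correctly, it amounts to
something like this -- a sketch only, with request_fork_from_postmaster() and
wait_for_worker_registration() being names I just made up, not functions from
the patch:

/*
 * The coordinator may have only one fork request outstanding at a time and
 * must wait for the new bgworker to register before asking the postmaster
 * for another one.
 */
#include <stdbool.h>
#include <stdio.h>

static bool request_fork_from_postmaster(void) { return true; }  /* stub */
static bool wait_for_worker_registration(void) { return true; }  /* stub */

static void
start_workers(int n_needed)
{
    for (int i = 0; i < n_needed; i++)
    {
        if (!request_fork_from_postmaster())
            break;                      /* postmaster refused; give up */

        /* Block here: no further fork requests until this one registers. */
        if (!wait_for_worker_registration())
            break;

        printf("worker %d registered\n", i + 1);
    }
}

int
main(void)
{
    start_workers(3);   /* this startup latency is why a minimum pool helps */
    return 0;
}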
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company