Re: Re: refusing connections based on load ... - Mailing list pgsql-hackers

From Lincoln Yeoh
Subject Re: Re: refusing connections based on load ...
Msg-id 3.0.5.32.20010425100554.0093d790@192.228.128.13
In response to Re: Re: refusing connections based on load ...  (ncm@zembu.com (Nathan Myers))
Responses Re: Re: Re: refusing connections based on load ...  (The Hermit Hacker <scrappy@hub.org>)
List pgsql-hackers
At 10:59 PM 23-04-2001 -0700, Nathan Myers wrote:
>On Tue, Apr 24, 2001 at 12:39:29PM +0800, Lincoln Yeoh wrote:
>> Why not be more deterministic about refusing connections and stick
>> to reducing max clients? If not it seems like a case where you're
>> promised something but when you need it, you can't have it.
>
>The point is that "number of connections" is a very poor estimate of 
>system load.  Sometimes a connection is busy, sometimes it's not.

Actually, I use the number of connections to estimate how much RAM I will
need, not to estimate system load.

Once the system runs out of RAM, performance drops sharply. If I can
prevent the system from running out of RAM, it can usually handle whatever
I throw at it at near maximum throughput.
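The sizing I do is just back-of-envelope arithmetic along these lines (the
per-backend and reserved figures below are made-up illustrative numbers,
not measured PostgreSQL values):

```python
# Back-of-envelope sizing of max clients from available RAM.
# All figures are illustrative assumptions, not measured values.

def max_clients(total_ram_mb, reserved_mb, per_backend_mb):
    """Largest connection count that keeps the box out of swap."""
    return (total_ram_mb - reserved_mb) // per_backend_mb

# A 128 MB machine, reserving 48 MB for the OS and buffer cache,
# and budgeting ~3 MB per backend:
print(max_clients(128, 48, 3))  # -> 26
```

The point is that the limit falls out of a fixed RAM budget, not out of any
estimate of how busy each connection will be.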

For my app, say the max is X hits per second with a few concurrent
transactions. When I boost the number of concurrent transactions (e.g.
25 on a 128 MB machine, load ~13), it drops to maybe 0.95X hits per
second[1]. This is acceptable to me.

But once the machine starts swapping, things bog down drastically and some
connections get a Server Error.

>Refusing a connection and letting the client try again later can be 
>a way to maximize throughput by keeping the system at the optimum 
>point.  (Waiting reduces delay.  Yes, this is counterintuitive, but 
>why do we queue up at ticket windows?)
>
>Delaying response, when under excessive load, to clients who already 
>have a connection -- even if they just got one -- can have a similar 
>effect, but with finer granularity and with less complexity in the 
>clients.  

With my web apps, refusing connections based on load doesn't help at all:
they are FastCGI processes that already hold database connections open
before even receiving a web request (might as well open the db connection
before the client talks to you).

For other apps, refusing connections might help. But are these cases in
the majority? In, say, a bank teller environment, the database connections
are probably already open, and could remain open the whole day.

Delaying transactions based on load is easier for me to understand.
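For what it's worth, that sort of delay is easy to do on the client side of
an already-open connection. A minimal sketch (the helper name and threshold
are made up; real code would issue the transaction through whatever driver
you use):

```python
import os
import time

def wait_for_headroom(load_threshold, poll_s=0.5, max_wait_s=30.0):
    """Sleep until the 1-minute load average drops below the
    threshold (or we give up waiting), then let the caller proceed."""
    waited = 0.0
    while os.getloadavg()[0] >= load_threshold and waited < max_wait_s:
        time.sleep(poll_s)
        waited += poll_s
    return waited  # seconds spent backing off

# Before each transaction on an already-open connection, e.g.:
#   wait_for_headroom(13)   # threshold taken from the load ~13 above
#   cursor.execute("UPDATE ...")
```

The connection stays open the whole time; only the work is deferred, which
is the finer granularity Nathan describes above.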

Cheerio,
Link.

[1] This is a guesstimate: the hits per second drop gradually during the
benchmark. A low-concurrency test run AFTER the benchmark showed fewer
hits per second than the benchmark figures.

This is probably because there was a lot of selecting and updating of the
same row, and PostgreSQL needs a vacuum before the speed recovers. It
seems the dead rows get in the way of the index or something; speed
doesn't drop as much for workloads of mostly inserts and selects.




