Re: Troubles with performances - Mailing list pgsql-general

From Lincoln Yeoh
Subject Re: Troubles with performances
Date
Msg-id 3.0.5.32.20010122213325.00852d20@192.228.128.13
In response to Troubles with performances  (Guillaume Lémery <glemery@comclick.com>)
List pgsql-general
At 07:07 PM 1/18/01 +0100, Guillaume Lémery wrote:
>I use PostGreSQL with a Web server which receive 200 HTTP simultaneous
>queries.
>For each HTTP query, I have about 5 SELECT queries and 3 UPDATE ones.

Just a shot in the dark:
Are you opening and closing a database connection for each query? If you
are, I suggest you don't, and instead use persistent database connections,
or some form of connection pooling.
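To make the pooling idea concrete, here is a minimal sketch of the pattern in Python, using a plain stdlib queue and a hypothetical stand-in for the real connect call (names like fake_connect are illustrative, not any particular driver's API). The point is that connections are created once up front and reused across requests:

```python
import queue

class ConnectionPool:
    """Minimal connection-pool sketch: open a fixed set of connections
    once, then hand them out and take them back per request."""
    def __init__(self, factory, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self):
        return self._pool.get()      # blocks if all connections are busy

    def release(self, conn):
        self._pool.put(conn)

# Hypothetical stand-in for a real database connect call
made = 0
def fake_connect():
    global made
    made += 1
    return object()

pool = ConnectionPool(fake_connect, size=4)
for _ in range(100):                 # simulate 100 HTTP requests
    conn = pool.acquire()
    # ... run the 5 SELECTs and 3 UPDATEs on conn here ...
    pool.release(conn)

print(made)  # 4 -- only four connections ever opened, not 100
```

Without pooling, those 100 requests would each pay the full connect/authenticate/fork cost on the backend, which is exactly the overhead persistent connections avoid.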

How many http connections per second are you getting?

If it's not many connections per second, but they are taking a long time to
complete, there might be ways of reducing the number of simultaneous queries.

For example, you could use buffering, aka an "http accelerator" - e.g. put a
webcache in front of your webserver. The idea is that your app (and
database) can spit out the results to the webcache at 100Mbps and not
wait for the remote client, which is probably <<2Mbps (and 50-500msec away)
and will take 50 times longer or more. The webcache buffers the
results and trickles them to the client, freeing your backend sooner.
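The "50 times longer" figure follows directly from the bandwidth ratio. A quick back-of-the-envelope check, assuming a hypothetical ~100KB dynamic page:

```python
# Time to push one ~100KB page at each link speed (illustrative numbers)
page_bits = 100 * 1024 * 8       # 100 KB expressed in bits

slow = page_bits / 2e6           # seconds at 2 Mbps (remote client)
fast = page_bits / 100e6         # seconds at 100 Mbps (local webcache)

print(round(slow, 3), round(fast, 4), round(slow / fast))  # 0.41 0.0082 50
```

So a backend process (and any database connection it holds) is tied up for roughly 0.4s per page when dribbling to the client directly, versus under 10ms when handing off to a local cache - a 50x difference, and worse for slower clients.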

However, do note that some webcaches (e.g. squid) only buffer up to 8KB
before blocking (not sure if you can change that). You need a webcache
that can completely buffer your big and popular dynamic webpages (possibly
about 50-100KB). Apache mod_proxy can actually be configured to buffer
more, but I haven't really tested it in detail.

More info about your environment and configuration/architecture would be
helpful, e.g. what are you using for the stressed parts - mod_perl,
fast-cgi, php, cgi-bin, an apache module?

Cheerio,
Link.

