Thread: SQL Query - Performance Problem

SQL Query - Performance Problem

From: "Dean Sheppard"
Hi All,
 
I have a PostgreSQL table which contains 30 million records. It is a simple flat table with 4 columns. Each column has an index and I select from this table using a single column at any time.
 
Here's the problem. A simple select statement takes up to 30 seconds to respond; if I add in connection pooling and 15-20 concurrent requests, the responses are now 15-20 minutes!! I have no idea what the tuning options may be, but I need to get these queries running in 30 seconds or so.
 
Any ideas?
 
Regards,
Dean.
 

Re: SQL Query - Performance Problem

From: Josh Berkus
Dean,

> Here's the problem. A simple select statement takes up to 30 seconds to
> respond; if I add in connection pooling and 15-20 concurrent requests, the
> responses are now 15-20 minutes!! I have no idea what the tuning options may
> be, but I need to get these queries running in 30 seconds or so.
>
> Any ideas?

Nope, not a sausage.   Not based on the quantity of information you've given
us.

Please read:
http://techdocs.postgresql.org/guides/SlowQueryPostingGuidelines

And re-post your question.   Thanks!

--
-Josh Berkus
 Aglio Database Solutions
 San Francisco


Re: SQL Query - Performance Problem

From: george young
On Sun, 7 Mar 2004 18:54:12 -0500
"Dean Sheppard" <dean.sheppard@rogers.com> threw this fish to the penguins:

> I have a PostgreSQL table which contains 30 million records. It is a simple
> flat table with 4 columns. Each column has an index and I select from this
> table using a single column at any time.
>
> Here's the problem. A simple select statement takes up to 30 seconds to
> respond; if I add in connection pooling and 15-20 concurrent requests, the
> responses are now 15-20 minutes!! I have no idea what the tuning options
> may be, but I need to get these queries running in 30 seconds or so.

For questions like this, please state the exact version of postgres,
the platform, and OS.  Show the table definition, i.e. do:

   \d mybigtable

and the output of "explain analyze" on the query, i.e. do:

  explain analyze select x,y,z from mybigtable where x='foo' and y>33;

for whatever the slow query is.  Don't worry about the heavily loaded case
yet, just work on the isolated query for now.
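The plan output will make it obvious whether an index is being used at all.
Purely for illustration (the table name, row counts, and timings below are
made up), a slow query on a big table often shows a sequential scan:

   Seq Scan on mybigtable  (cost=0.00..540000.00 rows=150 width=16)
                           (actual time=12.31..29750.12 rows=142 loops=1)
     Filter: (x = 'foo'::text)
   Total runtime: 29751.04 ms

whereas a healthy lookup on an indexed column looks like:

   Index Scan using mybigtable_x_idx on mybigtable
                           (cost=0.00..12.50 rows=150 width=16)
                           (actual time=0.21..4.80 rows=142 loops=1)
     Index Cond: (x = 'foo'::text)
   Total runtime: 5.12 ms

If you see a Seq Scan where you expected an Index Scan, that's the first
thing to chase down (stale statistics, a type mismatch in the condition,
or cost settings).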
And mention the locale/encoding if any conditions involve text comparisons.

For good measure, you might include output from:
   select * from pg_stats where tablename='mybigtable';

Hmm, be SURE you've done "vacuum analyze" on the db before expecting
good performance.  Just do (at the shell prompt, not in psql):

   vacuumdb --analyze mybigdb

Vacuum may take a while but it's worth it.  You should vacuum periodically.
There's an autovacuum available that has less impact on your system if that
is a problem.  The --analyze is only necessary when/if your distribution
of key values changes significantly.

Also, if you're using the default postgresql.conf file, see:
  http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html
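As a rough sketch only (the numbers below assume a dedicated box with about
1GB of RAM on PostgreSQL 7.4 -- adjust them using the annotated guide above,
don't copy them blindly), the settings that matter most are:

   # postgresql.conf -- illustrative starting points, not recommendations
   shared_buffers = 10000          # ~80MB at 8KB per buffer; default is tiny
   sort_mem = 8192                 # KB per sort operation, per backend
   effective_cache_size = 100000   # planner's estimate of OS cache, in 8KB pages

Changing shared_buffers requires a server restart; with 15-20 concurrent
connections, remember that sort_mem is allocated per sort, so keep it modest.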


-- George Young
--
"Are the gods not just?"  "Oh no, child.
What would become of us if they were?" (CSL)