Re: Large table performance - Mailing list pgsql-performance

From Dave Dutcher
Subject Re: Large table performance
Date
Msg-id 010701c73785$54a02d30$8300a8c0@tridecap.com
Whole thread Raw
In response to Large table performance  (Mark Dobbrow <mdobbrow@yahoo.com>)
Responses Re: Large table performance
List pgsql-performance
Have you run vacuum and analyze on the table?  What version of Postgres are you running?  What OS are you using?
 
This looks like a straightforward query.  With any database, the first time you run the query it's going to be slower because it actually has to read from disk.  The second time it's faster because some or all of the data and indexes will be cached.  However, 10 seconds sounds like a long time for pulling 10,000 records out of a table of 3 million.  If you post an EXPLAIN ANALYZE, it might give us a clue.
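For reference, a minimal sketch of the commands suggested above, using the table and column names from the original post (the teacher_id value is a placeholder):

```sql
-- Reclaim dead rows and refresh planner statistics for the table
VACUUM ANALYZE large_table;

-- Show the actual plan chosen and the real execution time for the slow query
EXPLAIN ANALYZE
SELECT grades FROM large_table WHERE teacher_id = 42;
```

The EXPLAIN ANALYZE output will show whether the planner is using the index on teacher_id or falling back to a sequential scan, and how long each step actually takes.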
 
Dave
 
-----Original Message-----
From: pgsql-performance-owner@postgresql.org [mailto:pgsql-performance-owner@postgresql.org] On Behalf Of Mark Dobbrow
Sent: Friday, January 12, 2007 6:31 PM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Large table performance

Hello -

I have a fairly large table (3 million records), and am fetching 10,000 non-contiguous records with a simple select on an indexed column, i.e.

select grades from large_table where teacher_id = X
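The index on teacher_id referred to above would look something like this (a sketch; the actual index definition wasn't posted):

```sql
-- Assumed single-column index supporting the query above
CREATE INDEX large_table_teacher_id_idx ON large_table (teacher_id);
```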

This is a test database, so the number of records per teacher is always 10,000, and I have 300 different teacher ids.

The problem is, sometimes fetching un-cached records takes 0.5 secs, and sometimes (more often) it takes more like 10.0 seconds.

(fetching the same records for a given teacher_id a second time takes about 0.25 secs)

Has anyone seen similar behavior or know what the solution might be?

any help much appreciated,
Mark



ps. My shared_buffers is set at 5000 (kernel max), and work_mem=8192
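For reference, those settings correspond to postgresql.conf entries like the following (values taken from the post; on Postgres versions of this era, shared_buffers is a count of 8 kB buffer pages and work_mem is in kB):

```
shared_buffers = 5000    # 5000 x 8 kB pages, roughly 40 MB
work_mem = 8192          # in kB, i.e. 8 MB per sort/hash operation
```

A shared_buffers value capped at 5000 by the kernel usually points to a low SHMMAX shared-memory limit in the OS, which can be raised independently of Postgres.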

