Advice for optimizing queries using Large Tables - Mailing list pgsql-general

From Shaun Grannis
Subject Advice for optimizing queries using Large Tables
Msg-id OE46ALZbqbxQySmYjjJ00007490@hotmail.com
List pgsql-general

Hi,

I hope someone out there has experience with large tables in Postgres.

I'm working with a table containing over 65 million records in PostgreSQL 7.1.3. The machine is a dual-processor Athlon MP 1900 (Tyan Tiger board) with 3 GB of PC2100 DDR RAM and three 80 GB IBM 120GXP hard drives configured as a software RAID 0 array, running Red Hat Linux 7.2. Queries don't seem to be running as fast as they should.

 

One of the columns (of type INTEGER) in this table holds a 4-digit value; the values all fall within the range 1930-2001. I've indexed this column using the statements:

    CREATE INDEX value_idx ON table (value);
    VACUUM ANALYZE table;

 

This query:

    SELECT count(*) FROM table WHERE value=1999;

takes approximately 45 minutes to execute and returns a count of roughly 2.2 million records. My instinct is that this is much too slow for a query on an indexed column running on this hardware. Here is the EXPLAIN output:

    EXPLAIN SELECT count(*) FROM table WHERE value=1999;
    NOTICE: QUERY PLAN:

    Aggregate (cost=477861.60..477861.60 rows=1 width=0)
      -> Index Scan using value_idx on table (cost=0.00..477553.70 rows=123157 width=0)

    EXPLAIN
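One thing I notice in the plan above: the planner estimates rows=123157, while the query actually matches roughly 2.2 million rows, so the optimizer statistics seem to badly underestimate how common value=1999 is. A minimal sketch for checking and refreshing what the planner has recorded (`table` stands in for the real table name, as above):

    -- Row count recorded by the last VACUUM ANALYZE:
    SELECT reltuples FROM pg_class WHERE relname = 'table';

    -- Re-gather statistics so the estimate reflects the current data:
    VACUUM ANALYZE table;

(From release 7.2 on, ALTER TABLE ... ALTER COLUMN ... SET STATISTICS can also raise the per-column statistics target before re-analyzing.)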

 

Here's the table schema:

Table "table"

Attribute  | Type          | Modifier
-----------+---------------+----------
col01      | character(9)  |
col02      | character(15) |
col03      | character(15) |
col04      | character(1)  |
col05      | character(15) |
col06      | character(15) |
col07      | character(15) |
col08      | character(15) |
col09      | character(1)  |
col10      | integer       |
col11      | integer       |
col12      | integer       |
col13      | integer       |
col14      | integer       |
value      | integer       |

Indices: value_idx
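Each row here is fairly wide (the character columns alone account for over 100 bytes), and the roughly 2.2 million rows matching a given year are presumably scattered across the whole 65-million-row table, so the index scan must perform a huge number of scattered heap-page reads. A sketch of one way to improve locality, assuming the table can be taken offline briefly (note that CLUSTER in 7.1 rewrites the table and has documented caveats, e.g. other indexes are lost, so check the documentation first):

    -- Rewrite the table in value_idx order so rows with equal values
    -- sit on contiguous pages and the index scan reads sequentially:
    CLUSTER value_idx ON table;

    -- CLUSTER creates a new table, so refresh the statistics afterwards:
    VACUUM ANALYZE table;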

 

I've set SHMMAX and SHMALL to 2075721728 in /proc/sys/kernel/, and in postgresql.conf I've set the parameters to:

    sort_mem = 262144         # sort memory in KB: 262144 KB = 256 MB
    shared_buffers = 249036   # 8 KB buffers each, roughly 1.9 GB total
    fsync = true
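For reference, a sketch of how those kernel settings are applied at runtime (as root) before starting the postmaster; note that on Linux shmmax is measured in bytes while shmall is measured in pages, so giving both the same value simply over-provisions shmall:

    # Run as root; these settings do not survive a reboot
    # unless repeated from a boot script.
    echo 2075721728 > /proc/sys/kernel/shmmax
    echo 2075721728 > /proc/sys/kernel/shmall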

 

Does anyone have any advice for optimizing the SELECT query listed above? Is this as fast as PostgreSQL will perform? Any good pointers on working with large tables in Postgres?

I appreciate your advice.

Best regards,

Shaun Grannis
