Re: slow select in big table - Mailing list pgsql-general

From Scott Marlowe
Subject Re: slow select in big table
Date
Msg-id dcc563d10904021948t7e21a04bn8885dfbbcb1531c5@mail.gmail.com
In response to slow select in big table  (rafalak <rafalak@gmail.com>)
List pgsql-general
On Thu, Apr 2, 2009 at 2:48 PM, rafalak <rafalak@gmail.com> wrote:
> Hello, I have a big table:
> 80 million records, ~6GB of data, 2 columns (int, int)
>
> For the query
> select count(col1) from tab where col2=1234;
> when it returns few rows (1-10) the time is good (30-40ms),
> but when it returns >1000 rows the time is >12s.
>
>
> How can I increase performance?
>
>
> my postgresql.conf
> shared_buffers = 810MB
> temp_buffers = 128MB
> work_mem = 512MB
> maintenance_work_mem = 256MB
> max_stack_depth = 7MB
> effective_cache_size = 800MB

Try lowering random_page_cost closer to your seq_page_cost setting
(i.e. just over 1 with the default seq_page_cost of 1.0) and see if that helps.
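For a quick session-level test (just a sketch, using the table and column names
from your query; the 1.1 value is only an example of "just over 1"):

SET random_page_cost = 1.1;   -- default is 4.0; seq_page_cost defaults to 1.0
EXPLAIN ANALYZE SELECT count(col1) FROM tab WHERE col2 = 1234;
-- If the planner switches to an index or bitmap scan and the runtime drops,
-- make the change permanent in postgresql.conf and reload the server.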
