Re: Update on high concurrency OLTP application and Postgres - Mailing list pgsql-performance

From: Jim Nasby
Subject: Re: Update on high concurrency OLTP application and Postgres
Date:
Msg-id: CAA8A155-C735-405D-8155-7FC79010E186@nasby.net
In response to Re: Update on high concurrency OLTP application and Postgres  (Cosimo Streppone <cosimo@streppone.it>)
List pgsql-performance
Have you ever done any testing to see if just setting
default_statistics_target to 500 has a negative impact on the system?
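(A hedged sketch of what such a test might look like; the query and values are illustrative, not from this thread. Setting the target at session level keeps the experiment from persisting.)

```sql
-- Try the higher default for this session only, re-gather
-- statistics, and time a representative query. Note that
-- ANALYZE itself gets more expensive as the target grows.
SET default_statistics_target = 500;
ANALYZE;
EXPLAIN ANALYZE SELECT ...;  -- compare plan and runtime vs. the old target
```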

On Sep 22, 2006, at 4:48 PM, Cosimo Streppone wrote:

> Christian Storm wrote:
>
>>> At the moment, my rule of thumb is to check out the ANALYZE VERBOSE
>>> messages to see if all table pages are being scanned.
>>>
>>>   INFO: "mytable": scanned xxx of yyy pages, containing ...
>>>
>>> If xxx = yyy, then I keep statistics at the current level.
>>> When xxx is way less than yyy, I increase the numbers a bit
>>> and retry.
>>>
>>> It's probably primitive, but it seems to work well.
>>
>> What heuristic do you use to up the statistics for such a table?
>
> No heuristics, just try and see.
> For tables of ~ 10k pages, I set statistics to 100/200.
> For ~ 100k pages, I set them to 500 or more.
> I don't know the exact relation.
>
>> Once you've changed it, what metric do you use to
>> see if it helped or was effective?
>
> I rerun ANALYZE and look at the results... :-)
> If you mean judging its usefulness, I can see that only
> under heavy load, when particular db queries run on the order
> of a few milliseconds.
>
> If I see normal queries that take longer and longer, or
> they even appear in the server's log (> 500 ms), then
> I know an analyze is needed, or statistics should be set higher.
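(The per-table approach Cosimo describes above can be sketched roughly as follows; `mycolumn` and the target value 200 are placeholders, not from the thread.)

```sql
-- Raise the statistics target on a specific column, re-analyze,
-- and check how many pages ANALYZE actually scanned.
ALTER TABLE mytable ALTER COLUMN mycolumn SET STATISTICS 200;
ANALYZE VERBOSE mytable;
-- INFO:  "mytable": scanned xxx of yyy pages, containing ...
-- If xxx is well below yyy, raise the target and repeat.
```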
--
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)


