On Wed, Jul 10, 2013 at 10:47:33AM -0500, Jerry Sievers wrote:
> > What I have done in a similar situation, to minimize down time, is
> > to run a database ANALYZE with a very small target. I forget the
> > particular value we used -- it may have been 3. Then ran an
> > ANALYZE with the normal default target on a few key tables
> > (identified in testing to be the ones eating most of the scan time
> > with no statistics), and let users in. The database ANALYZE with
> > the normal default target was done while normal production hit the
> > database, without too much of a performance hit. With this
> > technique we were able to let users in with near-normal performance
> > with 10 or 15 minutes of down time rather than hours.
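
For anyone wanting to script Kevin's approach by hand, something like
this should work in a psql session (the small target value and the
table names below are only examples; pick the hot tables identified in
your own testing):

    -- coarse statistics for every table, fast
    SET default_statistics_target = 3;
    ANALYZE;

    -- normal-accuracy statistics for the key tables before letting users in
    RESET default_statistics_target;
    ANALYZE orders;        -- example table
    ANALYZE order_lines;   -- example table

    -- full pass at the normal target while production traffic runs
    ANALYZE;

SET is session-local, and ANALYZE uses default_statistics_target for
every column that has no per-column target, so the first pass is cheap
and the later passes simply overwrite the coarse statistics.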
>
> Thanks Kevin! In fact, I had already considered this approach and
> may have to resort to it. We run with default_statistics_target set
> fairly high at 1000 (a legacy setting here) without knowing for
> certain that it's required across the board (it most likely is not).
>
> Curious, though, whether it's known that the pg_statistic table can't
> be reloaded from its own pg_dump, due to the error I highlighted in
> the original post.
>
> Agreed that being able to manually load pg_statistic is of
> questionable usefulness, though perhaps viable under the right
> circumstances.
9.2 already creates an incremental statistics script called
analyze_new_cluster.sh; from the 9.2 release notes:

    Make pg_upgrade create a script to incrementally generate more
    accurate optimizer statistics (Bruce Momjian)

    This reduces the time needed to generate minimal cluster statistics
    after an upgrade.

The script runs ANALYZE three times, with statistics targets of 1, 10,
and then the default.
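
Roughly speaking, each pass is just vacuumdb --all --analyze-only with
the target forced through PGOPTIONS; a simplified sketch (the generated
script may set additional options):

    # pass 1: minimal statistics so the planner has something to work with
    PGOPTIONS='-c default_statistics_target=1' vacuumdb --all --analyze-only

    # pass 2: better statistics
    PGOPTIONS='-c default_statistics_target=10' vacuumdb --all --analyze-only

    # pass 3: full statistics at the server's default target
    vacuumdb --all --analyze-only
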
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ It's impossible for everything to be true. +