On 12/30/2015 11:09 AM, Cory Tucker wrote:
> We have a performance problem accessing one of our tables, I think
> because the statistics are out of date. The table is fairly large, on
> the order of 100M rows or so.
> The fix I have employed to restore the speed of the query after I notice
> it is happening is to manually issue a VACUUM ANALYZE on the table.
> After the analyze is done, the query returns to its normal speed.
> autovacuum_analyze_scale_factor | 0.05 |
> autovacuum_analyze_threshold | 50 |
> autovacuum_vacuum_scale_factor | 0.1 |
> autovacuum_vacuum_threshold | 50 |
With those settings and a table on the order of 100M rows, you can
expect an autoanalyze only after roughly 5 million rows have changed
(0.05 * 100M + 50) and an autovacuum only after roughly 10 million
dead rows (0.1 * 100M + 50). In my experience (and, based on your
description, yours as well) that is not often enough. Not only that,
when it finally does run it runs longer than you would like and
causes an I/O hit while it does.
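Before changing anything, a quick way to see how stale things are is
to look at pg_stat_user_tables. A minimal sketch, assuming the table
is literally named foo (n_mod_since_analyze needs 9.4 or later):

SELECT relname, n_live_tup, n_dead_tup, n_mod_since_analyze,
       last_autovacuum, last_autoanalyze
  FROM pg_stat_user_tables
 WHERE relname = 'foo';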
You probably should tune this table specifically, e.g.
ALTER TABLE foo SET (autovacuum_vacuum_threshold = 100000,
                     autovacuum_vacuum_scale_factor = 0);
ALTER TABLE foo SET (autovacuum_analyze_threshold = 100000,
                     autovacuum_analyze_scale_factor = 0);
That will cause autovacuum and autoanalyze to run after every 100k
records changed (pick your own number here, but I have used this very
successfully in the past). This way the table not only stays well
vacuumed and analyzed, but each run finishes quickly and has minimal
impact.
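If you want to double check that the per-table overrides took effect,
the storage parameters show up in pg_class.reloptions (also visible
via \d+ foo in psql). Again a sketch, assuming the table is named foo:

SELECT relname, reloptions
  FROM pg_class
 WHERE relname = 'foo';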
HTH,
Joe
--
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development