Hi
We have a busy PostgreSQL 10 database with a principal table that holds about 15 million active rows and sees about 90M inserts/updates/deletes a day.
Everything performs well, except for a small number of monitoring SQL statements of the form
select code, max(timestamp) from mainTable group by code;
After running ANALYZE or VACUUM ANALYZE on the table, the query plan uses an index on (code, timestamp) and takes less than 1s to report the latest value for each code.
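For reference, the index and the check I run look roughly like this (the index name is illustrative; the column order is as described above):

-- illustrative name for the existing (code, timestamp) index
CREATE INDEX maintable_code_ts_idx ON mainTable (code, timestamp);

-- how I compare the fast and slow plans
EXPLAIN (ANALYZE, BUFFERS)
SELECT code, max(timestamp) FROM mainTable GROUP BY code;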
However, after about 20 minutes the query plan changes to a full table scan of mainTable, which takes about 30s. This is very puzzling because the nature of the table and data has not changed, although perhaps 5M row changes may have occurred in the meantime. The only possible cause I can see is autovacuum.
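The timing can be cross-checked against pg_stat_user_tables, which records when the last manual and automatic ANALYZE ran and how many rows have changed since:

-- unquoted identifiers fold to lower case, hence 'maintable'
SELECT relname, last_analyze, last_autoanalyze, n_mod_since_analyze
FROM pg_stat_user_tables
WHERE relname = 'maintable';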
I have been playing with the following settings (a per-table sketch follows the list):
autovacuum_analyze_threshold
autovacuum_analyze_scale_factor
default_statistics_target
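For example, the autovacuum thresholds can be overridden per table rather than globally, and the statistics target can be raised for just the grouping column; the numbers here are placeholders, not recommendations:

-- per-table autovacuum tuning (placeholder values)
ALTER TABLE mainTable SET (
  autovacuum_analyze_scale_factor = 0.0,
  autovacuum_analyze_threshold    = 1000000
);

-- larger sample for the grouping column (placeholder value)
ALTER TABLE mainTable ALTER COLUMN code SET STATISTICS 1000;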
What is the best option to ensure that the table statistics stay up to date and that the best query plan is generated? One option would be to use cron to regenerate the table statistics every 15 minutes, or to disable autovacuum's ANALYZE function for this table, but neither of these options feels right.
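The cron variant would be something like the following (database name and schedule are hypothetical):

# crontab entry: refresh statistics every 15 minutes
*/15 * * * * psql -d mydb -c 'ANALYZE mainTable;'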
How does the autovacuum ANALYZE function work? Does it:
- use a subset of the whole table to regenerate the table statistics
- use a subset of the rows changed to regenerate the table statistics
- use a subset of the rows changed to update the table statistics
- or something else? (The pg_stats check below is my attempt to observe this.)
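One way to see what each run actually does, rather than guess, is to snapshot pg_stats for the relevant columns before and after an autoanalyze and compare:

SELECT attname, n_distinct, correlation
FROM pg_stats
WHERE tablename = 'maintable'
  AND attname IN ('code', 'timestamp');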
Any help or suggestions with this would be appreciated
Simon
Simon Windsor
“There is nothing in the world that some man cannot make a little worse and sell a little cheaper, and he who considers price only is that man's lawful prey.”