Thread: ERROR: unrecognized parameter "autovacuum_analyze_scale_factor"
Hey,
I have a very big TOAST table in my db (9.2.5). Autovacuum doesn't gather statistics on it because the analyze scale factor/threshold are at their defaults, and as a result autoanalyze never runs and the statistics are wrong:
select * from pg_stat_all_tables where relname='pg_toast_13488395';
-[ RECORD 1 ]-----+------------------------------
relid | 13388396
schemaname | pg_toast
relname | pg_toast_13488395
seq_scan | 42
seq_tup_read | 71163925
idx_scan | 5374497
idx_tup_fetch | 2530272449
n_tup_ins | 1253680014
n_tup_upd | 0
n_tup_del | 1253658363
n_tup_hot_upd | 0
n_live_tup | 49425717   <-- wrong
n_dead_tup | 7822920    <-- wrong
last_vacuum |
last_autovacuum | 2019-02-12 20:54:44.247083-05
last_analyze |
last_autoanalyze |
vacuum_count | 0
autovacuum_count | 3747
analyze_count | 0
autoanalyze_count | 0
When I try to set both the scale factor and the threshold, I get the following error:
alter table orig_table set (toast.autovacuum_analyze_scale_factor=0);
ERROR: unrecognized parameter "autovacuum_analyze_scale_factor"
Any idea why? I didn't find a note about a similar bug in the release notes.
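For reference, the rejection appears to be specific to the analyze-side options; the vacuum-side toast storage parameters are accepted. A sketch of the contrast (parameter behavior assumed, not verified on 9.2.5):

```sql
-- Vacuum-side toast storage parameters are recognized:
ALTER TABLE orig_table SET (toast.autovacuum_vacuum_scale_factor = 0.0005);
ALTER TABLE orig_table SET (toast.autovacuum_vacuum_threshold = 50000);

-- Analyze-side ones are rejected, since toast tables are never analyzed:
ALTER TABLE orig_table SET (toast.autovacuum_analyze_scale_factor = 0);
-- ERROR:  unrecognized parameter "autovacuum_analyze_scale_factor"
```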
On 2019-Feb-13, Mariel Cherkassky wrote:

> Hey,
> I have a very big TOAST table in my db (9.2.5).

Six years of bugfixes missing there ... you need to think about an
update.

> Autovacuum doesn't gather statistics on it because the analyze scale
> factor/threshold are at their defaults, and as a result autoanalyze
> never runs and the statistics are wrong:

analyze doesn't process toast tables anyway.

I think the best you could do is manually vacuum this table.

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
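A manual vacuum of the toast table can be issued directly; the name below is taken from the query output earlier in the thread, and the follow-up query simply re-checks the counters (a sketch):

```sql
-- Requires table ownership or superuser; vacuums only the toast table.
VACUUM VERBOSE pg_toast.pg_toast_13488395;

-- VACUUM also refreshes the n_live_tup / n_dead_tup estimates:
SELECT n_live_tup, n_dead_tup, last_vacuum
FROM pg_stat_all_tables
WHERE relname = 'pg_toast_13488395';
```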
To be honest, it isn't my db, but I just have access to it ...
Either way, do I need to change the analyze scale factor/threshold on the original table? But then the value will be too high or too low for one of the two tables. For example, if my original table has 30,000 rows and its toast table has 100,000,000 rows, and I want an analyze every 50K records in the toast table (by the way, does that sound reasonable?), that is 0.05% of 100M, i.e. a scale factor of 0.0005. With that value, every 0.0005 * 30,000 = 15 updates/deletes on the original table will trigger an analyze of the original table, which is very often...
Doesn't that seem a little bit problematic?
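For the arithmetic, autovacuum's trigger condition is roughly threshold + scale_factor * reltuples (defaults: threshold 50, analyze scale factor 0.1). A sketch with the numbers from this example, assuming a shared scale factor of 0.0005:

```sql
-- Trigger point on the 30,000-row main table:
SELECT 50 + 0.0005 * 30000      AS main_table_trigger;   -- 65 rows
-- Trigger point on the 100M-row toast table:
SELECT 50 + 0.0005 * 100000000  AS toast_table_trigger;  -- 50,050 rows
```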
On 2019-Feb-13, Mariel Cherkassky wrote:

> To be honest, it isn't my db, but I just have access to it ...

Well, I suggest you forget the password then :-)

> Either way, do I need to change the analyze scale factor/threshold on
> the original table? But then the value will be too high or too low for
> one of the two tables. For example, if my original table has 30,000
> rows and its toast table has 100,000,000 rows, and I want an analyze
> every 50K records in the toast table (by the way, does that sound
> reasonable?), that is 0.05% of 100M, i.e. a scale factor of 0.0005.
> With that value, every 0.0005 * 30,000 = 15 updates/deletes on the
> original table will trigger an analyze of the original table, which is
> very often...
> Doesn't that seem a little bit problematic?

Autovacuum considers main table and toast table separately for
vacuuming, so nothing you do to the parameters for the main table will
affect the vacuuming schedule of the toast table.

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
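Since the schedules are separate, the toast table's vacuum frequency can be tuned on its own via the toast.* storage parameters, without touching the main table's settings (a sketch):

```sql
-- Affects only the toast table's autovacuum (vacuum) schedule:
ALTER TABLE orig_table SET (
    toast.autovacuum_vacuum_scale_factor = 0.0005,
    toast.autovacuum_vacuum_threshold    = 50000
);
-- The main table keeps its own (default) thresholds.
```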
I meant the analyze: if analyze runs very often on the original table, aren't there disadvantages to that?
On 2019-Feb-14, Mariel Cherkassky wrote:

> I meant the analyze: if analyze runs very often on the original table,
> aren't there disadvantages to that?

It'll waste time and resources pointlessly. Don't do it -- it won't do
any good.

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
But then I don't have accurate statistics on my toasted table...
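Note that the planner does not consult optimizer statistics for toast tables at all; detoasting always goes through the toast index, which is why ANALYZE skips them. What a vacuum does refresh is the rough row/page estimate in pg_class, which can be checked like this (a sketch):

```sql
SELECT relname, reltuples, relpages
FROM pg_class
WHERE relname = 'pg_toast_13488395';
```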