Thread: pg_stat_user_tables.n_tup_ins empty for partitioned table
Hi all,

I've got a table named "root", partitioned on a date field into years (e.g., "2018") and then into months like "y2018m11", using range partitioning on PostgreSQL 11.2. Tuples are inserted into root with an INSERT...SELECT. I performed an UPDATE after adding a new column to the parent table y2018.

Today, checking the statistics of the table, I found that no tuples appear to have been inserted:

testdb=> SELECT relname, seq_scan, idx_scan,
                n_tup_ins, n_tup_del, n_tup_upd, n_tup_hot_upd,
                n_live_tup, n_dead_tup,
                last_vacuum, last_autovacuum,
                last_analyze, last_autoanalyze
         FROM pg_stat_user_tables;

-[ RECORD 12 ]---+------------------------------
relname          | y2018m11
seq_scan         | 42172
idx_scan         |
n_tup_ins        | 0
n_tup_del        | 0
n_tup_upd        | 28191462
n_tup_hot_upd    | 86
n_live_tup       | 14086279
n_dead_tup       | 0
last_vacuum      |
last_autovacuum  | 2019-03-28 17:23:35.909943+01
last_analyze     |
last_autoanalyze | 2019-03-28 17:25:12.773707+01

I suspect n_tup_ins is 0 because I inserted the tuples through the parent y2018, but I would like to understand this better: how can I have n_live_tup without having any n_tup_ins? Moreover, I don't have any stats for the parent table root, nor for y2018.

Some explanation/pointer is appreciated.

Thanks,
Luca
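For reference, a minimal sketch of the layout described above could look like the following; the column names, partition bounds, and the sample INSERT...SELECT feed are assumptions for illustration, not taken from the real schema:

CREATE TABLE root (
    ts      date NOT NULL,
    payload text
) PARTITION BY RANGE (ts);

-- yearly sub-partition, itself partitioned by month
CREATE TABLE y2018 PARTITION OF root
    FOR VALUES FROM ('2018-01-01') TO ('2019-01-01')
    PARTITION BY RANGE (ts);

CREATE TABLE y2018m11 PARTITION OF y2018
    FOR VALUES FROM ('2018-11-01') TO ('2018-12-01');

-- tuples are routed through the root, as described in the thread
INSERT INTO root (ts, payload)
SELECT d::date, 'x'
FROM generate_series('2018-11-01'::timestamp, '2018-11-30'::timestamp, interval '1 day') AS d;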
Hi,

On 2019-05-22 10:08:44 +0200, Luca Ferrari wrote:
> I've got a table named "root", partitioned on a date field into years
> (e.g., "2018") and then into months like "y2018m11", using range
> partitioning on PostgreSQL 11.2. Tuples are inserted into root with an
> INSERT...SELECT. I performed an UPDATE after adding a new column to the
> parent table y2018.
>
> Today, checking the statistics of the table, I found that no tuples
> appear to have been inserted:
>
> testdb=> SELECT relname, seq_scan, idx_scan,
>                 n_tup_ins, n_tup_del, n_tup_upd, n_tup_hot_upd,
>                 n_live_tup, n_dead_tup,
>                 last_vacuum, last_autovacuum,
>                 last_analyze, last_autoanalyze
>          FROM pg_stat_user_tables;
>
> -[ RECORD 12 ]---+------------------------------
> relname          | y2018m11
> seq_scan         | 42172
> idx_scan         |
> n_tup_ins        | 0
> n_tup_del        | 0
> n_tup_upd        | 28191462
> n_tup_hot_upd    | 86
> n_live_tup       | 14086279
> n_dead_tup       | 0
> last_vacuum      |
> last_autovacuum  | 2019-03-28 17:23:35.909943+01
> last_analyze     |
> last_autoanalyze | 2019-03-28 17:25:12.773707+01
>
> I suspect n_tup_ins is 0 because I inserted the tuples through the
> parent y2018, but I would like to understand this better: how can I
> have n_live_tup without having any n_tup_ins? Moreover, I don't have
> any stats for the parent table root, nor for y2018.
> Some explanation/pointer is appreciated.

That clearly seems wrong. Could you try to build a small reproducer?

Greetings,

Andres Freund
On Wed, May 22, 2019 at 7:55 PM Andres Freund <andres@anarazel.de> wrote:
> That clearly seems wrong. Could you try to build a small reproducer?

Apparently not: I've tried to simulate the same setup, but without any success, that is, n_tup_ins is always correctly set.

However, I've noted that this behavior applies up to February:

# select schemaname, relname, n_tup_ins, n_tup_upd, n_tup_del, n_live_tup
  from pg_stat_user_tables
  where n_tup_ins = 0 and n_live_tup > 0;

 schemaname | relname  | n_tup_ins | n_tup_upd | n_tup_del | n_live_tup
------------+----------+-----------+-----------+-----------+------------
 spire      | y2018m09 |         0 |  28961860 |         0 |    9708398
 spire      | y2018m10 |         0 |  29791202 |         0 |   14902436
 spire      | y2018m11 |         0 |  28191462 |         0 |   14086279
 spire      | y2018m12 |         0 |  29676478 |         0 |   14828806
 spire      | y2019m01 |         0 |  28769406 |         0 |   14381782
 spire      | y2019m02 |         0 |  27088208 |         0 |   13541677
 spire      | sensori  |         0 |       892 |         0 |        446

What I did change back then was to avoid the single INSERT...SELECT statement and use a FOR...SELECT loop with a single-row INSERT inside it instead (see the sketch below). But I don't think this is the reason; could it be a restore from a backup I don't remember? Unluckily I cannot reproduce this behavior so far.

And I stand corrected: the PostgreSQL version is 11.1:

# select version();
                                                  version
---------------------------------------------------------------------------------------------------------
 PostgreSQL 11.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit

Any idea is appreciated.
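The per-row loop mentioned above would be something along these lines; this is only a rough sketch, and the staging table and column names are made up:

DO $$
DECLARE
    r record;
BEGIN
    -- replace staging_data with whatever actually feeds the partitioned table
    FOR r IN SELECT ts, payload FROM staging_data LOOP
        INSERT INTO root (ts, payload) VALUES (r.ts, r.payload);
    END LOOP;
END;
$$;

Either form should bump n_tup_ins on the leaf partition that receives the rows, which is consistent with the loop not being the cause here.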
On Fri, 24 May 2019 at 19:20, Luca Ferrari <fluca1978@gmail.com> wrote:
>
> On Wed, May 22, 2019 at 7:55 PM Andres Freund <andres@anarazel.de> wrote:
> > That clearly seems wrong. Could you try to build a small reproducer?
>
> Apparently not: I've tried to simulate the same setup, but without any
> success, that is, n_tup_ins is always correctly set.
> However, I've noted that this behavior applies up to February

Did you perhaps reset the stats or fail over to a standby around February?

What does:

select stats_reset from pg_stat_database where datname = current_database();

say?

-- 
David Rowley                   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Fri, May 24, 2019 at 9:26 AM David Rowley <david.rowley@2ndquadrant.com> wrote:
> What does:
> select stats_reset from pg_stat_database where datname = current_database();
> say?

Good guess:

# select stats_reset from pg_stat_database where datname = current_database();
          stats_reset
-------------------------------
 2019-03-28 14:40:01.945332+01

Since the partition for that month has an n_tup_ins that is an order of magnitude lower than its n_live_tup, I suspect this is indeed the cause:

# select schemaname, relname, n_tup_ins, n_tup_upd, n_tup_del, n_live_tup
  from pg_stat_user_tables where relname = 'y2019m03';

-[ RECORD 1 ]--------
schemaname | spire
relname    | y2019m03
n_tup_ins  | 1671778
n_tup_upd  | 27167473
n_tup_del  | 0
n_live_tup | 15231270

Each table grows by around 20000 tuples per hour, that is roughly 480000 tuples per day. From the reset to the end of March there were about 3.5 days of inserts into that month's partition, that is around 1680000 tuples, which matches the value of n_tup_ins.

In conclusion, I did hit a stats reset (but I don't remember why). Around those days I was experimenting, unsuccessfully, with pgBackRest. I say unsuccessfully because, due to our policy, I could not connect the slave to the host via ssh. Could it be that this triggered a reset of the stats?

However, sorry for the noise.

Luca
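This matches what a stats reset does: pg_stat_reset() zeroes the per-table counters (n_tup_ins included), while a later ANALYZE or autoanalyze re-estimates n_live_tup, so the two can disagree. A quick sketch of the effect on a scratch database; the table name is made up for the example:

CREATE TABLE reset_demo (id int);
INSERT INTO reset_demo SELECT generate_series(1, 100000);

SELECT pg_stat_reset();   -- zeroes every counter for the current database
ANALYZE reset_demo;       -- reports a fresh live-tuple estimate to the stats collector

SELECT relname, n_tup_ins, n_live_tup
FROM pg_stat_user_tables
WHERE relname = 'reset_demo';
-- expect n_tup_ins = 0 and n_live_tup close to 100000
-- (the view may lag for a moment, since counters flow through the stats collector)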