Thread: postgres=# VACUUM FULL pg_statistic => ERROR: missing chunk number 0 for toast value .. in pg_toast_2619

I recall seeing various discussions hoping that this had finally been fixed - just
wanted to report that it has now happened under postgres 10.4.

It looks like this is not related to commit 0408e1ed599b06d9bca2927a50a4be52c9e74bb9,
which is for "unexpected chunk number" (?)

Note that this is on the postgres database, which I think is where I saw it on
one of our internal VMs in the past (although my memory indicates it may
have affected multiple DBs).  In the immediate case, this is a customer's centos6
VM running under qemu/KVM: the same configuration as our internal VM which had
this issue (I just found a ticket dated 2016-10-06).

In case it helps:
 - the postgres database has a few things in it, primarily imported CSV logs.
   On this particular server, there's actually a 150GB table with old CSV logs
   from a script I fixed recently to avoid saving more lines than intended
   (something like, for each session_id, every session_line following an
   error_severity!='LOG')
 - I also have copies of pg_stat_bgwriter, pg_settings, and an aggregated copy
   of pg_buffercache here.
 - nagios: some scripts loop around all DBs; some maybe connect directly to
   postgres (for example, to list DBs).  However, I don't think check_postgres
   connects to the postgres DB.

I'll defer fixing this for awhile in case someone wants me to save a copy of
the relation/toast/index.  From last time, I recall this just needs the right
combination of REINDEX/VACUUM/ANALYZE, and the only complication was me
needing to realize the right combination of affected DB(s).
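
For the archives, the sort of repair sequence I mean is roughly the following
(a sketch only, from memory, not a known-good recipe; the_affected_table is a
placeholder for whichever table's stats entry turns out to be broken):

-- pg_toast_2619 is pg_statistic's toast table
REINDEX TABLE pg_toast.pg_toast_2619;
VACUUM ANALYZE pg_statistic;
-- if a damaged stats row remains, drop it and let ANALYZE rebuild it
-- (the_affected_table is a placeholder):
DELETE FROM pg_statistic WHERE starelid='the_affected_table'::regclass;
ANALYZE the_affected_table;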

Thanks,
Justin


Justin Pryzby <pryzby@telsasoft.com> writes:
> I'll defer fixing this for awhile in case someone wants me to save a copy of
> the relation/toast/index.  From last time, I recall this just needs the right
> combination of REINDEX/VACUUM/ANALYZE, and the only complication was me
> needing to realize the right combination of affected DB(s).

If you could come up with such a sequence that causes the problem
reproducibly, that would be of huge interest, and probably lead to
a fix promptly.  But I don't think that we can do much by looking
at the post-mortem state --- once the toast rows are gone, they're
gone, especially if the table's been vacuumed since.

            regards, tom lane


On Sat, May 19, 2018 at 11:08:23AM -0400, Tom Lane wrote:
> Justin Pryzby <pryzby@telsasoft.com> writes:
> > I'll defer fixing this for awhile in case someone wants me to save a copy of
> > the relation/toast/index.  From last time, I recall this just needs the right
> > combination of REINDEX/VACUUM/ANALYZE, and the only complication was me
> > needing to realize the right combination of affected DB(s).
> 
> If you could come up with such a sequence that causes the problem
> reproducibly, that would be of huge interest, and probably lead to
> a fix promptly.  But I don't think that we can do much by looking
> at the post-mortem state --- once the toast rows are gone, they're
> gone, especially if the table's been vacuumed since.

This is unlikely to allow reproducing it, but for the sake of completeness here's a
fuller log.  I'll try to trigger it on another DB.

postgres=# SELECT log_time, database, session_id, left(message,99) FROM postgres_log WHERE log_time BETWEEN '2018-05-19
07:49:01' AND '2018-05-19 07:50' AND (database IS NULL OR database='postgres') ORDER BY 1 ;
 
 2018-05-19 07:49:02.232-06 |          | 5afbc238.382f | checkpoint complete: wrote 32175 buffers (6.1%); 0 WAL file(s)
added, 0 removed, 8 recycled; write=
 
 2018-05-19 07:49:02.261-06 | postgres | 5b002b4e.65f2 | statement: SHOW server_version
 2018-05-19 07:49:02.278-06 | postgres | 5b002b4e.65f7 | statement: SELECT
pg_get_indexdef('jrn_postgres_log_log_time_idx'::regclass)
 2018-05-19 07:49:02.29-06  | postgres | 5b002b4e.65f9 | statement: SELECT 1 FROM information_schema.tables WHERE
table_name='postgres_log' LIMIT 1
 
 2018-05-19 07:49:02.311-06 | postgres | 5b002b4e.65fb | statement: SELECT 1 FROM pg_class WHERE
relname='jrn_postgres_log'
 2018-05-19 07:49:02.324-06 | postgres | 5b002b4e.65fd | statement: SELECT 1 FROM pg_class WHERE
relname='jrn_postgres_log_unique_idx'
 2018-05-19 07:49:02.338-06 | postgres | 5b002b4e.65ff | statement: SELECT 1 FROM pg_class WHERE
relname='jrn_postgres_log_log_time_idx'
 2018-05-19 07:49:02.353-06 | postgres | 5b002b4e.6601 | statement: SELECT 1 FROM pg_class WHERE
relname='jrn_postgres_log_error_severity_idx'
 2018-05-19 07:49:02.37-06  | postgres | 5b002b4e.6603 | statement: SELECT 1 FROM pg_class WHERE
relname='jrn_postgres_log_message_system_idx'
 2018-05-19 07:49:02.39-06  | postgres | 5b002b4e.6605 | statement: SELECT 1 FROM pg_class WHERE
relname='jrn_postgres_log_error_message_idx'
 2018-05-19 07:49:02.405-06 | postgres | 5b002b4e.6607 | statement: SELECT 1 FROM pg_class WHERE
relname='jrn_postgres_log_duration_idx'
 2018-05-19 07:49:02.422-06 | postgres | 5b002b4e.6609 | statement: SELECT 1 FROM pg_class WHERE
relname='jrn_postgres_log_quotedquoted_idx'
 2018-05-19 07:49:02.464-06 | postgres | 5b002b4e.6619 | statement: SELECT 1 FROM pg_class WHERE
relname='postgres_log_2018_05_19_0700'
 2018-05-19 07:49:02.482-06 | postgres | 5b002b4e.661c | statement: COPY postgres_log_2018_05_19_0700 FROM
'/var/log/postgresql/postgresql-2018-05-19_074617
 2018-05-19 07:49:04.711-06 | postgres | 5b002b50.6627 | statement: SELECT 1 FROM pg_class WHERE
relname='postgres_log_2018_05_19_0700'
 2018-05-19 07:49:04.724-06 | postgres | 5b002b50.662a | statement: COPY postgres_log_2018_05_19_0700 FROM
'/var/log/postgresql/postgresql-2018-05-19_074643
 2018-05-19 07:49:06.803-06 | postgres | 5b002b52.6637 | statement: SELECT
pg_get_indexdef('jrn_postgres_log_duration_idx'::regclass)
 2018-05-19 07:49:06.837-06 | postgres | 5b002b52.6639 | statement: SELECT inhrelid::regclass::text FROM pg_inherits i
LEFT JOIN pg_constraint c ON i.inhrel
 
 2018-05-19 07:49:06.867-06 | postgres | 5b002b52.663b | statement: SELECT inhrelid::regclass::text FROM pg_inherits
WHERE inhparent='postgres_log'::regclas
 
 2018-05-19 07:49:06.918-06 | postgres | 5b002b52.6641 | statement: SELECT log_time<now()-'25 hours'::interval FROM
postgres_log_2018_05_18_0700 LIMIT 1
 
 2018-05-19 07:49:14.126-06 | postgres | 5b002b5a.66c9 | statement: SELECT DISTINCT ON (session_id) log_time,
session_id,replace(regexp_replace(detail,'^(.
 
 2018-05-19 07:49:32.264-06 |          | 5afbc238.382f | checkpoint starting: time
 2018-05-19 07:49:33.972-06 |          | 5b002b59.66c1 | automatic analyze of table
"ts.public.cdrs_huawei_sgwrecord_2018_05_19"system usage: CPU: user: 6.
 
 2018-05-19 07:49:38.192-06 | postgres | 5b002b72.69d5 | statement: SELECT starelid::regclass, attname FROM
pg_statistics JOIN pg_attribute a              +
 
                            |          |               |                 ON a.attrel
 2018-05-19 07:49:38.232-06 | postgres | 5b002b72.69d8 | statement: DELETE FROM pg_statistic s USING pg_attribute a
WHERE                                  +
 
                            |          |               |                 a.attrelid=s.starelid AND a.attn
 2018-05-19 07:49:38.266-06 | postgres | 5b002b72.69da | statement: SELECT n.nspname as "Schema",
                                   +
 
                            |          |               |   c.relname as "Name",
                                   +
 
                            |          |               |   CASE c.relkind WHEN 'r' THEN 'tab
 2018-05-19 07:49:38.292-06 | postgres | 5b002b72.69dd | statement: VACUUM FULL pg_statistic
 2018-05-19 07:49:38.373-06 | postgres | 5b002b72.69dd | missing chunk number 0 for toast value 730125403 in
pg_toast_2619
...

I doubt it's related, but before VACUUM FULLing pg_statistic, the script does
this (attempting to avoid huge pg_statistic on wide tables partitioned daily,
for which only a handful of the columns are used in query conditions - as an
alternative to SET STATISTICS 0 on 1000+ columns):

DELETE FROM pg_statistic s USING pg_attribute a WHERE
s.starelid::regclass::text~'(_[0-9]{6}|_[0-9]{8})$'
AND NOT (attnotnull OR attname='start_time' OR attname LIKE '%_id')
AND [ some even uglier conditions ]

And the preceding SELECT is to display (with LIMIT) a sample of what's being
DELETEd, since it's not very exact ..
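
For reference, the preview query looks roughly like this (a sketch; the LIMIT
and the commented-out condition stand in for the real, uglier ones):

SELECT s.starelid::regclass, a.attname
FROM pg_statistic s
JOIN pg_attribute a ON a.attrelid=s.starelid AND a.attnum=s.staattnum
WHERE s.starelid::regclass::text ~ '(_[0-9]{6}|_[0-9]{8})$'
AND NOT (a.attnotnull OR a.attname='start_time' OR a.attname LIKE '%_id')
-- AND [ some even uglier conditions ]
LIMIT 10;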

Justin


On Sat, May 19, 2018 at 11:24:57AM -0500, Justin Pryzby wrote:
> On Sat, May 19, 2018 at 11:08:23AM -0400, Tom Lane wrote:
> > Justin Pryzby <pryzby@telsasoft.com> writes:
> > > I'll defer fixing this for awhile in case someone wants me to save a copy of
> > > the relation/toast/index.  From last time, I recall this just needs the right
> > > combination of REINDEX/VACUUM/ANALYZE, and the only complication was me
> > > needing to realize the right combination of affected DB(s).
> > 
> > If you could come up with such a sequence that causes the problem
> > reproducibly, that would be of huge interest, and probably lead to
> > a fix promptly.  But I don't think that we can do much by looking
> > at the post-mortem state --- once the toast rows are gone, they're
> > gone, especially if the table's been vacuumed since.
> 
> This is unlikely to allow reproducing it, but for the sake of completeness here's a
> fuller log.  I'll try to trigger it on another DB.

Did not take long...

[pryzbyj@database ~]$ while :; do for db in `psql postgres -Atc "SELECT datname FROM pg_database WHERE datallowconn"`;
do for t in pg_statistic pg_attrdef pg_constraint; do echo "$db.$t..."; PGOPTIONS=-cstatement_timeout='9s' psql $db -qc
"VACUUM FULL $t"; done; done; done
 

...
postgres.pg_statistic...
postgres.pg_attrdef...
postgres.pg_constraint...
template1.pg_statistic...
template1.pg_attrdef...
template1.pg_constraint...
ts.pg_statistic...
ERROR:  canceling statement due to statement timeout
ts.pg_attrdef...
ts.pg_constraint...
postgres.pg_statistic...
ERROR:  missing chunk number 0 for toast value 3372855171 in pg_toast_2619

I'm running this again on another DB, but I wonder if that's enough for anyone
else to reproduce it with some consistency?  I think that took something like
10min before failing.

Justin


Justin Pryzby <pryzby@telsasoft.com> writes:
> [pryzbyj@database ~]$ while :; do for db in `psql postgres -Atc "SELECT datname FROM pg_database WHERE
> datallowconn"`; do for t in pg_statistic pg_attrdef pg_constraint; do echo "$db.$t...";
> PGOPTIONS=-cstatement_timeout='9s' psql $db -qc "VACUUM FULL $t"; done; done; done

> ...
> postgres.pg_statistic...
> postgres.pg_attrdef...
> postgres.pg_constraint...
> template1.pg_statistic...
> template1.pg_attrdef...
> template1.pg_constraint...
> ts.pg_statistic...
> ERROR:  canceling statement due to statement timeout
> ts.pg_attrdef...
> ts.pg_constraint...
> postgres.pg_statistic...
> ERROR:  missing chunk number 0 for toast value 3372855171 in pg_toast_2619

Hm, so was the timeout error happening every time through on that table,
or just occasionally, or did you provoke it somehow?  I'm wondering how
your 9s timeout relates to the expected completion time.

I don't have any test DBs with anywhere near large enough stats to
require 9s to vacuum pg_statistic, but I'm trying this with a
much-reduced value of statement_timeout, and so far no failures ...

            regards, tom lane


On Sat, May 19, 2018 at 02:39:26PM -0400, Tom Lane wrote:
> Hm, so was the timeout error happening every time through on that table,
> or just occasionally, or did you provoke it somehow?  I'm wondering how
> your 9s timeout relates to the expected completion time.

I did not knowingly provoke it :)

Note that in my script's non-artificial failure this morning, the VACUUM FULL of
pg_statistic DIDN'T time out, but the one on the preceding relation (pg_attrdef)
DID.  I guess the logs I sent earlier were incomplete.

I don't know if it times out every time, but I'm thinking the timeout is
implicated; still, I don't see how a timeout of a previous command could cause an
error in a future session, for a non-"shared" relation.

However, I see this happened (after a few hours) on one server where I was
looping WITHOUT timeout.  So hopefully they have the same root cause and
timeout will be a good way to help trigger it.

    postgres.pg_statistic...
    ERROR:  missing chunk number 0 for toast value 615791167 in pg_toast_2619
    Sat May 19 17:18:03 EDT 2018

I should have sent the output from my script:

<<Sat May 19 07:48:51 MDT 2018: starting db=ts(analyze parents and un-analyzed tables)
...
DELETE 11185
Sat May 19 07:49:15 MDT 2018: ts: VACUUM FULL pg_catalog|pg_statistic|table|postgres|845 MB|...
ERROR:  canceling statement due to statement timeout

Sat May 19 07:49:25 MDT 2018: ts: VACUUM FULL pg_catalog|pg_attrdef|table|postgres|305 MB|...
ERROR:  canceling statement due to statement timeout

Sat May 19 07:49:36 MDT 2018: ts: VACUUM FULL pg_catalog|pg_constraint|table|postgres|14 MB|...
Sat May 19 07:49:37 MDT 2018: ts: VACUUM FULL pg_catalog|pg_constraint|table|postgres|14 MB|...done


<<Sat May 19 07:49:37 MDT 2018: starting db=postgres(analyze parents and un-analyzed tables)
DELETE 0
Sat May 19 07:49:38 MDT 2018: postgres: VACUUM FULL pg_catalog|pg_statistic|table|postgres|3344 kB|...
ERROR:  missing chunk number 0 for toast value 730125403 in pg_toast_2619

BTW I just grepped logs for this error.  I see it's happened at some point at
fifteen of our customers going back to Nov 2, 2016, shortly after I implemented
VACUUM FULL of pg_statistic (but not other tables).
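
(I grepped the file logs, but the equivalent query against our imported CSV logs
would be something like this - a sketch, assuming a postgres_log table like ours:)

SELECT min(log_time), max(log_time), count(*)
FROM postgres_log
WHERE message LIKE 'missing chunk number % for toast value % in pg_toast_2619';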

I hadn't noticed most of the errors because it seems to fix itself, at least
sometimes.

Justin


On Sat, May 19, 2018 at 02:39:26PM -0400, Tom Lane wrote:
> Justin Pryzby <pryzby@telsasoft.com> writes:
> > [pryzbyj@database ~]$ while :; do for db in `psql postgres -Atc "SELECT datname FROM pg_database WHERE
> > datallowconn"`; do for t in pg_statistic pg_attrdef pg_constraint; do echo "$db.$t...";
> > PGOPTIONS=-cstatement_timeout='9s' psql $db -qc "VACUUM FULL $t"; done; done; done
 
> 
> > ...
> > postgres.pg_statistic...
> > postgres.pg_attrdef...
> > postgres.pg_constraint...
> > template1.pg_statistic...
> > template1.pg_attrdef...
> > template1.pg_constraint...
> > ts.pg_statistic...
> > ERROR:  canceling statement due to statement timeout
> > ts.pg_attrdef...
> > ts.pg_constraint...
> > postgres.pg_statistic...
> > ERROR:  missing chunk number 0 for toast value 3372855171 in pg_toast_2619
> 
> Hm, so was the timeout error happening every time through on that table,
> or just occasionally, or did you provoke it somehow?  I'm wondering how
> your 9s timeout relates to the expected completion time.

Actually statement_timeout isn't essential for this (maybe it helps to trigger
it more often - not sure).

Could you try:
time sh -ec 'while :; do time psql postgres -c "VACUUM FULL VERBOSE pg_toast.pg_toast_2619"; psql postgres -c "VACUUM
FULL VERBOSE pg_statistic"; done'; date
 

Three servers experienced the error within 30min, but one server didn't fail until
12h later, and a handful of others still haven't failed.

Does this help at all?
 2018-05-24 21:57:49.98-03  | 5b075f8d.1ad1 | LOG            | pryzbyj   | postgres | statement: VACUUM FULL VERBOSE
pg_toast.pg_toast_2619
 2018-05-24 21:57:50.067-03 | 5b075f8d.1ad1 | INFO           | pryzbyj   | postgres | vacuuming
"pg_toast.pg_toast_2619"
 2018-05-24 21:57:50.09-03  | 5b075f8d.1ad1 | INFO           | pryzbyj   | postgres | "pg_toast_2619": found 0
removable, 408 nonremovable row versions in 99 pages
 
 2018-05-24 21:57:50.12-03  | 5b075f8e.1ada | LOG            | pryzbyj   | postgres | statement: VACUUM FULL VERBOSE
pg_statistic
 2018-05-24 21:57:50.129-03 | 5b075f8e.1ada | INFO           | pryzbyj   | postgres | vacuuming
"pg_catalog.pg_statistic"
 2018-05-24 21:57:50.185-03 | 5b075f8e.1ada | ERROR          | pryzbyj   | postgres | missing chunk number 0 for toast
value 3382957233 in pg_toast_2619
 

Same thing; this server has autovacuum logging, although it's not clear to me
if that's an essential component of the problem, either:
 2018-05-24 21:16:39.856-06 | LOG   | 5b078017.7b99 | pryzbyj   | postgres | statement: VACUUM FULL VERBOSE
pg_toast.pg_toast_2619
 2018-05-24 21:16:39.876-06 | LOG   | 5b078010.7968 |           |          | automatic vacuum of table
"postgres.pg_toast.pg_toast_2619": index scans: 1                        +
 
                            |       |               |           |          | pages: 0 removed, 117 r
 2018-05-24 21:16:39.909-06 | INFO  | 5b078017.7b99 | pryzbyj   | postgres | vacuuming "pg_toast.pg_toast_2619"
 2018-05-24 21:16:39.962-06 | INFO  | 5b078017.7b99 | pryzbyj   | postgres | "pg_toast_2619": found 0 removable, 492
nonremovable row versions in 117 pages
 
 2018-05-24 21:16:40.025-06 | LOG   | 5b078018.7b9b | pryzbyj   | postgres | statement: VACUUM FULL VERBOSE
pg_statistic
 2018-05-24 21:16:40.064-06 | INFO  | 5b078018.7b9b | pryzbyj   | postgres | vacuuming "pg_catalog.pg_statistic"
 2018-05-24 21:16:40.145-06 | ERROR | 5b078018.7b9b | pryzbyj   | postgres | missing chunk number 0 for toast value
765874692 in pg_toast_2619
 

Or this one?

postgres=# SELECT log_time, database, user_name, session_id, left(message,999) FROM postgres_log WHERE
(log_time>='2018-05-24 19:56' AND log_time<'2018-05-24 19:58') AND (database='postgres' OR database IS NULL OR user_name
IS NULL OR user_name='pryzbyj') AND message NOT LIKE 'statement:%' ORDER BY 1;
 

log_time   | 2018-05-24 19:56:35.396-04
database   | 
user_name  | 
session_id | 5b075131.3ec0
left       | skipping vacuum of "pg_toast_2619" --- lock not available

...

log_time   | 2018-05-24 19:57:35.78-04
database   | 
user_name  | 
session_id | 5b07516d.445e
left       | automatic vacuum of table "postgres.pg_toast.pg_toast_2619": index scans: 1
           : pages: 0 removed, 85 remain, 0 skipped due to pins, 0 skipped frozen
           : tuples: 1 removed, 348 remain, 0 are dead but not yet removable, oldest xmin: 63803106
           : buffer usage: 179 hits, 4 misses, 87 dirtied
           : avg read rate: 1.450 MB/s, avg write rate: 31.531 MB/s
           : system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.02 s

log_time   | 2018-05-24 19:57:35.879-04
database   | postgres
user_name  | pryzbyj
session_id | 5b07516f.447f
left       | missing chunk number 0 for toast value 624341680 in pg_toast_2619

log_time   | 2018-05-24 19:57:44.332-04
database   |
user_name  |
session_id | 5af9fda3.70d5
left       | checkpoint starting: time

Justin


Moving this old thread to -hackers
https://www.postgresql.org/message-id/flat/20180519142603.GA30060%40telsasoft.com

I wanted to mention that this seems to still be an issue, now running pg11.5.

log_time               | 2019-08-30 23:20:00.118+10
user_name              | postgres
database               | ts
session_id             | 5d69227e.235
session_line           | 1
command_tag            | CLUSTER
session_start_time     | 2019-08-30 23:19:58+10
error_severity         | ERROR
sql_state_code         | XX000
message                | unexpected chunk number 1 (expected 0) for toast value 2369261203 in pg_toast_2619
query                  | CLUSTER pg_statistic USING pg_statistic_relid_att_inh_index
application_name       | psql

Note that my original report was for a "missing" chunk during VACUUM FULL, and
the current error is an "unexpected" chunk during CLUSTER.  I imagine that's a
related issue.  I haven't seen this in a while (but stopped trying to reproduce
it long ago).  A recently-deployed update to this maintenance script is
probably why it's now doing CLUSTER.
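
(That is, where the maintenance script used to issue the former, it now issues
something like the latter:)

VACUUM FULL pg_statistic;
CLUSTER pg_statistic USING pg_statistic_relid_att_inh_index;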
