RE: autovacuum failing on pg_largeobject and disk usage of the pg_largeobject growing unchecked - Mailing list pgsql-general

From Jim Hurne
Subject RE: autovacuum failing on pg_largeobject and disk usage of the pg_largeobject growing unchecked
Date
Msg-id OF6F76F708.753D5B4C-ON8525858F.00625B6E-8525858F.006DF3EF@notes.na.collabserv.com
In response to RE: autovacuum failing on pg_largeobject and disk usage of the pg_largeobject growing unchecked  ("Jim Hurne" <jhurne@us.ibm.com>)
Responses Re: autovacuum failing on pg_largeobject and disk usage of the pg_largeobject growing unchecked
RE: autovacuum failing on pg_largeobject and disk usage of the pg_largeobject growing unchecked
List pgsql-general
We're still struggling to figure out why autovacuum seems to be failing or
is ineffective sometimes against the pg_largeobject table.

We decided to try a VACUUM FULL VERBOSE on one of our problematic
databases. It did complete, but it took a surprisingly long time:

  INFO:  vacuuming "pg_catalog.pg_largeobject"
  INFO:  "pg_largeobject": found 97326130 removable, 22230 nonremovable row versions in 207508868 pages
  DETAIL:  0 dead row versions cannot be removed yet.
  CPU: user: 1085.87 s, system: 3803.66 s, elapsed: 10883.94 s.
  VACUUM

So it took about 3 hours.  It's surprising because there was only 58 MB of
undeleted data in about 65 large objects. When we ran this particular
vacuum, we made sure there were no other processes connected to the DB and
that no other statements were running that would have locked the table.
It's my understanding that a VACUUM FULL works by creating a brand new
table and copying the "non-deleted" data into the new table. If that's the
case, I would have expected it to complete very quickly, given how little
"non-deleted" data existed at the time.
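
If the VACUUM VERBOSE numbers are taken at face value, the old table's sheer size may explain the runtime: even when almost everything is dead space, VACUUM FULL still has to read every page of the old relation while building the new one. A back-of-envelope sanity check (a sketch, assuming the default 8 kB PostgreSQL block size):

```python
# Back-of-envelope check of the VACUUM FULL VERBOSE output above,
# assuming the default 8 kB PostgreSQL block size.
BLOCK_SIZE = 8192                 # bytes per page (PostgreSQL default)

pages = 207_508_868               # pages reported by VACUUM VERBOSE
elapsed_s = 10_883.94             # elapsed seconds reported by VACUUM

table_bytes = pages * BLOCK_SIZE
table_tib = table_bytes / 1024**4
scan_rate_mb_s = table_bytes / elapsed_s / 1_000_000

print(f"pg_largeobject on-disk size: {table_tib:.2f} TiB")
print(f"effective scan rate: {scan_rate_mb_s:.0f} MB/s")
```

That works out to roughly 1.5 TiB processed at about 150 MB/s, which is plausible for a sequential rewrite; the three hours would then be I/O over the accumulated bloat rather than over the 58 MB of live data.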

We are of course going to continue to try different things, but does
anyone have any other suggestions on what we should be looking at or what
settings we might want to adjust?

For reference, here was my original problem description:

> We have a cloud service that uses PostgreSQL to temporarily store binary
> content. We're using PostgreSQL's Large Objects to store the binary
> content. Each large object lives anywhere from a few hundred milliseconds
> to 5-10 minutes, after which it is deleted.
>
> Normally, this works just fine and we have no issues. However, roughly
> every 3 months or so, disk usage on our PostgreSQL database starts to
> increase at an alarming rate. More specifically, the pg_largeobject
> table's disk usage steadily increases even though we've deleted most of
> the large objects. This will continue until the database server runs out
> of disk space (and then all sorts of bad things happen of course).
>
> It looks to us like autovacuum against the pg_largeobject table stops
> working, because the pg_stat_all_tables.last_autovacuum column stops
> updating, or is updated infrequently, while it is still updated on other
> tables.
>
> We're currently using PostgreSQL version 10.10. Our service is JVM-based
> and we're using the PostgreSQL JDBC driver version 42.2.5.
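
For context on why the stalled last_autovacuum is so surprising: with PostgreSQL's documented trigger rule, autovacuum runs once dead tuples exceed autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples. A quick calculation (a sketch, assuming the stock defaults of 50 and 0.2; a tuned system may differ) using the 22230 nonremovable rows reported above:

```python
# Default autovacuum trigger rule (per the PostgreSQL docs):
#   vacuum when n_dead_tup > threshold + scale_factor * reltuples
# The two settings below are the stock defaults (an assumption here);
# a tuned installation may use different values.
autovacuum_vacuum_threshold = 50
autovacuum_vacuum_scale_factor = 0.2

reltuples = 22_230  # nonremovable row versions from the VACUUM output

trigger = autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples
print(f"autovacuum should trigger after ~{trigger:.0f} dead tuples")
```

With only ~22K live rows, autovacuum should kick in after just a few thousand dead tuples, so under our churn rate it ought to be running nearly continuously on pg_largeobject; the fact that last_autovacuum stops advancing suggests the worker is erroring out or being cancelled rather than simply not being scheduled.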


Regards,

Jim Hurne



