Thread: large objects & space usage

large objects & space usage

From
Kevin Kempter
Date:
Hi List;

I'm engaging with a client who is trying to solve what sounds like a bloat
issue. However, in further conversations they told me this:

1) we have a table that has an integer key and a blob column
2) the table was at 30Gig; we went in and deleted the last 2 months of data
and it's still at 30Gig

At this point I thought bloat, and that VACUUM FULL or a table rebuild would
fix the issue. However, later they told me:

3) we did a pg_dump and a reload into a database on the test box and the table
is still 30Gig.

I haven't gotten access to their servers yet, so I only have the above info.
However, I wonder: are there special space considerations, storage
requirements, etc. with blobs?

Thanks in advance...

/Kevin
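
To pin down what "the table is still 30Gig" actually measures, a quick check
along these lines would help once access is available ("blob_table" here is a
placeholder for the real table name):

    -- heap alone vs. heap plus indexes and TOAST data
    SELECT pg_size_pretty(pg_relation_size('blob_table'))       AS heap_only,
           pg_size_pretty(pg_total_relation_size('blob_table')) AS with_indexes_and_toast;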

Re: large objects & space usage

From
"Milen A. Radev"
Date:
Kevin Kempter wrote:
> Hi List;
>
> I'm engaging with a client who is trying to solve what sounds like a bloat
> issue. However, in further conversations they told me this:
>
> 1) we have a table that has an integer key and a blob column
> 2) the table was at 30Gig; we went in and deleted the last 2 months of data
> and it's still at 30Gig
>
> At this point I thought bloat, and that VACUUM FULL or a table rebuild would
> fix the issue. However, later they told me:
>
> 3) we did a pg_dump and a reload into a database on the test box and the table
> is still 30Gig.
>
> I haven't gotten access to their servers yet, so I only have the above info.
> However, I wonder: are there special space considerations, storage
> requirements, etc. with blobs?
[...]

So what is it - large objects or BYTEA?


--
Milen A. Radev
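
The distinction matters because the bytes live in different places. A rough
illustration (the table names and file path are made up): a bytea column keeps
the data in the table itself (TOASTed when large), while the large-object
interface stores the data in the system catalog pg_largeobject and the user
table holds only an OID reference:

    -- bytea: data is stored with the table (and its TOAST table)
    CREATE TABLE docs_bytea (id integer PRIMARY KEY, body bytea);

    -- large object: the table stores only an OID; the bytes go into pg_largeobject
    CREATE TABLE docs_lo (id integer PRIMARY KEY, body oid);
    INSERT INTO docs_lo VALUES (1, lo_import('/tmp/example.bin'));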

Re: large objects & space usage

From
Tom Lane
Date:
Kevin Kempter <kevin@kevinkempterllc.com> writes:
> 3) we did a pg_dump and a reload into a database on the test box and the table
> is still 30Gig.

You need to find out *exactly* what they did there.

The idea that comes to mind is that they dropped and reloaded the table
itself, but didn't do anything about cleaning out pg_largeobject, and
when they say "the table is still 30Gig" they actually mean the whole
database directory is still 30Gig.

If there are no other tables storing large objects in the database,
dropping and reloading the whole database is probably the simplest
solution.  Otherwise they're going to need to make sure the unwanted
LOs have been cleaned out (see contrib/vacuumlo) and then do a VACUUM
FULL or similar on pg_largeobject.

            regards, tom lane
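
One way to confirm that theory, once access is available, is to compare the
reloaded table against pg_largeobject and the database as a whole ("blob_table"
is again a placeholder):

    SELECT pg_size_pretty(pg_total_relation_size('blob_table'))     AS user_table,
           pg_size_pretty(pg_total_relation_size('pg_largeobject')) AS large_objects,
           pg_size_pretty(pg_database_size(current_database()))     AS whole_database;

If nearly all of the 30Gig shows up under pg_largeobject, the vacuumlo plus
VACUUM FULL cleanup described above is what's needed, not a rebuild of the
user table.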

Re: large objects & space usage

From
David Wall
Date:
> If there are no other tables storing large objects in the database,
> dropping and reloading the whole database is probably the simplest
> solution.  Otherwise they're going to need to make sure the unwanted
> LOs have been cleaned out (see contrib/vacuumlo) and then do a VACUUM
> FULL or similar on pg_largeobject.
>
Does the psql command 'vacuum;' by itself automatically vacuum
pg_largeobject along with all of the tables in the database?

David


Re: large objects & space usage

From
Tom Lane
Date:
David Wall <d.wall@computer.org> writes:
> Does the psql command 'vacuum;' by itself automatically vacuum
> pg_largeobject along with all of the tables in the database?

Yes, if it's issued by a superuser or the database owner.

However, that won't help much if pg_largeobject has become bloated by a
lot of no-longer-wanted BLOBs.  After deleting those, you'll probably
need VACUUM FULL or CLUSTER or some such if you want to reclaim the
disk space.  Plain VACUUM usually isn't very successful at shortening
a table file.

            regards, tom lane
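
Putting the advice in this thread together, a minimal sketch of the cleanup,
assuming the database is named "clientdb" (a placeholder) and contrib/vacuumlo
is installed:

    # remove large objects no longer referenced from any table column
    vacuumlo -v clientdb

    # then shrink pg_largeobject itself (needs superuser)
    psql -d clientdb -c "VACUUM FULL pg_largeobject;"

VACUUM FULL holds an exclusive lock on pg_largeobject while it runs, so large
object access is blocked for the duration; schedule it accordingly.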