Re: Starting Postgres when there is no disk space - Mailing list pgsql-general

From Igal Sapir
Subject Re: Starting Postgres when there is no disk space
Msg-id CA+zig0893F9TqAVxO0n5Ynduq=PUowx2gqdBp3vUFYAqZcApGg@mail.gmail.com
In response to Re: Starting Postgres when there is no disk space  (Jeff Janes <jeff.janes@gmail.com>)
Responses Re: Starting Postgres when there is no disk space
List pgsql-general
Jeff,

On Fri, May 3, 2019 at 6:56 AM Jeff Janes <jeff.janes@gmail.com> wrote:
On Wed, May 1, 2019 at 10:25 PM Igal Sapir <igal@lucee.org> wrote:

I have a scheduled process that runs daily to delete old data and run a full vacuum.  I'm not sure why this happened (again).

If you are doing a regularly scheduled "vacuum full", you are almost certainly doing something wrong.  Are these "vacuum full" completing, or are they failing (probably due to transient out of space errors)?

An ordinary non-full vacuum will make the space available for internal reuse. It will not (usually) return the space to the filesystem, so it won't get you out of the problem.  But it should prevent you from getting into the problem in the first place.  If it is failing to reuse the space adequately, you should figure out why, rather than just blindly jumping to a regularly scheduled "vacuum full".  For example, what is it that is bloating: the tables themselves, their indexes, or their TOAST tables?  Is there any bloat in the first place? Are you sure your deletions are equal to your insertions, over the long-term average?  If you are doing "vacuum full" and you are certain it is completing successfully, but it doesn't free up much space, then that is strong evidence that you don't actually have bloat; you just have more live data than you think you do.  (It could also mean you have done something silly with your "fillfactor" settings.)
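To see where any bloat lives, one sketch (the table name "events" is an assumption, not from this thread) is to compare heap, index, and TOAST sizes with Postgres's built-in size functions:

```sql
-- Hedged example: break a table's on-disk footprint into heap, indexes,
-- and TOAST.  pg_table_size() covers the heap plus its TOAST table and
-- free-space/visibility maps, so the difference from pg_relation_size()
-- is approximately the TOAST (plus map) usage.
SELECT pg_size_pretty(pg_relation_size('events'))  AS heap_size,
       pg_size_pretty(pg_indexes_size('events'))   AS index_size,
       pg_size_pretty(pg_table_size('events')
                      - pg_relation_size('events')) AS toast_and_maps;
```

Whichever column dominates tells you whether the heap, the indexes, or the TOAST table is the thing that keeps growing.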

If you don't want the space to be reused, to keep a high correlation between insert time and physical order of the rows for example, then you should look into partitioning, as you have already noted.
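As a sketch of the partitioning approach (table and column names are assumptions, not from this thread, and this assumes PostgreSQL 10+ declarative partitioning): with one partition per day, retiring old data becomes a DROP rather than a DELETE, which returns the space to the filesystem immediately and needs no vacuum.

```sql
-- Hypothetical daily range-partitioned table; names are illustrative only.
CREATE TABLE events (
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2019_05_03 PARTITION OF events
    FOR VALUES FROM ('2019-05-03') TO ('2019-05-04');

-- Retiring a day's data is then a cheap metadata operation that gives
-- the space straight back to the filesystem:
DROP TABLE events_2019_05_03;
```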

Now that you have the system up again and some space freed up, I'd create a "ballast" file with a few gig of random (to avoid filesystem-level compression, should you have such a filesystem) data on the same device that holds your main data, that can be deleted in an emergency to give you enough free space to at least start the system.  Of course, monitoring is also nice, but the ballast file is more robust and there is no reason you can't have both.

Thank you for the tips.  I stand corrected.  These are regular VACUUM calls after the deletion, not VACUUM FULL.  It's a daily process that deletes records from N days ago, and then performs VACUUM, so yes, all of the inserted records should be deleted after N days.

The bloat is in a TOAST table.  The primary table has a JSONB column which can get quite large.  The fillfactor setting was not modified from its default value (does the primary table's fillfactor affect the TOAST table?  Either way, both are at the default in this case).

The ballast file is a great idea.  I was thinking about that just a couple of days ago, but instead of one file I think I will keep a bunch of them at 1GB each.  That will give me more flexibility in clearing space as needed, and lets me keep some "safety buffer" files in reserve as I free up space.
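A minimal sketch of that multi-file ballast setup (the directory, count, and size below are assumptions; for real use you would put BALLAST_DIR on the same device as your data directory and use SIZE_MB=1024 for 1GB files). Random bytes are used so filesystem-level compression cannot shrink the files away:

```shell
# Hypothetical ballast setup: COUNT files of SIZE_MB MiB each.
BALLAST_DIR=/tmp/pg_ballast   # assumption: put this on your data device
COUNT=4                       # number of ballast files to keep in reserve
SIZE_MB=16                    # demo size; use 1024 for 1 GiB files

mkdir -p "$BALLAST_DIR"
for i in $(seq 1 "$COUNT"); do
    # Fill each file with incompressible random data.
    dd if=/dev/urandom of="$BALLAST_DIR/ballast_$i" bs=1M count="$SIZE_MB" 2>/dev/null
done
ls -lh "$BALLAST_DIR"
```

In an emergency, deleting one file at a time (`rm "$BALLAST_DIR/ballast_1"`) frees space in controlled increments while the remaining files stay as a buffer.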

Thanks for your help,

Igal
