Re: Starting Postgres when there is no disk space - Mailing list pgsql-general

From Igal Sapir
Subject Re: Starting Postgres when there is no disk space
Date
Msg-id CA+zig0_jsDT8e3d7mDz+TaNuastEkyPZhFJ0kDedDadHNLw8Cw@mail.gmail.com
In response to Re: Starting Postgres when there is no disk space  (Igal Sapir <igal@lucee.org>)
List pgsql-general
If anyone ever needs it, I wrote this one-liner bash loop to create 16 temp files of 640MB each, zero-filled but with random names (well, a two-liner if you count the "config" line):

$ COUNT=16; TMPDIR=/pgdata/tmp/
$ for ((i=1; i<=COUNT; i++)); do dd if=/dev/zero of="${TMPDIR}$(tr -cd 'a-f0-9' < /dev/urandom | head -c 20).tmp" count=81920 bs=8192; done

Which produces about 10GB of unusable space that I can free up in the event that I run out of disk (10GB might be excessive, but it works for me for the time being). One caveat, per Jeff's advice quoted below: on a filesystem with transparent compression, use if=/dev/urandom instead of /dev/zero, since zero-filled files compress to almost nothing. The result:

$ ls -lh $TMPDIR
total 10G
-rw-r--r-- 1 root root 640M May  3 12:42 0a81845a5de0d926572e.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 1800a815773f34b8be98.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 1b182057d9b764d3b2a8.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 40f7b4cab222699d121a.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 498e9bc0852ed83af04f.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 49e84e5189e424c012be.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 7c984b156d11b5817aa5.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 7d1195b03906e3539495.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 9677ff969c7add0e7f92.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 9ae9d483adddf3317d7c.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 a546f3f363ca733427e7.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 a965856cb1118d98f66a.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 c162da7ecdb8824e3baf.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 d7c97019ce658b90285b.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 e76fc603ffe2c977c826.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 fed72361b202f9492d7f.tmp
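
In an emergency I can then delete one or more of these to free space right away (the first file name below is copied from the listing above):

$ rm -v /pgdata/tmp/0a81845a5de0d926572e.tmp   # frees one 640MB buffer
$ rm -v /pgdata/tmp/*.tmp                      # or reclaim all ~10GB at once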


Best,

Igal

On Fri, May 3, 2019 at 9:09 AM Igal Sapir <igal@lucee.org> wrote:
Jeff,

On Fri, May 3, 2019 at 6:56 AM Jeff Janes <jeff.janes@gmail.com> wrote:
On Wed, May 1, 2019 at 10:25 PM Igal Sapir <igal@lucee.org> wrote:

I have a scheduled process that runs daily to delete old data and do a full vacuum. Not sure why this happened (again).

If you are doing a regularly scheduled "vacuum full", you are almost certainly doing something wrong.  Are these "vacuum full" runs completing, or are they failing (probably due to transient out-of-space errors)?

An ordinary non-full vacuum will make the space available for internal reuse. It will not (usually) return the space to the filesystem, so it won't get you out of the problem.  But it should prevent you from getting into the problem in the first place.

If it is failing to reuse the space adequately, you should figure out why, rather than just blindly jumping to a regularly scheduled "vacuum full".  For example, what is it that is bloating: the tables themselves, their indexes, or their TOAST tables?  Or is there any bloat in the first place?  Are you sure your deletions are equal to your insertions, over the long-term average?

If you are doing "vacuum full" and you are certain it is completing successfully, but it doesn't free up much space, then that is strong evidence that you don't actually have bloat; you just have more live data than you think you do.  (It could also mean you have done something silly with your "fillfactor" settings.)
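As a starting point, something like this breaks the size down between the table proper, its TOAST table (plus free-space/visibility maps), and its indexes ("mydb" and "mytable" are placeholders for your own names):

$ psql -d mydb -c "SELECT pg_size_pretty(pg_relation_size('mytable')) AS heap,
                          pg_size_pretty(pg_table_size('mytable') - pg_relation_size('mytable')) AS toast_and_maps,
                          pg_size_pretty(pg_indexes_size('mytable')) AS indexes"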

If you don't want the space to be reused (for example, to keep a high correlation between insert time and the physical order of the rows), then you should look into partitioning, as you have already noted.
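For illustration, a minimal sketch of a range-partitioned table (all names here are hypothetical); retiring old data then becomes a near-instant DROP that returns the space to the filesystem, with no vacuuming involved:

$ psql -d mydb -c "CREATE TABLE events (created_at timestamptz NOT NULL, doc jsonb) PARTITION BY RANGE (created_at)"
$ psql -d mydb -c "CREATE TABLE events_2019_05 PARTITION OF events FOR VALUES FROM ('2019-05-01') TO ('2019-06-01')"
$ psql -d mydb -c "DROP TABLE events_2019_04"   # drop the oldest month to free its space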

Now that you have the system up again and some space freed up, I'd create a "ballast" file with a few gigabytes of data on the same device that holds your main data, which can be deleted in an emergency to give you enough free space to at least start the system.  Make it random data, to defeat filesystem-level compression, should you have such a filesystem.  Of course, monitoring is also nice, but the ballast file is more robust and there is no reason you can't have both.
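A minimal sketch of that (the path and size are just examples):

$ dd if=/dev/urandom of=/pgdata/ballast.tmp bs=1M count=4096   # ~4GB that no filesystem can compress away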

Thank you for the tips.  I stand corrected.  These are regular VACUUM calls after the deletion, not VACUUM FULL.  It's a daily process that deletes records from N days ago and then performs VACUUM, so yes, all of the inserted records should be deleted after N days.
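In essence the job boils down to something like this ("mytable", the column name, and the retention value are placeholders for the real ones):

$ N=30   # days of data to keep (placeholder value, N as above)
$ psql -d mydb -c "DELETE FROM mytable WHERE created_at < now() - interval '1 day' * $N" -c "VACUUM mytable"

The VACUUM goes in its own -c because it cannot run inside the implicit transaction that a multi-statement -c would wrap it in.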

The bloat is in a TOAST table.  The primary table has a JSONB column which can get quite large.  The fillfactor setting was not modified from its default value (does the primary table's fillfactor affect the TOAST table?  Either way, both are at their defaults in this case).
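For what it's worth, the TOAST table's size can be checked directly with something along these lines (database and table names are placeholders again):

$ psql -d mydb -c "SELECT pg_size_pretty(pg_relation_size(reltoastrelid)) AS toast_size FROM pg_class WHERE relname = 'mytable'"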

The ballast file is a great idea.  I was just thinking about that a couple of days ago, but instead of one file I think that I will keep a bunch of them at 1GB each.  That gives me more flexibility: I can clear only as much space as needed and keep the remaining files as "safety buffers" while I free up space.

Thanks for your help,

Igal
